
It seems they do not want you to report an issue without an accompanying fix for it.

> If you would like to report a problem you find when using you-get, please open a Pull Request, which should include [snip]

Can't say I've encountered this before.



As the other commenter said, they want a failing test, not a fix.

    A detailed description of the encountered problem;
    At least one commit, addressing the problem through some unit test(s).
        Examples of good commits: #2675, #2680, #2685
"Addressing" is probably a bad word to use here. "Demonstrating" would have been better, IMO.


the most expensive part of writing software is scoping the work.

i’m almost tempted to add a test suite just to give people more agency over my output because right now i’m only soliciting feedback in person to cut down on internet bullshit, like what happened to xz-utils


The Chinese version of the text has an extra header line that translates to "to prevent abuse via GitHub Issues, we are not accepting general issues". An earlier commit has this for the English text:

   `you-get` is currently experimenting with an aggressive approach to handling issues. Namely, a bug report must be addressed with some code via a pull request.
https://github.com/soimort/you-get/commit/75b44b83826b3c2d9a...

Maybe they got too much spam.

By the way, `tests/test.py` seems to just run the extractors against various websites directly. I can't find where it's mocking out network requests and replies. Maybe this is to simplify the process for people creating pull requests?
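
For illustration, a test written in that style might look roughly like the sketch below. The helper, URL, and assertion are made-up stand-ins, not code from you-get's actual suite; the point is just that the test talks to the live network instead of mocked responses.

    import unittest
    import urllib.request

    # Illustrative stand-in for an "extractor": fetches a real page over
    # the network, with no mocking involved.
    def fetch_page(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()

    class LiveExtractorTests(unittest.TestCase):
        def test_example_site(self):
            # Passes or fails depending on what the live site serves today.
            self.assertIn(b"<title>", fetch_page("https://example.com/"))

    if __name__ == "__main__":
        unittest.main()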


I can understand this, but I aggressively report spam accounts and issues. I'm not sure how GitHub handles them, but they don't seem to come back.

What I'm unsure how to deal with, though, is legitimate users being idiotic. For example, someone recently opened an issue asking where the source code was. Not only was there a directory named "src", but the readme also linked to specific parts of it. While I do appreciate GitHub and places like Hugging Face [0], there are a lot of very aggressive and demanding noobs.

I'd like ways to handle them better... I'm tired of people yelling at me because 5-year-old research code no longer works out of the box, or because they've never touched code before.

[0] Check any Hugging Face issue tracker and you'll see far more spam. The same accounts will open multiple issues that just berate the owners, and Hugging Face makes it difficult to report these accounts.


The solution is to ignore them and close their issue. Open source maintainers have enough to worry about and are unpaid, it's okay to be a little dictatorial when it comes to "bad questions".


That's not a solution.

It addresses the specific issue but does nothing to prevent future similar issues. A solution to a cold is not handing someone a tissue.

I like that these platforms are open to everyone but at the same time there are a lot of people who have no business participating. Being able to filter those people out is unfortunately a necessary tool to not get overloaded.

Worse, I find that because of this, many open source maintainers end up being quick to close issues and say rtfm. I can't tell you how many times this has happened to me even when my opening issue quotes the fm and includes a reproducible test. It's also common to just close the issue and say "not our problem".


Kind of makes sense, it weeds out all of the people who didn't read the manual.


They want you to just submit a PR with a test that, if it passed, would indicate that the problem you're seeing is fixed.


I kind of like this. It's a more formal proof of concept. You prove the bug exists by writing a failing test. If they cannot construct a failing test then it's either too hard to mock or reproduce (and therefore maybe not even worth fixing, for a free tool), or it's impossible because it's not a bug. Frees up maintainer time from dealing with reports that aren't bugs.
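
As a rough sketch of what "bug report as failing test" means: the parse_duration function below is a deliberately buggy stand-in (in a real report you'd import the project's own code instead), and the test is written against the behavior the reporter expects.

    import unittest

    # Deliberately buggy stand-in: it only understands "MM:SS" and silently
    # mishandles "H:MM:SS" durations. A real report would import the
    # project's function rather than redefine it here.
    def parse_duration(text):
        minutes, seconds = text.split(":")[:2]
        return int(minutes) * 60 + int(seconds)

    class DurationBugReport(unittest.TestCase):
        def test_hour_long_video(self):
            # Reads like it should pass, but fails until the bug is fixed.
            self.assertEqual(parse_duration("1:02:03"), 3723)

    if __name__ == "__main__":
        unittest.main()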


> If they cannot construct a failing test then it's either too hard to mock or reproduce (…), or it's impossible because it's not a bug.

Or, you know, the user is not a developer. Or is unfamiliar with Python, or their test suite, or git, or…

It is perfectly possible to be good at reporting bugs but be incapable of submitting pull requests.


The problem with popular tools is that they have more bugs than can be fixed. So bug reports are pretty much worthless: you know that there are 1000 bugs out there, but you only have the resources to fix 10 of them.

By asking users to provide reproducible test cases, you can massively reduce the amount of work you have to do. Of course that means 90% of bugs will never be reported. But since you don't have the resources to fix them anyway, why not just focus on the bugs that can be reproduced and come with a test case...


I don't think it's necessarily about fixing those bugs; a lot of the time it's more about at least having those bugs documented, to raise awareness of (probable) issues down the line for whoever wants to use the project in the future.


You missed the point entirely.

It’s your prerogative if and how you want to limit the amount of people who can contribute, but I was explicitly replying to someone claiming that a person’s inability to code is in any way related to the validity or importance of the bug.


What happens if you don’t know Python? Python is a relatively easy language to learn, but there’s no way I’m going to learn Python just to report an issue.


Did you (or anyone) in this thread look to see exactly what they are looking for with their provided examples?

https://github.com/soimort/you-get/pull/2680/commits/313b8d2...

You do not need to know Python deeply to construct what they are expecting. They’re not actually looking for an elaborate unit test or anything like that.


> Did you (or anyone) in this thread look to see exactly what they are looking for with their provided examples?

I did. And I looked at all examples of “good commits”, not just the trivial ones.

https://github.com/soimort/you-get/pull/2685/files

That’s already complex for someone unfamiliar with the software (who might nonetheless be able to open a competent bug report).


Then you don’t get to contribute bug reports.

Perfectly fine rule for a maintainer to have.


If the bug is egregious enough, somebody else will find it. If the bug is important enough to you but esoteric, then ask on a forum or enlist the help of someone you know who does know Python.

How do you currently submit bug reports on e.g. MS Word or Adobe Photoshop? This approach is certainly more open than what those commonly deployed programs offer.


Good chance you wouldn't be writing good bug reports either, then. GitHub issues have enough noise that a first-pass filter like this feels like a good idea, even if it has some false positives.


This in no way aligns with reality. I frequently interact with users who can’t code at all but make good bug reports. One of the best ways to ensure success is to have a form (GitHub allows creating those) which describes exactly what is necessary and guides people in the right direction.

What you're saying is even worse, since you’re implying someone could be an expert computer programmer or power user, but because they’re unfamiliar with the specific language this project chose, they are incapable of making good bug reports. That makes no sense.


I fail to see the logic in your comment. Just another case of Goodhart's law.


This isn't really a metric, though. It's a formal proof that the bug exists. The key difference, IMO, is that you have to create a test which A) looks (to the maintainer) like it should pass, while simultaneously B) does not pass. That's much harder to game.

There are other cases where Goodhart's law fails as well: consider quant firms, where the "metric" used to judge a trader is basically how much money they pull in. It seems to be working fine for them.


If you can’t describe your bug in a test, then you probably can’t describe it sufficiently in English either.

Seems to make sense


The same thing that happens if the author of the OSS you use doesn't know English.


That's exactly it. They put up a gate that blocks low-effort issues that only add busywork. I like it!


Interesting. I like the idea of encouraging people to try creating a test or even a whole fix, but saying that’s all you will accept is a bit much. On the other hand, I’m not doing the work to maintain you-get. I don’t know what they deal with. This may be an effective way to filter a flood of repetitive issues from people who don’t know how to run a command line program.


I believe there are two extremes. On one end you get a bunch of repetitive non-issues, while on the other end you only get issues about (say) bugs in FreeBSD 13.3 because only hard-core users have the skills and patience to follow THE PROCESS.

I know how to make an isolated virtual environment, install the package, make a fork, create a test and make a PR. But I don't know whether I care enough about a random project to actually do it.


It’s relatively easy to write a failing test, and it massively cuts down the work related to moderating issues. It also reduces the danger of GitHub issues turning into a support forum.

If this results in the project being easier to maintain and being maintained longer, then I’m fine with this.


> It’s relatively easy to write a failing test and it massively cuts down the work related to moderating issues.

Relative to what? Learning someone else's code base well enough to write a useful test is not trivial.

It's not a bad method, but the vast majority of users won't be capable of writing a test that encapsulates their issue.


In the case of this tool, adding a failing test case looks trivial if you've got the URL of a page it fails on.

Provided the maintainer is willing to provide some minimal guidance to issue reporters who lack the necessary know-how, it even seems like a clever back door way of helping people learn to contribute to open source.
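
As a hypothetical sketch (none of these names or URLs come from you-get), the reporter's whole contribution could be as small as one new test case that points the existing harness at the page that breaks for them:

    import unittest
    import urllib.request

    # Stand-in for the project's extractor; illustrative only.
    def extract(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()

    class ReportedBreakage(unittest.TestCase):
        def test_page_that_fails_for_me(self):
            # The only new information the reporter really supplies is this
            # URL; the failing run itself is the bug report.
            self.assertTrue(extract("https://example.com/reported-broken-page"))

    if __name__ == "__main__":
        unittest.main()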




