
We need better social norms about disclosure, but maybe those don't need to be about "whether or not you used LLMs" and might have more to do with "how well you understand the code you are opening a PR for" (or are reviewing, for that matter). Normalize draft PRs and sharing big messes of code you're not quite sure about but want to start a conversation about. Normalize admitting that you don't fully understand the code you've written / are tasked with reviewing and that this is perfectly fine and doesn't reflect poorly on you at all, in fact it reflects humility and a collaborative spirit.

10x engineers create so many bugs without AI, and vibe coding could multiply that to 100x. But let's not distract from the source of that, which is rewarding the false confidence it takes to pretend we understand stuff that we actually don't.



>> but maybe those don't need to be about "whether or not you used LLMs"

The only reason one might not want disclosure is if one can't write anything oneself: then all of one's code would have to be labeled as AI generated, and everyone would see one's real skill level.


> Normalize draft PRs and sharing big messes of code you're not quite sure about but want to start a conversation about. Normalize admitting that you don't fully understand the code you've written / are tasked with reviewing and that this is perfectly fine and doesn't reflect poorly on you at all, in fact it reflects humility and a collaborative spirit.

Such behaviors can only be normalized in a classroom / ramp-up / mentorship-like setting. Which is very valid, BUT:

- Your reviewers are always overloaded, so they need some official mandate / approval to mentor newcomers. This is super important, and should be done everywhere.

- Even with the above in place: because you're being mentored with great attention to detail, you owe it to your reviewer not to drown them in AI slop. You must honor them by personally writing every single line you ask them to spend their attention on. Ultimately, their educational effort is invested IN YOU, not (only) in the code that may finally be merged. I absolutely refuse to review or otherwise correct AI slop, while at the same time I'm 100% committed to transferring whatever knowledge I may have to another human.

Fuck AI.


  but maybe those don't need to be about "whether or not you used LLMs" and might have more to do
  with "how well you understand the code you are opening a PR for" (or are reviewing, for that matter)
AI is a great proxy for how much understanding someone has. If you're writing a PR yourself, you're demonstrating some manner of understanding. If you're submitting AI slop, you're not.


I've worked with 10x developers who committed a lot of features and a lot of bugs, and who got lots of accolades for all their green squares. They did not use LLM dev tools because those didn't exist then.

If they had used AI, their PRs might have been more understandable / less buggy, and ultimately I would have preferred that.


  If they had used AI, their PRs might have been more understandable / less buggy, and ultimately I would have preferred that.
Sure, and if they had used AI pigs could depart my rectum on a Part 121 flight. One has absolutely nothing to do with the other. Submitting AI slop does not demonstrate any knowledge of the code in question even if you do understand the code.

To address your claim about AI improving the output of these mythical 10x coders: doubtful. LLMs can only approximate meaningful output if they've already indexed the solution. If your vaunted 10x coders are working on already-solved problems, you're likely wasting their time. If they're working on something novel, LLMs are of little use. For instance: I've had the pleasure of working with a notoriously poorly documented crate that also has a reputation for frequently making breaking changes. I used DDG and Google to see if I could track down someone with a similar use case. If I forgot to append "-ai" to the query, I'd get back absolutely asinine results, typically along the lines of "here's an answer containing the word rust and one of the words in your query". At best, the first sentence would explain something entirely unrelated about the crate.

Potentially LLMs could be improved by ingesting more and more data, but that's an arms race they're destined to lose. People are already turning to Cloudflare and Anubis en masse to avoid footing the bandwidth bill for LLM training scrapers. If Altman and co. had to pay market rate for their training data, nobody could afford to use these AI doodads.



