
If you're going to set a firm "no AI" policy, then my inclination would be to treat that kind of PR in the same way the US legal system does evidence obtained illegally: you say "sorry, no, we told you the rules and so you've wasted effort -- we will not take this even if it is good and perhaps the only sensible implementation". Perhaps somebody else will eventually re-implement it later without looking at the AI PR.


How funny would it be if the path to actually implementing that thing is then cut off because of a PR submitted with the exact same patch. I'm honestly sitting here grinning at the absurdity. Some things can only be done a certain way, especially when you're working with 3rd-party libraries and APIs. The name of the function is the name of the function. There's no way around it.


It follows the same reasoning as when someone purposefully copies code from one codebase into another whose license doesn't allow it. Yes, it might be the only viable solution, and most likely no one will ever know you copied it, but if you're found out, most maintainers will not merge your PR.


That's why I said "somebody else, without looking at it". Clean-room reimplementation, if you like. The functionality is not forever unimplementable, it is only not implementable by merging this AI-generated PR.

It's similar to how I can't implement a feature by copying-and-pasting the obvious code from some commercially licensed project. But somebody else could write basically the same thing independently without knowing about the proprietary-license code, and that would be fine.


The trick is getting people to believe you.


You not realizing how ridiculous this is, is exactly why half of all devs are about to get left behind.

Like, this should be enshrined as the quintessential “they simply, obstinately, perilously, refused to get it” moment.

Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.


> Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.

Well that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.


Sorry, but this is user error.

1) Most people still don't use TDD, which absolutely solves much of this.

2) Most people end up leaning too heavily on the LLM, which, well, blows up in their face.

3) Most people don't follow best practices or designs, which the LLM absolutely does NOT know about NOR does it default to.

4) Most people ask it to do too much and then get disappointed when it screws up.

Perfect example:

> you can't trust anything they put out

Yeah, that screams "missing TDD that you vetted" to me. I have yet to see it fail to honestly pass a test I've vetted (at least in the past 2 months). Learn how to be a good dev first.
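The test-first workflow described above can be sketched like this. The function name `slugify` and its spec are hypothetical, not from the thread: the idea is that the human writes and vets the test, then asks the LLM to produce an implementation that makes it pass. A plausible hand-checked implementation is included so the example runs end to end.

```python
import re

def slugify(title: str) -> str:
    # This is the part you would delegate to the LLM; the vetted test
    # below is the human-owned contract it has to satisfy.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    # Human-written, human-vetted assertions. The LLM never edits these;
    # it only writes slugify() until they pass.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  and  CAPS ") == "spaces-and-caps"

test_slugify()
print("vetted tests pass")
```

The point of the workflow is ownership of the test file, not this particular example: as long as you trust the assertions, you don't have to trust the generated code line by line.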


> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.

No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.


This is a non-argument. All of the cloud LLMs are going to move to things like micro-nuclear power. And the scientific advances AI might enable may also help offset downstream problems from the carbon footprint.

I wasn't gesturing to the energy/environmental impacts of AI.


