
> But the idea of letting an LLM write/move large swaths of code seems so incredibly irresponsible.

I do think it is kind of crazy based on what I've seen. I'm convinced LLMs are a game changer, but I couldn't believe how stupid they can be. Take the following example, which is a spelling and grammar checker that I wrote:

https://app.gitsense.com/?doc=f7419bfb27c8968bae&samples=5

If you click on the sentence, you can see that Claude-3.5 and GPT-4o cannot tell that GitHub is spelled correctly most of the time. It was this example that made me realize how dangerous LLMs can be. The sentence is short, but Claude-3.5 and GPT-4o just can't process it properly.

Having an LLM rewrite large swaths of code is crazy, but I believe that with proper tooling to verify and challenge changes, we can mitigate the risk.
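
To make that concrete, here's a minimal sketch of one such gate in Python. Nothing here is GitSense's actual tooling; the repo layout, the patch format, and the pytest test suite are all assumptions:

  import pathlib
  import shutil
  import subprocess
  import tempfile

  def accept_llm_patch(repo_dir: str, patch_file: str) -> bool:
      # Apply an LLM-proposed patch in a scratch copy of the repo
      # and accept it only if the test suite still passes.
      scratch = tempfile.mkdtemp()
      shutil.copytree(repo_dir, scratch, dirs_exist_ok=True)
      subprocess.run(
          ["git", "apply", str(pathlib.Path(patch_file).resolve())],
          cwd=scratch, check=True,
      )
      result = subprocess.run(["pytest", "-q"], cwd=scratch)
      return result.returncode == 0  # tests fail -> reject the patch

The point is just that the model's output never lands unverified; it has to clear an independent check first.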

I'm just speculating, but I believe GitHub has come to the same conclusion that I have, which is that all models can be stupid, but it is unlikely that all of them will be stupid at the same time.
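
That intuition is easy to sketch: ask several models the same question and only act on an answer a majority agree on. `ask` below is a stand-in for whatever API client you use, and the model names are just examples:

  from collections import Counter

  def cross_check(sentence, models, ask):
      # Ask each model the same yes/no question (e.g. "is 'GitHub'
      # misspelled here?") and keep only a majority answer.
      answers = [ask(m, sentence) for m in models]
      answer, votes = Counter(answers).most_common(1)[0]
      if votes > len(models) // 2:
          return answer
      return None  # no consensus: surface for human review

  # e.g. cross_check("GitHub is great.",
  #                  ["claude-3.5", "gpt-4o", "llama-3"], ask)

Any single model can flag "GitHub" as a typo; it's much less likely that a majority of independent models do so on the same sentence.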


