I don't doubt that LLMs are extremely useful for making simple things quickly. I haven't been able to get them to write hard code on their own, though. I was trying to make a sound card with a Pi Pico the other day, and had crackling and popping in the audio. I kept telling Opus to fix it; every time it was absolutely convinced it knew what the problem was, and it went through multiple iterations of being certain it would solve it this time, each with a different explanation for the pops. I ended up spending $35.

In the end, it had written 500 lines, the problem was still there, and the code didn't work any differently. It worries me that I don't know what those 500 lines were for.
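For what it's worth, the usual culprit for pops like that on a Pico is the CPU failing to keep the output FIFO fed, and the standard fix is a double-buffered DMA feed rather than pushing samples from the main loop. A rough sketch of that pattern, assuming a PIO state machine already running an I2S program (e.g. audio_i2s from pico-extras) and 32-bit sample words; the buffer size and fill function here are placeholders, not anything from my actual project:

    #include "pico/stdlib.h"
    #include "hardware/dma.h"
    #include "hardware/pio.h"
    #include "hardware/irq.h"

    #define BUF_SAMPLES 256                  // placeholder; tune for latency vs. headroom

    static uint32_t buf[2][BUF_SAMPLES];     // one buffer plays while the other is filled
    static volatile int playing = 0;         // which buffer the DMA is currently draining
    static int dma_chan;

    // Placeholder sample source: silence. A real build would pull from USB, a
    // file, a synth, etc.; the only requirement is that it beats real time.
    static void fill_buffer(uint32_t *dst, int n) {
        for (int i = 0; i < n; i++) dst[i] = 0;
    }

    static void dma_irq_handler(void) {
        dma_hw->ints0 = 1u << dma_chan;      // acknowledge the DMA interrupt
        playing ^= 1;                        // swap buffers
        // Immediately restart the DMA on the freshly filled buffer...
        dma_channel_set_read_addr(dma_chan, buf[playing], true);
        // ...then refill the one that just finished (or set a flag and do it in the main loop).
        fill_buffer(buf[playing ^ 1], BUF_SAMPLES);
    }

    static void start_audio(PIO pio, uint sm) {
        dma_chan = dma_claim_unused_channel(true);
        dma_channel_config c = dma_channel_get_default_config(dma_chan);
        channel_config_set_transfer_data_size(&c, DMA_SIZE_32);
        channel_config_set_read_increment(&c, true);
        channel_config_set_write_increment(&c, false);
        // Pace transfers off the PIO TX FIFO so samples leave at exactly the I2S rate.
        channel_config_set_dreq(&c, pio_get_dreq(pio, sm, true));

        fill_buffer(buf[0], BUF_SAMPLES);
        fill_buffer(buf[1], BUF_SAMPLES);

        dma_channel_configure(dma_chan, &c,
                              &pio->txf[sm], // write: PIO TX FIFO
                              buf[0],        // read: first buffer
                              BUF_SAMPLES, false);

        dma_channel_set_irq0_enabled(dma_chan, true);
        irq_set_exclusive_handler(DMA_IRQ_0, dma_irq_handler);
        irq_set_enabled(DMA_IRQ_0, true);
        dma_channel_start(dma_chan);
    }

As long as the fill stays ahead of real time the FIFO never starves, and the crackle either disappears or turns into one glitch per buffer, which at least tells you where to look.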

In my experience, LLMs are amazing for writing 10-20 lines at a time, while you review and fix any errors. Letting them go to town on my code, I've found, is just an expensive way to get broken code.



> I haven't been able to get them to write hard code on their own, though

For sure, and me neither, for what it's worth. But most of the code I write isn't "hard" code, and the hard code is also the stuff I enjoy writing the most. I will note that a few months ago I was only finding them helpful for small things inside the GPT window; then I tried agentic mode (specifically Roo, then Claude Code) and have seen a huge speedup in my ability to get stuff done.


Agreed. I no longer have to write the same code for the Nth time, or spend two minutes a hundred times over looking up API docs. I love it.


> write the same code for the Nth time

Who does this, though? Maybe you should extract that into a library/method/abstraction?



