
I have no issue getting LLMs to generate documentation, modular designs, or test cases. Test cases require some care: just like humans, LLMs are prone to making the same mistake in both the code and the tests, and they are particularly prone to not recognizing whether it's the test or the code that's wrong. But those problems are solvable.

The thing I struggle with more when I use LLMs to generate entire features with limited guidance (so far only in hobby projects) is the LLM duplicating functionality or not sticking to existing abstractions. For example, if in existing code A calls B to get some data, and you now need to do some additional work on that data (e.g. enriching or verifying it), that change could be made in A, made in B, or you could add a new B2 that is just like B but with that slight tweak. Each of those could be appropriate, and LLMs sometimes make hilariously bad calls here.
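To make the A/B/B2 choice concrete, here is a minimal sketch in Python. All names (`fetch_record`, `enrich`, etc.) are hypothetical stand-ins, not from any real codebase:

```python
def fetch_record(record_id: int) -> dict:
    """B: the existing data-fetching abstraction that A already calls."""
    return {"id": record_id, "name": f"record-{record_id}"}

def enrich(record: dict) -> dict:
    """The new work that now has to happen somewhere."""
    return {**record, "verified": True}

# Option 1: change A -- the caller enriches after fetching.
# Keeps B untouched, but every caller that needs enrichment must remember to do it.
def handle_request(record_id: int) -> dict:
    return enrich(fetch_record(record_id))

# Option 2: change B -- fetch_record itself gains the enrichment.
# Affects every existing caller of B, which may or may not be intended.
def fetch_record_v2(record_id: int) -> dict:
    return enrich({"id": record_id, "name": f"record-{record_id}"})

# Option 3: add B2 -- a near-duplicate of B with the tweak.
# Leaves existing callers untouched but duplicates B's logic.
def fetch_record_enriched(record_id: int) -> dict:
    return enrich(fetch_record(record_id))
```

Each option trades off differently between blast radius and duplication, which is exactly the judgment call LLMs tend to fumble: they will happily pick option 3 when option 1 was the obvious fit, or silently change B's behavior for all callers.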



