
I should have clarified better. Indeed, I have had the same experience with Copilot: it suggested code I disliked, but its suggestion turned out to be the right one and mine the wrong one.

I was talking about X-Y problems at a higher level though: architecture, design patterns, that kind of stuff. LLMs are (still?) particularly bad at this. Which is rather obvious if you think of them as "just" statistical models: they'll suggest what is done most often in contexts like yours, not what is actually best for your context.



Yeah, I don't think the current crop of LLMs is useful for this. They let themselves be led by what's already written, and even struggle to understand negation. So when I suspect there is a better solution, I have a hard time getting such an answer, even when asking explicitly for alternatives. I doubt it's just a question of training; they seem to lock themselves onto the context. When using Phind, this is somewhat mitigated by mixing in context from the web, which can lead to responses that include alternatives.
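
For what it's worth, one way to work around the anchoring is to keep the current solution out of the prompt until after the model has proposed alternatives. A minimal sketch, assuming the OpenAI Python SDK with an API key in the environment (the model name, prompt wording, and example problem are all made up for illustration):

    # Describe the problem and constraints first, and only reveal the
    # existing approach afterwards, so the model isn't anchored on it
    # when proposing alternatives.
    from openai import OpenAI

    client = OpenAI()

    problem = "Sync user settings across devices with occasional offline edits."
    constraints = "Small team, Postgres already in use, eventual consistency is fine."
    current_approach = "A cron job that diffs rows and overwrites the older copy."

    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {
                "role": "user",
                "content": (
                    f"Problem: {problem}\n"
                    f"Constraints: {constraints}\n\n"
                    "List three architecturally different ways to solve this, "
                    "with trade-offs. Do not assume any particular existing design.\n\n"
                    "Afterwards, compare them against this existing approach: "
                    f"{current_approach}"
                ),
            }
        ],
    )

    print(response.choices[0].message.content)

This doesn't fix the underlying problem, but withholding the incumbent design until the end at least gives the model a chance to suggest something other than a refinement of what you already have.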



