
> For a tool, I expect “well” to mean that it does what it’s supposed to do

Ah, then LLMs are actually very reliable by your definition. They're supposed to output semi-random text, and whenever I use them, that's exactly what happens. Outside of the models and software I build myself, I basically never see a case where an LLM fails to output semi-random text.
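
If it helps to see why the output is "semi-random": decoding typically samples each next token from a probability distribution rather than picking deterministically. Here's a minimal toy sketch of temperature sampling; the vocabulary, logits, and temperature are made-up values for illustration, not any particular model's internals.

    import math, random

    def sample_next_token(logits, temperature=0.8):
        # Softmax with temperature; higher temperature -> flatter distribution -> more randomness.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    vocab = ["the", "code", "is", "correct", "maybe"]   # toy vocabulary
    logits = [2.1, 1.7, 0.3, 0.2, 1.9]                  # toy model scores

    # Two runs over the exact same scores can pick different tokens.
    print(vocab[sample_next_token(logits)])
    print(vocab[sample_next_token(logits)])

Run it twice and you'll likely get different output from identical input, which is the whole point.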

They're obviously not made for producing "correct code", because that's a judgement only a human can make. What even is "correct" in that context? Not even we humans can agree on what "correct code" means in all contexts, so assuming a machine could seems foolish.
