
In the words of Charles Babbage, "I cannot rightly apprehend what confusion of ideas would lead to such a question."

LLMs (by themselves) cannot reliably count. If you expect them to, then you're falling into the common trap of extrapolating a metacognition layer where none exists.
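The usual remedy for this limitation is to delegate counting to deterministic code rather than asking the model directly. A minimal sketch (the helper name `count_letter` is illustrative, not any real tool-use API) of the kind of function an LLM harness might call out to:

```python
def count_letter(text: str, letter: str) -> int:
    """Count occurrences of a single letter, case-insensitively.

    Unlike an LLM operating over tokens, plain string code
    counts characters exactly, every time.
    """
    return text.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```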



Mention of that limitation is notably absent from the breathless hype about LLMs.


Direct quote from Anthropic's website: "Opus: Our most intelligent model, which can handle complex analysis, longer tasks with multiple steps, and higher-order math and coding tasks."

So you tell me: if a regular developer reads the above, how can they surmise that the model which can do higher-order math can't count?


Yes, higher-order math does not include arithmetic; that should not be confusing at all.



