In the words of Charles Babbage, "I cannot rightly apprehend what confusion of ideas would lead to such a question."
LLMs (by themselves) cannot reliably count. If you expect them to, then you're falling into the common trap of extrapolating a metacognition layer where none exists.
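To make the contrast concrete: character-level counting is trivial and deterministic in ordinary code, which is exactly why tool use (having the model emit and run code) is the usual workaround rather than trusting the model's own token-level arithmetic. A minimal sketch (the word "strawberry" here is just an illustrative example, not something from the quote above):

```python
# An LLM predicting tokens may miscount letters, because it never
# "sees" individual characters -- it sees tokens. A one-line program
# counts them deterministically every time.
word = "strawberry"
count = word.count("r")
print(count)  # 3
```

This is the distinction being drawn: the counting itself isn't hard, but it isn't something next-token prediction performs reliably without an external tool.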
Direct quote from Anthropic's website: "Opus - Our most intelligent model, which can handle complex analysis, longer tasks with multiple steps, and higher-order math and coding tasks."
So you tell me: if a regular developer reads the above, how are they supposed to surmise that a model capable of higher-order math can't count?