Hacker News

The fact that GPT understands how to use tools suggests that not only does it understand the meaning of numbers, it also understands its own limitations.

In any case, the argument around numeracy is bogus: lots of people have numeracy issues, but they know how to use a calculator.

The fact that so many people get stuck on its inability to do perfect arithmetic, when it can do in-context learning of a novel programming language, or perform addition over groups where the operation is defined not as a+b but as a+b+c for some constant c, is incredible.
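To make the group-addition claim concrete, here is a minimal sketch (my own hypothetical example, not from the thread) of that redefined operation: a⊕b = a+b+c for a fixed constant c. The integers form a group under ⊕, with identity -c and the inverse of a being -a-2c; the constant C and the function names are illustrative choices.

```python
C = 5  # the fixed constant c (arbitrary choice for illustration)

def add(a: int, b: int) -> int:
    """Group operation: a (+) b = a + b + C."""
    return a + b + C

IDENTITY = -C  # the identity element under this operation

def inverse(a: int) -> int:
    """Inverse under (+): add(a, inverse(a)) == IDENTITY."""
    return -a - 2 * C

# The group axioms hold:
assert add(3, IDENTITY) == 3                   # identity
assert add(3, inverse(3)) == IDENTITY          # inverses
assert add(add(1, 2), 3) == add(1, add(2, 3))  # associativity
```

An LLM asked to continue a few worked examples of this operation has to infer the hidden constant from context, which is the in-context learning being described.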

If we held humans to the same standard we hold GPT-3.5+ models, the vast majority of humans would fail.

The fact that it needs as much data as it does is simply an architectural issue and not inherent to the model itself.

As for hallucinations: I will point to the whole phenomenon of religion, a mass psychosis. Eagleman's book goes into great detail on how we hallucinate our reality.



I don't feel like you're responding to the arguments I'm making. Yes, people have numeracy issues or suffer from mass psychosis, but we generally consider those signs of lesser intelligence. I'm not holding GPT to the same standards; I'm arguing that human intelligence/thinking is not fundamentally the same as an LLM's (e.g. reasoning in flexible symbols rather than language), which is why LLMs appear highly intelligent in some ways but much less intelligent in others (like grade-school numeracy or the ability to cite sources that exist).


