
You're acting like 99% of humans aren't very much dependent on that same scaffolding. Humans spend 12+ years in school having the exact rules of math, grammar, and syntax hammered into their brains. To perform our jobs, we often consult documentation or other people performing the same task. Only after extensive, deep thought can we extrapolate usefully beyond our training set.

LLMs do have memory and thought. I've invented a few somewhat unusual games, described them to Sonnet 3.5, and it reproduced them in code almost perfectly. Likewise, their memory has been scaling: just a couple of years ago context windows maxed out at 8,000 tokens; now they're reaching the millions.

I feel like you're approaching all these capabilities with a myopic viewpoint, then playing semantic judo to dismiss these gains as "not counting" because they can be vaguely mapped to something with a negative connotation.

>A lot of people don't even consider the ability to solve problems to be a reliable indicator of intelligence

That's a very bold statement, as lots of smart people have said that the very definition of intelligence is the ability to solve problems. If fear of how effectively LLMs behave genuinely intelligently leads you to make sweeping claims about what doesn't count as intelligence, then you're forcing yourself into a smaller and smaller corner as AI SOTA capabilities predictably increase month after month.


