
AGI is not a very useful term, because people often use it as synonymous with "human-level or higher ability". But the opposite of "general AI" is not "less intelligent than a human" but "narrow AI". The narrow/general distinction is orthogonal to the low/high ability ("intelligence") distinction. All animals are very general, as their domain of operation is the real world, not some narrow modality like strings of text or Go boards. Animals are not significantly narrower than humans; they are significantly less intelligent. Understood this way, a cat-level AI would be an AGI. It would just not be an HLAI (human-level AI) or an ASI (artificial superintelligence).


Personally I take AGI to refer to a system that is both “intelligent enough” and “general enough”. Given the existence of super-human narrow AI, the interesting property is generality, not intelligence. But I don’t think it’s useful to call a sub-human cat-level general AI an AGI.

Some would disagree; there was a paper arguing that ChatGPT is weak AGI.

But as I see it AGI is a term of art that refers to a point on the tech tree where AI is general enough to be able to meaningfully displace a large proportion of human knowledge workers. I think you may be overthinking the semantics; the “general enough and intelligent enough” quadrant is unique and will be incredibly disruptive when it arrives (whenever that ultimately is). We need a label for that frontier, “AGI” is by convention that label.


> Given the existence of super-human narrow AI, the interesting property is generality, not intelligence. But I don’t think it’s useful to call a sub-human cat-level general AI an AGI.

If we have AI as general as an animal, ASI (superintelligence) is probably imminent, because the architecture of human intelligence probably isn't very different from a cat's; just the scale is bigger.


I think that very well could be true, depends on how that generality was obtained.

I would not be surprised if a multi-modal LLM (basically the current architecture) could be wired up to be as general as a cat at the current param count, with the spark of human creativity (AGI/ASI) still ending up far away.

But if you made a new architecture that solved the generalization problem (i.e. baking in a world model, a self-symbol, etc.) but only reached cat intelligence, then it would seem very likely that human-level was soon to follow.


> people use it

Do you volunteer to inform them that we use "general" as opposed to "narrow"? (I mean, it is literally in the very name 'AGI'...)

For the rest: yes, of course. AGI: we implement intelligence itself. How much, that is part of the challenge. I wrote nearby (in other terms) that the challenge is to find a procedure for Intelligence that will actually scale.


That's a great way of looking at it in theory. But in practice, how would we even know if we're looking at a cat-level AGI? For a human-level AGI it's obvious: we would question and evaluate it.

Is there a reasonable way of distinguishing narrow-AI ChatGPT from a hypothetical cat-level AGI? We can't even measure the intelligence level of real-world cats.


A cat-level AI would be able to use a robot body in the real world at the level of a cat.



