I think the goal posts keep moving, and that if you showed a person from 2019 the current GPT (even GPT-4o), most people would conclude that it's AGI (but that it probably needs fine-tuning on a bunch of small tasks).
So basically, I don't even believe in AGI. Either we already have it, relative to how we would have described it, or it's a goalpost that keeps moving and that we'll never reach.
Generative AI is nowhere close to AGI (formerly known as AI). It’s a neat parlour trick that targets the weaknesses of human beings at judging quality (e.g. text that is superficially convincing but wrong, portraits with six fingers). About the only useful application I can think of at present is summarising long text. Machine learning has been far more genuinely useful.
Perhaps it will evolve into something useful, but at present it is nowhere near an independent intelligence that can reason about novel problems (as opposed to regurgitating expected answers). On top of that, Sam Altman in particular is a notoriously untrustworthy and unreliable carnival barker.
That's a pretty fundamental level of base reasoning that any truly general intelligence would require. To be general, it needs to apply to our world, not to our pseudo-linguistic reinterpretation of the world.
If you showed a layman a Theranos demo in 2010, they would conclude it's a revolutionary machine too. It certainly gave out some numbers. That doesn't mean the tech was any good when little issues like accuracy matter.
LLMs are really only passable when either the topic is trivial, with thousands of easily Googleable public answers, or when you yourself aren't familiar with the topic, so the output only needs to be plausible enough to survive a cursory inspection. For anything that requires actually integrating and understanding information on a topic where you can call bull, they fall apart. That is also how human bullshit artists work: the "con" in "conman" stands for "confidence", which can mask a lack of substance but can't stand in for it.
> I think the goal posts keep moving, and that if you showed a person from 2019 the current GPT (even GPT-4o), most people would conclude that it's AGI
Yes, if you just show them a demo, it's super impressive and looks like AGI. But if you let a lawyer, doctor, or even a programmer actually work deeply with it for a couple of months, I don't think they would call it AGI, whatever your definition of AGI is. It's a super helpful tool with remarkable capabilities, but a lack of factuality, no memory, little reasoning, and occasional hallucinations make it unreliable and therefore not AGI, imo.