I predict that the fight will be more about defining intelligence than inventing it.
No one knows what AGI is. There isn't going to be some switch that flips to take us from AI to AGI. These tools we have today will just keep getting incrementally better, and some new ones will pop up, and at some point we'll have to stop and say "yeah, this is good enough to qualify". And everyone will have their own opinion on what that point is. Plenty of people even think that we are there today, and there's nothing stopping Google or OpenAI from claiming it if they want.
And you can say exactly the same for consciousness, sentience, self-awareness, etc.
And when we run out of things AI can't do yet, it's AGI?
(You imply that a "can't do yet" will remain forever, which is the open question. If you ask me, AGI is only possible if the tech has ~unlimited agency, which implies control over computer- and energy-production facilities.)
No matter how good an AI system gets, we can rely on Hacker News commenters highlighting hallucinations and saying "it's only doing a simple <whatever> task" while totally ignoring whatever the system is actually capable of.
I agree that the goalposts will be moving for a long time (at least for some people)
That's right, whether we call it almost intelligence or asymptotic intelligence, or just plain artifice.
There are reasons Ray Kurzweil used the term "spiritual" in the Age of Spiritual Machines. Among those reasons is that "spiritual" is much more difficult to define with any consensus among experts.
And indeed, there's an inflection point coming. What it will be is not at all clear. However, I'd predict that the answer lies in the realization that, given the limits of conversing with LLMs and GPTs, there's a human-computer sensemaking loop:
The difference with this HCI loop is that you would never hire a human collaborator who lied to you, whether or not they were aware of their own lack of veracity. Here, we'll burn fields full of GPUs at massive cost to get an answer, even when the outcome is only to advertise the fact that the AI is wrong. There is learning, but it's going to be costly and painful.
I see this argument all the time and it perplexes me. A thing huge enough to rival the wheel, the control of fire, etc., and yet people say we might not be sure when it's here. If it is AGI, it will change the world in dramatic ways; you will not spend time arguing that we rolled things on top of logs before we had the wheel. The freaking war chariot is charging your ranks, taking your home and carrying your family into slavery; there is no room for doubt about the impact.