I'm just making the point that, for AI "General" Intelligence, humans are also not as "general" as we assume in these discussions. Humans are limited in a lot of ways, narrowly trained, make stuff up, etc.
So even a human isn't necessarily a good example of what AGI would mean. A human is not a good target either.
Humans are our only model of the type of intelligence we are trying to develop; any other target would be a fantasy with no control to measure against.
Humans are extremely general. Every single thing we want an AGI to do is something a human is good at doing, and none of those humans was designed specifically to do that thing. It is difficult for humans to move from specialization to specialization, but we do learn them, with only the capacity to "learn, generally" as our scaffolding.
What I mean by this is that we do want AGI to be general in the way a human is; we just want it to be more scalable. Its capacity for learning does not need to be limited by material constraints (i.e. physical brain matter), time, or timescale.
So where a human might take 16 years to learn to perform surgery well, and then need another 12 years to switch to electrical engineering, an AGI should be able to do it the same way, but with the timescale limited only by the amount of hardware we can throw at it.
If it has to be structured from the ground up for each task, it is not a general intelligence; it's not even comparable to humans, let alone scalable beyond us.
So find a single architecture that can be taught to be an electrical engineer or a doctor. Today those tasks are being done, but by specialized architectures, models, and combinations of methods.
Then that would be a 'general' intelligence: the one type of model that can do either, trained to be an engineer or a doctor. And like a human, once trained, it might not do the other job well. But both started with the same 'tech', just as humans all have the same architecture in the 'brain'.
I don't think it will be an LLM; it will be some combination of methods in use today.
Ok, I'll buy that. I'm not sure everyone is using 'general' in that way, though. I think more often people mean a single AI instance that can do everything, everywhere, all at once: be an engineer and a doctor at the same time. Since it can do all the tasks at the same time, it is 'general'. Since we are making AIs that can do everything, it could just have a case statement inside to switch models (half joking; see the sketch below). At some point all the different AI methods will be incorporated together and it will appear even more human/general.
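To make the half-joke concrete, here's a minimal toy sketch of that 'case statement' dispatcher: a router over separately trained specialist models. Everything here (SpecializedModel, dispatch, the domain keys) is a hypothetical illustration, not any real system; the point is that the 'generality' lives in the lookup table, not in any one model.

```python
# Toy illustration of the "case statement" joke: a dispatcher that
# routes each task to a separately trained specialist model.
# Hypothetical sketch only -- not any real system or library.

class SpecializedModel:
    """Stand-in for a model trained for exactly one domain."""
    def __init__(self, domain: str):
        self.domain = domain

    def run(self, task: str) -> str:
        return f"[{self.domain} model] handling: {task}"

# One narrow model per profession, each built and trained separately.
MODELS = {
    "surgery": SpecializedModel("surgery"),
    "electrical_engineering": SpecializedModel("electrical engineering"),
}

def dispatch(task: str, domain: str) -> str:
    # The "case statement": look up the pre-built specialist.
    # The generality lives in this table, not in any single model,
    # which is exactly why this isn't a general intelligence.
    model = MODELS.get(domain)
    if model is None:
        raise ValueError(f"no specialist trained for domain: {domain}")
    return model.run(task)

print(dispatch("remove an appendix", "surgery"))
print(dispatch("design a power supply", "electrical_engineering"))
```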
Right, but even at that point the sceptics will still say that it isn't "truly general", or that it's unable to do X in the same way a human does. Intelligence, like beauty, is in the eye of the beholder.
But if humans are so bad, what does that say about a model that can't even do what humans can?
Humans are a good target since we know human intelligence is possible; it's much easier to target something that is possible than some imaginary intelligence.