Yes, AGI was already here with AlphaGo. People don't like that because they think it should have generalized outside of Go, but when you say AGI was here with AlphaZero, which can play other games, they again say it's not general enough. At this point it seems unlikely that AI will ever be general enough to satisfy the sceptics, for the reason you said: there will always be some domain that requires training on new data.
You're calling an apple an orange and complaining that everyone else won't refer to it as such. AGI is a computer program that can understand or learn any task a human can, mimicking the cognitive ability of a human.
It doesn't have to actually "think" as long as it can present an indistinguishable facsimile, but if you have to rebuild its training set for each task, that does not qualify. We don't reset human brains from scratch to pick up new skills.
I'm calling a very small orange an orange, and people are saying it isn't a real orange because it should be bigger, so I show them a bigger orange and they say it's still not big enough. And that continues forever.
Maybe not yet, but what prevents games from getting more complicated and matching rich human environments, requiring rich human-like adaptability? Nothing at all!
> When really, humans are also pretty specialized. Humans have years of 'training' to do a 'single job'. And they do not easily switch tasks.
What? Humans switch tasks constantly and incredibly easily. Most "jobs" involve doing so rapidly many times over the course of a few minutes. Our ability to accumulate knowledge of countless tasks and execute them while improving on them is a large part of our fitness as a species.
You probably did so 100+ times before you got to work. Are you misunderstanding the context of what a task is in ML/AI? An AI does not get the default set of skills humans take for granted; it's starting as a blank slate.
That is a result we want from AI, but it is not the exhaustive definition of AGI.
There are steps of automation that could fulfill that requirement without ever being AGI - it’s theoretically possible (and far more likely) that we achieve that result without making a machine or program that emulates human cognition.
It just so happens that our most recent attempts are very good at mimicking human communication, and thus are anthropomorphized as being near human cognition.
I'm just making the point that, when it comes to AI "General" Intelligence, humans are also not as "general" as we assume in these discussions. Humans are also limited in a lot of ways, narrowly trained, make stuff up, etc...
So even a human isn't necessarily a good example of what AGI would mean. A human is not a good target either.
Humans are our only model of the type of intelligence we are trying to develop; any other target would be a fantasy with no control to measure against.
Humans are extremely general. Every single type of thing we want an AGI to do is a type of thing that a human is good at doing, and none of those humans were designed specifically to do that thing. It is difficult for humans to move from specialization to specialization, but we do learn them, with only the structure to "learn, generally" as our scaffolding.
What I mean by this is that we do want AGI to be general in the way a human is. We just want it to be more scalable. Its capacity for learning does not need to be limited by material issues (i.e. physical brain matter constraints), time, or timescale.
So where a human might take 16 years to learn how to perform surgery well, and then need another 12 years to switch to electrical engineering, an AGI should be able to do it the same way, but with the timescale only limited by the amount of hardware we can throw at it.
If it has to be structured from the ground up for each task, it is not a general intelligence; it's not even comparable to humans, let alone scalable beyond us.
So find a single architecture that can be taught to be an electrical engineer or a doctor.
Today those tasks are being done, but by specialized architectures, models, and combinations of methods.
Then that would be a 'general' intelligence: the one type of model that can do either, trained to be an engineer or a doctor. And like a human, once trained, it might not do the other job well. But both start with the same 'tech', just as humans all have the same architecture in the 'brain'.
I don't think it will be an LLM; it will be some combo of methods in use today.
Ok. I'll buy that. I'm not sure everyone is using 'general' in that way. I think more often people picture a single AI instance that can do everything/everywhere/all at once: be an engineer and a doctor at the same time. Since it can do all the tasks at the same time, it is 'general'. Since we are making AIs that can do everything, it could have a case statement inside to switch models (half joking). At some point all the different AI methods will be incorporated together and will appear even more human/general.
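To make the half-joking "case statement" concrete, here's a minimal sketch (every name in it is made up for illustration) of a dispatcher that routes each task to a specialized model. Note that this is just orchestration over narrow models, not a single general one:

```python
# Hypothetical sketch: a "case statement" that switches between specialized models.
# ENGINEER_MODEL, DOCTOR_MODEL, and route() are illustrative names, not a real API.

def ENGINEER_MODEL(prompt: str) -> str:
    return f"[engineering model] {prompt}"

def DOCTOR_MODEL(prompt: str) -> str:
    return f"[medical model] {prompt}"

SPECIALISTS = {
    "engineering": ENGINEER_MODEL,
    "medicine": DOCTOR_MODEL,
}

def route(task: str, prompt: str) -> str:
    """Pick a narrow model per task: orchestration, not general intelligence."""
    try:
        return SPECIALISTS[task](prompt)
    except KeyError:
        raise ValueError(f"no specialized model for task: {task}")

print(route("medicine", "What does this ECG show?"))
```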
Right, but even at that point the sceptics will still say that it isn't "truly general" or is unable to do X in the same way a human does. Intelligence, like beauty, is in the eye of the beholder.
But if humans are so bad, what does that say about a model that can't even do what humans can?
Humans are a good target since we know human intelligence is possible; it's much easier to target something that is possible rather than some imaginary intelligence.
No human ever got good at tennis without learning the rules. Why would we not allow an AI to also learn the rules before expecting it to get good at tennis?
> Why would we not allow an AI to also learn the rules before expecting it to get good at tennis?
The model should learn the rules; don't make a model based on the rules. When you make a model based on the rules, it isn't a general model.
Human DNA isn't made to play tennis, but a human can still learn to play it. The same should be true for a model: it should learn the game; the model shouldn't be designed by humans to play tennis.