Will AI of today definitely become AGI of tomorrow? No, for sure not, and anyone who claims this is at best crazy.
But is it imaginable? I think totally. Andrej Karpathy's blog post about an RNN writing Shakespeare one character at a time was 10 years ago. GPT-2 was released 6 years ago. In that time we went from something that barely speaks English, never mind any other language, to something that, on a good run, is an excellent language tutor (natural and programming), can play games, can write moderately complex programs, and goodness knows what else. For some people, the romance of ChatGPT-4 was unmatched.
Even if it doesn't become "AGI", it might just get so good at being sub-AGI that the question is irrelevant. We're seriously contemplating a near future where junior devs are replaced by LLMs, and I write this as an AI sceptic who uses LLMs to write a lot of the kind of dumb code a junior dev might otherwise do.
I don't like AI, in that it nibbles away at my competitive advantage in life. But it's IMO crazy to pretend it is not even potentially a game changer.
The next logical step for cars was flying cars; the next logical step for planes was space travel. Both were hyped not so long ago, people believed in both, neither happened, and neither is any closer to us now than 50 years ago.
I'm not saying it doesn't bring any value. I'm just saying that if you think we should give $7 gazillion to Altman because he's building Skynet by 2030, you're smoking crack.
That's a pretty rude response to a considered reply.
I'm not saying we should give all our money to Altman. I'm saying AI is likely overvalued. But can it evolve into something far more capable given investment? Yes, it might.
Flying cars and space travel didn't happen, but might have. And funnily enough, they might still happen, with drone taxis and Virgin Galactic / SpaceX. Might. That's how it works with speculative investment.