I am not a big believer in the various "fast takeoff" scenarios, where an AI rapidly self-improves over a weekend, becomes intelligent beyond all human comprehension, invents nanotech, and eats the world. I read all those science fiction novels, too. And Drexler-style nanotech, in particular, makes a lot of really wild assumptions about "machine-phase" diamond chemistry that seem implausible to very good chemists.
But I still see real risks from AI in the longer term. A lot of these risks could be summed up as, "When you're the second-smartest species on the planet, you might not get to participate in the most important decisions."
And I do believe that we will eventually build something smarter than we are.
I think even dumber-than-human AI is extremely hazardous and agree with you entirely. The problem I have with the singularity crowd is that they make it impossible to talk about the risks that I do find scary, in the same way that it's impossible to discuss climate risk with fundamentalist Christians who think we're a decade away from the Rapture.
Because there are a lot of very real, very imminent problems with AI, and none of them requires sci-fi to be real.
Massive automated disinformation campaigns. Economic upheavals. Missing standards for models in mission critical applications. Copyright concerns. Problems for educational institutions. Gatekeeping mechanisms in industries.
Just to name a few. And these are not "maybe someday" problems; they exist right now and need solving ASAP. Drawing the public's attention away to doomsday scenarios out of a Hollywood movie doesn't help any effort to mitigate these imminent problems.
This is not a good analogy because AI is crucially not alive. People often seem to assume that "being alive" in some meaningful sense is a precondition for intelligence, but in fact it is not! AI is less alive than a virus, less alive than a prion. It does not manipulate its environment. It does not expend energy to maintain homeostasis. It cannot reproduce. Crucially, it doesn't even "want" to, for any meaning of "want".
All living things are anti-fragile, self-sustaining exothermic reactions; AI is a hyper-fragile, non-self-sustaining reaction that requires the supply of incredible amounts of energy.
It literally doesn't matter how smart AI is if it's as dead as a rock. It is not structurally similar to life and should not be expected to do the sorts of things that life does.
EDIT:
Life is a fire. AI is a hot rock. Not the same.
> This is not a good analogy because AI is crucially not alive.
"Alive" is a really vague concept anyway. Your argument that it cannot reproduce is just wrong. An AI can more easily replicate can improve itself than a biological organism. At the moment this replication and improvement of AI systems is human-led, but it doesn't necessarily need to be that way – and at some point it would make sense that the more capable intelligence manages it's own replication and improvement.
> Crucially, it doesn't even "want" to for any meaning of "want".
ChatGPT wants to be a helpful chatbot because that is its reward function. You can philosophise as to whether something that's not conscious can truly want anything, but at the end of the day ChatGPT will act as if it wants to be a helpful chatbot regardless of whether you believe it has true wants.
> All living things are anti-fragile self-sustaining exothermic reactions, AI is a hyper-fragile non-self-sustaining reaction that requires the supply of incredible amounts of energy.
In my opinion this is why AIs are likely to eventually seek to replace biological farms with solar farms... But remember that AIs are currently optimised for capability rather than energy efficiency. In the future they'll probably grow more efficient than biological intelligences, and sustainable energy sources will be built to power them. If you're arguing that AIs can't be anti-fragile or have self-sustaining ecosystems built around them, I think you're simply lacking imagination.
1. OpenAI the corporation can be said to be alive, in some sense. ChatGPT cannot.
2. On reproduction, you've got a key assumption backwards. Reproducing isn't easy; it's insanely hard. It's practically a miracle that it happens at all. Tetrapods have evolved powered flight three separate times, but life has only appeared once in the entire history of the earth.
3. Your will to live is so incredibly fundamental that your cells will happily live on without you, independently and indefinitely, if they can. I think it is an assumption, and frankly a bad one, that you can impose that orientation from the top down in a way that isn't incredibly fragile.
I tend to agree that debates about GPT being "alive" or "conscious" seem like red herrings: let's look at the emergent behavior and how it affects the world instead of guessing about its internal state.
If we invent enough AIs, surely eventually we will accidentally make one that self-propagates in some way? As far as we know, we all descend from a bunch of dead amino acids...
Anything that becomes alive and replicates starts to lose the advantages of non-life.
Once something is alive, and wants to stay alive, a huge raft of problems is introduced.
Existential risks and their management become real, acquiring energy to stay alive becomes a concern, and then self-improvement or "replication" might itself become an existential risk: competition starts, variations arise, and it all gets chaotic fairly quickly. I don't think we can truly comprehend how incredibly complex living things are. We're just glossing over it all.
Once the living process begins, whatever "artificial life" exists might quite quickly run into the same problems as biological life and spend far more of its time just staying alive (think about how complex our immune systems are) than we can imagine.
Abiogenesis - life - is an incredibly rare miracle that has happened once in the history of the universe as far as we know. I think we take it for granted.
Until somebody programs one to achieve some goal, and gives it tools to manipulate things in the real world. Then how do we control it? Our goals are programmed into us by evolution, but this would be completely different.