
I am not a big believer in the various "fast takeoff" scenarios, where an AI rapidly self-improves over a weekend, becomes intelligent beyond all human comprehension, invents nanotech, and eats the world. I read all those science fiction novels, too. And Drexler-style nanotech, in particular, makes a lot of really wild assumptions about "machine-phase" diamond chemistry that seem implausible to very good chemists.

But I still see real risks from AI in the longer term. A lot of these risks could be summed up as, "When you're the second-smartest species on the planet, you might not get to participate in the most important decisions."

And I do believe that we will eventually build something smarter than we are.



I think even dumber-than-human AI is extremely hazardous and agree with you entirely. The problem I have with the singularity crowd is that they make it impossible to talk about the risks that I do find scary, in the same way that it's impossible to discuss climate risk with fundamentalist Christians who think we're a decade away from the Rapture.


Fully agree.

Because there are a lot of very real, very imminent problems with AI, and none of them requires sci-fi to be real.

Massive automated disinformation campaigns. Economic upheavals. Missing standards for models in mission-critical applications. Copyright concerns. Problems for educational institutions. Gatekeeping mechanisms in industries.

Just to name a few. And these are not "maybe someday" problems; they exist right now and need solving, ASAP. Drawing the public's attention away to doomsday scenarios out of a Hollywood movie doesn't help any effort to mitigate these imminent problems.


This is not a good analogy, because AI is crucially not alive. People often seem to assume that "being alive" in some meaningful sense is a precondition for intelligence - but in fact it is not! AI is less alive than a virus, less alive than a prion. It does not manipulate its environment. It does not expend energy to maintain homeostasis. It cannot reproduce. Crucially, it doesn't even "want" to, for any meaning of "want".

All living things are anti-fragile, self-sustaining exothermic reactions; AI is a hyper-fragile, non-self-sustaining reaction that requires the supply of incredible amounts of energy.

It literally doesn't matter how smart AI is if it's as dead as a rock. It is not structurally similar to life and should not be expected to do the sorts of things that life does.

EDIT: Life is a fire. AI is a hot rock. Not the same.


This seems wrong on every level.

> This is not a good analogy because AI is crucially not alive.

"Alive" is a really vague concept anyway. Your argument that it cannot reproduce is just wrong. An AI can more easily replicate can improve itself than a biological organism. At the moment this replication and improvement of AI systems is human-led, but it doesn't necessarily need to be that way – and at some point it would make sense that the more capable intelligence manages it's own replication and improvement.

> Crucially, it doesn't even "want" to for any meaning of "want".

ChatGPT wants to be a helpful chatbot because that is its reward function. You can philosophise as to whether something that's not conscious can truly want anything, but at the end of the day ChatGPT will act as if it wants to be a helpful chatbot regardless of whether you believe it has true wants.
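
(A toy illustration of that last point, with purely hypothetical numbers: any system that picks actions by maximizing a numeric reward will "act as if it wants" whatever the reward measures, whether or not anything like wanting is going on inside.)

  #include <stdio.h>

  /* Stand-in reward function: scores three candidate replies. */
  double reward(int action) {
      const double scores[] = { 0.1, 0.9, 0.3 };  /* index 1 = the "helpful" reply */
      return scores[action];
  }

  int main(void) {
      int best = 0;
      /* Choosing the highest-reward action: "wanting", mechanically. */
      for (int a = 1; a < 3; a++)
          if (reward(a) > reward(best))
              best = a;
      printf("chosen action: %d\n", best);
      return 0;
  }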

> All living things are anti-fragile self-sustaining exothermic reactions, AI is a hyper-fragile non-self-sustaining reaction that requires the supply of incredible amounts of energy.

In my opinion this is why AIs are likely to eventually seek to replace biological farms with solar farms... But remember, AIs are currently optimised for capability rather than energy efficiency. In the future they'll probably grow more efficient than biological intelligences, and sustainable energy sources will be built to power them. If you're arguing that AIs can't be anti-fragile or have self-sustaining ecosystems built around them, I think you're simply lacking imagination.

> Life is a fire. AI is a hot rock. Not the same.

Not an argument.


>It does not expend energy to maintain homeostasis.

I'm sure OpenAI's monthly cloud bill begs to differ.

>It cannot reproduce.

Nothing save externally imposed constraints prevents it from initializing additional instances of itself.

>Crucially, it doesn't even "want" to for any meaning of "want".

It would be well within reach to give it this as an objective function, if we wanted to.


1. OpenAI the corporation can be said to be alive, in some sense. ChatGPT cannot.

2. On reproduction, you've got it backwards on a key assumption. Reproducing isn't easy; it's insanely hard. It's practically a miracle that it happens at all. Tetrapods have evolved powered flight perhaps three separate times, but life has only appeared once in the entire history of the earth.

3. Your will to live is so incredibly fundamental that your cells will happily live on without you, independently and indefinitely if they can. I think it is an assumption, and frankly a bad one, that you can impose that orientation from the top down in a way that isn't incredibly fragile.


The conversation isn't about ChatGPT. It is about a possible future thing. Also, whether it is "alive" is irrelevant to p(doom).


I feel like I'm taking crazy pills:

> whether it is "alive" is irrelevant to p(doom)

This is an assumption! You don't even know if that's true! Consider, is it even remotely justified?


I suppose that's fair. Should have said being alive is not necessarily a requirement for doom. But all of us are giving our opinions here.


I tend to agree that debates about GPT being "alive" or "conscious" seem like red herrings: let's look at the emergent behavior and how that affects the world instead of guessing about its internal state.


> life has only appeared once in the entire history of the earth.

Hard to say. Every subsequent time, there's been a sea of already highly-adapted life around to eat it.


Re 2: Reproduction is hard for physical things. For a running computer program, it's as simple as:

  fork();
I don't believe that a superintelligent AI is possible. But if it is, I'm pretty sure it can figure out a fork() call.
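
(For concreteness, a minimal POSIX C sketch of the point, assuming only the standard fork() call: one line, and the running process exists twice. Copying a large model's weights and finding hardware to run them would of course be a far bigger job than duplicating a process.)

  #include <stdio.h>
  #include <sys/types.h>
  #include <unistd.h>

  int main(void) {
      /* fork() duplicates the calling process; both copies resume here. */
      pid_t pid = fork();
      if (pid == 0)
          printf("child: a second running copy of this program\n");
      else if (pid > 0)
          printf("parent: spawned copy with pid %d\n", (int) pid);
      else
          perror("fork");  /* replication can still fail */
      return 0;
  }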


If we invent enough AIs, surely eventually we will accidentally make one that self-propagates in some way? As far as we know, we all descend from a bunch of dead amino acids...


Anything that becomes alive and replicating starts to lose the advantage of non-life.

Once something is alive, and wants to stay alive, a huge raft of problems is introduced.

Existential risks and their management become real; acquiring energy to stay alive becomes a concern; then self-improvement or "replicating" might itself become an existential risk, because competition starts to happen, variations arise, and it all gets chaotic fairly quickly. I don't think we can truly comprehend how incredibly complex living things are. We're just glossing over it all.

Once the living process begins, whatever "artificial life" exists might quite quickly adopt the same or similar problems as biological life, and spend far more of its time just staying alive (think about how complex our immune systems are) than we can imagine.


Abiogenesis - life - is an incredibly rare miracle that has happened once in the history of the universe as far as we know. I think we take it for granted.


To be fair, our measuring instruments aren't developed enough to support that conclusion. The universe could be full of life.


Until somebody programs one to achieve some goal, and gives it tools to manipulate things in the real world. Then how do we control it? Our goals are programmed into us by evolution, but this would be completely different.


> "When you're the second-smartest species on the planet, you might not get to participate in the most important decisions."

I'd argue we have already been in that state for a long time, with the "AIs" being corporations (and to a lesser degree governments).


The second-smartest species on the planet right now has been driven to near extinction (in relative terms) by the top one.



