> What I think people who work in tech fail to realise is that non-tech people are learning about these dangers and I don’t think they’re impressed or care if you have a computer to play with or robots playing soccer.
Doubtless.
That said, when it comes to AI risk, normal people have no reason to know the boundaries between reality and sci-fi for the same reason that I, as a non-lawyer, have no reason to know the boundaries between law and Hollywood tropes.
By extension, I expect the general public to be aware of countless fictional AI takeover scenarios: some where the AI was given a goal and blind logic made it bad for everyone (Days of Future Past, Colossus: The Forbin Project), some where the AI is actually evil (Terminator, the character Lore in TNG/PIC), some where the AI is a tool used by malevolent humans (many Black Mirror episodes, some TOS episodes), and some that mix several of the above (the Westworld TV series).
> I really don’t understand how this is supposed to work going forwards ?
Well, by "going forwards" I mean practically, by the way :)
This is where I don't even understand why it's being pursued so aggressively, especially in a business context.
To start with, there is absolutely no way that Google or Microsoft can just put an ultra-intelligent "AGI" on the internet for hire with a credit card. It would likely be an existential threat to their own business, their people, and/or civilization.
So yeah, even for a business like Microsoft, there is a limit to how good you can let this thing get before it actually destroys their own business or worse.
Then, how would it work from a national security standpoint? Would it not immediately provoke a war if America got an AGI first and China found out? Even rumors that the Pentagon had an AI war general with an IQ of 900 could possibly make things escalate immediately.
> I really don’t understand how this is supposed to work going forwards ?
Does anyone?