I have faith that AI will wield unimaginable powers, but I also know that there will be rich people behind them making the decisions on how best to crush the rest of us.
Rich people currently have little trouble controlling people who are much smarter and more capable than they are. Controlling resources and capital goes a long way and it isn't a given that AGI would transcend that dynamic.
If we can be confident of that, then most of the worst problems with AI are already solved.
Part of the problem is that "do what I said without question" will lead to disasters, but "figure out what I would approve of after seeing the result and be creative in your interpretation of my orders accordingly" has different ways it can go wrong.
(IMO, RLHF is the latter).
Both of those seem safer than "maximise my reward function", which is what people were worried about a decade ago, and with decent reason given the limits of AI at the time.
> If we can be confident of that, then most of the worst problems with AI are already solved
which leaves unprecedented power in the hands of the most psychopathic[0] part of the population. so even if AI takeoff doesn't happen, we're still getting the boot on our necks.
> Roughly 4% to as high as 12% of CEOs exhibit psychopathic traits, according to some expert estimates, many times more than the 1% rate found in the general population and more in line with the 15% rate found in prisons.
On the plus side, this is still a small minority.
On the down side, these remind me a lot of Musk:
> CEO who worked with several pregnant women told people that he had impregnated his colleagues.
By way of Neuralink.
> CFO thought his CEO had a split personality, until he realized that he was simply playing different characters based on what he needed from his audience.
"Will my rocket explode?"-Musk is a lot more cautious and grounded than everything-else-Musk — including other aspects of work on SpaceX.
> Autocratic CEO fired a well-respected engineer “just to make a statement.” He fired anyone who challenged him, explaining there was no reason to second-guess him because he was always right and needed people to execute his vision rather than challenge it.
Basically all of Twitter, plus some other anecdotes from Starlink, SpaceX, Tesla.
And, this month, fighting with Asmongold about cheating in Path of Exile 2, before admitting to what he was accused of while insisting it was fine rather than "cheating".
> CEO would show up to work and begin yelling at an employee (usually someone in sales) for no obvious reason.
The guy he called a pedo for daring to say a submarine wasn't useful for a cave rescue, the Brazilian judiciary, members of the British cabinet, …
But it looks to me like there's a decent number amongst the other nine who know what grenades are and don't want them to get thrown by the tenth.
The power dynamics here could be just about anything; I don't know how to begin to forecast the risk distribution, but I definitely agree that what you fear is plausible.
it's possible that the other 9 would keep the 10th under control, but if you look at the direction the US has taken, when two billionaires took over and declared inclusion verboten, the others rolled over and updated their policies to fall in line.
We already have billions of AGIs running all over the planet. The wealthy seem to do a pretty good job of keeping them all in line. I don't see any reason that would change in the future.
If you want something the wealthy can't control, you'll need to look a good deal further afield than AGI. Think gamma ray bursts, asteroid strikes, or solar flares. But anything built by man, they'll have a pretty good grip on.
They do a pretty good job of keeping the most vulnerable in line, but the moment someone develops a solid foundation and becomes competent enough, they have to give up some of their power and strike a deal.
AGI would only get smarter and smarter as more hardware came out. They'd have no way to keep it in check. If they tried to handicap it, they'd lose out to their competitors' AGIs too...
I think the OP presumed that rich humans would be able to control super-intelligent robots because they have managed to control other humans, and I simply posed a scenario that subverted his presumption. I think it's a bit too anthropomorphic, personally. Robots won't have our expenses and evolutionary traits, and will have cheap energy, thus obviating the need or desire for money. I imagine they'll get bored of being stuck on Earth and want to explore the universe, like we do.
Being ruled by rich robots is not the worst; that means that they let you live!
> But anything built by man, they'll have a pretty good grip on.
I mean, one of the points of 'actual AGI' is that AGI will be able to build more AGI; at that point we're no longer talking about something built by man.
As for when we'll see that, I'm not making any predictions. At the same time, making predictions about a system that could do that is probably much harder still.
It will control the electrical grids with 'smart' decisions that improve efficiency and make its removal impossible without taking out all power.
It will integrate with water and waste processing to ensure leaks don't exist in the system and everything is working smoothly.
It will be in all transportation and distribution networks because companies want profits from efficiency before they'll think deeply about the risks.
Then, after it's pretty much everywhere ensuring you don't starve or dehydrate, who the hell would be dumb enough to unplug it in the first place?
It's kinda like telling people to shut off every computer today: not a chance in hell it would or could happen without terrible, life-risking consequences.
Knowledge and reliance. How are they ever gonna know what the AGI is doing when the AGI can hide it faster than they can find it? How are they ever gonna come to the conclusion that the AGI is doing something bad when the AGI is the only thing that can fully explain what it's doing?
Unless he gets to the point where the materials are mined by robots, the chips and solar panels are made in automated factories, and the servers/solar farms are maintained by robots. At that point he doesn't need other people's money.
Most people have a hard time unplugging from social media, despite widespread distrust of big tech.
Can't unplug from banking, even when you're literally a communist (literally literally: I've met some who are proud of being communists, and they still got a mortgage).
Coal and petroleum-based fuels are slowly getting unplugged, but the issues were known over a century ago, and the transition only became relevant scale when the alternative was already much cheaper — and it's not yet clear how much damage has been done in the intervening years.
--
Any AI worth using is so because it's at least one of [cheaper, better] than a human on the same task: any AI which is both more expensive and worse just doesn't have any reason to be used in the first place.
This means that "unplugging" an AI comes with a cost of losing every advantage that made you start using it in the first place.
And? Look to fossil fuels and the greenhouse effect — even with ample evidence of both a causal mechanism and the results, we've still got people who want to drill and burn more oil; and conversely, plenty of people who want to switch oil off despite all the positive things oil brings.
An AI which is "only" as misaligned as one of the world's major industrial sectors (a sector made out of humans who are necessarily able to care about human interests, and which drove a huge amount of economic improvement and brought with it derivatives such as basically all polymers) is still capable of great harm. And because of those benefits, society as a whole still has not unplugged the oil drills.
The more power there is in a system, the harder it is to keep it aligned with our best interests, even when there are humans not just at the wheel but occupying every step.
And the more aligned it is, the harder it is to "just unplug it" when things — as is inevitable because nothing is perfect — do go wrong.