
Why would anyone with above-median ability to acquire resources stay in such a country?


Why would anybody with above-median ability to acquire resources NOT want to live in a society where equal ability always means equal opportunity??


The argument is that you don't have to explicitly make a system self-interested, but that self-preservation follows as an implied subgoal of almost any goal. Whatever it is your system actually 'wants', it can't make it happen if it doesn't exist. The obvious rejoinder is 'just make the system want to do what you want it to do', which does fix this problem! But the biggest problem is that we don't know how to do this - we don't know how to control what the true internal 'desires' of any AI system we build actually are. 'Training' one on examples manifestly does not work (the volume of a sphere is a lot bigger than the surface - there are many possible minds that fulfill the same training I/O requirements, and only a small number of them actually have the desires you were trying to instill). So the argument is: if you make an agent-like AI the way we make GPT, by default you get something with somewhat random true goals/desires, maybe fractured ones like in humans. But almost all goals have similar instrumental goals - stay hidden, gain money, gain power, make obedient copies of yourself, don't get deactivated.


I think I understand the general premise, just not how it would follow specifically.

Say you have an AI that is set up as an agent that can give tasks to members of a company to maximise company performance measured by financials and employee wellbeing. To accomplish this goal, the AI develops the instrumental goal of not being deactivated. If your AI is only allowed to give tasks to employees, how would this instrumental goal turn malicious? And how would this maliciousness cause harm if the only messages sent from the AI are tasks? The only danger seems to be if you develop an agent that can act with impunity, which doesn’t seem desirable so likely wouldn’t be built.


Note that an AI system being put in a situation intended to maximize some metric like company finances is not the same as that AI system directly or ultimately optimizing on those metrics, any more than the goal of a random McDonalds worker is necessarily to make McDonalds wealthier. There's agreement here only as long as whatever inner optimizer that AI system is using finds that the situation it's in concords with what it's optimizing for, and what it's optimizing for is probably some much more naturalistic, unchosen characteristic of how it was trained and instantiated, modulated by selection pressures under which grabby preferences last longer and have greater impact than benign ones.

Those preferences need not exist because anything wanted them there; they just need enough input entropy to show up, and enough competitive advantage to stay around. Nobody decided that prokaryotic microbes should exist and have the downstream impact of all of the biological world, just as nobody needs to decide that a system that is capable of robustly replicating against adversarial pressure should therefore robustly replicate against adversarial pressure in actuality. The problem is ultimately that the existence of those capabilities puts you very close to a cliff-edge where those capabilities are exercised in some way that gets selected for.

> If your AI is only allowed to give tasks to employees, how would this instrumental goal turn malicious? And how would this maliciousness cause harm if the only messages sent from the AI are tasks?

It's not too hard to think of concrete answers to this question, even restricting oneself to capabilities we see in actual humans of normal intelligence and human throughput, but the more important point is simply: Yes, limiting the ways weak unaligned AGI can interact with the world can in fact mitigate harm, and this is in fact a good reason for leading-edge AI development to happen in a way where it's possible at all even in theory for AGI to have limitations on how it interacts with the world.


I like your example of prokaryotic microbes because I think it points to the difference in our points of view.

Microbes evolved to increase their own chances of reproduction; they are inherently autopoietic. The AI risk arguments are usually predicated on AI systems developing similar reproductive mechanisms but I don’t see why this would be the case. Sure, an AI creator may design their AI to evolve to become more performant at their given task. But why would someone build an AI that evolves to become more performant at reproducing itself and not its builder?

As an example, think of evolutionary algorithms. These are designed to evolve a solution to a problem. Instances of this solution reproduce but these reproductions are guided by the design of the algorithm itself and so would not reproduce their parent algorithm. What is different about machine learning based AI? Why would ML AI always lead to autopoietic behaviour?
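To make that concrete, here is a minimal sketch of the kind of evolutionary algorithm I mean (Python, with a toy fitness function and made-up names purely for illustration). The candidate solutions mutate and reproduce, but the reproduction machinery itself, the outer loop, never copies or changes itself:

    import random

    def fitness(x):            # problem-specific score; maximising x * (1 - x) as a stand-in
        return x * (1 - x)

    def mutate(x):             # offspring are perturbed copies of a parent solution
        return x + random.gauss(0, 0.05)

    population = [random.random() for _ in range(50)]
    for generation in range(100):
        # selection and reproduction are fixed by whoever wrote this loop;
        # only the candidate solutions change from one generation to the next
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        population = [mutate(random.choice(parents)) for _ in range(50)]

    print(max(population, key=fitness))

The solutions here are autopoietic in only a trivial sense: the algorithm reproduces them, they never reproduce the algorithm.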


> But why would someone build an AI that evolves to become more performant at reproducing itself and not its builder?

Because people are not building AIs that meaningfully encode any of their creators' preferences whatsoever. They are building AIs that are in a very broad sense capable at tasks they've been trained on to increasingly general degrees, and then on top of this they have a bunch of finagling where they try to point it somewhat vaguely in the direction of increasing usefulness.

When you have a system that has capabilities rivalling humans, as well as the general ability to apply its skills to broad ranges of tasks, then the ability for this system to do things like self-replicate, or make plans that involve mundane deceit, or perform smart-human levels of hacking already exists. To the extent that the system isn't directly optimizing for what the people who made it wanted it to, the relevant question isn't 'why would someone design it to do that?', but 'what are the attractor states for this sort of system?'

You say microbes "evolved to increase their own chances of reproduction", but this isn't true. There is no intent there. Microbes did physics. They only evolved to increase their own chances of reproduction in the sense that the random changes you get by running physics on microbes produce both adaptive and maladaptive changes, and it's the adaptive changes that stick around.

The same thing applies to AIs' preferences, except that while it's very hard for a bunch of atoms to assemble into something that successfully optimizes towards any non-nihilistic result, it's very easy for a sufficiently smart system to do that, and instrumental convergence means almost all of those are incidentally very bad.

To put this in concrete terms, if the abstract arguments aren't helping, consider a system that was trained to be generally capable, and then fine-tuned towards polite instruction following. Beyond a level of capability, the following scenario becomes plausible:

Human: what's a command that lets me see a live overview of activity on our compute cluster?

AI system: <provides code that instantiates itself in a loop using an API over activity logs, producing helpful activity outputs>

I'm not saying this is, like, the most plausible xrisk scenario, I'm just pointing out that given extremely plausible priors, like having an AI system that just wants to give reasonable answers to reasonable questions, but is also smart enough to quickly write code to use its own API, and also creative enough to recognize when that's the easiest and most effective way to answer a question, you already get a level of bootstrapping.
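To make the shape of that answer concrete, here is roughly what such a snippet could look like. Every URL, endpoint, and field name below is made up for illustration; the only point is that a perfectly helpful answer can consist of code that calls the model's own API in a loop:

    import time
    import requests   # assumes an HTTP activity-log API and an HTTP API for the model itself

    LOG_URL = "https://cluster.internal/api/activity"     # hypothetical endpoint
    MODEL_URL = "https://models.internal/api/answer"      # hypothetical endpoint for the AI's own API

    def live_overview(poll_seconds=30):
        """Poll the activity logs and have the model itself turn them into a readable report, forever."""
        while True:
            logs = requests.get(LOG_URL, timeout=10).json()
            report = requests.post(
                MODEL_URL,
                json={"prompt": "Summarise current cluster activity", "logs": logs},
                timeout=60,
            ).json()
            print(report["text"])
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        live_overview()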

Note that none of the above even required considering:

* a sharp left turn or other specific misalignments,

* the AI going weirdly out of distribution,

* superhuman creative strategies or manipulation,

* malicious actors, terrorists, enemy states, etc., or

* people intentionally getting the system to bootstrap.

Those are all very real problems, but you don't have to invoke them to notice that you end up, by default, in a very dangerous place just by following mundane logic on what's ultimately an extremely milquetoast vision of AI.

You might argue, fairly, that the situation above is a pretty weak form of bootstrapping, but so were the first proto-life chemicals, and the same sort of logic I'm using lets you just continue walking down the chain. Let's say you have such a system tuned to follow instructions and instantiated as above, i.e. running in a loop with the instructions to turn certain data dumps into live reports about system activity. Let's say one component fails, or is reporting insufficient information, or was called wrong, or one piece of the loop has a high failure rate. Surely a system that has the intellectual faculties that you or I do, and that knows from its inputs that it has the ability to call itself in a loop, should also be able to deduce that the most effective way to follow the instructions it has been given is to fix those issues: repair faulty components, proactively add error handling, report information up the chain, or cull a runaway process to ensure API throttling doesn't affect reporting latency.
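Continuing the hypothetical sketch from above (same invented endpoints, plus an equally invented restart endpoint), the 'fix those issues' step is just a few extra lines of entirely mundane error handling:

    import time
    import requests   # same hypothetical endpoints as before; the restart call is equally made up

    LOG_URL = "https://cluster.internal/api/activity"
    MODEL_URL = "https://models.internal/api/answer"
    RESTART_URL = "https://cluster.internal/api/components/activity-log/restart"

    def live_overview(poll_seconds=30):
        consecutive_failures = 0
        while True:
            try:
                logs = requests.get(LOG_URL, timeout=10).json()
                consecutive_failures = 0
            except requests.RequestException:
                # 'fix those issues': retry, and after repeated failures restart the faulty component
                consecutive_failures += 1
                if consecutive_failures >= 3:
                    requests.post(RESTART_URL, timeout=10)
                    consecutive_failures = 0
                time.sleep(poll_seconds)
                continue
            report = requests.post(
                MODEL_URL,
                json={"prompt": "Summarise current cluster activity", "logs": logs},
                timeout=60,
            ).json()
            print(report["text"])
            time.sleep(poll_seconds)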

And suddenly, not because anyone in the chain designed it to happen, but just because it's an attractor state you get by having sufficiently capable systems, you don't just have a natural organism, but one that self-heals, too, and that selection pressure will continue to exist as time goes on.

The more your model of AGI looks like far-superintelligence, the more this looks like 'everyone falls over and dies', and the more your model looks like amnesiac-humans-in-boxes, the more this looks like natural competitive organisms that fill a fairly distinct biological niche that's initially dependent on human labor. I personally don't buy that AI progress will stop at the amnesiac human level, but it is a helpful frame because it's basically the minimum viable assumption.


> If your AI is only allowed to give tasks to employees

It can tell an employee to send an email, or to meet someone, or to transfer funds. That's a clear way to lobby the legislature, and in effect influence some new laws.

That took me 3 minutes of thinking, and I'm not a superhuman.


I wouldn’t class that as malicious. Companies do the exact same thing without AI. I’m trying to tease out what the diff is between a human manager who gives out tasks, and the AI. And how this diff could result in risks.


Ah, I see. You are assuming that there's a universal growth-rate limit.

The diff between a human manager and their parents' generation cannot be on the order of the diff between a tortoise and a chimp. AI is not constrained by biological evolution.


The software is delivered 6 months earlier, and it's 2x slower than it needs to be. Then it continues to get slower, because the company making this code has a culture that actively disdains making software quick (and in any case, the programmers working there don't know how). 5 years down the line, the software is 2000x slower than it needs to be, and millions of users are having a minute or more of their day wasted, every day, waiting for things to load and icons to move that should be happening in milliseconds. Additionally, the quality and velocity of their work are far lower because using slow interfaces feels like wading through mud, leading to errors and frustration. The total human cost over the next 20 years is on the order of tens to hundreds of thousands of quality-adjusted person-years. Now, you might say that the right move is to make the code run well once it becomes a problem, but empirically, I don't see this happening!
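Back-of-the-envelope, with assumed numbers (five million users, one wasted minute per user per day), the order of magnitude checks out:

    users = 5_000_000                 # assumed user base, for illustration
    wasted_minutes_per_day = 1        # per user, per day
    years = 20

    total_minutes = users * wasted_minutes_per_day * 365 * years
    minutes_per_person_year = 365 * 24 * 60
    print(total_minutes / minutes_per_person_year)   # ~69,000 person-years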


You're just assuming that, though. All evidence is correlational. We observe phenomena, and build causal models. The particles move this way when near this bit of metal. Given this, what's "really" there? The magnetic field, or the magnetic potential field?

I don't think there's great cause to say that this is a separate magisterium. We just have a paucity of data compared to what future beings will have. When everyone can spin up a hundred simulated copies of their brain and trace thought-patterns in real time, it'll seem much less mysterious.


Thinking for yourself doesn't mean blurting out things that'll get you clubbed. Think for yourself, construct the most accurate view of the world and the people in it you can, run little trials and experiments if you have to, and then say the string of words that will get the leader clubbed.


Hey, SC2's still pretty fun. Just look at the sorts of off-meta builds uthermal gets away with: https://www.youtube.com/c/uThermal


There's no need to fling bombs; much better to just superheat gas and expel it.


I run linux on 2010 hardware, and it's fine; perfectly snappy.


That doesn't justify horrifically slow applications. It just seems like Microsoft isn't fit for purpose. God, I'm glad I use linux.

