
These types of arguments ("if you actually believed X, you'd be doing Y") always ring false to me. If you genuinely believed that the death of all biological life was imminent, that the only way to forestall it was to bomb data centers, and you were a normal, functional human being who did not want to bomb data centers, you'd probably sink into hopeless depression. Which is, in fact, exactly what has happened to Eliezer: if you talk to anyone around him, it's common knowledge that he has been despondent for the past ~12 months or so. It certainly does not seem like a self-aggrandizement tactic.


I think if I seriously thought all biological life was about to be destroyed, I could get over my distaste for bombing data centers.


No, that doesn't follow. It's possible to think that all biological life is about to be destroyed but also believe some combination of:

1) it's hopeless to bomb data centers because I'd quickly be stopped by the authorities and remaining data centers would beef up security, and nothing would be accomplished,

2) in the past, other people who have had beliefs of this sort ("violence is the only answer") have been wrong and I should not act on them even if I am sure "this time is different", because those other people also thought this time was different, and

3) a deeply ingrained ethical prohibition against violence like this, one that is not easy to overcome through intellectual rationalization alone.

These are not mutually exclusive, and if you did believe all of them at once, I think it's reasonable to assume you'd be in a serious depressive spiral.


Israel believes that Iran having a nuclear weapon is an existential threat. What do they do?

https://en.wikipedia.org/wiki/Assassination_of_Iranian_nucle... https://en.wikipedia.org/wiki/2020_Iran_explosions

I find it hard to believe that Yudkowsky et al. think we're facing a threat that is many times greater in magnitude, and yet are completely unwilling to act.

Here's what he advocates for in the article:

>If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

>Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

Does this sound like someone who has a deeply ethically-ingrained prohibition against violence?


The government of Israel is not particularly known for being ethical.


If Yudkowsky believes Israel's brand of preventative sabotage to be unethical, why does he advocate for exactly that style of measure to be taken against data centers in this very article?


Yudkowsky is an avowed extreme utilitarian of the "take a person off the street and kill him if his organs could save two people, as long as there is a forced pill everyone would take so that no one gets nervous that it could happen to them" type. Of the "unless the population was really large and the pill would hurt their throats for more utilons than would be gained by the net increase in life" type.

A real argument he made went something like: killing an orphan to spare everyone in the future getting a speck in their eye for a moment is OK.

It is very unlikely he believes anything like 2 or 3, and in the article he already advocates bombing non-treaty participants if they build a data center.


Sources for this character assassination of a scientist who tried to save the world by creating the field of AI safety research?


I went back and looked it up; the piece he wrote is called "Torture vs Dust Specks." The part about bombing countries that build data centers is in the submitted editorial.


This is a misinterpretation; he calls for an international treaty backed by military force. One sees what one wants to see.


I already mentioned the treaty:

> already advocates bombing non-treaty participants if they build a data center.


You seem to misunderstand what an international ban treaty is. The nuclear proliferation ban covered non-signatories, etc.


Are you talking about the NPT? Can you detail the military strike options it allows if a non-party acquires nuclear weapons?


Look at Iran, Iraq, any country that has a bioweapons program, etc. It's pretty trivial to understand what Eliezer meant if you truly want to.


Nice dodge. It isn't in the non-proliferation treaty.


But he's already in favor of bombing data centers; he just hopes someone else will do it on his behalf. So he clearly doesn't care about #2 and #3. #1 boils down to the fact that it would be kind of difficult, but that is no barrier to many people.

I can tell you for a fact that there are people with the knowledge, motivation, and track record of committing that kind of crime who will pick up this jeremiad of his and incorporate it into their existing corpus of theoretical justifications, mostly but not exclusively built around the writings of Ted Kaczynski. Not a large number of people, fortunately, but imagination and smarts matter more than numbers in that context.


I think of this as the inordinate power of advocacy.

It can seem useless to advocate some position that is far out there, but say you convince a million people to use a little less plastic. That's gonna reduce plastic use more than anything you can personally accomplish.

Of course, in this scenario you don't have to convince just some people, you have to convince everyone, especially the people you think are the worst actors, so it's sort of unlikely to be effective.


It's blatantly self-serving for you to decide that "if someone really believed X, they would do Y". All it does is justify your belief that no one actually believes X. You should really avoid these kinds of unfalsifiable thought patterns that only serve to reinforce your pre-existing beliefs.


This argument is similar to the facile claim that "Al Gore doesn't care about climate change because he takes a private plane to get to conferences."

Al Gore may believe that the net impact of getting his message out outweighs the cost of his flights.

Likewise, the author may believe that his best path to stopping the advance of AI lies in communicating and building consensus, rather than running his own bombing campaign.


I think you underestimate the motivating power of an overwhelming belief. This is a story about a very ordinary guy who decided that it was in fact time to start bombing data centers and turned it into a project.

https://leftcoastrightwatch.org/articles/he-went-to-the-capi...


You’re just rationalizing because you don’t want to accept that he’s right. People keep doing this to me. One person I know keeps coming up to me, initiating conversations about AI just to assert over and over again how it’s not a problem. And I’m just sitting there. And it’s like, dude, you’re in complete denial. Most people are having an emotional block right now. They spew bad-faith arguments and make tons of noise about how much this isn’t a problem. Just accept it. We are in trouble right now.



