Anyone willing to read that wall of text should also read Yudkowsky's original piece on the topic: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

The inflammatory conclusion of his 2023 writing was that we need to "shut it all down", escalating to bombing datacenters:

> be willing to destroy a rogue datacenter by airstrike.

Now that someone who was an open follower of his tried to bomb Sam Altman's house and threatened to burn down OpenAI's datacenters, Yudkowsky is scrambling to backtrack. The X rant tries to argue that "bombing" and "airstrike" are different, and that therefore you can't say he advocated for bombing anything (a distinction any rationalist would normally pounce on for its logical inconsistency, if it weren't coming from a famous rationalist figure). He's also trying to blame his hurried writing for TIME for not being clear enough that he was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks. Again, that distinction seems like grasping at straws now that he's face to face with the realities of his extremist rhetoric.

reply
You doubt that Yudkowsky "was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks." Let's let the reader decide.

In the article, the string "kill" occurs twice, both times describing what some future AI would do if the AI labs remain free to keep on their present course. The strings "bomb" and "attack" never occur. The strings "strike" and "destroy" occur once each, and this quote contains both occurrences:

>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

>Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

>That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.

reply
> The strings "bomb" and "attack" never occur.

What do you think an "airstrike" is, then?

Trying to argue that certain strings don't occur on the page is the kind of argument that gets brought out when someone is desperate for any technicality that lets them avoid conceding a point.

This level of weaponized pedantry is what makes debating anything with LessWrong-style rationalists so impossible: there's always another volley of Gish gallop to be fired at you when you get too close to anything that goes against their accepted narratives.

reply
You were trying to get people to view what EY wrote in the time.com article as an encouragement to engage in criminal violence, such as the firebombing of Sam's home (as opposed to state-sponsored violence a la an airstrike on a data center), when in actuality, both before and after the publication of the time.com article, EY has explicitly argued against committing any crimes, particularly violent crimes, against the AI enterprise.

Knowing that most readers do not have time to read the entire article, I brought up how many times various strings occur in it to reassure the reader that there are no passages in the article, other than the one I quoted, that could plausibly be interpreted as advocating criminal violence. I.e., I brought it up to explain why I quoted those 3 (contiguous) paragraphs and none of the others.

In finding and selecting those 3 paragraphs, I was doing your work for you: in a perfectly efficient and fair debate, the burden of providing quotes to support your assertion that EY somehow condones the firebombing of Sam's home would fall on you.

reply
I found the last paragraph a pretty good summary of a rather long post:

> How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer.

reply
The whole post should have just been this one line. He likes the sound of his own voice too much.

That said, it rings hollow. AI doomerism is rooted in Terminator-style narratives, and in that narrative, the rogue Sarah Connor changes history (with a lot of violence, explosions, and special effects).

The whole scene is toxic.

reply
Jeebuz that was long; I only made it through about half of it. But I think he's calling for international cooperation in the style of Cold War nuclear treaties. I believe those mechanisms are broken and unavailable to us for two main reasons:

1. The Western world and especially the US is in the process of destroying the UN and other institutions of international law in order to protect Israel, for reasons that I have tried and failed to understand because the propaganda around it is so dense.

2. The Supreme Court made bribery of politicians legal, so now we have AI investors with actual governmental power. All restraint efforts will be blocked by the federal government for at least these next 3 crucial years.

reply
I find all of this stuff very interesting, but these two voices sound like they could never win an election and don't aspire to. That is the ultimate test of a policy's worth - it's all equally worthless until it wins an election, and that's what makes it reality.

AI doomerism and accelerationism are both playful fantasies. It doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless - all equally worthless - until elected.

What am I saying? The best rebuttal is, get elected.

reply
The interesting thing is that, for the "Father of Accelerationism" (Nick Land), AI Doomerism (doom for humans, at least for human identity) and Accelerationism (which for Land is just another label for capitalism: 'The label "accelerationism" exists because "capitalismism" would be too awkward.'[0]) are not opposed at all. And capitalism does not need to get elected.

(Land follows the above quote with "(But the reflexivity of the latter [capitalismism] is implicit.)"[0], which makes his position more precise: for Land, "Accelerationism is simply the self-awareness of capitalism"[1].)

[0] Nick Land (2018). Outsideness: 2013-2023, Noumena Institute, p. 71.

[1] Nick Land (2017). A Quick-and-Dirty Introduction to Accelerationism, Jacobite Magazine. Retrieved from github.com/cyborg-nomade/reignition

reply
I don't know, to me they are very different things. Accelerationists might really be calling for Better Capitalismism, but that's only because chatbots (the thing you are accelerating) are really good at math, and math is important for making money. If they weren't good at making money, literally nobody would care: kids would not be CS and math majors, they wouldn't care about STEM. They only care because $. But most real problems, including human problems, are not math problems.

This is a huge blind spot in the whole rationalist and broader STEM cultural-professional community: math isn't the best way to solve problems, because most problems are not math problems. SOME of school might be math problems, and it feels good to be a Doctor or a Software Development Engineer and get your kids to practice "problem solving" - no, they are practicing math problems, not problem solving.

For example, there's no math answer to whether a piece of land should be a parking lot, or an apartment building, or a homeless shelter, or... You can see how just saying "whoever is the highest bidder" - that's the math answer, which is why capitalism and accelerationism are related at their core - isn't a good answer. It pretends to be the dominant way we organize land, and of course it isn't the dominant way we organize land use anywhere at all, even if we pretend it is. There's no "bidding" for whether a curb should be a disabled parking spot, or a bike lane, or parking, or restaurant seating, or a parklet, or... These are aesthetic, cultural choices, with economic tradeoffs that are essentially meaningless. It's not about money, so it's not about math, so math does not provide an answer.

There are lots of essential human questions that cannot even be market priced, such as: what should we pay to invent new cures for congenital, terminal illnesses in children? Parents, and a lot of people, would pay "any" price, which is a market failure - but there are a lot of useful political answers to that question. A chatbot cannot answer that question, and it would struggle to take leadership and get elected to answer it.

Mathematicians are basically never elected, so chatbots would not be, and Eliezer Yudkowsky would not be. Are you getting it? Capitalism definitely does need to be elected; you might think it wins every election, but it very often loses at the local level!

I am agreeing with Hashem Sarkis, dean of the MIT SAP, and kind of disagreeing with Bong Joon-Ho, for further reading.

reply
Iran's leadership seems to be a solid rebuttal of that argument.
reply
That was really fascinating. Thanks.
reply