Those two links certainly satisfy my request. Thank you.

My summary of Eliezer's deleted tweet is that he is pointing out that even if everyone died except for the handful of people it would take to repopulate the Earth, that (pretty terrible) outcome would still be preferable to the outcome that would almost certainly obtain if the AI enterprise continues on its present course (namely, everyone's dying, with the result that there is no hope of the human population's bouncing back). It was an attempt to get his interlocutor (who was busy worrying about whether an action is "pre-emptive" and therefore bad, and about "a collateral damage estimate that they then compare to achievable military gains") to step back and consider the bigger picture.

Some people do not consider the survival of the human species to be intrinsically valuable. If 99.999% of us die and the rest have to endure many decades of suffering just for the species to survive, those people would consider that outcome just as bad as everyone dying (or even slightly worse, since if 100% of us were to die one day without anyone's knowing what hit them, the suffering would be avoided). I can see how those people might find Eliezer's deleted tweet alarming or bizarre.

In contrast, Eliezer cares about the human species independently of the individual people in it (although he cares about them, too).

Also, just because I notice that outcome A is preferable to outcome B does not mean that I consider it ethical to do whatever it takes to bring about outcome A. For example, just because I notice that everyone's life would be improved if my crazy uncle Bob died tomorrow does not mean that I consider it ethical to kill him. And just because Eliezer noticed and pointed out what I just summarized does not mean that Eliezer believes that "it might be ok to kill most of humanity to stop AI" (to repeat the passage I quoted in my first comment).

reply
The question was

> How many people are allowed to die to prevent AGI?

He didn’t say “not everyone dying is preferable to everyone dying”. The question was about the acceptable consequences of preemptively stopping AGI, under his assumption that AGI will lead to the extinction of all humans.

Those are only the same thing under the assumptions that 1) AGI is inevitable without intervention and 2) AGI will lead to the extinction of humanity.

If he believes he is being misunderstood, note that his “apology” doesn’t actually deny either of the assumptions I identified, and he is widely known to hold both.

In fact, his stated reason for walking back his earlier tweet, that using nuclear weapons is taboo, is an extremely weak excuse. If you truly believe this is the opportunity to save humanity from AGI, it would be comical to draw the line at the first use of nukes.

No, I think Eliezer is trying to come to grips with the logical conclusion of his strident rhetoric.

reply
You have a population of relatively wealthy, scientifically educated people who believe that AI risk is real and existential: that if they/we don't act, humanity itself might go extinct, and that this is an unacceptable outcome. Then you have Yudkowsky mooting the possibility that this is basically inevitable (absent global coordination, which seems highly unlikely) and suggesting that hyper-violent outcomes might literally be the only way our species survives.

What I am not saying: Yudkowsky intends to exterminate most of humanity.

What I am saying: this is a dangerous environment, and these kinds of statements will be seen as a call to action by a certain kind of person. TFA is literal proof of that. Moreover, within the community there are trained experts who might be able, at the cost of millions of lives, to plan an attack that could (plausibly) delay AI by many years.

The danger of this argument is that someone who reveres Yudkowsky might take his arguments to the logical conclusion and actually act on them. (Although I can't prove it, I also think Yudkowsky knows this, and his decision to speak publicly should be viewed as an indicator of his preferences.) That's why these conversations are so dangerous, and why I'm not going to give Yudkowsky and his folks a lot of credit for "just having an intellectual argument." I think this is like having an intellectual discussion about a theater being on fire, while sitting in a crowded theater.

reply
I said something to the same effect in a sibling comment to yours.

> someone who reveres Yudkowsky might take his arguments to the logical conclusion

What about Eliezer himself? Does he not believe his own rhetoric? Certainly, if he believes the future of the human race is at stake, that demands more action than writing a book about it and going on a few podcasts.

I think the whole thing is a bit like the dog who finally caught the car. It’s easy to use this strident rhetoric on an Internet forum, but LessWrong isn’t real life.

reply
If I ran the FBI, I would be very gently keeping tabs on the most fervent (and technically capable) anti-AI groups. Unfortunately, I don't think anyone is currently running the FBI. If I were tightly connected to folks in these communities, I would be keeping tabs on my friends and making sure they weren't getting talked into anything crazy.
reply