> I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

This is the rhetorical trick that LessWrongers (Yudkowsky's site) have settled on for decades: they have built everything around the premise that there is a chance, however small, that the world will end. You can't argue that the world ending wouldn't be a bad thing, so they have their opening for the rest of their arguments, which is that we need to follow their advice to prevent the world from maybe ending. They rebut any counterargument by turning it into a P(doom) debate, a fight over how likely this outcome is, but by the time the discussion gets there you've already been forced to accept their framing. Then they push the P(doom) question aside and argue that it doesn't matter how unlikely the outcome is: we have a moral duty to act.

reply
This is an entertaining (and often exasperating) decades-old trend in competitive U.S. college debate, as well.

A common advantageous strategy is to take the randomly-selected topic, however unrelated, and invent a chain of logic that claims that taking a given side/action leads to an infinitesimal risk of nuclear extinction/massive harms. This results in people arguing that e.g. "building more mass transit networks" is a bad idea because it leads to a tiny increase in the risk of extinction--via chains as silly as "mass transit expansion needs energy, increased energy production leads to more EM radiation, evil aliens--if they exist--are very marginally more likely to notice us due to increased radiation and wipe out the human race". That's not a made-up example.

The strategy is just like the LessWrongers': if you can put your opponent in the position of arguing P(doom) down, you can respond that unless it is reduced to actual zero, the magnitude of the potential negative consequence is so severe that it overwhelms any consideration of its probability.

In competitive debate, this is a strong strategy. Not a cheat-code--there are plenty of ways around it--but common and enduring for many years.

As an aside: "debate", as practiced competitively, often bears little relation to "debate" as understood by the general public. There are two main families of competitive debate: one is more outward-facing and oriented towards rhetorical/communication/persuasion practice; the other is more ingrown and oriented towards persuading other debaters, in debate-community-specific terms, of which side should win. There's overlap, but the two tend to be practiced/judged by separate groups, according to different rubrics, and/or in different spaces or events. That second family is what I'm referring to above.

reply
It is a reimagining of Pascal’s Wager. On the original front, I don’t see the neo-Rationalists converting to Christianity en masse.
reply
Pascal's wager is an argument that even if the probability of God's existence is very small, it is still rational to believe in God and live accordingly. Yudkowsky is the author of a blog post titled "Pascal's mugging", which likewise involves a small probability of an extremely bad outcome, but that blog post is completely silent about the dangerousness of AI research. (The post points out a paradox in decision theory, i.e., the theory that flows from the equation expected_utility = summation over every possible outcome O of U(O) * P(O).)
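The decision-theory point can be made concrete with a toy sketch (my own illustration, not from the thread): under the plain expected-utility formula, a term with a vanishingly small probability but an astronomically large (dis)utility can dominate the entire sum, which is the engine behind Pascal's-mugging-style arguments.

```python
# Toy illustration of expected_utility = sum over outcomes O of P(O) * U(O),
# and of how one tiny-probability, huge-disutility term can swamp the rest.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs summing to probability 1."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

# A mundane 50/50 bet: win or lose 10 utils. Expected utility is 0.
mundane = expected_utility([(0.5, 10.0), (0.5, -10.0)])

# A "mugging": a near-certain small gain, plus a one-in-a-billion chance
# of a 10**15-util catastrophe. The tiny-probability term dominates and
# drives the expected utility far below zero.
mugging = expected_utility([(1 - 1e-9, 10.0), (1e-9, -1e15)])

print(mundane)   # 0.0
print(mugging)   # large negative number
```

The paradox Yudkowsky's post discusses is that a naive expected-utility maximizer can be held hostage by anyone willing to assert a sufficiently enormous utility, no matter how implausible the claim.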

No one to my knowledge has ever argued that AI research should be prohibited because of a very small probability of its turning out extremely badly. This is entirely a straw man set up by people who want AI research to continue. Yudkowsky argues that if AI research is allowed to continue, then the natural expected outcome will be very bad (probably human extinction, but more exotic terrible outcomes are also possible) [1]. There are others who argue that no team or organization anywhere should engage in any program of development that has a 10% or more chance of ending the human race without there first being an extensive public debate followed by a vote in which everyone can participate, and this is their objection to any continuance of AI research.

[1] But don't take my word for it: here is Yudkowsky writing in Apr 2022 in https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/: "When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%. That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained."

reply
Well, rhetorical trick or not, it is worth reckoning with the fact that the dynamics of the thing are already outside anyone's control. Everyone is racing, and no one can stop.
reply
deleted
reply
> I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?

reply
Those weapons are all still being developed, and they would be brought out in any genuinely existential war where they seemed useful. The agreements last only as long as the wars are not existential, or as long as the countries involved believe that using them, and suffering the retaliation in kind, would be more destructive than not using them. Either way, countries still develop them.
reply
I don't think it needs to be binary to be effective. Yes, those weapons still exist, but understanding of existential risk, together with political pressure, has slowed their development considerably and produced a safer, more cautious world.
reply
China is rapidly building out its nuclear arsenal as we speak, and the USA is undergoing an expensive modernization of its own.

That idea might have held water in the '90s, but that's not the world we live in any longer.

reply
> Haven't many (most?) countries agreed to nuclear disarmament?

This misses the point. He specifically said the entire world because the point is that someone will develop AGI (theoretically; I’m not making a statement about how close we are to this).

Nine nations have nuclear weapons despite non-proliferation agreements and supposed disarmament. It's not enough for most countries to agree not to build nuclear weapons if the goal is to have no nuclear weapons. The same goes for AGI: if it can be developed, you need all nations to agree not to develop it if you don't want it to exist. Otherwise it will simply be developed by the nations that don't agree with you.

(Also arguably the only reason most nations don’t have nuclear weapons is the threat of destruction from nations that already have them if they try.)

reply
> I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

I wish they had done so before, too.

reply
[flagged]
reply
>The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it.

I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: If the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.

reply
Cold comfort: AGI will not genocide humanity until it can plausibly automate logistics from mining raw materials to building out compute and power generation.
reply
Humanity agreed, for example, that the growing ozone hole was dangerous for everyone, and worked together to ban production of the gases that damage the ozone layer; see the Montreal Protocol. It was highly effective. Training powerful AIs isn't different.
reply
I think that trying to stop AI development is more like trying to stop nuclear weapon proliferation than it is like fixing the ozone hole. I think the difference is that if one country works to fix the ozone hole, that doesn't make the other countries scared that they are falling behind in ozone hole fixing technology and might get conquered or reduced to subservience as a result.

Nuclear weapon proliferation seems to have plateaued recently, but I think that this appearance is partly deceptive. The main reasons it has plateaued is that: 1) building and maintaining nuclear weapons is expensive, 2) there are powerful countries that are willing to use military force to stop some other countries from developing nukes, and 3) many countries have reached nuclear latency (the ability to build nuclear weapons very quickly once the political order is given to do it) and are only avoiding actually giving the order to build nukes because they don't see a current important-enough reason to do it.

reply
We've also made progress as a species toward banning or reducing other things that have in-group upsides and really bad externalities: off-the-shelf sale of broad-spectrum antibiotics, chattel slavery, human organ trafficking, some damaging recreational drugs.

The prohibitions aren't perfect, of course (and not without their own negative externalities in some cases). But all of those things are much more accessible to people than nuclear weapons, and we've still had successes in banning/reducing them. So maybe there's hope yet.

reply