Agreed. But many said the same thing about Moore's Law or its equivalents in 1985, 1995, 2005, and 2015, and yet the pace of core hardware development has been relentlessly exponential. I keep thinking we must be approaching some kind of limit (and surely we must be!), but I've learned not to bet on it.
reply
It's often constructive to consider the edges and corners of the space of possible positions, to understand the weaknesses of the various arguments.

For this case, imagine that you're an accelerationist, and you want the AI to take over, kill everyone, and usher in a new AI-only age for the planet, and later the universe.

How disappointed are you as this person? It's bottlenecks everywhere. Communities don't want to allow datacenters to be built. You literally want to bring nuclear power plants online just to run a few DCs, and those historically take 10+ years to permit and build. There's a shortage of AC switchgear and transformers to get power into the DCs, even if you have the power. Chip prices are skyrocketing, and you have to sign a 3-4 year contract just to get RAM now.

And meanwhile, the AI doesn't have many robot bodies. Tesla might put some feeble robots into mass production in a few years, but humans can knock those over with a stick into a puddle of water and it's over for that robot. The nuclear arsenals are all still in bunkers and submarines requiring two guys to physically turn keys, and the computers down there are so old they use 8" floppies.

Sure, there's some good progress on autonomous weapons, but a few million self-destructing AI drones built by human hands aren't going to cut it.

So as a hypothetical person hoping that AI destroys everything, you'd be rather impatient, I think, unless you think the AI can trick humanity into destroying itself relatively soon.

reply
deleted
reply
> People are basing their entire world view [on things getting worse because their leadership is abandoning them or actively working against their interests]

We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.

Fixed that for you.

reply
That's a completely separate point, is it not?

Maybe write it up and post a top-level comment if you think it's a point worth making.

reply