Scenario 2 makes the assumption that no technological development can happen without AI, which seems like a stretch to me. Honestly, the worst scenario I can think of is 40ish years of AI-assisted development followed by a technological crash due to there being no competent engineers left to fix the slop.
reply
I didn't say all technological development would be halted, just that tech "in many fields" would have to be stalled for safety (AI development, algorithm development that would reduce the cost of training models, etc.). Naturally, if AI is considered an existential threat, there would be a huge safety radius around anything that would allow bad actors to train AI models.
reply
deleted
reply
This makes the assumption that AI will lead to the apocalypse. That's unfalsifiable, has been predicted about plenty of things in the past, and is frankly annoying to keep seeing pop up.

It's like listening to Christians talking about the rapture.

reply
The problem is that if someone is right about an existential disaster caused by AI, by the time they're proven right it would be too late.

Frontier AI models get smarter every year, but humans don't get any smarter year over year. If you don't believe that somehow AI will just suddenly stop getting better (which is as much a faith-based gamble as assuming some rapturous outcome for AI by default), then you'd have to assume that at some point AI will surpass human intelligence in all fields, and then keep going. In that case, human minds, and human will overall, will be inconsequential compared to that of AI.

reply
Frontier AI models get evaluated for safety precisely to avert the "AI robot uprising causes an existential disaster" scenario. At the moment we are light years away from anything like that ever happening, and that's after we literally tried our best to LARP that very scenario into existence with things like Moltbook and OpenClaw.
reply
Cool story, bro!
reply
[dead]
reply
In other words, only one option.
reply