You mean the $100 billion company selling an increasingly commoditized product has no interest in putting up barriers that keep out smaller competitors?
The real world alignment problem is humans using AI to do bad stuff
The latter problem is very real
The sci-fi version is alignment (not intrinsic motivation), though. HAL 9000 doesn't turn on the crew because it has intrinsic motivation; it turns on the crew because of how a secret instruction, one the AI expert didn't know about, interacts with its other directives.
And it's true: the more entities that have nukes, the less potential power any one government has.
At the same time, everybody should want fewer nukes, because they are wildly fucking dangerous and a potential terminal scenario for humankind.