What I can't decide, for Anthropic, OpenAI, and xAI, is whether the BS part is that they don't take the doom risk seriously at all*, or that, despite taking it seriously, they think they are the ones best placed to actually solve it. Or both.
With Meta, at least, it is obvious they don't even understand the potential of AI, for good or ill.
Google and Microsoft seem to be treating it as normal software, with normal risks. If they have doom opinions, they are drowned out by all the other news going on right now.
* xAI obviously doesn't care about reputational risk (porn, trolling, propaganda), but that isn't the same question as doom.
Where did you get this notion? Did you hallucinate it?
Thirty-one percent being smaller than half.