It’s because that would be fairly speculative and cannot be measured. I don’t think that’s something that would make much sense in a system card. But Anthropic leadership does seem to communicate on that topic: https://www.darioamodei.com/essay/the-adolescence-of-technol...
Yeah, this has always been the glaring blind spot for most of the "AI Safety" community, and most of the proposals for "improving" AI safety actually make these risks far worse and far more likely.
> Political risks, such as dictators using AI to implement oppressive bureaucracy.

I think we're pretty good at that without AI.

> * Political risks, such as dictators using AI to implement oppressive bureaucracy.
> * Socio-economic risks, such as mass unemployment.

Even Haiku would score 90% on that.

The unemployment rate in the US is whatever the Fed wants it to be; it isn't a function of available technology.
I'm getting flashbacks to the 2018 hit:

    This is extremely dangerous to our democracy
We evolved to share information through text and media, and with the advent of printing and now the internet, we often derive our feelings of consensus and sureness from the preponderance of information that used to take more effort to produce. We're now at a point where a disproportionately small input can produce a massively proliferated, coherent-enough output that can give the appearance of consensus, and I'm not sure how we are going to deal with that.
They don’t care about those risks, because those risks are unsolvable, and addressing them would mean they wouldn’t make money or gain power.
Dario Amodei, CEO of Anthropic, discusses all of those risks in this essay: https://www.darioamodei.com/essay/the-adolescence-of-technol...

He seems to care quite a lot?

Not enough to not do it, though. Actions, not words — and the actions are plain: they're building this while promising to wipe out entire industries.