Reminds me of the book The 48 Laws of Power -- so good it's banned from prisons.
I'm very inclined to trust them on the various ways that models can subtly go wrong in long-term scenarios.
For example, consider using models to write email -- is it a misalignment problem if the model is just too good at writing marketing emails? Or too good at getting people to pay a spammy company?
Another hot use case: biohacking. If a model is used to do really hardcore synthetic chemistry, one might not realize that the work is potentially harmful until too late (i.e., the human splits up the problem so that no guardrails are triggered).
Funny, because they do it every time like clockwork, acting like their AI is a thunderstorm coming to wipe out the world.
What if the capability advancements are real and they warrant a higher level of concern or attention?
Are we just going to dismiss them automatically because "bro, you're blowing it up too much"?
Either way, these capability improvements are ratcheting along at about the pace that many people were expecting (and were right to expect). There is no apparent reason they will stop any time soon.
The rational approach is probably to start behaving as if models as capable as Anthropic says this one is do actually exist (even if you don't believe them on this one). The capabilities will eventually arrive, most likely sooner than we all think, and you don't want to be caught with your pants down.