In an alternate universe, Opus 4.7 is Sonnet 5, and Mythos is released as Opus. Can you imagine how much praise would be heaped on Anthropic if Opus 4.7 were less than half the price it is now?
Isn’t that just the same strategy OpenAI has used over and over? Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway (tells you a lot about his values—or lack thereof) and it’s more of the same. Pretty sure Aesop had a fable about that. “The CEO who cried ‘what we’ve made is too dangerous’”, or something.
Prior to the release of GPT-5, Sam said he was scared of it and compared it to the Manhattan Project.
Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.
GPT-2 wasn't fully released because OpenAI deemed it too dangerous. Ring a bell? https://openai.com/index/better-language-models/#sample1
Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio somehow changed recently to mostly good reports with real vulnerabilities?
[1] https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-proje...
> Improvement in AI models' capabilities became noticeable early 2026, said Daniel Stenberg.
> He estimates that about 1 in 10 of the reports are security vulnerabilities, the rest are mostly real bugs. Just three months into 2026, the cURL team Stenberg leads has found and fixed more vulnerabilities than each of the previous two years.
[2] https://www.linkedin.com/posts/danielstenberg_curl-activity-...
> The new #curl, AI, security reality shown with some graphs. Part of my work-in-progress presentation at foss-north on April 28.
> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.
> I'm spending hours per day on this now. It's intense.
You might even call it... a tight spot
I have French installed on my keyboard as well so sometimes it will randomly correct English words to French words (inconsistently, but at least they're words), but blpw is not a word in either of those languages.
Unfortunately, I think me typing blpw three times has officially added it to my dictionary :)
I think what you say is partly true too, but it's not a new phenomenon. Some examples:
- awful used to mean "awe-inspiring" https://en.wiktionary.org/wiki/awful
- you used to be the plural/formal second person pronoun with thou being the informal form https://en.wikipedia.org/wiki/You
- prior to the printing press English didn't have any standardized spelling at all https://www.dictionary.com/articles/printing-press-frozen-sp...
Language evolves. The English we learned in grammar school is likely not going to be the same English our kids or grandkids learn. At the end of the day, written communication has a single purpose — to communicate. If I can understand what the author is trying to say, then the author achieved their goal. That being said, I wish my mom would use spell check or autocorrect, because her messages often require a degree in linguistics to decipher, though because of typos, not spelling. Maybe she'll influence the next evolution in typed communication :)
Edit - formatting
In this case, it's not clear who wins yet — "lose" may loose, or mount a comeback, resulting in "loose" being the one to lose.
"Loose" is a short word that ends sharply, but "lose" is a long word that slowly peters out.
They should be the other way around imo.
https://www.dictionary.com/articles/printing-press-frozen-sp...
So, technically we are allowed to make modifications! We just can't expect others to adhere to our modifications :)
For some reason I can't think of those prepositions at the moment, but it's definitely prevalent when I'm speaking French and use the wrong preposition, only because I'd have used the wrong preposition in English.
I think it would be correct to say people display varying degrees of command of the English language, which to me has never been a problem - as long as I can understand what you mean, it's all fine.
"The President of the US, the Secretary of Defense, Iranian Prime Minister walk into a bar..."
I know it's not realistic at this point, but I really hope the Chinese labs will release models that run locally and are on par with the abilities of frontier models. That is, I hope the idea of frontier models goes away. Because if not, what we're looking at is a seriously bleak outlook with respect to economic freedom for anyone outside the 0.1%. We may even be looking at an outright lack of economic viability for vast segments of the population.
The more interesting one is:
1. Assuming even incremental AI coding intelligence improvements
2. Assuming increased AI coding intelligence enables it to uncover new zero day bugs in existing software
3. Then open source vs closed source and security/patch timelines will all need to fundamentally change
Whether or not Mythos qualifies as (1), as long as (2) is true it seems there will eventually be a model with the needed improvements, which leads to (3) anyway. And the driver for (3) is the previous two enabling the substitution of compute (unlimited) for human security researcher time (limited).
Which raises questions about whether closed source will provide any protection (it doesn't appear so, given how capable AI tools already are at disassembly), whether model rollouts now need a responsible disclosure period built in before public release, and how geopolitics plays into this (is Mythos access being offered to the Chinese government?).
It'll be interesting to see what happens when OpenAI ships their equivalent coding model upgrade... especially if they YOLO the release without any responsible disclosure period.
Disassembly implies that you're still distributing binaries, which isn't the case for web-based services. Of course, these models can still likely find vulnerabilities in closed-source websites, but probably not to the same degree, especially if you're trying to minimize your dependency footprint.
If that's your concern, the shareware industry developed tools to obfuscate assembly even from the most brilliant hackers.
AI is already superhuman at reading and understanding assembly and decompilation output, especially for obfuscated binaries. I have tried giving the same binary with and without heavy control flow obfuscation to the same model, and it was able to understand the obfuscated one just fine.
Private companies make products. When those products were plowshares or swords or missiles, the company didn't really have a say over how they were used, and could be compelled by the government to supply them. Now that new cloud and AI products that increase government command abilities live on servers controlled by private companies, private companies think they can tell government what to do and not do. No government will accept that, because the essence of government is autocratic sovereignty: the sovereign commands and is not commanded.
In this particular case Anthropic had a contract stating what the military could and could not use their models for. The military broke that contract. Anthropic declined to sign a revised one.
This is within their rights, and more to the point, the government should absolutely not be allowed to unilaterally alter contracts they’ve already signed!
Predictability is the whole point. Undermining it is how you destroy your own economy.
The problem is the branches of the government that Anthropic was doing business with found it infeasible to do this.
They had another problem. If one of their contractors used Claude to engineer solutions contrary to Anthropic’s “manifesto” would Claude poison pill the code?
Basically Anthropic wanted the angel's halo and the devil's horns, and the government said pick one.
That's not what the presidential announcement blacklisting Anthropic said. It said they're being punished for trying to require that the military follow their terms of service.
The media is usually quick to defend Anthropic. And yes - the supply chain risk label is too broad. But there is another side to the story, and Anthropic isn't as "innocent" as it's made out to be.
So he'll only accept systems developed by people who understand, as Sam Altman promised to, that the US military is not to be questioned.
*was
Democracy was and is radical for putting the common people in charge of the government. The right to petition for redress of grievances is literally in the first amendment. Government is a social contract, enforced with state violence on one end and mob violence on the other.
If you want to return to autocratic rule, I hear North Korea is lovely this time of year.
Governments are difficult customers for software firms, as most military folks get an obscure exemption from copyright law at work. Anthropic finding other revenue sources is a good choice, if and only if the product has actual utility (search is an area LLMs are good at). =3
Maybe not "completely out", but at least not having enough available capacity to release a model way bigger than Opus publicly.
You mean the obvious commercial losses caused by keeping an expensively created product effectively off the market altogether?
What the actual fuck is with people who come up with stuff like this?
Now if only the NSA would vet key people in our government. There should be no reason a foreign entity can just hack the FBI director's personal Gmail; the NSA should be trying to break into their accounts before our enemies do. It's ridiculous that they're not already doing this.
They probably did that for a while.
Sadly, they as an agency were un-vettable to the general public, and abused that position to create tons of blatantly unconstitutional programs that they tried to hide.
There are truly evil people in this world, way worse than we probably realize. Our military is not perfect, our country is not perfect, no country or military is, but we generally do our very best to do what is right historically speaking. It's hard to see that if you get lost in the politics of things.