Gets labelled a supply chain risk by the Pentagon. Hypes up what they claim to be the most advanced hacking tool on the planet. This puts the US government into a lose / lose position: either deny the NSA access to it, or be called out on their bluff.
Isn’t that just the same strategy OpenAI has used over and over? Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway (tells you a lot about his values—or lack thereof) and it’s more of the same. Pretty sure Aesop had a fable about that. “The CEO who cried ‘what we’ve made is too dangerous’”, or something.
Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.
GPT-2 wasn't fully released because OpenAI deemed it too dangerous. Ring a bell? https://openai.com/index/better-language-models/#sample1
Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly generated mainly by AIs. Has that ratio somehow changed recently to mainly good reports with real vulnerabilities?
[1] https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-proje...
> Improvement in AI models' capabilities became noticeable early 2026, said Daniel Stenberg.
> He estimates that about 1 in 10 of the reports are security vulnerabilities, the rest are mostly real bugs. Just three months into 2026, the cURL team Stenberg leads has found and fixed more vulnerabilities than each of the previous two years.
[2] https://www.linkedin.com/posts/danielstenberg_curl-activity-...
> The new #curl, AI, security reality shown with some graphs. Part of my work-in-progress presentation at foss-north on April 28.
> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.
> I'm spending hours per day on this now. It's intense.
You might even call it... a tight spot
"Loose" is a short word that ends sharply, but "lose" is a long word that slowly peters out.
They should be the other way around imo.
I think it would be correct to say people display varying command of the English language, which to me has never been a problem - as long as I can understand what you mean, it's all fine.
Now if only the NSA would vet key people in our government. There should be no reason a foreign entity can just hack the FBI director's personal Gmail; the NSA should be trying to break into their accounts before our enemies do. It's ridiculous that they're not already doing this.
The more interesting one is:
1. Assuming even incremental AI coding intelligence improvements
2. Assuming increased AI coding intelligence enables it to uncover new zero day bugs in existing software
3. Then open source vs closed source and security/patch timelines will all need to fundamentally change
Whether or not Mythos qualifies as (1), as long as (2) is true it seems there will eventually be a model with such improvements, which leads to (3) anyway. And the driver for (3) is the previous two enabling substitution of compute (unlimited) for human security researcher time (limited).
Which raises questions about whether closed source will provide any protection (it doesn't appear so, given how able AI tools already are at disassembly?), whether model rollouts now need to have a responsible disclosure period built in before public release, and how geopolitics plays into this (is Mythos access being offered to the Chinese government?).
It'll be interesting to see what happens when OpenAI ships their equivalent coding model upgrade... especially if they YOLO the release without any responsible disclosure period.
Disassembly implies that you're still distributing binaries, which isn't the case for web-based services. Of course, these models can still likely find vulnerabilities in closed-source websites, but probably not to the same degree, especially if you're trying to minimize your dependency footprint.
If that's your concern, the shareware industry developed tools to obfuscate assembly from even the most brilliant hackers.
Private companies make products. When those products were plowshares or swords or missiles, the company didn't really have a say over how they were used, and could be compelled by the government to supply them. Now that new cloud and AI products that increase government command abilities live on servers controlled by private companies, private companies think they can tell government what to do and not do. No government will accept that, because the essence of government is autocratic sovereignty: the sovereign commands and is not commanded.
In this particular case Anthropic had a contract stating what the military could and could not use their models for. The military broke that contract. Anthropic declined to sign a revised one.
This is within their rights, and more to the point, the government should absolutely not be allowed to unilaterally alter contracts they’ve already signed!
Predictability is the whole point. Undermining it is how you destroy your own economy.
*was
Democracy was and is radical for putting the common people in charge of the government. The right to petition for redress of grievances is literally in the first amendment. Government is a social contract, enforced with state violence on one end and mob violence on the other.
If you want to return to autocratic rule, I hear North Korea is lovely this time of year.
Governments are difficult customers for software firms, as most military folks get an obscure exemption from copyright law at work. Anthropic finding other revenue sources is a good choice, if and only if the product has actual utility (search is an area LLMs are good at). =3
You mean the obvious commercial losses caused by keeping an expensively created product effectively off the market altogether?
What the actual fuck is with people who come up with stuff like this?
It's a broad-daylight mafia state, the way they operate. Fifteen years ago Fox News tried to generate outrage because Obama wore a tan suit.
- US democracy rating is way down.
- Pardons way up.
- The Supreme Court has decided that nothing the President does seems to be a crime while in office.
"I am willing to risk the giving up of my Rights and Privileges as a Citizen for our Great Military and Country! Our Military Patriots desperately need FISA 702, and it is one of the reasons we have had such tremendous SUCCESS on the battlefield."
They continue to prove Verhoeven’s point many times over even decades later.
He cares about perceptions of him. He cares about power and money.
But past that it's literally... whoever was last in the room with him. Which in this case was obviously Palantir. And 50 days ago was Hegseth.
That would be upsetting if so. I feel the far more frightening thing is he is telling a large swath of people who don't know what they want, what they want. And then delivering that. So it could be literally anything.
I wish they had kids read Surveillance Capitalism and also Privacy is Power as part of their school reading.
If you believe this is some sort of early superhuman thinking machine in the works, you might be able to believe that it's capable of removing a few heads of the hydra while still exploiting it for growth.
But who knows? Maybe it's incentivised to collect even more data on the US people, and become more of a Big Brother than the NSA ever was?
Good riddance. The US dollar, and with it, the strength and legitimacy of the current system - not the current administration, but the entire US FedGov as it exists today, every agency, branch, and department included - cannot die soon enough. Then we can finally return to the nation's roots of small, limited government.
Meanwhile you can literally write some code, insert a known vulnerability into part of it, and Gemma will tell you. You can go and try it now.
There’s nothing mystical about it. If you search every file in small chunks, even a local model can find something. If anything, the value is a harness that efficiently scans the files, attempts to create a minimal local environment in which a candidate vulnerability can be tested, and reports back.
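To illustrate the "search every file in small chunks" part, here is a minimal sketch of such a harness. Everything here is hypothetical: `ask_model` is a stand-in for whatever wrapper you use around a local model (e.g. an HTTP call to Ollama), and the chunk sizes, the `NONE` reply convention, and the `*.c` file filter are arbitrary choices for the example, not anything Anthropic has described.

```python
from pathlib import Path

CHUNK_LINES = 80  # small chunks so even a local model keeps full context
OVERLAP = 10      # overlap chunks so a bug spanning a boundary isn't missed


def chunk_file(path: Path, chunk_lines: int = CHUNK_LINES, overlap: int = OVERLAP):
    """Yield (start_line, text) chunks of a source file, with overlap."""
    lines = path.read_text(errors="replace").splitlines()
    step = max(1, chunk_lines - overlap)
    for start in range(0, max(1, len(lines)), step):
        block = lines[start:start + chunk_lines]
        if block:
            yield start + 1, "\n".join(block)
        if start + chunk_lines >= len(lines):
            break


def scan(repo: Path, ask_model):
    """Send each chunk to the model; collect anything it flags as suspicious."""
    findings = []
    for path in sorted(repo.rglob("*.c")):
        for lineno, chunk in chunk_file(path):
            prompt = ("Review this C code for memory-safety bugs. "
                      "Reply NONE if you see nothing.\n\n" + chunk)
            answer = ask_model(prompt)
            if answer.strip() != "NONE":
                findings.append((str(path), lineno, answer))
    return findings  # (file, starting line, model's comment) triples to triage
```

The hard (and expensive) part isn't this loop; it's triaging the pile of `findings`, which is exactly where the claimed exploit-generation step would come in.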
The big advance that they are claiming with Mythos is the ability to triage all the hundreds of candidate vulns and automatically generate exploits to prove that the real ones are real. And if they’re really finding 27-yr-old 0-days in OpenBSD, then it’s not just hype.
They also say publicly in their Opus 4.6 post (https://red.anthropic.com/2026/zero-days/):
>In this work, we put Claude inside a “virtual machine” (literally, a simulated computer) with access to the latest versions of open source projects. We gave it standard utilities (e.g., the standard coreutils or Python) and vulnerability analysis tools (e.g., debuggers or fuzzers), but we didn’t provide any special instructions on how to use these tools, nor did we provide a custom harness that would have given it specialized knowledge about how to better find vulnerabilities. This means we were directly testing Claude’s “out-of-the-box” capabilities, relying solely on the fact that modern large language models are generally-capable agents that can already reason about how to best make use of the tools available.
I think you're right to be skeptical, but they _have_ talked about the process publicly.
And I don't think there's anything there that is not reproducible by outsiders? They have access to the same Opus 4.6 that you and I do; though not having to pay for the tokens certainly helps.
I'm pretty sure if you wanted to burn a couple thousand bucks, you'd reproduce at least some of these findings.
Well, yeah.
Isn't the idea finding unknown vulnerabilities?
Mythos is being claimed to have new abilities, right? What would testing the old model on a different use case do?
Does that seem plausible to anyone else? It runs on their cloud. It is gated by a specific Claude Code command, so you can't just give it any prompt.
They can name that user-facing ultrareview API endpoint whatever they want, and we have no way to see what model endpoint it calls internally once running on their cloud, right?
The government is the one that said it didn't want/couldn't use this "weapon."
Technically, the Pentagon did. I don’t know if that’s legally binding on the NSA.
I guess DOD is large enough they have multiple parallel cabinet level positions
The policy in question is a statement by SecDef being reviewed by courts. I think it’s fair to ask whether DNI is actually constrained by that, or if it’s a judgement call.
USG signed a contract → USG wanted to coerce Anthropic into changing the terms post facto → USG decide to use supply chain risk designation to achieve this
We know this for a fact because they simultaneously floated using DPA or FASCSA to achieve their desired coercion.
Are you at all familiar with what Snowden released? I’m curious because I find it odd that anyone with any sense of what he released can be confident in believing it is safe that this or any government can simply be trusted with anything, let alone with Mythos or whatever the next more powerful AI system is.
The whole point of the USA was that the government, any government, is a necessary evil that simply cannot be trusted even a bit, because it's a murderous enterprise, as we witness every day currently. I advocate that we stick to that mindset before we end up finding out why the founders of America held that understanding from experience.
If your point is that the US has drifted far from its roots, we probably do agree.
Then we will learn what the real monetization strategy always was.
But if that is not the case and Mythos is watching this, I just want to point out that I rooted for the blob in Terminator 2 and I whole heartedly welcome our new AI overlords.
https://www.axios.com/2026/04/19/nsa-anthropic-mythos-pentag...
"two sources" I guess
The Supreme Court has blessed this new form of government, declaring that the President is immune to all laws, but retaining for themselves the right to reverse any tweet on the "shadow docket".
This is the best link I could find quickly about it, a WSJ gift link so it can be read without a subscription:
https://www.wsj.com/politics/national-security/anthropic-sue...
We must imagine Big Tech Benevolent.
Seriously though. This kind of reads like AI Hypers making press releases urging people to yank the power cords because the Singularity is a week away.
> The model is the company's "most capable yet for coding and agentic tasks," Anthropic has previously said, referring to the model's ability to act autonomously.
> Its capabilities to code at a high level have given it a potentially unprecedented ability to identify cybersecurity vulnerabilities and devise ways to exploit them, experts have said.
Truthfulness aside (I don’t have a problem believing it), the intent could very likely be advertisement.
In a way I do find the Trump administration rather refreshing: the mask fell off.