They also say publicly in their Opus 4.6 post (https://red.anthropic.com/2026/zero-days/):
>In this work, we put Claude inside a “virtual machine” (literally, a simulated computer) with access to the latest versions of open source projects. We gave it standard utilities (e.g., the standard coreutils or Python) and vulnerability analysis tools (e.g., debuggers or fuzzers), but we didn’t provide any special instructions on how to use these tools, nor did we provide a custom harness that would have given it specialized knowledge about how to better find vulnerabilities. This means we were directly testing Claude’s “out-of-the-box” capabilities, relying solely on the fact that modern large language models are generally-capable agents that can already reason about how to best make use of the tools available.
I think you're right to be skeptical, but they _have_ talked about the process publicly.
And I don't think there's anything there that outsiders couldn't reproduce. They have access to the same Opus 4.6 that you and I do, though not having to pay for the tokens certainly helps.
I'm pretty sure that if you were willing to burn a couple thousand bucks on tokens, you'd reproduce at least some of these findings.
(Also worth noting: the Linux kernel project now assigns a CVE to practically every bugfix, so a kernel CVE on its own doesn't say much about severity.)