Prior to the release of GPT-5, Sam Altman said he was scared of it and compared it to the Manhattan Project.
Certainly it's a strategy OpenAI has used before, and when they did so it was a lie. Altman's dishonesty doesn't mean it can never be true, however.
GPT-2 wasn't fully released because OpenAI deemed it too dangerous. Ring a bell? https://openai.com/index/better-language-models/#sample1
Maybe I've missed something, but what Stenberg has been complaining about so far has been the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio recently shifted to mostly good reports with real vulnerabilities?
[1] https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-proje...
> Improvement in AI models' capabilities became noticeable early 2026, said Daniel Stenberg.
> He estimates that about 1 in 10 of the reports are security vulnerabilities; the rest are mostly real bugs. Just three months into 2026, the cURL team Stenberg leads has found and fixed more vulnerabilities than in each of the previous two years.
[2] https://www.linkedin.com/posts/danielstenberg_curl-activity-...
> The new #curl, AI, security reality shown with some graphs. Part of my work-in-progress presentation at foss-north on April 28.
> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.
> I'm spending hours per day on this now. It's intense.