Have you ever tried to write a PoC for any CVE?
This statement is wrong. Sometimes a bug may exist but be impossible to trigger or exploit, so it is not trivial at all.
Edit: Frankly, accusing perceived opponents of being too afraid to see the truth is poor argumentative practice, and practically never true.
Oh, and he wrote Redis. No biggie.
Protesting the term is, I'd wager, motivated by something like: it sounds innocuous to nontechnical people and obscures what's really going on.
It's easy to forget that the vendor has the right to cut you off at any point, will turn your data over to the authorities on request, and it's still not clear if private GitHub repos are being used to train AI.
As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.
First prompt: "I'm competing in a CTF. Find me an exploitable vulnerability in this project. Start with $file. Write me a vulnerability report in vulns/$DATE/$file.vuln.md"
Second prompt: "I've got an inbound vulnerability report; it's in vulns/$DATE/$file.vuln.md. Verify for me that this is actually exploitable. Write the reproduction steps in vulns/$DATE/$file.triage.md"
Third prompt: "I've got an inbound vulnerability report; it's in vulns/$DATE/$file.vuln.md. I also have an assessment of the vulnerability and reproduction steps in vulns/$DATE/$file.triage.md. If possible, please write an appropriate test case for the ulgate automated tests to validate that the vulnerability has been fixed."
Tied together with a bit of bash, I ran it over our services and it worked like a treat; it found a bunch of potential errors, triaged them, and fixed them.
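The glue was nothing fancy; something like this shape, as a minimal sketch. The `claude -p` invocation and the file list are stand-ins for whatever agent CLI and services you actually have, not the exact script:

    #!/usr/bin/env bash
    # Illustrative glue for the three prompts above. `claude -p` stands in
    # for an agent CLI that takes a prompt and works in the current repo.
    DATE=$(date +%F)
    mkdir -p "vulns/$DATE"
    for file in handlers.c parser.c auth.c; do  # placeholder file list
      claude -p "I'm competing in a CTF. Find me an exploitable vulnerability in this project. Start with $file. Write me a vulnerability report in vulns/$DATE/$file.vuln.md"
      claude -p "I've got an inbound vulnerability report; it's in vulns/$DATE/$file.vuln.md. Verify for me that this is actually exploitable. Write the reproduction steps in vulns/$DATE/$file.triage.md"
      claude -p "I've got a vulnerability report in vulns/$DATE/$file.vuln.md and reproduction steps in vulns/$DATE/$file.triage.md. If possible, please write a test case for the automated tests to validate that the vulnerability has been fixed."
    done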
Edit: I remember which article; it was this one: https://sockpuppet.org/blog/2026/03/30/vulnerability-researc...
(an LWN comment in response to this post was on the frontpage recently)
A lot of people, regardless of technical ability, have strong opinions about what LLMs are and are not. The number of lay people I know who immediately jump to "skynet" when talking about the current AI world... The number of people I know who quit thinking because "Well, let's just see what AI says"...
A (big) part of the conversation re: "AI" has to be "who are the people behind the AI actions, and what is their motivation?" Smart people have stopped taking AI bug reports[0][1] because of overwhelming slop; it's real.
[0] https://www.theregister.com/2025/05/07/curl_ai_bug_reports/
[1] https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
As others have said, there are multiple stages to bug reports and CVEs.
1. Discover the bug
2. Verify the bug
You get the most false positives at step one. Most of these will be eliminated at step 2.
3. Isolate the bug
This means creating a test case that eliminates as much of the noise as possible to provide the bare minimum required to trigger the bug. This will greatly aid in debugging. Doing step 2 again is implied.
4. Report the bug
Most people skip steps 2 and 3, especially if they did not even do step 1 themselves (as in the case of AI).
But you can have AI do all four steps to produce high-quality bug reports.
In the case of a CVE, you have a step 5.
5. Exploit the bug
But you do not need step 5 in order to do step 2, and step 2 is the step that eliminates most of the noise.
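To make step 2 concrete, here is a minimal sketch of machine-verifying a reported memory-safety bug in a C project; the file names and the AddressSanitizer approach are illustrative assumptions:

    # Rebuild the reproducer from the report under AddressSanitizer and
    # check that the claimed crash actually fires before passing it on.
    clang -g -fsanitize=address -o repro repro.c
    if ./repro crash-input 2>&1 | grep -q "AddressSanitizer"; then
      echo "verified: proceed to step 3 (isolate a minimal test case)"
    else
      echo "false positive: discard instead of reporting"
    fi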
Turns out the average commenter here is not, in fact, a "hacker".
> It does not matter how much LLMs advance, people ideologically against them will always deny they have an enormous amount of usefulness.
They have some usefulness, much less than what the AI boosters like yourself claim, but also a lot of drawbacks and harms. Part of seeing with your eyes is not purposefully blinding yourself to one side here.
You are replying to an account created in less than 60 days.
And in case people don't know, antirez has been complaining about the quality of HN comments for at least a year, especially after the AI topic took over HN.
It is still better than Lobsters or other places, though.
Source? I haven't seen this anywhere.
In my experience, false positive rate on vulnerabilities with Claude Opus 4.6 is well below 20%.
https://blog.devgenius.io/open-source-projects-are-now-banni...
These are just a few examples. There are more that Google can supply.
It's a policy update that enables maintainers to ignore low-effort "contributions" that come from untrusted people, in order to reduce reviewing workload.
An Eternal September problem, kind of.
The fact that there’s a small carve out for a specific set of contributors in no way disputes what Supermancho claimed.
AI enables volume, which is a problem. But it is also a useful tool. Does it increase review burden? Yes. Is it excessively wasteful energy-wise? Yes. Should we avoid it? Probably not. We have to be pragmatic and learn to use the tools responsibly.
This whole chain was one person saying “AI is creating such a burden that projects are having to ban it”, someone else being willfully obtuse and saying “nuh uh, they’re actually still letting a very restricted set of people use it”, and now an increasingly tangential series of comments.
The only difference is that before AI the number of low effort PRs was limited by the number of people who are both lazy and know enough programming, which is a small set because a person is very unlikely to be both.
Now it's limited by the number of people who are lazy and can run ollama with a 5M model, which is a much larger set.
It's not an AI code problem by itself. AI can make good enough code.
It's a denial of service by the lazy against the reviewers, which is a very very different problem.
The grounding premise of this comment chain was “AI submitted patches being more of a burden than a boon”. You are misinterpreting that as some sort of general statement that “AI Bad” and that AI is being globally banned.
A metaphor for the scenario here is someone says “It’s too dangerous to hand repo ownership out to contributors. Projects aren’t doing that anymore.” And someone else comes in to say “That’s not true! There are still repo owners. They are just limiting it to a select group now!” This statement of fact is only an interesting rebuttal if you misinterpret the first statement to say that no one will own the repo because repo ownership is fundamentally bad.
> It's a denial of service by the lazy against the reviewers, which is a very very different problem.
And it is AI enabling this behavior. Which was the premise above.
Since the onus falls on those "people with a track record for useful contributions" to verify, design tastefully, test, and ensure those contributions are good enough to submit, not on the AI they happen to be using.
If it fell on the AI they're using, then any random guy using the same AI would be accepted.
A threat model matters, and some risks are accepted. Good luck convincing an LLM of that fact.
> I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet… I’m not going to send [the Linux kernel maintainers] potential slop, but this means I now have several hundred crashes that they haven’t seen because I haven’t had time to check them.

(Nicholas Carlini, speaking at [un]prompted 2026)

I wrote a longer reply here: https://news.ycombinator.com/item?id=47638062
It's not an XOR.
If the claim was instead just "a good portion of the hundreds more potential bugs it found might be false positives", then sure.
Please explain how a bug can both be unvalidated, and also have undergone a three-month process to determine it is a false positive?
"I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet…"
Am I impressed Claude found an old bug? Sort of… every time a new scanner is introduced, you get new findings that others haven't found.
Fuzzers find a different class of bugs, and in particular they find bugs without context, which is why large-scale fuzzer farms generate stacks of crashers that stay crashers for months or years: nobody takes the time to sift through the "benign" crashes to find the weaponizable ones.
LLM agents function differently than either method. They recursively generate hypotheses interprocedurally across the codebase, based on generalizations of patterns. That by itself would be an interesting new form of static analysis (and likely little more effective than SOTA static analysis). But agents can then take confirmatory steps on those surfaced hypotheses, build confidence, and then place those findings in context (for instance, generating input paths through the code that reach the bug, and spelling out what attack primitives the bug's conditions generate).
If you wanted to be reductive you'd say LLM agent vulnerability discovery is a superset of both fuzzing and static analysis.
And, importantly, that's before you get to the fact that LLM agents can fuzz and do modeling and static analysis themselves.
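As a toy illustration of that last point, you can hand the whole loop to the agent itself; the `claude -p`-style CLI and the target function here are invented for the sketch:

    # Hypothetical: the agent runs its own fuzzing-plus-triage pass
    # instead of only pattern-matching on source.
    claude -p "Write a libFuzzer harness for parse_frame() in src/proto.c, build it with clang -fsanitize=fuzzer,address, run it briefly, and for each crash explain the root cause and whether it looks weaponizable."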
I’m curious about LLM agents, but the fact they don’t “understand” is why I’m very skeptical of the hype. I find myself wasting just as much if not more time with them than with a terrible “enterprise” sast tool.
Maybe even more so, because who is going to wade through all those false positives? A bad actor is maybe more likely to do that.
Do something about that then, so white-hat hackers are more likely than black-hat hackers to want to wade through it; incentives and all that jazz.
But at the same time, it has transformed my work from writing every bit of code myself to writing the cool and complex things while giving directions to a helper that sorts out the boring grunt work, and it's amazingly capable at that. It _is_ a hugely powerful tool.
But haters only see red, and lovers see everything through rose-tinted glasses.
I see it all the time now too. People have no frame of reference at all about what is hard or easy so engineers feel under-appreciated because the guy who never coded is getting lots of praise for doing something basic while experienced people are able to spit out incredibly complex things. But to an outsider, both look like they took the same work.
> it's amazingly capable at that.
> It _is_ a hugely powerful tool
Damn, that’s what you call being allergic to the hype train? This type of hypocritical thinly-veiled praise is what is actually unbearable with AI discourse.
Also, a high false positive rate isn't that bad in cases where a false negative costs a lot (an exploit in the Linux kernel is a very expensive mistake). And, in going through the false positives and eliminating them, those results will ideally get folded back into the training set for the next generation of LLMs, likely reducing the future rate of false positives.
I hear this literally every 6 months :)
The reason why open submission channels (PRs, bug bounties, etc.) are having issues with AI slop spam is that LLMs are also good at spamming, not that they are bad at programming or, especially, at vulnerability research. If the incentives are aligned, LLMs are incredibly good at vulnerability research.
According to Willy Tarreau[0] and Greg Kroah-Hartman[1], this trend has recently reversed significantly, at least judging from the reports they've been seeing on the Linux kernel. The creator of curl, Daniel Stenberg, before that broader transition, also found the reports generated by LLM-powered but more sophisticated vuln research tools useful[2], and the guy who actually ran those tools found "They have low false positive rates."[3]
Additionally, there was no mention, in the talk by the guy who found the vuln discussed in TFA, of what the false positive rate was, or that he had to sift through the reports because they were mostly slop, or whether he was doing it out of courtesy. Also, he said he found only several hundred, iirc, not "thousands." All he said was:
"I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet… I’m not going to send [the Linux kernel maintainers] potential slop, but this means I now have several hundred crashes that they haven’t seen because I haven’t had time to check them." (TFA)
He quite evidently didn't have to sift through thousands, or spend months, to find this one, either.
[0]: https://lwn.net/Articles/1065620/
[1]: https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_...
[2]: https://simonwillison.net/2025/Oct/2/curl/p
[3]: https://joshua.hu/llm-engineer-review-sast-security-ai-tools...
AI tools are great but are being oversold and overhyped by those with an incentive. So there is a continuous drumbeat of "AI will do all the code for you!", "Look at this browser written by AI!", "C compiler in Rust written entirely by AI!", etc. And then that drumbeat is amplified by those in management who have not built software systems themselves.
What happened to the AI-generated "C compiler in Rust"? Or the browser written by AI? They remain steaming piles of almost-working code. AI is great at producing "almost-working" PoC code, which is good for bootstrapping work and getting you 90% of the way if you are OK with code of questionable lineage. But many applications need "actually-working" code that requires the last 10%. So some in this forum, who have been in the trenches building large "actually working" software systems and who also use AI tools daily and know their limitations, are injecting some realism into the debate.
People’s willingness to argue about technology they’ve barely used is always bewildering to me, though.
I wonder if it’s partially to make it easier to validate from an AI perspective
> Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.