I think they are saying what you want them to say. In the past they got a bunch of AI slop and now they are getting a lot of legit bug reports. The implication being that the AI got better at finding (and writing reports of) real bugs.
reply
A report can be correct and slop at the same time. The reporter could have written it in a way that makes it clear a human reviewed and cared about the report.

Slop is a function of how the information is presented and how the tools are used. People don't care if you use LLMs when they can't tell you used them; they care when you send them a bunch of bullshit with 5% of value buried inside it.

If you're reading something and you can tell an LLM wrote it, you should be upset. It means the author doesn't give a fuck.

reply
No it can't. These aren't "Show HN" posts about new programs people have conjured with Claude. They're either vulnerabilities or they're not. There's no such thing as a "slop vulnerability". The people who exploit those vulnerabilities do not care how much earlier reporters "gave a fuck" about their report.

This is in the linked story: they're seeing increased numbers of duplicate findings, meaning, whatever valid bugs showboating LLM-enabled Good Samaritans are finding, quiet LLM-enabled attackers are also finding.

People doing software security are going to need to get over the LLM agent snootiness real quick. Everyone else can keep being snooty! But not here.

reply
Everyone is free to be as snooty as they like. If a report is harder to read/understand/validate because the author just yolo'ed it with an LLM, that's on the report author, not on the maintainers.

It's not okay to foist work onto other people because you don't think LLM slop is a problem. It is absolutely a problem, and no amount of apologizing and pontificating is going to change that.

Grow up and own your work. Stop making excuses for other people. Help make the world better, not worse. It's obvious that LLMs can be useful for this purpose, so people should use them well and make the reports useful. Period.

reply
Try to make this sentiment coherent. "It's not OK to foist work onto other people". Ok, sure, I won't. The vulnerability still exists. The maintainers just don't get to know about it. I do, I guess. But not them: telling them would "make the world worse".
reply
> There's no such thing as a "slop vulnerability"

https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...

See the list at the bottom of the post for examples.

reply
Those aren't vulnerabilities. You're missing the point.

Nobody is saying there's no such thing as a slop report. Not only do they exist, but slop vulnerability reports as a time-consuming, annoying phenomenon predate LLM chatbots by almost a decade. There's a whole cottage industry that deals with them.

Or did. Obsolete now.

reply
If I read the sentence correctly, they're saying that past reports were AI slop, but the state of the art has advanced and current reports are valid. This matches trends I've seen on the projects I work on.
reply