So tomorrow, if a model genuinely finds a bunch of real vulnerabilities, you would just ignore them? That makes no sense.
An LLM finding problems in code is not at all the same as someone using it to contribute code they couldn't write, or haven't written, themselves. A report stating "there is a bug/security issue here" is not itself something I have to maintain; it's something I can react to by writing a fix, and it's that fix I then have to maintain.
Well, until you start getting dozens of generated reports and spend your time reviewing them, only to find out they're all plausible-looking bullshit about non-issues.

We already had that happening with other kinds of automated tooling, but at least that noise used to be easier to detect with a quick skim.
