Published CVEs seem like a bad metric for this, unless we assume that the ratio of really nasty vulns to not-too-bad vulns is consistent.
reply
Also, the question remains whether more CVE-laden code was produced in the first place, rather than automated detection simply getting better.

It's easier to find a needle in the haystack if the haystack is 50% needles.

reply
Have the AI vibe-code crappy apps so the related AI vuln finder can fix them.

You've just doubled the value and use cases of your AI solution!

reply
Another reason published CVEs aren't a great metric: one of the largest contributors to the significant increase in CVE counts over the past couple of years is that the Linux kernel now submits almost all bugs as CVEs, which wasn't the case before.
reply
I wouldn't look at the raw numbers. There were plenty of "scam" CVEs before LLMs that weren't actual vulns. Nowadays it's more popular to collect CVEs, and many people are scanning with LLMs and reporting without checking (as happened with cURL). These CVEs are often not verified by anyone.

More vulnerabilities probably are being found, but the number of CVEs is not a good metric.

reply
Did you publish this anywhere? Would love to read more.
reply
The rules around CVE reporting changed recently, so it would be expected that a lot more get accepted.
reply