But... I spent a bunch of hours on that. For each one.
These days we just fix every reported vulnerable library; it turns out that's far less work. And we'd upgrade at some point anyway, so we might as well.
Only when an upgrade causes problems (incompatibilities, regressions) do we look at it, analyze exploitability, and make a judgement call. Over the last several years we've only had to do that for about 0.12% of the vulnerabilities we've handled.
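In rough pseudo-Python, the policy looks something like the sketch below. The Finding shape and both helpers are made-up stand-ins for illustration, not any real scanner's output or API:

    # Upgrade everything by default; analyze only the upgrades that break.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        package: str
        current: str
        fixed: str
        cve: str

    def try_upgrade(f: Finding) -> bool:
        # Stand-in for: bump to f.fixed, rebuild, run the test suite.
        print(f"upgrading {f.package} {f.current} -> {f.fixed} ({f.cve})")
        return True  # pretend the tests passed

    def analyze_exploitability(f: Finding) -> None:
        # The rare manual judgement call (~0.12% of findings for us).
        print(f"manual review: is {f.cve} reachable in our deployment?")

    def triage(findings: list[Finding]) -> None:
        for f in findings:
            if not try_upgrade(f):
                analyze_exploitability(f)

    # placeholder CVE id, not a real advisory
    triage([Finding("somelib", "1.2.0", "1.2.1", "CVE-XXXX-YYYY")])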
Raising alarms on a CVE in Apache2 that only affects Windows when the server is Linux.
Or CVEs related to Bluetooth in cloud instances.
Yeah, so: 1) not running a web service, 2) not parsing PDFs in said non-existent service, 3) congrats, you're leaking memory on my dev laptop.
I'm always curious about the companies that require vendors to report all instances where patches to CVSS 9.x vulnerabilities are not applied to all endpoints within 24 hours. Are they just absolutely flooded with reports, or does nobody on the vendor side actually follow these rules to the letter?
That sounds like a nigh-impossible requirement, as you've written it.
I suspect the actual requirement is much more limited in scope.
A 9.x vulnerability might not matter if the affected function only ever gets trusted data, while a 3.x one can screw you if it sits in a bad spot.
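A toy sketch of what I mean, with everything invented for illustration (no real CVE or library): the same flawed function is unreachable from one call site and directly attacker-facing from the other.

    def vulnerable_parse(data: bytes) -> None:
        # Pretend this has a memory-corruption bug on malformed input.
        if data.startswith(b"\xff"):
            raise RuntimeError("malformed input reached the bug")

    def load_bundled_config() -> None:
        # The "9.x that doesn't matter": input is a file we ship ourselves,
        # so the vulnerable path is effectively unreachable by an attacker.
        vulnerable_parse(b"trusted bytes we wrote")

    def handle_upload(untrusted: bytes) -> None:
        # The "3.x that screws you": modest score, but it sits directly on
        # attacker-controlled input.
        vulnerable_parse(untrusted)

    load_bundled_config()  # fine: trusted data only
    try:
        handle_upload(b"\xff" + b"payload")
    except RuntimeError as e:
        print(f"exploited via the low-scored path: {e}")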
Yup. Almost every single time, NVD came up with ridiculously inflated numbers without any rhyme or reason. Every time I saw one of their evaluations, it lowered my impression of them.