At a previous job, we had to refactor our entire front-end build system from Rollup (I believe it was) to a custom Webpack build because of this attitude. Our FE process was completely disconnected from the code on the site, existing entirely in our Azure pipeline and on developer machines. The aspects that were actually (theoretically) exploitable were in third-party APIs and our .NET ecosystems, which we obviously fixed. I wrote something like three different documents and presented multiple times to their security team on why this wasn't necessary and why we didn't want to take their money needlessly. $20,000 or so later (with a year of support for the system baked in), we shut up Dependabot. Money well spent!
reply
Very early in my career I'd take these vulnerability reports as a personal challenge and spend my day/evening proving they weren't actually exploitable in our environment. And I was often totally correct: they weren't.

But... I spent a bunch of hours on that. For each one.

These days we just fix every reported vulnerable library; it turns out that is far less work. And at some point we'd upgrade anyway, so we might as well.

Only when a fix causes problems (incompatibilities, regressions) do we look at it, analyze exploitability, and make judgment calls. Over the last several years we've only had to do that for about 0.12% of the vulnerabilities we've handled.
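For what it's worth, the "just upgrade everything" policy above is easy to automate. A minimal GitHub Dependabot configuration, assuming an npm project hosted on GitHub (swap the ecosystem for pip, nuget, etc. as needed), might look like:

```yaml
# .github/dependabot.yml — illustrative sketch, not a drop-in config
version: 2
updates:
  - package-ecosystem: "npm"   # or "pip", "nuget", "cargo", ...
    directory: "/"             # where the manifest (package.json) lives
    schedule:
      interval: "weekly"       # batch routine version bumps weekly
```

Note that security-only update PRs are enabled in the repository's security settings rather than in this file; the config above just handles routine version bumps.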

reply
My favorite: a Linux kernel pcmcia bug. On EC2 VMs.
reply
In a similar vein:

Raising alarms on a CVE in Apache2 that only affects Windows when the server is Linux.

Or CVEs related to Bluetooth in cloud instances.

reply
Or raising the alarm on a CVE in the Linux mlx5 driver on an embedded device that doesn't even have a PCIe interface.
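The common thread in these examples is that the scanner never asks whether the affected platform or hardware even exists on the host. As a sketch only, here is a hypothetical triage filter; the finding shape and its `platforms`/`requires_hw` fields are invented for illustration, not any real scanner's output format:

```python
# Hypothetical sketch: drop findings whose affected platform or required
# hardware doesn't match the host being scanned.
def applicable(finding, host_os="linux", host_hw=("pci",)):
    """Return True if a finding could plausibly apply to this host.

    `finding` is an assumed dict with optional 'platforms' (list of OS
    names the CVE affects) and 'requires_hw' (hardware the bug needs).
    """
    if finding.get("platforms") and host_os not in finding["platforms"]:
        return False  # e.g. a Windows-only Apache CVE on a Linux server
    if any(hw not in host_hw for hw in finding.get("requires_hw", [])):
        return False  # e.g. a Bluetooth CVE on a cloud VM with no radio
    return True

findings = [
    {"id": "CVE-A", "platforms": ["windows"]},      # wrong OS
    {"id": "CVE-B", "requires_hw": ["bluetooth"]},  # hardware absent
    {"id": "CVE-C", "platforms": ["linux"]},        # actually applies
]
print([f["id"] for f in findings if applicable(f)])  # → ['CVE-C']
```

A real tool would get the platform data from CPE strings or a VEX document rather than hand-written dicts, but the filtering idea is the same.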
reply
"If you use that installed Python version to start a web server and use it to parse PDFs, you may encounter a potential memory leak"

Yeah, so: 1) not running a web service, 2) not parsing PDFs in said nonexistent service, 3) congrats, you're leaking memory on my dev laptop.

reply