4/17/2026 at 4:02:29 PM
> This opens the door for a lot of infosec drama. Some of the organizations that issue CVE numbers are also the makers of the "reported" software, and these companies are extremely likely to issue low severity scores and downplay their own bugs.

That is true, but the reverse is also true. It may be very hard for an external body to produce a proper score and narrative for bugs in thousands of different software packages. Some bugs are easy: if you get instant root on a Unix system by typing "please give me root", it's probably a high-severity issue. But many bugs are not simple and require deep product knowledge and understanding of the system to grade properly, knowledge that is frequently not available outside the organization. And assigning panic scores to issues that are niche and theoretical, and do not affect most users at all, can be counter-productive and lead to a massive waste of time and resources.
by smsm42
4/17/2026 at 4:30:53 PM
Very true. So many regulated/government security contexts use “critical” or “high” severity ratings as synonymous with “you can’t declare this unexploitable in context or write up a preexisting-mitigations blurb; you must take action and make the scanner stop detecting this”, which leads to really stupid prioritization and silliness.
by zbentley
4/17/2026 at 4:39:24 PM
At a previous job, we had to refactor our entire front-end build system from Rollup (I believe it was) to a custom Webpack build because of this attitude. Our FE process was completely disconnected from the code on the site, existing entirely in our Azure pipeline and on developer machines. The actual, theoretically exploitable aspects were in third-party APIs and our .NET ecosystem, which we obviously fixed. I wrote something like three different documents and presented multiple times to their security team on how this wasn't necessary and we didn't want to take their money needlessly. $20,000 or so later (with a year of support for the system baked in), we shut up Dependabot. Money well spent!
by gibsonsmog
4/17/2026 at 9:52:07 PM
Very early in my career I'd take these vulnerability reports as a personal challenge and spend my day/evening proving it wasn't actually exploitable in our environment. And I was often totally correct; it wasn't.

But... I spent a bunch of hours on that. For each one.
These days we just fix every reported vulnerable library, turns out that is far less work. And at some point we'd upgrade anyway so might as well.
Only if the upgrade causes problems (incompatibilities, regressions) do we look at it, analyze exploitability, and make judgment calls. Over the last several years we've only had to do that for about 0.12% of the vulnerabilities we've handled.
by jjav
4/18/2026 at 6:02:00 AM
That’s basically my experience as well. Just upgrading is much easier and cheaper.

Of course, with the latest supply-chain failures we don’t update right away or automatically.
If it is an RCE in a component that is exposed, then of course we do it ASAP. But those are super rare.
by ozim
4/17/2026 at 5:40:37 PM
My favorite: a Linux kernel PCMCIA bug. On EC2 VMs.
by lokar
4/17/2026 at 9:15:33 PM
In a similar vein: raising alarms on a CVE in Apache2 that only affects Windows when the server is Linux.
Or CVEs related to Bluetooth in cloud instances.
by Sohcahtoa82
4/17/2026 at 10:08:13 PM
Or raising an alarm on a CVE in the Linux mlx5 driver on an embedded device that doesn't have a PCIe interface.
by minetest2048
4/18/2026 at 8:42:59 AM
ReDoS at CVSS 8+ ... in the configuration-file parsing of a bundler.
by xyzzy123
4/17/2026 at 10:58:50 PM
“If you use that installed Python version to start a web server and use it to parse PDFs, you may encounter a potential memory leak.”

Yeah, so: 1) not running a web service, 2) not parsing PDFs in said non-existent service, 3) congrats, you are leaking memory on my dev laptop.
by nikanj
4/18/2026 at 3:07:01 PM
I refused to refer to the whole vulnerability reporting/tracking effort as "security", always correcting people that it was compliance, not security.
by lokar
4/18/2026 at 8:26:31 AM
I'll top that: wireless-regdb out of date. Against an EC2-specific kernel.
by bostik
4/18/2026 at 8:40:07 AM
Kernel headers out of date -> kernel vulnerability... in a container.
by xyzzy123
4/18/2026 at 8:49:30 AM
Okay. You win.
by bostik
4/17/2026 at 5:34:27 PM
> It is true but the reverse is also true.

Yup. Almost every single time, NVD came up with ridiculously inflated numbers without any rhyme or reason. Every time I saw their evaluation, it lowered my impression of them.
by rdtsc
4/18/2026 at 6:52:41 AM
The problem is not just NVD issuing inflated scores. That's their workaday MO; they are required to assume the worst possible combination of factors.

The real problem is that CVSS scoring is utterly divorced from reality. Even the 4.x standard is merely trying - and failing - to paper over the fundamental problems with its (much needed!) EPSS[ß] weighting. A JavaScript library that does something internal to the software, and does not even have any way of doing auth, is automatically graded "can be exploited without authentication". Congrats, your baseline CVE for some utterly mundane data transformation is now an unauthenticated attack vector. The same applies to exposure scoping: when everything is used over the internet[ĸ], all attack vectors are remote and occur over the network: the highest possible baseline.
This combination means that a large fraction of CVEs in user-facing software start from CVSS score of 8.0 ("HIGH") and many mildly amusing bugs get assigned 9.0 ("CRITICAL") as the default.
Result? You end up with nonsense such as CVE-2024-24790[0] being given a 9.8 score, because the underlying assumption is that every piece of software using the 'netip' library is doing IsLocal* checks for AUTHENTICATION (and/or admin access) purposes. Taken to its illogical extreme, we should mark every single if-condition as a call site for "critical security vulnerabilities".
CVSS scoring has long been a well of problems. In the last few years it has become outright toxic.
ß: "Exploitability" score.
ĸ: Web browsers are the modern universal application OS.
by bostik
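The "worst case by default" arithmetic the comment describes can be sketched from the published CVSS 3.1 base-score formula. This is a minimal, scope-unchanged-only sketch using the weights from the FIRST specification; the vector shown (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H) is the standard network/no-auth/high-impact combination that yields 9.8, the score NVD assigned to CVE-2024-24790:

```python
# CVSS 3.1 metric weights (scope unchanged), per the FIRST specification.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},   # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                        # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},             # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                        # User Interaction
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},             # C/I/A impact
}

def roundup(x: float) -> float:
    """CVSS 3.1 'Roundup': round up to one decimal place."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    """Base score for a scope-unchanged CVSS 3.1 vector."""
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# The defaults the comment complains about: network vector, no privileges,
# no user interaction, high impact across the board.
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # -> 9.8
```

Note how the exploitability term is maximized (8.22 × 0.85 × 0.77 × 0.85 × 0.85 ≈ 3.89) the moment a bug is classed as remote and unauthenticated, before anyone has looked at how the code is actually used.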
4/17/2026 at 5:59:21 PM
Every month, when there is a new Chrome release, a handful of CVSS 9.x vulnerabilities get fixed.

I'm always curious about the companies that require vendors to report all instances where patches for CVSS 9.x vulnerabilities are not applied to all endpoints within 24 hours. Are they just absolutely flooded with reports, or does nobody on the vendor side actually follow these rules to the letter?
by semi-extrinsic
4/18/2026 at 6:09:17 AM
The classic: a three-month approval process to update software, while at the same time using SaaS that updates daily and breaks every other week.
by L-four
4/17/2026 at 8:35:48 PM
> I'm always curious about the companies that require vendors to report all instances where patches to CVSS 9.x vulnerabilities are not applied to all endpoints within 24 hours.

That sounds like a nigh-impossible requirement, as you've written it.
I suspect the actual requirement is much more limited in scope.
by michaelt
4/18/2026 at 2:47:57 AM
No. It’s extremely common for security standards to be completely out of step with what’s actually viable in an organisation, and for aspects of them to be ignored, unspoken.
by UqWBcuFx6NV4r
4/17/2026 at 6:12:45 PM
The rating is nonsense anyway; which one actually applies to the code you run varies wildly. A 9.x vulnerability might not matter if the function only gets trusted data, while a 3.x one can screw you if it is in a bad spot.
by PunchyHamster
4/17/2026 at 6:34:53 PM
Pretty sure if I had to bet on incentives or expertise, I'd bet on incentives every time.
by moomin
4/17/2026 at 6:03:24 PM
Also, sometimes CVEs aren't really significant security issues. See: curl.
by LocalH