The existing state of vulnerability management is more or less broken. Whether it’s the failures of the Common Vulnerability Scoring System (CVSS) or the inability of organizations to actually prioritize remediation in logical ways, the state of the art leaves much to be desired.
On that note, Mark Curphey recently wrote an interesting article on the topic, and I thought there were some great insights in it worth highlighting and expanding upon even further. Additionally, there are some discrete (albeit minor) areas of disagreement between us, so I thought it made sense to dive into those as well.
I will quote key sections from his text to tee up my commentary.
The title / thesis
CVE / NVD doesn’t work for open source and supply chain security.
I don’t see why the Common Vulnerabilities and Exposures (CVE) system and National Vulnerability Database (NVD) necessarily can’t work for open source and supply chain security, although I agree there are big limitations to applying them in the modern technological landscape.
Mark goes into depth on these shortcomings in his article, but I would say that if they were (mostly) resolved, the thesis of his piece would no longer hold.
There are multiple initiatives underway to address these problems, including:
The Software Bill of Materials (SBOM) Forum / Open Web Application Security Project (OWASP) recommendation to support the use of package URLs (purls, illustrated below) in the NVD; and
The plan to introduce them into the CVE 5.x standard.
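For context, a purl encodes a component’s ecosystem, namespace, name, and version in a single string, which makes matching vulnerability data against an SBOM far less ambiguous than the CPE identifiers the NVD relies on today. Here is a minimal sketch of the format; the tiny parser is my own illustration (not part of either proposal), and the CPE string is approximate:

```python
# Package URLs (purls) identify a component unambiguously across ecosystems.
# Format: pkg:<type>/<namespace>/<name>@<version>
# The parser below is illustrative only; the exact fields the NVD / CVE 5.x
# work will adopt are still being settled.

purl = "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
cpe = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"  # roughly how the NVD names it today

def parse_purl(purl: str) -> dict:
    """Very small purl parser for the common pkg:type/namespace/name@version shape."""
    assert purl.startswith("pkg:")
    body, _, version = purl[4:].partition("@")
    parts = body.split("/")
    return {
        "type": parts[0],
        "namespace": "/".join(parts[1:-1]) or None,
        "name": parts[-1],
        "version": version or None,
    }

print(parse_purl(purl))
# {'type': 'maven', 'namespace': 'org.apache.logging.log4j', 'name': 'log4j-core', 'version': '2.14.1'}
```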
Furthermore, Mark writes about “an arms race to create the best private vulnerability databases.” While I agree this is a real phenomenon, my experience is that these generally copy the format of the NVD but simply have more entries in them (e.g. Snyk).
My bottom line: unlike CVSS - which I believe is beyond repair and should be discarded - the NVD and CVE systems can be salvaged.
Garbage in, garbage out
CVE / NVD data is often incorrect and not technically verified
This is an important point. Many (potentially even most) CVEs are what I would call “science projects.” Mark provides one example in his article (which I discuss below), but I would suggest merely searching the CVE list for the keyword “side channel.” Doing so reveals hundreds of entries. Side channel attacks are generally quite difficult to exploit and often require physical access to the target device.
For a modern software business, these types of things should be pretty low on the priority list. Nonetheless, they will often pop up during vulnerability scans, requiring triage, analysis, and potentially even remediation.
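If you want to see the scale of the noise yourself, the NVD exposes a public REST API for exactly this kind of keyword query. A quick sketch, assuming the 2.0 endpoint and response fields as I understand them (check the current NVD documentation before relying on this):

```python
# Rough sketch: count how many CVEs match a keyword in the NVD.
# Endpoint and response fields follow my reading of the NVD 2.0 REST API;
# treat them as assumptions and verify against the current docs.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def count_keyword_matches(keyword: str) -> int:
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": keyword, "resultsPerPage": 1},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["totalResults"]

print(count_keyword_matches("side channel"))  # hundreds of entries, most of them "science projects"
```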
Additionally, the lack of information regarding the vulnerable methods in open source components is a huge problem. Because the vast majority of CVEs are not exploitable in any given configuration, finding out which ones are is incredibly important.
Without knowing which part of a given library is actually exposed to a hacker, you are left flying (nearly) blind. There are commercial solutions attempting to address this “reachability” problem through data flow analysis (interaction of first- and third-party code), but we are in the early days here.
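The core idea is simple to state even if it is hard to do well: build a call graph from your entry points and ask whether the advisory’s vulnerable method is on it. A toy sketch with hypothetical function names (real tools do this with full static data-flow analysis across first- and third-party code):

```python
# Toy illustration of "reachability": a dependency can carry a CVE whose
# vulnerable method your code never calls. The graph and names here are
# hypothetical; real tools derive them from static analysis.

# first-party code -> functions it calls (including third-party ones)
CALL_GRAPH = {
    "app.handle_upload": ["thirdparty.parse_archive"],
    "thirdparty.parse_archive": ["thirdparty.read_header"],
    "app.render_page": ["thirdparty.format_html"],
}

# Suppose the advisory says the flaw lives in thirdparty.decompress_lzma,
# which nothing in this application actually reaches.
VULNERABLE_METHOD = "thirdparty.decompress_lzma"

def reachable_from(entry: str, graph: dict) -> set:
    """Walk the call graph from an entry point and collect everything reachable."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(graph.get(fn, []))
    return seen

exposed = any(
    VULNERABLE_METHOD in reachable_from(entry, CALL_GRAPH)
    for entry in ("app.handle_upload", "app.render_page")
)
print("vulnerable method reachable:", exposed)  # False -> deprioritize this CVE
```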
CVE / NVD is a badge of honor and are increasingly fluff
In addition to being humorous, the example Mark gives of CVE-2022-38392 itself highlights the ridiculous nature of many CVEs. In this case, if someone is close enough to a device to blast music at it and cause a DoS attack, why wouldn’t he just smash it with a hammer? No doubt some poor security analyst or engineer saw CVE-2022-38392 flagged in a scan, had to triage it, and then had to explain why it, although a “medium” severity issue according to CVSS, wasn’t really a cybersecurity issue at all.
CVE / NVD can't possibly deal with the rate of vuln ingestion
This is also a major problem, because there is a significant delay between when an issue is reported and when it is fully published. Again, since most CVEs are not broadly exploitable, ensuring that you prioritize the ones that are for rapid notification and dissemination is super important. It’s not clear how the NVD does this aside from “finger in the wind” tests such as saying “well, log4shell looks pretty bad to us, so we better hurry up and publish CVE-2021-44228.”
Hiding the ball
The CNA scheme has become a way for vendors to hide issues…The usual reason given is that they have long release cycles and need to coordinate disclosure to their customers. While true for some, it’s just bullshit for most.
I won’t dispute that some vendors do hide issues reported by security researchers using this tactic. And Mark’s point regarding hackers looking through publicly available information to find the issue and exploit it before customers are even aware is troubling in its implications.
Unfortunately, vulnerability disclosure remains - even in 2022 - an incredibly elaborate dance for which there are no clear industry standards. Vendors generally operate in a state of fear, confusion, and apathy when it comes to addressing security issues in their products. I have sought to build a framework to address just this problem, but very few organizations are ready to operate at that level of sophistication; most remain highly reactive.
You have to look at the incentive model for developers to submit CVEs. The only incentive is the developer doing the right thing, and that is not a strong one. In fact for many developers it's a disincentive. Your boss finds out you were writing code and found out that it had a vulnerability. You mean you write shitty code? Your code caused us public embarrassment?
This is an incredibly important point. And it highlights a major perverse incentive in the security world; product and development teams are often hesitant to do anything to find vulnerabilities in their code, much less report them. Unfortunately this anti-pattern is deeply embedded throughout the landscape, from informal norms to security release criteria to contracts to even government compliance standards.
We used to classify vulnerabilities found using Commit Watcher with a simple taxonomy, one such category we called half-days. Half-days were things that were exposed but generally not known about, things in commit logs, copy and paste vulnerabilities and embedded vulnerabilities from transitive dependency graphs.
I am frankly amazed that this project is abandoned and, although there are likely commercial versions of this in use, I haven’t heard of any. If I were a bug bounty hunter (or criminal hacker), this is something I would start using immediately (or fixing if it isn’t working)!
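For anyone curious what this kind of monitoring looks like in practice, here is a bare-bones sketch in the spirit of Commit Watcher. It is my own illustration rather than the project’s actual code, and the keyword list is arbitrary: clone a dependency’s repository and grep recent commit messages for security-smelling language that never made it into a CVE.

```python
# Bare-bones illustration of watching commit logs for "half-day" issues:
# fixes that are public in a repo but not (yet) in any vulnerability database.
# My own sketch, not Commit Watcher's implementation.
import subprocess

SUSPICIOUS = ["overflow", "use-after-free", "injection", "xss", "cve", "security fix", "sanitize"]

def suspicious_commits(repo_path: str, since: str = "30 days ago") -> list:
    """Return recent commits whose messages hint at a quietly fixed security issue."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=format:%H %s"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for line in out.splitlines():
        sha, _, subject = line.partition(" ")
        if any(word in subject.lower() for word in SUSPICIOUS):
            hits.append((sha[:12], subject))
    return hits

# Point this at a local clone of any dependency you care about.
for sha, subject in suspicious_commits("./some-dependency"):
    print(sha, subject)
```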
Trusting without being able to verify
A lot of the world doesn’t trust the US government
Also a fair point here, and in fact, they probably shouldn’t (at least not entirely)!
Mark generously says “I have zero reason to ever believe any impropriety,” but this statement doesn’t take into account clear statements by the U.S. government to the contrary (depending on what you mean by “impropriety”).
For example, the “Vulnerabilities Equities Policy and Process for the United States Government,” dated November 15, 2017, declares:
The Vulnerabilities Equities Process (VEP) balances whether to disseminate vulnerability information to the vendor/supplier in the expectation that it will be patched, or to temporarily restrict the knowledge of the vulnerability to the [United States Government], and potentially other partners, so that it can be used for national security and law enforcement purposes, such as intelligence collection, military operations, and/or counterintelligence.
Here the government openly states that it might withhold vulnerability notifications if it benefits U.S. national security to do so. And frankly, in some cases I hope they do!
As an American who served in both the military and intelligence community, I’ll say that there are certainly conditions under which the greater good requires hiding vulnerabilities to later use in offensive operations against U.S. adversaries.
As someone who is also skeptical of state competence, and even intentions at times, though, this leaves me in an uncomfortable position.
Unfortunately, there isn’t really a way around this conundrum that I can see. I think the best way forward is to just make the NVD the best game in town so that people use it by default even knowing there might be some issues that are intentionally not reported.
And in any case, if the government is clear on what criteria it uses to make withholding decisions (through the VEP), then at least people will have some idea of the risks they take in trusting information in the NVD. And these problems are almost certainly going to be worse with the Chinese and Russian versions.
Conclusion
It looks like Mark has another post coming out, and I look forward to seeing it. In the meantime, what does one do with the information in his original article and above?
Evaluate risk holistically and quantitatively, using the likelihood of occurrence of an event and its predicted impact when making prioritization decisions. Don’t robotically accept the outputs from tools and databases at face value.
Specifically for CVEs, use tools like the Exploit Prediction Scoring System (EPSS) to get a general idea of exploitability if you are unable to do detailed analysis. Despite its limitations, it’s the best game in town at its price point (free). A rough sketch of this kind of prioritization follows below.
Have a plan for when something goes wrong. Limit the blast radius of an attack through hardening, compensating controls, and incident response plans.
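To make the likelihood-times-impact recommendation concrete, here is a minimal sketch that pulls exploitation probabilities from FIRST’s public EPSS API and weights them by a locally assigned business-impact estimate. The endpoint and response fields reflect my understanding of that API, and the impact figures are purely illustrative:

```python
# Minimal sketch: prioritize CVEs by (likelihood of exploitation) x (business impact).
# Likelihood comes from FIRST's public EPSS API (endpoint/fields as I understand them;
# verify against the current docs). Impact figures are illustrative stand-ins for
# whatever loss estimate your own risk process produces.
import requests

EPSS_API = "https://api.first.org/data/v1/epss"

def epss_probability(cve_id: str) -> float:
    data = requests.get(EPSS_API, params={"cve": cve_id}, timeout=30).json()["data"]
    return float(data[0]["epss"]) if data else 0.0

# Hypothetical findings from a scan: CVE -> estimated impact if exploited (in dollars).
findings = {
    "CVE-2021-44228": 5_000_000,  # log4shell on an internet-facing service
    "CVE-2022-38392": 10_000,     # the acoustic hard-drive DoS discussed above
}

ranked = sorted(
    ((cve, epss_probability(cve) * impact) for cve, impact in findings.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for cve, expected_loss in ranked:
    print(f"{cve}: expected loss ~ ${expected_loss:,.0f}")
```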