How to communicate about CVE exploitability without having to fix all "highs and criticals."
A five-step guide.
Managing known vulnerabilities is a fact of life for any organization that uses software. Addressing common vulnerabilities and exposures (CVEs) is a major subset of this challenge, and something that companies can get overwhelmed doing. Furthermore, in the vast majority of circumstances, you will need to determine the exploitability of CVEs in third-party code, i.e. code your organization did not write.
As you might suspect, this is often difficult.
Adding to this problem, many scanning tools merely report the presence or absence of a given third-party component and then list all relevant CVEs logged in the National Vulnerability Database (NVD), sometimes along with proprietary findings from other sources. To the untrained eye, these results can be alarming, as they often imply that there are hundreds or thousands of exploitable vulnerabilities in your network.
As I have written before, though, this is almost never the case. Only a minority of CVEs are exploitable in a given system at any given time.
Exacerbating the problem, most scanning tools provide (only) the Common Vulnerability Scoring System (CVSS) rating for each CVE as reported in the NVD. Many security and compliance practitioners accept this score at face value, which as I have mentioned, is poor practice.
Thus, the primary task upon receiving such a report is sorting the wheat from the chaff. Only once that is complete can you turn to mitigating the risk from CVEs that are exploitable under any realistic set of circumstances. Much - if not most - of this exercise is communication rather than engineering work. Relevant stakeholders can include prospects, customers, auditors (internal and external), or business leaders. In the sections below, I will provide a step-by-step guide for how to speak effectively about this topic.
Before beginning, my first recommendation would be to refer to automated scan results as “findings” rather than “vulnerabilities.” Describing them as the latter implies something that is generally not accurate (i.e. most CVEs are not true vulnerabilities, at least as far as you are concerned). Additionally, it tends to anchor your counterparty’s perspective inappropriately and disadvantageously, leading this person to assume that all CVEs represent risk when for the most part they do not.
1. Check the EPSS score
I have written at length on the Exploit Prediction Scoring System (EPSS), but suffice to say, it will give you a quantitative and objective (if heuristic-based) probability that any given CVE will be exploited. The key features of the EPSS are the 30-day likelihood of exploitation and the percentile represented by this likelihood. The Forum of Incident Response and Security Teams (FIRST) has put together some great content regarding how to communicate this information, so I would recommend checking it out.
Barring the ability to use more effective tools, I would start with this one and rank findings by their likelihood of malicious exploitation first. I have even built a handy calculator that will help you with this problem. All other things being equal, it probably makes sense to start at the top and go from there. You may - since the EPSS is generated by a neutral third-party organization, FIRST - be able to convince the relevant stakeholder to exclude from consideration anything below a certain threshold.
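The triage described above can be sketched in a few lines. This is illustrative only: the scores are hard-coded, and the `rank_by_epss` function and the 1% cutoff are my own hypothetical choices, not part of any standard. In practice you could pull live scores from FIRST's public EPSS API.

```python
# Sketch: rank scanner findings by EPSS 30-day exploitation probability and
# split them at a threshold agreed with the stakeholder. Scores are
# hard-coded for illustration; FIRST publishes live values via a public API.

def rank_by_epss(findings, threshold=0.01):
    """Sort findings (cve -> probability) descending and split at threshold."""
    ranked = sorted(findings.items(), key=lambda kv: kv[1], reverse=True)
    triage = [(cve, p) for cve, p in ranked if p >= threshold]
    defer = [(cve, p) for cve, p in ranked if p < threshold]
    return triage, defer

findings = {
    "CVE-2021-44228": 0.97,    # log4shell: widespread active exploitation
    "CVE-2018-16868": 0.0095,  # physical-core side channel: very unlikely
    "CVE-2017-8283": 0.002,
}

triage, defer = rank_by_epss(findings, threshold=0.01)
print(triage)  # highest-probability findings first
```

The point of the split is the conversation it enables: everything in `defer` becomes a single negotiated line item ("below the agreed EPSS threshold") rather than dozens of individual debates.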
2. Make sure the component is actually present and loaded to memory
Software composition analysis (SCA) tools are not perfect, and I have on many occasions been confronted with a report from one that does not make any sense. For example, if you are presented with a CVE dated 2008 and it's 2022, then something is probably wrong.
Explaining to the interested stakeholder that you don’t use a certain version of Java or that a certain component has never been present in your project should be relatively easy. If you can prove that the relevant piece of software doesn’t exist in your application, then proving that there is no risk from a given CVE is a logical subsequent step.
For bonus points - and added credibility, if the counterparty is not inclined to trust you - report the error to the vendor of the tool. If you can get them to acknowledge that the reading is a false positive (i.e. the component is not actually present), fix this error, and make the finding go away, your job will likely be done. Especially at larger organizations, security teams are strongly incentivized to get a “clean” scan report and move on, despite the fact that such a report might actually suggest more insidious security problems.
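A first-pass presence check can often be automated before anyone writes a dispute email. The sketch below is an assumption-laden illustration: it checks a pip-style `name==version` manifest, and the `component_declared` helper and the example package names are mine, not from any particular tool.

```python
# Sketch: confirm whether a flagged component is actually declared in your
# dependency manifest before disputing the finding. A pip-style
# "name==version" format is assumed purely for illustration; a real check
# should look at lockfiles and transitive dependencies too.

def component_declared(manifest_text, name, version=None):
    """True if the named component (optionally at a specific version)
    appears in the manifest."""
    for line in manifest_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pkg, _, ver = line.partition("==")
        if pkg.lower() == name.lower():
            return version is None or ver == version
    return False

manifest = """\
requests==2.28.1
# flask was removed in 2021
urllib3==1.26.12
"""

print(component_declared(manifest, "flask"))               # False: absent
print(component_declared(manifest, "urllib3", "1.26.12"))  # True
```

Output like this, attached to the dispute, gives the vendor something concrete to reproduce.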
Code not used
Even if it is there, the majority of open source code is not loaded to memory or used at runtime (62% to 85% depending on whom you ask). While it is theoretically possible that an attacker could make use of this “dead” code when moving laterally through a network by re-writing (and restarting) the relevant application, it is highly unlikely that one would do so due to the difficulty and likelihood of being detected.
There are commercial tools that can identify instances where the code is not loaded to memory or you can have an engineer check which classes are loaded and let you know. This is subtly different than step 4, which requires a technical expert to review every vulnerability. If you can identify whole swaths of code that are not running, you can reasonably identify the vulnerabilities in them as being not exploitable.
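As a rough analog of the "is it actually loaded?" check an engineer would run, here is a minimal Python sketch. It uses `sys.modules`, which reflects what the running interpreter has imported; the `is_loaded` helper name is mine, and JVM applications would instead check loaded classes (e.g. via the JVM's class-loading diagnostics).

```python
# Sketch: a dependency can sit in your environment without ever being
# imported at runtime. sys.modules records what this process has actually
# loaded, which is the spirit of the "loaded to memory" check.
import sys

def is_loaded(module_name):
    """True if the module (or one of its submodules) is loaded here."""
    return any(m == module_name or m.startswith(module_name + ".")
               for m in sys.modules)

import json  # simulate the application importing one of its dependencies

print(is_loaded("json"))                    # True: imported above
print(is_loaded("some_unused_dependency"))  # False: never imported
```

A vulnerability in a package that never shows up in such a check is a strong candidate for the "not exploitable, code not running" bucket described above.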
3. Review the vendor / open source comments for evidence of non-exploitability
If the SCA tool properly identified the component, then the next step is to determine if there is any readily available mitigating information. Now, this isn’t going to be as convincing to an external stakeholder as step 2, because it requires more trust. With that said, it does allow for you to appeal to an external authority. Here are some examples of such information:
CVE-2018-16868. This issue, which has a 5.6 CVSS rating, may trigger a stakeholder requirement to fix all “medium” severity issues. Unfortunately, attempting to fix this is probably not worth the effort, even if it is possible. Upon reviewing the NVD entry, one would see that the CVE is exploitable via a “Bleichenbacher type side-channel based padding oracle attack” requiring a malicious party “who is able to run process on the same physical core as the victim process.” If, like me, you don’t know what the former phrase means, and are interested, you can read this paper. If, also like me, you don’t have a ton of time, you can just focus on the latter phrase. This vulnerability requires an attacker to run an attack from the same physical piece of hardware as its target. Considering that most Software-as-a-Service (SaaS) providers couldn’t tell you which physical core their own application is running on, I would say it’s functionally impossible for an attacker to figure this out either. Only the most dedicated, skillful, and well-resourced attacker could realistically contemplate exploiting this under any circumstances, and there is basically zero risk for any cloud-based application. Its EPSS rating is 0.009500.
CVE-2021-44832. This issue affects Apache Log4j, which ships in Apache Solr, a common open source component used in many commercial applications. With some quick searching, however, one would be able to identify that the Apache Solr project doesn’t believe it to be exploitable in Solr and has documented its logic publicly. Its EPSS rating is 0.686848, which is very high considering the aforementioned information. With that said, it’s important to note that the EPSS applies to all conceivable configurations for a given component. Thus, what might have a very high risk of exploitation in one application might well present zero risk in another.
Even with moderate proficiency in Googling, you should be able to address ~25% of findings with this type of information.
4. Investigate your first-party code
This is where things can get time consuming, and thus, expensive. If you aren’t able to convince yourself or the relevant stakeholder that the CVE is not exploitable using steps 1-3, then you (or your engineering team) will need to conduct a manual technical investigation. This involves reviewing the interaction of first- and third-party code to determine if in fact there is any risk of a malicious attacker taking advantage of the vulnerability described in the CVE. Examining the exploit code for the CVE - if available - will also shed more light on its true risk. I’ll provide some examples below:
CVE-2017-8283. Assume that a scan reveals this in your containerized application that uses Ubuntu as its base operating system. This has a CVSS score of 9.8 (out of 10.0) per the NVD and might seem like an appropriate top priority for remediation. Digging more deeply would reveal, however, that it is impossible to exploit in Ubuntu when the software is in its default configuration and that the publisher deems the vulnerability to be of “negligible” priority. Thus, combining a link to this information with a statement that you haven’t changed the default configuration may be sufficient to convince the stakeholder to drop their concerns.
CVE-2021-44228. The infamous log4shell vulnerability. According to the NVD, “an attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default.” Based on this description, this is indeed a serious vulnerability. If you used the default settings prior to version 2.15.0, an attacker could execute arbitrary code in your application remotely without any authentication requirement. If you were using such a version, however, and had disabled message lookup substitution, then it doesn’t appear you would be exposed. Figuring this out would require a technical review.
CVE-2021-43801. For certain versions of Mercurius, “a GraphQL adapter for Fastify,” an attacker could conduct a “denial of service attack by sending a malformed JSON to `/graphql` unless they are using a custom error handler.” If there is no way for your application to receive a JSON of any kind, or if you are deploying an appropriate custom error handler, then you can reasonably say this issue cannot be exploitable. Again, looking at the code to confirm will be necessary.
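The log4shell triage above can be written down as explicit logic, which is often a useful artifact to attach to your determination. This is a sketch of the reasoning from the NVD description only: the `log4shell_exposed` function is my own illustration, and a real review must inspect the application's actual logging configuration (and note that later log4j advisories further refined which versions are safe).

```python
# Sketch: encoding the CVE-2021-44228 triage logic from the NVD description.
# Per that description, exposure requires (a) a log4j 2.x version below
# 2.15.0 and (b) message lookup substitution left enabled, which was the
# default before 2.15.0.

def log4shell_exposed(version, lookups_disabled=False):
    parts = tuple(int(x) for x in version.split("."))
    affected_version = (2, 0) <= parts < (2, 15, 0)
    return affected_version and not lookups_disabled

print(log4shell_exposed("2.14.1"))                         # True
print(log4shell_exposed("2.14.1", lookups_disabled=True))  # False
print(log4shell_exposed("2.15.0"))                         # False
```

Writing the logic out this way also makes the residual assumptions visible: each input to the function is something the stakeholder can ask you to evidence.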
5. Communicate using VEX
Once you have made your determination, now it’s time to communicate it. Oftentimes scan reports are exchanged by emailing spreadsheets - or worse, .pdf files - between stakeholders. This is a highly sub-optimal method of communication due to the version control and data parsing issues inherent to such media. If at all possible, I would strongly recommend tracking this information in some sort of database or collaboration tool and sharing access to that.
With the medium identified, the question remains what to say. My recommendation would be to use the Vulnerability Exploitability eXchange (VEX) format, specifically the CycloneDX version, as the standard for any communications.
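For concreteness, here is a minimal sketch of what such a determination can look like in the CycloneDX flavor of VEX. The field names follow the CycloneDX 1.4 vulnerability schema; the CVE, justification, and detail text are illustrative values for the Apache example discussed earlier, not a complete or authoritative document.

```python
# Sketch: a minimal CycloneDX-style VEX record marking a CVE "not affected."
# The analysis.state and analysis.justification values are drawn from the
# CycloneDX 1.4 enumerations; everything else here is illustrative.
import json

vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "vulnerabilities": [
        {
            "id": "CVE-2021-44832",
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": ("Per the Apache project's published analysis, "
                           "this issue is not exploitable in our "
                           "configuration."),
            },
        }
    ],
}

print(json.dumps(vex, indent=2))
```

The machine-readable `state` and `justification` fields are what make this superior to a spreadsheet: downstream tools can suppress the finding automatically while preserving your rationale.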
Communicating about CVEs and their exploitability is a challenging process that often requires substantial time and technical expertise. In some cases, it may be easier to simply replace, update, or remove the offending component than it would be to investigate a CVE’s exploitability. If this is the case, then by all means do so. When it is not possible or feasible, however, which is often the case, the aforementioned steps will help you communicate confidently and competently with any stakeholder.