Part 1 of the Deploy Securely Risk Assessment Model (DSRAM) series showed how to evaluate the financial risk posed by common vulnerabilities and exposures (CVEs) in a rapid and quantitative fashion. What was lacking, however, was something comparable for non-CVEs.
This is because I am not aware of any existing framework - similar to the Exploit Prediction Scoring System (EPSS) - for calculating their likelihood of exploitation. So I decided to build one myself, using available data about CVEs to construct a heuristic model for non-CVEs.
Creating such a model turned out to be quite challenging. I pressed ahead nonetheless, pulling data from the CISA known exploited vulnerability catalog, AlienVault’s Open Threat Exchange (OTX), and Verizon’s VERIS Community Database. A sample of the challenges I faced is below:
The Verizon database is good, but it often only provides the year (rather than month or day) of exploitation. Additionally, since 2013 it appears to have stopped including the CVE identifier of vulnerabilities used in successful exploitations.
The CISA catalog is just a binary “has this been exploited?” database with no additional information (such as date of exploitation). Also, as I learned, vulnerabilities require assignment of a CVE ID for inclusion.
Much of the information that is available about CVEs is generally not available (e.g. availability of exploit tools) or not relevant (e.g. number of days since public announcement) for non-CVEs, sharply limiting the number of logical inputs.
After cleaning and preparing what data I could find, I attempted to use linear regression and decision tree classifiers to see if I could build a useful predictive engine. After hours of attempting to come up with a useful machine learning model, however, I couldn’t generate anything worth publishing.
Thus, I decided to do the only thing I could with available time and resources: use a heuristic of a heuristic. As you might recall from my last post, there is no “formula” for the EPSS, although it does have identifiable inputs. So, to estimate exploitability, I essentially worked backward from the EPSS to develop an idea of the mean exploitability of a given category of vulnerability.
Pulling data from both the National Vulnerability Database (NVD) and the EPSS, I evaluated five years’ worth of CVEs for certain characteristics, separated them by these characteristics, and then looked at the mean EPSS score (converted to a 365-day, rather than 30-day, value). Please see this Google Colab notebook for the detailed calculations.
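Since the EPSS publishes 30-day probabilities, the annualization step can be sketched as follows (the function name is mine, and the formula assumes each of the ~12.2 non-overlapping 30-day windows in a year is independent and carries the same probability - see the Colab notebook for the actual calculation):

```python
def epss_30d_to_365d(p30: float) -> float:
    """Convert a 30-day exploitation probability to a 365-day one.

    Assumes the year consists of 365/30 independent windows, each with
    probability p30 of at least one successful exploitation.
    """
    return 1.0 - (1.0 - p30) ** (365.0 / 30.0)
```

Under this assumption, for example, a 30-day EPSS score of 0.01 annualizes to roughly 0.115.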
I fully acknowledge my approach is highly unscientific. With that said, I have not seen any open source tool able to do what I have with any precision whatsoever. You can read about my critiques of the CVSS, DREAD, and SDL bug bar models for details. Since I think that quantitative analysis is so important for cyber risk management, having something is better than nothing, in my mind.
Moving on to the model, I identified three useful inputs for calculating the risk of exploitation of a non-CVE.
Privileges required
This is a binary: true or false. “High” and “low” - used in the CVSS standard - are too subjective, so I collapsed the corresponding NVD values into simply “required” or “not required.” This input thus answers the question: “does the attacker need to have been authorized to do something before exploiting this vulnerability?” That authorization could have been granted to an otherwise bona fide user (i.e. an insider threat), or the attacker could be chaining vulnerabilities together.
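Collapsing the NVD’s three-level scale into this binary is straightforward; as a sketch (the helper name is hypothetical):

```python
def privileges_required(nvd_value: str) -> bool:
    # NVD/CVSS report privilegesRequired as NONE, LOW, or HIGH;
    # for this model, any prior authorization at all counts as "required".
    return nvd_value.strip().upper() != "NONE"
```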
User interaction required
This is also relatively easy to quantify since it is binary. This factor comes down to the question: does exploiting this vulnerability require a bona fide (i.e. non-malicious) user to do something - such as downloading or running a file, or clicking a link - beyond having authorized the malicious actor in the first place?
Example vulnerabilities where the answer is “no” include CVE-2021-30860 (which the NVD oddly and incorrectly lists as requiring user interaction), which can be exploited via “zero click” malware, as well as CVE-2017-0144, which the self-propagating (Not)Petya worm targeted.
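The same binary treatment works here, since CVSS already reports user interaction as a two-value field (the helper name is hypothetical):

```python
def user_interaction_required(nvd_value: str) -> bool:
    # NVD/CVSS report userInteraction as either NONE or REQUIRED.
    return nvd_value.strip().upper() == "REQUIRED"
```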
Attack vector
This is identical to the CVSS standard because these definitions are unambiguous. I have excerpted them below:
Network (N): The vulnerable component is bound to the network stack and the set of possible attackers extends beyond the other options listed below, up to and including the entire Internet. Such a vulnerability is often termed “remotely exploitable” and can be thought of as an attack being exploitable at the protocol level one or more network hops away (e.g., across one or more routers). An example of a network attack is an attacker causing a denial of service (DoS) by sending a specially crafted TCP packet across a wide area network (e.g., CVE‑2004‑0230).
Adjacent (A): The vulnerable component is bound to the network stack, but the attack is limited at the protocol level to a logically adjacent topology. This can mean an attack must be launched from the same shared physical (e.g., Bluetooth or IEEE 802.11) or logical (e.g., local IP subnet) network, or from within a secure or otherwise limited administrative domain (e.g., MPLS, secure VPN to an administrative network zone). One example of an Adjacent attack would be an ARP (IPv4) or neighbor discovery (IPv6) flood leading to a denial of service on the local LAN segment (e.g., CVE‑2013‑6014).
Local (L): The vulnerable component is not bound to the network stack and the attacker’s path is via read/write/execute capabilities. Either:
the attacker exploits the vulnerability by accessing the target system locally (e.g., keyboard, console), or remotely (e.g., SSH); or
the attacker relies on User Interaction by another person to perform actions required to exploit the vulnerability (e.g., using social engineering techniques to trick a legitimate user into opening a malicious document).
Physical (P): The attack requires the attacker to physically touch or manipulate the vulnerable component. Physical interaction may be brief (e.g. evil maid attack) or persistent. An example of such an attack is a cold boot attack in which an attacker gains access to disk encryption keys after physically accessing the target system. Other examples include peripheral attacks via FireWire/USB Direct Memory Access (DMA).
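When pulling these values in bulk, the attack vector can be read straight out of a standard CVSS v3 vector string; a minimal sketch (the names are mine):

```python
from enum import Enum

class AttackVector(Enum):
    NETWORK = "N"
    ADJACENT = "A"
    LOCAL = "L"
    PHYSICAL = "P"

def parse_attack_vector(cvss_vector: str) -> AttackVector:
    # Extract the AV component from a vector string such as
    # "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H".
    for component in cvss_vector.split("/"):
        if component.startswith("AV:"):
            return AttackVector(component.split(":", 1)[1])
    raise ValueError(f"no AV component in {cvss_vector!r}")
```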
Analysis
After crunching the data as per above, I arrived at the following probabilities of exploitation in the next 365 days for 16 different categories of vulnerability:
Since Substack only lets you screenshot charts, here is a Google Sheets version.
At first glance, the probabilities involved might seem extremely low.
I will note, however, that these calculations will generally only be applicable to vulnerabilities that are not publicly disclosed (public non-CVEs exist, but they are relatively rare), which will generally have a lower likelihood of exploitation. Furthermore, as an example, log4shell (CVE-2021-44228) was introduced into the open-source log4j library on July 18, 2013, and the first known malicious exploitation was on December 1, 2021. Thus, one of potentially the worst software vulnerabilities in history took 3,059 days from introduction to exploitation. Using very rough, back-of-the-envelope math, and the EPSS score (0.9510) for April 15, 2022 (136 days after first exploitation), the mean daily probability of exploitation for log4shell - since its introduction - was approximately 0.0409.
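The day counts above are easy to sanity-check (note that Python’s date subtraction counts exclusively, so it returns one fewer day than the inclusive figures quoted above):

```python
from datetime import date

introduced = date(2013, 7, 18)       # log4shell committed to log4j
first_exploited = date(2021, 12, 1)  # first known malicious exploitation
epss_snapshot = date(2022, 4, 15)    # date of the 0.9510 EPSS score

print((first_exploited - introduced).days)     # 3058 exclusively; 3,059 counted inclusively
print((epss_snapshot - first_exploited).days)  # 135 exclusively; 136 counted inclusively
```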
Thus, it seems reasonable to say that a “garden variety” vulnerability requiring no prior privileges or user interaction, and that is reachable via a network attack vector, has a likelihood of exploitation of approximately 0.029069 over the next year.
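Putting the three inputs together, the model reduces to a 16-entry lookup (2 × 2 × 4 categories). Only the “garden variety” value is quoted in the text, so the sketch below fills in just that entry; the remaining 15 would come from the linked sheet (the names and structure are mine):

```python
# Keyed by (privileges_required, user_interaction_required, attack_vector).
ANNUAL_EXPLOIT_PROBABILITY = {
    (False, False, "N"): 0.029069,  # the "garden variety" category above
    # ...the other 15 categories would be filled in from the Google Sheet
}

def annual_exploit_probability(priv_req: bool, ui_req: bool, av: str) -> float:
    """Look up the modeled 365-day probability of exploitation."""
    return ANNUAL_EXPLOIT_PROBABILITY[(priv_req, ui_req, av)]
```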
Conclusion
That’s all for now! I have updated the DSRAM project on GitHub, as well as the Google Colab calculator, to reflect the above work. Hopefully you find it useful. Please keep the feedback coming; I look forward to continuing to adjust this model to make it more accurate and useful.