What is the difference between a software vulnerability and a security misconfiguration?
A clear definition.
What did these security incidents have in common?
Capital One paid $270 million in fines and settlements after a 2019 attack by a disgruntled former employee of its cloud provider.
Amazon accidentally exposed 215 million entries of pseudonymized Prime Video viewing data, discovered in 2022.
Thomson Reuters left 3TB of sensitive data exposed to the internet without authentication, also identified the same year.
In each case, the culprit wasn’t the malicious exploitation of a software vulnerability, but rather a misconfiguration in one or more internet-facing assets.
Although managing and mitigating known software flaws in your tech stack is a key piece of any information security program, making sure your apps and tools are configured correctly in the first place is equally important.
Thus, I think it’s important to draw a distinction between two things:
Software vulnerabilities
Security misconfigurations
There are already a variety of takes out there on this topic. But I wasn’t happy with any of them and decided to put together my own definition that is mutually exclusive and collectively exhaustive (MECE).
What is a vulnerability?
I’ll note that both a software vulnerability and a security misconfiguration represent a “vulnerability” in the risk management sense of the word. Attackers don’t especially care what mistake, error, or oversight allows them to access sensitive data.
If you want a broad term that encompasses both (and many other things), check out the Factor Analysis of Information Risk (FAIR) Institute definition:
Vulnerability is the conditional probability that a threat event will become a loss event, given the type of threat event.
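To make that concrete with a made-up illustration: if 1 in 20 phishing attempts against your organization historically turns into an actual compromise, your vulnerability to that threat event type is 1/20 = 0.05.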
Software vulnerabilities and security misconfigurations are just two of many types of vulnerabilities meeting this definition. But I won’t boil the ocean here and will tackle only these two terms. Thus, for the specific purposes of this article only, I will define a vulnerability as:
Any attribute of computer-readable instructions that allows an attacker to impact the confidentiality, integrity, or availability of data in a way not intended by the data owner.
What is a software vulnerability?
What makes a vulnerability a software vulnerability, by my definition, is that it CANNOT be lawfully resolved without the active participation of the entity writing the computer-readable instructions.
Here are some examples meeting this definition:
In your corporate environment you run an open source library whose license forbids further modification of it. A security researcher posts a blog identifying an issue in the open source library that allows an attacker to violate the documented permissions model with a specially crafted payload.
As part of a penetration test by your organization against a Software-as-a-Service (SaaS) product you use, you identify a backdoor that allows an attacker to assume administrative permissions simply by passing a ?debug=true parameter in a URL (see the sketch after these examples).
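To make the second scenario concrete, here is a minimal, entirely hypothetical sketch in Python (using Flask) of what such a backdoor might look like. None of the names refer to a real product:

```python
# Hypothetical sketch only -- not taken from any real product.
# This handler grants administrative access whenever the client
# supplies ?debug=true, bypassing the permissions model entirely.
from flask import Flask, request

app = Flask(__name__)

def render_admin_view() -> str:
    # Stand-in for a view exposing every tenant's data
    return "admin dashboard: all tenants, all data"

def render_user_view() -> str:
    # Stand-in for the normal, permission-checked view
    return "user dashboard: only your own data"

@app.route("/dashboard")
def dashboard():
    # The flaw: authorization decided by a client-controlled parameter
    if request.args.get("debug") == "true":
        return render_admin_view()
    return render_user_view()
```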
In both of these scenarios, you must rely on another party to fully resolve the software vulnerability. There are plenty of other risk management actions you can take, such as no longer running the code in production (avoidance) or applying compensating controls like a firewall rule (mitigation). But you can’t fix the underlying problem yourself (at least not legally).
What is a security misconfiguration?
A security misconfiguration is a vulnerability that CAN be lawfully resolved without the active participation of the entity writing the computer-readable instructions.
Examples include:
You deploy an Amazon Web Services (AWS) Simple Storage Service (S3) bucket and store customer Social Security numbers in it. But you accidentally expose it to the internet, allowing anyone to access it without authentication (as many people accidentally do!). See the guardrail sketch after these examples.
A product that you download and run in your own cloud environment ships with documentation stating that you are responsible for confirming the authenticity of any third-party code run in conjunction with it. You ignore this documentation and download an extension from freesoftware.io (the link does not currently lead anywhere, but be careful navigating to it in the future!). You run this extension with the product, and it turns out to be malicious, extracting sensitive data from your network.
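For the first example, here is a minimal sketch of a guardrail, assuming boto3 is installed and AWS credentials are configured ("my-sensitive-bucket" is a placeholder name): enabling S3 Block Public Access so that a stray ACL or bucket policy cannot expose the bucket to the internet.

```python
# A minimal sketch, assuming boto3 and configured AWS credentials;
# "my-sensitive-bucket" is a placeholder name. Enabling all four
# Block Public Access settings prevents a stray ACL or bucket
# policy from exposing the bucket to the internet.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="my-sensitive-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
print("Block Public Access enabled.")
```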
In both of these cases, the user of the software could have eliminated the vulnerability by changing some settings in the product they were using. No cooperation is legally required from the entity writing the code in question.
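And for the second example, here is a sketch of the kind of authenticity check the documentation asks for, assuming (hypothetically) that the vendor publishes a SHA-256 checksum for each extension:

```python
# Minimal sketch, assuming a hypothetical vendor that publishes a
# SHA-256 checksum for each extension. Verify the downloaded file
# against the published value before ever running it.
import hashlib
import sys

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: substitute the real file path and published checksum.
PUBLISHED_CHECKSUM = "replace-with-the-vendor-published-sha256"
actual = sha256_of("extension.zip")
if actual != PUBLISHED_CHECKSUM:
    sys.exit(f"Checksum mismatch ({actual}) -- do not run this extension.")
print("Checksum verified.")
```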
Whose responsibility it is to do so, what needs to be documented by the vendor, and who is at fault when something goes wrong are all important questions. These should all be spelled out in a shared responsibility model. But those questions are separate from the ones I pose in this article.
Conclusion
The biggest problem with my model is that the definition is in the eye of the beholder. What looks like a software vulnerability for a SaaS consumer could be a security misconfiguration for the vendor of said product.
I don’t think this invalidates the model, though, because, like data confidentiality, integrity, and availability, the desired state is defined by the owner. Everything in security is relative to begin with. And I think my definition moves the ball forward.
But I’m open to refinements and comments, so please send them my way!
Note: My LinkedIn conversation with David Hillman inspired this post.