NIST SP 800-53 (rev. 5, of course)
What Uncle Sam says about vulnerability management: Part III.
In my ongoing mini-series reviewing federal government standards with respect to vulnerability management, I now turn my eyes to NIST Special Publication (SP) 800-53, specifically revision number 5. Dating from late 2020, it is a broad overview of “Security and Privacy Controls for Information Systems and Organizations.”
Clocking in at 492 pages, it is quite the beast of a document. So what I’ll do here is dive specifically into the provisions regarding vulnerability management (my specialty).
Unfortunately, with some exceptions, there isn’t really a ton of detailed or actionable information in this tome. As it forms the bedrock of many public and private sector security programs, this can lead to wildly varying outcomes between organizations that are all implementing controls identified in NIST SP 800-53.
On a related note, some organizations imply they are “compliant” with the standard, but after gathering feedback from a variety of experts, I don’t believe this to be possible. Because the NIST document is a laundry list of controls rather than a plan for implementing them or a prescriptive standard (despite assertions to the contrary), I don’t think there is any reasonable way to claim compliance.
The only possible exception would be if you implement every single control for one of the given “baselines” described in NIST SP 800-53B. Predictably labeled as low-, moderate-, and high-impact, these baselines provide a set of suggested controls to put in place based on the sensitivity of a given use case. For the vast majority of organizations, however, such rote adherence to a given baseline would be a massive waste of resources. Furthermore, I am not aware of a single government organization or contractor that does such a thing. For example, the FedRAMP standard, which is a prescriptive framework, does not perfectly mirror NIST SP 800-53 and layers additional controls on top of it.
Thus, treat any claims of “compliance” with skepticism.
With that said, I’ll dive in, starting with the general guidance the document provides.
Early on in the document, the authors make an important point:
“[r]ealistic assessments of risk require a thorough understanding of the susceptibility to threats based on the specific vulnerabilities in information systems and organizations and the likelihood and potential adverse impacts of successful exploitations of such vulnerabilities by those threats” (p. 4).
This is the bedrock philosophy of Deploy Securely, and I wholly agree with this passage.
The problem is, there is very little concrete information about how to make such judgments in the rest of the document. NIST SP 800-37, “Risk Management Framework for Information Systems and Organizations,” provides more color on this topic and is generally used in conjunction with SP 800-53 to develop a plan. With that said, I would generally expect such a lengthy document to offer more detail on exactly what you should do, rather than generalities.
That critique aside, here are the key sections you should look at for guidance on how to manage security vulnerabilities in your network:
Secure development (sections SA-11 and SA-15)
There are some good recommendations here, especially related to threat modeling, and understanding impact, threat levels, and acceptable risk. The suggestion that organizations design quality metrics is good, but unfortunately NIST really steps in it by suggesting “defining the severity thresholds of vulnerabilities in accordance with organizational risk tolerance, such as requiring no known vulnerabilities in the delivered system with a Common Vulnerability Scoring System (CVSS) severity of medium or high” (pp. 281-282). These types of security gates can be a huge stumbling block for organizations because they create infinite loops from which there is no escape (see this article for details). Furthermore, the CVSS is not a good way to evaluate vulnerabilities (take a look here if you want to know why).
Additionally, as part of a software development process, NIST recommends prioritizing vulnerabilities by “severity.” It’s not even clear to me exactly what this is a reference to, considering that determining the potential damage is impossible without understanding the underlying data. In any case, you should actually prioritize based on overall risk, of which probable impact is but one aspect.
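To make the contrast concrete, here is a minimal sketch, using entirely hypothetical vulnerability data, of how a CVSS-style severity gate differs from ranking by overall risk (likelihood of exploitation times probable impact). The findings, likelihoods, and dollar figures are all made up for illustration:

```python
# Minimal sketch contrasting a CVSS-severity gate with risk-based
# prioritization. All vulnerability data below is hypothetical.

def cvss_gate(vulns, threshold=4.0):
    """NIST-style gate: block release if any finding meets the
    severity threshold (4.0 = "medium"), regardless of exploitability."""
    return [v for v in vulns if v["cvss"] >= threshold]

def risk_score(vuln):
    """Risk = probability of exploitation x probable impact."""
    return vuln["exploit_likelihood"] * vuln["impact_dollars"]

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_likelihood": 0.001,
     "impact_dollars": 10_000},      # critical on paper, unreachable code
    {"id": "CVE-B", "cvss": 5.3, "exploit_likelihood": 0.30,
     "impact_dollars": 2_000_000},   # medium on paper, exposed and valuable
]

blocked = cvss_gate(vulns)
ranked = sorted(vulns, key=risk_score, reverse=True)
print([v["id"] for v in blocked])   # ['CVE-A', 'CVE-B']
print([v["id"] for v in ranked])    # ['CVE-B', 'CVE-A']
```

The gate blocks the release on both findings, even though CVE-A may sit in unreachable code, while the risk ranking surfaces the finding that actually matters first.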
Vulnerability identification (sections CA-8 and RA-5)
The areas focused on vulnerability identification are a mixed bag. Something that caught my eye was “[o]rganizations can determine the sufficiency of vulnerability scanning coverage with regard to its risk tolerance and other factors” (p. 244). While on its surface this seems like an obviously correct thing, it reveals a potential confusion between vulnerability identification and vulnerability management. I strongly recommend separating an organization’s scanning policy from its remediation policy because when the two are combined, people start making odd decisions.
One example is that some will advocate against vulnerability scanning at key junctures (e.g. immediately before a software release) so that they will not need to triage and potentially fix issues during an already hectic time. In a high-functioning organization, this would be illogical, as it is not the detection of vulnerabilities that creates risk but rather their existence. Due to the ubiquity of perverse incentives, however, such as the use of the aforementioned security gates (also recommended by NIST), such behavior can be rational in a bounded sense.
If you have a sound vulnerability management policy that focuses on risk surface reduction rather than such arbitrary metrics, though, there will be no incentive to attempt to “game the game” in such a manner. Furthermore, by logically separating the vulnerability scanning policy from the vulnerability remediation policy, organizations can help to prevent this conflation.
Turning back to the NIST recommendation, although I would agree that lower-value systems might need less frequent vulnerability scanning, it is very important to remember that the lack of scanning in no way reduces the actual number of vulnerabilities.
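One simple way to enforce that separation is structural: keep the scanning policy and the remediation policy as distinct artifacts, so that scan timing can never be traded against fix deadlines. A hypothetical sketch (the cadences and SLA values are my own illustrative assumptions, not numbers from NIST):

```python
# Illustrative sketch: the scanning policy (when and what to scan) lives
# apart from the remediation policy (how fast to fix). All values below
# are hypothetical examples.

SCANNING_POLICY = {
    "ci_pipeline": "every build",   # scanning never pauses for a release
    "production_hosts": "daily",
    "low_value_dev": "weekly",      # lower-value systems can scan less often
}

REMEDIATION_POLICY = {
    # risk tier -> maximum days to remediate or formally accept the risk
    "critical": 7,
    "high": 30,
    "medium": 90,
    "low": 180,
}

def remediation_deadline_days(risk_tier: str) -> int:
    """The fix deadline depends only on risk, never on when the scan ran."""
    return REMEDIATION_POLICY[risk_tier]

print(remediation_deadline_days("high"))  # 30
```

Because the deadline is a function of risk alone, skipping a pre-release scan buys nothing: the clock starts whenever the vulnerability is found, and the vulnerability exists either way.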
On a brighter note, there are some good callouts regarding penetration testing. Section CA-8 points out that this technique goes above and beyond automated vulnerability scanning to actually determine exploitability of findings. Also useful is a suggestion to conduct penetration testing when migrating technology stacks (especially from older to newer ones), as these transition points are where vulnerabilities tend to be most frequent and severe.
RA-5 has some high-level guidance about conducting static, dynamic, and software composition analysis, as well as checking for misconfigurations. Unfortunately, as with much of the rest of the document, it raises many more questions than it answers. For example, NIST recommends that organizations measure “vulnerability impact” without providing recommendations for how to do so. It also advises “[r]emediat[ing] legitimate vulnerabilities” without providing timelines for doing so or guidance on what makes a vulnerability “legitimate” (p. 242). It also recommends using tools that evaluate and track Common Vulnerabilities and Exposures (CVE) without warning that the vast majority of such findings will be non-exploitable, and again suggests employing the CVSS.
Vulnerability remediation (section SI-2)
This section is generally pretty vague, with little guidance for how to operationalize a vulnerability remediation program aside from high-level recommendations about patching. While the NIST document does talk about developing organizationally-defined time periods to address flaws based on the threat environment, the value of the system, and the severity of the underlying vulnerability, there is little in the way of actionable information here. Probably most useful is the recommendation to consider the impact of testing fixes for vulnerabilities when deciding whether to deploy them in the first place (pp. 333-334). Even for a highly exploitable issue, determining when to patch can be a complex decision that requires considering many different factors.
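As a sketch of how those organizationally-defined time periods might actually be operationalized, here is one hypothetical way to combine severity, system value, and the threat environment into a remediation window. The adjustments and windows are my own assumptions, not anything SI-2 prescribes:

```python
# Hypothetical remediation-window calculator: the deadline tightens with
# system value and active exploitation. All numbers are illustrative.

BASE_WINDOW_DAYS = {"high": 7, "medium": 30, "low": 90}  # by severity

def remediation_window(severity, system_value, actively_exploited):
    """Days allowed to remediate, given the three SI-2 factors."""
    days = BASE_WINDOW_DAYS[severity]
    if system_value == "high":
        days = max(1, days // 2)   # halve the window for crown-jewel systems
    if actively_exploited:
        days = min(days, 2)        # exploited in the wild: near-immediate
    return days

print(remediation_window("medium", "high", actively_exploited=True))  # 2
```

Even a crude function like this forces the three factors NIST names into an explicit, auditable policy rather than leaving them as generalities.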
Open Source Software (sections CM-10, CM-7, and SA-22)
I think there is a major gap in NIST SP 800-53 with respect to open source software (OSS). I could only find three mentions of the topic in the entire document, which is bizarre to me considering that almost all commercial applications use open source code (and every government agency uses commercial applications). As I wrote previously regarding the Cyber Safety Review Board’s report about the log4shell vulnerability, it’s pretty clear that the government doesn’t really know what to make of open-source software despite it being found throughout their networks.
These sections do have some reasonable points about inspecting OSS for backdoors, understanding that support may not be forthcoming, and that, as a result, exploitable vulnerabilities may not get fixed in a timely fashion, or ever. With that said, I would have liked to see a deeper dive here into topics such as evaluating developer and project reputation, methods for vetting OSS packages, and ways to evaluate the risk stemming from vulnerabilities identified in them.
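For instance, the kind of vetting step I would have liked NIST to discuss could look something like this hypothetical sketch, which scores a package on a few crude maintenance signals. The signals and thresholds are my own assumptions, not anything from the standard:

```python
# Hypothetical OSS vetting sketch: a crude go/no-go based on maintenance
# signals. Signal choices and thresholds are illustrative assumptions.
from datetime import date

def oss_health(last_release, maintainers, open_critical_vulns, as_of):
    """Flag packages that look abandoned, one-person, or unpatched."""
    stale_days = (as_of - last_release).days
    checks = {
        "recently_released": stale_days < 365,  # released in the past year
        "bus_factor_ok": maintainers >= 2,      # more than one maintainer
        "no_open_criticals": open_critical_vulns == 0,
    }
    return all(checks.values()), checks

# A package with a lone maintainer fails despite recent releases.
ok, detail = oss_health(date(2022, 1, 15), maintainers=1,
                        open_critical_vulns=0, as_of=date(2022, 9, 1))
print(ok, detail["bus_factor_ok"])  # False False
```

Real vetting would pull these signals from package registries and repository metadata; the point is simply that "support may not be forthcoming" can be checked before adoption rather than discovered during an incident.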
Organizational design (CM-3)
No government document would be complete without the recommendation for an advisory board or similarly august panel, and NIST SP 800-53 is no exception. It recommends creating “Configuration Control Boards [CCB] or Change Advisory Boards that review and approve proposed changes.” (p. 98-99). Although this is generally accepted technology best practice, frankly, I don’t think it makes much sense in 2022. Considering that big organizations can push thousands of changes a day to production, a streamlined process for doing so is vital. Furthermore, in a crisis situation involving an actively exploited vulnerability, waiting until the next CCB or mustering it in an ad-hoc fashion can itself be a security risk. I recommend developing a clear decision framework based on PRIDE for handling these (and all) types of decisions.
Odds and ends
There are some tangentially related (to vulnerability management, at least) sections worth taking a look at if you have some extra bandwidth:
There is some useful guidance (Sections AC-17, AC-18, MP-7, CM-7) on hardening software systems and hardware devices to eliminate vulnerabilities from unneeded capabilities and components, thereby avoiding the risk they pose entirely.
Sections AU-6, AU-9, and CA-2 suggest some technical and administrative controls to ensure the integrity of your vulnerability scanning and management program.
Sections IR-6, CA-2, and CA-5 recommend conducting a post-incident review of any associated vulnerabilities to determine not only their potential involvement in the incident but also any additional risks the vulnerabilities pose.
Section CA-7 has some interesting points about making sure that a control targeting one vulnerability doesn’t unintentionally open another, “e.g., encrypting internal network traffic can impede monitoring” (p. 92).
My goal in this post was to pull out any useful nuggets as well as highlight key critiques related to vulnerability management I have of NIST SP 800-53, a foundational document in the information security world. It’s hard to not encounter it (or compliance frameworks based on it) if you spend any amount of time in the space, so I would recommend getting familiar with it. You should also understand its key limitations and know that you will need to dig more deeply to find actionable guidance for setting up an effective security program.