A review of NIST SP 800-37
What Uncle Sam says about vulnerability management: part II.
In this post I will break down NIST Special Publication (SP) 800-37, which has a title that rolls easily off the tongue: “Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy.” This is the second in a mini-series on federal government vulnerability management practices and recommendations; check out my first piece on the NIST Cybersecurity Framework as well. As with that writeup, I’ll focus this one on how SP 800-37 applies to vulnerability management in real-world situations.
“Why is this important?” you might ask.
Because government agencies are required to use it, and references to it appear throughout private sector information security programs. Furthermore, even when not mentioned explicitly, the document and its predecessors undergird much of modern information security practice, for better or for worse.
Published in 2018, SP 800-37 was a response to calls from multiple government entities and oversight bodies to modernize the federal risk management framework (RMF). A multi-step process for making risk decisions, the RMF can be a useful high-level tool for building risk management programs. Also importantly, the RMF is technology neutral, and is applicable throughout all stages of system development and operation.
As with all frameworks, guidelines, and recommendations, you should take SP 800-37 with a large pinch of salt. I’ll say that it starts out on a fairly strong note, stating that:
In the end, it is not about generating additional paperwork, artifacts, or documentation. Rather, it is about ensuring greater visibility into the implementation of security and privacy controls which will promote more informed, risk-based authorization decisions (p. xv).
Furthermore, it claims to encourage organizations to:
Maximize the use of automated tools to manage security categorization; control selection, assessment, and monitoring; and the authorization process;
Decrease the level of effort and resource expenditures for low-impact systems if those systems cannot adversely affect higher-impact systems through system connections;
Make the transition to ongoing authorization a priority and use continuous monitoring approaches to reduce the cost and increase the efficiency of security and privacy programs (p. vi).
Unfortunately, readers will quickly recognize prescriptions for generating large amounts of documentation that will never be read, as well as completely unrealistic requirements for the number and complexity of involved steps. While SP 800-37 represents a potential ideal state for government information technology operations, I have never seen any government organization perfectly adhere to its requirements.
The seven deadly, er…steps of risk management
The RMF has seven steps:
Prepare to execute the RMF from an organization- and a system-level perspective by establishing a context and priorities for managing security and privacy risk.
Categorize the system and the information processed, stored, and transmitted by the system based on an analysis of the impact of loss.
Select an initial set of controls for the system and tailor the controls as needed to reduce risk to an acceptable level based on an assessment of risk.
Implement the controls and describe how the controls are employed within the system and its environment of operation.
Assess the controls to determine if the controls are implemented correctly, operating as intended, and producing the desired outcomes with respect to satisfying the security and privacy requirements.
Authorize the system or common controls based on a determination that the risk to organizational operations and assets, individuals, other organizations, and the Nation is acceptable.
Monitor the system and the associated controls on an ongoing basis to include assessing control effectiveness, documenting changes to the system and environment of operation, conducting risk assessments and impact analyses, and reporting the security and privacy posture of the system (p. 8-9).
Each step is further broken down into tasks with specific inputs and outputs, highlighting concrete actions organizations should take to implement the RMF. There is a substantial amount of content related to organizational design and documentation that I will largely skip over, as much of it is specific to government organizations. Furthermore, some steps and tasks have similar content, so I will omit analysis of the duplicates.
Task P-2, Risk Management Strategy
This part of the RMF requires establishing a documented tolerance for organizational risk, among other things. In my experience, many organizations never make this explicit, despite having a vague sense that some actions are acceptably risky while others are not. This often leads to confusion as business and technology priorities compete with security objectives for resources. Establishing a clear level of acceptable information security risk is thus an essential first step of any strategy.
Confusingly, what NIST describes as risk tolerance, “the degree of risk or uncertainty that is acceptable to an organization,” is generally described as risk appetite in the private sector. This blog describes the latter as “levels of risk-taking that management deems acceptable,” which mirrors NIST’s definition of the former. I (and others) consider risk tolerance to be the velocity with which an organization must return to the baseline established by its appetite after exceeding it. To avoid confusion, I will use the private sector terminology in this post.
SP 800-37 highlights types of events that should be considered when establishing a risk appetite, but doesn’t lay out a clear formula for how to do so. To fill that gap - at least for vulnerability management - I would recommend:
Describing the risk appetite (and tolerance) in terms of dollars. Unless you are using some other currency, every other method will introduce unnecessary ambiguity and blur understanding of operational priorities.
Not identifying risk tolerance at the level of an individual security flaw. Many organizations do this by requiring that vulnerabilities exceeding a certain severity level be resolved in a given number of days, but this is a sub-optimal approach for reasons I detail in this post.
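To make the first recommendation concrete, here is a minimal Python sketch of what a dollar-denominated risk appetite could look like in practice. All probabilities, impact figures, and the appetite itself are invented for illustration; SP 800-37 prescribes none of this.

```python
# Hypothetical sketch: expressing vulnerability risk appetite in dollars.
# Every figure and record below is an illustrative assumption.

RISK_APPETITE_USD = 500_000  # maximum acceptable expected annual loss

# Each entry: annualized probability of exploitation and dollar impact if exploited.
vulnerabilities = [
    {"id": "VULN-1", "p_exploit": 0.05, "impact_usd": 2_000_000},
    {"id": "VULN-2", "p_exploit": 0.20, "impact_usd": 300_000},
    {"id": "VULN-3", "p_exploit": 0.01, "impact_usd": 10_000_000},
]

def expected_annual_loss(vulns):
    """Sum each flaw's probability-weighted dollar impact."""
    return sum(v["p_exploit"] * v["impact_usd"] for v in vulns)

total = expected_annual_loss(vulnerabilities)
print(f"Total expected annual loss: ${total:,.0f}")
print("Within appetite" if total <= RISK_APPETITE_USD else "Exceeds appetite")
```

Note that the comparison happens against the *aggregate* expected loss, not against any individual flaw, in line with the second recommendation above.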
Task P-4, Organizationally-Tailored Control Baselines and Cybersecurity Framework Profiles
This section recommends codifying a set of different risk tolerances, appetites, and the controls to achieve them for specific component organizations and use cases. By developing tailored baselines, organizations can do what many folks do instinctively and in an ad-hoc manner, which is to vary one’s security posture based on the threat and the sensitivity of data at risk.
While this can make sense in larger organizations where a “one-size-fits-all” vulnerability management posture is not appropriate, it is possible to avoid this work entirely. By communicating in terms of dollars of acceptable risk per year, organizations can simply establish a single number for each business unit or product line, and then let the implementers of the policy establish a narrowly tailored set of controls for the exact circumstances.
Task P-6: Impact Level Prioritization
This is a section of the RMF you should observe closely…and absolutely ignore. While prioritizing information systems by the potential impact of their loss is definitely a good practice, the way SP 800-37 suggests doing it is highly inadvisable. This is because the document recommends using a bewildering array of vague qualitative categories. For example, the section recommends identifying “low-high systems, moderate-high systems, and high-high systems” as well as “low-moderate systems, moderate-moderate systems, and high-moderate systems” (p. 34).
An easy way to show how unworkable such a model is would be to ask which combination would be more valuable: one low-high system or two high-moderate systems?
Furthermore, assume there are two different vulnerabilities affecting each grouping; which should you remediate first?
It isn’t possible to answer this question in any coherent manner because arithmetic is impossible with these categorizations, and so is any sort of quantitative risk management. Thus, I would advise completely ignoring this section.
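A back-of-the-envelope sketch illustrates the difference: once impacts are expressed in dollars (the figures below are entirely hypothetical), the "which grouping first?" question becomes simple arithmetic, whereas no arithmetic over "low-high" and "high-moderate" labels is even defined.

```python
# Illustrative sketch: dollar-denominated risk supports arithmetic
# that qualitative category labels cannot. All figures are invented.

# With categories, "one low-high system vs. two high-moderate systems"
# has no defined answer. With dollars, the comparison is trivial:
system_a_expected_loss = 0.10 * 1_500_000      # one "low-high" system
system_b_expected_loss = 2 * (0.30 * 400_000)  # two "high-moderate" systems

# Remediate whichever grouping carries more expected loss first.
priority = "A" if system_a_expected_loss > system_b_expected_loss else "B"
print(f"A: ${system_a_expected_loss:,.0f}, "
      f"B: ${system_b_expected_loss:,.0f} -> remediate {priority} first")
```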
Task P-10: Asset Identification
This is an important section that says something obvious but is nonetheless rare in my experience: you should have a solid understanding of your information technology assets and the consequences of various impacts to them. Only by developing an exhaustive list of information systems your organization relies upon can you understand the risk posed by vulnerabilities in them.
Task P-14: Risk Assessment
I view this section as the meat of the RMF, but unfortunately it is relatively sparse on detail. It discusses a range of potential risks and notes that some might be directly tangible while others are harder to quantify, but does little in the way of providing a roadmap for measuring them. It does not even provide a high-level formula for measuring risk in any concrete manner. Because of this gap, if you need a framework to measure vulnerability risk, check out the Deploy Securely Risk Assessment Model (DSRAM).
Task C-2: Security Categorization
Once you have determined the potential impacts a malicious actor could have on your organization at the level of an individual system, the RMF recommends giving each one a security categorization. While the government maintains a bewildering array of rubrics for classifying systems, I think such approaches are generally too cumbersome for the private sector.
While outside the scope of this article (and potentially a topic for another one), I would recommend that most enterprises classify information as either authorized for dissemination outside the organization or not, and then identify a single information owner who can grant exceptions to the general rule based on their understanding of the business situation.
Tasks S-1, S-2, and S-3: Control Selection, Tailoring, and Allocation
The first three tasks of the Select phase are the most important in my opinion, and I think SP 800-37 does a generally good job in providing a framework for implementing controls. As with the rest of the document, however, it is sparse on details or even specific examples that would be useful for developing a real-world program.
Essentially, NIST suggests pursuing a baseline-driven or a customized method for implementing controls. The former involves using a pre-ordained set of controls prescribed for the relevant system security categorization, which in the case of the government is SP 800-53 (the topic of a future article). This document has three such baselines, which are (predictably): low-, moderate-, and high-impact.
For the private sector, blindly implementing one of these baselines is likely inappropriate, and even federal certification programs such as FedRAMP do not do this. Thus, tailoring the controls you use for the specific business situation you face is the best approach. By using a quantitative, financially-driven approach with respect to risk posed by individual vulnerabilities (or a set of them), you can quickly identify which controls are economical to implement and which ones aren’t.
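One way to sketch this quantitative, financially-driven tailoring: a control is economical when the expected annual risk reduction it buys exceeds its annual cost. The control names and every number below are hypothetical, chosen only to show the shape of the comparison.

```python
# Hedged sketch of economical control selection. A control earns its
# place when its expected annual risk reduction (in dollars) exceeds
# its annual cost. All names and figures below are hypothetical.

controls = [
    {"name": "WAF",              "annual_cost": 50_000,  "risk_reduction": 120_000},
    {"name": "Manual pentest",   "annual_cost": 200_000, "risk_reduction": 80_000},
    {"name": "Patch automation", "annual_cost": 30_000,  "risk_reduction": 150_000},
]

def economical(controls):
    """Keep only controls whose risk reduction outweighs their cost."""
    return [c["name"] for c in controls if c["risk_reduction"] > c["annual_cost"]]

print(economical(controls))
```

Under these invented figures, the manual pentest drops out: it costs more per year than the risk it removes, which is exactly the kind of conclusion a qualitative baseline cannot surface.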
Task A-5: Remediation Actions
Just counting the National Vulnerability Database (NVD) alone, software vulnerabilities are identified at the rate of several dozen per day. Thus, no effort to manage the risk from them would be complete without a plan to remediate issues identified after a system begins operating. SP 800-37 prescribes a relatively formalized process documenting such remediation action but falls short by not providing any clear framework for prioritizing the underlying flaws.
While the document suggests automating the various processes involved in implementing the RMF, it seems that it would be challenging to do so based on the many suggested consultations and inputs involved in the remediation process. Furthermore, due to the RMF’s recommendation to identify a risk appetite but not a risk tolerance (in the private sector sense of the words), any deviation from the initially established risk baseline would appear to require manual reassessment. Since managing your organization’s total risk surface stemming from vulnerabilities is the key task of any security program, I would advise establishing clear thresholds in quantitative terms to allow subordinate leaders, or even automated tools, to make remediation decisions in a prompt manner.
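The quantitative thresholds suggested above can be sketched as a simple automated check: flag remediation as mandatory once aggregate risk has stayed above the appetite longer than the tolerance window (using "tolerance" in the private sector sense discussed earlier). The dollar figure and 30-day window are invented for illustration.

```python
# Sketch of a quantitative remediation trigger. The appetite, tolerance
# window, and example dates are all assumptions for illustration.

from datetime import date, timedelta

RISK_APPETITE_USD = 500_000          # acceptable expected annual loss
RISK_TOLERANCE = timedelta(days=30)  # how long risk may stay above the appetite

def remediation_required(current_risk_usd, exceeded_since, today):
    """True when risk is over appetite and the tolerance window has run out."""
    if current_risk_usd <= RISK_APPETITE_USD:
        return False
    return (today - exceeded_since) > RISK_TOLERANCE

# Example: risk of $650k has sat above the appetite for 45 days.
print(remediation_required(650_000, date(2023, 1, 1), date(2023, 2, 15)))
```

With thresholds this explicit, the decision needs no committee: a subordinate leader, or a scheduled job, can make it immediately and consistently.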
Task R-3: Risk Response
While the details of the Authorization To Operate (ATO) process are probably not of much interest to those in the private sector, the RMF does a good job in laying out how organizations should handle risk acceptance. Instead of having security professionals be responsible, a paradoxical organizational design which I oppose, the RMF requires identification of a single official - generally responsible for the accomplishment of the organization’s objectives - as the sole authority for accepting risk (p. 72).
While the ATO process itself is potentially overly complex and inflexible for most organizations, having a single mission owner accountable for cyber risk is an important centerpiece of any vulnerability management or broader security program. Furthermore, promising developments such as the concept of Continuous ATO (cATO) appear to be moving the government away from the mindset of “check-the-box,” formalized risk assessments to ones that adjust dynamically based on the threat environment, vulnerabilities identified, and relative mission importance.
All organizations should take note of these developments and implement the ones that fit their needs.
This has been a whirlwind tour through a very meaty document, but I think it’s important to get a high-level overview of the relevant components of the RMF as described in SP 800-37 and related to vulnerability management. Since the very concept of information security programs originated with the federal government, understanding the core concepts can help security practitioners of all experience levels better understand current practices.
With that said, I think the RMF is already showing its age and requires an update to catch up with the times. Paperwork-driven compliance processes can consume enormous amounts of time while doing little to actually protect the confidentiality, integrity, or availability of sensitive data. A continuously-updated risk assessment, managed through automated tools, is the most effective way to stay on top of the range of vulnerabilities with which modern software operators must contend, as well as the threats trying to exploit these flaws.