Wrapping up the NIST AI RMF
Part 4 of a series on the National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework.
This closes out our series on the NIST AI Risk Management Framework (RMF), which you can check out below:
Part 1: Frame
Part 4: Map, measure, and manage (you are here)
Based on the groundwork laid in the first three posts, there isn’t much left to go through. Much of the remainder of the NIST AI RMF is duplicative or obvious, so I won’t waste your time with it. If you want to review the entire original document or the accompanying implementation playbook, then by all means go ahead.
In this post, I’ll focus on the parts that are neither duplicative nor obvious and suggest actionable ways to implement the framework.
The main things that jumped out in the Map function are subcategories:
1.6, which gives as an example system requirement: “the system shall respect the privacy of its users.”
As a former product manager, I can tell you this is a terrible requirement.
There is no way for an engineer to confirm it is implemented or for a user to understand how their privacy will be respected. If you want some examples of what actionable security requirements look like, check out this post.
3.1 and 3.2, which require documentation of costs and benefits. The RMF discusses “non-monetary costs,” implying this should somehow be a separate category from those having dollar signs. Check out this article for an explanation of why this approach is misguided.
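To make the point about subcategory 1.6 concrete: a vague requirement like “the system shall respect the privacy of its users” only becomes verifiable once it is decomposed into specific, testable behaviors. A minimal sketch, with an entirely hypothetical sub-requirement (“the system shall not write email addresses to its logs”):

```python
import re

# Hypothetical testable requirement: email addresses must never reach the logs.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(log_line: str) -> str:
    """Redact email addresses from a line before it is logged."""
    return EMAIL_PATTERN.sub("[REDACTED]", log_line)

# Unlike the original requirement, an engineer can confirm this one is implemented:
assert redact_pii("login from alice@example.com") == "login from [REDACTED]"
```

The specific control here is illustrative; the point is that each sub-requirement should admit a pass/fail check.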
The highlights of the Measure function are subcategories:
1.3, which suggests “experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates.” Having an objective review of AI systems - by people who didn’t design them - is definitely a good control to have in place.
2.6, which states that AI systems should “fail safely.” What “failing safe” means in the context of AI isn’t entirely clear and requires reflection. When the doors at a school automatically open during a power outage, this is “failing safe,” whereas when a bank vault stays locked in a similar situation, it “fails secure.” If an autonomous vehicle has a power failure, then having the brakes automatically engage isn’t necessarily “failing safe,” because another car might ram into it. So “fail safe” designs for AI systems will require substantial architecting and red-teaming.
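One common building block for failing safely is wrapping model inference so that errors or invalid outputs fall back to a conservative default rather than propagating. This is only a sketch of that pattern - the function names, the allowed actions, and the choice of “deny” as the safe default are all hypothetical and highly domain-dependent, as the vehicle example shows:

```python
# Hypothetical fallback wrapper: if the model raises an exception or returns
# something outside the allowed action set, substitute a conservative default.
ALLOWED_ACTIONS = {"allow", "deny"}
SAFE_DEFAULT = "deny"  # what counts as "safe" must be decided per system

def predict_with_fallback(model_fn, features):
    """Run the model, but return a safe default on failure or invalid output."""
    try:
        decision = model_fn(features)
    except Exception:
        return SAFE_DEFAULT  # model crashed: fail to the conservative action
    # Output validation: an unrecognized decision is treated as a failure too.
    return decision if decision in ALLOWED_ACTIONS else SAFE_DEFAULT
```

Red-teaming then focuses on whether the chosen default is actually safe in every failure scenario, not just the obvious ones.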
The most interesting subcategories in the Manage function are:
1.3, which says that responses to “AI risks deemed high priority, as identified by the MAP function, are developed, planned, and documented.” This is somewhat circular, because the NIST AI RMF provides very little guidance for identifying high-priority risks. Check out the first post in the series for some suggestions about how you might do so. This subcategory also notes that risk “response options can include mitigating, transferring, avoiding, or accepting.” It’s more accurate to say that they must include at least one of these four options, as there are no other methods of risk management.
4.1, which recommends “Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.” As I mentioned in part 2 of this series, organizations tend to give very little thought to how they will handle systems after they go live. This is a good callout from NIST.
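The four risk responses noted under 1.3 can be encoded directly into a risk register, so that every documented risk is forced to carry exactly one of them. A minimal sketch - the field names and example entry are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Response(Enum):
    """The only four methods of risk management."""
    MITIGATE = "mitigate"  # reduce likelihood or impact
    TRANSFER = "transfer"  # e.g. insurance or contractual shift
    AVOID = "avoid"        # don't ship the risky feature
    ACCEPT = "accept"      # documented sign-off on the residual risk

@dataclass
class RiskEntry:
    description: str
    priority: str       # e.g. "high", as identified by the MAP function
    response: Response  # the type system enforces one of the four options
    owner: str

entry = RiskEntry(
    description="Model may leak training data in outputs",  # hypothetical risk
    priority="high",
    response=Response.MITIGATE,
    owner="ml-platform-team",
)
```

Using an enum rather than free text means a register entry with some fifth, invented response simply can’t be recorded.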
In summary, the NIST AI RMF is a high-level document that represents the barest of bones for an artificial intelligence governance program. But considering that a) it has the imprimatur of a government agency and b) there are few other frameworks addressing this topic, I think it is going to be the major player in the space.
Due to rapidly increasing regulatory and public scrutiny of AI tools, I expect adoption of the RMF to accelerate. Cybersecurity, governance, risk, and compliance teams need to start thinking hard about how they will implement it in the real world.