Vulnerability management policies: top 10 recommendations
A brief detour from - and primer for - discussing risk measurement frameworks.
In many organizations, vulnerability management policies are a) non-existent, b) unknown to the product or engineering teams, c) created without the input of those teams, or some combination of the three. Even where product development teams do know about the policy, they often have an adversarial relationship with it, viewing it as an “unfunded mandate” from the application/information security group.
I was about to dive into a discussion of the Common Vulnerability Scoring System, and had the post ready to go. Before launching it, though, and because of these problems, it made sense to do a deeper dive into vulnerability management policies first. Specifically, I wanted to explore why they exist and look at best practices for implementing them.
All of the aforementioned situations are unfortunate, and in my mind, unnecessary. Much ink has been spilled on the topic of ensuring that business or mission leaders establish the organization’s cyber risk appetite and tolerance, and then implement this guidance through policy development.
Vulnerability management is one such area for implementing this guidance.
Despite the zeitgeist, however, my experience suggests that these policies are often developed in a vacuum, without input from key business leaders. Having never bought into the requirements that such policies levy upon their organizations, those leaders feel free to ignore them. Many times, lacking the signoff of key executives in the company’s chain of command, it is not even clear that business units are actually bound by such policies. In the worst case, senior company leaders (CEO, COO, etc.) might believe a policy is being followed scrupulously merely because the information security team informed them of its existence, while individual business lines remain ignorant of, or recalcitrant toward, it.
To address these and other common failure modes, I would propose viewing a vulnerability management policy as simply part of a given product’s or service’s specification - i.e., another nonfunctional requirement alongside performance, scalability, and similar measures. Different businesses have wildly different requirements for all of these things, and it is quite conceivable that their security needs differ as well.
A health insurance company and a startup whose app lets you rate cute dog pictures maintain data of greatly different sensitivity and face widely varying regulatory requirements. (Of late, the explosion in ransomware attacks has slightly changed this dynamic: although the data from a dog-rating app is generally not valuable to hackers, it is certainly valuable to the company itself, which is thus likely to pay to regain access to it.)
Thus, just as with any feature on the roadmap, a variety of factors should drive vulnerability management requirements: competitive forces, compliance needs, and customer demands, among others. Expressing risk in quantitative terms will help sharpen the picture and allow for appropriate tradeoffs between security and other requirements.
Assuming you agree, I would offer the following “top 10” recommendations for developing your organization’s vulnerability management policy.
1. Ensure it is action-focused. At the most basic level, a vulnerability management policy is an action plan for managing the business risk presented by software vulnerabilities, so clear and directive language is vital to its success. Use of the passive voice, e.g. “all critical vulnerabilities shall be resolved within 45 days,” is an obvious sign that your policy is not action-focused. Every requirement should designate exactly one accountable person, e.g. “the General Manager shall ensure resolution of vulnerabilities exceeding $100,000 in annualized loss expectancy within 15 calendar days.” Similarly, language that implies optionality, e.g. “recommendations should take the form of a quantitative assessment,” is suboptimal because it leaves room for interpretation. (See the sketch below for how the annualized loss expectancy threshold in that example would be computed.)
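To make the threshold in that example concrete, here is a minimal sketch of the standard annualized loss expectancy calculation (ALE = single loss expectancy × annual rate of occurrence). All dollar figures and rates below are hypothetical, and real estimates would of course come from your risk measurement process:

```python
# Hypothetical figures illustrating the standard ALE = SLE x ARO calculation
# used to decide whether a vulnerability crosses the policy's threshold.

ALE_THRESHOLD_USD = 100_000  # the policy threshold from the example above

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO (both inputs are estimates, not measurements)."""
    return single_loss_expectancy * annual_rate_of_occurrence

# e.g., an exposure estimated to cost $250,000 per incident, expected to be
# exploited about once every two years (ARO = 0.5)
ale = annualized_loss_expectancy(250_000, 0.5)  # -> 125,000.0

if ale > ALE_THRESHOLD_USD:
    print(f"ALE ${ale:,.0f} exceeds threshold: 15-calendar-day deadline applies")
```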
2. Establish unambiguous timelines. Be very precise about which events trigger requirements for which subsequent actions, and how long the accountable parties have to comply. For example, “business days” is a confusing measure of time for globally distributed organizations that might have different working days and holidays in different regions. Malicious cyber actors also do not respect the traditional 5-day work week and, in fact, are more likely to strike an organization when most of its employees are not actively working. Separately, every action required by the policy must include a deadline, for example, “the Chief Information Security Officer shall provide a written assessment of the risk posed by the identified vulnerability within 72 hours of a risk acceptance request from the General Manager.” Otherwise, one party can delay action indefinitely without running afoul of the policy’s requirements.
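One virtue of calendar-time deadlines is that they are trivial to compute and audit, with no locale-specific holiday calendars to reconcile. A minimal sketch (the SLA names and trigger timestamp are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Calendar-time deadlines are unambiguous across regions and time zones;
# everything is computed in UTC from the triggering event.
RISK_ASSESSMENT_SLA = timedelta(hours=72)  # from the example requirement
CRITICAL_FIX_SLA = timedelta(days=15)      # 15 calendar days, not business days

def deadline(trigger_event_utc: datetime, sla: timedelta) -> datetime:
    """A deadline is a fixed offset from the triggering event - no holiday
    calendars or regional work weeks to reconcile."""
    return trigger_event_utc + sla

request_received = datetime(2024, 3, 1, 17, 30, tzinfo=timezone.utc)
print(deadline(request_received, RISK_ASSESSMENT_SLA))
# -> 2024-03-04 17:30:00+00:00
```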
3. Ensure that you have a separate vulnerability detection/scanning policy. Vulnerability management policies can obviously only apply to known vulnerabilities, and the rate at which your organization finds vulnerabilities depends on factors very different from the rate at which it resolves them. If one product line appears to have far more vulnerabilities than another, that doesn’t necessarily mean the first is less security-conscious; it is quite possible that the second is simply not scanning for vulnerabilities, or is doing so only in a haphazard manner. Thus, having a separate set of requirements for how the organization detects vulnerabilities in the first place is critical. Otherwise, development teams might consciously or unconsciously make vulnerability management compliance easier by scanning less frequently or less thoroughly, which is completely counterproductive to your security goals.
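One way to keep the two policies honest is to measure them with separate metrics: detection coverage (how quickly vulnerabilities are found) and remediation speed (how quickly known ones are fixed). A sketch, assuming a hypothetical vulnerability record with the fields shown:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical vulnerability record; the point is that detection and
# remediation are tracked and measured independently.
@dataclass
class VulnRecord:
    introduced: datetime            # best estimate, e.g. dependency metadata
    detected: datetime
    resolved: datetime | None = None

def mean_days(deltas) -> float:
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 86_400

def mttd(records) -> float:
    """Mean time to detect - governed by the scanning policy."""
    return mean_days([r.detected - r.introduced for r in records])

def mttr(records) -> float:
    """Mean time to remediate - governed by the management policy."""
    closed = [r for r in records if r.resolved is not None]
    return mean_days([r.resolved - r.detected for r in closed])
```

A team that scans rarely will look good on MTTR while looking terrible on MTTD, which is exactly the gaming behavior a combined metric would hide.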
4. Make the policy short and accessible. Long or confusingly worded policies that include substantial amounts of boilerplate text are less likely to be read. Enumerating many examples - such as the various types of vulnerabilities - is likewise inappropriate for these types of policies. Finally, ensuring that the entire organization can access the most recent version of the policy - via a company intranet page, say, rather than an emailed PDF - is also important for improving compliance with it.
5. Facilitate automatic enforcement mechanisms. Turning words on a page into code in a piece of software is not (yet) an automatic process, unfortunately. Thus, making vulnerability management just another part of the daily life of engineering teams will help ensure compliance. Using developer-centric tools like Jira workflows to provide reminders about deadlines for resolving a given vulnerability will greatly aid your efforts. Furthermore, tracking vulnerabilities in the engineering team’s system of record will allow for detailed analysis of policy compliance, resource allocation, and other performance measures.
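As one possible shape for this, here is a sketch of filing a vulnerability as a tracked ticket with a policy-derived due date via Jira Cloud’s REST API. The domain, project key, and credentials are placeholders, and a real integration would handle custom fields, retries, and secrets management properly:

```python
import requests
from datetime import date, timedelta

# Placeholders - adapt to your Jira instance and service account.
JIRA_URL = "https://your-domain.atlassian.net/rest/api/2/issue"
AUTH = ("svc-account@example.com", "api-token")  # hypothetical credentials

def file_vuln_ticket(summary: str, sla_days: int) -> str:
    """Create a ticket whose due date encodes the policy deadline, so the
    engineering team's normal workflow surfaces the reminder."""
    payload = {
        "fields": {
            "project": {"key": "SEC"},   # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": summary,
            "duedate": str(date.today() + timedelta(days=sla_days)),
        }
    }
    resp = requests.post(JIRA_URL, json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-123"
```

Because the deadline lives in the system of record rather than in the policy document alone, compliance reporting becomes a query instead of a manual audit.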
6. Ensure there is a risk acceptance procedure built into the policy. The underlying purpose of an effective vulnerability management policy is to ensure that the organization makes optimal tradeoffs between cybersecurity and other risks. In a perfectly structured policy, there would never be a need for incremental risk acceptance above and beyond what it already specifies. We live in the real world, though, and even the most accurate risk-scoring system can sometimes produce odd outcomes. Thus, a good policy will allow for a “manual override,” so to speak, for when there are higher organizational priorities than resolving a given vulnerability at a given point in time.
7. Assign risk acceptance responsibilities to a single business leader (rather than someone in security or engineering). I have written at length on this topic, but it is important to re-emphasize that business leaders must own cyber risks. They have the incentives to optimize their decision-making in a way that maximizes value for the entire organization, rather than for any specific part of it. Thus, vulnerability management policies should identify a single individual - preferably one with P&L responsibility - who must approve (or deny) cyber risk acceptance requests. Ensure that these decisions are captured in a written, auditable, and easily indexable manner.
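For illustration, here is a sketch of the minimum fields such an auditable record might capture. The field names and values are hypothetical, and in practice this would live in a ticketing system or GRC tool rather than ad hoc code:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative minimum fields for an auditable risk acceptance record.
@dataclass(frozen=True)
class RiskAcceptance:
    vulnerability_id: str     # e.g. the tracking ticket key
    approver: str             # the single accountable business leader
    ale_estimate_usd: float   # the security team's risk assessment
    rationale: str
    expires: datetime         # acceptances should be time-boxed, not permanent
    recorded_at: datetime

record = RiskAcceptance(
    vulnerability_id="SEC-123",
    approver="gm-payments@example.com",
    ale_estimate_usd=125_000.0,
    rationale="Fix blocked on vendor patch; compensating control in place.",
    expires=datetime(2024, 6, 1, tzinfo=timezone.utc),
    recorded_at=datetime.now(timezone.utc),
)
print(json.dumps(asdict(record), default=str, indent=2))  # indexable audit entry
```

Note the expiration date: a time-boxed acceptance forces the decision to be revisited rather than quietly becoming permanent.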
8. Make security leaders responsible for identifying risk levels. Business leaders should be responsible for risk management decisions, but not necessarily risk measurement ones. It is appropriate for advisors - such as members of the security team - to assess such cyber risks. Optimally, these advisors will sit outside the direct reporting chain of the business leader making risk acceptance decisions, to prevent any undue pressure on them to soften their risk assessments (so that the business leader never faces the burden of accepting incremental risk in the first place).
9. Ensure that business leaders communicate the policy to their organizations. When managers with holistic responsibility for the success or failure of a product, business line, or entire company announce the establishment or modification of a vulnerability management policy, it has a much different effect than when only the security team does so. When a business leader communicates about the importance of cybersecurity, it is psychologically much harder for those in the organization to dismiss the accompanying policy as a “sideshow” next to other pressing needs. Additionally, and although I would hope this goes without saying, explicitly communicating about the policy and training all relevant stakeholders on it is a vital step. I have seen information security teams merely post a policy on a SharePoint page and then assume that the rest of the organization will immediately begin complying with it, which rarely happens.
10. Treat accidental non-compliance as a learning opportunity. Those who actually implement vulnerability management policies face a wide variety of competing requirements, and it is rare for implementers who are aware of a vulnerability management policy to intentionally violate it out of malice (although obvious examples of such behavior should generally result in termination). Non-compliance more often results from pressure to meet other deadlines, ignorance, or confusion regarding priorities - and from the fact that “actions at the sharp end resolve all ambiguity” with respect to risk tolerance. Although it is appropriate to spell out the consequences of non-compliance - up to and including termination - there should optimally be a graduated series of sanctions. For example, I would suggest that the first such instance trigger a presumptively blameless post-mortem, which can reveal the aforementioned conflicting directives and identify escape valves or policy refinements that help avoid such situations in the future.
With the above said, I will now return to evaluating various risk measurement systems, as I said I would in my previous post.