Vulnerability management policies can often have perverse incentives written into them.
One of the most common I have seen is a requirement preventing the release of code with vulnerabilities above a certain level of severity, such as in this policy template: “No software should be deployed to production with unresolved HIGH or MEDIUM findings.”
These requirements are well-intentioned, but misguided.
The ostensible goal is to protect against the release of slipshod code through a security “gate,” but unfortunately this often backfires. Especially when combined with a time-based policy, it can cause some serious problems. I don’t recommend basing a policy on CVSS to begin with, but consider this scenario, where your policy requires:
Fixing all CVSS 7+ issues in fielded products within 30 days
Not releasing any product with any known issues of CVSS 7+
Now, assume you identify an issue of CVSS 7.0 - already in your fielded product - on day 1. You work to resolve it and are all set to release on day 29. Then, on day 29, you identify another (already-existing) issue of CVSS 8.0 in your product, which will take at least 5 days to fix, package into a new release, and deploy.
Now you are in a quandary.
Following the letter of the law of your policy (which you absolutely should), you need to release tomorrow due to the first issue…but you also can’t release tomorrow because of the second.
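To make the deadlock concrete, here is a minimal Python sketch of the two rules, using made-up dates that mirror the scenario above (the parameters are the assumptions stated earlier, not a recommendation); no release date can satisfy both:

```python
from datetime import date, timedelta

FIX_WINDOW_DAYS = 30       # rule 1: fix fielded CVSS 7+ issues within 30 days
RELEASE_BLOCK_CVSS = 7.0   # rule 2: never release with known CVSS 7+ issues

issues = [
    {"cvss": 7.0, "found": date(2024, 1, 1),  "days_to_fix": 28},  # found day 1
    {"cvss": 8.0, "found": date(2024, 1, 29), "days_to_fix": 5},   # found day 29
]
blocking = [i for i in issues if i["cvss"] >= RELEASE_BLOCK_CVSS]

# Rule 1 forces a release no later than the earliest fix deadline...
must_release_by = min(i["found"] + timedelta(days=FIX_WINDOW_DAYS) for i in blocking)
# ...while rule 2 forbids releasing before every blocking issue is fixed.
can_release_on = max(i["found"] + timedelta(days=i["days_to_fix"]) for i in blocking)

print(f"Rule 1 says release by {must_release_by}; rule 2 says no sooner than {can_release_on}")
if can_release_on > must_release_by:
    print("Contradiction: no release date satisfies both rules.")
```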
Due to this paradox, what I usually see happening is that organizations just keep trying to pile security fixes into the upcoming release, blowing their timelines for the issues detected furthest in the past. It’s unfortunate but unsurprising that people break such logical logjams by letting recency bias determine which rule to break. Considering that - all other things being equal - these older issues are more likely to have publicly available exploit code targeting them, they represent relatively more risk and should be fixed first.
The key issue here is that releasing “net new” code with one or more vulnerabilities is quite different from issuing a release to fix flaws in code that is already running in production. The former potentially increases the attack surface for a given product while the latter generally reduces it. Many policies do not make this distinction clear, however, and forbid releases containing known vulnerabilities over a certain threshold. This can lead to paradoxes such as the one I described.
To resolve this problem, I would advise completely separating vulnerability management from security release criteria policies. The former should apply only to released code while the latter should only apply to code under development.
Furthermore, a truly risk-based policy of the type I have suggested previously will eliminate this problem. A policy that only specifies the total acceptable level of risk - rather than that of individual vulnerabilities - makes the above-described confusion a non-issue. An engineering team working under such a regime would even have the latitude to introduce new, vulnerable code in the odd situation where it would be necessary to do so to fix another, bigger issue and thus reduce the total risk surface.
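As a rough illustration of what such a total-risk check could look like, here is a minimal Python sketch; the risk_score weighting, field names, and budget value are all assumptions for the example, not a prescribed model:

```python
def risk_score(issue: dict) -> float:
    # Placeholder weighting (an assumption for this sketch): in practice
    # you would factor in exploitability, exposure, business impact, etc.
    return issue["severity"] * issue["exploitability"]

def total_risk(issues: list) -> float:
    return sum(risk_score(i) for i in issues)

def may_release(in_production: list, after_release: list, risk_budget: float) -> bool:
    # A release is acceptable if it fits within the overall risk budget,
    # or if it strictly reduces total risk versus what is already fielded -
    # even if it introduces a new, smaller vulnerability along the way.
    after = total_risk(after_release)
    return after <= risk_budget or after < total_risk(in_production)

# Example: fixing a severe fielded issue while introducing a minor new one.
prod  = [{"severity": 9.0, "exploitability": 0.8}]  # total risk 7.2
after = [{"severity": 4.0, "exploitability": 0.3}]  # total risk 1.2
print(may_release(prod, after, risk_budget=2.0))    # True - total risk drops
```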
Acknowledging that such a policy is probably too difficult for many organizations to implement effectively, I would suggest that your security release criteria policy should focus on preventing the following (without affirmative acceptance of the marginal risk):
Security regressions - meaning any code change to existing functionality, the introduction of which creates a new vulnerability somewhere in the product that does not already exist in production. You should generally have a “zero tolerance” policy for this, because your customers’ security posture will almost always deteriorate if you release with such a flaw, with no compensating benefit. With that said, it may make sense in some situations to release such code if doing so is necessary to quickly fix a much more severe vulnerability. A single business leader is best equipped to decide.
Security vulnerabilities above a certain threshold in new functionality - this is slightly different from a regression, as it refers to the delivery of new but broken (from a security perspective) functionality, whereas regressions refer to breaking changes to existing functionality. Having a “zero tolerance” policy here is a little trickier, as you need to balance the value delivered by the new functionality against the increased total risk surface. I would recommend a relatively low-but-nonzero threshold for the accepted level of risk from these items.
By logically separating vulnerability management and release criteria, you can help to prevent your product and engineering teams from tying themselves in knots dealing with contradictory guidance.
Furthermore, by using two key heuristics - is this a security regression? Is this a security vulnerability in new functionality? - you can quickly determine how best to proceed. The final step is to codify these requirements into their own policy that the organization is trained on and fully understands.
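For illustration, the two heuristics might be codified as something like the following Python sketch; the threshold value, field names, and verdicts are assumptions for the example, not prescriptions:

```python
from dataclasses import dataclass
from enum import Enum

NEW_FEATURE_THRESHOLD = 4.0  # assumed low-but-nonzero severity cutoff

class Verdict(Enum):
    BLOCK = "security regression: hold unless a business leader accepts the risk"
    REVIEW = "new-functionality flaw: weigh feature value against added risk"
    SHIP = "no release-criteria concern"

@dataclass
class Finding:
    severity: float
    is_regression: bool          # new flaw introduced into existing functionality
    in_new_functionality: bool   # flaw shipped as part of brand-new functionality

def release_check(finding: Finding) -> Verdict:
    # Heuristic 1: security regressions get "zero tolerance" by default.
    if finding.is_regression:
        return Verdict.BLOCK
    # Heuristic 2: flaws in new functionality above the threshold need review.
    if finding.in_new_functionality and finding.severity >= NEW_FEATURE_THRESHOLD:
        return Verdict.REVIEW
    return Verdict.SHIP

print(release_check(Finding(severity=6.5, is_regression=True, in_new_functionality=False)))
```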
EDIT: after Tom Alrich posted a blog linking to mine, pointing out that customers also sometimes contractually mandate nonsensical security release criteria for their vendors, I wanted to re-emphasize his point. Make sure you don’t demand that your software provider do something silly that is against both of your best interests. See this post for a detailed exposition of some other considerations related to vulnerability management in contracts.