Throughout my time in the world of software security, I have frequently observed (and taken part in) an awkward dance in response to the question “whom should we tell about this vulnerability we just found in our product, and when?”
This dance often takes the form of an “email avalanche” where more and more people are added to a thread and feel the need to contribute something to the conversation without actually owning or being accountable for the outcome.
The reason for this phenomenon is that very few organizations have internally (let alone publicly) codified an answer to this question in any generalizable way. Thus, for every new vulnerability identified, there is a new scramble.
Even Apple, the company with the world’s largest market capitalization as of this writing, maintains a relatively basic publicly-available policy that is not even stated explicitly: the company either publishes known vulnerabilities in its products as CVEs or it does not. Apple’s security page doesn’t explain what factors drive its decision-making, what timelines it follows, or why. While I’m sure they have more detailed internal policies, this is pretty bare for the world’s biggest enterprise.
Probably the most transparent company in this regard - perhaps surprisingly - is Palantir, whose application security program I previously reviewed. But it is the exception to the rule. And even they don’t explain under what conditions they notify stakeholders of specific vulnerabilities. This widespread opacity is generally due to:
Not knowing what the industry standards are (hint: there are basically none, evidenced by the fact that the world’s most valuable company hasn’t clearly communicated anything about them).
Concerns about setting precedents for what you will and won’t disclose in the future.
A general reluctance to talk publicly about cybersecurity at all.
Some practitioners cite vague “legal” concerns without specifying exactly what they mean, but I have yet to see any convincing evidence to support such a position.
Conversely, recent Federal Trade Commission (FTC) enforcement actions against Drizly and Chegg both cited the organizations’ lack of written security policies as reasons to punish them.
I have written before about this issue from the viewpoint of the software consumer, focusing mainly on remediation timelines, and made some recommendations about what is wise to include in your contracts with suppliers.
In this post I’d like to look at the other side of the equation and suggest how software providers can address this thorny problem. Instead of looking at how and when to fix vulnerabilities, I’ll focus mainly on how and when to communicate about them - and your planned actions - with internal and external stakeholders.
Overall, I believe a proactive approach - creating a standardized framework that is generally applicable - is superior to a reactive one. Reasons for this stance include the fact that being reactive requires:
Individually adjudicating stakeholder requirements or demands (including contractual ones) about notification criteria and timelines on a case-by-case basis. Even worse, assuming you accede to any of them, you then need to ensure you comply with that disparate web of requirements for the length of your obligations.
Explaining why you did - or did not - disclose certain vulnerabilities during high-stress and time-sensitive situations. Even the most logical and conscientious teams will find it hard to make consistent decisions without any guidance prepared ahead of time.
Due to these challenges, I advise establishing a clear regime for vulnerability notification, some (but not all) parts of which you can describe publicly and, if required by your stakeholders, commit to contractually.
General considerations
I have designed this framework to be flexible enough to apply to any type of software provider, whether it sells software commercially or develops it for use inside a larger organization.
These providers develop and maintain “products,” but these aren’t necessarily goods or services that are sold. Products can also be applications or technologies used only internally within an organization.
Finally, these providers have stakeholders who rely on the products to maintain the confidentiality, integrity, and availability of their data (or data they are responsible for protecting). Not every stakeholder is a paying customer; stakeholders can also include partners as well as industry bodies, regulators, or other governmental authorities. Determining the correct makeup of this group will of course depend on your organization’s specific situation.
Overall, I would recommend that your notification policy:
Leave some room for judgement calls on the part of your technical teams. While I often point out how quantitative analysis is superior to qualitative review, you don’t want to put your organization in a bind. To square this circle, I would recommend that any notification regime explicitly state quantitative criteria that you have high confidence (e.g. 95% of the time) you can meet consistently. By “underpromising and overdelivering,” you will ensure you never need to explain to your stakeholder(s) why you failed to meet one of your commitments, while at the same time preempting one-off questions every time there is a crisis situation.
Develop clear standard operating procedures (like this one) for the notification process that include not only security and engineering groups but also business leaders, legal teams, and public relations (PR) staff. Drill these regularly to ensure everyone understands them.
Ensure that both software provider and consumer incentives are aligned (see this article for details).
As far as the public face of your policy goes, I would recommend including at least a high-level summary of it in your shared security model, potentially providing more detail to certain stakeholders - like paying customers - through a knowledge base or access-controlled security center.
When developing your internal procedures, I would also recommend dividing your notification program into “passive” and “active” components.
Passive and Active notifications
As anyone involved in vulnerability or incident response knows, events develop at widely varying velocities. Your program should account for this fact. Sometimes explicit, direct communication is necessary when time is of the essence. But not everything requires this channel and using it too much can create alert fatigue.
In any case, documenting what you have communicated is vitally important, to both save time and prevent any allegations that you have failed in your contractual or other duties. Thus, you should develop clear criteria as to what requires only “passive” notification and what requires both this and “active” notification.
Passive
Passive notifications are those which don’t interrupt the workflow of the stakeholders they target. They can be public or restricted to certain stakeholders such as active customers. But they are generally not especially sensitive because they represent “old news” regarding situations that have already been resolved one way or another. Examples of passive notifications include:
Release notes.
Alert banners.
Blog posts.
A Vulnerability Exploitability eXchange (VEX) report, whether or not it is included in a software bill of materials (SBOM).
While generally passive - such as when uploaded to a repository accessible to stakeholders - VEX reports can also serve as active notifications, for example if you email one to a stakeholder directly.
As in all things, automating and standardizing the process of making these notifications to the maximum extent possible will help conserve precious time and attention that could otherwise be applied to more pressing matters. Passive notifications should be machine-readable whenever feasible.
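To make this concrete, here is a minimal Python sketch of what one such machine-readable passive notification could look like. It is loosely modeled on the OpenVEX statement structure; the author name, product identifier, and exact field set are illustrative assumptions rather than a definitive schema.

```python
import json
from datetime import datetime, timezone

def build_vex_statement(vuln_id: str, product: str, status: str, statement: str) -> dict:
    """Build a single VEX-style statement for a passive, machine-readable notification.

    Field names are loosely modeled on the OpenVEX statement format; treat them
    as illustrative, not a definitive schema.
    """
    return {
        "vulnerability": {"name": vuln_id},
        "products": [{"@id": product}],
        "status": status,        # e.g. "not_affected", "affected", "fixed"
        "statement": statement,  # human-readable rationale
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Example: publish a "fixed" statement alongside release notes.
doc = {
    "author": "Example Vendor PSIRT",  # hypothetical author
    "statements": [
        build_vex_statement(
            "CVE-2021-44228",
            "pkg:maven/com.example/acme-server@2.4.1",  # illustrative product identifier
            "fixed",
            "Patched by upgrading the bundled logging dependency.",
        )
    ],
}
print(json.dumps(doc, indent=2))
```

Generating artifacts like this directly from your vulnerability tracking system, rather than writing them by hand, is what keeps the passive channel cheap to operate.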
Risk determinations
I have written at length regarding how to calculate risk, so I would refer you to my articles on that topic if you are interested. But being very clear about how you calculate risk is vital when communicating externally as well as internally. I would advise having definitive thresholds and formulae defined before building out your notification program.
Specific to the topic of notifications, something you might consider doing is automatically accepting the risk from any vulnerability which you determine has a “trivial” likelihood of exploitation - and making this stance clear to your stakeholders. What “trivial” means will, of course, depend on your overall risk appetite and available resources. Defining it clearly is an important step.
Trivial probability of exploitation
When you face dozens or hundreds of thousands of known vulnerabilities, it will likely be necessary to establish a bar below which you will not conduct any further analysis. This could be a given Exploit Prediction Scoring System (EPSS) rating, output from a commercial tool, or some other metric.
Unfortunately, because you exclude such issues from further review, you may be accepting more risk than you actually intend to. Thus, it might make sense to only allow such automatic categorizations for less sensitive assets while requiring a full evaluation of every vulnerability present in more valuable ones.
Furthermore, even if your most skilled team member carefully reviews a given vulnerability and decides it is not exploitable, he could still get it wrong. Risk is everywhere and we will never be rid of it entirely. We can only make probabilistic decisions with the best information available to us at the time.
Thus, I would suggest using some combination of factors in determining whether you will categorize a given vulnerability as being of “trivial” exploitability and provide the following as examples.
I’ll note that these are not equivalent or otherwise interchangeable and you’ll need to do a careful analysis to determine which, if any, are right for you. Options include:
An EPSS score below the 5th percentile of all published CVEs.
The software publisher provides a VEX report with a state of not_affected or false_positive.
First-party technical analysis confirms non-exploitability through manual code review.
Defining this threshold is vital for building your notification strategy.
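As a rough illustration, here is a minimal Python sketch of how such a threshold could be encoded as an automated gate. The 5th-percentile floor, the accepted VEX states, and the field names are assumptions drawn from the examples above; adjust them to your own risk appetite and tooling.

```python
from dataclasses import dataclass
from typing import Optional

EPSS_PERCENTILE_FLOOR = 0.05               # example: "below the 5th percentile"
VEX_TRIVIAL_STATES = {"not_affected", "false_positive"}

@dataclass
class VulnRecord:
    cve_id: str
    epss_percentile: Optional[float] = None            # 0.0-1.0, from an EPSS feed
    vex_state: Optional[str] = None                    # supplier-provided VEX state
    manual_review_exploitable: Optional[bool] = None   # result of first-party review

def is_trivial(vuln: VulnRecord) -> bool:
    """Return True if any configured criterion marks the vulnerability as having
    a trivial likelihood of exploitation (and thus automatically accepted risk)."""
    if vuln.manual_review_exploitable is False:
        return True
    if vuln.vex_state in VEX_TRIVIAL_STATES:
        return True
    if vuln.epss_percentile is not None and vuln.epss_percentile < EPSS_PERCENTILE_FLOOR:
        return True
    return False

# Example usage
record = VulnRecord("CVE-2022-12345", epss_percentile=0.02)
print(is_trivial(record))  # True: below the example EPSS percentile floor
```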
When to make passive notifications
There is a major vulnerability (e.g. Ripple20, Log4Shell, etc.) in the news which has no nexus whatsoever to your product, but your entire support apparatus is nonetheless getting flooded with inbound requests from anxious stakeholders.
You find a vulnerability, such as a CVE, in your product but you determine it has a trivial likelihood of exploitation.
You have identified and resolved a vulnerability with non-trivial likelihood of exploitation in:
a purely -as-a-Service (-aaS) product requiring no stakeholder intervention to update.
a non-aaS product where you have confirmed no stakeholders are using the impacted version(s).
If you distribute stakeholder-managed software (e.g. deployed on-premises), you might alternatively have a clearly-stated policy whereby you don’t make passive notifications until the most recent impacted version of a given product has gone out of support. This has the benefit of both simplicity and giving slow-moving stakeholders another reason to patch their systems.
Related to the above point, if a security researcher helped you out as part of your coordinated vulnerability disclosure (CVD) program, you should make sure to recognize him as part of the passive notification.
Active
Active notifications are appropriate for time-sensitive situations where discretion and confidentiality are key. They can take the form of:
Emails.
Text messages.
Phone calls.
Live/virtual meetings.
Any notification made ephemerally (e.g. via voice or video chat or in person) should also be followed almost immediately by a written notification (e.g. email) with the same information. Additionally, all active notifications should eventually become passive notifications to memorialize all information communicated.
When to make active notifications
There is a non-trivial risk of exploitation (including any evidence of active malicious use of the vulnerability) and you can provide compensating controls a stakeholder can apply to reduce their risk.
You have a software fix available for a vulnerability in a non-aaS product which does require stakeholder intervention to update.
When you suspect a stakeholder might expect an active notification but there isn’t necessarily a reason to provide one from a risk perspective. For example, if you fix a major vulnerability in your -aaS product that results from its interaction with a partner product, you would likely want to give that partner a heads-up so they can manage any PR issues that will result from your passively disclosing the underlying issue.
When not to make notifications
Although I have advocated for transparency to the extent possible, in some situations saying nothing is the best course of action. These include:
When you have identified a vulnerability with non-trivial likelihood of exploitation but there are no compensating controls and you are actively working on a fix for it. Since there is nothing any stakeholder can do to mitigate their risk, there is no upside to even marginally increasing the risk of a premature public disclosure of the vulnerability (by disseminating information about it more widely than absolutely necessary).
You have identified a scanner finding indicating non-trivial risk but have not been able to review it or make a determination to an appropriate level of certainty. Even if stakeholders push hard for blanket notification requirements in contracts (e.g. “vendor shall inform customer of all high and critical (per the CVSS) severity issues within 48 hours”), you should negotiate hard to avoid this language. It will create gigantic alert fatigue in addition to contractual liability when you are inevitably unable to comply.
When legally or contractually required not to say anything. With that said, make sure you don’t contract your way into a situation where you are required to maintain confidentiality about an issue - such as in third-party software - when you would otherwise want to notify stakeholders. I have seen this happen, and trying to negotiate out of this provision after someone else agreed to it was extremely painful.
Finally, I would recommend having a clear way of distinguishing general requests about security issues (e.g. a security questionnaire asking if you support single sign-on) from reports about specific vulnerabilities (e.g. CVE-2022-12345). You’ll probably want a central clearinghouse to make sure nothing goes in the wrong bucket (especially a vulnerability going into the general issue one).
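To tie these criteria together, the following Python sketch condenses the passive/active/none routing logic described above into a single function. The inputs and their ordering are simplifications of the scenarios in this post - a starting point to adapt, not a complete or authoritative policy.

```python
from dataclasses import dataclass

@dataclass
class VulnContext:
    trivial_exploitability: bool       # per your "trivial" threshold
    fix_available: bool                # a patch or fixed release exists
    compensating_controls: bool        # stakeholders can reduce risk themselves
    requires_stakeholder_action: bool  # e.g. non-aaS product needing an update
    legally_restricted: bool           # contractual/legal bar on disclosure

def notification_type(v: VulnContext) -> str:
    """Route a vulnerability to 'active', 'passive', or 'none' per the criteria above."""
    if v.legally_restricted:
        return "none"
    if v.trivial_exploitability:
        return "passive"          # e.g. release notes or a VEX entry
    if not v.fix_available and not v.compensating_controls:
        return "none"             # nothing stakeholders can do yet; keep working the fix
    if v.requires_stakeholder_action or v.compensating_controls:
        return "active"           # direct outreach, later memorialized passively
    return "passive"              # fixed transparently, no stakeholder action needed

# Example: fix shipped for a non-aaS product that stakeholders must install themselves.
print(notification_type(VulnContext(False, True, False, True, False)))  # "active"
```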
Conclusion
As with everything in cybersecurity - and perhaps in life - the answer to the question “should I disclose this vulnerability?” is often “it depends.” With that said, having a clear and supportable rationale for your decision before a crisis hits is absolutely vital to maintaining your business continuity, reputation, and sanity. With this framework, I hope to provide a structured and logical way of making these weighty decisions.
Customizing these guidelines for your specific organization and situation is no doubt essential, and I recommend coordinating with all relevant personnel to develop a mutually-agreeable plan of action. Through scheduled and unscheduled drills, you can ensure that everyone is on board and understands their specific roles. You will also be able to test your procedures for weaknesses that you will want to remedy before a true emergency is upon you.