Building a security program can be time-consuming and resource-intensive.
Especially for organizations just getting started, the amount of work involved can be intimidating. One step that any organization trying to use software securely can take with minimal effort, however, is setting up a coordinated vulnerability disclosure (CVD) program. These are also called “responsible disclosure programs” or simply “vulnerability disclosure programs” (VDPs).
CVD programs are formal mechanisms for an organization to receive and take action on reports from security researchers (also known as “white hat” or ethical hackers) regarding cybersecurity vulnerabilities in the organization’s network. They are distinct from bug bounty programs in that they do not offer to pay for such reports.
Additionally, CVD programs are useful even if you do not develop software yourself or make it available for customers. This is because:
It is quite possible a researcher could find a novel vulnerability in a Software-as-a-Service (SaaS) product you use that the vendor is not aware of. In this case, you would want to inform the vendor and get them to fix the bug, protecting your (and your customers’) data.
Even if you only develop and operate software for in-house purposes (i.e. you don’t make customer-facing applications), you would absolutely want to know if you accidentally misconfigured an Infrastructure-as-a-Service (IaaS) environment, for example. This security research blog post highlights exactly this situation (and describes two very different responses from two different organizations).
The motivation for playing the CVD game
Why would a security researcher take the time to look for vulnerabilities in your network when they aren’t going to be compensated, you ask? In a word: credibility. A public record of successful CVD submissions helps a researcher establish a name in the security community.
Additionally, if a company has a good experience with a researcher, it might even contract with or hire them for paid work. Furthermore, since bugs with bounties understandably receive a lot of attention, much of the low-hanging fruit with a price on it has already been identified. Thus, especially for those just starting out, CVD programs are a great way to build a reputation in the security community.
In addition to getting free help, having a formal CVD program will protect you from the reputational (and potentially direct financial) damage stemming from an uncoordinated vulnerability disclosure. As many companies have learned, the fact that security researchers are ethical does not mean they are necessarily tame. If they feel they are being ignored or given the run-around, security researchers may simply announce the vulnerability publicly. Even if they wait until your organization gives the go-ahead, they may loudly express their frustrations with the process.
The main requirements to run a CVD program, from a company’s perspective, are:
Publicly state that you have such a program and what the rules of engagement are.
Develop an internal process to address any incoming reports.
Describing your program externally
The simplest way to do this is to have a web page stating how you handle CVD. Since you should also publish a shared security model, this page can form a section of it. The key requirements of such a CVD page are:
Clearly state that you won’t pursue legal action against those acting in good faith. I am not an attorney, so obviously check with your own counsel on how to word this appropriately, but you need to ensure that researchers won’t fear punitive steps if they submit a vulnerability report to you.
With that said, leave no doubt as to whether you will pay for vulnerability reports, i.e. whether or not this is a bug bounty program. Unfortunately, there are “gray hat” hackers out there who will insinuate (or flat-out demand) that you pay them, even if you don’t operate a bug bounty program. State this up front so you can clearly identify whether someone is acting in good faith.
Provide a clear process for how to submit vulnerabilities. This should include things like a contact email address or phone number, but also the format reports should take (e.g. they must include a proof-of-concept video or steps to reproduce). As an example of what not to do, Google “Microsoft coordinated vulnerability disclosure.” When not logged in and using incognito mode, you will be directed to this page, which provides a high-level description of what CVD is but no actionable steps.
Provide a time frame within which you will respond, preferably measured in hours after submission rather than “business days”; the latter can be interpreted differently in different geographies.
Make clear what the burden of proof is. It doesn’t take much effort or talent to run a vulnerability scanner against your marketing website and flag CVEs, most of which are not exploitable. Merely reporting this type of finding doesn’t add any value, so make sure you require evidence of exploitability.
Bonus points
Host a security.txt file (standardized in RFC 9116) at https://yourdomain.com/.well-known/security.txt. This reduces the chance that a researcher never finds your policy in the first place and is generally good practice; a minimal example follows after this list.
Provide a secure method for submitting information regarding the vulnerability. Security researchers are themselves tempting targets for hacks because they often hold actionable information about non-public flaws in enterprise networks and how to exploit them. Additionally, you likely want to prevent wide dissemination of inbound CVD reports within your own organization, at least until the underlying issue is remediated. Thus, encryption standards like Pretty Good Privacy (PGP) or even secure messaging apps like Signal (if you have a “duty phone” for whoever monitors CVD submissions) are potential options; see the encryption sketch below.
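For reference, here is a minimal sketch of what such a security.txt file could look like under RFC 9116. The domain, email address, and URLs are all placeholders; note that the Contact and Expires fields are required by the standard.

```
# Placeholder security.txt per RFC 9116 -- replace all example values
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/security-policy
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
```

The Policy field can point to the CVD page described above, closing the loop between discovery and your rules of engagement.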
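And as one illustration of the secure submission point, below is a minimal sketch of how a researcher might encrypt a report against your published PGP key using the third-party python-gnupg library (a wrapper around a local GnuPG installation). The key file name and report contents are hypothetical placeholders, not a prescription:

```python
# Minimal sketch: encrypting a CVD report with a vendor's public PGP key.
# Requires a local GnuPG install plus `pip install python-gnupg`.
# The file name and report text below are hypothetical placeholders.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory

# Import the vendor's public key, e.g. the one linked from security.txt
with open("vendor-public-key.asc") as f:
    import_result = gpg.import_keys(f.read())
fingerprint = import_result.fingerprints[0]

report = (
    "Summary: stored XSS in the comment widget\n"
    "Steps to reproduce: ...\n"
    "Proof of concept: ...\n"
)

# always_trust=True bypasses GnuPG's web-of-trust check; in practice,
# verify the key's fingerprint out of band first.
encrypted = gpg.encrypt(report, fingerprint, always_trust=True)
assert encrypted.ok, encrypted.status
print(str(encrypted))  # ASCII-armored ciphertext, safe to send over email
```

The same idea applies on your side: decrypting inbound reports with the corresponding private key keeps vulnerability details out of plaintext inboxes.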
Building an internal process
Developing your externally-facing program, however, is only half the battle (or potentially less). Additionally, you should:
Have a plan to rapidly acknowledge and take action on any legitimate findings that security researchers submit. Don’t put them in a position to wonder whether you are blowing them off, because then you might have an uncoordinated vulnerability disclosure on your hands. Security researchers may deliver their own deadlines to you as part of their disclosure. Like everything, these are negotiable, but you should act in good faith if it appears your counterparty is doing the same.
On the same note, have a decision tree or branch plan to activate in case the researcher violates your policy in some way and you no longer consider the disclosure to be “coordinated.” If an initially benign-seeming report turns into an extortion attempt, you should have a plan to muster the technical and legal resources to stem any damage and pursue the attacker through appropriate means.
Look carefully at reports to make sure they are actually referring to vulnerabilities in your organization’s code. Sometimes it’s challenging for an outsider to tell where your product ends and another begins, especially when dealing with plug-ins, extensions, and the like. If you could be impacted by the vulnerability in this third-party code, it might make sense to reach out to that party directly, while keeping the researcher in the loop. If not, advise the researcher to make the disclosure to the party that actually developed the code.
Make sure you have a solid communication plan for external stakeholders, especially customers. If you are a B2B company, they should definitely get advance warning prior to any public announcement, as soon as you have developed a fix, so that they can patch their instances of your application (if non-SaaS) rather than be taken by surprise.
Recognize researchers for their work after the issue is resolved, at the very least through a blog post or press release. Sending them a company baseball hat or t-shirt is also a classy move.
Responsible disclosure can be a win-win situation for software companies and researchers; you learn about and fix problems in your code and they bolster their reputations. Follow these guidelines, and you are more likely to make this a reality.
Notes
Thanks to everyone who commented on my original LinkedIn post on this topic; I have incorporated some of your feedback into this article.
Check out StackAware’s policy for an example. The disclose.io project also has a policy builder that is a good first stop for organizations just starting out. I think the text is a little wordy and vague, but it’s certainly better than nothing.
Related LinkedIn post