As always, I am not an attorney and this is not legal advice.
Contracts are the most common way in which organizations allocate risk as part of a transaction.
Being explicit about how you apportion this risk in terms of cybersecurity can help to prevent confusion, heartache, data breaches, and lawsuits. As a substantial source of such risk, software vulnerabilities deserve a fair amount of attention in these agreements.
Lawyers are well-positioned to advise on how best to craft these accords, but I strongly agree with the refrain (attributed to several people who were themselves paraphrasing Clemenceau) that “law is too important to be left to the lawyers.” Thus, as with all advisors (cybersecurity ones included), attorneys should not drive contract negotiations. A single business leader should be at the forefront, equipped with all of the relevant context to make appropriate risk management decisions.
Whether you are in the driver’s seat or advising from the sidelines, I have some suggestions for how to structure contractual provisions related to vulnerability management. I think these suggestions can be useful to both software vendors and buyers.
Industry standards
Just kidding.
There is no prevailing industry standard (of which I am aware) for how best to handle contractual terms for identifying or fixing vulnerabilities in software products. Many, if not most, contracts I have ever seen are silent on the specific matter of vulnerability management. Even worse, some agreements actually codify behavior that is illogical and not in the best interests of either party.
For those contracts that do mention anything about vulnerability management, a plurality will say something to the effect of requiring that the vendor “implement a vulnerability detection and remediation program consistent with industry standards.” A smaller number will require notification from the vendor to the customer within a certain timeframe of “critical” (usually undefined) vulnerabilities identified in the software.
Finally, in a handful of cases, I have seen contracts that mandate a progressive set of timelines for notification and remediation based on the CVSS rating of vulnerabilities, e.g. “vendor shall notify customer of vulnerabilities measured at 9.0 and above as per the Common Vulnerability Scoring System within 2 days, and remediate within 5 days.” I have never seen a contract that identifies who determines the CVSS score, with on-the-fly interpretations ranging from the National Vulnerability Database (NVD) being the arbiter (which can lead to odd outcomes when the NVD score doesn’t reflect reality once other controls or mitigations are taken into account) to the vendor having discretion to evaluate the score in the context of its product or network.
Unfortunately, none of these are good approaches.
Being silent on the topic means that a customer would likely have no idea as to the vulnerability management practices and procedures that a vendor follows, which, anecdotally, vary wildly between organizations.
The mere existence of a vulnerability management program doesn’t tell you much either. As I have noted, in many cases such programs are arguably no more effective than randomly picking issues to fix. Furthermore, a program could be as simple as “we only fix issues reported on in WIRED magazine,” in which case such fixes would be both rare and late.
Having a vague bar of “criticality” above which a vendor must notify a customer of a vulnerability leaves far too much open to interpretation. Even assuming use of the CVSS standard (a “critical” issue is one rated 9.0 or above on that scale) with the NVD as the source of truth, such an agreement has problems even if it is faithfully adhered to. Indexing on the CVSS score and requiring notification or remediation within a certain amount of time is likely to a) overwhelm the customer with reports (assuming the vendor even complies with the requirement) and create alert fatigue, and b) spur the customer to demand fixes based on the CVSS score, which is not a good vulnerability management strategy and wastes vendor resources that could be applied more efficiently to other security measures or functional capabilities.
Notification timelines
In my experience, vendors rarely comply with contractual requirements to notify customers of vulnerabilities in their products. Disclosure usually happens only after a customer scans the vendor’s product for flaws or sees a media report and inquires about a given issue. There are likely many reasons for this, but the primary one is that, to my knowledge, no vendor has ever been sued solely for breaching a notification clause timeline (please comment below if I am mistaken).
Furthermore, it’s unlikely such a suit would hold much water anyway. I have never seen a contract that makes clear when the “clock starts.” Is it upon the vendor getting a scan result from a software composition analysis (SCA) tool indicating a potentially vulnerable component is present? Is it after the vendor has triaged the vulnerability and determined it to be a true positive? I haven’t seen these questions addressed in any written agreement between organizations.
Finally, if a vendor wanted to be extra safe and alert its customer to everything it found in its product every time, after the first few reports the customer would be almost certain to stop paying attention or responding to the vendor. Assuming the notification standard is CVSS 7+ (per the NVD) and with more than 10,000 CVEs identified in 2020 alone (and growing), finding one such issue in a given product could conceivably be a daily occurrence.
Optimizing a notification program
After spilling all of the above ink, though, I will tell you that, as a customer, you probably don’t want a broad vulnerability notification requirement in your contract, with certain exceptions. That’s because such a provision creates a perverse incentive. If finding a vulnerability in its product creates a whole bunch of work for the vendor and potentially agitates customers…guess what software makers are going to be hesitant to do?
Try to find vulnerabilities.
Why run the SCA tool every day when you can run it every month?
Why introduce a new scanner that will produce even more findings?
As a customer, though, you want the vendor to find as many vulnerabilities in their product as possible. That’s because the only way to fix these issues is to know about them. Short of some very elaborate provision detailing the frequency and method of scanning - which you have confidence you can enforce - a broad vulnerability notification requirement is going to cause the vendor to look less hard for security issues in its product.
If you must have a generalized notification requirement in your contract, though, I would recommend the following:
Base the notification threshold on an objective standard that has use in a risk management context, such as the Exploit Prediction Scoring System (EPSS) rating of a given vulnerability (e.g. above 0.1, which means a 10% chance of exploitation in the next 30 days). This is a metric that both parties can view for free, and it causes fewer false alarms than when using CVSS 9+ (or 7+ or whatever) as your bar.
Drawbacks of this approach include the fact that EPSS only applies to published CVEs and that slightly more than half of all exploitations involve vulnerabilities that were unknown at the time or otherwise unpatchable. A possible way to address this gap would be to specify that, if an EPSS score is not available, the vendor must issue notifications for any vulnerability meeting certain criteria, e.g. if it can be exploited remotely and without any authentication. My very rough analysis of EPSS scores for different categories of vulnerabilities shows a noticeable correlation between these characteristics and likelihood of exploitation.
Although EPSS scores are generic and not specific to individual environments and deployments, having such a clear trigger can force a discussion with the vendor at the times when it is likely to be most appropriate and valuable.
Even more effective would be to scale the requirement such that the notification window shrinks as the risk of exploitation increases, e.g. notification for an issue with an EPSS score of 0.3 is due within 10 days, but one with a score of 0.6 is due within 5. This would require faster notification for issues that are more likely to be exploited; a rough sketch of such a sliding scale appears below.
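To make the idea concrete, here is a minimal sketch of how such a sliding scale might work. The tier thresholds, the remote/no-authentication fallback for issues without a published EPSS score, and the function names are all hypothetical illustrations, not recommended contract values.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical tiers based on the EPSS examples above: a higher probability
# of exploitation means a shorter contractual notification window.
NOTIFICATION_TIERS: List[Tuple[float, int]] = [
    (0.6, 5),   # EPSS >= 0.6 -> notify within 5 days
    (0.3, 10),  # EPSS >= 0.3 -> notify within 10 days
    (0.1, 30),  # EPSS >= 0.1 -> notify within 30 days
]

@dataclass
class Finding:
    cve_id: str
    epss: Optional[float]            # None if no EPSS score has been published
    remotely_exploitable: bool = False
    requires_auth: bool = True

def notification_deadline_days(finding: Finding) -> Optional[int]:
    """Return the notification window in days, or None if no notification
    would be required under this hypothetical sliding scale."""
    if finding.epss is not None:
        for threshold, days in NOTIFICATION_TIERS:
            if finding.epss >= threshold:
                return days
        return None
    # Fallback for issues without a published EPSS score: notify for anything
    # exploitable remotely without authentication (per the criteria above).
    if finding.remotely_exploitable and not finding.requires_auth:
        return 10
    return None

print(notification_deadline_days(Finding("CVE-2021-44228", epss=0.97)))   # 5
print(notification_deadline_days(
    Finding("CVE-XXXX-NNNN", epss=None,
            remotely_exploitable=True, requires_auth=False)))             # 10
```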
Although I would advocate dispensing with such broad provisions, there are two exceptions where a customer DOES want to have mandatory notifications baked into contractual language:
When the vendor has identified any compensating controls that can protect the customer against exploitation prior to the delivery of a formal software fix.
If the maker of software has determined there is a way to reduce the likelihood of someone actually using a given issue to penetrate a network, then presumably it is an otherwise exploitable vulnerability. Whether it is making a configuration change, blocking a certain port, or some other action that doesn’t require the vendor to change its code, you as a customer will probably want to know about it.
The Vulnerability Exploitability eXchange (VEX) standard offers a consistent way to communicate such recommended actions, so you might want to consider requiring mitigation reports in that format.
When there is evidence that a vulnerability is being maliciously exploited, either in the vendor’s product or elsewhere “in the wild.”
This has the benefit of limiting notifications to true emergency situations, e.g. log4shell (CVE-2021-44228), where hackers are actively targeting a given vulnerability.
It has the difficult-to-avoid disadvantage of still not clearly establishing when the “clock starts.” Is it upon an employee reading a tweet about the issue? Upon a threat intelligence tool identifying exploitation attempts?
Additionally, you don’t want to create yet another perverse incentive here, where the vendor buries its head in the sand to avoid learning about malicious exploitation. Even assuming completely good faith, different organizations have different data feeds, subscriptions, and levels of sophistication. As a result, they are likely to learn about active exploitation at different speeds. In a perfect world, the customer would have a way to track exploitation attempts across the internet, which would then ping the vendor to respond with whether or not its product is impacted. The vendor would not be able to feign ignorance, and the contractual obligation would require a reply within a certain amount of time.
Finally, you’ll need to make sure that “maliciously exploited” is clearly defined. You don’t necessarily need to know immediately if an outside security researcher finds and exploits a vulnerability in your vendor’s product, and you certainly don’t want to create any obstacles to that happening due to the benefits of a well-run coordinated vulnerability disclosure (CVD) program.
3rd party review
In addition to requiring the vendor to report certain types of vulnerabilities, software buyers should also consider contractual provisions requiring the vendor to receive input from 3rd parties regarding its security posture. This could include maintaining a CVD program, offering bug bounties, and/or submitting to penetration tests from a contracted security vendor. Customers might also want to bake direct notification pipelines into any language agreed to with the vendor or these third parties, so that the customer becomes aware of any exploitable issues as soon as the vendor does. This is more challenging with a CVD program, where you cannot directly control what security researchers do, but you could require the vendor to add you to the distribution list or contact form that receives such reports.
This model goes a long way toward addressing the “who decides” and “when does the clock start” problems described previously. In addition to resolving that ambiguity, it also means that vulnerabilities are likely to be reported more rapidly than if reporting were solely at the vendor’s discretion. Bug bounty hunters especially, who are generally paid based on the quantity and severity of flaws reported, will have a strong incentive to identify exploitable issues as rapidly as possible.
The latter two options - bug bounties and external penetration tests - can be expensive, so the customer may need to pay for them itself or otherwise sweeten the software deal. Frankly, though, a security-minded vendor should be thrilled if a new contract comes with a penetration test included or the customer agrees to pay bounties for bugs found in its software. This is already happening: Dropbox pays bounties for security bugs found in vendors such as Zoom, and Canva offers penetration tests to some of its vendors and incorporates the results - and the vendor’s level of cooperation - into its third-party risk management process.
I want to give credit to Richard R., who floated the basis for this section via a comment on a LinkedIn post of mine. Thanks for the input!
Speed of remediation
Once a customer becomes aware of a vulnerability in a product that it uses, the next question is how quickly the vendor has to fix it. This is unfortunately no less thorny an issue than notification. A software maker will be very hesitant to sign a contract that obliges it to do anything where it cannot fully understand the scope of work involved. Building features is one thing; if your product and engineering teams aren’t incompetent, you will at least have some idea of how long a new feature will take to develop, given the requirements.
Vulnerability fixes, though, are much trickier. Fixing an extremely severe CVE could be as easy as flipping a single digit in the version number of a library that is imported when the product is compiled. Conversely, resolving a trivial issue could require a massive re-architecture of the application. Both situations are no-brainers; do the first and don’t do the second. But it’s hard to figure out how to word a contract that doesn’t have the potential to put the software maker - and its customer, who will suffer reduced product quality due to the vendor’s wasted effort on the trivial issue - in a very tight spot in the latter case.
Additionally, having a remediation timeline creates the same perverse incentive described previously: if fixing vulnerabilities might be a huge hassle, why would the vendor look very hard for them in the first place?
Confidentiality and integrity SLAs
Given the problems inherent to existing practices - especially broad-based contractual timelines for notification and remediation of vulnerabilities - I suggest an alternative: extending the concept of a service level agreement (SLA) to the confidentiality and integrity attributes of data.
Availability SLAs are extremely common in the software industry, especially among Software-as-a-Service (SaaS) providers. In fact, there are now SaaS tools that help you generate SaaS SLAs. Essentially, these SLAs require the vendor to pay financial penalties if the customer cannot access its data with a certain level of predictability, usually expressed in the form of “uptime” in a given year. For example, 99.9% availability (“three nines”) means that the application can only be down for approximately 8 hours and 45 minutes per year.
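For reference, the arithmetic behind those uptime figures is straightforward; here is a quick sketch (the function name and the chosen availability levels are purely illustrative):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Maximum yearly downtime permitted under a given availability SLA."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):,.0f} minutes/year")
# 99.9% works out to roughly 526 minutes, i.e. about 8 hours and 45 minutes.
```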
In my opinion, the next logical step is developing SLAs for data confidentiality and integrity. This would mean that, should a vendor be breached, it agrees ahead of time to pay a certain amount per record confirmed to have been compromised. The average value of individual records lost is relatively well established, especially due to the public availability of things like IBM’s Cost of a Data Breach report. Lost 1,000 records containing the customer’s intellectual property (for any reason, malicious or otherwise)? Well, that’s $169,000 owed to the customer.
This can get tricky when there are regulatory penalties at play (as in the healthcare space) and there might need to be payments to multiple parties, but I am sure a properly motivated and resourceful person or group thereof could figure out a rational formula. Additionally, having a clear standard of what qualifies as “lost” will be important, but the vast array of databases of leaked information on the internet and commercial tools that scan them suggests this is also a solvable problem.
Data integrity is probably even easier. A permanent loss of integrity could be treated like a permanent loss of availability (as after a successful ransomware attack from which the vendor cannot recover). A partial corruption of data that can be remedied through some work should carry a penalty equivalent to the opportunity cost of that work. For example, if it takes a customer’s data engineer an average of 1 hour to repair 10,000 records corrupted by a flaw in a vendor product, and that engineer is paid $125/hour, then the integrity SLA penalty should be $0.0125 per record impacted.
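As a rough illustration of the arithmetic, here is a minimal sketch using the per-record and hourly figures from the examples above; the function names and default values are hypothetical, and any real agreement would negotiate its own numbers.

```python
def confidentiality_penalty(records_lost: int, cost_per_record: float = 169.0) -> float:
    """Penalty owed per record confirmed to have been compromised.
    The $169/record default mirrors the example above; a real contract
    would negotiate its own figure."""
    return records_lost * cost_per_record

def integrity_penalty(records_corrupted: int,
                      records_repaired_per_hour: float = 10_000,
                      engineer_hourly_rate: float = 125.0) -> float:
    """Penalty equal to the opportunity cost of repairing corrupted records."""
    cost_per_record = engineer_hourly_rate / records_repaired_per_hour  # $0.0125
    return records_corrupted * cost_per_record

print(confidentiality_penalty(1_000))   # 169000.0 -> the $169,000 example above
print(integrity_penalty(10_000))        # 125.0 -> one hour of engineer time
```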
The most important feature of this type of arrangement is that it aligns the interests of the customer and the vendor. Neither party wants the latter to get breached: the customer because of the interruption to its business operations and likely reputational harm, and the vendor because of these factors as well as the financial penalties. As a result, both will do whatever is in their power to prevent a breach from happening. Whether this means applying new security controls, scanning for vulnerabilities in novel ways, or training their teams more effectively, both organizations will seek to secure the vendor in the most cost-effective way possible. Many perverse incentives disappear when the vendor risks a clear amount of cold hard cash for getting breached.
Conclusion
Vulnerability management - including in your suppliers’ networks and products - should be all about managing risk. Thus, when negotiating your next contract, regardless of which side you find yourself on, think hard about the incentives you are putting in place and select ones that help both parties stay secure in the most economical fashion.