Deploying securely, the government way
A look into recent NDAA provisions about software security.
The House of Representatives recently passed its version of the National Defense Authorization Act (NDAA) for Fiscal Year 2023. As several folks have already pointed out, it has some notable provisions related to software security. Although there is some other interesting stuff in here (section 5804 proposes a national strategy for distributed ledger technology and directs research into smart contract vulnerabilities), I’ll focus on what has been getting the most attention, which is section 6722.
Entitled “DHS SOFTWARE SUPPLY CHAIN RISK MANAGEMENT,” this part of the document sets requirements for “covered contracts,” which are those “relating to the procurement of covered information and communications technology or services for the Department of Homeland Security.” For such “covered contracts” the vendor must provide “a planned bill of materials when submitting a bid proposal” and the following:
(1) A certification that each item listed on the submitted bill of materials is free from all known vulnerabilities or defects affecting the security of the end product or service identified in--
(A) the National Institute of Standards and Technology National Vulnerability Database; and
(B) any database designated by the Under Secretary, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency, that tracks security vulnerabilities and defects in open source or third-party developed software.
(2) A notification of each vulnerability or defect affecting the security of the end product or service, if identified, through--
(A) the certification of such submitted bill of materials required under paragraph (1); or
(B) any other manner of identification.
(3) A notification relating to the plan to mitigate, repair, or resolve each security vulnerability or defect listed in the notification required under paragraph (2).
While some have argued that this “would forbid the Department of Defense (DoD) from procuring any software applications that contain a single security vulnerability, or CVE,” the bill neither applies to the DoD nor does it formally prevent the DHS from procuring software with CVEs in it. With that said, I do think there are some major concerns with the existing text:
It appears to be missing an “or” after paragraph (1) as well as an “and” after paragraph (2). Without these key conjunctions, the text would require a vendor to certify that there are no known vulnerabilities in the product while simultaneously alerting the government about such known vulnerabilities and the plan to resolve them. If paragraph (1) is a firm requirement, then (2) and (3) would be unnecessary. Adding these two words, however, would make the provision far more coherent.
The vast majority of modern software products contain open source components, many of which have known vulnerabilities in them. The vast majority of these vulnerabilities are not exploitable in any given configuration. Although the bill’s requirements only apply to vulnerabilities “affecting the security of the end product or service,” the word “security” does not appear to be defined in the bill. A non-zero number of security teams (including those who have served at the highest levels of the federal government) robotically (and unwisely) index on issues with a CVSS rating of “high” or “critical” and assert there is no way to completely rule out a security impact from any such known vulnerability. Thus, faced with the chance that a government customer might claim the vendor has violated the provision (backed by force of law), many will likely “play it safe” and make sure no one can accuse them of being aware of any known vulnerabilities in their product. This doesn’t actually make anyone safer, as I have noted previously, and also because:
It is likely that vendors would need an entirely different code base for DHS use than for commercial or other governmental customers. The former would need a very controlled development process, likely including a “code freeze” that begins when the contract process starts and ends when the offer is submitted, to ensure no components with known vulnerabilities are introduced during this period. Considering that, from a technological perspective, this is a completely arbitrary window, such a practice would represent a large waste of time.
More insidiously, there is likely to be an accompanying, unofficial “vulnerability scanning freeze” during this same period. New vulnerabilities are reported every day, including in existing components. Because the vendor cannot certify while aware of any known vulnerabilities in its product, it has a strong incentive not to learn about them, especially immediately before submitting the bid. Vendors would thus be incentivized to stop or curtail vulnerability scanning and penetration testing activities during this time.
Due to the need to create separate government product lines to meet these requirements, only the existing behemoths would be able to compete for DHS contracts, slowing down the government’s adoption of new products, including cybersecurity ones.
Additionally, the provision would allow the vendor to ship a product with a known vulnerability - including an exploitable, high-risk one - so long as the vulnerability does not appear in one of the databases designated by the relevant Under Secretary. I highly doubt that the government will buy subscriptions to every commercial vulnerability database its vendors use, creating a potential gap. There could also be serious, unaddressed issues in first-party code known only to the vendor and not reported in any of the covered databases. Such issues would likewise fall outside the scope of the provision.
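To make the coverage-gap concern concrete, here is a toy sketch of a certification check that consults only the government-designated databases. Every name and data point here is hypothetical - this is not any real compliance tool - but it shows how a component with a vendor-known or commercially tracked flaw would still certify cleanly:

```python
# Toy illustration of the database-coverage gap: a certification check
# that only consults government-designated vulnerability databases will
# pass a component whose flaw is tracked elsewhere, or known only to
# the vendor. All database names, components, and IDs are made up.

# Vulnerabilities tracked by the designated databases (e.g. the NVD).
DESIGNATED_DBS = {
    "nvd": {"liba": ["CVE-2022-1111"]},
}

# The same issue, plus one more, as tracked by a commercial feed the
# government never designated...
COMMERCIAL_DB = {"liba": ["CVE-2022-1111"], "libb": ["VULN-COMMERCIAL-42"]}
# ...and a first-party bug only the vendor knows about.
VENDOR_KNOWN = {"first-party-code": ["internal-issue-7"]}

def certifiable(component: str) -> bool:
    """True if no *designated* database lists a vulnerability for this
    component -- the bill's certification standard as written."""
    return all(component not in db for db in DESIGNATED_DBS.values())

sbom = ["liba", "libb", "first-party-code"]
verdicts = {c: certifiable(c) for c in sbom}
# 'libb' and 'first-party-code' certify cleanly despite known issues.
print(verdicts)
```

Under this (deliberately literal) reading, only the component with an NVD entry fails certification; the other two known problems sail through.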
Furthermore, the section says nothing about what happens after the bid process concludes. While there are a range of existing requirements in place for commercial software used in government networks, this provision appears to over-index on the (perceived) security of a piece of software at a single point in time, without any concerns about what happens after the software goes into operation.
There appears to be no ability on the part of the vendor to accept risk stemming from software vulnerabilities, even when providing a justification to the government. The vendor can only “mitigate, repair, or resolve” such issues. There are many situations where it is conceivable that the cybersecurity risk is outweighed, at least temporarily, by the operational one (e.g. rapidly developing and deploying a system for vetting travelers against a terrorist watchlist). Conversely, such a “no known vulnerability” standard is too stringent for certain use cases, e.g. displaying the menu in the DHS dining hall. Such a standard will make a lot of software unnecessarily expensive, a burden on taxpayers.
With that said, I think there is an opportunity for forward progress here. At a minimum, this legislation would make it essentially mandatory to use machine-readable Vulnerability Exploitability eXchange (VEX) reports about issues identified in software products used by the government. Any non-automated process would break down quickly and likely run afoul of the law’s requirements. Without the widespread use of such standards and tools, software bills of materials (SBOMs) are unlikely to see significant adoption, because vendors will be flooded with inquiries about false-positive vulnerabilities in their products. Thus, this bill may help motivate the introduction of new techniques for communicating about the exploitability of vulnerabilities in software while increasing transparency for the entire ecosystem.
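As an illustration of what such machine-readable communication looks like, here is a minimal sketch that builds a VEX document following the OpenVEX format. The product identifier, author, and document ID are made-up placeholders (the CVE is Log4Shell, used purely as a familiar example):

```python
import json
from datetime import datetime, timezone

# A minimal VEX document in the OpenVEX format. It lets a vendor state,
# in machine-readable form, that a known CVE in a bundled component does
# not affect the shipped product. The product purl, author, and @id
# below are placeholders, not real identifiers.
vex_doc = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "@id": "https://example.com/vex/example-2023-0001",
    "author": "Example Vendor Product Security Team",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "version": 1,
    "statements": [
        {
            "vulnerability": {"name": "CVE-2021-44228"},
            "products": [{"@id": "pkg:generic/example-product@1.2.3"}],
            # One of the spec's fixed status values:
            # affected | not_affected | fixed | under_investigation
            "status": "not_affected",
            # A justification is required when status is not_affected.
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}

print(json.dumps(vex_doc, indent=2))
```

A consumer - say, an agency’s procurement tooling - could then triage automatically: components whose only known CVEs carry a "not_affected" statement need no mitigation plan, which is exactly the kind of false-positive filtering that would make SBOM-based requirements workable at scale.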
These are my initial reactions, but I welcome comments with your thoughts. My hope is that Congress will modify these provisions during the conference process to reduce the perverse incentives. There are, however, some potential positive aspects to be found in this legislation. As always, I stand ready to assist those in government in refining the bill’s provisions.