What software security regulation should (not) look like
A response to CISA's recent Foreign Affairs piece.
Government has a role to play in cybersecurity, but I don’t love what I’m hearing lately.
Specifically, the leader of the Cybersecurity and Infrastructure Security Agency (CISA), Jen Easterly, and her Executive Assistant Director, Eric Goldstein, recently penned a piece in Foreign Affairs. While I agree with some things they said, the overall thrust of it suggests CISA and potentially the entire Biden Administration is pushing for regulation of the software industry in a manner I view as counterproductive.
I think everyone (except perhaps cybercriminals) can agree that fewer data breaches and other successful attacks are preferable to more of them. But the way to arrive there is not through hyper-focusing on standards, frameworks, and other inputs. These are merely tools for achieving an end.
What the government should ultimately focus on is improving security by evaluating - and incentivizing - outcomes.
Thus, I’ll go through the major points of the Foreign Affairs article, detailing my thoughts on the key components. And I’ll conclude with how I think software security regulation should look.
While this shouldn’t be necessary to say, I want to make clear none of my critiques are in any way directed personally at the CISA team. I am sure they are absolutely dedicated to their missions and believe everything they wrote is in the best interest of the American people.
Additionally, after I previously criticized CISA’s Stakeholder-Specific Vulnerability Categorization (SSVC) model, Eric Goldstein personally reached out to me and put me in touch with his team to discuss my objections. I appreciate this move and am happy to continue engaging constructively.
Government officials can certainly express their policy preferences. But a key feature of our constitutional system is that I can also express mine.
CEOs should be accountable for cybersecurity
This is my single greatest point of agreement with the CISA team. From the very launch of Deploy Securely, I have advocated for viewing security professionals as advisors and implementers, not final decision-makers when it comes to cyber risk.
That is because business leaders - the most senior of whom is the CEO - must make tradeoffs among a wide variety of risks: regulatory, competitive, technological, and, of course, security.
So I agree that chief executives - and the boards to which they answer - should be ultimately accountable for cyber risk. Just like they are for every other type.
Unfortunately, just as the CISA team is wrapping up this point, they appear to undermine it:
Most important, board members should see that chief information security officers [CISOs] have the influence and resources necessary to make essential decisions on cybersecurity.
Organizations should absolutely empower their CISOs to be the key cyber risk advisor. And of course such leaders should have discretion - within the constraints set by their boss - to make implementation decisions.
But the “essential decisions on cybersecurity” belong to the CEO, not the CISO. Accountability and authority must go hand in hand.
Secure by design and by default are good, but the details are key
Few would argue that securing systems by design and by default is a bad thing. Resolving potential flaws earlier in the software development lifecycle is almost always preferable to resolving them later.
The biggest problem here is that these are very broad concepts, implementation details matter, and there are always tradeoffs to be had. The way the rest of the article describes cyber risk, though, shows little acknowledgement of the nuance and gray areas involved.
And I am concerned by some of the things the CISA team says, primarily because
The government tends to focus on metrics that don’t matter
Americans…accept products that are released to market with dozens, hundreds, or even thousands of defects.
This appears to be a reference to the fact that basically any software product in operation, if scanned, would reveal scores of known vulnerabilities. While for the most part true, it isn’t an especially relevant piece of information. The way such defects are recorded is flawed and generates a lot of downstream problems.
Organizations, in any case, tend to index heavily on raw vulnerability counts, despite the fact that these quantities aren’t especially significant by themselves from a risk management perspective. And the CISA team seems to fall into the same trap.
Since at most 10% of Common Vulnerabilities and Exposures (CVEs, a major category of known security defects) have ever been exploited in the wild, understanding which individual issues pose the biggest risk is much more important than raw quantity of them.
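To make this concrete, here is a minimal sketch of triaging by likelihood of exploitation rather than by raw count. The CVE identifiers and probability scores below are entirely hypothetical; the scores mimic the idea behind systems like EPSS, which estimate how likely a given CVE is to be exploited in the wild:

```python
# Illustrative only: hypothetical vulnerabilities with made-up scores.
vulns = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "exploit_prob": 0.020},
    {"cve": "CVE-2024-0002", "cvss": 6.5, "exploit_prob": 0.610},
    {"cve": "CVE-2024-0003", "cvss": 7.2, "exploit_prob": 0.004},
]

# Triage by probability of exploitation, not by how many findings exist
# or by severity score alone.
by_risk = sorted(vulns, key=lambda v: v["exploit_prob"], reverse=True)

for v in by_risk:
    print(v["cve"], v["exploit_prob"])
```

Note that the medium-severity issue jumps to the top of the queue here: a "thousand defects" headline number says nothing about which handful of them actually drive risk.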
While not doing so in the first reference, the CISA team appears to address this later in the piece by urging software developers to
significantly reduce the number of exploitable flaws before it is introduced to the market for broad use.
All other things being equal, this is a no-brainer. Who wouldn’t want to do this? But there are some problems with this exhortation. Exploitability is not a binary concept but rather a probabilistic one. How much is “significantly?” What does “broad use” mean?
I raise these questions because existing federal standards do not inspire a lot of faith. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 makes some odd recommendations when it comes to secure development and the FedRAMP cloud security standard bakes in some downright counterproductive requirements.
Furthermore, the CISA team doesn’t seem to acknowledge that
Making tradeoffs is necessary in all facets of life, including cybersecurity
Government can smooth the way by making clear its expectations that technology is designed and built with safety as a top priority.
Describing something as “a top priority” is classic managerial doublespeak. An organization cannot have more than one top priority (i.e. priority #1). It can have a #2 priority and a #3 priority, but these must necessarily rank below #1. And if leaders are vague, “[a]ctions at the sharp end resolve all ambiguity,” sometimes sub-optimally.
So which is it? Is cybersecurity #1? That wouldn’t make sense because then we would just shut down all of the computers, disconnect all networks, and call it a day. And I’m pretty sure very few people want that.
So if cyber safety or security isn’t #1, how do we make tradeoffs between these and other competing concerns? The article doesn’t shed much light on this and appears to suggest that no such tradeoffs are required.
There is an especially strange paragraph that leads me to this interpretation:
This is not the first time that American industry has made safety a secondary concern. For the first half of the twentieth century, conventional wisdom held that automotive accidents were the fault of bad drivers.
Do drivers not have responsibility for the safe operation of their vehicles? Even the National Highway Traffic Safety Administration (another government agency) attributes ~94% of accidents to driver error! It would appear this conventional wisdom is accurate according to the feds themselves.
And, if we are being honest, millions of Americans show every day that safety is a secondary concern to that of arriving at their destination in a timely manner. They demonstrate this by getting into their cars and driving somewhere despite the risk of being injured or killed on the highway.
Easterly and Goldstein continue:
Similarly, today, if a company suffers a cybersecurity breach, the company itself is blamed if it did not patch a known vulnerability. Such an approach neglects to question why the vendor that produced the technology needed to issue so many patches in the first place or why failure to implement a patch allowed a damaging breach to occur.
So should companies with knowledge of exploitable vulnerabilities in their networks not be accountable for patching them? To imply otherwise suggests companies should expect an unattainable level of technological perfection from their vendors. Also, how many is “so many patches?” One per month? One per year? If the state-of-the-art is not acceptable, what is?
Later, the CISA team writes:
No one would think of purchasing a car that did not have seatbelts or airbags, nor would anyone pay extra to have these basic security elements installed.
No one thinks of purchasing a new car without seatbelts or airbags because such things do not exist…due to government regulation. But I’m sure that people would buy such cars if they did exist (just like some people ride motorcycles without helmets where it is legal). And auto manufacturers raised their prices across the board to compensate for enhanced safety requirements (i.e. making consumers “pay extra”).
Making the connection to the digital world, the authors write:
software sellers must include in their basic pricing features that secure a user’s identity, gather evidence of potential intrusions, and control access to sensitive information rather than as added expensive options.
What is “basic?” What is “secur[ing] a user’s identity?” Should every product have a forensic logging capability? Does this include a consumer app for sharing dog pictures? Should the consumer have no choice in whether to buy cheaper but less secure technologies?
And how would mandating such requirements not result in an increase in prices across the board for software, as it did for cars? I know CISA has focused specifically on the issue of multi-factor authentication as an example of something that should be “free.” But companies spend money to develop and maintain these capabilities. Should they not be allowed to charge for them? If so, then either these enterprises will a) go out of business (unlikely) or b) just bake in price increases elsewhere (probable).
Finally, Easterly and Goldstein suggest that
Flaws often wind up in technology products because creators rush to release them to customers and are often more focused on feature expansion than security.
Assume this is true. Is it necessarily bad? What if you are developing software for an insulin pump that will allow patients to live longer and more fulfilling lives? What about cardiac pacemakers? Doesn’t it make sense to consider how long the software will take to develop or how much it will cost and weigh that against the security risk?
Obviously we don’t want medical devices getting hijacked by attackers. But how many lives might be saved or improved by moving things to market quickly?
Only a sober, quantitative analysis can tell you when deploying software is worth the potential security risk, and when it is not. And CISA does not suggest any formula for such an evaluation.
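As a minimal sketch of what such a quantitative evaluation could look like, consider the expected-value comparison below. Every figure is a hypothetical assumption for illustration, not data from the article or from CISA:

```python
# Back-of-the-envelope tradeoff: ship now vs. delay for more hardening.
# All figures are hypothetical assumptions.

breach_prob_now = 0.10        # annual breach probability if shipped today
breach_prob_hardened = 0.04   # after three more months of security work
breach_cost = 5_000_000       # expected loss per breach (fines, response)
delay_cost = 1_200_000        # revenue / patient benefit lost by waiting

expected_loss_now = breach_prob_now * breach_cost            # ~500,000
expected_loss_hardened = breach_prob_hardened * breach_cost  # ~200,000

# How much annual expected loss the delay would actually remove.
risk_reduction = expected_loss_now - expected_loss_hardened  # ~300,000

# If the delay costs more than the risk it removes, shipping now wins.
ship_now = delay_cost > risk_reduction
print(ship_now)
```

Under these particular assumptions, the hardening delay destroys more value than it protects; change the inputs and the answer flips. The point is that the decision falls out of the numbers, not out of a slogan like “safety as a top priority.”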
Additionally, the tradeoff problem becomes even more acute when you consider how
Federal government requirements impede innovation and security itself
The CISA team writes that the
U.S. government can start by defining specific attributes of technology products that are secure by default and secure by design.
Unfortunately, many government standards are confusing, nonsensical, and sometimes downright antithetical to security.
In addition to the FedRAMP and NIST SP 800-53 examples, there are the muddled attestation requirements for government contractors, which have been broadly panned. These are especially unfortunate considering Easterly and Goldstein boast that the “Biden administration has taken important steps…in establishing software security requirements for federal contractors.”
They also note that
Such requirements may pose challenges for smaller technology companies and new entrants to the market. To ensure that innovative and disruptive companies can thrive in an environment where heightened security investment is the norm, development of stronger security practices must focus on outcomes rather than on prescriptive, doctrinaire requirements.
Although I agree with the last point, it seems to contradict much of the rest of the article. How are startups supposed to navigate the giant morass of rules already in existence combined with any new standards the federal government mandates?
And the first point from the above paragraph is definitely true. While they may not have intended to do so, the CISA position implicitly favors incumbent players while explicitly lauding them:
Google, Amazon, and Salesforce, are…providing strong security measures by default for their customers and introducing innovative advances toward security by design.
The message I get from this and other statements is that the tech behemoths are the good guys. And they will certainly fare much better than smaller players when faced with even more regulation. Such “coronation disguised as regulation” impedes innovation and will eventually make America less competitive.
Finally, there is something the authors don’t address at all:
The government throws a lot of stones from its glass house
The CISA piece includes a lot of tough talk. Unfortunately, the federal government itself has been responsible for an array of devastating security failures like the:
2014-2015 Office of Personnel Management (OPM) breach. Due to a range of security lapses at this government agency, Chinese government-linked hackers stole the security clearance investigation files of millions of federal workers and their associates.
SolarWinds breach of 2019-2020. Despite a series of Government Accountability Office reports identifying risks in federal technology supply chains, the Departments of Justice, State, Treasury, and many others suffered data breaches by Russian government-backed hackers via a compromised third-party contractor. The CISA team might want to blame the vendor for this, but it’s hard to see how the government doesn’t have an affirmative responsibility to protect itself here.
Log4Shell exploitations of 2021-2022. In spite of agencies like the Federal Trade Commission making dire threats to “use its full legal authority to pursue companies that fail to take reasonable steps to protect consumer data from exposure as a result of Log4j,” Iranian-linked hackers breached at least one federal agency because of just such a failure.
While you can walk and chew gum at the same time, the federal government has a lot of work to do in shoring up its own networks. I’d love to see some progress here so the executive branch can lead by example.
How to do security regulation
After all of my critiques, a fair question would be: “so how would you solve the problem?”
By doing what the CISA team suggests at one point in the article: focus on outcomes.
Here is what this would look like:
Clearly identify a monetary fine for each breached social security number, medical record, or other piece of sensitive data. This would include data that supports and sustains life (the federal government values a human life at approximately $10 million). Assign accountability to the processor of that data. Make a single regulatory agency responsible for penalizing breaches involving each kind of data (probably a pipe dream, but worth wishing for!).
With these clear figures, organizations can then contractually transfer risk throughout their digital supply chains, allowing them to weigh cybersecurity risk against business reward in an objective manner. This will allow them to select more expensive and more secure software where cost-effective. But when the threat or risk doesn’t justify it, they can use cheaper options.
If it looks like the rate of breaches is accelerating without a sufficiently compensating increase in technological progress, then the regulatory agenc(ies) can increase the fines.
If the opposite is happening and the speed of advancement is declining, then agencies can lower the fines.
When individual data privacy is not at issue but data security still is, such as with trade secrets, encourage private organizations to contractually assign values to different types of data. This will create confidentiality and integrity service level agreements (SLAs) in addition to availability ones.
Avoid a federal backstop to the cyber insurance market to avoid distorting prices (such as happened with flood insurance) and let the free market value risk, given knowledge of the fines that will result from a security lapse.
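As a toy illustration of how such per-record fines would let buyers price risk objectively, consider the sketch below. The fine amount, record counts, vendor names, prices, and breach probabilities are all hypothetical:

```python
# Sketch of the outcome-based regime described above, with made-up numbers.
# A regulator sets a fine per breached record; a buyer compares vendors by
# expected annual liability plus sticker price.

FINE_PER_RECORD = 1_000   # hypothetical regulatory fine per record
records_held = 50_000

vendors = {
    # name: (annual price, estimated annual breach probability)
    "cheap_vendor": (100_000, 0.05),
    "secure_vendor": (150_000, 0.01),
}

def total_cost(price, breach_prob):
    # Expected annual cost = price + P(breach) * total fine exposure
    return price + breach_prob * FINE_PER_RECORD * records_held

costs = {name: total_cost(p, q) for name, (p, q) in vendors.items()}
best = min(costs, key=costs.get)
print(costs, best)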
The primary advantage of such a regime is that it doesn’t matter how companies achieve their goals. If they invent some game-changing new technology which makes them impermeable to attack (unlikely, but great if it happens), they avoid regulatory action. And if following NIST checklists and other federal guidance prevents them from getting breached, they are equally satisfied.
But they can meet the end goal - protecting sensitive data - in the most efficient manner available.
This is a high-level approach, and I know that crafting legislation and regulation takes a lot more detail. With that said, I hope this proposal points the conversation in the right direction.
While I am a skeptic of government regulation in general, I am not saying “don’t regulate” in the specific case of software security. There is a way to enhance software security while at the same time not unduly hampering innovation.
But the Easterly/Goldstein approach is not it.
Yet another set of frameworks, guidelines, and even binding requirements will only add red tape, benefit incumbents, and force organizations to focus more on “security theater.” And I urge the security community to oppose new legislation or regulation along these lines.