People love to talk about how compliance does not necessarily equal security.
But I’m going to focus on the reverse relationship in this article.
Because I think to a very real degree, security is often compliance.
What I mean is that stopping malicious impacts to data confidentiality, integrity, and availability often prevents your compliance posture (or lack thereof) from becoming an issue in the first place.
Much of the American regulatory and legal system focuses heavily on intent and inputs. For example, the Biden Administration is trying to build an entire software liability regime on whether or not companies or other actors took the correct (in the government’s view) actions, not whether they achieved desirable outcomes.
This is a terrible incentive structure because it motivates people to focus on performative compliance rather than concrete actions to improve security.
The U.S. government focuses its punishment on those who suffer breaches
The good news is that, in reality, U.S. regulators don’t even follow their own alleged principles! They devote a hugely disproportionate amount of attention to investigating disasters after they occur. And the proximate cause of the resulting punishment is almost exclusively the fact of a cybersecurity incident.
Here are some examples:
The Federal Trade Commission’s (FTC) epic $575 million fine against Equifax following its 2017 breach
The stated reason for the fine was “the credit reporting company’s failure to take reasonable steps to secure its network.”
NOT the mere fact that it had been breached.
This implies that if, for some reason, a whistleblower had told the FTC about the company’s security posture before the breach, it would have gotten the same fine.
But I think we can all agree this isn’t the case.
If ~147 million people didn't have their data stolen, this would be a nothingburger.
Consider how a U.S. Senate report identified several weaknesses in Equifax’s security posture post-breach, such as the fact that it:
had more than 1,000 internet-facing vulnerabilities.
Data compiled from over a million organizations by SecurityScorecard and the Cyentia Institute shows that this is the rule rather than the exception.
could not follow its own patching policy.
Because many organizations still leverage the obsolete Common Vulnerability Scoring System (CVSS) when building their patching policies, this is also a common occurrence.
If the FTC applied a similar standard to every organization handling customer data - even those without a confirmed breach - we would have seen an enormous wave of enforcement actions over the past decade.
But we haven’t.
Because the breach is the trigger, not the security posture.
The U.S. Securities and Exchange Commission (SEC) 2023 civil suit against SolarWinds and its CISO
The SEC’s claims centered on a lack of disclosure regarding cybersecurity gaps, specifically that the defendants, “acting knowingly, recklessly, or negligently...obtained money or property by means of untrue statements of a material fact or by omitting to state a material fact necessary.”
Again, the issue here was NOT the mere fact of the breach.
But is it conceivable that we would be talking about SEC action against SolarWinds and its CISO without the 2019-2020 breach?
I don’t think so.
Lots of publicly traded companies have the same issues the SEC alleges SolarWinds did:
Virtual private networks (VPNs) allowing access from unmanaged devices.
Security teams who feel overwhelmed and discuss it internally.
A huge backlog of known vulnerabilities.
But their CISOs aren’t getting sued in their personal capacity.
That’s because their companies haven’t had a confirmed breach.
The U.S. Department of Justice (DOJ) criminal case against the former Uber CISO following a 2016 data breach
Again, the charges here focused on his alleged cover-up of the breach, NOT the breach itself.
So I suppose the government is saying that if there had been no breach but he had done the same things (e.g. allegedly misleading the FTC, which was conducting a separate investigation), he still would have been convicted.
Again, does anyone really believe that is the case?
It’s hard to see how there would be anything to mislead about without the breach.
Notable exceptions to my thesis
We live in a world of gray, rather than black and white, so a lack of cyber incidents won’t necessarily protect you completely from regulatory action.
European Union (EU) General Data Protection Regulation (GDPR) enforcement actions
The EU is notable for levying enormous fines (mostly against American tech companies) without a breach by a malicious third party. The top three examples are:
Meta, fined €1.2 billion for allegedly transferring EU personal data to the United States without appropriate safeguards.
Amazon, fined €746 million for allegedly conducting targeted advertising without proper consent.
Meta again, fined €405 million for alleged unlawful processing of children’s personal data. It might be reasonable to describe this as a breach, though, because the focus of the investigation was the fact that Instagram accounts for children aged 13-17 automatically displayed their contact information publicly.
Thus, specifically in the context of the GDPR, compliance with the letter of the law can be more important than whether or not you get breached. Having read the GDPR myself, though, the “letter of the law” leaves quite a bit open to interpretation.
Federal Deposit Insurance Corporation (FDIC) and other informal regulatory actions
During a discussion on a related LinkedIn post, Eric Stoever made the point that there are “informal actions” regulators such as the FDIC can take. These are not public and could conceivably include actions related to cybersecurity weaknesses (but not following an actual incident).
With that said, these are not legally enforceable and seem to me like the institutional equivalent of a cop giving you a warning instead of a speeding ticket.
While it could certainly put you on the “radar” of a regulator, an informal action doesn’t deliver any financial or legal pain. Any action that does is public information, and my analysis of the record shows that the overwhelming majority of these follow a confirmed breach or other cyber incident.
The Securities and Exchange Commission (SEC) whistleblower program
William Wilsey made another interesting point that the SEC’s whistleblower program could potentially motivate an employee or other stakeholder to highlight cybersecurity failures (specifically, not disclosing them to investors) to the agency, even in the absence of the breach.
Because whistleblowers are eligible to receive 10-30% of the money collected by the SEC in a successful regulatory action, there are bound to be “creative” interpretations of the whistleblowing rules.
Ironically, it seems the first cybersecurity-related whistleblowing attempt came from the ransomware group AlphV, AKA BlackCat. After breaching the company MeridianLink’s networks, it filed a complaint with the SEC. The group claimed (mistakenly, as the rule had not yet taken effect) that MeridianLink had “failed to file the requisite disclosure under Item 1.05 of Form 8-K within the stipulated four business days.”
Hilariously, partners at the law firm Debevoise & Plimpton could not definitively rule out the possibility of AlphV being eligible for a whistleblower payout, had the rule been in effect.
In any case, it will be interesting to see how things play out here. We are likely to see those with access scouring the archives of SEC-regulated companies for evidence of undisclosed material cybersecurity weaknesses.
Improving your cyber defenses reduces the likelihood of regulatory action
Security, legal, and compliance teams spend a lot of time worrying about punitive action by regulators.
And while the government’s approach explains a lot of this behavior, these teams sometimes miss the point entirely when designing their security programs. The fear of civil or even criminal action leads them to focus on demonstrating how they did things “the right way” rather than on preventing a breach in the first place.
But while people hotly debate whether compliance is necessarily security, I think the evidence above lets us clearly say:
security often IS compliance!
P.S. As always, I am not a lawyer and this is not legal advice.
Related LinkedIn Posts: