Risk management - including when dealing with software security - is all about trading off costs and benefits.
Plenty of material already covers the four risk management options - avoid, transfer, mitigate, and accept - so I won't rehash them in depth here. If you aren't familiar with the concepts, check out this post.
Some things that do not receive a lot of attention, however, are the logistics and mechanics of who makes these decisions and how to record and communicate them. I promised you actionable details and I plan to deliver, starting with this post but continuing over the next few.
First and foremost is the question of who should have ultimate authority and accountability for cyber risk decisions. Different organizations have varying approaches to this problem, and in my experience, their members often accept the status quo without question or reflection.
Unfortunately, improperly structured decision-making processes and models can lead to perverse incentives and strange outcomes, so I think this question deserves exploration.
To start with, I'll lean on my military experience again because I learned some important philosophical lessons there. In the armed forces, “commanders, not [uniformed] lawyers [called judge advocates], own the military justice system.” This means military leaders - who don't necessarily have law degrees or extensive formal legal training - are ultimately accountable for deciding whether or not to prosecute those under their command for offenses against the Uniform Code of Military Justice (the section of federal law that applies only to service members). This fact has become controversial for reasons unrelated to cybersecurity, so I will avoid that rabbit hole for now, but generally, I think this paradigm is correct.
The reason for this structure is that military commanders have total responsibility for their units; everything that happens or fails to happen on their watch is their responsibility. This is not true for military lawyers. They don't get fired for losing battles or wars (though how often anyone in the military actually gets fired for these failures is debatable - another potential tangent). These lawyers could, however, be fired for providing overly permissive legal advice, which might create one of the perverse incentives I mentioned.
If military lawyers had to make the final call on prosecutions, I suspect they would generally lean toward filing charges. This might happen because lawyers are experts in...the law...not in running combat units. As a result, they would be quick to identify legal gray areas and then warn and even punish those who went anywhere near them. Furthermore, uniformed attorneys are evaluated and promoted based on their ability to provide legal analysis, not necessarily to ensure that military units succeed in combat.
As a result, Marines, soldiers, sailors, airmen (and guardians?) might hesitate during critical moments in combat, fearing that their actions would be judged harshly in retrospect by an attorney unfamiliar with the totality of the challenges they face. Commanders, on the other hand, have total responsibility for their team's success or failure. This includes ensuring that those under their command follow the law but also that they are able to accomplish their assigned mission. This forces commanders to balance all of the relevant concerns. Thus, they are rightly in charge of tough calls related to prosecutions.
Back in the world of cybersecurity, I think a similar model should apply. In organizations with explicit risk management programs, however, I have sometimes seen advisors such as lawyers - but also information security and compliance professionals - made ultimately responsible for risk decisions. Unfortunately, such a system is the wrong way to do business.
The aforementioned folks are almost always evaluated and compensated based on their ability to minimize the risk of bad things happening to the organization, which makes sense. They are almost never responsible for the company’s or business unit’s bottom line (e.g. profit) or ability to serve its customer base or society at large, though. Since a software company exists primarily to generate value, it seems odd to me that those within the organization who do not have a holistic responsibility for doing so could be in charge of making potentially momentous decisions that could impact its viability as a going concern.
I would therefore propose that those with overall responsibility for the success or failure of a given business venture - in its entirety - should be the ones answerable for cyber risk decisions.1 A company's CEO - like a military commander - is the single person ultimately accountable for everything the business does or does not do (notwithstanding the trend of having co-CEOs, which, frankly, is a crazy idea - but that discussion belongs in another post or forum). Thus, CEOs should sign off on at least a high-level risk management program or policy for their businesses.
In larger organizations, it is not feasible or desirable for the CEO to review every single decision, and thus she should delegate some authority to business line general managers or product managers. The latter two roles are examples of those with holistic responsibility for a business, or part thereof, and generally have the perspective necessary to weigh all risks facing a given organization or component.
Once this delegation occurs, can a CEO then wash her hands of decisions made by subordinates, especially if they result in a catastrophic event? Absolutely not. She remains accountable and should examine such an event to determine whether the subordinate's decision-making was appropriate given the circumstances and available information (if not, then either coach or terminate the employee) and whether the original delegation of authority was correct (if not, then reclaim the authority).
The advisors I discussed earlier - security and compliance professionals as well as attorneys - should focus on just that: advising. Their expertise is generally limited to specific types of risks: cyber attacks, lawsuits, etc. Thus, their roles should be to fill in gaps in the ultimate decision-maker’s knowledge so that he can weigh the risks stemming from these adverse outcomes against other hazards, such as the failure to achieve revenue targets, deliver key functionality to customers, or even support the operation of critical systems like power plants or nuclear weapons.
Furthermore, improperly putting the onus of risk decision-making on advisors could lead to perverse outcomes where such individuals never accept any risk. Frankly, if I were in such an advisory role - but were accountable for the ultimate decision - I probably would not approve any risk acceptance request coming my way. If my job focused narrowly on reducing a specific type of risk, it would not make sense for me to sign my name to a decision increasing that risk, especially if there were nothing for me to gain by doing so.
In practice, these types of frameworks - where advisors rather than business owners make risk decisions - usually cause the latter to pressure the former to “be a team player” and sign off on risky decisions. This pushes risk advisors to weigh things such as revenue at stake, competitive dynamics, and the relative importance of customer use cases - things usually outside their remit and expertise - in their decision-making. It can also free business leaders from psychological accountability for said decisions, allowing them to think “well, security/compliance/legal approved this, so it's not my problem or my fault if something goes wrong.”
The correct model, in my mind, is to require the appropriate advisors to provide analyses of potential strategies and courses of action, as well as the potential consequences, but not make the decisions themselves. I’ll delve into more detail in later posts, but such an analysis should include an estimate of the risk incurred (preferably in quantitative terms) as well as suggestions regarding potential compensating controls.
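To make “an estimate of the risk incurred in quantitative terms” concrete, here is a minimal sketch using the standard annualized loss expectancy (ALE) approach - one common technique an advisor might use, not one the author prescribes. All figures, names, and thresholds are hypothetical.

```python
# Hypothetical sketch: a simple annualized-loss-expectancy (ALE) estimate
# an advisor might attach to their analysis of a risk acceptance request.

def annualized_loss_expectancy(events_per_year: float, loss_per_event: float) -> float:
    """Expected yearly loss = expected number of incidents x loss per incident."""
    return events_per_year * loss_per_event

def control_is_worthwhile(ale_before: float, ale_after: float, annual_control_cost: float) -> bool:
    """A compensating control pays for itself if the risk reduction
    it buys exceeds its annual cost."""
    return (ale_before - ale_after) > annual_control_cost

# Hypothetical example: a vulnerability expected to be exploited 0.2 times
# per year at $500,000 per incident; a $40,000/year control cuts the
# expected rate to 0.05 exploitations per year.
ale_before = annualized_loss_expectancy(0.2, 500_000)   # $100,000/year
ale_after = annualized_loss_expectancy(0.05, 500_000)   # $25,000/year
print(control_is_worthwhile(ale_before, ale_after, 40_000))  # prints True
```

The point of such a sketch is not precision - the inputs are uncertain estimates - but that it gives the accountable business owner a dollar-denominated figure to weigh against revenue, customer, and mission considerations.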
But the key is to force the relevant business owner to be the ultimately accountable decision-maker. As you might expect, when these folks have to sign their names personally, they apply far more scrutiny and engage in a more thorough risk/reward analysis than if they were merely bystanders.
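One way to make that personal sign-off concrete - purely an illustrative sketch, not a prescribed schema - is a risk acceptance record that names the accountable business owner and keeps advisors in an advisory field. Every field and value below is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical risk acceptance record: the accountable owner is a named
# business leader who signs; advisors are recorded only as consulted parties.
@dataclass
class RiskAcceptance:
    risk_description: str
    estimated_annual_loss_usd: float       # from the advisor's quantitative analysis
    compensating_controls: list[str]
    accountable_owner: str                 # the business owner who signs - never an advisor
    advisors_consulted: list[str] = field(default_factory=list)
    decision_date: date = field(default_factory=date.today)

# Hypothetical usage
record = RiskAcceptance(
    risk_description="Defer patching legacy billing API until Q3",
    estimated_annual_loss_usd=25_000,
    compensating_controls=["WAF rule for known exploit path", "enhanced logging"],
    accountable_owner="GM, Billing Business Unit",
    advisors_consulted=["CISO", "General Counsel"],
)
print(record.accountable_owner)  # prints GM, Billing Business Unit
```

The design choice worth noting is that the schema has no “approver” field for security, compliance, or legal - their input lives in the analysis, while the signature belongs to the owner.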
That’s all for this post, but I am not done on this topic. I will continue analyzing this issue in my next edition, providing a “meta-framework” for running risk management decision-making.
1. There is room for two narrow and clearly defined exceptions: 1) where the CISO is the business leader in question, e.g. is making decisions about a security tool or vendor, and 2) when the cyber risk is an unknown one. As the primary cyber risk advisor, the CISO should be accountable if a risk materializes that he was unable to anticipate and warn other business units about.

Related LinkedIn posts
Great stuff.
Demands that GMs/PMs have technical literacy/competence so that they can weigh the advice of cyber advisors. This is so often lacking (even among PMs!) which is probably why some orgs default to the compliance/risk management folks holding the ultimate responsibility.
I was the quality assurance officer on the boat, responsible for managing the risk of maintenance programs. Creates all sorts of misaligned incentives as you highlighted ("be a team player") etc.