The federal government (at least part of it) confirms that it does not understand cyber risk management
The Air Force CTO proves my point.
There are two important points to make about this comment, in descending order of importance:
It goes a long way toward validating my original thesis that the federal government doesn’t understand cyber risk management.
It uses a rhetorical technique that you should ignore.
I’ll address the rhetoric first.
Invoking the injury or death of service members - Scott’s reference to sending “one of our precious men or woman [sic] home in a coffin” - is a tactic often used by those in uniform to justify their position. Mentioning such a loss, real or potential, is a common way to preempt or otherwise shut down any sort of debate on the topic at hand. The implication is that there can be no discussion about a given issue when lives are at stake.
The thing is, arguments must stand on their own merits, especially when lives are at stake. Those who rely on imagery of “Warriors’ Blood” frequently do so when their own logic is deficient and they are grasping for a way to compensate. It’s unsurprising to see Scott use this tactic, considering that such behavior is prevalent at the absolute highest levels of the military. Unfortunately, deploying this technique often goes hand-in-hand with poor risk analysis and management practices, while at the same time obfuscating those very deficiencies.
To give a military example: when I served in Afghanistan, an adjacent unit was required to use an electronic countermeasure against a non-existent threat. The details aren’t especially important, but suffice it to say that despite the device being heavy, cumbersome, and exposing Marines to direct fire from the enemy, the unit’s commander insisted on using it, with the same justification that Scott employs: lives are at stake. Unfortunately, this commander did not appear to have conducted an effective risk/reward analysis - and, if he had, he was secretly doing exactly what I criticized in my original article: conducting a preemptive CYA maneuver to immunize himself from criticism should anything bad happen in the future.
Surely, if a Marine died while using this equipment, then it wouldn’t be the commander’s fault, because the latter had done everything in his power to make sure the former was “safe.”
As several in my platoon predicted, and despite my protests against the requirement, a Marine in this adjacent unit died from accurate rifle fire while carrying this paperweight-equivalent device and moving slowly through an open field. It provided no protection to the Marine or his brethren, as it was narrowly designed to stop a specific type of attack which had essentially never occurred anywhere near the relevant area of operations. Furthermore, due to the known sniper threat, there had been consensus among those actually on the front lines that moving swiftly through open areas was critical to minimizing risk.
At no point did I become aware of any formal after-action review in which anyone even questioned the requirement to carry this device in the first place - the mandatory use of which, I strongly suspect, led to the death. The device provided “security,” and it was verboten to discuss the matter anyway, as a man had fallen.
Turning back to the rhetorical issue, I fully admit that the stakes are high in military software procurement, and that lives hang in the balance. The same is true of medical device and industrial cybersecurity, where, frankly, even more people could be impacted. Scott’s use of imagery of dead service members, however, suggests that he views the necessary calculus as somehow different - or inappropriate to even consider - for the armed forces. It’s not, and you should treat such gratuitous references as a red flag that the underlying logic is potentially faulty.
Moving to my primary point regarding risk management, Scott’s comment reveals the exact problem in government thinking that I am trying to illustrate…and correct. Since he is the Chief Technology Officer (CTO) of the Air Force and appears to be writing in an official capacity (which I am not), I think it’s fair to attribute his views to at least a chunk of the military, if not the federal government writ large.
Although he implies that this is my position (it isn’t), risk/reward analysis is not limited to market share or profit. In fact, one of the key examples from my original post lays out a series of parallel courses of action, in each of which lives might be lost. What does one do in such a situation? There is no easy or clear answer, because either path could lead to innocents dying.
The only reasonable response is to make as informed a decision as is possible within the given time constraints, weighing the potential benefits and drawbacks of each option and selecting the one that causes the least harm.
Scott seems to acknowledge this reality by noting that “a missed calculation on our part to secure our nation’s data could” get people killed. That is certainly true, but then he goes on to say that the potential loss of life is “a price I’m not willing to negotiate on.”
To me, it appears that he contradicts himself in adjacent sentences. The first points out that the cybersecurity tradeoffs the government needs to make are of grave importance - with which I agree - but the second rejects the need to make such choices at all. “I’m not willing to negotiate” provides no information of decision-making value. This position also implicitly refuses to acknowledge resource limitations, and it is a classic example of a double bind: it gives subordinate leaders no guidance except that they will be punished mercilessly if anything goes wrong. It also implies that any missed opportunities - including those to save lives by economizing in certain areas - are acceptable losses.
Consider some realistic situations.
What if buying cheaper - but less secure - software to run dining facilities would free up money for troops to have more effective weapons or conduct more live-fire training? Is negotiation inappropriate in this instance?
Conversely, if America went to war tomorrow, should we ground all F-35 fighter aircraft just because their software has known security flaws in it? (Side note: despite the CTO of the Air Force saying that he is “not willing to negotiate” on the point, his service clearly is, having decided to keep in operation an airplane with known vulnerabilities in its onboard systems!)
To ask these questions is to answer them.
Critics might say that these are extreme examples which have obvious correct solutions - accepting some degree of cybersecurity risk - a point with which I would agree. These obvious solutions would, of course, also violate Scott’s non-negotiation position or the blanket dictate that “security should be table stakes,” my critique of which formed the basis of my first article.
Systems that protect the identities of intelligence sources, allow for augmented reality-enabled field medical care, and control artillery fire should of course receive a high level of scrutiny from a cybersecurity perspective (how high? Take a look at this post for some thoughts). But unless you assume an unlimited defense budget (it’s big, but not limitless), then at some point you need to start making tough decisions. Although I know that chow halls are important to the Air Force, this is probably a good place to start making cuts. In between the extremes is where things start getting gray, but quantitative risk analysis can light the way and help to identify an optimal course of action.
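To make the idea of quantitative risk analysis concrete, here is a minimal sketch of one common approach: comparing options by annualized loss expectancy (ALE), i.e., expected incidents per year times expected loss per incident, plus the option’s own cost. All figures, vendor names, and probabilities below are hypothetical and purely illustrative - not real procurement data.

```python
# A minimal sketch of quantitative risk comparison using annualized
# loss expectancy (ALE). All numbers are hypothetical, for illustration.

def annualized_loss_expectancy(annual_rate_of_occurrence: float,
                               single_loss_expectancy: float) -> float:
    """ALE = expected incidents per year * expected loss per incident."""
    return annual_rate_of_occurrence * single_loss_expectancy

# Hypothetical options for a low-criticality system (say, a dining
# facility scheduler): (name, annual cost, breaches/yr, loss per breach)
options = [
    ("hardened vendor",  500_000, 0.02, 2_000_000),
    ("commodity vendor", 100_000, 0.10, 2_000_000),
]

for name, cost, aro, sle in options:
    total = cost + annualized_loss_expectancy(aro, sle)
    print(f"{name}: expected annual cost = ${total:,.0f}")

# Here the cheaper option's expected total ($300,000) beats the hardened
# one's ($540,000), freeing budget for higher-impact spending elsewhere.
```

Under these toy assumptions, the “less secure” choice is the rational one for this system, even though the reverse would hold for a high-criticality system with a far larger loss per breach. The point is that the tradeoff is calculable, not unspeakable.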
Until the government figures out risk management, though, it is going to continue making sub-optimal and opaque trade-offs. Taxpayers, federal civilian employees, and members of the military will continue paying the price.
As I mentioned in my last note, I stand ready to assist.