“The hard truth is that it’s not about ‘if’, it’s about ‘when’ you get breached.”
‘“likelihood” == 1.0. Seriously. We say, “it's not if but when...”. The simple fact is that the likelihood of an incident in today's day and age is 1.0.’
“It is not a question of if, but when you'll be attacked,”
“There are only two types of companies: those that know they’ve been compromised, and those that don’t know.”
The above quotations are variations on a common theme in cybersecurity: that a successful malicious attack against your organization is basically guaranteed.
If you extend the timeline sufficiently, then of course this will be true. And even over the course of one’s time at a given company, the chance of experiencing a breach is non-trivial.
With that said, I don’t think these types of statements are especially useful or information-rich. This is for three main reasons:
1. Organizations and individuals have sub-infinite planning horizons, and rationally so;
2. Such statements ignore the time value of money; and
3. The impact of different types of successful attacks varies wildly - from trivial to catastrophic - as does the relative frequency of such events.
1. Planning horizons matter
Since the time we have on earth is limited, how we spend that time is very important. Hopefully this is obvious, but it’s generally better to be the victim of fewer data breaches rather than more, all other things being equal.
To give a simple example: assuming you live 100 years, reducing the frequency of breaches involving your data from once every 20 years to once every 100 years (or even once every 40 years) means a major improvement in your quality of life.
This same logic is applicable to organizational timelines as well. If you can reduce the frequency with which you need to conduct forensic investigations, notify customers, and inform regulators about malicious cyber events, you can apply resources to other, higher value-generating projects over your organization’s lifetime.
You can only make these tradeoff decisions if you have some idea of how frequently a given type of event will occur.
Thus, always assuming that the likelihood of a breach is 1.0 over any given period will paralyze you with indecision and make it impossible to accomplish any productive endeavors.
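To make the planning-horizon point concrete, here is a minimal sketch (all frequencies and the 100-year horizon are assumptions invented for illustration) of how breach frequency translates into expected breach count over a finite horizon:

```python
# Hypothetical illustration: expected number of breaches over a planning
# horizon, given different annual breach frequencies. This is a simple
# rate-times-time estimate; all numbers are made up for the example.

def expected_breaches(annual_frequency: float, horizon_years: float) -> float:
    """Expected breach count over the horizon = rate * time."""
    return annual_frequency * horizon_years

horizon = 100  # e.g., roughly a human lifetime
for label, freq in [("once every 20 years", 1 / 20),
                    ("once every 40 years", 1 / 40),
                    ("once every 100 years", 1 / 100)]:
    print(f"{label}: ~{expected_breaches(freq, horizon):.1f} breaches expected")
```

Going from the first line of output (~5 expected breaches) to the last (~1) is exactly the kind of tradeoff you cannot even describe if every likelihood is pinned at 1.0.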
2. Time is money and money is time
This is related to #1 but distinct from it.
When a breach happens is also important. Although the sayings I listed at the beginning use the word “when,” they do so in a way that implies inevitability. I grant that over an infinite timeline a successful attack is guaranteed, but maintain that knowing this isn’t especially useful.
What is useful, however, is being able to delay such an attack.
Because productively deployed capital compounds over time, and because a successful cyberattack reduces the amount of capital available to compound, you would generally prefer to suffer such an event later rather than sooner (assuming you must suffer one at all).
Thus, any action you take to defer a breach has value in and of itself.
And to make intelligent investment decisions regarding deployment of capital, you need to have a general idea of how soon (or late) a breach will occur.
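A minimal sketch of the compounding argument, with the growth rate, loss size, and horizon all invented for illustration: the same dollar loss hurts more when it happens early, because the destroyed capital would otherwise have kept compounding.

```python
# Hypothetical sketch of the time-value argument: capital compounds at a
# fixed annual rate, and a breach destroys a fixed amount of capital in the
# year it occurs. All figures (8% growth, $10M loss, 20-year horizon) are
# assumptions for the example, not real data.

def terminal_capital(initial: float, rate: float, years: int,
                     breach_year: int, breach_loss: float) -> float:
    """Capital remaining at the end of the horizon, given one breach."""
    capital = initial
    for year in range(1, years + 1):
        capital *= (1 + rate)        # capital compounds each year
        if year == breach_year:
            capital -= breach_loss   # breach destroys capital that year
    return capital

# Same $10M loss, suffered early vs. late, over 20 years at 8% growth
early = terminal_capital(100e6, 0.08, 20, breach_year=2, breach_loss=10e6)
late = terminal_capital(100e6, 0.08, 20, breach_year=18, breach_loss=10e6)
print(f"Breach in year 2:  ${early:,.0f}")
print(f"Breach in year 18: ${late:,.0f}")
```

The later breach leaves more terminal capital, because the lost $10M had fewer remaining years in which it would have compounded. This is why deferring a breach has value even if you cannot prevent it outright.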
3. Your mileage per breach may vary
Points #1 and #2 are valid even assuming that every event has the same impact on your organization (and thus represents the same risk).
But we know this isn’t the case - not even close.
Some attacks cause basically no damage and some cause $100 billion worth of damage.
If you assume ‘“likelihood” == 1.0’ for every type of event over any period of time, then you would likely over-index greatly on protecting against the higher-impact ones.
This is likely to cause you to focus more on preventing mega-breaches rather than more frequent but less severe incidents. This may or may not be the right call, but there is no way of knowing this unless you do a quantitative risk analysis.
And such an analysis requires establishing the probability of a given event occurring over a set period of time.
Additionally, if you make the wrong call, you risk suffering death by a thousand cuts while striving to protect against “the big one.”
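A toy version of the quantitative analysis described above, comparing frequent-but-small incidents against a rare mega-breach via annualized expected loss (frequency × impact). Every scenario name, frequency, and dollar figure here is an assumption invented for the sketch:

```python
# Hypothetical expected-loss comparison across event types. Frequencies and
# impacts are made-up illustrative numbers, not industry statistics.

scenarios = {
    "phishing-driven fraud": {"annual_frequency": 4.0,  "impact": 50_000},
    "ransomware outage":     {"annual_frequency": 0.2,  "impact": 2_000_000},
    "mega-breach":           {"annual_frequency": 0.01, "impact": 100_000_000},
}

for name, s in scenarios.items():
    eal = s["annual_frequency"] * s["impact"]  # expected annual loss
    print(f"{name}: expected annual loss ~ ${eal:,.0f}")
```

With these particular made-up numbers the rare mega-breach still dominates on expected loss ($1M/yr vs. $200k and $400k) - but that conclusion can only be reached by estimating frequencies, never by assuming likelihood == 1.0 for everything.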
Conclusion
It is important to note that I am NOT saying organizations should dispense with incident response plans or ignore defense-in-depth measures. Quite the contrary. To develop effective second- and n-level controls, it is absolutely vital to understand the likelihood of an attacker breaching your first layer of security, the consequences of that breach, and what the attacker’s subsequent actions might be.
Such detailed analysis requires more than blanket statements.
The “not if, but when” slogan might have represented effective cybersecurity marketing copy at one point, but unfortunately it does not help when making risk management calculations.
Furthermore, with the market being thoroughly saturated with fear, uncertainty, and doubt (FUD)-based messaging, the phrase is losing its luster even from the perspective of its original purpose.
I propose the security community retire it.