Phil Venables

Are Security Analogies Counterproductive?

Do analogies actually help us, or do they set back our ability to drive change? On the face of it, they are a useful explanatory tool, as are metaphors and perhaps even similes. But at what point does their use become counterproductive, leading people to incorrect solutions or approaches?


Despite their widespread use, analogies can be an inefficient and unwarranted layer of explanation, especially now, when we should have a higher expectation that people understand basic cybersecurity principles, if not in their personal lives then almost certainly in the context of decisions in modern digital businesses.


There are a number of often repeated analogies that I've found immediately appealing but that can lead to some dangerous assumptions or courses of action, for example:


  • Malware execution for filtering purposes compared to a “detonation chamber”. This seems like a neat mental model, except that munitions don’t typically try to detect that they’re in a detonation chamber and fool the process, and actual detonation chambers don’t make a copy of the bomb, send it to the target while queuing the copy for detonation, and then, when it blows up, desperately try to also stop it blowing up in the target's face (desktop). Yes, I know not all malware filtering systems work exactly like this.

  • Cybersecurity is like defending your house. You have doors, windows, multiple entry paths, valuables to steal, and the possibility of alarms, detection and interdiction. Ok, great, except that burglars rarely enter your abode silently and take copies of things while leaving your environment otherwise intact. Similarly, they don’t come in and padlock everything and then later ask for payment to unlock things so you can use the bathroom.

  • Micro-segmentation is like office physical security. You have your building entrance, floor physical security and maybe even specific office space security. Which is all fine, except offices are rarely transient or dynamically set up, don’t have to pass millions of authorized visitors at the same time, and don’t try to enforce access based on where you just came from and what your intent is once you get inside the office.


In each of these cases, the analogy is not worthless if it’s used as a basic introductory explanation. It is potentially dangerous, though, because it introduces a mental model that the actual implementations don’t live up to. This distracts from the need for additional controls to mitigate the actual risk at hand vs. the implied risk in the analogy.


We need to be careful with analogies and bias away from general analogies as much as possible - unless they really do work. I’m not guilt-free here, although I do think the cloud digital immune system analogy that I use a lot is quite apt - especially if you add the color that what we’re really talking about is an immune system supported by human intervention (i.e. health intelligence, research, vaccines, treatments and other prophylactics).


So, rather than immediately leaping to analogies, we should first consider the use of other explanatory devices such as the following, in order of preference:

  1. Actually Explain. Talk about controls in plain language. For example, rather than constructing some complicated analogy for binary runtime authorization, it might just be easier and clearer to say that we want to make sure our IT systems only run authentic software so that we reduce the risk of illicit activity (see the sketch after this list). I’d argue that’s a pretty simple concept even for non-IT people. Or, that having a physical device that can't be cloned to prove who you are is better than a password that is undetectably copyable and prone to theft.

  2. Narratives. Use narrative, scenarios or other means of storytelling to capture the imagination and drive understanding and outcomes. In other words, tell a story, or show an actual or constructed incident analysis with a before-and-after view of the controls and the consequences of having no control.

  3. Context Specificity. If you do use analogies, then use context-specific analogies for a particular industry or situation. For example, if you’re in finance working with finance business leaders and employees then local terms might be used, e.g. risk spreads to connote degrees of uncertainty or, say, value at risk to denote probabilistic assessments of expected losses in certain scenarios. In each industry there will be other useful context-specific analogies that will not only connote more seriousness and professionalism but also have less scope for misunderstanding. Again, the danger of analogies is not the conveyance of an appreciation for a topic but rather the illusion of direct comparison leading to the wrong or insufficient risks being mitigated.

  4. Analogy. As a last resort, when something is really so subtle, and the population you need to explain it to is such that 1, 2 and 3 can’t work, then use an analogy. But be very careful what you pick and be on guard for unintended consequences. In such cases analogies can be used to improve understanding (although many I see are really designed for the sole purpose of making the author look clever or to deliberately confuse the audience). We should also remember that the two use cases of analogy are (a) to set the stage for a correct decision to change some situation and (b) to help in understanding so people will cooperate with controls in a situation. Each might require different analogies, or more likely an explanation for one and an analogy for the other. We should also be careful, in our global organizations, about how people interpret analogies across cultural and other geographic disparities. Many of our favorite analogies, metaphors, similes or idioms are not just language-specific but also culturally specific. For example, I’ve lived in the US for 22 years and I’m still baffled by most local sports analogies, although I have developed a bad habit of using the phrase “to throw the flag on the field” to make the point of what should happen at certain points of escalation. This is another reason to use context-specific analogies that transcend cultures.
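
To make point 1 concrete, here is a minimal sketch of what “only run authentic software” can mean in practice: an enforcement point computes a binary’s cryptographic digest and refuses to run anything that isn’t on an allowlist of known-good builds. This is illustrative only; the digest, names and policy here are assumptions, and real systems typically rely on code signing and OS-level policy rather than a hand-rolled check.

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for software we have verified as
# authentic (e.g., produced and signed by our own build pipeline).
TRUSTED_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
}

def is_authorized(executable_path: str) -> bool:
    """Return True only if the binary's digest matches a known-good build."""
    with open(executable_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in TRUSTED_DIGESTS

# An enforcement point would call this before allowing execution:
# if not is_authorized("/usr/local/bin/tool"):
#     raise PermissionError("unrecognized binary; execution denied")
```

The point of the sketch is the plain-language explanation it supports: the system runs only software we recognize, nothing else.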


However, at some point we have to expect more of people. In expecting more I don't mean people should understand our jargon, which we should minimize or eliminate in as many non-domain-specific conversations as we can, but rather that they should understand basic concepts of technology. This is especially true as technology has become the foundation of our digital lives and businesses.


Let's use an analogy (unashamedly, given what I've just said): it’d be like never expecting business leaders to understand basic accounting terms, with the CFO always explaining the general ledger and the importance of its controls as being like two abacuses, where you decrement one as you increment the other to track the amount of money you have. At some point the analogies are just doing too much work, and that's when you know you should work harder at explanation.


Also, in expecting more of people, they might just surprise us with their abilities and reward us with their deeper understanding, cooperation and a drive to improve security. One example that sticks in my mind is Feynman's explanation of the root cause of the O-ring issue in the Challenger disaster. He didn't use an analogy, but rather a very clear explanation with an even clearer demonstration.


We also need to remember that at a certain threshold the topic we seek to analogize itself becomes the foundation of a future analogy. For example, we use healthcare as an occasional cyber reference point, but at what point did healthcare need no explanation, as opposed to being explained as something else? Did someone once say that healthcare is important because you should look after yourself like you look after your horse and chariot?


Another one is nuclear deterrence as an analogy for cyber deterrence. This is initially appealing but simply doesn’t work on many levels, not least because of the notion of mutually assured destruction. Besides, if it did work, then why do we see so many nation-state cyber attacks and no (hopefully, ever again) nuclear attacks?


Interestingly, though, there are some analogies which I think have powerful connotations we underutilize, such as technical debt. People intuitively grasp that the failure to maintain systems, or to make appropriate architectural decisions, leads to a “debt”, in the sense that it needs to be paid off or written off at some point in the future. But debt also implies the need to service that debt with interest; it reduces your capacity to take on more debt, and reduces your agility in the face of opportunity. The analogy also points to some interesting measures: if you’re in too much debt then your credit rating (your ability to take on more debt) should be diminished. Such thinking could inspire some different types of enterprise controls, in the same way SRE error budgets act to improve system reliability within a negative feedback loop.
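
As a sketch of what such a control could look like, here is a hypothetical “debt budget” modeled on SRE error budgets: a team can take on technical debt up to a set capacity, and once the budget is exhausted, new debt (and the shortcut work that creates it) is gated until remediation pays the balance down. The class, units and thresholds are all illustrative assumptions, not an established mechanism.

```python
from dataclasses import dataclass

@dataclass
class DebtBudget:
    capacity: float          # maximum debt the team may carry (e.g., remediation-days)
    outstanding: float = 0.0

    def borrow(self, amount: float) -> bool:
        """Take on new debt (a shortcut, a deferred upgrade) if the budget allows."""
        if self.outstanding + amount > self.capacity:
            return False     # "credit rating" exhausted: no new debt permitted
        self.outstanding += amount
        return True

    def pay_down(self, amount: float) -> None:
        """Remediation work reduces the outstanding balance."""
        self.outstanding = max(0.0, self.outstanding - amount)

budget = DebtBudget(capacity=30.0)   # e.g., 30 remediation-days of headroom
assert budget.borrow(20.0)           # shortcut accepted, debt recorded
assert not budget.borrow(15.0)       # over budget: the shortcut is refused
budget.pay_down(10.0)                # paying the debt down restores capacity
```

Like an error budget, this creates the negative feedback loop described above: accumulating debt automatically reduces your capacity to take on more.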

Bottom line: analogies can be useful explanatory tools, but they shouldn’t be our only tool. Many analogies can be dangerous if they’re ill-considered, such that they create an incorrect mental model of risks and mitigations - or a feeling of familiarity when we actually need deeper understanding. So we should use explanation, narrative, and context-specific analogies in preference to general analogies that collapse under scrutiny. We should have a stretch goal that more of our work is accepted and understood and needs no analogy. We will know we’ve won when other disciplines use cyber as the analogy, not the other way round. I look forward to the day, for example, when I hear that constraining the possible unethical behavior of an AI is like cybersecurity anomaly detection and response.
