Phil Venables

The Most Important Mental Models for CISOs - Simple Steps for Outsize Effects

There are many problem-solving techniques across many fields, often represented as mental models or behavioral shortcuts. Here are the ones I have found most useful over the years.


1. 80 / 20 Rule (a.k.a. The Pareto Principle)


Once you see this pattern you see it everywhere, whether it’s the 20% of customers that generate 80% of your returns or the 20% of effort that contributes 80% of the outcomes. Cybersecurity is no different. Investing 20% of time and energy in platforms that drive 80% of outcomes is common. Parsing your risk ledger for the 20% of issues that, if resolved, would reduce 80% of your risk also works consistently.
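The risk-ledger version of this can be sketched in a few lines: sort issues by quantified risk reduction and take the smallest set that covers roughly 80% of the total. The issue names and risk scores below are entirely hypothetical, for illustration only.

```python
# Illustrative Pareto analysis of a risk ledger: find the smallest set of
# issues whose resolution covers ~80% of the total quantified risk.
# Issue names and risk scores are hypothetical.

def pareto_cut(ledger, target=0.80):
    """Return the issues that together account for `target` of total risk."""
    total = sum(risk for _, risk in ledger)
    chosen, covered = [], 0.0
    for issue, risk in sorted(ledger, key=lambda item: item[1], reverse=True):
        chosen.append(issue)
        covered += risk
        if covered >= target * total:
            break
    return chosen

ledger = [
    ("unpatched internet-facing servers", 40),
    ("shared admin credentials", 25),
    ("flat internal network", 18),
    ("stale contractor accounts", 7),
    ("missing desktop hardening", 5),
    ("legacy fax gateway", 3),
    ("unused test VLAN", 2),
]

# The few issues carrying ~80% of the risk.
print(pareto_cut(ledger))
```

In practice the hard part is scoring the ledger consistently, but even rough scores usually make the top handful of issues obvious.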


Once you look for 80/20s you also conclude that prioritizing work for your most critical assets needs to also cover the most connected assets, since driving change on those can quickly ripple through the environment. I remember, when introducing a single sign-on (SSO) system, we didn’t implement it first on the most critical systems; rather, we implemented it on the small number of systems used by the most people. Of course, this enabled a quick user-experience improvement, but it also meant that the end-user community pushed the development teams to implement it much faster for all the other, less widely used systems. Targeting the 20% of systems used by over 80% of the end-user population pushed the demand for SSO past its tipping point. In the end we needed no program or project to complete the other 80% of systems - demand pull from end users took care of that. A great introduction to 80/20 is Richard Koch’s book.


2. Force Field Analysis

From the social sciences (Kurt Lewin), Force Field Analysis looks at a situation from both sides: what is working to change the situation and what is resisting the change.



Of course, in stasis the forces counterbalance each other. When facing such cases, look at both sides. The most effective means of driving change may not be to pile more onto the driving side (people, budget, dictates, policies, tools, etc.) but rather to remove a few of the countervailing forces (individuals resisting, insufficient migration tools, organizational misalignment, etc.). Then sit back and watch the situation change.
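The mechanics can be sketched as a simple tally: driving forces push change forward, restraining forces hold it back, and the situation moves only when the net is positive. The forces and their weights here are hypothetical.

```python
# A toy force-field tally. Driving forces push change forward, restraining
# forces hold it back; nothing moves while they balance. Force names and
# weights are hypothetical.

driving = {"budget": 3, "executive mandate": 4, "new tooling": 2}
restraining = {"manager approval step": 4, "poor migration tools": 3, "team misalignment": 2}

def net_force(driving, restraining):
    """Net pressure for change: positive means the situation moves."""
    return sum(driving.values()) - sum(restraining.values())

print(net_force(driving, restraining))  # 0: stasis, the forces balance

# Rather than piling on another driving force, remove a restraining one.
del restraining["manager approval step"]
print(net_force(driving, restraining))  # 4: the situation starts to move
```

The arithmetic is trivial; the useful discipline is enumerating the restraining side at all, since those forces are usually cheaper to remove than driving forces are to add.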


I’ve seen this in many situations as well. I remember encouraging people to use a new tool that would improve how e-mail was secured. We were struggling to get uptake, despite education, encouragement and in some cases mandates. It turns out (so obvious in hindsight) we’d made it a provisioned service that needed manager approval. There was no real reason for this other than the bias that whatever is provisioned needs approval. We took that approval step away and adoption happened quickly. Incidentally, we then used that as a catalyst to look at all the other things for which approval was required but always granted, and removed that step for those too.

3. Inverting the Problem

Related to Force Field Analysis is the practice of inverting problems. The prime example is to focus on avoiding mistakes, not just achieving success. In security, this means cultivating an obsession with a pervasively deployed baseline of controls. It might not mitigate all threats, but it does save you from the ones that, if successful, would make you look truly stupid. Another application of inversion is the pre-mortem: think to the end of the project and imagine it went terribly wrong. Why did it go wrong? If there are many reasons, apply the 80/20 rule, take that small number of items, and focus on making sure they go well. This inversion is a lot easier than agonizing over a forward-looking, exhaustive project risk register.

4. The 5 Whys

When analyzing an issue, an incident, or indeed anything, keep asking “why?” until you get to the core of the problem. For example:


  • Why was the server exploited?  Because it wasn’t patched.

  • Why wasn’t it patched? Because it was end of life and there were no patches available.

  • Why wasn’t it upgraded?  Because the application software would need to be rewritten to use a more up-to-date operating system.

  • Why wasn’t the application upgraded? Because all the budget is always spent on new features instead.

  • Why don’t we have a business sponsored preventative maintenance budget? Ah, good question, we should - let’s start with 5% and see how we go.

5. Reframe the Problem

This is often an exercise in lateral thinking. In simple terms it is going after the root cause of a root cause, which can come from the 5 Whys or similar exercises. For example, if your organization (or teams within it) is slow to fix discovered vulnerabilities, that is usually not for lack of desire. It is often due to built-up nervousness about making any change at all, driven by outdated software development practices and tooling. The reframed problem is to improve the continuous integration/deployment process so that people become comfortable with any change. This reframed problem also has the adjacent benefit of improving productivity, agility and meeting business goals. The Phoenix Project and The Unicorn Project explore this particular example at length.

6. Invariants

Be very curious about the things that don’t change in your environment. Seek to understand why, and whether that can be used to your advantage. There’s a great story about Jeff Bezos' observation about the importance of focusing on the things that don’t change. This got me wondering: what are the similar invariants for security, and what are their implications? These are even more specific than the invariants we’ve discussed before: information wants to be free, code wants to be wrong, and entropy is king. For example:

  • There will always be software vulnerabilities, so concentrate on architectural defenses (technical and business process) as well as reducing the occurrence, impact and lifetime of vulnerabilities. In other words, don’t permit architectures that let one exploit be catastrophic.

  • Software is eating the world and security issues are a "waste product" of that process. So, shift left and embed more controls systematically into the production of software (SDLC integration, testing, autonomous operations, etc.).

  • Attackers will continue to innovate in surprising ways and will always be motivated and will never be eliminated. So, attack their economics on multiple fronts to keep them contained. The additional invariant is that attackers will always have bosses and budgets.

  • Interactions in complex systems will continue to be surprising, so seek to decouple dependencies.

7. Double-Think

Double-think (the mental capacity to hold contrary opinions or beliefs at the same time) is less a mental model and more a technique for not falling victim to the flawed opinions of others. Another way of looking at this is to avoid the trap of seeing everything as being in opposition, for example:

  • Just because you should assume breaches will occur (and so focus on detection and response) doesn’t mean you shouldn’t also work like crazy to make sure breaches don’t happen.

  • Just because you're pushing for a zero trust architecture or otherwise espouse being “perimeter-less” doesn’t mean perimeter defenses like firewalls are of no use.

  • Just because your industry benchmarking puts you ahead of your peers doesn’t mean that you don’t have glaring exposures in your business.

  • Just because security is a business problem doesn’t mean it isn’t also very much a technology problem as well.

Bottom line: the CISO job, or any security role, is challenging and complex. You need a tool bag of models to save you from reinventing approaches in every situation. Just as we have a set of methodologies and approaches for technical challenges, we can also rely on a small set of mental models for management and analysis problems.



© 2020 Philip Venables.