Phil Venables

Leverage Points - A Cybersecurity Perspective

Security is an emergent property of the complex systems we inhabit. In other words, security isn't a thing you do; rather, it's a property that emerges from a set of activities and sustained conditions.


There are optimal places in such complex systems to do those activities and establish those conditions. These are called leverage points, and they have been ably described by Donella Meadows in her widely read article: Leverage Points: Places to Intervene in a System. They are the places where small changes can have a big impact. But caution is needed: they are sometimes not intuitive, and if not managed carefully they can make the very problem they were meant to solve worse.


Many of us working in various risk management disciplines, including information / cybersecurity, have for many years explicitly or instinctively found and utilized many of these leverage points to change our organizations and societies for the better. Let's take a look at the leverage points, in reverse order of impact (following the original article's pattern), and see what all this might mean for cybersecurity.


12. Constants, Parameters, Numbers

Use standards for security to provide a constant basis for a minimum level of interoperable controls.

This includes prescribed system-wide constants or other factors that dictate certain conditions: things like subsidies, taxes, enforced standards and so on. Consider taxes in society, which can often drive the right outcomes by discouraging the behavior being taxed, as with excise duties on cigarettes or gasoline. But, as you might expect, taxation can also be used as a social policy with unintended consequences, especially when it is excessively tinkered with through deductions.


Much of the fiddling at this level will only drive critical system changes by supporting other leverage points. The most effective use in security is standards, ranging from cryptographic algorithms and protocols all the way up to standard regimes of controls like PCI. Many of these are highly effective and proven over many years. Some are disputed, but it's hard to argue that many are truly counter-productive, despite the compliance vs. security debates that still occasionally erupt.
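
As a tiny illustration, here's a minimal Python sketch (the floor value is illustrative, not a recommendation for any specific regime) of standards acting as system constants - a small, fixed parameter that, because it is standardized and enforced everywhere, buys interoperability and a minimum bar:

```python
import ssl

# A standardized "constant": a protocol floor drawn from published
# standards rather than invented locally. Illustrative value only.
MIN_TLS_VERSION = ssl.TLSVersion.TLSv1_2

def make_client_context() -> ssl.SSLContext:
    """Build a TLS client context pinned to the standardized floor."""
    ctx = ssl.create_default_context()     # sane, vetted defaults
    ctx.minimum_version = MIN_TLS_VERSION  # refuse anything older
    return ctx
```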

11. Sizes of Buffers and Stocks

Adopt increasing defense in depth to buffer against attacks.

Buffers are useful in many contexts: an inventory of products alleviates supply chain interruption, buffers in data transmission smooth fluctuations in capacity use and demand, and physical buffers like calcium in soil protect against acid rain.


Buffers can be used as a leverage point but they are often on the lower end of effectiveness since in many situations they are difficult to adjust after first instantiation.


In the context of security, let's take a step back and think of a buffer as a means of assuring performance or smoothing the flow of supply and demand in the face of stressors. In this situation you can think of defense in depth as "buffering" attacks: the more depth of buffering you have, the more you can afford an individual control failure.
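
A toy calculation shows why the buffer helps - this is not a risk model (real control failures are often correlated, which is exactly when depth disappoints), just the independent-failure simplification:

```python
def residual_risk(layer_failure_probs):
    """Probability an attack gets past every layer, under the
    (optimistic) simplification that layer failures are independent."""
    risk = 1.0
    for p in layer_failure_probs:
        risk *= p
    return risk

# Three mediocre controls that each fail 10% of the time buffer better
# than one strong control that fails 2% of the time.
print(residual_risk([0.10, 0.10, 0.10]))  # ~0.001
print(residual_risk([0.02]))              # 0.02
```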


10. Structure of Stocks and Flows

Pay attention to the structure of IT systems so that it supports security goals - including a healthy focus on maintainability constraints - otherwise security can limit performance, or performance can limit security.

The structure of a system can, of course, have a dramatic effect on its performance. For example, in the physical world, a hub and spoke rail network is going to have significant concentration points and be particularly ill-suited for spoke to spoke travel. In most cases the only way to fix a system like this is to rebuild it and, unsurprisingly, for most situations this isn’t tenable.


Some stock and flow systems are fundamentally unchangeable, such as the march of demographics from past population booms or declines; you simply have to deal with their effects as they work their way through.


So, the real leverage point is getting the design right in the first place. I've seen multiple situations where IT environments have been poorly structured: software updates that can't scale (e.g. single masters vs. multiple masters); DMZs designed to choke traffic through single control (prevention and detection) points, so that security becomes a performance limiter on sideways scaling; all the way through to more fundamental architectural flaws like using layer 3 / 4 controls to implement layer 7 objectives. This last example is particularly pernicious on web or other application flows, where layer 3 / 4 network devices (think intrusion prevention devices) block traffic courtesy of deep protocol inspection but have no means to meaningfully signal the cause of the blocking to the application at layer 7, so the application just sees a network failure.
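
To see why the in-protocol signal matters, consider a hedged sketch of what a client actually observes - the hostname, port and request are hypothetical:

```python
import socket

def call_backend(host: str = "backend.internal", port: int = 8443) -> str:
    """Make a raw HTTP call and show what the caller can observe."""
    try:
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(b"GET /orders HTTP/1.1\r\nHost: backend.internal\r\n\r\n")
            return s.recv(4096).decode(errors="replace")
    except OSError as exc:
        # An inline layer 3/4 device that drops or resets the flow
        # surfaces here as a generic network failure: no status code,
        # no reason, nothing for the application to act on.
        return f"opaque network failure: {exc!r}"
    # A layer 7 gateway, by contrast, could have answered in-protocol
    # with "403 Forbidden" and a reason - the same block, but a useful
    # signal the application can log, alert on, or handle.
```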

9. Lengths of Delays

The time it takes to measure the effectiveness of security and make adjustments is one of the most important determinants of every security control's success. Such fast feedback can also enable more radical risk reduction: overshoots in security that incur reliability risk can be quickly course corrected.

The time it takes to see an outcome as a result of some change is vital when applying leverage in a complex system. Simply put, to know whether a point of leverage is having an effect (positive or negative) you need to observe that effect; the quicker you can measure it, the quicker you can amplify or course correct. The classic physical-world example is adjusting your shower temperature: if the response is not immediate, you spend a lot of time hunting for the optimal point because of the response delays, with a lot of excess hot and cold (and cursing) along the way.
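
A small simulation makes the point - a toy "tap adjustment" model where the only thing that varies is the delay before your adjustment takes effect:

```python
from collections import deque

def simulate_shower(delay_steps: int, gain: float = 0.5, steps: int = 20):
    """Toy model: you adjust the tap based on the temperature you feel
    now, but the water you just adjusted arrives delay_steps later."""
    target, temp = 38.0, 20.0
    in_flight = deque([0.0] * delay_steps)  # adjustments still in the pipe
    history = []
    for _ in range(steps):
        in_flight.append(gain * (target - temp))  # react to the felt temp
        temp += in_flight.popleft()               # an older adjustment lands
        history.append(round(temp, 1))
    return history

print(simulate_shower(delay_steps=1))  # settles near 38 after mild overshoot
print(simulate_shower(delay_steps=4))  # hunts: the swings grow, not shrink
```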


An important consideration here is that the timeliness of outcome is a function of both the measurement and your response. Having immediate feedback but being slow to react is just as bad as being ready to go after slow measurement; slow measurement and a tardy response is worse still. Tardiness of response might be built into system constraints. For example, if your responsiveness to the feedback of attacks is limited by, say, log storage, then it's going to take a while to build that up, especially if your log solution is cost constrained.


In security, it is especially vital to focus on measurement and response and getting that as fast (and accurate) as possible. This could be from vulnerability detection and resolution through to associated practices like patching / system updates and measurement of completeness. It also applies in other parts of security like privilege management, software assurance, supply chain integrity and so on. It’s also important for course correction when excess controls might have been applied. Imagine struggling to reduce the default levels of privilege against an unknown or uncertain policy baseline. Often the easiest way to do this is to be draconian and take away privilege and build it back up to the necessary level. This can be hazardous for reliability and so in such exercises we often proceed slowly and cautiously, perhaps at the expense of needed risk reduction. If you have a fast measurement and response cycle where response can quickly reinstate privileges then you can proceed more aggressively and so reduce risk further while limiting reliability or other operational risk impact. Speed is key.
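
Here's a hedged sketch of that loop, where revoke(), reinstate() and breakage_signal() are hypothetical stand-ins for whatever your IAM and monitoring stack actually provides:

```python
import time

def ratchet_down(grants, revoke, reinstate, breakage_signal,
                 observe_seconds=3600):
    """Aggressively remove privileges, relying on a fast feedback loop
    to reinstate the few that turn out to be load-bearing."""
    for grant in grants:
        revoke(grant)
    time.sleep(observe_seconds)        # watch error rates, tickets, alerts
    for grant in grants:
        if breakage_signal(grant):     # fast, automated breakage detection
            reinstate(grant)           # the quick undo is what permits boldness
```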

8. Strength of Negative Feedback Loops

The whole domain of continuous control monitoring is fundamental to leveraging effective negative feedback points. This includes defining an end state goal, the scope and timeliness of measurement, and the speed of adjustments to return the system state to the defined goal.

Negative feedback loops are vital as a leverage point, especially when they are strong enough relative to the impacts they are trying to correct.


Negative feedback loops are ever present in nature as well as in systems built by people; the classic examples range from predator / prey population balances to thermostat controls. A negative feedback loop needs a goal to move toward, a measurement of state, and an actuation process to drive some action that brings the measurement closer to the goal. The effectiveness of the loop is contingent on many factors, including the speed of response, the time it takes for the control process to reach the goal, and how long it takes for the system to (re-)drift into a state needing more correction. Each of these elements represents a point that can be adjusted or leveraged to improve system-wide performance.


Again, security is full of examples of negative feedback loops. If we accept our 4th fundamental force, that entropy is king, then we see that controlling it requires constant monitoring and feedback to keep a system in its specified controlled state. Our new worlds of policy-as-code, controls-as-code, infrastructure-as-code and so on are all premised on a specified "golden" configuration, which needs monitoring and feedback to reverse any drift. The whole domain of continuous control monitoring is fundamental to leveraging effective negative feedback points, including definition of the goal, scope and timeliness of measurement, speed of actuation to return to the goal, and the consistent effectiveness of that process.
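
As a minimal sketch of such a loop - read_state() and apply() are hypothetical stand-ins, and the golden values are illustrative:

```python
# The golden configuration is the loop's goal state.
GOLDEN = {"ssh_root_login": "no", "tls_min_version": "1.2", "audit_log": "on"}

def reconcile(read_state, apply):
    """One pass of measure -> compare -> actuate toward the golden state."""
    actual = read_state()                       # measurement of state
    drift = {k: v for k, v in GOLDEN.items()
             if actual.get(k) != v}             # distance from the goal
    for key, desired in drift.items():
        apply(key, desired)                     # actuation back to the goal
    return drift                                # empty dict == at goal
```

The strength of the loop is then set by how often this runs and how quickly apply() takes effect.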

7. The Gain of Positive Feedback Loops

Drive positive feedback loops in development activities that improve build and deployment reliability, reduce the toil of errors and free up resources to further improve reliability.

While a negative feedback loop is self-correcting, a positive feedback loop is self-reinforcing. Instead of driving correction to a goal or target state, it is intended to amplify one or more aspects of performance.


You see this in many domains, from infectious diseases (the more people are infected, and the more transmissible something is, the more other people will be infected) to financial performance (the more money you have, the more you earn, and the more that compounds to earn more). Positive feedback loops can also have negative consequences, such as soil erosion: the more soil erodes, the less capacity it has to retain vegetation, and thus it erodes more, and so on.


Positive feedback loops will often be constrained by a negative feedback loop linked to the exhaustion of resources which were initially driving the positive feedback.


In many real world examples there can be negative associations with "success to the successful" positive feedback loops - such as the rich inevitably getting richer. But in the world of security we want to create positive feedback loops that enhance capabilities. For example, getting a software development team into a cycle of more continuous build, integration and deployment will often trigger a cycle of testing and security assurance processes that liberates more resources to apply to improving the speed of those development activities. Similar positive feedback loops occur when introducing SRE techniques, where improvements in reliability free up resources to further improve reliability, which frees up more resources.
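
A toy model of that SRE loop (all numbers illustrative) also shows the saturation point - the reinforcing loop stalls once the stock of toil is exhausted, which is exactly the constraining negative loop mentioned above:

```python
def sre_loop(improve_hours=10.0, toil_hours=100.0, payoff=2.0, quarters=6):
    """Hours invested in reliability remove toil; recovered hours are
    reinvested, so the improvement compounds until toil runs out."""
    for q in range(1, quarters + 1):
        saved = min(toil_hours, improve_hours * payoff)
        toil_hours -= saved
        improve_hours += saved            # freed time feeds the loop
        print(f"Q{q}: toil={toil_hours:6.1f}h  improvement={improve_hours:6.1f}h")

sre_loop()  # toil collapses in a few quarters, then the loop saturates
```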

6. Structure of Information Flows

Find information that can be made transparent; presented as a comparison between teams, it will encourage improvements (independent of any engineered negative or positive feedback loops).

Leverage points in complex systems are heavily dependent on the structure of information flows. This is not just because of the information dependencies of feedback loops but also because the information determines the nature of the interaction between components of the system.


So, the structure of information flows - including who and what has (or has not) access to information - is a significant leverage point.


The classic example here is making some piece of information transparent that wasn't before, such as putting a house's electricity usage meter in plain sight rather than in the basement or on the exterior of the house. The act of making electricity usage visible, subtly or overtly, drives the occupants to adjust their consumption. In other words, a whole new feedback loop has been created by restructuring the information flow.


There's also a lot to be said for transparency. There are many examples where making some aspect of a system's operation transparent - or, in the example Meadows cites, a system's pollution - can cause a self-imposed corrective feedback loop.


The same technique can be used in many parts of a security program. It is important to drive certain goals ever closer to 100% conformance using a negative feedback loop, or to amplify some aspect of performance using positive feedback. But it can also be highly effective simply to make the relative performance of teams transparent, in a league table or other format, to encourage some competition based on that transparency. This is most effective when connected to behaviors or patterns the top performers are using, which can be shared to the advantage of all.
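
A hedged sketch of the mechanics - team names and compliance figures are made up; the point is that a published ranking is itself a new information flow:

```python
# Hypothetical per-team patch-compliance rates (fraction of fleet patched).
patch_compliance = {"payments": 0.97, "mobile": 0.88,
                    "platform": 0.93, "data-eng": 0.71}

def league_table(scores):
    """Print teams ranked by compliance; publishing this regularly
    creates a feedback loop without any new mandate."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (team, score) in enumerate(ranked, start=1):
        print(f"{rank}. {team:<10} {score:6.1%}")

league_table(patch_compliance)
```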

5. Rules of the System

Use industry associations and standards bodies to promote better societal rules and approaches to improve security, but also watch out for changes to the rules of the system that might have unintended consequences.

The wider rules of the system are the structure of the surrounding ecosystem in which the system operates. These are different from earlier leverage points like standards, since there are fewer degrees of freedom in how to conform to the rules of the system than in how to construct your own system and choose which standards or policies that brings into scope.


The rules of the system in this sense are laws, constitutional rights, education system structures, socio-economic factors from employment protection to intrinsic healthcare, and so on.


Most organizations, let alone specific security teams, have little ability to change the rules of the system to benefit their individual goals. But we all have opportunities to harness our trade associations, sector coordinating councils, ISACs/ISAOs, standards bodies and more to improve legislative and regulatory approaches and increase the likelihood of security improvements.


Also, we all have to be watchful of new rules of the system that have unintended consequences for security. There have been plenty of examples here - eIDAS, encryption back doors, key escrow and more - including rules of the system that don't directly dictate security measures but inadvertently impact security. These are usually sector specific: for example, the creation of central clearing houses in finance to reduce credit and liquidity risks spiked some operational risks.


4. Power to Change System Structure

Encourage blameless approaches to root cause analysis and devolve the power to change and experiment as close to the organization's edges as possible. Wire the organization to benefit from successful experiments.

The power to change system structure - to evolve at multiple levels - is an immense leverage point. If a system is rigid to change, even when internal forces are trying to propel such change, then it will be hard to apply many of the other leverage points. So permitting and encouraging such change is powerful.


Many environments largely treat such self-organization as accidental - that is, present by good fortune, not design. Sometimes the benefits that arise are not even recognized as stemming from the organization's support of such self-organization.


Self-organization in many environments stems from a culture of psychological safety, in which people at all levels are able to suggest changes in response to what they see - closer to where the action and problems are. It is important for an organization's culture to be open to challenge and evolution as it grows, while still preserving the core principles of its founding culture.


A big part of this, and many other aspects of self-organization, is a high degree of tolerance for experimentation. This is hard for many organizations. In the context of security this might mean a careful balancing act of letting devolved or otherwise federated teams have some degree of autonomy in what they do - as opposed to seeking exhaustively consistent implementation standards for all aspects of security. Yes, consistent policy objective enforcement, but possibly experimentation in the means of achieving those goals. This is only useful if there is sufficient connectivity and feedback across the system as a whole, so that successful experiments can be replicated quickly for the good of the whole system, and failed experiments can be blamelessly stopped and replaced with something more proven - or a fresh experiment undertaken.


3. Goals of the System

Organization-wide goals drive many other behaviors and so having security goals that are compatible with those is crucial - even if that might mean challenging the organization-wide goal.

The goal of the system can have outsize effects on its performance. Yes, that is somewhat a statement of the obvious. But it needs to be said, since in many areas, if the goal is incorrectly or ambiguously stated, then all the other leverage points will be contorted to fit the wrong goal, and no amount of adjustment at those points of leverage will survive that conflict.


At some level you can imagine simple system-wide goals. For example, the goal of a company is to make a profit, right? Well, yes, but as a company is growing it might be that the right goal is to grow market share well ahead of profitability. It might be that the goal is to vertically integrate the company while making the bare minimum profit so as to build resilience for the future. All of these goals will dictate radically different approaches and leverage points at all levels of the system.


Now, one of the biggest roles a security leader has is to set goals for the security program that are compatible with the corporate goals. For example, if the security program sits inside a growing business whose competitors are trying to steal market share by undercutting cost and performance on security, then the goal is nuanced. It can't be to have minimal security risk, nor, I would argue, can it be a race to the bottom; rather it requires pragmatism on risk levels, rapid in-field adjustment of security, and careful positioning and transparency of superior security qualities. That wider set of goals results in different approaches than other stated goals would. Stating such goals clearly can be as tough as implementing them once stated.

2. Mindset or Paradigm Out of Which the System Arises

The security team needs to understand and align goals to the mindset of the organization and the society that is building and running the system.

This is the classic notion of a shared societal or organizational mindset out of which the system - its goals, structure, rules, delays, parameters - arises. There are some classic national examples born of a system's "foundation" story: what is possible in the United States is radically different from, say, other regimes or countries, not because of a technical analysis of the democratic process, but because of the mindset of individual freedom vs. the collective approach.


In security, if the mindset of an organization is that no breaches are tolerable, then there is no possibility of success no matter how a system goal is stated. I've had a number of difficult Board conversations over the years with people saying things like, "we have no risk appetite for security breaches". Taken literally, this should translate into doing whatever it takes, at the expense of all other goals, to have no security breaches - which is not tenable, or even desirable in the grand scheme of things. Of course, what they really mean is that, all things being equal, they'd rather not have anything go wrong with security. The security leader's role is then to translate that into a workable risk appetite and an associated set of goals and other leverage points.


The mindset of the organization is vital and needs to be observed, and possibly harnessed, by the security team. If the prevailing mindset is more like "move fast and break things", then some clarity about where that mindset is applicable, and how it is tempered, will be vital.


As the Ralph Waldo Emerson quote in the original article says:


“Every nation and every man instantly surrounds themselves with a material apparatus which exactly corresponds to … their state of thought. Observe how every truth and every error, each a thought of some man’s mind, clothes itself with societies, houses, cities, language, ceremonies, newspapers. Observe the ideas of the present day … see how timber, brick, lime, and stone have flown into convenient shape, obedient to the master idea reigning in the minds of many persons…. It follows, of course, that the least enlargement of ideas … would cause the most striking changes of external things.”


Changing mindsets is a slow process that, in its simplest form, requires constantly highlighting the anomalies and contradictions arising from the current mindset. In other words, you chip away at this over time by putting people with a new mindset in positions of authority and adding change agents to the organization.

1. Power to Transcend Paradigms

Stay flexible.

The ultimate leverage point is to not be attached to any one specific paradigm - to stay flexible. People sometimes criticize companies or teams for conducting a pivot, focusing on the "failure" of their original pursuit of some paradigm rather than their successful realization of the need to transcend it and tackle new goals with a different mindset.


In the security program, recognize what is working and what is not. If your mindset is failing you, then switch. Do this often as the prevailing paradigm of your organization, ecosystem or society changes.


Bottom line: complex systems cannot typically be directed; rather, we use leverage points to coax them into a new state. These range from standards, feedback loops and cultures to overall goal setting. But perhaps, in the end, our biggest source of leverage - the one that encompasses all the others - is the power to face reality and mercilessly, relentlessly deal with it. This is especially crucial for security programs.
