Phil Venables

Raise the Baseline by Reducing the Cost of Control

One of the most successful techniques for enterprise security in many organizations is to create a universal baseline of controls that apply everywhere - and then to increase that baseline economically by reducing the unit cost of controls (existing and new).

This is counter to what most of us were taught to do in textbook-style security risk management, which goes something like this:


  1. Determine the value of assets at risk.

  2. Assess the risks to those assets by looking at potential threats acting on vulnerabilities.

  3. Do some hand-waving or other pseudo-scientific approach to come up with some estimate of potential losses.

  4. Implement controls if the cost of those controls is less than the potential losses. 


There are many analytical problems with this approach, from determining the risk to assets under various scenarios to the difficulty of actually estimating potential losses - although, as I've covered here, that is improving.

The real issue, though, is the sheer cost and complexity of doing this continually to achieve that perfect balance of cost of control vs. value at risk. Once you include the cost and maintenance overhead of disparately applied controls, it is often cheaper to just implement the controls consistently, whether a particular asset needs them or not.

An additional issue in a modern enterprise is how interconnected business processes and technologies are: there is only so far you can go to minimize risk with fault isolation. In other words, looking at risks to specific assets often fails to take contagion risk into account.


In a world dominated by platforms, whether on-premises or in the cloud, the right approach is to deploy a universal baseline of controls - from patching/vulnerability remediation, identity and access management, and segmentation to strong software security assurance and so on. Then work to keep increasing that baseline, both in terms of features/controls and in the level of assurance/stringency. But we have finite budgets, so when we say keep raising the baseline of controls we don't mean keep spending progressively more money. Rather, we have to reduce the unit cost of control to enable controls to be flooded across the environment. Now, this isn't just about the cost of implementation; more importantly, it's about the cost of operation and the impact on productivity and usability. The way to raise the baseline is to reduce the cost of control, and reducing the cost of control takes a deliberate and sustained effort.

Consider a highly simplified approach. We have our most critical systems and then all the rest - in reality you will likely have more tiers. In our most critical systems we want to implement enhanced controls and to keep increasing those controls as needed year-on-year. Because these are our most critical systems we are prepared to tolerate a higher cost of implementation and operation - at least initially. The rest of our systems get our baseline set of controls. These also keep increasing year-on-year, but because they represent our whole environment at much bigger scale we won't necessarily be able to afford to widely deploy the more enhanced controls used on our most critical systems.
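To make that tiering concrete, here is a minimal sketch in Python - the control names and tiers are hypothetical, invented purely for illustration - of a baseline set applied everywhere plus an enhanced set layered onto the critical tier:

# Hypothetical tiered control model: every system gets the baseline;
# critical systems additionally get the enhanced set. Control names
# and tiers are invented for this example.

BASELINE_CONTROLS = {
    "patching",
    "identity_and_access_management",
    "network_segmentation",
    "software_security_assurance",
}

ENHANCED_CONTROLS = {
    "hardware_backed_mfa",
    "continuous_config_attestation",
    "dedicated_threat_monitoring",
}

def required_controls(tier: str) -> set[str]:
    """Return the control set a system must implement, given its tier."""
    if tier == "critical":
        # Enhanced controls are additive: critical systems still carry the baseline.
        return BASELINE_CONTROLS | ENHANCED_CONTROLS
    return BASELINE_CONTROLS

def coverage_gaps(implemented: set[str], tier: str) -> set[str]:
    """Controls a system is missing relative to its tier's requirement."""
    return required_controls(tier) - implemented

# Example: a critical system that has the baseline plus one enhanced control.
print(coverage_gaps({"patching", "identity_and_access_management",
                     "network_segmentation", "software_security_assurance",
                     "hardware_backed_mfa"}, "critical"))
# -> the two missing enhanced controls (order may vary)

The point of expressing it this way is that the enhanced set is additive: anything we manage to commoditize simply moves from the enhanced set into the baseline set, and the whole fleet inherits it.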


Now we can't just rest there: we have to take those enhanced control sets and then figure out how to make them cheaper, easier to implement, less effort to manage and with reduced impact on agility and productivity. This takes work. Think about the analogy of cars. Much of the safety and performance technology in everyday family cars started in high-end racing cars. A lot of this technology was expensive, hard to use, unreliable beyond one use and hard to maintain. Society decided we wanted improved safety in our cars, but that didn't mean we were prepared to pay 10x for our cars, have them maintained every week and subject ourselves to a whole new regime of driver training. No, the automotive manufacturers took those high-end features and commoditized them, making them cheap, reliable, easy to maintain and straightforward to use. In the commoditization process some features might be dropped, but the end result achieves the goal in a reduced context. There are some good examples here.

Let's take an example of security controls, in particular software security. In the past, and in many cases still today, the full range of software security assurance (analysis tools, design review, code review, penetration tests, etc.) can often only be afforded on the most critical systems. But we can commoditize many aspects of this to work across all of our software, for example: develop and use toolkits that solve for common software security issues, deploy cut-down versions of analysis and testing tools that look for the most egregious flaws, embed capability in IDEs, and train developers within their own workflow.
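As a hedged illustration of what a cut-down check might look like, here is a short Python sketch (standard library only; the patterns and file selection are invented for the example, not taken from any real tool) that could run as a pre-commit hook or CI step and flags only the most egregious issues:

# A deliberately cut-down "egregious flaws only" check - the kind of cheap,
# commodity control that can run in every repository's workflow. The patterns
# and file selection are illustrative, not a real tool.
import re
import sys
from pathlib import Path

EGREGIOUS_PATTERNS = {
    "hard-coded credential": re.compile(r"""(password|secret|api_key)\s*=\s*['"][^'"]+['"]""", re.I),
    "use of eval": re.compile(r"\beval\s*\("),
}

def scan(root: str = ".") -> list[str]:
    """Walk the tree and report lines matching any egregious pattern."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in EGREGIOUS_PATTERNS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {label}")
    return findings

if __name__ == "__main__":
    problems = scan()
    print("\n".join(problems) or "no egregious flaws found")
    sys.exit(1 if problems else 0)  # non-zero exit fails the commit/build

A check like this is obviously far weaker than a full static analysis suite, but it costs almost nothing to run everywhere - which is exactly the trade the commoditization argument is making.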

This model of raising the baseline by reducing the cost of control is happening at enormous scale in the major cloud providers. They, in partnership with customers, develop standard architecture patterns for security based on underlying secure infrastructure and tools. The most common elements of those patterns progressively make it into secure defaults for their services and thus the baseline is raised. The best part is that the nature of the hyper-scale cloud providers means that the economics of raising the baseline becomes a flywheel.

Bottom line: we should move away from just implementing controls according to specific need. Rather, we should flood our environments with a baseline of commodity security controls. Implement enhanced controls on your most critical systems, but then work like crazy to commoditize those so you can drop them into your baseline. Standardize on common platforms and commonly used services, such as cloud services, that keep getting more secure courtesy of their own scale. Then keep upgrading to take advantage of that. Your most important future security principle is your ability to keep up to date. Your best security metric is your version/feature adoption velocity. The systems that are stuck and not upgraded will be the ones that become relatively insecure the quickest.
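As a rough sketch of that metric - the data model here is hypothetical, just to show the shape of the calculation - adoption velocity can be as simple as tracking, per system, when a target version was released and when it was actually adopted:

# Hypothetical sketch of an adoption-velocity metric: for each system, record
# when the target version was released and when the system actually adopted it,
# then report how quickly (and how completely) the fleet converges.
from datetime import date
from statistics import median

# (system name, release date of target version, date adopted or None if stuck)
fleet = [
    ("web-frontend", date(2024, 3, 1), date(2024, 3, 8)),
    ("billing",      date(2024, 3, 1), date(2024, 3, 20)),
    ("legacy-batch", date(2024, 3, 1), None),  # never upgraded
]

def adoption_velocity(fleet):
    lags = [(adopted - released).days for _, released, adopted in fleet if adopted]
    return {
        "coverage": len(lags) / len(fleet),  # fraction of the fleet on the target version
        "median_days_to_adopt": median(lags) if lags else None,
        "stragglers": [name for name, _, adopted in fleet if not adopted],
    }

print(adoption_velocity(fleet))
# e.g. {'coverage': 0.666..., 'median_days_to_adopt': 13.0, 'stragglers': ['legacy-batch']}

The systems that show up as stragglers release after release are exactly the ones the bottom line warns about.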
