Phil Venables

Why Cybersecurity Budget Benchmarks are a Waste of Time

I have built up a disdain for cybersecurity budgeting benchmarks. To be fair, there are some good attempts amid a sea of haphazard approaches, but my real problem is with the very concept of these benchmarks. So much so that I think budget benchmarking has actually held back real progress for many organizations.


Why do I think this? Three simple reasons.


  1. A budget is an input, not an outcome. Security risk management needs to be centered on outcomes.

  2. There is no agreed-upon taxonomy for comparison. You’re never comparing apples to apples.

  3. Incentives are misaligned. Budget comparisons aim to set minimum standards (“spend at least X”), but good security programs often become more efficient and spend less per unit of control (the overall budget might grow as the organization grows, but it should grow sub-linearly relative to overall enterprise spend).


Let’s look at each one.


Security needs to be centered on outcomes


Our goal is to mitigate security risk in the best way for our organizations while balancing the needs of customers and the commercial purpose of the enterprise:


  • Your risk is not my risk

  • Your business is not my business

  • Your threat outlook is not mine

  • Just because you and I spend roughly the same doesn’t mean we will get the same result; I might have different people, different issues, different established infrastructure, and so on.

As an aside, if your leadership says things like “we will spend whatever it takes to make sure we have no security issues,” then when an event does happen, does that mean you didn’t actually spend enough, despite what you said? I’ve also seen some organizations actually spend too much because of excess focus on inputs rather than outcomes, resulting in wasted product spend, conflicting projects, overlapping teams, and unnecessary employee churn.


Compare based on an agreed-upon taxonomy


If you’re going to collect data for a comparison then, of course, you need consistent units of measure, a shared taxonomy of terms defining what is being measured, and an approach for context adjustment. I’ve seen benchmarking that does this in some fields and domains, but never for a budget benchmark.


As a result, budget benchmarks, or even simple disclosures of what your budget is, can be misleading. For example, most large organizations I know could legitimately represent their security budget anywhere in a 1X to 10X range: I can scope it as the core budget of the security team (1X) or as a large percentage of the whole enterprise’s spend (10X). Should you count systems administration activities like patching? How about everyone’s training, the cost of product reviews, access administration, or the opportunity cost of security projects?
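To make that scoping problem concrete, here is a minimal sketch in Python, with entirely hypothetical line items and figures, showing how the same organization could report either end of that range depending on what it chooses to count:

```python
# Hypothetical figures only: the same organization's "security budget"
# under a narrow vs. broad scoping of what counts as security spend.

narrow_scope = {
    "core security team": 10_000_000,  # the security organization itself (1X)
}

broad_scope = {
    **narrow_scope,
    "sysadmin patching effort": 15_000_000,
    "enterprise-wide security training": 5_000_000,
    "product security reviews": 8_000_000,
    "access administration": 12_000_000,
    "opportunity cost of security projects": 50_000_000,
}

narrow = sum(narrow_scope.values())
broad = sum(broad_scope.values())
print(f"Narrow scope: ${narrow:,}")                          # $10,000,000
print(f"Broad scope:  ${broad:,} ({broad / narrow:.0f}X)")   # $100,000,000 (10X)
```

Both numbers are defensible answers to “what is your security budget?”, which is exactly why comparing them across organizations is meaningless without a shared taxonomy.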


Consider an analogy: a credit risk department at a bank (which oversees whether loans are too risky). How would you benchmark credit risk management budgets between banks? Would you count the cost of the team, the cost of the hedges, the computational cost of the risk calculations, the time loan officers spend conforming to lending standards, the opportunity cost of overly stringent credit standards, and so on?


Align incentives


If the goal is to compare the most effective practices for driving improved security outcomes, then the following are great places to start:


  • How much security is embedded in product design and development (shift left)

  • The extent to which controls are seamlessly embedded into business processes and supporting systems (ambient controls)

  • The degree of automation of repeatable activities (never send a human to do what a computer can do)

  • How frequently the organization is adapting its processes or architecture to reduce inherent risk (risk avoidance)

  • Whether the organization is continuously adopting new updates and features (the rising tide of cloud and service-centric security)

Interestingly, when these are done well they usually show reductions in expenditure per unit of control. If you’re focused on budget benchmarks in which success equates to more spend, then you might, paradoxically, be disappointed by genuinely better outcomes.
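As a minimal sketch (again with hypothetical figures), this is the pattern a raw budget benchmark misses: total spend rising modestly while spend per unit of control falls:

```python
# Hypothetical figures: total spend grows, but coverage (e.g. workloads
# under automated control) grows faster, so spend per unit of control falls.

years = [2021, 2022, 2023, 2024]
total_spend_musd = [10.0, 12.0, 14.0, 16.0]   # total security spend, $M
control_units = [1_000, 1_500, 2_200, 3_200]  # e.g. workloads under automated control

for year, spend, units in zip(years, total_spend_musd, control_units):
    print(f"{year}: ${spend:.0f}M total, ${spend * 1e6 / units:,.0f} per control unit")
# 2021: $10M total, $10,000 per control unit
# 2024: $16M total, $5,000 per control unit
```

A spend-only benchmark reads this as a budget merely keeping pace; the per-unit figure is where the actual improvement shows.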


Bottom line: I do believe comparisons and benchmarking are useful tools for learning, for continuous improvement, and sometimes for validation. Just don’t benchmark on raw spend; focus on comparing outcomes, or outcomes per unit of input, not just inputs. Why? Well, is spending 90% of your IT budget on security better or worse than spending 10%? I have no idea. You might be irresponsibly pouring money down the drain in the first case or being crazily frugal in the second. We can’t know without knowing the outcomes and risk profile.
