
Security Leadership Master Class 4 : Enhancing a Security Program 


This is part 4 of a 7-part series grouping sets of prior posts into particular themes.


  1. Security Leadership Master Class 1 : Leveling up your leadership

  2. Security Leadership Master Class 2 : Dealing with the board and other executives

  3. Security Leadership Master Class 3 : Building a security program

  4. Security Leadership Master Class 4 : Enhancing/refreshing a security program

  5. Security Leadership Master Class 5 : Getting hired and doing hiring

  6. Security Leadership Master Class 6 : When disaster strikes

  7. Security Leadership Master Class 7 : Contrarian takes


In this summary we’ll look at how to enhance or refresh a security program. When you’ve built a security program, or have taken over running a well established and well constructed one, you will still need to keep enhancing it. You may even, at various inflection points, need to outright refresh the whole thing. Such points might be major M&A activity that substantially changes the character of the company, major new product lines, new markets, and possibly external triggers like new regulation, competition or even wholesale shifts brought on by dramatic technology changes like AI. 


Enhancing or refreshing includes many factors such as:


  • Shift Left to Shift Down. As you’ve built your program you’ve likely implemented many critical controls and are in relatively good shape. But you may still have many places where the secure path is not the easiest path. Now is the time to look for places to couple security even more tightly: embedding it in platforms, tools, and frameworks. Paradoxically, while you want a high degree of security awareness among your developer, end user and business communities, you actually don’t want them to have to do much work to enact security. At this stage you need to keep creating ambient controls that seamlessly protect your business, help commercial growth and are relentlessly cost effective. 


  • Adopt More Foundational Metrics. Shift away from only counting lagging metrics (like breaches, vulnerabilities etc.) or what is simply easy to count toward a smaller set of fundamental and higher leverage metrics ("count what counts"). These are the 20% critical leading indicator metrics that can drive 80% of your future risk reduction. They also deliver significant adjacent commercial benefits. Examples include measuring the degree that the organization has software reproducibility and infrastructure reproducibility. These are massive boosters of security but also deliver agility, productivity and time to market benefits for your organization’s products and services.

  • Flip the Script on Incentives. Move away from justifying major projects only on loss avoidance or on returns on security investment expressed as “soft dollars” (theoretical savings that don’t result in actual money returned). Instead, focus on funding initiatives that deliver commercial or mission benefits such as real cost savings or measurable improvements in customer experience, with the security and resilience gains serving as critical adjacent outcomes to business goals. As you built your program you were likely given a boost of funding to get those initial things done. As you further enhance it, I’ve found you face higher expectations to demonstrate how you can scale effects super-linearly from a smaller, flat or even decreased budget. This means a big shift toward efficiency and effectiveness to get more leverage, vs. your initial focus of getting the base level of security implemented as fast as possible.

  • Budget for Preventative Maintenance. Getting the organization into a path of sustained control means explicitly assigning and managing a budget for preventative maintenance. This can be expressed as a percentage of the wider operating budget to fund activities like technical debt pay down, system upgrades (to avoid them becoming end of life), control sustainment, bug fixes, and testing enhancements. This budget should be designed to increase after many types of failures (at the expense of other initiatives) thereby creating aligned incentives for teams to improve reliability and avoid premature cost cutting. This is like the SRE error budgeting approach.
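The budgeting mechanic described above can be sketched in a few lines. This is a hypothetical illustration, not a prescription: the function name, the baseline percentage, the per-failure uplift and the cap are all invented for the example, in the spirit of SRE error budgets.

```python
# Hypothetical sketch: a preventative-maintenance budget that starts as a fixed
# share of opex and ratchets up after reliability/control failures, at the
# expense of other initiatives. All percentages here are illustrative assumptions.

def maintenance_budget(opex: float, base_pct: float, failures: int,
                       uplift_pct_per_failure: float = 0.5,
                       cap_pct: float = 15.0) -> float:
    """Return the preventative-maintenance budget in currency units."""
    pct = min(base_pct + failures * uplift_pct_per_failure, cap_pct)
    return opex * pct / 100.0

# A quiet year keeps the baseline; a bad year pulls funds from other work.
print(maintenance_budget(10_000_000, base_pct=5.0, failures=0))  # 500000.0
print(maintenance_budget(10_000_000, base_pct=5.0, failures=4))  # 700000.0
```

The point of the increasing-after-failure rule is the aligned incentive: teams that avoid premature cost cutting keep more budget for discretionary work.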

  • Counter Entropy via Control Reliability Engineering. Recognize that controls inevitably degrade over time ("Entropy is King"). So, implement a discipline of Control Reliability Engineering by continuously monitoring controls at run time and treating control failures (or "control incidents") as first-class objects, similar to security incidents, to ensure sustained completeness and correct operation.
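To make the idea of control failures as first-class objects concrete, here is a minimal sketch. The `ControlIncident` and `run_control_checks` names are hypothetical, not a real framework; the essence is that a run-time control check that fails (or errors) produces an incident record, not just a log line.

```python
# Illustrative sketch: evaluate controls at run time and treat any failure
# as a first-class "control incident", similar to a security incident.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ControlIncident:
    control_id: str
    detail: str
    severity: str = "high"  # a silently failed control is a real incident

def run_control_checks(checks: dict[str, Callable[[], bool]]) -> list[ControlIncident]:
    """Evaluate each control check; failures and errors become incidents."""
    incidents = []
    for control_id, check in checks.items():
        try:
            ok = check()
            detail = "control check returned False"
        except Exception as exc:
            ok, detail = False, f"check errored: {exc}"
        if not ok:
            incidents.append(ControlIncident(control_id, detail))
    return incidents

# Example: two synthetic checks, one of which has silently degraded.
incidents = run_control_checks({
    "mfa-enforced": lambda: True,
    "backup-restores-tested": lambda: False,
})
print([i.control_id for i in incidents])  # ['backup-restores-tested']
```

Feeding these incident objects into the same triage and post-mortem process used for security incidents is what sustains completeness and correct operation against entropy.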

  • Prioritize Inventory Triangulation for Accuracy and Savings. Maintaining accurate knowledge of what is in your environment is vital, but keeping inventories up to date and complete can take a lot of effort. So, automate accuracy by "triangulating" inventories so that they are interlinked and self-correcting. For example: cross-checking application and infrastructure inventories to spot apparent applications running nowhere, or servers/cloud instances running nothing. Measure the percentage of inventories subject to such reconciliation, as well as how many inventories are kept accurate because processes depend on them. For example, provisioning that only works by referencing an inventory forces that inventory to be accurate (or quickly reveals it as broken). Also spend time showing the commercial as well as security benefits for the wider organization, like accurate inventories saving money by discovering and redeploying unused assets.
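The cross-check described above fits in a few lines. The inventories and names here are invented for illustration; the pattern is simply set reconciliation between two independently maintained sources of truth.

```python
# Hypothetical sketch of inventory triangulation: cross-check an application
# inventory against an infrastructure inventory and surface discrepancies.
app_to_hosts = {                       # application inventory: app -> claimed hosts
    "billing":   {"vm-01", "vm-02"},
    "ghost-app": {"vm-99"},            # claims a host that doesn't exist
}
host_inventory = {"vm-01", "vm-02", "vm-03"}   # infrastructure inventory

claimed_hosts = set().union(*app_to_hosts.values())
apps_running_nowhere = {a for a, hs in app_to_hosts.items() if not hs & host_inventory}
hosts_running_nothing = host_inventory - claimed_hosts

print(apps_running_nowhere)   # {'ghost-app'}  -> stale application record
print(hosts_running_nothing)  # {'vm-03'}      -> candidate for redeployment/savings
```

Each discrepancy is either a security finding (something unexpected is running) or a savings opportunity (something is paid for but unused), which is exactly the dual benefit worth showing the wider organization.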

  • Manage Complexity Through Design, Not Only Avoidance. Of course, complexity should be avoided where practical, but all useful environments are necessarily complex at some level of abstraction. So, subjugate complexity with better security properties through good design principles, including linked behaviors, opinionated defaults (e.g., default deny), and declarative configurations.

  • Prepare Leadership for the "Uncanny Valley". As you enhance the security program you will keep digging deeper into areas you hadn’t reached when getting the basic controls in place. Sometimes this visibility creates surprise and an “Uncanny Valley” where things look worse before they get sustainably better. So, precommunicate: be explicit that your diligence in finding fundamental issues will temporarily create the perception of things getting worse, and commit to educating other leadership consistently so they cross the valley with you toward more sustainable controls.


  • Integrate Risk Quantification. Use quantitative risk analysis as a tool to compel action and inform decision-making, not as an end goal in itself. Its effectiveness relies on combining analytic rigor with experienced human judgment to contextualize and communicate findings. The key action is to ensure risk quantification exists in decision-making feedback loops, selecting the appropriate method (from basic counts to advanced loss distribution models) for the specific risk at hand. Your goal should be that your risk metrics drive decisions even without you always having to present them. 
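As one concrete example of the "advanced loss distribution" end of that spectrum, here is a minimal Monte Carlo sketch. The frequency and severity parameters are invented for illustration; real analyses would calibrate them with data and expert judgment.

```python
# Minimal loss-distribution sketch: event arrivals as a Poisson process,
# event severities as lognormal draws. All parameters are illustrative.
import math
import random

def simulate_annual_loss(freq_per_year=2.0, sev_mu=math.log(50_000),
                         sev_sigma=1.2, trials=20_000, seed=7):
    """Return sorted simulated annual losses across `trials` simulated years."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        total = 0.0
        t = rng.expovariate(freq_per_year)   # time to first event, in years
        while t < 1.0:                       # sum severities of events in one year
            total += rng.lognormvariate(sev_mu, sev_sigma)
            t += rng.expovariate(freq_per_year)
        losses.append(total)
    return sorted(losses)

losses = simulate_annual_loss()
p95 = losses[int(0.95 * len(losses))]
print(f"Simulated 95th-percentile annual loss: ~{p95:,.0f}")
```

The output a decision-maker needs is rarely the whole distribution: a tail percentile like this, placed inside a funding or risk-acceptance feedback loop, is what compels action.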

  • Shift Attack Surface Management to Architectural Reduction. Vulnerability management often fails by focusing too narrowly on only finding flaws in exposed components, thereby ignoring that the mere unexpected presence of a service, even if fully patched, is itself an underlying vulnerability. So, keep working to shrink your attack surface by better technical architecture choices rather than only reducing the vulnerabilities in the attack surface you observe.

  • Institutionalize Counter-Forces Against Resource Atrophy. When you have a reasonably good security program you might face the paradox of other leaders asking, “Why are we spending so much on security when we don’t have any incidents?” So, it’s vital to have deeper metrics, like a control pressure index, that show how much your security controls are “load bearing” for what cost/value, and that this is precisely why you don’t have security incidents. It’s also important to watch out for the gradual, organizationally distributed micro cost cuts or deprioritizations that build up over time into a bigger macro impact. This slow drain of resources often occurs implicitly, not through a single executive decision but through a series of small choices that progressively load teams until a tipping point is reached. For this you need to create counter-forces, like organization health monitoring, to systematically assess the people, skills, and budget required for embedded security teams and other activities. Make scarcity visible. 
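One way such a "control pressure index" could be framed, sketched under invented figures: prevented events per unit of annual cost, making visible both how load-bearing each control is and which spend deserves scrutiny. The control names and numbers are hypothetical.

```python
# Hypothetical "control pressure index": blocked events per currency unit spent.
# All controls and figures below are invented for illustration only.
controls = {
    "email-filtering": {"blocked_events": 1_200_000, "annual_cost": 300_000},
    "waf":             {"blocked_events":   450_000, "annual_cost": 150_000},
    "legacy-av":       {"blocked_events":       900, "annual_cost": 200_000},
}

def pressure_index(c: dict) -> float:
    """Events prevented per unit of annual cost for one control."""
    return c["blocked_events"] / c["annual_cost"]

# Rank controls from most to least load-bearing per unit of spend.
for name, c in sorted(controls.items(), key=lambda kv: -pressure_index(kv[1])):
    print(f"{name:16s} {pressure_index(c):10.4f} blocked events per unit cost")
```

An index like this answers the "why are we spending so much?" question with evidence rather than assertion, and in this toy data it also flags the low-pressure legacy control as a candidate for review.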


Here’s a short video (thanks to NotebookLM) covering all of this.




The blog posts used to build this video and summary are here:
© 2020 Philip Venables. 
