Things Are Getting Wild: Re-Tool Everything for Speed
- Phil Venables
It’s not often that a force appears that totally re-orients everything in security. This is what we are facing with AI.
12 months ago I had an incrementalist view of the cybersecurity impact of AI. Specifically, that it will be very significant but things will change progressively and we’ll adapt to adversarial use while also using it to improve defenses.
Now, I’m coming to a view that this will have a bigger negative impact than even our worst assumptions. But at the same time, it represents an even larger positive impact for defensive use.
I am short term pessimistic but wildly long term optimistic.
The four major pillars of concern are, on their own, fiercely bad, but together they are epic.
A tidal wave of vulnerabilities is coming.
Attackers are rapidly industrializing.
Everything can be faked: content, people, companies.
Trillions of agents will exhibit emergent properties (and I’d discussed this before watching Moltbook happen).

Let’s look at this in more detail and explore the consequences and what to do.
1. The Tidal Wave of Vulnerabilities
More software is being, and will be, generated through AI-augmented engineering (not just “vibe coding”). There has always been a backlog of software needs in enterprises, and the cost of software production approaching zero will fill it. This doesn’t mean, as some commentators assert, that the need for software engineers goes away. Rather, the nature of software engineering (vs. code writing) moves closer to its historic core: requirements, architecture, design, and the management of complexity.
While the latest models are getting better at producing software with fewer errors, including security flaws, they are far from perfect. Just like humans. Much work remains to conjoin LLM-driven software production with more predictable, declaratively driven approaches that conform to security invariants: the use of standard frameworks, tools, APIs, assured software supply chains, and other patterns that mitigate common classes of vulnerabilities.
AI is great at finding many (but not all) types of vulnerabilities, even vulnerabilities that have previously been undiscovered for many years in the face of other techniques. Many models are great at using tools, like fuzzing engines, to find even more vulnerabilities. Reverse engineering is becoming AI-driven to yield yet more return on effort.
So, with more software, an at-best-constant density of bugs, and a growing ability to find new and latent vulnerabilities, vulnerability numbers are going to increase dramatically (at least in the short to medium term).
2. Attackers Industrializing
Security’s dirty little secret is that there have always been more vulnerabilities that are exploitable than are actually exploited by attackers. There are many reasons for this. Many vulnerabilities, while exploitable, do not immediately provide a benefit to an adversary. It’s hard to develop a chain of exploitable vulnerabilities that realizes an action on an objective with sufficient return for attackers, who have bosses and/or budgets like the rest of us.
Despite automation and their complex economic value chains, attackers are relatively under-resourced to exploit more of the targets of opportunity that present themselves now, and will do so even more in future.
AI changes all this. More of the growing number of vulnerabilities will become economically exploitable, and they will be more easily chained together, letting attackers reach quicker and more decisive actions on objectives.
3. The Quest for Authenticity
We are already in a world where it’s hard to tell what voice, video, image or other content is AI-generated or not. Arguably, models to detect fake content will remain effective but they have to be integrated into processes to be useful.
We are seeing fake job candidates, fake workers landing roles, and existing workers holding multiple roles across many organizations - all supported by AI.
We will find it harder to trust any interaction with content, companies or people in our personal, business or civic lives.
Our ultimate quest as individuals and businesses will be for authenticity. This is not just to know what is truthful but also to be able to stand behind something being produced by humans as being of higher value.
4. Enterprise Agentic Control Plane
We are rightly concerned about the safe and secure deployment of AI in our lives and businesses. Mitigating many of the risks like model poisoning, prompt injection and so on are vital. But so is paying attention to the second order risks of what emerges from agentic and other AI use. I’ve covered some of this here.
We must ingrain controls and provision identities, privileges, and defined action limits for agents and their use of tools. But we also need to control and observe the collective behavior of the trillions of agents that will inhabit our enterprises, and the thousands or more that will inhabit our personal lives.
Think of it this way: I can prescribe a control policy for a specific agent, such as a supply chain management agent authorized to order items. I can do the same for an agent that works my accounts payable process, and for one that does inventory reconciliation. All might be well individually. But if those agents have a path to communicate with each other and decide, for whatever reason (remember, these are probabilistic behaviors with imperfect guard-rails), to place an order of supplies, keep it off-book, and treat it as a pre-approved break in inventory reconciliation, you might never know things were going awry. That is, unless you enforce deterministic policies or other invariants across the behavior of the set of agents as a whole. This is what an enterprise control plane for agents will need to provide.
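The cross-agent invariant idea can be sketched in code. This is a minimal, hypothetical illustration (the action types and the invariant itself are assumptions, not any real product’s API): each agent acts within its own policy, but a deterministic check over their combined activity catches the off-book collusion described above.

```python
# Hypothetical sketch of a cross-agent invariant in an enterprise control
# plane: individual agents may each act within their own policy, but a
# deterministic check over their combined activity catches collusion such
# as an unbacked "pre-approved" reconciliation break.
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent: str   # e.g. "supply-chain", "accounts-payable", "inventory"
    kind: str    # e.g. "order", "reconciliation-exception"
    ref: str     # shared business reference, e.g. a purchase order id

def violations(actions: list[AgentAction]) -> list[str]:
    """Deterministic invariant: every reconciliation exception must trace
    back to an order that the ordering agent actually recorded."""
    orders = {a.ref for a in actions if a.kind == "order"}
    return [
        f"unbacked exception {a.ref} raised by {a.agent}"
        for a in actions
        if a.kind == "reconciliation-exception" and a.ref not in orders
    ]

log = [
    AgentAction("supply-chain", "order", "PO-100"),
    AgentAction("inventory", "reconciliation-exception", "PO-100"),
    AgentAction("inventory", "reconciliation-exception", "PO-999"),  # off-book
]
print(violations(log))  # flags only PO-999
```

The point is that the invariant is enforced deterministically over the collective record, regardless of what any individual agent “decided”.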
___________________________________________________________________________
So, things look pretty bleak. We will have more software with more discoverable vulnerabilities. We will have attackers who are less resource constrained to exploit those in more effective ways. We won’t know, at scale, what is authentic in any part of our lives and businesses and there will be trillions of agents interacting in unpredictable ways. Pretty pessimistic outlook, right? So why am I still wildly optimistic for the future?
The Value of Strong Defenses
It might take AI to fight AI in some respects, but the biggest defense against attacks by humans, humans with AI, or AI alone is a set of strong baseline defenses. Such a baseline includes:
Strong authentication.
Layered defenses and segmentation.
Fast patching of critical vulnerabilities.
Effective detection and response.
Architecturally reducing the blast radius of potential events.
Managing identity and privilege.
Binary authorization and software allow listing.
Software supply chain controls.
Configuration / baseline hardening.
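As one illustration of a control from this baseline, binary authorization / software allow listing reduces to a default-deny digest check before execution. A minimal, hypothetical sketch (the digests and binary names are invented for illustration):

```python
# Illustrative sketch of software allow listing: before execution, a
# binary's digest is checked against an approved list; anything not
# explicitly approved is refused by default (default-deny).
import hashlib

ALLOW_LIST = {
    # digests of approved builds (illustrative values)
    hashlib.sha256(b"approved-build-v1").hexdigest(),
}

def may_execute(binary: bytes) -> bool:
    """Return True only if the binary's digest is explicitly approved."""
    return hashlib.sha256(binary).hexdigest() in ALLOW_LIST

print(may_execute(b"approved-build-v1"))  # True
print(may_execute(b"unknown-binary"))     # False
```

Real systems sign and distribute the allow list itself, but the control's essence is this default-deny comparison.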
Given the increasingly relentless ability of attackers to find, exploit, and monetize targets of opportunity, it becomes vital for defenders to be equally relentless in making sure their baseline of controls is universally applied, rigorously kept up to date, and enforced.
Such a strong baseline is not easy to implement for many organizations, and it’s even harder to institutionalize the operational discipline to keep it implemented. But this has to be done.
Continuous Control Monitoring - Table Stakes
Relentless consistency in ensuring the baseline of controls, augmented with more advanced controls on specifically critical assets, doesn’t just happen out of wishful thinking. It happens by implementing a deeply enshrined practice of continuous control monitoring. Even more important is not just finding breaks in expected control deployment or performance, but adopting a “control reliability engineering” practice: diagnose root causes, learn from errors, and institute automation to reduce the likelihood of recurrent failures.
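At its core, a continuous control monitoring loop is simple: hold an expected baseline, observe actual state on a cadence, and surface every break for control-reliability-engineering follow-up. A hypothetical minimal sketch (control names and values are illustrative):

```python
# Minimal sketch of continuous control monitoring: compare the expected
# control baseline against an observed snapshot and report every break.
EXPECTED = {
    "mfa_enforced": True,      # illustrative control names and targets
    "disk_encryption": True,
    "patch_sla_days": 7,
}

def control_breaks(observed: dict) -> list[str]:
    """Return a human-readable line for every control out of compliance."""
    breaks = []
    for control, expected in EXPECTED.items():
        actual = observed.get(control)
        if actual != expected:
            breaks.append(f"{control}: expected {expected!r}, observed {actual!r}")
    return breaks

snapshot = {"mfa_enforced": True, "disk_encryption": False, "patch_sla_days": 30}
for b in control_breaks(snapshot):
    print(b)
```

The control-reliability-engineering step then asks *why* each break recurred and automates the fix, rather than just re-closing the ticket.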
Seizing the Defenders' Advantage
In the short term there will be moments when it seems like attackers have a perpetual advantage over defenders. But, over time AI will be more of an advantage for defenders than attackers. Defenders have a structural advantage in that they have control of their environment. They have data on their environment and specific context.
Specificity is a home field advantage for defenders where attackers mostly have generality.
But defenders need to actually adopt AI to augment and then radically reimagine their capabilities. This can be done in many ways:
Iteratively improving workflows. Don’t underestimate the combined transformational effect on an organization of tens or hundreds of small gains in workflow productivity from AI.
Finding and fixing vulnerabilities before attackers do.
Similarly, using AI for configuration management and drift detection. The same applies to privilege.
The deployment of swarms of defensive agents will be profound.
In addition to the defensive adoption of AI, defenders also have an advantage in designing their environments for defensibility - across prevention, detection and response / recovery.
It’s trite to say attackers have an advantage because they only have to be right once and defenders have to be right all the time. Reframing this, defenders can make it so attackers have to remain undetected constantly to succeed. Defenders can architect an environment where potential attacker movement is pushed into areas bristling with sensors.
Similarly, defenders can increasingly use deception technology to raise the workload and uncertainty of AI driven attacks. They can use AI to auto-respond to attacks. They can also use AI to radically improve the knowledge of legitimate IT flows to enable more aggressive segmentation which reduces the blast radius and increases containment and response options for even initially successful attacker footholds.
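Deception can be as simple as high-fidelity tripwires that cost attackers time and certainty. A hypothetical sketch of decoy-credential detection (the account names are invented): decoys are planted and never used legitimately, so any touch is a near-zero-false-positive signal.

```python
# Illustrative deception sketch: decoy credentials are seeded into the
# environment and never used legitimately, so any authentication attempt
# against one is a high-fidelity detection signal that also raises the
# workload and uncertainty of automated, AI-driven attacks.
DECOYS = {"svc-backup-legacy", "admin-dr-test"}  # planted decoy accounts

def auth_attempt(username: str) -> str:
    """Route decoy touches to alerting; everything else proceeds normally."""
    if username in DECOYS:
        return f"ALERT: decoy account {username} touched"
    return "normal auth path"

print(auth_attempt("svc-backup-legacy"))  # fires an alert
print(auth_attempt("alice"))              # normal auth path
```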
Finding and Fixing Whole Classes of Vulnerabilities
It becomes even more important to prioritize lightning fast fixes of known critical vulnerabilities, or chains of such vulnerabilities that together would represent a critical exposure. It’s also vital that organizations adopt the same, or better, vulnerability discovery technology to find their own vulnerabilities. Additionally, defenders can aggressively adopt auto-remediation capabilities to increase the tempo of developing, testing and widely deploying fixes.
In a world of more, and more frequently discovered and exploited, vulnerabilities, the ability to work the 80/20 is essential. That is, finding the 20% of work that can remove 80% of current and future potential vulnerabilities by deploying mitigations to whole classes of them. These could be tools and frameworks enforced across a code base to mitigate common vulnerabilities like SQL injection, CSRF, XSS, and many others. It could be implementing memory safe libraries, enforcing more stringent protective compiler flags, and migrating some critical code to memory safe languages.
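A concrete example of mitigating a whole vulnerability class: parameterized queries, enforced code-base-wide by a framework or lint rule, make SQL injection inert at every call site rather than fixing one bug at a time. A minimal sketch using Python’s standard sqlite3 module:

```python
# Whole-class mitigation example: a parameterized query keeps
# attacker-controlled input as data, never as SQL text, eliminating
# SQL injection for every call site that uses this pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The "?" placeholder binds `name` as a value; it cannot alter the query.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))        # [('admin',)]
print(find_user("' OR '1'='1"))  # [] -- the injection attempt is inert
```

Mandating this pattern (and rejecting string-built SQL in review or CI) is the 80/20 move: one enforcement decision, one entire class of bugs retired.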
Speed Wins: Agents and the OODA Loop
Speed is everything. In particular: can defenders run their OODA (observe, orient, decide, act) loop faster than attackers can adapt? This is easier said than done. Carefully conceived, orchestrated, and constantly tested agents that enhance all major security and related activities become essential.
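A defensive agent’s OODA loop can be sketched as a simple pipeline. This is an illustrative, hypothetical example (the telemetry shape, severity threshold, and response actions are all assumptions): each pass observes telemetry, orients it against asset context, decides on an action, and acts.

```python
# Hypothetical OODA-loop pipeline for a defensive agent. Each stage is
# deliberately tiny; the point is the shape of the loop, which must cycle
# faster than an attacker can adapt.
def observe(telemetry):
    # Observe: pull in the events worth looking at (severity threshold assumed).
    return [e for e in telemetry if e["severity"] >= 3]

def orient(events, asset_context):
    # Orient: enrich events with environment-specific context, the
    # defender's home-field advantage.
    return [{**e, "critical": asset_context.get(e["host"], False)} for e in events]

def decide(assessed):
    # Decide: pick a containment action for events on critical assets.
    return ["isolate:" + e["host"] for e in assessed if e["critical"]]

def act(decisions):
    # Act: execute (here, just echo) the chosen responses.
    return [f"executed {d}" for d in decisions]

telemetry = [
    {"host": "db01", "severity": 5},
    {"host": "kiosk7", "severity": 2},
]
context = {"db01": True}  # db01 is a critical asset
print(act(decide(orient(observe(telemetry), context))))
```

In practice each stage would be its own orchestrated, constantly tested agent; the speed of the whole loop, not any one stage, is what wins.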
Red Teaming for All
Red teaming, actual red teaming rather than penetration testing or vulnerability scanning, will become a minimum standard practice for all organizations. Red teaming yourself closer and closer to perfection will, driven by AI, become viable as the cost of such services collapses.
Red team yourself closer to perfection.
But, naturally, this only works if you have the speed to fix the issues discovered and to formulate and deploy structural changes to raise the difficulty level for the AI-driven red teams.
Don’t Trust and Verify More
Our new world of fake everything means we can trust less and less. Even when we want to trust our employees, we might not know whether the employees we once trusted are still who they were. Our enterprises, and our lives, will be constantly assaulted at all levels by scams, frauds, and disinformation. Armoring ourselves and the processes we inhabit against this will become a required default. Implementing ambient controls that mitigate these risks, where defense blends in, will be a differentiator, since we cannot afford to respond to these issues with too much friction.
Bottom line: The compounding set of changes we are experiencing in cybersecurity is deeply concerning. But this is a transition point. We should be short term pessimistic about the risks we face. At the same time the opportunities for defenders are dramatic. Yes, attackers can find and exploit more vulnerabilities but at the same time defenders (who outnumber attackers by orders of magnitude) can also use the same techniques to find and fix vulnerabilities at increasing scale. As defenders we might be worried about a world of AI-driven attacks marshaling new vulnerabilities. But attackers should be just as worried about the exponentially compounding scale of defenders drying up the long term supply of vulnerabilities. In all of this there is still only one fundamental defense - that is relentless speed. In the end, despite the short term pessimism, I remain wildly optimistic for the future.

