
High Frequency Trading and Lessons for Agentic AI

Phil Venables

I suspect I’m not the only former or current financial markets technologist who sees parallels between the world of high frequency / algorithmic trading controls and what is needed for appropriate deterministic guardrails around our mostly non-deterministic agentic AI systems. As we transition from chatbots to systems of agents that don't just talk but act, we are entering a regime of automated risk that the financial markets have navigated, mostly successfully, for decades.


High-Frequency Trading (HFT) provides a blueprint for managing autonomous, high-velocity systems. By applying the hard-won lessons of HFT failures like the 2010 Flash Crash, the Knight Capital collapse, and other examples, we can build agentic frameworks that are resilient to goal drift and automated cascading failures. Perhaps we’ll be able to avoid an impending “agentic flash crash”, or at least know what to do when we pick up the pieces in the aftermath of one.


Note: the purpose of this post is to draw an analogy, to remind us that many of the controls we need to build (and keep building) into agentic AI systems are not all new ground. But this is not a complete analysis. I’m well aware that there are myriad HFT controls I could have mentioned, and many agentic controls already being applied that draw upon them. The incidents I cover are much abbreviated, not a thorough review of what amounts to hundreds of pages of post-mortems. Indeed, I worked on one of them and dug deep into the others when they occurred.


Velocity and Agency

In HFT, the risk is velocity. An algorithm can lose millions in seconds. In agentic AI, the risk is agency. An agent can autonomously plan, access sensitive tools, and execute multi-step workflows across systems. 


The industry is currently moving from simple output filtering (checking what an AI says) to runtime guardrails (checking what an AI does). We are seeing the rise of frameworks like AARM, the OWASP Top 10 for Agentic Applications, and the work of the Cloud Security Alliance.


In the rapidly unfolding world of agentic workflows, our risk is both velocity and agency.


HFT Controls: A Template for Agentic Safety

The core principles of HFT risk management, standardized by SEC Rule 15c3-5 (the Market Access Rule), the FIA's best practices, and many others, map remarkably well to AI agents. For example:


  • Pre-Execution Action Gates. Validate whether a proposed action (e.g. a $10k refund or a database deletion) stays within a "safety envelope". You cannot reliably, with high assurance for critical actions, use a solely non-deterministic system (such as an LLM-as-a-judge) to monitor another non-deterministic system.


  • Circuit Breakers (Deterministic Output Gating). While some safety research focuses on probabilistic monitors that trip when latent internal states drift, this is computationally expensive and many research challenges remain. True HFT circuit breakers are strictly deterministic. We must map these to Deterministic Output Gating, hardcoded, immutable limits on external actions (e.g. maximum daily API expenditure, or restricting agents to specific time-bound privileges, etc.) that completely bypass the AI's internal reasoning loop.


  • Kill Switches (Process and Network-Layer Isolation). Simply revoking an agent’s short-lived authentication credentials is a dangerous assumed ability. It will not reliably stop a runaway process if the agent has already spawned autonomous sub-processes. True kill switches require process and network-layer isolation, terminating the containerized environment hosting the agent and severing its outbound network access entirely. 


  • Rate & Scope Limiters. Prevent API hammering and recursive loops where an agent calls a tool thousands of times per minute.


  • Post-Trade Analysis (Drop-Copy Auditing). Beyond merely auditing the chain of thought post-hoc, we must introduce the HFT concept of Drop-Copy. This requires real-time, out-of-band replication of every action an agent takes to a WORM (Write Once, Read Many) or equivalent secure storage vault that the agent has no means to modify.
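The controls above can be sketched as a thin deterministic policy layer that sits between the agent and its tools, entirely outside the model's reasoning loop. This is an illustrative sketch, not a production design: the limit values, the "refund" action name, and the `drop_copy` log path are all hypothetical.

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class ActionGate:
    """Deterministic pre-execution gate: hardcoded limits that the
    agent's reasoning loop cannot see, argue with, or modify."""
    max_refund_usd: float = 100.0          # hypothetical safety envelope
    max_calls_per_minute: int = 60         # rate & scope limiter
    daily_spend_limit_usd: float = 500.0   # circuit breaker
    _daily_spend: float = 0.0
    _call_times: list = field(default_factory=list)

    def check(self, action: str, amount_usd: float = 0.0) -> bool:
        now = time.monotonic()
        # Rate limiter: refuse calls beyond the per-minute budget.
        self._call_times = [t for t in self._call_times if now - t < 60]
        if len(self._call_times) >= self.max_calls_per_minute:
            return False
        # Circuit breaker: hard daily expenditure cap.
        if self._daily_spend + amount_usd > self.daily_spend_limit_usd:
            return False
        # Action-specific envelope check.
        if action == "refund" and amount_usd > self.max_refund_usd:
            return False
        # Only record state for actions that were actually allowed.
        self._call_times.append(now)
        self._daily_spend += amount_usd
        return True

def drop_copy(record: dict, log_path: str = "agent_actions.log") -> None:
    """Drop-copy: append-only, out-of-band record of every action.
    In production this would stream to WORM storage the agent cannot reach."""
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

gate = ActionGate()
allowed = gate.check("refund", amount_usd=250.0)  # exceeds the envelope
```

Note the design choice: a denied action leaves the gate's state untouched, so a blocked agent cannot burn the rate or spend budget by retrying.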


Agentic Security Lessons from HFT Incidents

Major HFT incidents provide specific warnings for AI agent developers:


Knight Capital (2012)


  • The Incident: A manual deployment left "dead code" active on one server, which proceeded to buy high and sell low, losing $460 million in 45 minutes.


  • Agentic Lesson: While it's true agents shouldn't have access to dormant APIs or deprecated test tools, the core failure was a catastrophic lack of deployment synchronization across distributed servers. The true lesson is Immutable Infrastructure and Version Synchronization. We need strict, staged deployment pipelines to ensure that distributed agent swarms are running identical logic, prompts, and toolsets across all nodes to prevent unpredictable bifurcated behaviors.
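One way to operationalize version synchronization is to refuse traffic until every node in the swarm reports an identical fingerprint of everything that defines its behavior. A minimal sketch, assuming a hypothetical deployment where each node can report its prompt, tool manifest, and model version:

```python
import hashlib
import json

def deployment_fingerprint(prompt: str, tool_manifest: dict,
                           model_version: str) -> str:
    """Hash everything that defines an agent node's behavior, with
    stable key ordering so identical configs always hash identically."""
    blob = json.dumps(
        {"prompt": prompt, "tools": tool_manifest, "model": model_version},
        sort_keys=True,
    )
    return hashlib.sha256(blob.encode()).hexdigest()

def fleet_is_synchronized(node_fingerprints: list) -> bool:
    """Refuse to enable traffic unless every node reports the same
    fingerprint. The Knight Capital failure mode was exactly one
    divergent server running stale logic."""
    return len(set(node_fingerprints)) == 1

fp_a = deployment_fingerprint("v2 prompt", {"refund": "v2"}, "model-2025-01")
fp_b = deployment_fingerprint("v2 prompt", {"refund": "v1"}, "model-2025-01")
```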


The Flash Crash (2010)


  • The Incident: Algorithms simultaneously withdrew from the market when they hit their risk limits, causing a liquidity vacuum and a trillion-dollar drop.


  • Agentic Lesson: The danger isn't just a synchronized enterprise freeze caused by a shared safety trigger. We must design against uncontrollable Multi-Agent Feedback Loops. The real danger is a situation where Agent A's output becomes Agent B's malicious input, causing a runaway localized downward spiral (e.g. autonomous pricing agents continuously undercutting each other until prices reach zero).
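The pricing-spiral example can be made concrete with a deterministic floor, the same role a market-wide circuit breaker plays in HFT: a hard limit outside the agents' optimization loop that breaks the feedback cycle. The undercut percentage and floor value here are hypothetical.

```python
def propose_price(competitor_price: float, undercut_pct: float = 0.01,
                  price_floor: float = 10.0) -> float:
    """Naive undercutting policy with a deterministic floor.
    Without the floor, two agents running this against each other
    drive the price toward zero in a runaway feedback loop."""
    proposed = competitor_price * (1 - undercut_pct)
    return max(proposed, price_floor)

# Two agents undercutting each other converge to the floor, not to zero.
price_a, price_b = 100.0, 100.0
for _ in range(1000):
    price_a = propose_price(price_b)
    price_b = propose_price(price_a)
```

The floor does not make the undercutting behavior smart; it just bounds the blast radius when two locally-rational agents interact pathologically.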


Goldman Sachs Options Error (2013)


  • The Incident: A coding error sent approximately 16,000 mispriced orders. Compounding this, a human operator lifted the (correctly) tripped circuit breaker blocks without fully understanding the risk, exacerbating the loss.


  • Agentic Lesson: When an agent hits a guardrail and asks for human-in-the-loop (HITL) approval, the human must be given a "context bundle" that clearly explains why the guardrail tripped. Blindly clicking 'Approve' is the modern equivalent of lifting an HFT circuit breaker.
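A context bundle can be as simple as a structured record that makes "why did this trip?" impossible for the approver to ignore. The field names below are hypothetical, a sketch of the minimum an approver should see before lifting a block:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ContextBundle:
    """What a human approver sees when an agent trips a guardrail."""
    agent_id: str
    attempted_action: str
    guardrail_tripped: str      # which deterministic limit fired
    limit_value: str
    observed_value: str
    recent_actions: list        # short trail leading up to the trip
    blast_radius: str           # what approving this could affect

def render_for_approver(bundle: ContextBundle) -> str:
    """Serialize the bundle for the approval UI or ticket."""
    return json.dumps(asdict(bundle), indent=2)

bundle = ContextBundle(
    agent_id="refund-agent-7",
    attempted_action="refund $2,400 to customer 81529",
    guardrail_tripped="max_refund_usd",
    limit_value="$100",
    observed_value="$2,400",
    recent_actions=["looked up order", "computed refund", "called refund tool"],
    blast_radius="direct ledger write; irreversible without manual reversal",
)
rendered = render_for_approver(bundle)
```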


Bottom line: The future of Agentic AI isn't just about smarter models; it’s about sturdier architecture. We should treat AI agents like high-frequency trading systems. They require pre-computed limits, real-time monitoring, and automated isolation. By borrowing the Market Access mindset, we can ensure that when our agents start "trading" in real-world actions, they don't trigger an agentic flash crash or take your balance sheet with them in a swarm of misaligned activity.
