Security Implications of DORA AI Capabilities Model


The DORA AI Capabilities Model (DevOps Research and Assessment, not the EU Digital Operational Resilience Act) report is well worth a read, not just for a perspective from the developer community but for the many security implications it uncovers. This post is a summary of the explicit findings and some of the broader implications from reading between the lines of the report.



1. Data Protection and Access Control

A primary security concern is ensuring AI tools respect existing permissions and access controls on sensitive data. Organizations must address the major obstacle of controlling which AI services users connect to and ensuring those services are themselves secure.


Specific safeguards recommended to protect sensitive data include:



  • Ensuring that automated retrieval mechanisms (like Retrieval-Augmented Generation) operate with the user’s own credentials, guaranteeing the system can only access documents the user is already authorized to access (see the sketch after this list).


  • Channeling AI requests through an approved, centralized proxy or Model Context Protocol (MCP) server that is vetted and monitored to mitigate data exfiltration risks and ensure adherence to company security policy.


  • Defining prohibited uses, such as inputting any customer's personally identifiable information (PII) or trade secrets into a public AI model.
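
A minimal sketch of the first safeguard above: retrieval that runs with the requesting user's own credentials rather than a broad service account, so the RAG pipeline can only surface documents the user could already open. The search endpoint, function names, and response format are illustrative assumptions, not something prescribed by the report.

```python
# Sketch: user-scoped retrieval for a RAG pipeline. All endpoint and field
# names are hypothetical; the point is that the user's own token (not a
# privileged service account) is what the document store sees.
import requests

SEARCH_ENDPOINT = "https://docs.internal.example.com/api/search"  # hypothetical internal service


def retrieve_context(query: str, user_token: str, max_docs: int = 5) -> list[str]:
    """Fetch candidate documents using the requesting user's credentials."""
    resp = requests.get(
        SEARCH_ENDPOINT,
        params={"q": query, "limit": max_docs},
        headers={"Authorization": f"Bearer {user_token}"},  # existing ACLs apply
        timeout=10,
    )
    resp.raise_for_status()
    # The document store enforces its normal permission checks, so results
    # never include anything the caller could not read directly.
    return [hit["snippet"] for hit in resp.json().get("results", [])]


def build_prompt(question: str, user_token: str) -> str:
    """Assemble a prompt from only the documents this user is allowed to see."""
    context = "\n---\n".join(retrieve_context(question, user_token))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```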


2. Risk Mitigation and Safety Nets

As AI increases the velocity and volume of changes, strong version control practices are critical. Version control acts as the essential safety net that enables confident experimentation while mitigating the risks of AI-generated code, which the report associates with increased software delivery instability.


Furthermore, ambiguity around AI usage creates risk. A clear and communicated AI stance helps reduce this risk, provides the necessary psychological safety for developers to experiment confidently, and can assuage data privacy concerns.


3. Governance, Compliance, and Guardrails

A quality internal platform is essential for scaling AI benefits securely across the organization, providing the necessary guardrails and shared capabilities. These platforms ensure applications are built, tested, and deployed securely and in a compliant way.


Explicit security controls integrated into the workflow include:


  • Requiring human-in-the-loop review of, at a minimum, critical AI-generated code.


  • Ensuring the platform helps teams follow required processes, such as security sign-offs (a simple enforcement sketch follows this list).


  • Using version control to provide auditability and complete traceability of changes for compliance purposes.
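
One way a platform could enforce the sign-off process mentioned above is a CI gate that refuses to merge commits lacking an explicit security approval. The `Security-Reviewed-By:` commit trailer and `origin/main` base branch below are assumed conventions for illustration, not recommendations from the report.

```python
# Sketch: a CI gate that blocks a merge if any commit in the range lacks a
# security sign-off trailer. The trailer name and base branch are assumed
# team conventions, not prescribed by the report.
import subprocess
import sys

TRAILER = "Security-Reviewed-By:"   # assumed sign-off convention
BASE = "origin/main"                # assumed integration branch


def unsigned_commits() -> list[str]:
    """Return SHAs in the merge range whose message lacks the sign-off trailer."""
    shas = subprocess.run(
        ["git", "rev-list", f"{BASE}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    missing = []
    for sha in shas:
        message = subprocess.run(
            ["git", "log", "-1", "--format=%B", sha],
            capture_output=True, text=True, check=True,
        ).stdout
        if TRAILER not in message:
            missing.append(sha)
    return missing


if __name__ == "__main__":
    offenders = unsigned_commits()
    if offenders:
        print("Blocking merge; commits missing a security sign-off:")
        print("\n".join(offenders))
        sys.exit(1)
```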


4. Secure Code Development

Connecting AI to up-to-date internal data, documentation, and best practices helps the AI avoid suggesting deprecated functions and outdated patterns, thereby helping teams build more maintainable and secure code from the start.
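
One hedged illustration of keeping that context current: exclude anything flagged as deprecated before it is indexed for retrieval. The `.deprecated` marker file and `status: deprecated` front-matter convention are assumptions made up for this sketch.

```python
# Sketch: filter deprecated material out of the corpus an AI assistant
# retrieves from, so outdated patterns are never suggested as examples.
# The marker file and front-matter conventions are illustrative only.
from pathlib import Path

DOCS_ROOT = Path("docs")  # hypothetical internal documentation tree


def indexable_documents() -> list[Path]:
    """Return only documents that should inform AI suggestions."""
    keep = []
    for path in DOCS_ROOT.rglob("*.md"):
        # Skip anything under a directory explicitly marked deprecated ...
        if any((parent / ".deprecated").exists() for parent in path.parents):
            continue
        # ... and individual files whose front matter flags them.
        head = path.read_text(encoding="utf-8", errors="ignore")[:500]
        if "status: deprecated" in head:
            continue
        keep.append(path)
    return keep
```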


5. Broader Security Implications

In addition to the findings explicitly called out about security, there's a lot between the lines, both positive and negative.


Negative:


  • Amplified Security Vulnerabilities. The report emphasizes that AI is an amplifier, magnifying organizational dysfunctions. If a struggling organization with poor security practices adopts AI tools, the rapid increase in the volume and velocity of code generation will likely lead to more security vulnerabilities being introduced into the codebase, at a faster rate, compounding their instability.

 

  • Increased Blast Radius of Data Breaches. To function as a "specialized expert," AI tools are connected to vast amounts of proprietary company data, including codebases, architectural diagrams, and documentation. If the required least privilege security model fails, a single security lapse in the AI service could expose a massive collection of sensitive institutional knowledge, dramatically increasing the potential impact (blast radius) of a data exfiltration event compared to a single developer accessing only local project files. 


  • Context Poisoning Risk. The value of AI-accessible internal data relies on curating code context and avoiding the indexing of deprecated or "bad" examples. If an attacker or insider maliciously poisons this curated code curriculum by introducing flawed or insecure patterns into the gold-standard repositories, the AI will learn and replicate those bad examples, automatically integrating systemic security flaws into new code generated for other teams.


Positive:


  • Proactive Security Enforcement. A quality internal platform serves as a risk mitigator and provides automated, standardized, and secure pathways. This implies that security controls (e.g., dependency scanning, policy adherence checks, pre-commit hooks) can be enforced automatically and consistently across all projects through the platform, reducing the chance of human error and ensuring security is built into the delivery pipeline.


  • Enhanced Audit Trails for Agentic Workflows. The report suggests storing AI prompts and agent configuration files in version control to share knowledge and create an audit trail for agentic workflows. This practice provides a traceable, versioned history not just of what code changed, but the specific context and instructions (the prompt and configuration) that led the AI to generate the change, significantly improving the ability of security and compliance teams to investigate incidents or perform compliance reviews.
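
As a concrete (and purely illustrative) version of that audit-trail idea: record the prompt and agent configuration behind an AI-generated change as a small JSON artifact committed alongside the code. The directory layout and fields are assumptions, not a format the report specifies.

```python
# Sketch: persist the prompt and agent configuration that produced an
# AI-generated change as a versioned artifact, tied to the exact revision,
# so auditors can see the instructions behind the diff. Layout and field
# names are illustrative assumptions.
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

AUDIT_DIR = Path("ai-audit")  # committed alongside the code it describes


def record_ai_change(prompt: str, agent_config: dict, changed_files: list[str]) -> Path:
    """Write one audit entry; commit the file together with the generated code."""
    AUDIT_DIR.mkdir(exist_ok=True)
    head = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "commit": head,                 # revision the prompt was applied against
        "prompt": prompt,
        "agent_config": agent_config,
        "changed_files": changed_files,
    }
    out = AUDIT_DIR / f"{entry['timestamp'].replace(':', '-')}.json"
    out.write_text(json.dumps(entry, indent=2), encoding="utf-8")
    return out
```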


Bottom line: Leveraging AI in software development is like giving an organization a turbocharger. If the environment is weak, the added power will cause instability and failure. But if it's strong, the boost in performance, quality, and security will be significant. For security, this means pre-existing security flaws are compounded at speed, while robust security platforms and governance are amplified into systemic safeguards.







