The pattern is now common enough to be a category. A developer opens Claude, Cursor, or a similar agentic coding tool, describes what they want to build, and has a working application in two to four hours. The speed is real. The code runs. The product does what it was asked to do.

Then someone runs it against OWASP. Several vulnerabilities. Sometimes more.

This is not a failure of the AI tool. The tool built what it was asked to build. The question is what nobody asked before the build started.

What the Scanner Finds and What It Cannot

A compliance scanner checks against known vulnerability patterns: OWASP Top 10 for LLMs, the Agentic Top 10 published in 2026, CVE registries, static analysis rulesets. It is good at finding what it knows to look for.

Speed is asymmetric. The build accelerates. The governance surface does not.
Design time: hours
Build time: hours
Test time: hours to days
OWASP vulnerability surface: fixed regardless of build speed
Regulatory obligation surface: fixed regardless of build speed
Evidentiary requirement: fixed regardless of build speed

Figure 1. The build accelerates. The compliance surface remains constant.

What the scanner cannot find is the gap between what the application does and what your organisation's governance framework says it is permitted to do. That gap is not a code vulnerability. It is a design decision that was never made, or was made implicitly by a developer who had no way of knowing the relevant constraints.

The OWASP Agentic Top 10, developed through collaboration with over 100 industry experts and practitioners, makes this structural problem explicit.

OWASP Agentic Top 10: selected risks that trace to design-time decisions
ASI01: Agent Goal Hijack. Attackers redirect agent objectives by manipulating instructions, tool outputs, or external content. Root cause: no documented permitted objectives to defend against deviation.
ASI02: Tool Misuse. Agents misuse legitimate tools through unsafe composition, recursion, or excessive execution. Root cause: tool boundaries not defined at design time, only assumed.
ASI03: Identity and Privilege Abuse. Delegated credentials operating far beyond their intended scope. Root cause: no named owner reviewed credential scope after deployment began.
ASI10: Rogue Agents. Autonomy that expands beyond intended boundaries without triggering oversight. Root cause: no monitoring mandate and no evidentiary record of intended scope.
Figure 2. Each risk traces to a governance decision that was not made before deployment. Source: OWASP Top 10 for Agentic Applications 2026.
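The root causes in Figure 2 share a shape: a boundary that was never written down cannot be enforced, or even detected when it is crossed. As a minimal illustration of what writing it down buys, the sketch below gates an agent's tool calls against a declared scope, so a call outside the documented boundary becomes a detectable event rather than silent drift. All names here (`AgentScope`, the tool names, the budget field) are hypothetical, not drawn from OWASP or any specific framework.

```python
from dataclasses import dataclass, field


@dataclass
class AgentScope:
    """Documented boundary for one agent: which tools it may invoke,
    plus a per-tool call budget to catch runaway recursion (cf. ASI02)."""
    name: str
    allowed_tools: frozenset
    max_calls_per_tool: int = 10
    _calls: dict = field(default_factory=dict)

    def authorize(self, tool: str) -> None:
        # A tool outside the documented scope is a scope violation with a
        # named owner and a defined response, not just a bug to triage.
        if tool not in self.allowed_tools:
            raise PermissionError(f"{self.name}: tool '{tool}' is outside documented scope")
        self._calls[tool] = self._calls.get(tool, 0) + 1
        if self._calls[tool] > self.max_calls_per_tool:
            raise PermissionError(f"{self.name}: call budget exceeded for '{tool}'")


scope = AgentScope(name="invoice-triage",
                   allowed_tools=frozenset({"read_invoice", "flag_invoice"}))
scope.authorize("read_invoice")      # inside documented scope: permitted
try:
    scope.authorize("issue_refund")  # never documented: raises PermissionError
except PermissionError as exc:
    print(exc)
```

Without the documented `allowed_tools` set, the `issue_refund` call would simply succeed; the violation exists only because the boundary was recorded first.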

The Conversation That Should Have Happened First

Three functions carry distinct, non-overlapping knowledge that a compliance scanner needs before it can be configured meaningfully. In most organisations, the conversation that pools that knowledge either does not happen before a build starts, or happens across three separate processes that never connect.

What each role knows that nobody else does, and what the scanner needs from them
Product Manager
What only they know: what the application is permitted to do, who it serves, what data it touches, what the acceptable parameters of its behaviour are.
The scanner configuration question only they can answer: what does a behavioural violation look like for this specific product, and what constitutes a breach of its documented scope?

Solution Architect
What only they know: how permissions translate into architectural constraints: which tools the agent can invoke, what credentials it can inherit, what the blast radius of a misconfiguration looks like.
The scanner configuration question only they can answer: which architectural boundaries encode those permissions, and where does the scanner need to enforce them at the tool and credential layer?

Risk Manager
What only they know: which regulatory obligations apply to this specific product: EU AI Act Article 9, DORA ICT risk requirements, what a regulatory examination would ask to see.
The scanner configuration question only they can answer: which scanner findings constitute regulatory exposures rather than technical findings, and what does the evidentiary record need to show?
Figure 3. None of these roles can configure a meaningful compliance scan alone. All three are required before a developer opens a code editor.

The Design-Time Contract

The OWASP Agentic Top 10 is a taxonomy of what goes wrong after deployment. What it does not describe, because it is a security framework rather than a governance framework, is what must be produced before deployment to make the scanner's findings actionable.

That pre-build output is what this series has called the design-time contract: a documented record of what the agent is permitted to do, under what parameters, monitored by whom, and evidenced how. Without it, the scanner generates a finding list with no organisational owner and no clear remediation authority.
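One way to make that record enforceable rather than aspirational is to express it as plain data a scanner can load. The sketch below is illustrative only: the field names, owners, and the minimal-completeness check are assumptions for this example, not a schema the series prescribes.

```python
# Hypothetical design-time contract, expressed as plain data so a scanner
# can load it. All field names and values are illustrative assumptions.
design_time_contract = {
    "agent": "invoice-triage",
    "permitted_objectives": ["classify inbound invoices",
                             "flag anomalies for human review"],
    "permitted_tools": ["read_invoice", "flag_invoice"],
    "behavioural_parameters": {"autonomy_limit": "no outbound payments"},
    "monitoring_owner": "risk-management",        # who reviews findings
    "credential_owner": "solution-architecture",  # who reviews scope drift
    "evidence": {"log_retention_days": 365, "review_cadence": "quarterly"},
    "regulatory_basis": ["EU AI Act Article 9", "DORA ICT risk"],
}


def findings_have_owners(contract: dict) -> bool:
    """A scanner's finding list is actionable only if the contract names
    what is permitted, who monitors, and how it is evidenced."""
    required = {"permitted_objectives", "permitted_tools",
                "monitoring_owner", "evidence"}
    return required.issubset(contract)


print(findings_have_owners(design_time_contract))  # True
print(findings_have_owners({"agent": "undocumented-bot"}))  # False
```

The check is deliberately trivial: the hard work is the three-role conversation that fills in the values, not the code that validates their presence.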

The question is not whether to run your AI-assisted application against OWASP. The question is whether the three people who should have been in the room before you built it have produced something the scanner can actually enforce.

Pre-build alignment card: one page, three columns, one question per role. Complete it before a build starts. Free to use with attribution to 4iGov.

If you are building AI-assisted products in a regulated environment and want to think through what the pre-build conversation looks like for your organisation, the email is open.

[email protected]