The pattern is now common enough to be a category. A developer opens Claude, Cursor, or a similar agentic coding tool, describes what they want to build, and has a working application in two to four hours. The speed is real. The code runs. The product does what it was asked to do.
Then someone runs it through an OWASP scan. Several vulnerabilities. Sometimes more.
This is not a failure of the AI tool. The tool built what it was asked to build. The question is what nobody asked before the build started.
## What the Scanner Finds and What It Cannot
A compliance scanner checks against known vulnerability patterns: OWASP Top 10 for LLMs, the Agentic Top 10 published in 2026, CVE registries, static analysis rulesets. It is good at finding what it knows to look for.
What the scanner cannot find is the gap between what the application does and what your organisation's governance framework says it is permitted to do. That gap is not a code vulnerability. It is a design decision that was never made, or was made implicitly by a developer who had no way of knowing the relevant constraints.
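The distinction can be made concrete with a minimal sketch. Everything here is invented for illustration: a hypothetical list of tools the governance record permits, and a hypothetical trace of what the agent actually invoked. A pattern scanner has no signature for a tool call that is technically clean but outside the product's documented scope; a governance check does, because the scope was written down.

```python
# Hypothetical illustration; the names PERMITTED_TOOLS and
# observed_tool_calls are invented, not any real scanner's schema.

PERMITTED_TOOLS = {"search_docs", "summarise"}       # from the governance record

observed_tool_calls = ["search_docs", "send_email"]  # from an agent trace

# "send_email" matches no vulnerability pattern, so a scanner passes it.
# The governance check flags it because it falls outside documented scope.
scope_violations = [t for t in observed_tool_calls if t not in PERMITTED_TOOLS]
print(scope_violations)  # ['send_email']
```

The check itself is trivial; the hard part is that `PERMITTED_TOOLS` only exists if someone wrote the scope down before the build.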
The OWASP Agentic Top 10, developed through collaboration with over 100 industry experts and practitioners, makes this structural problem explicit.
## The Conversation That Should Have Happened First
Three functions hold distinct, non-overlapping knowledge that a compliance scanner needs before it can be configured meaningfully. In most organisations, the conversation that pools that knowledge does not happen before a build starts, or it happens across three separate processes that never connect.
| Role | What only they know | The scanner configuration question only they can answer |
|---|---|---|
| Product Manager | What the application is permitted to do, who it serves, what data it touches, what the acceptable parameters of its behaviour are | What does a behavioural violation look like for this specific product, and what constitutes a breach of its documented scope? |
| Solution Architect | How permissions translate into architectural constraints: which tools the agent can invoke, what credentials it can inherit, what the blast radius of a misconfiguration looks like | Which architectural boundaries encode those permissions, and where does the scanner need to enforce them at the tool and credential layer? |
| Risk Manager | Which regulatory obligations apply to this specific product: EU AI Act Article 9, DORA ICT risk requirements, what a regulatory examination would ask to see | Which scanner findings constitute regulatory exposures rather than technical findings, and what does the evidentiary record need to show? |
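One way to see why all three answers are needed is to express them as a single configuration. This is a sketch only: every key and value below is invented for illustration, not a real scanner's schema. The point is structural, in that each top-level section can only be filled in by one of the three roles.

```python
# Hypothetical scanner configuration; all keys and values are illustrative.
scanner_config = {
    # Product Manager: behavioural scope and what counts as a breach.
    "behavioural_scope": {
        "permitted_actions": ["answer_product_queries", "summarise_tickets"],
        "breach_definition": "any action outside permitted_actions",
    },
    # Solution Architect: the boundaries that encode those permissions.
    "architectural_boundaries": {
        "invocable_tools": ["ticket_api_read"],
        "credential_scope": "read_only_service_account",
    },
    # Risk Manager: which finding types are regulatory exposures,
    # and under which obligation.
    "regulatory_mapping": {
        "EU_AI_Act_Article_9": ["behavioural_scope"],
        "DORA_ICT_risk": ["architectural_boundaries"],
    },
}
```

Remove any one section and the scanner degrades: without the first it cannot define a violation, without the second it has nothing to enforce against, and without the third its findings have no regulatory weight.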
## The Design-Time Contract
The OWASP Agentic Top 10 is a taxonomy of what goes wrong after deployment. What it does not describe, because it is a security framework rather than a governance framework, is what must be produced before deployment to make the scanner's findings actionable.
That pre-build output is what this series has called the design-time contract: a documented record of what the agent is permitted to do, under what parameters, monitored by whom, and evidenced how. Without it, the scanner generates a finding list with no organisational owner and no clear remediation authority.
The question is not whether to run your AI-assisted application against OWASP. The question is whether the three people who should have been in the room before you built it have produced something the scanner can actually enforce.
If you are building AI-assisted products in a regulated environment and want to think through what the pre-build conversation looks like for your organisation, the email is open.