Research Focus
The Central Question

Where does AI regulation fail to translate into organisational practice, and what are the structural reasons for that failure? And where does product security governance break down in ways that produce the same accountability gap: risk accepted indefinitely, exposure accumulated silently, defensible evidence absent when it matters? The focus in both cases is the implementation gap: what prevents well-designed governance from producing the outcomes it intends.

Research Areas
01
The AI PM Accountability Gap
How AI product accountability is currently distributed across product, engineering, legal, and compliance functions, and where that distribution creates unowned risk.
EU AI Act Arts. 9–17 · NIST RMF Govern · ISO 42001 Cl. 5
02
Regulatory Translation Failure
How AI regulatory obligations are communicated from legal and compliance functions to product teams, and whether current translation mechanisms produce actionable requirements or compliance theatre.
EU AI Act Conformity Assessment · NIST RMF Measure
03
Post-Deployment Monitoring Maturity
How organisations currently implement post-deployment AI monitoring, mapped against regulatory obligations: what is measured, at what cadence, by whom, and with what escalation authority.
EU AI Act Art. 72 · NIST RMF Manage · ISO 42001 Cl. 9
04
Agentic AI & Accountability Boundaries
How multi-agent AI architectures challenge existing accountability attribution models, specifically the boundaries between model provider, operator, deployer, and end user in agentic deployment contexts.
EU AI Act Agentic Provisions · FCA AI Guidance · ISO Standards
05
Product Security Governance Failure
How risk acceptance governance breaks down in product organisations: where vulnerabilities are systematically deprioritised, attack chains go uncorrelated, and the gap between reported compliance posture and actual exposure widens. This area examines the structural conditions that separate defensible risk management from compliance theatre.
OWASP SPVS · NIST CSF · EU AI Act Art. 9 · FCA PS7/23
Programme Status
Phase 1 · Foundation
Practitioner resource library live: roadmap, tool directory, and governance templates. Five articles published across two series.
Active
Phase 2 · Editorial
Series 1 (AI Accountability in Regulated Technology) complete. Series 2 (Product Security in AI Agentic Development) active.
Building
Phase 3 · Funding
Research funding requests in preparation. Doctoral programme active: literature review, coursework, and Inforte seminars underway at University of Vaasa.
Active
Phase 4 · Publication
Publishing findings openly, with practitioner-facing summaries and regulatory briefs.
Planned
Get in Touch
Research Status
Independent & Self-Funded

This research is conducted remotely alongside a senior business analysis role in financial technology. The author is actively seeking a remote Product Owner or Product Manager engagement in AI, fintech, or regulated technology: work that would align with and sustain this research programme over the long term.