Seven-Layer Governance for Agentic AI
The threat model, architecture, and enforcement details behind SharkRouter — mapped control-for-control against OWASP, MITRE ATLAS, NIST AI RMF, EU AI Act, and ISO/IEC 42001. The summary below is free to read; the full PDF is watermarked and sent on request.
Table of Contents
- 01. Threat Model for Agentic AI
- 02. The Seven-Layer Governance Architecture
- 03. ToolGuard — Deterministic Tool-Call Governance
- 04. Warden — Continuous AI Security Assurance
- 05. Compliance Posture — OWASP, NIST, EU AI Act, ISO/IEC 42001
- 06. Cryptographic Audit Chain
- 07. Deployment Models — SaaS, Hybrid, Air-Gapped
Threat Model for Agentic AI
The attack surface of an AI agent is not its model — it is the full chain of prompts, tools, memories, retrieved documents, and downstream API calls. We classify the threats across seven layers and map each to the governance control that stops it.
- Direct and indirect prompt injection (OWASP LLM01; DeepMind agent-traps taxonomy)
- Tool-call hijacking and excessive agency (OWASP LLM06 2025)
- Memory and context poisoning (vector-embedding weaknesses — OWASP LLM08 2025)
- Supply-chain compromise of models, datasets, and MCP servers
- Output exfiltration and sensitive-information disclosure (OWASP LLM06 2023-24)
The Seven-Layer Governance Architecture
SharkRouter sits between AI agents and the systems they touch as a deterministic data plane. Every request passes through seven enforcement layers before reaching an LLM, tool, or storage system — and every response passes through the same layers in reverse.
- Layer 1: Identity and A2A authorization (SSO + signed agent passports)
- Layer 2: PII detection, classification, and tokenization (SHARK-PII)
- Layer 3: Prompt-injection and trap defense (TrapDefense + DeepMind taxonomy)
- Layer 4: Semantic router and cost controls
- Layer 5: ToolGuard policy enforcement on tool and MCP calls
- Layer 6: Output Assurance (deterministic post-execution verification)
- Layer 7: Cryptographic audit chain (hash-chained, tamper-evident)
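The round-trip described above can be sketched as a middleware-style pipeline: the request passes through layers 1 to 7 on the way in, and the response passes back through 7 to 1 on the way out. This is an illustrative sketch only — the layer names, the `Exchange` type, and `govern` are assumptions, not SharkRouter's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Shorthand labels for the seven layers listed above (illustrative names).
LAYER_NAMES = [
    "identity", "pii", "trap_defense", "router",
    "toolguard", "output_assurance", "audit",
]

@dataclass
class Exchange:
    """One governed request/response pair, with a trace of layer visits."""
    request: dict
    response: Optional[dict] = None
    trace: list = field(default_factory=list)

def govern(ex: Exchange, upstream: Callable[[dict], dict]) -> Exchange:
    for name in LAYER_NAMES:               # layers 1..7 inspect the request
        ex.trace.append(f"in:{name}")
    ex.response = upstream(ex.request)     # the LLM, tool, or storage call
    for name in reversed(LAYER_NAMES):     # layers 7..1 inspect the response
        ex.trace.append(f"out:{name}")
    return ex
```

The ordering matters: the audit layer is last on the way in and first on the way out, so it sees both the fully governed request and the raw response before anything else rewrites it.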
ToolGuard — Deterministic Tool-Call Governance
Most AI security products classify prompts. ToolGuard governs actions. It inspects every tool call and MCP invocation, enforces per-role and per-tenant policies, and rejects calls that violate declared intent — before they execute.
- Pre-execution policy enforcement with declared intent
- Post-execution output assurance and result policies
- Shadow / Dry-Run / Enforce modes for safe rollout
- Fingerprint-backed institutional memory of known failure patterns
- Per-tenant policy isolation and JIT access flows
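A pre-execution check with the Shadow / Dry-Run / Enforce rollout modes can be sketched roughly as below. The policy schema, role names, and `check_call` function are assumptions for illustration, not ToolGuard's real policy language.

```python
# Hypothetical per-role policy: which tools each role may invoke.
POLICY = {
    "analyst": {"allowed_tools": {"search", "read_file"}},
    "admin":   {"allowed_tools": {"search", "read_file", "delete_file"}},
}

def check_call(role: str, tool: str, declared_intent: set, mode: str = "enforce"):
    """Decide a tool call before it executes.

    A call violates policy if the tool is outside the role's allowlist
    OR outside the agent's declared intent for this task.
    """
    allowed = POLICY.get(role, {}).get("allowed_tools", set())
    violation = tool not in allowed or tool not in declared_intent
    if not violation:
        return ("allow", None)
    reason = f"{tool!r} not permitted for role {role!r} or outside declared intent"
    if mode == "enforce":
        return ("reject", reason)    # blocked before execution
    return ("log_only", reason)      # shadow / dry-run: record, don't block
```

The shadow and dry-run modes return the same verdict as enforce mode but only log it, which is what makes a staged rollout safe: you can observe what *would* be blocked before turning enforcement on.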
Warden — Continuous AI Security Assurance
Warden is the offline, local-first scanner that benchmarks an organization's AI security posture across ten dimensions. It runs on your own code and config, produces a scored report, and maps every finding to the corresponding framework control.
- Ten-dimension scoring (data protection, agent governance, audit, supply chain, and more)
- Code, MCP, infrastructure, secrets, and dependency scanners
- Trap-defense audit against the DeepMind agent-traps taxonomy
- Competitor cross-check with framework coverage attribution
- Local-first — no code or data leaves your environment
Compliance Posture — OWASP, NIST, EU AI Act, ISO/IEC 42001
We do not claim compliance with frameworks; we map our enforcement points to them line-by-line. This section is the authoritative coverage matrix and matches the live /intelligence page control-for-control.
- OWASP Top 10 for LLM Applications — 2023-24 and 2025 revisions
- MITRE ATLAS tactics and techniques with per-control mapping
- NIST AI Risk Management Framework (AI RMF 1.0) crosswalk
- EU AI Act — Chapter III obligations for high-risk systems
- ISO/IEC 42001 AI management system alignment
Cryptographic Audit Chain
Every governance decision is written to a hash-chained audit log signed by a KMS-backed key. Because each entry commits to the hash of the one before it, tampering with any record breaks the chain from that point forward and is detected on verification; reconstruction is byte-for-byte reproducible. This is the evidence layer that turns "we have an AI policy" into "we can prove every decision we made."
- Hash-chained, tamper-evident log of every governance decision
- KMS-backed signing (local, AWS KMS, or HashiCorp Vault)
- Byte-for-byte reproducible replay for incident forensics
- Export to SIEM, S3, or on-prem audit sinks
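The hash-chaining idea itself is standard and fits in a few lines. This sketch shows only why altering any entry invalidates everything after it; SharkRouter's actual log format and the KMS/Vault signing step (which would sign each entry's hash) are not shown.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append(chain: list, decision: dict) -> list:
    """Append a decision; its hash commits to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"prev": prev, "decision": decision}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    chain.append({"prev": prev, "decision": decision, "hash": digest})
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps({"prev": prev, "decision": entry["decision"]},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

Canonical serialization (here, `sort_keys=True`) is what makes replay byte-for-byte reproducible: the same decisions always hash to the same chain.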
Deployment Models — SaaS, Hybrid, Air-Gapped
SharkRouter runs where your data does. The same product can be deployed as a multi-tenant SaaS, a hybrid model with customer-held keys, or a fully air-gapped on-prem appliance. This section walks through the operational trade-offs of each.
- SaaS — multi-tenant, managed, instant provisioning
- Hybrid — BYOK with customer-managed KMS
- On-prem — fully air-gapped, licensed appliance
- Kubernetes, Docker Compose, and bare-metal deployment topologies
- HA, scaling, and disaster-recovery guidance
Request the full whitepaper
The full PDF contains every architecture diagram, policy example, audit-chain walkthrough, and per-control framework mapping summarized above — plus the threat model appendix and deployment reference configurations.
- Seven-layer governance architecture with ToolGuard internals
- Full framework mapping with per-control coverage
- Threat model and trap-defense taxonomy
- Compliance posture across OWASP, NIST, EU AI Act, ISO/IEC 42001
- Reference deployment topologies and audit-chain details
Request-only. Each copy is individually watermarked and sent by our team after review — no instant download, no spam.
SEE THE LIVE COVERAGE MATRIX
Every control from this whitepaper, mapped to source
The /intelligence page is the living version of Section 05. Every framework entry links to its canonical source, to the product enforcement point, and (from here) back into the relevant whitepaper chapter.
Open the coverage matrix →