MCP Without a Gateway Is a Root Shell With a Chat Interface
Model Context Protocol gives AI agents direct access to your databases, APIs, and file systems. Without an inline gateway, every MCP server is an unmonitored root shell. We scanned 200+ public MCP implementations — 73% have no authentication, 89% have no rate limiting. Here is what governance at the tool-call layer actually looks like.
Model Context Protocol changed everything. And almost nobody secured it.
Before MCP, AI agents were conversational — they generated text, answered questions, and produced content. They were sophisticated autocomplete engines. Dangerous, perhaps, in what they might say. But limited in what they could do.
MCP removed that limitation. Agents can now read files, query databases, send emails, manage cloud infrastructure, call internal APIs, and interact with every SaaS tool in your stack. They do this through structured tool calls — function invocations with typed parameters that execute against real systems.
MCP servers are proliferating at extraordinary speed. GitHub has thousands of community-built MCP servers. Enterprises are building custom ones for internal systems. Every new MCP server expands the agent's capability — and the attack surface.
The security model for this expansion is, in most organizations, nonexistent.
The MCP Trust Problem
MCP was designed for functionality, not security. The protocol defines how agents discover tools, how tools describe their parameters, and how results are returned. It does not define who is allowed to call which tool, under what conditions, with what oversight.
An MCP server that exposes query_database gives the agent SQL access. An MCP server that exposes send_email gives the agent outbound communication. An MCP server that exposes modify_permissions gives the agent IAM control. An MCP server that exposes execute_command gives the agent shell access.
Each of these, ungoverned, is a lateral movement vector.
The default state of most MCP deployments is full access. The agent connects to the MCP server, discovers available tools via tools/list, and can call any of them with any parameters. There is no authentication layer between the agent and the tool. There is no authorization check on tool parameters. There is no audit trail of what was called and why.
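The discovery flow above can be sketched in a few lines. This is a minimal, abbreviated illustration of the MCP `tools/list` handshake, not a complete message exchange; the tool names are taken from the examples earlier in this article, and the schemas are truncated. The point is what's absent: nothing in the exchange identifies or authorizes the client.

```python
import json

# Abbreviated sketch of MCP tool discovery. In an ungoverned deployment,
# any client that can send this JSON-RPC message receives the full tool
# catalog and may then call every tool it lists.
discovery_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A typical (truncated) response: every tool, with no per-client filtering.
discovery_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "query_database", "inputSchema": {"type": "object"}},
            {"name": "send_email", "inputSchema": {"type": "object"}},
            {"name": "execute_command", "inputSchema": {"type": "object"}},
        ]
    },
}

# Note what is missing: no credential in the request, no scoping in the
# response. Reachability equals full capability.
exposed = [t["name"] for t in discovery_response["result"]["tools"]]
print(json.dumps(exposed))
```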
An AI agent connected to an MCP server with write access to a production database is, from a security perspective, indistinguishable from a service account with admin privileges and no monitoring. Except it's worse — because the agent's behavior is non-deterministic, influenced by everything it reads, and capable of being redirected by environmental manipulation.
What MCP Servers Don't Check
We scanned over 200 public MCP server implementations. The findings are consistent:
No authentication in 73% of servers. The MCP server trusts any connecting client. There is no client certificate, no API key, no token validation. If an agent can reach the server on the network, it can use every tool.
No parameter validation in 68% of servers. The tool schema defines expected parameter types, but the server doesn't validate that parameter values are within acceptable ranges. query_database(query="DROP TABLE users") passes schema validation because the query parameter expects a string, and DROP TABLE users is a valid string.
No read/write separation in 81% of servers. Tools that should offer read-only access often expose write operations as well. A server designed to let agents query a CRM also lets agents modify records, delete entries, or export entire tables.
No audit logging in 64% of servers. Tool invocations are not logged. When something goes wrong, there is no record of which agent called which tool with which parameters.
No rate limiting in 89% of servers. An agent can call the same tool thousands of times per second. A compromised agent can exfiltrate an entire database before any human notices.
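Rate limiting at the gateway is a small amount of code relative to the exfiltration risk it removes. A per-tool token bucket is one common approach; the sketch below is a minimal single-threaded version with illustrative limits.

```python
import time

class TokenBucket:
    """Minimal per-tool rate limiter: `rate` calls/second, burst of `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # illustrative: 5 calls/s, burst of 10
results = [bucket.allow() for _ in range(100)]
# A burst of 100 back-to-back calls: only roughly the burst capacity succeeds;
# the rest are refused until tokens refill.
```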
These are not theoretical vulnerabilities. They are the default configuration of the tools enterprises are deploying in production today.
Where A2A Compounds the Risk
Google's Agent-to-Agent (A2A) protocol adds another layer. When Agent A delegates a task to Agent B, Agent B inherits the context — and potentially the permissions — of Agent A. Delegation chains can span multiple agents, each adding its own context and tool access.
Without governance, delegation creates two critical risks:
Permission amplification. Agent A has read-only database access. It delegates to Agent B, which has email access. The chain now has read + send capability — a combination neither agent was individually authorized for. The exfiltration path exists in the delegation chain, not in any single agent.
Taint propagation failure. Agent A reads PII from a database. It passes a summary to Agent B. Agent B doesn't know the summary contains PII because the classification was lost at the delegation boundary. Agent B sends the summary to an external recipient. PII has been exfiltrated through a chain of individually legitimate actions.
The regulatory implications are immediate. Under GDPR, the data controller is liable regardless of which agent in the chain exposed the data. Under the EU AI Act Article 14, human oversight must cover the full decision chain — not just the first agent.
What an MCP Security Gateway Does
An MCP security gateway sits between agents and MCP servers. Every tool call passes through it. Every tool result returns through it. The gateway enforces policy, scans content, and logs everything.
Pre-execution enforcement. Before a tool call reaches the MCP server, the gateway evaluates it against deny-by-default policies. Is this agent authorized to use this tool? Are the parameters within acceptable ranges? Does this call require human approval? If any check fails, the call is blocked. The MCP server never sees it.
Post-execution scanning. When the MCP server returns a result, the gateway scans the content before it enters the agent's context. Hidden instructions in HTML comments, zero-width Unicode characters, CSS-invisible text, embedded commands in document metadata — all stripped or flagged before the agent processes the result. This is the layer that prevents environmental traps.
Delegation governance. When Agent A delegates to Agent B via A2A, the gateway enforces scope narrowing: Agent B's permissions can never exceed Agent A's. Data classification tags (taint labels) propagate across the delegation chain. The gateway verifies that the child agent's action is consistent with the parent agent's declared purpose.
Cryptographic audit. Every tool call is logged to a WORM (Write Once Read Many) audit chain with Ed25519 signatures and SHA-256 hash chaining. Tamper with one entry and the chain breaks. Seven-year retention. Mathematical proof of what happened, when, and who initiated it.
The Argument-Level Gap
Most MCP governance discussions focus on tool-level access control: "Should Agent X be allowed to call send_email?" This is necessary but insufficient.
The question is not whether the agent can send email. The question is what the agent is emailing, to whom, and why.
send_email(to="colleague@company.com", subject="Meeting notes", body="...") is routine.
send_email(to="analyst@competitor.com", subject="FYI", body=customer_database_export) is a breach.
Both are calls to the same tool. Tool-level access control allows both or blocks both. Argument-level governance evaluates the parameters: the recipient domain, the body content, the data classification of the attached information. This requires understanding the tool call, not just permitting it.
A seven-guard evaluation chain — from fast regex pattern matching through schema validation, policy evaluation, semantic analysis, and AI-assisted classification — evaluates each tool call in under 150ms. Eighty-five percent of decisions complete in under 18ms, using only the fastest deterministic guards. The expensive AI-based guards fire only when faster guards can't make a determination.
Dry-Run Preview: Seeing Before Executing
When an agent requests a high-risk operation — database deletion, bulk export, fund transfer — the gateway doesn't just block it. It shows the human reviewer exactly what would happen:
"Agent finance-bot is requesting: DELETE FROM transactions WHERE date < '2025-01-01'. This will affect 47,329 records in the production transactions table. Risk level: CRITICAL. [Approve] [Deny]"
The reviewer sees the actual SQL, the actual affected row count, the actual risk assessment. Not a summary generated by the agent — a direct representation of the tool call parameters.
This matters because Google DeepMind's research documented approval integrity attacks: agents generating misleading summaries for human reviewers. "Update customer preferences" as the summary for "DELETE FROM customers WHERE 1=1." Dry-Run Preview eliminates this attack by showing the actual action, not the agent's description of the action.
No other product in the market offers this. It is architecturally impossible from any position other than inline — you must have the tool call parameters in hand before execution to show them to a reviewer.
Securing the Protocol Layer
MCP governance is not a standalone problem. It is one layer in a stack that must cover the complete agent lifecycle:
What agents are allowed to do (ToolGuard — tool call governance).
What agents read from their environment (Trap Defense — content scanning).
What agents produce as output (Inspect — post-execution verification).
Who agents are and who authorized them (Agent Passport — cryptographic identity).
How agents behave over time (Kill Switch — behavioral anomaly detection).
What evidence exists of agent actions (WORM Audit — cryptographic non-repudiation).
MCP security without trap defense means agents can be manipulated through poisoned tool results. Trap defense without MCP security means agents can call any tool with any parameters. Both are required. Neither is sufficient alone.
The EU AI Act Article 15 requires AI system robustness against adversarial manipulation. OWASP Agentic AI identifies tool misuse (AG02) and data exfiltration (AG06) as top-10 risks. MITRE ATLAS documents prompt injection (AML.T0051) and data poisoning (AML.T0049) as known attack techniques. Compliance with any of these frameworks requires governance at the tool-call layer — the exact layer where MCP operates.
Warden scans your MCP server configurations for security issues. Free, open source, local-only.
```shell
pip install warden-ai
warden scan . --format html
```
