AI in Security
MCP Security Best Practices: How to Secure Model Context Protocol Servers, Clients, and Tokens
Model Context Protocol can make AI tools dramatically more useful, but it also expands trust boundaries. Security teams should treat MCP like a privileged integration layer: sandbox servers, minimize scopes, block token passthrough, defend against SSRF, and review every tool as a potential remote-action surface.
Model Context Protocol, or MCP, is becoming one of the most important connective layers in the AI tooling stack. Anthropic originally introduced MCP as an open protocol for connecting AI assistants to tools and data sources, and OpenAI now documents MCP support in its own developer documentation as well. That momentum is exactly why security teams should pay attention. MCP is not just a developer convenience. It is a trust bridge between models, local tools, remote services, OAuth flows, and sensitive data.
The security question is straightforward: what happens when a model gains access to a tool that can read files, call APIs, query internal systems, or trigger remote actions? The official MCP security best-practices guidance answers that with unusual clarity. It calls out confused deputy problems, token passthrough, server-side request forgery, session hijacking, local server compromise, and over-broad scope requests as real attack patterns. In other words, MCP should be treated like a privileged integration plane, not a harmless plug-in system.
Start with the most important boundary: MCP servers run close to high-value resources. Some expose local files, source code, issue trackers, cloud APIs, internal documentation, or third-party SaaS actions. Teams should sandbox local MCP servers with minimal filesystem, network, and process privileges by default. If a server needs broader access, that access should be granted deliberately and narrowly. The protocol is powerful precisely because it reaches into useful systems, so the safest default is containment first, convenience second.
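To make "containment first" concrete, here is a minimal sketch of launching a local stdio MCP server with a stripped environment and basic resource caps. The helper name and limits are illustrative assumptions, and this is POSIX-only; real deployments would layer container or namespace isolation and network policy on top.

```python
import resource
import subprocess

def launch_sandboxed_server(command, workdir, max_mem_bytes=256 * 1024 * 1024):
    """Launch a local MCP server with a minimal environment and resource caps.

    Sketch only (hypothetical helper): production setups should add stronger
    isolation, e.g. containers or bubblewrap, plus network restrictions.
    """
    def limit_resources():
        # Cap address space and forbid core dumps before exec.
        resource.setrlimit(resource.RLIMIT_AS, (max_mem_bytes, max_mem_bytes))
        resource.setrlimit(resource.RLIMIT_CORE, (0, 0))

    minimal_env = {"PATH": "/usr/bin:/bin"}  # no inherited secrets or tokens
    return subprocess.Popen(
        command,
        cwd=workdir,             # confine the server's working directory
        env=minimal_env,         # drop the parent's environment variables
        preexec_fn=limit_resources,
        stdin=subprocess.PIPE,   # MCP stdio transport
        stdout=subprocess.PIPE,
    )
```

The key design choice is that the sandbox is the default: a server gets a clean environment and hard limits unless someone deliberately widens them.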
Token handling is the next major control point. The MCP security guidance explicitly forbids token passthrough, where a client-supplied token is accepted and forwarded downstream without proper validation. That pattern breaks accountability, weakens trust boundaries, and creates easier paths for lateral movement or data exfiltration. MCP servers should only accept tokens that were explicitly issued for that MCP server, validate token claims, and maintain clear separation between client identity and downstream resource access.
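The audience check at the heart of that rule can be sketched in a few lines. This example only decodes and inspects claims; a real server must also verify the token's signature against the issuer's keys (for example with a library such as PyJWT). The audience value is a hypothetical placeholder.

```python
import base64
import json
import time

EXPECTED_AUDIENCE = "https://mcp.example.com"  # this server's own identifier (placeholder)

def reject_passthrough(token: str) -> dict:
    """Accept only tokens issued *for this MCP server*; never forward them.

    Sketch: checks the audience and expiry claims. Signature verification,
    issuer checks, and revocation handling are deliberately out of scope here.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))

    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if EXPECTED_AUDIENCE not in audiences:
        raise PermissionError("token was not issued for this MCP server")
    if claims.get("exp", 0) < time.time():
        raise PermissionError("token expired")
    return claims
```

Any token that fails this check is rejected outright; under no circumstances is the client's token reused as a credential for downstream calls.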
OAuth metadata and network behavior also need scrutiny. The best-practices guide warns that malicious MCP servers can abuse discovery flows to trigger SSRF against internal services, localhost endpoints, cloud metadata services, and private IP space. Production MCP clients should require HTTPS for OAuth-related URLs, block private and reserved address ranges where appropriate, defend against DNS rebinding, and treat redirects as part of the attack surface. If an MCP client can be tricked into resolving attacker-controlled metadata, the model is no longer the only thing being manipulated.
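A minimal URL guard along those lines might look like this. It requires HTTPS and refuses any hostname that resolves to private, loopback, link-local, or otherwise reserved address space; the function name is an assumption, and a real client would also re-check addresses at connect time to defeat DNS rebinding.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_metadata_url(url: str) -> bool:
    """Reject OAuth discovery/metadata URLs that could enable SSRF.

    Sketch: real clients must also pin the resolved address at connect time
    (DNS rebinding) and validate each hop of any redirect chain.
    """
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        # Check every address the hostname maps to, not just the first.
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback or addr.is_link_local
                or addr.is_reserved or addr.is_multicast):
            return False
    return True
```

Note that the cloud metadata endpoint (169.254.169.254) falls in link-local space, so it is caught by the same range checks as internal RFC 1918 addresses.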
Scope minimization is where many otherwise careful teams fail. It is tempting to expose every tool and request broad access once, especially when trying to avoid user friction. But the MCP guidance argues for the opposite model: small initial scopes, targeted elevation, precise authorization challenges, and explicit logging of privilege increases. A stolen or misused broad token turns every connected tool into part of the blast radius. A narrowly scoped token keeps a mistake local.
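The pattern of small initial scopes plus logged, explicit elevation can be sketched in a handful of lines. The scope strings here are hypothetical; real ones come from the resource server's OAuth configuration.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp.scopes")

class ScopeGrant:
    """Start with minimal scopes; every elevation is explicit and audit-logged.

    Sketch with made-up scope names, not a complete authorization system.
    """
    def __init__(self, initial=("read:files",)):
        self.scopes = set(initial)

    def allows(self, scope: str) -> bool:
        return scope in self.scopes

    def elevate(self, scope: str, reason: str) -> None:
        # Every privilege increase leaves an audit trail with its justification.
        log.info("scope elevation: +%s (reason: %s)", scope, reason)
        self.scopes.add(scope)
```

The point is less the data structure than the discipline: elevation is a named, logged event tied to a reason, never a silent default.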
Security teams should also assume that prompt injection and MCP security are linked. If a model consumes untrusted content and that content can influence tool calls, then an MCP-connected system can become a bridge from hostile input to real action. That is why tool approvals, server allowlists, output review, network restrictions, and environment isolation matter so much. Prompt injection is dangerous partly because it rides the power that integrations provide.
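One way to break that bridge is a single policy choke point between model output and real action. The server and tool names below are hypothetical examples of an allowlist plus a human-approval gate for high-impact tools.

```python
ALLOWED_SERVERS = {"docs-search", "issue-tracker"}   # reviewed, approved servers (example names)
HIGH_IMPACT_TOOLS = {"delete_branch", "send_email"}  # always require human approval

def gate_tool_call(server: str, tool: str, human_approved: bool = False) -> bool:
    """Policy gate between the model's requested tool call and execution.

    Sketch: even if hostile input steers the model toward a tool call, the
    call cannot execute unless it passes the allowlist and approval checks.
    """
    if server not in ALLOWED_SERVERS:
        return False
    if tool in HIGH_IMPACT_TOOLS and not human_approved:
        return False
    return True
```

Because the gate sits outside the model, a successful prompt injection changes what the model asks for, not what actually runs.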
The practical rollout pattern is clear. Inventory every MCP server your organization allows. Classify them by read-only, write-capable, or high-impact action. Sandbox local servers. Use least privilege for remote servers. Review OAuth and redirect handling. Ban token passthrough. Log scope elevation and tool usage. Restrict high-risk servers behind explicit approval. And keep AI assistants away from credentials, production infrastructure, and sensitive data until you have evidence that the surrounding controls are good enough.
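The inventory-and-classify step above can be as simple as a structured record per server. The categories mirror the read-only / write-capable / high-impact split; the server names are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Impact(Enum):
    READ_ONLY = "read-only"
    WRITE_CAPABLE = "write-capable"
    HIGH_IMPACT = "high-impact action"

@dataclass
class McpServerRecord:
    """One inventory row per allowed MCP server (example data only)."""
    name: str
    impact: Impact
    sandboxed: bool
    requires_approval: bool

inventory = [
    McpServerRecord("docs-search", Impact.READ_ONLY, sandboxed=True, requires_approval=False),
    McpServerRecord("cloud-deploy", Impact.HIGH_IMPACT, sandboxed=True, requires_approval=True),
]
```

Even a spreadsheet works for this; what matters is that every allowed server has an owner, a classification, and an explicit approval policy.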
MCP is likely to stay because it solves a real integration problem. That means the winning strategy is not avoidance. It is disciplined adoption. Organizations that treat MCP as a security architecture problem now will be far better prepared than the ones that discover, too late, that their most helpful AI tool was also their easiest path to unintended access.