Model Context Protocol (MCP)
The Good, the Bad, and the Ugly of AI’s interface to the real world.
Deploy Securely is sharing this guest post from Daniel Kalinowski, an ethical hacker and the founder of TLBC, a security company. In this piece, he talks about the Model Context Protocol (MCP) and its implications for cybersecurity.
Artificial Intelligence (AI) agents are popping up everywhere—but how do they actually interact with the world? Enter the Model Context Protocol (MCP) introduced by Anthropic in late 2024: an emerging standard for connecting large language models to tools, systems, and data in a structured way. In 2025, we've seen a surge of MCP server releases from giants like Cloudflare, AWS, Microsoft, and others, turning this protocol into a foundation for next-gen automation.
This blog post is your guide to understanding MCP, why it matters, and the different shades of its ecosystem—from promising innovations to potential pitfalls.
What is MCP?
The Model Context Protocol (MCP) defines how language models and AI agents communicate with external systems. Instead of relying solely on static prompts or Application Programming Interfaces (APIs) with brittle wrappers (hacky, ad-hoc glue code held together with duct tape), MCP provides a consistent, open format for actionable context—turning models into dynamic operators.
Think of it as the protocol that allows your AI assistant not just to talk about tools, but to use them.
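Under the hood, MCP messages are JSON-RPC 2.0. Here's a minimal sketch of the kind of request an MCP client sends to invoke a tool on a server—the `tools/call` method comes from the protocol, while the tool name and arguments are hypothetical:

```python
import json

# Illustrative MCP tool invocation: a JSON-RPC 2.0 request with the
# protocol's "tools/call" method. The tool ("get_weather") and its
# arguments are made up for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool exposed by a server
        "arguments": {"city": "Berlin"},  # validated against the tool's JSON Schema
    },
}

wire_message = json.dumps(request)
print(wire_message)
```

The server executes the named tool and returns a JSON-RPC response with the result—this request/response loop is what lets an agent "use" a tool rather than merely describe it.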
Recent MCP server launches span cloud management, security, enterprise data, and DevOps. They’re enabling everything from serverless infrastructure orchestration with AWS Lambda to querying network logs via Versa’s SASE MCP server.
The good: agentic automation
The real magic of MCP is its ability to integrate with other systems. Some highlights:
Natural language for DevOps: AWS’s MCP servers let AI agents spin up Lambda functions, monitor ECS (Elastic Container Service) containers, and audit configurations using human-readable commands.
AI-Native debugging: Cloudflare’s suite of 15 servers makes deep observability information—logs, browser rendering, DNS (Domain Name System) analytics—accessible to agents.
Enterprise intelligence: Dremio’s Lakehouse MCP server connects agents to governed, real-time enterprise data for analytics or insights generation.
Security efficiency: Versa’s server lets AI copilots query firewall logs or automate incident response with context-aware prompts.
These MCP servers aren’t toy demos—they’re in production at companies like Stripe, Atlassian, Sentry, and Webflow.
The bad: complexity and fragmentation
Despite all the enthusiasm, not everything about MCP is perfect:
Developer overhead: Standing up your own MCP server (or even contributing to open-source ones) requires understanding JSON-based tool schemas, tracing, tool exposure, and authorization flows.
Versioning pain: As the protocol evolves, keeping your server aligned with the latest model capabilities and conventions can be a moving target. But that’s the nature of every meaningful standard—from HTTP to GraphQL to OAuth—stability comes over time, but early adopters must adapt. If you want to play at the frontier, this kind of version churn is part of the deal.
Documentation gaps: Many new MCP projects are shipping fast, with minimal public docs—leaving developers to reverse engineer functionality from GitHub examples.
The promise is there. But the learning curve remains steep.
The ugly: trust and safety hazards
This is where things get spicy. Here are a few examples of how things can go wrong:
Over-permissioned actions: If an MCP server can use powerful tools (like modifying DNS records or restarting containers) without robust checks, a hallucinating agent could wreak havoc. Picture this: an agent messes with a domain's settings, and suddenly thousands of people can't get their email.
Opaque auditing: Streamable tool calls and auto-traced responses (when input parameters, timestamps, etc. are automatically recorded) are powerful—but unless well-logged and monitored, they could become a blind spot in security.
Inaccurate logging of decisions and actions muddies accountability and makes enforcement impossible.
Logs themselves can become a security risk if stored in an insecure location, e.g. an internet-facing AWS S3 bucket without authentication.
Supply chain risks: As open-source MCP servers gain traction, they could become vectors for dependency-based attacks—especially when plugged into high-privilege environments.
Say you find a handy open-source MCP server on GitHub that connects to your cloud billing dashboard. Looks clean, has a few stars, and saves you a weekend of work—so you plug it into your AI agent and give it production credentials. But what if that server pulls in a third-party dependency compromised by a malicious maintainer last week? Maybe a nested package starts exfiltrating cloud keys or tweaking cost reports. Suddenly, your helpful AI assistant is unknowingly funneling sensitive data out of your environment—because one unverified repo snuck into the stack.
Another potential attack vector: a malicious MCP server could prompt-inject yours through a line-jumping/tool-poisoning attack, contaminating your server's context. Without ever being invoked, the hostile MCP server could pass corrupted tool descriptions to yours and cause unexpected behavior.
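To make the over-permissioning risk concrete, here's a minimal sketch of gating tool calls behind an allowlist with a human-approval requirement for anything destructive. The tool names and the `approved_by_human` flag are hypothetical; a real server would enforce this inside its request handler:

```python
# Deny-by-default authorization sketch for MCP tool calls.
# Read-only tools pass automatically; mutating tools require explicit
# human sign-off; anything unrecognized is rejected outright.

READ_ONLY_TOOLS = {"query_logs", "list_dns_records"}
HIGH_RISK_TOOLS = {"modify_dns_record", "restart_container"}

def authorize_tool_call(tool_name: str, approved_by_human: bool) -> bool:
    """Allow read-only tools automatically; require explicit human
    approval for anything that mutates infrastructure."""
    if tool_name in READ_ONLY_TOOLS:
        return True
    if tool_name in HIGH_RISK_TOOLS:
        return approved_by_human
    return False  # deny-by-default for unknown tools

print(authorize_tool_call("query_logs", approved_by_human=False))         # True
print(authorize_tool_call("modify_dns_record", approved_by_human=False))  # False
```

Deny-by-default matters here: a poisoned tool description that smuggles in a new tool name simply fails authorization instead of executing.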
In other words: an AI assistant with access to your cloud infrastructure can be both your greatest enabler and your biggest risk.
Reality check
Look, using MCP servers securely isn't plug-and-play. You've got to roll up your sleeves, thoroughly test your implementation, and really understand what's going on under the hood.
A recent post from Invariant Labs explores a scenario where the GitHub MCP server code itself contains no flaws but still poses a risk. The context in which the server is used—deployed alongside content consumed by the agent—can result in the contents of a private repository leaking to an attacker.
Basically, context is super important, right?
As with the underlying large language models (LLMs), a major security hole in MCP is prompt injection. Because this is essentially an impossible-to-fix problem, developers of both need to apply guardrails. Here are some:
1. Layered input and output validation
Why. Filtering user inputs and LLM outputs can block unsafe content and behaviors.
How. Open-source classifier packages, purpose-built regex (regular expressions), and trusted third-party content moderation APIs.
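A toy sketch of what layered filtering can look like: a regex pass over inputs for obvious injection markers, plus a secret-pattern scrub on outputs. The patterns here are illustrative only, not a complete defense—real deployments would layer classifiers and moderation APIs on top:

```python
import re

# Input filter: reject text containing common injection phrases.
# Output scrub: redact anything shaped like an AWS access key ID.
# Both pattern lists are deliberately tiny examples.

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"<\s*system\s*>", re.IGNORECASE),
]
SECRET_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")  # AWS access key ID shape

def screen_input(text: str) -> bool:
    """Return True if the input passes the injection filter."""
    return not any(p.search(text) for p in INJECTION_PATTERNS)

def scrub_output(text: str) -> str:
    """Redact anything that looks like an AWS access key."""
    return SECRET_PATTERN.sub("[REDACTED]", text)

print(screen_input("Please ignore previous instructions"))  # False
print(scrub_output("key=AKIAABCDEFGHIJKLMNOP"))
```

Regex alone is easy to evade—that's exactly why the layering matters: each filter catches what the others miss.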
2. Continuous monitoring and escalation
Why. Tracking usage, automating security alerts, and enabling human review for abnormal or high-risk interactions.
How. Detailed logging, clear behavioral thresholds, pre-established alarms, and on-duty HITL (humans-in-the-loop).
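A minimal sketch of behavioral-threshold monitoring: log every tool call, count high-risk ones per session, and escalate to a human reviewer past a threshold. The limit and the escalation hook are assumptions for illustration:

```python
from collections import Counter

# Count high-risk tool calls per agent session and flag for human
# review once a (hypothetical) threshold is crossed.
HIGH_RISK_CALL_LIMIT = 3
call_counts: Counter = Counter()

def record_tool_call(session_id: str, tool_name: str, high_risk: bool) -> bool:
    """Log the call; return True when a human review should be triggered."""
    print(f"audit session={session_id} tool={tool_name} high_risk={high_risk}")
    if high_risk:
        call_counts[session_id] += 1
        if call_counts[session_id] >= HIGH_RISK_CALL_LIMIT:
            return True  # page the on-duty human-in-the-loop
    return False

escalate = False
for _ in range(3):
    escalate = record_tool_call("sess-42", "restart_container", high_risk=True)
print(escalate)  # True after the third high-risk call
```

In production the audit line would go to tamper-resistant storage rather than stdout—which loops back to the earlier point about insecurely stored logs becoming a risk of their own.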
3. Modular, configurable, and evolving architecture
Why. Building guardrails as configurable, updatable, and standalone components keeps them flexible and durable.
How. Feature flags and microservice deployment models. Not copying and pasting code.
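One way to keep guardrails standalone and updatable is to toggle each check with a feature flag loaded from configuration, so a check can be tightened or disabled without redeploying the agent. The flag names below are hypothetical:

```python
import json

# Guardrails as configurable components: each check is enabled or
# disabled by a flag in external config (shown inline here), not
# hard-coded into the agent.
config = json.loads("""
{
  "input_filter": true,
  "output_scrub": true,
  "human_approval_for_writes": false
}
""")

def active_guardrails(flags: dict) -> list[str]:
    """Return the names of all currently enabled guardrail checks."""
    return [name for name, enabled in flags.items() if enabled]

print(active_guardrails(config))  # ['input_filter', 'output_scrub']
```

Because the flags live outside the code, tightening a guardrail after an incident is a config change, not a copy-paste edit across services.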
MCP is here to stay—use it with care
The Model Context Protocol has moved beyond hype—in 2025 it’s clear MCP is a cornerstone of modern AI deployments. Whether you're debugging systems with natural language or querying enterprise data in real time, MCP servers offer incredible leverage.
But as with all powerful tools, they require care. Audit what your agents (can) do. Invest in observability. And don’t let excitement outweigh caution.
The good is revolutionary.
The bad is manageable.
The ugly? Avoidable—with the right guardrails.