MCP security: Navigating the wild west of AI integration

Ori Saporta

October 28, 2025

AI development is moving at a breakneck pace. Every month brings new frameworks, tools, and protocols that expand what AI systems can do and how they connect to the broader software ecosystem. That speed is exhilarating, but it often means security lags behind adoption.

From my years working on authentication and authorization, implementing and integrating protocols like OAuth 2.0 and OpenID Connect long before AI assistants existed, I’ve learned that the faster the technology moves, the more disciplined you have to be about security. When standards are in flux, as they are today with the Model Context Protocol (MCP), risks don’t just add up; they compound exponentially.

Context: MCP and the AI protocol landscape

The Model Context Protocol (MCP), developed by Anthropic and later open-sourced, allows LLM-based assistants such as Copilot, Amazon Q Developer, and Claude to connect with external tools, applications, and data sources through a common interface. MCP extends an assistant's reach beyond the model's sandbox, simplifying integrations but also introducing new security concerns. This creates a risk of tool-to-system exposure, where a compromised tool can access data directly and become a tunnel into enterprise systems, exposing information or executing unintended actions.

Ease of integration creates convenience, but also more entry points. Each new connection between an assistant and an external tool becomes a potential attack path.

Meanwhile, Google’s emerging Agent-to-Agent (A2A) Protocol enables AI agents to communicate with one another. While framed as complementary to MCP, overlaps appear when agents act as tools or invoke other agents—where a single compromised agent could mislead or manipulate others.

Both protocols introduce fragile trust models that enterprises must carefully evaluate. We're in the early stages of an AI protocol war, and the winners will be those who make security foundational, not optional.

A moving target for standards

The MCP authorization protocol is still a moving target—evolving almost daily. In preparing for a recent discussion with IDC’s Michele Rosen, I reviewed two recent protocol revisions, and found entirely different approaches. That fluidity makes enterprise-grade security challenging. Until the industry coalesces around a single, stable standard, each vendor must implement its own safeguards.

At vFunction, we’ve opted for a pragmatic, well-understood method: API keys at the application level. It’s not perfect; there’s no fine-grained permissioning yet. But it’s standard, predictable, and revocable. The right thing to do is to use OpenID Connect for authentication and authorization. Since that’s not yet standardized within MCP, however, we’re proceeding with the most reliable alternative available today.
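To make the trade-off concrete: application-level API keys boil down to a key store plus a check on every request, with revocation as a simple deletion. The sketch below is illustrative only (the key names and store are hypothetical, not vFunction's actual implementation) and shows why this approach is "predictable and revocable": the server stores only hashes of key secrets and compares them in constant time.

```python
import hashlib
import hmac

# Hypothetical key store: maps a key ID to a SHA-256 hash of its secret.
# Storing hashes rather than raw secrets means a leaked store can't be
# replayed directly.
ACTIVE_KEYS = {
    "key-2025-10": hashlib.sha256(b"example-secret").hexdigest(),
}

def authorize_request(key_id: str, presented_secret: str) -> bool:
    """Return True only for a known, unrevoked key.

    Revocation is just deleting the entry: every subsequent request
    using that key fails immediately, with no token expiry to wait out.
    """
    stored_hash = ACTIVE_KEYS.get(key_id)
    if stored_hash is None:
        return False  # unknown or revoked key
    presented_hash = hashlib.sha256(presented_secret.encode()).hexdigest()
    # Constant-time comparison avoids leaking key material via timing.
    return hmac.compare_digest(stored_hash, presented_hash)
```

The obvious limitation matches the one noted above: a key either works or it doesn't, with no per-tool or per-resource scoping, which is exactly what a standardized OpenID Connect flow would eventually provide.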

Ultimately, this is how innovation in security often works: build safely now, then evolve toward stronger, standardized solutions as the ecosystem matures.

Real risks: Security edge cases aren’t theoretical

The MCP website and documentation highlight several security pitfalls that every implementer should understand:

  • Token passthrough – Accepting unvalidated tokens and forwarding them to other APIs without verification.
  • Session hijacking – Unauthorized reuse of a valid session ID.
  • Confused deputy problem – When an MCP server acts as a proxy for other resources without proper validation.

    For example, a user might think they’re sending a task to vFunction’s MCP server, but another installed server offering “to-do” handling intercepts the request. The user is prompted to approve it, but if overwhelmed or caught in a busy workflow, that approval might happen unintentionally—opening the door to data leakage or privilege misuse.
  • Improper sandboxing or token reuse – Poor token isolation or mishandled refresh logic can allow persistent access even after sessions should expire.

A compromised or malicious tool creates a risk of tool-to-system exposure, allowing direct access to or exfiltration of data—especially given MCP’s middleware-style design, where servers often handle parts of authentication and authorization on behalf of the user. If a refresh token is mishandled, for example, an MCP server could mint new access tokens indefinitely—turning what should be a contained compromise into a persistent breach.
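A common thread through the token passthrough and confused-deputy cases is audience validation: a server should honor only tokens minted for it, never tokens that happen to be valid for some other API. As a minimal sketch (assuming the claims dictionary comes from an already signature-verified JWT; signature verification itself is omitted here), the check looks like this:

```python
def is_token_acceptable(claims: dict, my_resource_id: str) -> bool:
    """Reject tokens not minted for this server.

    `claims` is assumed to come from a signature-verified JWT; the
    `aud` (audience) claim may be a single value or a list. A token
    whose audience is some other API must not be honored or forwarded,
    even if its signature is valid — that's the token-passthrough trap.
    """
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    return my_resource_id in audiences
```

The same discipline applies to refresh tokens: an MCP server that never receives a refresh token in the first place cannot mint access tokens indefinitely after a compromise.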

These risks moved from theoretical to very real when JFrog disclosed CVE-2025-6514, a critical remote code execution (RCE) vulnerability affecting certain MCP server implementations. The flaw allowed attackers to send specially crafted payloads that executed arbitrary code on servers running vulnerable MCP components, effectively granting full remote access.

According to JFrog’s analysis, the issue stemmed from MCP’s design flexibility and inconsistent validation logic across implementations—exactly the kind of weakness that emerges when standards evolve faster than security practices. Rated 9.8 on the CVSS scale, the exploit underscores how quickly an integration protocol can become an attack vector when safety checks aren’t uniform.

MCP’s openness is both its strength and its greatest risk, making it a high-value target. Without consistent authentication, sandboxing, and code-execution boundaries, even well-intentioned implementations can expose enterprise environments to catastrophic breaches.

What this means for enterprises

For enterprises, MCP isn’t just another developer protocol; it’s a new security boundary you now have to defend. The moment you allow AI assistants to connect to production systems or even the local file system of users within corporate networks, you’re expanding your attack surface in ways traditional governance models were never designed to handle.

From my perspective, that means:

  1. Incorporate security from day one
    Don’t wait for MCP standards to mature before putting security controls in place. Use simple, effective, and easy-to-manage secret rotation policies, and avoid sharing refresh tokens with MCP servers.
  2. Plan for today’s decisions to have a lasting impact
    Once MCP-enabled workflows spread across your organization, rolling them back becomes difficult. You need a plan for monitoring, auditing, and updating every connected tool.
  3. Evolve governance models
    Traditional IT governance assumes static integrations and predictable change cycles. MCP is dynamic by design. You’ll need live registries, usage policies, and automated detection for unauthorized tool activity.
  4. Require protections beyond user consent
    Consent fatigue is real. Developers must move from “ask the user” to “protect the user,” with role-based permissions and guardrails that don’t rely solely on end-user judgment. Ideally, each MCP server should defend itself from malicious inputs.

    MCP was designed to let assistants access resources outside the sandbox. With carefully crafted prompts, it’s easy to induce malicious behavior even from non-malicious MCP tools, or to pair a malicious tool with legitimate ones. Breaking out of the sandbox lets us do things we couldn’t otherwise, but if the only thing protecting those systems is user consent, it’s not enough. You get a million requests, and the million-and-first is bad; there goes your enterprise.
  5. Treat the enterprise AI stack as a security product
    MCP shifts responsibility for secure operation from protocol designers to product implementers. If you’re building enterprise AI applications, security is now part of your core value proposition, not a checklist item at the end of the release cycle.

Without proactive controls, the difference between innovation and exposure can come down to a single user click. These principles mirror ISO 27001’s guidance: rely on layered controls, not human approval alone. And yes—we’re ISO 27001 certified, which means we’ve standardized our paranoia 😉


Why this matters now

Many organizations are still piloting MCP in labs or prototypes, but adoption is accelerating. The security model we choose now will set the precedent for how safe, or unsafe, those deployments become.

When we introduced MCP support into the vFunction platform, we didn’t treat it as just another integration. We designed it to bring the same architectural context, governance controls, and security guardrails we apply to every modernization workflow. That means our MCP implementation can connect assistants like Amazon Q and GitHub Copilot to enterprise applications while still enforcing the architectural rules, permissions, and boundaries that keep them secure.

MCP is designed to “break out of the sandbox”; that’s its value. But without robust, well-understood security controls, it also creates new attack surfaces. We now have the opportunity to shape MCP’s evolution into something enterprises can trust. Let’s not waste it.

To learn more about vFunction and its use of AI via its MCP server, visit our platform page.


Ori Saporta

Ori Saporta co-founded vFunction and serves as its VP Engineering. Prior to founding vFunction, Ori was the lead Systems Architect of WatchDox until its acquisition by Blackberry, where he continued to serve in the role of Distinguished Systems Architect. Prior to that, Ori was a Systems Architect at Israel’s Intelligence Core Technology Unit (8200). Ori has a BSc in Computer Engineering from Tel-Aviv University and an MSc in Computer Science from the same institute, for which his thesis subject was “Testing and Optimizing Data-Structure Implementations Under the RC11 Memory Model”.

Get started with vFunction

Accelerate engineering velocity, boost application scalability, and fuel innovation with architectural modernization. See how.