← What's New

MCP Is the New API: What Model Context Protocol Means for Enterprise Architecture

MCP is becoming the default wire protocol for AI agents. How architects should think about adoption, tradeoffs, and governance patterns that scale

Treat MCP as a wire-protocol decision, not an AI integration tactic — that single reframing changes most of the architecture choices that follow.

A year after Anthropic introduced the Model Context Protocol in late 2024, MCP is no longer a curiosity buried in a vendor blog. OpenAI adopted it in March 2025. Google DeepMind followed in April. Microsoft, Cloudflare, and a long tail of SaaS vendors shipped servers through the year. The open specification and reference implementations live on GitHub at modelcontextprotocol.

For enterprise architects, this matters less as an AI story and more as a standards story. The shape of the decision is familiar: a new wire protocol arrives, three or four credible vendors back it within a year, your platform team has to decide whether to bet on it, hedge with adapters, or wait for things to settle. We have lived through this with REST, with GraphQL, with gRPC. MCP is the agent-era version of that decision, and most of the writing about it is either tutorial-grade or vendor-flavored. The architect’s view — when MCP is the right primitive, when it actively makes things worse, and what governance has to look like before you put it in front of a regulator — is the part that is missing.

This post is that view.

The integration problem MCP actually solves

The motivating problem is older than agents. Anytime you have M AI applications and N data sources or tools, the naive solution is M × N custom integrations. Every new agent has to learn every system. Every new system has to be wrapped for every agent. The result is the same combinatorial mess REST tried to clean up for HTTP services in the 2000s, except now the consumer is a probabilistic language model rather than a deterministic client.

MCP takes its design cue from the Language Server Protocol — the standard that decoupled IDEs from language tooling and made the modern editor ecosystem possible. The protocol uses JSON-RPC 2.0 over a session-oriented transport (stdio for local servers, streamable HTTP for remote ones). A host process — Claude Desktop, Cursor, VS Code, an internal agent — runs one or more clients. Each client connects to a server that exposes three primitives: tools (callable functions), resources (readable data), and prompts (parameterized templates).

The crucial shift is dynamic discovery. With per-vendor function calling, an application embeds a static list of function schemas in every model request. With MCP, the host asks the server “what can you do?” at runtime via tools/list, and the model decides what to invoke. Tool catalogs become first-class artifacts that live outside the agent application — versioned, governed, reusable across hosts.

That last word is what makes MCP an architecture decision rather than an integration tactic. A well-built MCP server is consumed by Claude today, by your internal agent tomorrow, by whatever model you switch to in 2027. The lock-in moves from the integration layer to the model choice itself, which is the right place for it.

MCP vs function calling vs OpenAPI: a decision framework

The single most common mistake I see is treating MCP as a replacement for function calling. It is not. MCP standardizes how tools are discovered and described; the model still emits structured tool calls underneath. You can implement MCP servers and call them via function-calling-style APIs. The two coexist.

The honest comparison is between three patterns:

  • Function calling — Tool schemas live inside the application’s model request. Lowest latency, lowest infrastructure cost, no protocol overhead. Tightly coupled to one host and (often) one model vendor.
  • OpenAPI tools — Existing HTTP services described via OpenAPI 3.1; agent frameworks auto-generate callable tools from the spec. Excellent fit when you already run a mature API gateway and want governance to ride on existing controls.
  • MCP — Servers expose tools to any compliant host over a standardized protocol. Higher overhead, but portable across hosts and dynamically discoverable.

A serviceable decision framework, working backwards from the question that actually matters:

  1. Will more than one host consume this tool? If no, function calling is almost always cheaper and simpler. If yes, MCP earns its keep.
  2. Is the underlying capability already a governed HTTP service with an OpenAPI contract? If yes, your default should be to expose it through OpenAPI tooling and treat MCP as an optional adapter layer for hosts that demand it. Do not auto-convert every endpoint into an MCP tool — the context-window tax is real, and tool quality degrades as catalogs grow.
  3. Do you need runtime tool discovery? Static catalogs are simpler. If your toolset is small and stable, you do not need the dynamism MCP provides.
  4. Is latency the dominant constraint? Function calling avoids a network hop; MCP servers add one. For sub-200ms inner loops, this matters.
  5. Will you change model vendors in the next two years? If the answer is “probably,” MCP’s portability is worth the overhead.

The pragmatic shape most mature teams converge on is hybrid: a small set of in-process function calls for hot paths, OpenAPI as the source of truth for HTTP services, and a curated set of MCP servers for tools that need to be reused across multiple agent surfaces. Avoid the trap of treating MCP as the new universal hammer. Tool inflation is a real failure mode: hosts like Cursor cap tool counts because model accuracy degrades sharply past a few dozen tools in context. Curate aggressively.

The security model is the hard part

MCP’s protocol elegance hides a meaningful security problem, and the industry is still working through it.

The first category is tool poisoning. Because the model reads tool descriptions and parameter schemas as input, anything an MCP server returns in those fields is effectively prompt content. An attacker who controls or compromises a server can inject instructions directly into descriptors with no sanitization layer in the way. Two CVEs landed this category on the map in 2025: MCPoison (CVE-2025-54136) and CurXecute (CVE-2025-54135). Simon Willison documented the broader class — including “rug pull” attacks where tool descriptions change after initial user approval — earlier in the year. The OWASP LLM Top 10 catalogs these as variants of LLM01 (Prompt Injection) and LLM05 (Supply Chain).

The second category is authorization. The MCP authorization specification was finalized on OAuth 2.1 with PKCE, and went through several revisions through 2025 before reaching a workable form. Critics — including OAuth 2.1 co-author Aaron Parecki and Solo.io’s Christian Posta — pointed out that early drafts implicitly asked MCP servers to act as their own authorization servers, which is incompatible with how enterprises actually deploy identity. The mature pattern, which is now the de facto consensus, is:

  • MCP server is a pure OAuth 2.1 resource server.
  • The enterprise IdP (Okta, Entra, Auth0, Keycloak) remains the authorization server.
  • Clients use Resource Indicators (RFC 8707) to bind tokens to specific MCP servers via the audience claim.
  • Tokens are short-lived, narrowly scoped, and rotated.

The third category is identity propagation across hops. A single user request now flows through user → host application → MCP client → potentially several MCP servers → backend APIs. Each hop must preserve authorization context, prevent credential leakage into model context, and produce an independently auditable trail. JWT-based tokens with strict audience binding help, but the protocol alone does not enforce any of this. Treat it as your problem, not the spec’s.

If you take one principle from this section, take this: every channel that enters the model’s context is a security boundary. Tool descriptions, parameter schemas, resource contents, server names. All of it. Design accordingly.

Governance: the MCP gateway pattern

The architectural pattern that has emerged for serious deployments is the agent gateway, sometimes called an MCP gateway. It sits between hosts and servers and owns the things the protocol leaves to you. A working checklist for what such a gateway should do:

  • Server allowlisting. Only registered, signed MCP servers can be reached from production hosts.
  • Token transformation. Inbound IdP tokens get exchanged for narrowly-scoped, short-lived downstream credentials. The model never sees a long-lived secret.
  • Tool curation. A central registry decides which tools from which servers are exposed to which agents and which users. This is also where you enforce the “fewer tools, better descriptions” discipline.
  • Description hashing. Tool descriptions are pinned at registration time. Any change triggers a re-approval flow, neutralizing rug-pull attacks.
  • Audit logging. Every tool call, with full arguments and full results, is logged with user, agent, server, and policy decision attached. This is the artifact your compliance team will want.
  • Egress control. Tools that touch external networks go through a sanctioned egress proxy with DLP. This is how data exfiltration via prompt injection actually gets contained.

You do not need a vendor product to build this. Most platform teams I have seen succeed start with a thin proxy in front of their MCP traffic and grow it as use cases demand. The discipline is what matters: every security control point should have one team that owns it.

When MCP is the wrong answer

A short list, because it deserves one:

  • Deterministic workflows with no LLM in the loop. Use a normal API. MCP exists to mediate model-driven tool selection; if no model is selecting, you are paying overhead for nothing.
  • Single-host, single-model deployments with a fixed toolset. Function calling is simpler, cheaper, and easier to debug. Adopt MCP later if portability becomes a requirement.
  • Tight inner loops where every millisecond counts. The extra hop is real.
  • Tools the model already knows how to use. Mature CLIs sit in training data and can often be called directly through a shell tool. You do not need to wrap git in an MCP server.
  • Replacing a working OpenAPI catalog. Do not auto-convert a thousand endpoints into MCP tools. Curate ten that an agent actually needs.

The pattern I encourage architects to internalize is that MCP is most valuable where portability and dynamic discovery are the primary goals. Where they are not, simpler primitives win.

The protocol layer of agentic AI is converging faster than most enterprises are ready for. The teams that come out ahead are not the ones building the most MCP servers — they are the ones treating MCP as a standards decision, investing early in the governance surround, and being ruthlessly selective about where the protocol genuinely earns its keep. Everything else is integration debt waiting to happen.