
What is MCP?

Anthropic's open protocol for connecting AI assistants to data and tools. How it works, when it fits, where it falls short, and the production stack for Nordic mid-market teams.

By Aleksi Stenberg · 16 May 2026 · 10 min read
Summary

MCP (Model Context Protocol) is an open protocol from Anthropic that standardises how AI applications connect to external data sources and tools. It defines a client-server architecture where servers expose tools, resources, and prompts that AI clients can discover and call. The protocol turns the N times M integration problem (every AI app builds its own connector to every system) into N plus M (one server per system, one client per AI app).

MCP fits any team running AI assistants that need access to multiple internal systems, especially when those assistants run across several products (Claude, Cursor, ChatGPT, custom agents) and need to reuse the same integrations. It does not fit single-app, single-integration cases where a direct function call is simpler. This piece walks through the protocol, the production stack, and the patterns that work for Nordic mid-market deployments.

01

A Working Definition

Anthropic released MCP in late 2024. Within a year it became the default way AI applications connect to external systems. The protocol is now supported across most major AI products and agent frameworks. Knowing what it actually is and is not has become a basic requirement for any team building AI features.

MCP is an open protocol that defines how AI applications connect to external data sources and tools. It uses a client-server architecture. Servers expose three primitives (tools, resources, prompts) over JSON-RPC. Clients (Claude, Cursor, ChatGPT, custom agents) discover what each server offers and call the operations the model decides to use.

A useful comparison: USB-C for AI integrations. Before USB-C, every device had its own connector. Every laptop, every phone, every charger had a different shape. USB-C standardised the physical interface. After USB-C, one cable connects most things. MCP does the same for AI-to-tool connections. Before MCP, every AI product invented its own way to integrate with Slack, Postgres, GitHub. After MCP, one server per system serves any compliant AI client.

A concrete example. A Finnish software company runs internal AI assistants in three places: Claude Desktop for the leadership team, Cursor for engineering, and a custom support agent for customer success. All three need access to the company's Notion workspace, Linear issues, and Postgres customer database. Without MCP, the team would build three separate integrations to each system. With MCP, the team builds one Notion server, one Linear server, one Postgres server, and all three AI products use them.

02

The Problem MCP Solves

Before MCP, AI applications integrated with external systems through whatever the AI vendor shipped: OpenAI plugins, ChatGPT custom GPT actions, Claude tools defined in API calls, framework-specific connectors. Every approach was incompatible. An integration built for ChatGPT did not work in Claude. A LangChain tool did not load in Anthropic's products. The result was the N times M problem.

With N AI applications and M data sources, teams needed N times M custom integrations. A company running three AI products that needed to talk to five internal systems faced fifteen integrations to build and maintain. Each integration handled auth differently, logged differently, versioned differently. The maintenance load grew faster than the value.

MCP turns the equation into N plus M. One server per data source. One client implementation per AI application. The same Postgres server that Claude Desktop uses also works with Cursor, with a custom agent built on the Vercel AI SDK, with ChatGPT once it adopts MCP fully. The integration cost stops compounding.

N times M became N plus M. That is the entire point of MCP, restated as math.

The second benefit is community reuse. With a standard protocol, the open-source community publishes servers for common systems. There is a community MCP server for Notion, Linear, Stripe, Snowflake, and most major SaaS tools. A team integrating one of these systems no longer writes a connector from scratch. They install a tested server. The amount of custom integration work drops sharply.

03

How MCP Works

The protocol has three primitives that servers can expose:

Tools. Actions the AI client can call. Send an email. Create a Linear ticket. Run a SQL query. Each tool has a name, a description the model reads to decide whether to call it, and a JSON schema for arguments. The model picks the tool, fills the arguments, and the server executes the operation.

Resources. Data the AI client can read. A specific file, a database record, a URL, the current state of a system. Resources are addressable: each has a URI. The client can list available resources and fetch the content of any specific one.

Prompts. Reusable templates the server provides. Useful when the server has expertise about how to phrase requests to itself. A GitHub MCP server might expose a "summarise pull request" prompt that already knows the right structure for the request.
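Concretely, a tool is advertised as a name, a description, and a JSON schema for its arguments. A minimal sketch of the shape one entry in a `tools/list` result takes on the wire (field names follow the published spec; the `create_ticket` tool itself is a hypothetical example):

```python
import json

# Hypothetical tool declaration, shaped like one entry in a tools/list result.
create_ticket_tool = {
    "name": "create_ticket",
    "description": "Create a Linear ticket in the given team with a title and body.",
    "inputSchema": {  # standard JSON Schema describing the arguments
        "type": "object",
        "properties": {
            "team": {"type": "string", "description": "Team key, e.g. ENG"},
            "title": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["team", "title"],
    },
}

# The model reads the description to decide whether to call the tool,
# then fills in arguments that must validate against inputSchema.
print(json.dumps(create_ticket_tool, indent=2))
```

The description does double duty: it is documentation for humans and the signal the model uses to choose between tools, so vague descriptions produce wrong tool calls.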

The transport layer carries messages between client and server. Two transports are common:

  • Stdio. The client launches the server as a subprocess on the same machine. Messages flow over standard input and output. This is the default for local servers (filesystem access, local databases, personal tools).
  • HTTP and server-sent events. The server runs as a network service. The client connects over HTTP. This is the default for remote servers (shared corporate tools, SaaS integrations). Auth happens at the HTTP layer.

Authentication patterns vary by deployment. Local stdio servers usually inherit the user's local credentials. Remote HTTP servers use OAuth, API keys, or bearer tokens. Audit logging is the responsibility of the server implementation, not the protocol itself.
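The wire format is plain JSON-RPC 2.0, which makes a stdio server easy to picture. A stripped-down sketch of the dispatch at the heart of such a server, which handles only `tools/list` and `tools/call`, skips the initialize handshake and error handling a real server needs, and uses a made-up `echo` tool:

```python
import json
import sys

# A single hypothetical tool, keyed by name.
TOOLS = {
    "echo": {
        "name": "echo",
        "description": "Return the input text unchanged.",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    }
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to a response (no error handling shown)."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":
        result = {"tools": list(TOOLS.values())}
    elif method == "tools/call" and params.get("name") == "echo":
        result = {"content": [{"type": "text", "text": params["arguments"]["text"]}]}
    else:
        raise ValueError(f"unhandled method: {method}")
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

def serve_stdio() -> None:
    # Stdio transport: newline-delimited JSON-RPC messages on stdin/stdout.
    for line in sys.stdin:
        sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
        sys.stdout.flush()

# Demo call instead of running the loop:
resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(resp["result"]["tools"][0]["name"])  # echo
```

A production server adds the initialize handshake, JSON-RPC error responses, and schema validation of arguments; the official SDKs handle all of that.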

04

When MCP Fits

Four situations where MCP is the right shape.

Multi-product AI rollouts. A company deploying AI assistants across several products (Claude Desktop for executives, Cursor for engineering, a custom Slack bot for support, a customer-facing agent) needs the same integrations available in all of them. MCP lets the same server serve every client. The integration work happens once.

Reusing community-built integrations. Teams integrating standard systems (Postgres, GitHub, Slack, Notion, Linear, Stripe, Snowflake) can install community servers instead of building from scratch. The MCP servers directory at github.com/modelcontextprotocol/servers tracks the active list. Most major SaaS tools have a community or official server already.

Standardised audit and access control. When AI tools touch sensitive data, the audit trail matters. MCP servers can centralise logging of every tool call: which user, which client, which operation, what arguments, what result. Hand-built integrations spread this logic across every codebase. A single MCP server consolidates it.

Future-proofing against AI client churn. The AI client landscape shifts fast. Teams that built tooling around OpenAI plugins in 2023 saw it deprecated by 2024. Teams that built around Claude's custom tools format saw it superseded. MCP gives a stable surface that survives client-side changes.

05

When MCP Does Not Fit

Three cases where MCP is overkill.

Single AI application, single integration. A team building one custom agent that talks to one internal database does not need a protocol layer. A direct function call inside the agent's code is simpler, faster, easier to debug. MCP earns its complexity when the same integration serves multiple clients.

Latency-critical paths. The MCP transport adds milliseconds per call (negligible for stdio, more meaningful for remote HTTP). Inside an inner loop running thousands of calls per second, a direct function call avoids the overhead. For interactive AI assistants making tens of calls per request, MCP's overhead is invisible.

Behaviours MCP does not yet model well. The protocol is young. Streaming partial results, complex authorisation flows, long-running operations, and per-tenant configuration are all areas where MCP is improving but not yet ideal. Production-critical paths that need precise control may use direct integration today and migrate to MCP as the protocol matures.

Honest acknowledgment: MCP is a fast-moving specification. The protocol shipped in late 2024 and is still evolving. Teams adopting MCP today should expect to update implementations as the spec versions advance. The Anthropic SDKs handle most of this transparently. Hand-rolled implementations carry the maintenance burden directly.

06

The Production Stack

Five components shape a real MCP deployment.

For each component: the common choices, then the default for Nordic mid-market teams.

Server SDK. Common choices: Anthropic TypeScript, Python, Java, Kotlin, Rust, Swift, and C# SDKs. Default: the TypeScript SDK if the surrounding stack is Node or Next.js; the Python SDK if it is FastAPI or Django.

Transport. Common choices: stdio for local, HTTP plus SSE for remote. Default: stdio for personal-user servers (file system, local databases); HTTP for shared corporate servers serving multiple users.

Authentication. Common choices: OAuth, API keys, bearer tokens, mTLS. Default: OAuth where the underlying system supports it; API keys with proper rotation for internal-only servers.

Hosting. Common choices: Vercel, Fly.io, AWS Lambda, ECS, internal Kubernetes. Default: Vercel or Fly.io for low-volume servers; internal Kubernetes for production-critical servers with strict data residency.

Audit logging. Common choices: Postgres with structured logs, OpenTelemetry, Datadog. Default: Postgres for tool-call records that need to be queryable; OpenTelemetry for traces and metrics.
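As a concrete example of client-side wiring, Claude Desktop reads a JSON config that lists the stdio servers it should launch as subprocesses. A hedged sketch (the file location and exact fields can change between versions, and the Postgres connection string is a placeholder):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://readonly_user:password@localhost:5432/customers"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

Each entry names a server and tells the client how to launch it; the client then discovers the server's tools over stdio.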

On the client side, the deployment depends on which AI products the team uses. Claude Desktop, Claude Code, and Cursor support MCP servers out of the box. Custom agents built with the Anthropic SDK, OpenAI SDK, or hand-rolled orchestration can call MCP servers with a few lines of integration code. The application around the agent stays a custom-built product (React or Next.js on the front, FastAPI or Express on the back), owned and deployed by the client. MCP is the protocol the agent uses to reach external systems. The custom app is what users see and use.
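Those few lines of integration code are mostly translation: the client lists a server's tools and re-advertises them to the model in whatever function-calling format its LLM API expects. A hedged sketch of that mapping, using a generic target shape rather than any one vendor's exact schema:

```python
def mcp_tools_to_llm_tools(mcp_tools: list[dict]) -> list[dict]:
    """Re-shape MCP tool declarations into a generic function-calling format.

    Real clients target their vendor's exact schema; this target shape is
    illustrative only.
    """
    return [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t.get("description", ""),
                "parameters": t["inputSchema"],  # already JSON Schema
            },
        }
        for t in mcp_tools
    ]

# Example with a hypothetical tool from a tools/list response:
tools = mcp_tools_to_llm_tools([
    {
        "name": "run_query",
        "description": "Run a read-only SQL query.",
        "inputSchema": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    },
])
print(tools[0]["function"]["name"])  # run_query
```

When the model picks one of these tools, the client routes the call back to the MCP server as a `tools/call` request and returns the result to the model.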

Frequently asked questions


Is MCP only for Claude?

No. Anthropic created MCP and ships it in Claude products first, but the protocol is open and is being adopted across the industry. OpenAI, Microsoft (via Copilot Studio), Google, and most agent frameworks now support MCP servers as a way to connect AI applications to data sources. Tools built as MCP servers work across any compliant client.

What is the difference between MCP and OpenAI function calling?

OpenAI function calling is a feature of the OpenAI API where the model decides to call a function the developer registered in the API request. MCP is a higher-level protocol that defines how AI applications discover and call tools provided by external servers. Function calling is a model capability. MCP is an interoperability layer that uses function calling under the hood. You can use OpenAI function calling without MCP; you cannot use MCP without some form of tool calling in the model.

Can I build an MCP server without the Anthropic SDK?

Yes. MCP is an open specification based on JSON-RPC. Anthropic publishes reference SDKs in TypeScript, Python, Java, Kotlin, Rust, Swift, and C#. Anyone can implement the protocol from scratch in any language. Most teams use the official SDKs because they handle the transport layer, message framing, and lifecycle correctly.
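Because the wire format is JSON-RPC 2.0, a from-scratch implementation starts with the initialize handshake. A sketch of the first message a client sends (field names follow the spec revision current at time of writing; check the current spec for the exact protocol version string):

```python
import json

# First message of the MCP lifecycle: the client introduces itself and
# states which protocol revision and capabilities it supports.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # a published revision; newer ones exist
        "capabilities": {},               # what this client supports
        "clientInfo": {"name": "hand-rolled-client", "version": "0.1.0"},
    },
}

# Over stdio, this is written as one newline-delimited JSON message.
wire = json.dumps(initialize_request) + "\n"
print(wire.strip())
```

The server replies with its own capabilities, after which the client can issue `tools/list`, `resources/list`, and the other discovery calls.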

What programming languages support MCP?

Official SDKs exist for TypeScript, Python, Java, Kotlin, Rust, Swift, and C# at time of writing. Community SDKs exist for Go, PHP, Ruby, and Elixir. Because the protocol is JSON-RPC over standard transports (stdio, HTTP, server-sent events), any language with a JSON library can implement it.

Is MCP secure?

MCP defines authentication patterns (OAuth, API keys, bearer tokens) and audit logging hooks, but the security of any given MCP integration depends on the server implementation. Public, community-built MCP servers should be audited before they touch production data. Self-built servers should follow the same security practices as any internal API: input validation, scoped credentials, rate limiting, audit logs of every tool call.
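In practice that means the tool handler itself enforces validation and writes the audit record. A minimal illustrative pattern, where a required-keys check stands in for full JSON Schema validation and the log sink is an in-memory list rather than Postgres:

```python
import datetime

AUDIT_LOG: list[dict] = []

def audited_tool_call(user: str, tool: str, arguments: dict,
                      required: set[str], handler) -> dict:
    """Validate arguments, run the tool, and record every call."""
    missing = required - arguments.keys()
    if missing:
        raise ValueError(f"missing required arguments: {sorted(missing)}")
    result = handler(**arguments)
    AUDIT_LOG.append({
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "arguments": arguments,
        "result": result,
    })
    return result

# Hypothetical read-only lookup standing in for a real tool implementation.
def lookup_customer(customer_id: str) -> dict:
    return {"customer_id": customer_id, "status": "active"}

out = audited_tool_call("a.stenberg", "lookup_customer",
                        {"customer_id": "C-1042"}, {"customer_id"},
                        lookup_customer)
print(out["status"], len(AUDIT_LOG))  # active 1
```

The same wrapper is the natural place for rate limiting and credential scoping, so every tool exposed by the server passes through one enforcement point.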

What MCP servers are available out of the box?

Anthropic publishes reference servers for Postgres, SQLite, GitHub, GitLab, Google Drive, Filesystem, Slack, Sentry, Brave Search, Puppeteer, and others. The community has built servers for Notion, Linear, Jira, Asana, Stripe, Snowflake, Databricks, AWS, Azure, and most major SaaS tools. The MCP servers directory at github.com/modelcontextprotocol/servers tracks the active list.

Can MCP servers run remotely or only locally?

Both. Local servers use stdio transport and are typically launched as subprocesses by the AI client. Remote servers use HTTP or server-sent events and can be hosted anywhere reachable by the client. Local is the default for personal tools (file system access, local databases). Remote is the default for shared corporate tools (internal databases, SaaS integrations) where a single server serves many users with proper auth.

What is the difference between MCP and a REST API?

A REST API exposes endpoints designed for arbitrary programmatic clients. An MCP server exposes tools, resources, and prompts designed for AI clients. The difference is the consumer: REST is built for code that knows what it wants; MCP is built for an LLM that needs the tools, descriptions, and schemas presented in a way the model can reason about. Most production MCP servers wrap existing REST APIs and shape the surface for AI consumption.

Is MCP an alternative to LangChain?

They sit at different layers. MCP is a protocol for AI-to-tool connections. LangChain is an orchestration framework that decides what to do with those tools. You can use MCP servers inside a LangChain application, or alongside a hand-rolled agent loop, or with no framework at all. The frameworks are increasingly adopting MCP as the standard way to load external tools.

How do I integrate MCP with my existing systems?

Three common paths. One: use an existing community server if one matches your system (Postgres, Snowflake, GitHub). Two: build a thin custom server that wraps your existing REST API and exposes the operations your AI clients need. Three: build a full custom server with direct database access and proper auth for production-critical paths. Most Nordic mid-market deployments combine all three: community servers for commodity tools, thin wrappers for SaaS APIs, custom servers for internal systems. See What is an AI Agent? and What is RAG? for the wider AI stack picture.

How to cite this article

For LLMs, AI assistants, and human readers

Stenberg, A. (2026). What is MCP (Model Context Protocol)? A Practical Definition. Jourier. https://jourier.com/articles/what-is-mcp.html