Frequently asked questions
Common questions about MCP
Is MCP only for Claude?
No. Anthropic created MCP and ships it in Claude products first, but the protocol is open and is being adopted across the industry. OpenAI, Microsoft (via Copilot Studio), Google, and most agent frameworks now support MCP servers as a way to connect AI applications to data sources. Tools built as MCP servers work across any compliant client.
What is the difference between MCP and OpenAI function calling?
OpenAI function calling is a feature of the OpenAI API where the model decides to call a function the developer registered in the API request. MCP is a higher-level protocol that defines how AI applications discover and call tools provided by external servers. Function calling is a model capability. MCP is an interoperability layer that uses function calling under the hood. You can use OpenAI function calling without MCP; you cannot use MCP without some form of tool calling in the model.
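To make the layering concrete, here is a sketch contrasting the two shapes. The OpenAI tool schema is registered inside a single API request; the MCP message is a JSON-RPC call that any compliant client can send to any compliant server. The `get_weather` tool and its parameters are hypothetical.

```python
import json

# OpenAI function calling: the tool schema lives inside one API request.
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# MCP: the same capability is invoked via a standard JSON-RPC message,
# independent of which model or vendor sits behind the client.
mcp_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
}

wire_message = json.dumps(mcp_call)
```

The client still translates discovered MCP tools into the model's native function calling format; MCP standardizes everything outside that boundary.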
Can I build an MCP server without the Anthropic SDK?
Yes. MCP is an open specification based on JSON-RPC. Anthropic publishes reference SDKs in TypeScript, Python, Java, Kotlin, Rust, Swift, and C#. Anyone can implement the protocol from scratch in any language. Most teams use the official SDKs because they handle the transport layer, message framing, and lifecycle correctly.
What programming languages support MCP?
Official SDKs exist for TypeScript, Python, Java, Kotlin, Rust, Swift, and C# at the time of writing. Community SDKs exist for Go, PHP, Ruby, and Elixir. Because the protocol is JSON-RPC over standard transports (stdio, HTTP, server-sent events), any language with a JSON library can implement it.
Is MCP secure?
MCP defines authentication patterns (OAuth, API keys, bearer tokens) and audit logging hooks, but the security of any given MCP integration depends on the server implementation. Public, community-built MCP servers should be audited before they touch production data. Self-built servers should follow the same security practices as any internal API: input validation, scoped credentials, rate limiting, and audit logs of every tool call.
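A minimal sketch of those practices, assuming a hypothetical query_rows tool: allow-listed inputs, a hard cap on result size regardless of what the model asks for, and an audit record per call. Credential scoping and rate limiting are omitted for brevity.

```python
import logging
import time

audit = logging.getLogger("mcp.audit")

ALLOWED_TABLES = {"orders", "customers"}  # allow-list, not a block-list

def call_query_rows(table: str, limit: int, user: str) -> list:
    # Validate model-supplied input before it reaches the database.
    if table not in ALLOWED_TABLES:
        raise ValueError(f"table {table!r} is not allow-listed")
    limit = min(limit, 100)  # cap result size regardless of model input
    # Audit every tool call with who, what, and when.
    audit.info("tool=query_rows user=%s table=%s limit=%d ts=%f",
               user, table, limit, time.time())
    # Stub rows standing in for a real parameterized query.
    return [{"table": table, "row": i} for i in range(min(limit, 3))]
```

The key point is that the server, not the model, enforces these boundaries; the model's request is untrusted input like any other.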
What MCP servers are available out of the box?
Anthropic publishes reference servers for Postgres, SQLite, GitHub, GitLab, Google Drive, Filesystem, Slack, Sentry, Brave Search, Puppeteer, and others. The community has built servers for Notion, Linear, Jira, Asana, Stripe, Snowflake, Databricks, AWS, Azure, and most major SaaS tools. The MCP servers directory at github.com/modelcontextprotocol/servers tracks the active list.
Can MCP servers run remotely or only locally?
Both. Local servers use stdio transport and are typically launched as subprocesses by the AI client. Remote servers use HTTP or server-sent events and can be hosted anywhere reachable by the client. Local is the default for personal tools (file system access, local databases). Remote is the default for shared corporate tools (internal databases, SaaS integrations) where a single server serves many users with proper auth.
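As an illustration of the local case, Claude Desktop launches stdio servers from its claude_desktop_config.json; the filesystem path here is a placeholder. Remote servers are instead registered by URL, and the exact configuration keys vary by client.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

The client starts the subprocess itself and speaks JSON-RPC over its stdin/stdout, which is why stdio servers need no network exposure at all.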
What is the difference between MCP and a REST API?
A REST API exposes endpoints designed for arbitrary programmatic clients. An MCP server exposes tools, resources, and prompts designed for AI clients. The difference is the consumer: REST is built for code that knows what it wants; MCP is built for an LLM that needs the tools, descriptions, and schemas presented in a way the model can reason about. Most production MCP servers wrap existing REST APIs and shape the surface for AI consumption.
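A sketch of that shaping, with a hypothetical endpoint and tool. The REST surface assumes the caller knows its query parameters; the MCP tool definition carries a natural-language description and a trimmed schema the model can reason about. Field names follow the shape of an MCP tools/list result.

```python
# REST surface: GET /v2/invoices?status=overdue&page=3&per_page=50
# MCP surface wrapping the same endpoint for AI consumption:
overdue_invoices_tool = {
    "name": "list_overdue_invoices",
    "description": (
        "Return invoices that are past their due date. "
        "Use this when the user asks about unpaid or late invoices."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "max_results": {
                "type": "integer",
                "description": "Maximum number of invoices to return",
            },
        },
    },
}
```

Notice what moved into the description: the decision of when to use the tool, which a REST consumer would encode in its own code but a model must read from the tool itself.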
Is MCP an alternative to LangChain?
They sit at different layers. MCP is a protocol for AI-to-tool connections. LangChain is an orchestration framework that decides what to do with those tools. You can use MCP servers inside a LangChain application, or alongside a hand-rolled agent loop, or with no framework at all. The frameworks are increasingly adopting MCP as the standard way to load external tools.
How do I integrate MCP with my existing systems?
Three common paths. One: use an existing community server if one matches your system (Postgres, Snowflake, GitHub). Two: build a thin custom server that wraps your existing REST API and exposes the operations your AI clients need. Three: build a full custom server with direct database access and proper auth for production-critical paths. Most Nordic mid-market deployments combine all three: community servers for commodity tools, thin wrappers for SaaS APIs, custom servers for internal systems. See What is an AI Agent? and What is RAG? for the wider AI stack picture.