Getting Started with MCP (Model Context Protocol): A Practical Guide
MCP is changing how AI agents connect to tools and data. Here's a practical guide to understanding, implementing, and building with the Model Context Protocol.
If you've built anything with AI agents, you know the pain. Every tool integration is a snowflake. You write custom code to connect your agent to a database, then completely different code to connect it to a file system, then something else entirely for a third-party API. It's glue code all the way down.
MCP — the Model Context Protocol — exists to kill that pattern. Think of it as USB-C for AI: one standardised interface that any AI client can use to talk to any tool or data source. No more bespoke integrations for every combination of model and capability.
In this guide, I'll break down what MCP actually is, walk through its core concepts, build a working MCP server in TypeScript, and help you figure out when MCP makes sense — and when it doesn't.
Why MCP Exists
Before MCP, connecting an AI model to external tools meant one of two things:
- Function calling — You define tool schemas in your prompt, the model outputs JSON matching that schema, and your application code handles execution. Works great, but every provider does it slightly differently. OpenAI's function calling looks nothing like Anthropic's tool use, which looks nothing like Google's.
- Custom integrations — You build a bespoke layer between your agent and whatever it needs to access. A database connector here, a file reader there, an API wrapper over there. Each one is tightly coupled to both the model and the data source.
Both approaches work. Neither scales. If you want your agent to work with 10 different tools, you're writing 10 integrations. If you want to swap out the model underneath, you're potentially rewriting all of them.
MCP flips this. Instead of N×M integrations (N models × M tools), you get N+M: five models and ten tools means 15 protocol endpoints instead of 50 bespoke integrations. Each tool exposes itself as an MCP server. Each AI application connects as an MCP client. The protocol handles everything in between.
Anthropic open-sourced the spec in late 2024, and since then adoption has been rapid. Claude Desktop, Cursor, Windsurf, Cline, and dozens of other AI tools now speak MCP natively. The ecosystem of pre-built servers covers databases, file systems, Git, Slack, GitHub, web scraping, and much more.
Core Concepts
MCP has five building blocks. Understanding these is the whole game.
Servers and Clients
An MCP server is a lightweight process that exposes capabilities — tools, data, or prompts — over a standardised protocol. It doesn't need to be a web server. Most MCP servers run locally as child processes, communicating over stdio. For remote deployments, they can instead run over HTTP using the Streamable HTTP transport (more on this under Transport below).
An MCP client is the AI application that connects to servers and uses their capabilities. Claude Desktop is a client. So is Cursor. So is any agent framework that implements the client side of the protocol.
The relationship is always client → server. One client can connect to many servers simultaneously.
Tools
Tools are the most common MCP primitive. A tool is a function that the AI model can call — read a file, query a database, send a message, execute code. The server declares what tools it offers (name, description, input schema), and the client presents them to the model as available actions.
This is conceptually similar to function calling, but with a crucial difference: the tool definition and execution live in the server, not in your application code. Swap the server, swap the tools. No client changes needed.
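Concretely, when a client asks a server what it offers via tools/list, it gets back descriptors shaped roughly like this (a simplified sketch of the wire format, using the weather tool we'll build below):
{
  "tools": [
    {
      "name": "get-weather",
      "description": "Get current weather for a city",
      "inputSchema": {
        "type": "object",
        "properties": {
          "city": { "type": "string", "description": "City name" }
        },
        "required": ["city"]
      }
    }
  ]
}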
Resources
Resources represent data that the client can read. Think of them like GET endpoints — they're for exposing information without side effects. A file system server might expose files as resources. A database server might expose table schemas. A monitoring server might expose current metrics.
Resources have URIs (like file:///path/to/doc.md or db://users/schema) and can be listed, read, and subscribed to for changes. They give the model context without requiring a tool call.
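Reading one is a plain request/response. A resources/read call returns content shaped roughly like this (a simplified sketch; the file contents are illustrative):
{
  "contents": [
    {
      "uri": "file:///path/to/doc.md",
      "mimeType": "text/markdown",
      "text": "# Project Notes..."
    }
  ]
}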
Prompts
Prompts are reusable templates that servers can offer to clients. A code review server might expose a "review this PR" prompt template. A writing server might offer different editorial styles. The client can discover available prompts and let users select them.
Honestly, prompts are the least-used primitive right now. Tools and resources do most of the heavy lifting. But they're useful for encoding domain-specific workflows that you want to expose consistently.
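Registering a prompt with the TypeScript SDK mirrors registering a tool. A minimal sketch (the prompt name and wording are invented for illustration):
server.prompt(
  "review-code",
  "Review code for style and correctness",
  { code: z.string().describe("The code to review") },
  ({ code }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Please review the following code:\n\n${code}`,
        },
      },
    ],
  })
);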
Transport
MCP currently supports two transport mechanisms:
- stdio — The client spawns the server as a child process and communicates over stdin/stdout. This is the default for local tools. It's simple, fast, and requires zero networking.
- Streamable HTTP — The server runs as an HTTP service. The client connects over HTTP with Server-Sent Events for server-to-client messages. Use this for remote servers or shared deployments.
For most local development, stdio is all you need.
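If you're writing your own client rather than using Claude Desktop or Cursor, the TypeScript SDK makes the stdio handshake a few lines. A sketch (the server path is a placeholder):
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawns the server as a child process and speaks MCP over stdin/stdout
const transport = new StdioClientTransport({
  command: "node",
  args: ["./dist/index.js"],
});

const client = new Client({ name: "my-client", version: "1.0.0" });
await client.connect(transport); // runs the initialize handshake

const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));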
Building an MCP Server in TypeScript
Let's build something real. We'll create an MCP server that provides weather data — a simple example that demonstrates the full pattern.
Project Setup
mkdir mcp-weather-server
cd mcp-weather-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --init
Update your tsconfig.json to target ESNext with NodeNext module resolution, and set "outDir": "./dist".
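That works out to something like the following (a minimal sketch; adjust to your needs). One gotcha worth flagging: with NodeNext ESM output you'll likely also need "type": "module" in your package.json so Node treats the compiled file as an ES module.
{
  "compilerOptions": {
    "target": "ESNext",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "strict": true
  },
  "include": ["src"]
}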
The Server
Create src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
name: "weather",
version: "1.0.0",
});
// Define a tool
server.tool(
"get-weather",
"Get current weather for a city",
{
city: z.string().describe("City name"),
units: z
.enum(["celsius", "fahrenheit"])
.default("celsius")
.describe("Temperature units"),
},
async ({ city, units }) => {
// In production, call a real weather API here
const response = await fetch(
`https://wttr.in/${encodeURIComponent(city)}?format=j1`
);
if (!response.ok) {
return {
content: [
{
type: "text",
text: `Failed to fetch weather for ${city}`,
},
],
isError: true,
};
}
const data = (await response.json()) as any; // wttr.in's JSON shape isn't typed
const current = data.current_condition[0];
const tempC = current.temp_C;
const tempF = current.temp_F;
const description = current.weatherDesc[0].value;
const humidity = current.humidity;
const temp = units === "celsius" ? `${tempC}°C` : `${tempF}°F`;
return {
content: [
{
type: "text",
text: `Weather in ${city}: ${description}, ${temp}, Humidity: ${humidity}%`,
},
],
};
}
);
// Define a resource
server.resource(
"supported-cities",
"weather://cities",
{
description: "List of cities with reliable weather data",
mimeType: "application/json",
},
async () => ({
contents: [
{
uri: "weather://cities",
mimeType: "application/json",
text: JSON.stringify([
"Sydney",
"Melbourne",
"London",
"New York",
"Tokyo",
"Berlin",
]),
},
],
})
);
// Connect transport and start
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
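// Log to stderr: stdout is reserved for the JSON-RPC protocol stream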
console.error("Weather MCP server running on stdio");
}
main().catch(console.error);
Build and Test
npx tsc
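Before wiring it into a client, you can exercise the server interactively with the MCP Inspector, the project's browser-based debugging tool:
npx @modelcontextprotocol/inspector node dist/index.js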
You can test this immediately with Claude Desktop. Add this to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"weather": {
"command": "node",
"args": ["/absolute/path/to/mcp-weather-server/dist/index.js"]
}
}
}
Restart Claude Desktop, and you'll see the weather tool available. Ask "What's the weather in Melbourne?" and Claude will call your server.
That's it. No API routes, no auth middleware, no webhook handlers. Just a process that speaks MCP.
What's Happening Under the Hood
When Claude Desktop starts, it spawns your server as a child process. The client sends an initialize request over stdin. Your server responds with its capabilities — one tool (get-weather) and one resource (weather://cities). When the model decides to use the tool, the client sends a tools/call request with the arguments. Your server executes the function and returns the result. The model incorporates that result into its response.
The entire exchange is JSON-RPC 2.0 over stdio. Clean, debuggable, and well-specified.
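For example, the tools/call leg of that exchange looks roughly like this (whitespace added; the reply text is illustrative):
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get-weather",
    "arguments": { "city": "Melbourne", "units": "celsius" }
  }
}
And the server's reply:
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Weather in Melbourne: Sunny, 22°C, Humidity: 60%"
      }
    ]
  }
}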
MCP vs the Alternatives
Let's be honest about the tradeoffs.
MCP vs Direct Function Calling
Function calling (OpenAI-style) is simpler if you're building a single-model application with a handful of tools. You define the schemas inline, handle execution in your app, and move on. No extra processes, no protocol overhead.
MCP wins when you want reusability and interoperability. Build a tool once as an MCP server, and it works with Claude, Cursor, your custom agent, and anything else that speaks MCP. Function calling locks you into one integration point.
MCP vs Custom API Integrations
If you're building a production system with specific requirements around auth, rate limiting, and error handling, you might still want custom integrations. MCP servers can handle all of this, but the ecosystem tooling is still maturing.
MCP wins on speed of integration. There are hundreds of pre-built MCP servers available. Need to connect your agent to PostgreSQL? There's an MCP server for that. GitHub? Covered. Google Drive? Done. You can go from zero to working integration in minutes.
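To make that concrete: pointing Claude Desktop at a Postgres database is typically one config entry, along these lines (a sketch using the reference Postgres server; the connection string is a placeholder):
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    }
  }
}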
MCP vs LangChain / LlamaIndex Tool Abstractions
Framework-specific tool abstractions give you tighter integration with that framework's ecosystem — chains, memory, retrieval, etc. But they lock you into that framework.
MCP is framework-agnostic. An MCP server doesn't care whether the client is LangChain, a raw API call, or Claude Desktop. This makes MCP the better long-term bet if you value portability.
When to Use MCP vs When to Skip It
Use MCP When
- You're building tools that multiple AI clients will consume. MCP gives you write-once, use-everywhere. If your team uses Claude Desktop, Cursor, and a custom agent framework, one MCP server serves all three.
- You want to leverage pre-built servers. The MCP ecosystem already has servers for most common integrations. Check the official server list before building from scratch.
- You need clean separation between AI logic and tool logic. MCP naturally enforces this boundary. Your agent code doesn't know how tools work internally. Your tool code doesn't know which model is calling it. That's good architecture.
- You're prototyping and want to move fast. Spinning up an MCP server is genuinely quick. The SDK handles all the protocol details. You just define tools and implement their logic.
Skip MCP When
- You have a single, tightly-integrated application. If you're building one product with one model and a fixed set of tools, the overhead of running separate server processes might not be worth it. Direct function calling is simpler.
- You need sub-millisecond tool execution. The stdio transport adds minimal overhead, but it's not zero. For extremely latency-sensitive applications, in-process tool execution will always be faster.
- Your tools require complex, stateful sessions. MCP supports some state management, but if your tools need deep session context, authentication flows, or long-running connections, you might outgrow what the protocol handles cleanly today.
- You're operating in a constrained environment. MCP servers are separate processes. If you're deploying to a serverless environment or a container with strict resource limits, spawning child processes might not be practical. The HTTP transport helps here, but adds networking complexity.
The Ecosystem Right Now
As of early 2026, MCP is no longer experimental — it's the de facto standard for AI tool integration. Here's the landscape:
Clients: Claude Desktop, Claude Code, Cursor, Windsurf, Cline, Continue, Zed, and most serious AI coding tools support MCP. The client SDK is available in TypeScript, Python, Java, C#, and Rust.
Servers: There are official servers maintained by Anthropic for filesystem, Git, GitHub, GitLab, Postgres, SQLite, Slack, Google Drive, Puppeteer, Brave Search, and more. The community has built hundreds more.
Frameworks: LangChain, LlamaIndex, Vercel AI SDK, and Spring AI all have MCP integration. You can use MCP servers as tool providers in any of these frameworks.
Spec: The protocol is well-documented at modelcontextprotocol.io. The spec covers transport, capability negotiation, tool schemas, resource handling, and more. If you're implementing a client or server from scratch, the spec is your source of truth.
What's Next
MCP is moving fast. The areas to watch:
- Authentication and authorization — The spec now includes OAuth 2.0 support for remote servers, making multi-tenant MCP deployments practical.
- Streamable HTTP transport — The spec has already replaced the older HTTP+SSE transport with this more flexible streaming approach; watch for client and server support to catch up.
- Elicitation — Servers can now request additional information from users mid-execution, enabling more interactive tool workflows.
If you're building AI agents or tools in 2026, MCP should be in your toolkit. Not because it's trendy, but because it solves a real problem — the N×M integration nightmare — with a clean, practical protocol.
Start with the official quickstart, pick a pre-built server that solves a problem you actually have, and build from there. The best way to learn MCP is to ship something with it.