MCP (Model Context Protocol)

Understanding MCP: when external tool servers make sense, and when they are overkill.

What is MCP?

The Model Context Protocol (MCP) is a standardized way to connect AI agents to external tools and data sources via dedicated server processes. Instead of defining tools inline in your agent code, MCP runs a separate server that exposes tools over a structured protocol.

MCP vs. Regular Tool Calls

Regular tool calls are functions defined directly in your agent's codebase. The agent calls them, they execute, and results return in the same process. MCP separates this: tools live in external servers that the agent communicates with over a protocol.

Regular Tool Calls

Tools defined inline in your agent code. Simple, fast, and sufficient for most use cases.

const tools = {
  get_weather: (location) => fetchWeather(location)
}

// Direct function call, same process
const result = await tools.get_weather("Tokyo")

MCP Servers

Tools exposed by external server processes. Adds inter-process overhead but enables cross-language tooling and shared tool ecosystems.

// Separate server process
const client = new MCPClient()

// Discover tools via the protocol
const tools = await client.listTools()

// Call via the protocol
const result = await client.invoke(
  "get_weather", { location: "Tokyo" }
)

When MCP Makes Sense

MCP shines in specific scenarios where its additional complexity pays off.

🌐 Multi-Language Teams

Your tools are written in Python but your agent is in TypeScript, or vice versa.

🔗 Shared Tool Ecosystem

Multiple agents across different projects need to access the same tools.

🏢 Enterprise Integration

You need to expose existing internal services as agent tools without modifying them.

🛒 Tool Marketplace

You want to use community-maintained tools without copying code into your project.

When MCP is Overkill

For many use cases, MCP adds unnecessary complexity.

Single-Language Projects

If your tools and agent are in the same language, inline functions are simpler and faster.

Simple Agents

A chatbot with a few tools doesn't need the overhead of running separate server processes.

Rapid Prototyping

When iterating quickly, the indirection of MCP slows down development.

Latency-Sensitive Apps

Network calls to tool servers add latency that inline functions don't have.

The Three Core Primitives

MCP servers can expose three types of capabilities to clients. Most documentation focuses on tools, but resources and prompts are equally important.

Tools

Functions the model can call to perform actions. Tools are invoked by the LLM to interact with external systems—search databases, call APIs, execute code.

query_database, send_email, create_file

Resources

Data the server can provide for context. Resources are read-only content the client can fetch—files, database records, API responses—that inform the model's responses.

file://config.json, db://users/123, api://weather/today

Prompts

Pre-defined prompt templates the server offers. Prompts are reusable interaction patterns with parameters—like "summarize this document" or "review this code".

summarize_document, code_review, translate_text
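The three primitives can be pictured as the capability lists a server advertises to a client. The sketch below is illustrative: the field shapes follow the spirit of the MCP specification (tools carry a JSON Schema `inputSchema`, resources are addressed by URI, prompts declare named arguments), but the concrete names and values are invented for this example.

```javascript
// Hypothetical capability lists an MCP server might advertise.
// Shapes are illustrative, not the exact wire format.
const capabilities = {
  tools: [
    {
      name: "query_database",
      description: "Run a read-only SQL query",
      inputSchema: {
        type: "object",
        properties: { sql: { type: "string" } },
        required: ["sql"]
      }
    }
  ],
  resources: [
    { uri: "file://config.json", name: "App config", mimeType: "application/json" }
  ],
  prompts: [
    {
      name: "summarize_document",
      description: "Summarize a document",
      arguments: [{ name: "doc_uri", required: true }]
    }
  ]
}

// A client can inspect each primitive type separately before
// deciding what to surface to the LLM.
const toolNames = capabilities.tools.map((t) => t.name)
console.log(toolNames) // ["query_database"]
```

Note the division of labor: tools are things the model *does*, resources are things the model *reads*, and prompts are interaction patterns the server *suggests*.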

Server Lifecycle

MCP connections follow a structured lifecycle with capability negotiation at startup.

Initialize

Client sends initialize request with protocol version and client capabilities. This is always the first message.

Capabilities Exchange

Server responds with its supported capabilities (tools, resources, prompts) and protocol version agreement.

Initialized

Client sends initialized notification to confirm setup is complete. Normal operations can now begin.

Operation

Client and server exchange requests such as tools/list, tools/call, resources/list, resources/read, prompts/list, and prompts/get.

Shutdown

Either side can close the connection. Servers should clean up resources (database connections, file handles).
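The handshake above can be sketched as the JSON-RPC 2.0 messages the client sends. This is a simplified sketch: the field names follow the MCP specification, but the protocol version string, capability contents, and client name are assumptions for illustration.

```javascript
// Step 1: the client opens the connection with an initialize request.
// protocolVersion and clientInfo values here are illustrative.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {}, // what this client supports
    clientInfo: { name: "example-agent", version: "0.1.0" }
  }
}

// Step 2: the server's response (not shown) carries its own capabilities.

// Step 3: the client confirms setup with a notification.
// Notifications have no id because they expect no reply.
const initializedNotification = {
  jsonrpc: "2.0",
  method: "notifications/initialized"
}

// Only after this exchange may the client send tools/list, tools/call, etc.
console.log(initializeRequest.method, "->", initializedNotification.method)
```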

Real MCP Servers

The MCP ecosystem includes official reference servers and community-built integrations for popular platforms.

Filesystem

Secure file operations with configurable access controls. Read, write, and manage files within specified directories.

GitHub

Repository management, issues, pull requests, and code search. Requires a personal access token.

Slack

Channel management, messaging, and workspace interactions. Post messages, read history, manage threads.

PostgreSQL

Database queries with read-only or read-write access. Execute SQL and explore schema.

Memory

Knowledge graph-based persistent memory. Store and retrieve structured information across conversations.

Git

Read, search, and manipulate Git repositories. View commits, diffs, branches, and history.

Configuration Example

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/files"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "..." }
    }
  }
}

How MCP Works

MCP defines a client-server architecture where the agent is the client and tools are exposed by servers.

Step 1: Discovery

The agent connects to an MCP server and receives a list of available tools with their schemas.

Step 2: Invocation

When the LLM decides to use a tool, the agent sends a request to the MCP server.

Step 3: Execution

The MCP server runs the tool and returns results in a standardized format.

Step 4: Integration

Results flow back to the agent and into the LLM context, just like regular tool results.
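The four steps can be traced in-process with a stub. Everything here is hypothetical: `FakeServer` and its methods stand in for a real MCP client/server pair, and the result shape (a `content` array of typed parts) loosely mirrors how MCP tool results are structured.

```javascript
// A stub standing in for a real MCP server, to trace the four steps.
class FakeServer {
  // Step 1: Discovery — the client asks what tools exist.
  listTools() {
    return [{ name: "get_weather", inputSchema: { location: "string" } }]
  }
  // Step 3: Execution — the server runs the tool, returns a standard shape.
  callTool(name, args) {
    if (name === "get_weather") {
      return { content: [{ type: "text", text: `Sunny in ${args.location}` }] }
    }
    throw new Error(`Unknown tool: ${name}`)
  }
}

const server = new FakeServer()

// Step 1: discovery
const available = server.listTools()

// Step 2: invocation — the agent relays the LLM's tool choice.
const result = server.callTool("get_weather", { location: "Tokyo" })

// Step 4: integration — the result text goes back into the LLM context.
console.log(result.content[0].text) // "Sunny in Tokyo"
```

From the agent's point of view, the final step is indistinguishable from an inline tool call: a result lands in the LLM context. Only the transport differs.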

Practical Advice

Guidelines for deciding whether to use MCP in your project.

Start simple: use inline tool definitions until you hit a specific limitation.

Consider MCP when you find yourself copy-pasting tool code between projects.

The overhead of running MCP servers only makes sense at scale or in enterprise settings.

Community MCP servers can accelerate development but add dependency risks.

Key Takeaways

  • MCP is a protocol for exposing tools via external servers, not a replacement for regular tool calls
  • For most single-project agents, inline tools are simpler and have lower latency
  • MCP shines in polyglot environments and shared tool ecosystems
  • Don't reach for MCP by default: it's a solution for specific scaling and interoperability challenges