If you've ever tried to integrate AI agents into your workflow, you've probably hit a wall when it comes to connecting them with external tools. MCP was born to solve exactly that problem. In this article, we'll break down how Model Context Protocol works and how to put it into practice — with code examples throughout.
What Is MCP? The Standard Protocol for AI Agent Integration
MCP (Model Context Protocol) is an open protocol published by Anthropic in 2024. It defines a standard communication specification that allows large language models (LLMs) to interact safely with external data sources and tools.
In the past, giving an AI agent access to external capabilities meant writing custom integration code for each API. Every new tool added more connector code, and maintenance costs snowballed fast — a pain point many engineers know all too well. MCP addresses this by providing what Anthropic describes as "a USB-C port for AI" — a unified interface that works across the board.
How It Differs from Traditional API Integration
Let's make the USB-C analogy more concrete. With the traditional approach, if you wanted AI to check the weather, you wrote weather API code. If you wanted it to query a database, you wrote database connection code. If you wanted it to read files, you wrote filesystem code. Ten tools meant ten connectors; twenty tools meant twenty.
MCP fundamentally changes this model. The AI client only needs to implement the MCP protocol once — after that, adding a new tool is as simple as plugging in a new MCP server. When I first understood this architecture, my honest reaction was: "Why didn't this exist sooner?" MCP is trying to do for the AI agent world what HTTP did for the web.
Comparison with OpenAPI and Function Calling
Many developers wonder: "We already have Function Calling — why do we need MCP?" Function Calling is a mechanism for LLMs to express intent ("I want to call this function"), but it doesn't define a standard for how the function is actually executed or how data is exchanged. A significant portion of the implementation is still left to the caller.
MCP standardizes the entire flow: tool discovery (what tools are available), invocation (how to call them), and result delivery (how results are returned). It also includes session management and error handling, making it a proper foundation for production-quality agent integrations.
OpenAPI is worth distinguishing here as well. OpenAPI is a specification for describing REST APIs — primarily a documentation standard for human developers. MCP, by contrast, is designed for AI models to dynamically discover tools and invoke them appropriately based on conversational context. Tools now exist to auto-generate MCP servers from OpenAPI definitions, so think of the two as complementary rather than competing.
MCP Architecture and Core Concepts
MCP follows a client-server model with three main actors:
- MCP Host: The application the user interacts with directly — Claude Desktop, an IDE, etc.
- MCP Client: A component inside the host that manages communication with servers
- MCP Server: A process that provides access to external tools and data sources
Servers can expose three types of primitives: Tools, Resources, and Prompts. Tools are functions the model can call; Resources are data the model can reference; Prompts are reusable templates. I initially thought Tools alone would be sufficient, but once I started using MCP in practice, I quickly discovered how useful Resources are for injecting context.
Communication uses JSON-RPC 2.0 as the message format. For local inter-process communication, the transport is stdio; for remote connections, it is Streamable HTTP.
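To make that concrete, here is a sketch of what a single tool call looks like on the wire. The method and field names follow the MCP specification; the tool name and arguments are invented for illustration:

```python
import json

# A tools/call request as the client would send it over stdio or
# Streamable HTTP (one JSON-RPC 2.0 message; the tool is illustrative)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get-weather",
        "arguments": {"city": "Tokyo"},
    },
}

# The matching response carries the tool result in result.content
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Sunny, 23°C"}],
    },
}

# Over stdio, each message is serialized as a single line of JSON
wire = json.dumps(request)
print(wire)
```

Each message is one JSON-RPC 2.0 object; the stdio transport writes them as newline-delimited JSON on stdin/stdout.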
A Closer Look at the Three Primitives
Tools are the most intuitive of the three. They define actions an AI model can invoke based on the situation — things like "get weather," "search the database," or "send an email." Each tool has a name, description, and input parameter schema, and the model uses this information to decide when to call which tool with what arguments.
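As a sketch, this is roughly what one entry in a server's tools/list response looks like. The shape (name, description, inputSchema) follows the spec; the weather tool itself is invented:

```python
import json

# One tool as advertised via tools/list. The model reads the
# description and JSON Schema to decide when and how to call it.
weather_tool = {
    "name": "get-weather",
    "description": "Fetches current weather for a city. "
                   "Use when the user asks about weather conditions.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

print(json.dumps(weather_tool, indent=2))
```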
Resources are for reading data, as opposed to performing actions. For example, you can expose internal documents or database records as Resources, making them available as context when the AI generates a response. Each Resource is addressed by a URI, much like a file in a filesystem.
Prompts let the server provide prompt templates optimized for specific tasks. For example, if you define a code review prompt or SQL generation prompt in your MCP server, clients can call those templates to deliver consistent, high-quality instructions to the model every time.
Lifecycle and Session Management
MCP defines a clear lifecycle for all communication. When a client connects to a server, it begins with an initialization phase, where the protocol version is negotiated and each side exchanges its supported capabilities. This is where the server tells the client things like "I provide Tools but not Resources."
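The handshake can be sketched as three messages. The method names and field shapes follow the MCP specification; the client and server names are illustrative, and the version string shown is the protocol's original revision:

```python
# Client -> server: propose a protocol version and announce capabilities
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

# Server -> client: here, "I provide Tools (with change notifications)
# but not Resources"
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {"listChanged": True}},
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}

# Client -> server: confirmation notification; the operation phase begins
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}
```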
Once initialization completes, the connection enters the operation phase, where tool calls and resource fetches can take place. One notable feature: if the server's tool list changes during the operation phase, there's a built-in mechanism to notify the client. This allows you to add or remove tools dynamically without restarting the server.
Finally, the shutdown phase handles a clean disconnection. This lifecycle management makes the protocol resilient to edge cases like connection drops and reconnects. I experienced this firsthand when using Streamable HTTP in an unstable network environment — the session reconnected naturally, which saved the day.
Choosing the Right Transport Layer
Picking the right transport layer for your use case matters.
stdio (standard I/O) launches the MCP server as a local child process and communicates via stdin/stdout. It's easy to set up and ideal for local development or desktop app integrations. If you're calling a local MCP server from Claude Desktop, stdio is almost always sufficient.
Streamable HTTP communicates with an MCP server over the network. It supports multiple simultaneous client connections, making it well-suited for deploying servers to the cloud and sharing them across a team. It also integrates cleanly with security features like authentication and TLS.
My general recommendation: start with stdio to validate your implementation, then migrate to Streamable HTTP when you need to share the server with your team or run it in production.
Why MCP Is Getting Attention Right Now
As AI agents become increasingly practical, standardizing tool integration is an unavoidable challenge. Several factors explain why MCP is gaining traction.
First, the ecosystem is expanding rapidly. MCP servers already exist for major services like GitHub, Slack, PostgreSQL, and Google Drive. Once you implement an MCP client, you can add any of these tools like installing a plugin — that's a major selling point.
Second, the security model is thoughtful. MCP's design recommends inserting explicit user approval before tool calls, which reduces the risk of an AI autonomously executing dangerous operations. This is a critical consideration for enterprises evaluating AI agent adoption.
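As a sketch of that idea, a host application can gate every tool invocation behind a confirmation step. Everything here (function names, prompt wording) is hypothetical; real hosts like Claude Desktop implement their own approval UI:

```python
# A minimal host-side approval gate: the tool call only runs after
# explicit user confirmation. `execute_tool` and `ask_user` are
# injected so the gate itself stays testable.
def confirm_and_call(tool_name, arguments, execute_tool, ask_user=input):
    """Ask the user before running a tool; return None if declined."""
    answer = ask_user(f"Allow call to '{tool_name}' with {arguments}? [y/N] ")
    if answer.strip().lower() != "y":
        return None
    return execute_tool(tool_name, arguments)

# Demo with a stubbed tool and an auto-approving "user"
result = confirm_and_call(
    "get-weather", {"city": "Tokyo"},
    execute_tool=lambda name, args: f"called {name}",
    ask_user=lambda prompt: "y",
)
print(result)  # called get-weather
```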
Third, players beyond Anthropic, including OpenAI and Google DeepMind, have announced support for MCP, which means you can avoid vendor lock-in — another point in its favor.
A Snapshot of the Growing Ecosystem
Here's a sample of MCP servers available as of 2025:
- Filesystem: Safely read, write, and search local files, with directory sandboxing
- GitHub: Search repositories, create and manage Issues and PRs, search code — all directly from AI
- Slack: Send messages to channels, retrieve history, look up user info
- PostgreSQL / SQLite: Safely execute read-only queries and expose schema information
- Google Drive: Search documents and retrieve their contents
- Puppeteer / Playwright: Automate web browsers — take screenshots, fill out forms
All of these servers implement the same MCP standard, so you can combine them without modifying your client code. The ability to incrementally expand — "add Slack today, connect the database next week" — is a direct benefit of standardization.
Why Developer Tool Integration Is Accelerating
One reason MCP has earned particularly strong support from the developer community is its integration with IDEs. In VS Code and JetBrains IDEs, AI assistants are gaining the ability to access project-specific tools through MCP.
For example, if you expose your CI/CD pipeline status as an MCP server, you can ask your AI assistant "show me the latest build result" while coding — no browser needed. Or, if you expose your internal API documentation as a Resource, the AI can reference it to generate accurate API call code.
My team experienced this benefit firsthand when we connected deploy logs and error monitoring dashboards to our AI through MCP. Incident investigation became noticeably faster. The compounding cost of context-switching and manual copy-pasting — small individually but significant in aggregate — dropped considerably.
How to Implement an MCP Server
Here's a quick walkthrough of building an MCP server. In TypeScript, the official SDK is the fastest path:
```typescript
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

// fetchWeather is your own implementation (e.g., a call to a weather API)
server.tool("get-weather", { city: z.string() }, async ({ city }) => {
  const data = await fetchWeather(city);
  return { content: [{ type: "text", text: JSON.stringify(data) }] };
});

await server.connect(new StdioServerTransport());
```

A Python SDK is also available with equivalent functionality — choose based on your team's stack. The recommended approach is to start with a single small tool, verify it works, then expand incrementally.
Example: Python SDK Implementation
Here's what an MCP server looks like in Python. Using the high-level FastMCP API, you can define tools intuitively with decorators:
```python
import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
def get_weather(city: str) -> str:
    """Fetches weather information for the specified city."""
    # In practice, this calls an external API
    data = fetch_weather(city)
    return f"Weather in {city}: {data['condition']}, Temperature: {data['temperature']}°C"

@mcp.tool()
def search_documents(query: str, max_results: int = 5) -> str:
    """Searches internal documents by keyword."""
    results = document_search(query, limit=max_results)
    return "\n".join([f"- {r['title']}: {r['summary']}" for r in results])

@mcp.resource("config://app")
def get_app_config() -> str:
    """Returns the application configuration."""
    config = load_config()
    return json.dumps(config, ensure_ascii=False, indent=2)

if __name__ == "__main__":
    mcp.run(transport="stdio")
```

Python type hints are used directly as the tool's parameter schema, which is convenient. Docstrings serve as the description the model uses to understand what each tool does — write them clearly, explaining what the tool does and when it should be used.
Example: Resource Implementation
Here's how to implement the Resources primitive. The following example exposes database table information as a Resource:
```typescript
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

// `db` is assumed to be a configured PostgreSQL client (e.g., a pg Pool)
const server = new McpServer({ name: "db-server", version: "1.0.0" });

// Static resource: list all tables
server.resource(
  "tables-list",
  "schema://tables",
  async (uri) => {
    const tables = await db.query(
      "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
    );
    return {
      contents: [{
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify(tables.rows)
      }]
    };
  }
);

// Dynamic resource: the table name is a URI template parameter
server.resource(
  "table-schema",
  new ResourceTemplate("schema://tables/{tableName}", { list: undefined }),
  async (uri, { tableName }) => {
    const columns = await db.query(
      "SELECT column_name, data_type FROM information_schema.columns WHERE table_name = $1",
      [tableName]
    );
    return {
      contents: [{
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify(columns.rows)
      }]
    };
  }
);
```

With this setup, the AI model can automatically learn what tables exist in the database and what columns each table contains. A natural workflow emerges: the model reads schema information via Resources, then constructs a well-formed query before calling a Tool to execute it.
Setting Up MCP in Claude Desktop
To use your MCP server with Claude Desktop, add it to the Claude Desktop config file (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "my-weather-server": {
      "command": "node",
      "args": ["/path/to/my-server/dist/index.js"],
      "env": {
        "WEATHER_API_KEY": "your-api-key-here"
      }
    },
    "my-python-server": {
      "command": "python",
      "args": ["/path/to/my-server/server.py"]
    }
  }
}
```

Save the config file and restart Claude Desktop, and your tools become available in Claude. A hammer icon will appear in the chat interface — click it to see the list of available tools.
When I first set this up, I was genuinely surprised by how few lines of JSON it took before Claude was actively using external tools. Asking Claude about the weather and watching it actually call the API to answer — that's the moment MCP's power clicks.
Debugging and Testing During Development
Debugging is an unavoidable challenge when developing MCP servers. When using the stdio transport, the server's standard output is used for MCP communication, so you cannot use console.log or print for debug output. Write debug logs to stderr or to a file instead.
The official MCP Inspector tool is extremely useful here. It's a browser-based UI that connects to your MCP server and lets you view available tools, manually invoke them, and inspect request/response payloads:
```shell
npx @modelcontextprotocol/inspector node dist/index.js
```

Running this command spins up a local Inspector UI where you can visually verify your server's behavior. I use it every time I add a new tool — it has drastically reduced the "it worked in isolation but broke when connected to the AI client" cycle.
From a testing perspective, it's also worth implementing tool functions in a way that makes them unit-testable. If you extract business logic into pure functions separate from the MCP protocol layer, you can test them independently. Whether you enforce this separation upfront makes a significant difference in maintainability as the server grows.
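A minimal sketch of that separation, with `fetch_weather` standing in for the real API call:

```python
# Business logic as a pure function: no MCP imports, trivially testable
def format_weather(data: dict) -> str:
    return f"{data['condition']}, {data['temperature']}°C"

# The MCP tool (registered elsewhere with @mcp.tool()) stays a thin
# wrapper; fetch_weather is a placeholder for the external API call
def get_weather(city: str) -> str:
    data = fetch_weather(city)
    return f"Weather in {city}: {format_weather(data)}"

# A unit test hits the pure function directly, no MCP client needed
assert format_weather({"condition": "Sunny", "temperature": 23}) == "Sunny, 23°C"
```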
Practical Use Cases for MCP
Let's look at some real-world scenarios with accompanying code.
Internal Knowledge Base Search Server
Making internal documentation and wikis searchable by AI is often the first MCP use case enterprises adopt:
```python
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("knowledge-base")
DB_PATH = "/path/to/knowledge.db"

@mcp.tool()
def search_knowledge(query: str, category: str = "") -> str:
    """Full-text search of the internal knowledge base.
    Can be filtered by category (e.g., 'engineering', 'sales', 'hr')."""
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    if category:
        cursor.execute(
            "SELECT title, content, updated_at FROM articles "
            "WHERE content LIKE ? AND category = ? ORDER BY updated_at DESC LIMIT 10",
            (f"%{query}%", category)
        )
    else:
        cursor.execute(
            "SELECT title, content, updated_at FROM articles "
            "WHERE content LIKE ? ORDER BY updated_at DESC LIMIT 10",
            (f"%{query}%",)
        )
    results = cursor.fetchall()
    conn.close()
    if not results:
        return "No matching articles found."
    output = []
    for title, content, updated_at in results:
        # Truncate long articles to the first 300 characters
        snippet = content[:300] + "..." if len(content) > 300 else content
        output.append(f"## {title}\nUpdated: {updated_at}\n\n{snippet}")
    return "\n\n---\n\n".join(output)

@mcp.tool()
def list_categories() -> str:
    """Returns a list of available categories."""
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    cursor.execute("SELECT DISTINCT category, COUNT(*) FROM articles GROUP BY category")
    categories = cursor.fetchall()
    conn.close()
    return "\n".join([f"- {cat}: {count} articles" for cat, count in categories])
```

With this server in place, asking the AI "walk me through the onboarding process for new team members" will have it search internal documentation and compose a response from the relevant guides.
Business Automation Through Multi-Tool Orchestration
MCP's real power shows when you combine multiple tools. Here's an example that integrates a project management tool with Slack notifications:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "project-ops", version: "1.0.0" });

// Retrieve a list of tasks
server.tool(
  "list-tasks",
  { status: z.enum(["todo", "in_progress", "done"]).optional() },
  async ({ status }) => {
    const tasks = await fetchTasksFromProjectTool(status);
    const formatted = tasks.map(t =>
      `[${t.id}] ${t.title} (${t.status}) - Assignee: ${t.assignee}`
    ).join("\n");
    return { content: [{ type: "text", text: formatted }] };
  }
);

// Update task status
server.tool(
  "update-task-status",
  {
    taskId: z.string(),
    newStatus: z.enum(["todo", "in_progress", "done"]),
    comment: z.string().optional()
  },
  async ({ taskId, newStatus, comment }) => {
    await updateTaskStatus(taskId, newStatus);
    if (comment) {
      await addTaskComment(taskId, comment);
    }
    return {
      content: [{ type: "text", text: `Task ${taskId} updated to ${newStatus}` }]
    };
  }
);

// Send a Slack notification
server.tool(
  "notify-slack",
  {
    channel: z.string(),
    message: z.string()
  },
  async ({ channel, message }) => {
    await postToSlack(channel, message);
    return {
      content: [{ type: "text", text: `Notification sent to #${channel}` }]
    };
  }
);
```

Connect this server to an AI agent and you can give natural language instructions like: "Check this week's incomplete tasks and send a Slack reminder to the assignee for anything past its deadline." The AI calls list-tasks to get the list, evaluates deadlines, and calls notify-slack where needed — reasoning through the whole flow autonomously.
Key Considerations Before Adopting MCP
Model Context Protocol is still evolving, and there are a few things to keep in mind.
The spec is updated frequently and breaking changes are possible. When integrating MCP into production, make a habit of pinning SDK versions and regularly reviewing the CHANGELOG.
Permission design for the tools your MCP server exposes also requires careful thought. Exposing write-access database tools without restriction means an AI misjudgment could directly affect production data. In my experience, a surprising number of projects have regretted treating permission design as an afterthought.
Best Practices for Permission Design
Here are some principles to build safer MCP servers:
Apply the principle of least privilege. For a database-connected MCP server, use a read-only database user as the default. Where write access is necessary, limit UPDATE/DELETE to specific tables and columns — and never expose destructive operations like DROP TABLE as a tool.
Write precise tool descriptions. The AI model relies on tool descriptions to decide when to use them. Vague descriptions lead to tools being called at the wrong time. Don't just describe what a tool does — also describe when it should and shouldn't be used. This meaningfully improves model decision quality.
Always validate input server-side. Parameters passed from MCP clients are not guaranteed to be well-formed. Standard defenses against SQL injection, path traversal, and similar vulnerabilities are just as necessary here as in any web application.
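For instance, a file-reading tool can resolve client-supplied paths against a sandbox root and reject anything that escapes it. The base directory here is illustrative:

```python
from pathlib import Path

BASE_DIR = Path("/srv/mcp-files")  # illustrative sandbox root

def resolve_safe(relative_path: str) -> Path:
    """Resolve a client-supplied path, rejecting traversal outside BASE_DIR."""
    candidate = (BASE_DIR / relative_path).resolve()
    if not candidate.is_relative_to(BASE_DIR.resolve()):
        raise ValueError(f"Path escapes the sandbox: {relative_path}")
    return candidate

print(resolve_safe("docs/readme.txt"))
```

The same habit applies to SQL parameters (always bind them, never interpolate) and to any identifier the model supplies.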
Designing Error Handling
When errors occur, MCP servers should return clear, actionable messages so the model can recover appropriately:
```python
import json

@mcp.tool()
def query_database(sql: str) -> str:
    """Executes a read-only SQL query. Only SELECT statements are permitted."""
    # Reject write operations with a simple prefix check; the read-only
    # connection below is the real safety net
    normalized = sql.strip().upper()
    if not normalized.startswith("SELECT"):
        return "Error: Only SELECT statements are allowed. INSERT/UPDATE/DELETE are not permitted."
    conn = get_readonly_connection()
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        rows = cursor.fetchall()
        columns = [desc[0] for desc in cursor.description]
        result = [dict(zip(columns, row)) for row in rows]
        return json.dumps(result, ensure_ascii=False, indent=2)
    except Exception as e:
        return f"Query execution error: {str(e)}. Please check your SQL syntax."
    finally:
        conn.close()
```

Including both what went wrong and how to fix it allows the AI model to autonomously retry or consider alternative approaches.
Performance and Scalability Considerations
As the number of tools and request volume grows, performance deserves attention. Here are some lessons from real-world use:
Optimize the number of tools per server. This is often overlooked. Packing dozens of tools into a single MCP server can confuse the model when selecting between them, degrading response quality. In my experience, keeping it to 5–10 tools per server and splitting by domain is effective. Organizing servers like "database operations," "project management," and "document search" clarifies each server's responsibility and makes them easier to maintain.
Set timeouts thoughtfully. Tools that call external APIs may cause the MCP client to time out if the API is slow. For long-running operations, consider building in progress notifications or designing them as async processes where results can be fetched later.
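One way to sketch this is a small wrapper that races the external call against a deadline, returning an actionable error instead of hanging; `slow_api_call` is a stand-in for a real API client:

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout_seconds, *args):
    """Run fn(*args) with a hard deadline so one hung API call
    can't stall the whole MCP request."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args)
    try:
        return future.result(timeout=timeout_seconds)
    except concurrent.futures.TimeoutError:
        return "Error: the external service did not respond in time."
    finally:
        pool.shutdown(wait=False)

def slow_api_call(delay):
    time.sleep(delay)
    return "ok"

print(call_with_timeout(slow_api_call, 0.5, 0.05))  # ok
print(call_with_timeout(slow_api_call, 0.05, 0.5))  # timeout message
```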
Use caching where appropriate. For queries that repeat within a short window, caching results in memory reduces load on external services while improving response speed. Set cache TTLs carefully based on how fresh the data needs to be.
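A minimal in-memory TTL cache might look like this (for production use, a maintained library such as cachetools is a better fit):

```python
import time

class TTLCache:
    """Tiny time-bounded cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # stale: evict and miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

cache = TTLCache(ttl_seconds=60)

def get_weather_cached(city):
    cached = cache.get(city)
    if cached is not None:
        return cached
    result = f"weather for {city}"  # stand-in for a real API call
    cache.set(city, result)
    return result
```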
Closing Thoughts: MCP as Infrastructure for the AI Agent Era
MCP is steadily making its way into engineering practice as a "common language" connecting AI agents with external tools. The shift from accumulating individual API integrations to unified connectivity through a standard protocol has significant implications — not just for development efficiency, but for maintainability and security as well.
As this article has shown, implementing an MCP server isn't particularly difficult. Whether in TypeScript or Python, you can expose a functional tool in dozens of lines of code. The range of applications is wide: searching internal knowledge, providing safe database access, automating project management workflows.
The best place to start is the official documentation, followed by building one small MCP server of your own. Working through an implementation hands-on is the fastest way to internalize the protocol's design philosophy. Begin by getting something simple — like a weather tool — working end-to-end, then gradually build toward tools that serve real business needs. Through that process, you'll develop a genuine sense of what it means for an AI agent to be a truly useful partner.
MCP is still maturing, but that's exactly why now is a good time to engage with it. When the spec stabilizes and the ecosystem matures further, teams that already have practical experience will have a meaningful head start in AI adoption. For me personally, MCP is one of those technologies I wish I'd explored earlier. I hope this article gives you the push to take that first step.
At aduce, we support system development and DX initiatives powered by AI technology. If you're considering integrating AI agents through MCP, feel free to reach out to aduce — we'd love to help.
