Building MCP Servers with Python: The Complete Guide for 2026
Introduction
In late 2024, Anthropic introduced the Model Context Protocol (MCP) — an open standard for connecting AI assistants to external tools, data sources, and services. By 2026, MCP has become the dominant protocol for AI tool integration, supported natively by Claude, Cursor, Windsurf, and dozens of other AI-powered tools.
If you've ever wanted to give an AI assistant the ability to query your database, call your internal APIs, or interact with your file system — MCP is how you do it. And with the official Python SDK, building an MCP server is surprisingly straightforward.
In this guide, you'll build production-ready MCP servers in Python, from a simple "hello world" to a full database query tool backed by PostgreSQL.
🔗 New to Python project setup? Check out uv: The Fast Python Package Manager Replacing pip in 2026 for a modern development setup.
What Is the Model Context Protocol?
MCP defines a standard way for AI assistants (clients/hosts) to communicate with servers that expose capabilities. Think of it as a USB-C standard for AI tool integration: instead of every AI tool implementing a custom integration for every data source, they all speak MCP.
Before MCP, integrating an AI assistant with your tools meant:
- Building custom API adapters for each AI platform
- Reimplementing authentication, error handling, and schema definitions for each integration
- No standard for how tools should be described or called
With MCP, you build one server and any MCP-compatible AI assistant can use it.
What Can MCP Servers Expose?
MCP servers can expose three types of capabilities:
- Tools — Functions the AI can call (like "search_database", "send_email", "run_query")
- Resources — Data the AI can read (like files, database records, API responses)
- Prompts — Pre-built prompt templates the AI can use
MCP Architecture: Servers, Clients, Hosts, and the Transport Layer
The Three Roles
```
┌─────────────────┐      MCP Protocol       ┌─────────────────┐
│    MCP Host     │ ◄─────────────────────► │   MCP Server    │
│ (Claude, etc.)  │                         │  (Your Python   │
│                 │                         │   Application)  │
└─────────────────┘                         └─────────────────┘
```
- Host: The AI application (Claude Desktop, Cursor, your own app)
- Client: The MCP client embedded in the host that speaks the protocol
- Server: Your Python application that exposes tools, resources, and prompts
Transport Mechanisms
MCP supports multiple transport mechanisms:
| Transport | Use case |
|---|---|
| Stdio | Local MCP servers run as child processes |
| SSE (Server-Sent Events) | Remote servers over HTTP |
| Streamable HTTP | Newer, replaces SSE for remote servers |
For local development and tools that run on the user's machine, stdio is the standard. For remote servers accessible over the internet, SSE or HTTP is used.
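Whichever transport you pick, the messages themselves are JSON-RPC 2.0. As an illustration, here is a hand-written sketch of what a `tools/call` request looks like on the wire (field names follow the MCP spec; the concrete values are made up for this example, not captured traffic):

```python
import json

# A hand-written sketch of an MCP "tools/call" JSON-RPC request.
# Over stdio, each message travels as a JSON object between client and server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "greet",
        "arguments": {"name": "Ada"},
    },
}

wire = json.dumps(request)
print(wire)
```

The SDK builds and parses these envelopes for you; you never construct them by hand in a real server.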
Installing the MCP Python SDK
```bash
# Using uv (recommended)
uv add mcp

# Using pip
pip install mcp
```
The MCP Python SDK provides:
- `mcp.server.Server` — the main server class
- `mcp.server.stdio` — stdio transport
- `mcp.server.sse` — SSE transport
- `mcp.types` — all MCP type definitions
Building Your First MCP Server: Tools, Resources, and Prompts Explained
Let's build a minimal MCP server to understand the structure:
```python
# server.py
import asyncio

from mcp.server import Server
from mcp.server.stdio import stdio_server
import mcp.types as types

# Initialize the server
app = Server("my-first-mcp-server")


@app.list_tools()
async def list_tools() -> list[types.Tool]:
    """Declare the tools this server exposes."""
    return [
        types.Tool(
            name="greet",
            description="Greet a person by name",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "The name of the person to greet",
                    }
                },
                "required": ["name"],
            },
        )
    ]


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    """Handle tool calls."""
    if name == "greet":
        person_name = arguments.get("name", "World")
        return [types.TextContent(type="text", text=f"Hello, {person_name}!")]
    raise ValueError(f"Unknown tool: {name}")


async def main():
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())


if __name__ == "__main__":
    asyncio.run(main())
```
Run it:
```bash
python server.py
```
This starts the server on stdio, ready to be connected to an MCP client.
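Before connecting a client, you can sanity-check the handler logic in isolation. Here the greet logic is pulled out into a plain function (a sketch that mirrors the `greet` branch of `call_tool` above, without the SDK wrapper types):

```python
def greet(arguments: dict) -> str:
    # Mirrors the "greet" branch of call_tool above, minus the SDK wrapper.
    person_name = arguments.get("name", "World")
    return f"Hello, {person_name}!"


print(greet({"name": "Ada"}))  # Hello, Ada!
print(greet({}))               # Hello, World!
```

Keeping tool logic in plain functions like this, with the MCP handler as a thin shim, also makes unit testing much easier later on.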
Implementing Tools: Function Calling from AI Assistants
Tools are the most commonly used MCP capability. They allow an AI assistant to call functions in your server with structured inputs and receive structured outputs.
A Practical Tool Example: Web Search
```python
import httpx

from mcp.server import Server
import mcp.types as types

app = Server("search-server")


@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="web_search",
            description="Search the web for information on a given topic",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query",
                    },
                    "max_results": {
                        "type": "integer",
                        "description": "Maximum number of results to return",
                        "default": 5,
                    },
                },
                "required": ["query"],
            },
        ),
        types.Tool(
            name="fetch_url",
            description="Fetch the content of a URL",
            inputSchema={
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "The URL to fetch",
                    }
                },
                "required": ["url"],
            },
        ),
    ]


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "web_search":
        query = arguments["query"]
        max_results = arguments.get("max_results", 5)
        # Call your search provider here
        results = await search_web(query, max_results)
        return [types.TextContent(type="text", text=format_results(results))]

    elif name == "fetch_url":
        url = arguments["url"]
        async with httpx.AsyncClient() as client:
            response = await client.get(url, follow_redirects=True, timeout=10)
        return [types.TextContent(type="text", text=response.text[:10000])]

    raise ValueError(f"Unknown tool: {name}")
```
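The handler above leans on two helpers, `search_web` and `format_results`, that are left to your search provider. As one possible shape for the second helper, here is a minimal `format_results` sketch, assuming each result is a dict with `title`, `url`, and `snippet` keys (those key names are this example's own assumption, not part of MCP):

```python
def format_results(results: list[dict]) -> str:
    # Render search results as a compact numbered list the model can read.
    lines = []
    for i, r in enumerate(results, start=1):
        lines.append(f"{i}. {r['title']}\n   {r['url']}\n   {r['snippet']}")
    return "\n".join(lines)


sample = [
    {"title": "MCP docs", "url": "https://example.com", "snippet": "Protocol overview."},
]
print(format_results(sample))
```

Plain, line-oriented text like this tends to be easier for the model to quote from than deeply nested JSON.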
Tool Input Validation
Always validate tool inputs — the AI might pass malformed data:
```python
from pydantic import BaseModel, ValidationError


class SearchInput(BaseModel):
    query: str
    max_results: int = 5


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "web_search":
        try:
            params = SearchInput(**arguments)
        except ValidationError as e:
            return [types.TextContent(type="text", text=f"Invalid input: {e}")]
        results = await search_web(params.query, params.max_results)
        return [types.TextContent(type="text", text=format_results(results))]
```
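If you'd rather not pull in a dependency for a two-field schema, the same checks can be done by hand with plain Python (a sketch equivalent to the `SearchInput` model above; the function name is this example's own):

```python
def parse_search_input(arguments: dict) -> tuple[str, int]:
    # Manual equivalent of the SearchInput Pydantic model above.
    query = arguments.get("query")
    if not isinstance(query, str) or not query.strip():
        raise ValueError("'query' must be a non-empty string")
    max_results = arguments.get("max_results", 5)
    if not isinstance(max_results, int) or max_results < 1:
        raise ValueError("'max_results' must be a positive integer")
    return query, max_results


print(parse_search_input({"query": "mcp servers"}))  # ('mcp servers', 5)
```

Pydantic scales better once you have many tools, since the model doubles as documentation of the expected schema.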
Implementing Resources: Exposing Data and Files to the Context
Resources let the AI assistant read data from your server. Unlike tools (which perform actions), resources are for reading structured or unstructured data.
```python
from mcp.server import Server
import mcp.types as types
import json

app = Server("data-server")


@app.list_resources()
async def list_resources() -> list[types.Resource]:
    """List available data resources."""
    return [
        types.Resource(
            uri="data://products/catalog",
            name="Product Catalog",
            description="The full product catalog with pricing and inventory",
            mimeType="application/json",
        ),
        types.Resource(
            uri="data://docs/api-reference",
            name="API Reference",
            description="Internal API documentation",
            mimeType="text/markdown",
        ),
    ]


@app.read_resource()
async def read_resource(uri: str) -> str:
    uri = str(uri)  # the SDK may pass a URL object; normalize to a string
    if uri == "data://products/catalog":
        products = await fetch_products_from_db()
        return json.dumps(products, indent=2)
    elif uri == "data://docs/api-reference":
        with open("docs/api-reference.md") as f:
            return f.read()
    raise ValueError(f"Unknown resource URI: {uri}")
```
Dynamic Resources
Resources don't have to be static — they can be parameterized:
```python
@app.list_resources()
async def list_resources() -> list[types.Resource]:
    # List all available user records as resources
    users = await fetch_user_list()
    return [
        types.Resource(
            uri=f"users://{user['id']}",
            name=f"User: {user['name']}",
            description=f"Profile for {user['name']}",
            mimeType="application/json",
        )
        for user in users
    ]


@app.read_resource()
async def read_resource(uri: str) -> str:
    uri = str(uri)
    if uri.startswith("users://"):
        user_id = int(uri.split("://")[1])
        user = await fetch_user_by_id(user_id)
        return json.dumps(user, indent=2)
    raise ValueError(f"Unknown resource URI: {uri}")
```
Implementing Prompts: Templated Instructions for AI Assistants
Prompts let you define reusable prompt templates that an AI host can present to users:
```python
@app.list_prompts()
async def list_prompts() -> list[types.Prompt]:
    return [
        types.Prompt(
            name="analyze_code",
            description="Analyze code for bugs and suggest improvements",
            arguments=[
                types.PromptArgument(
                    name="language",
                    description="Programming language",
                    required=True,
                ),
                types.PromptArgument(
                    name="focus",
                    description="Area to focus on (performance, security, readability)",
                    required=False,
                ),
            ],
        )
    ]


@app.get_prompt()
async def get_prompt(name: str, arguments: dict | None) -> types.GetPromptResult:
    if name == "analyze_code":
        language = (arguments or {}).get("language", "Python")
        focus = (arguments or {}).get("focus", "all aspects")
        return types.GetPromptResult(
            description=f"Analyze {language} code",
            messages=[
                types.PromptMessage(
                    role="user",
                    content=types.TextContent(
                        type="text",
                        text=f"Please analyze the following {language} code, focusing on {focus}. "
                        f"Identify bugs, suggest improvements, and explain your reasoning:\n\n"
                        f"[Paste your code here]",
                    ),
                )
            ],
        )
    raise ValueError(f"Unknown prompt: {name}")
```
Connecting Your Server to Claude Desktop and Cursor
Claude Desktop
Add your server to Claude Desktop's configuration:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "my-server": {
      "command": "uv",
      "args": ["run", "python", "/path/to/your/server.py"],
      "env": {
        "DATABASE_URL": "postgresql://localhost/mydb"
      }
    }
  }
}
```
Cursor
In Cursor settings, navigate to MCP and add:
```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/path/to/your/server.py"]
    }
  }
}
```
After adding the configuration and restarting the AI host, your tools will be available in the AI's tool list.
Building a Real-World MCP Server: A PostgreSQL Database Query Tool
Let's build something practical: an MCP server that allows an AI assistant to safely query a PostgreSQL database. (The same server can later be exposed over HTTP; see the remote deployment section.)
```python
# database_mcp_server.py
import asyncio
import json

import asyncpg

from mcp.server import Server
from mcp.server.stdio import stdio_server
import mcp.types as types

app = Server("database-query-server")

# Database connection pool
db_pool: asyncpg.Pool | None = None


async def get_db() -> asyncpg.Pool:
    global db_pool
    if db_pool is None:
        db_pool = await asyncpg.create_pool(
            dsn="postgresql://user:password@localhost/mydb",
            min_size=2,
            max_size=10,
        )
    return db_pool


# Allowlist of safe, read-only tables
ALLOWED_TABLES = {"products", "categories", "users", "orders"}


def is_safe_query(sql: str) -> bool:
    """Only allow SELECT statements on allowed tables."""
    sql_upper = sql.strip().upper()
    if not sql_upper.startswith("SELECT"):
        return False
    # Reject queries with dangerous keywords
    dangerous = ["DROP", "DELETE", "UPDATE", "INSERT", "ALTER", "TRUNCATE", "EXEC"]
    return not any(kw in sql_upper for kw in dangerous)


@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="query_database",
            description=(
                "Execute a read-only SQL SELECT query against the database. "
                "Only SELECT statements are allowed."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "sql": {
                        "type": "string",
                        "description": "A SQL SELECT statement to execute",
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Maximum rows to return (default: 100, max: 1000)",
                        "default": 100,
                    },
                },
                "required": ["sql"],
            },
        ),
        types.Tool(
            name="list_tables",
            description="List all available database tables with their columns",
            inputSchema={
                "type": "object",
                "properties": {},
            },
        ),
        types.Tool(
            name="describe_table",
            description="Get the schema of a specific database table",
            inputSchema={
                "type": "object",
                "properties": {
                    "table_name": {
                        "type": "string",
                        "description": "Name of the table to describe",
                    }
                },
                "required": ["table_name"],
            },
        ),
    ]


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    pool = await get_db()

    if name == "query_database":
        sql = arguments.get("sql", "")
        limit = min(arguments.get("limit", 100), 1000)

        if not is_safe_query(sql):
            return [types.TextContent(
                type="text",
                text="Error: Only SELECT statements are allowed."
            )]

        # Inject LIMIT if not present
        if "LIMIT" not in sql.upper():
            sql = f"{sql.rstrip(';')} LIMIT {limit}"

        try:
            async with pool.acquire() as conn:
                rows = await conn.fetch(sql)
            data = [dict(row) for row in rows]
            return [types.TextContent(
                type="text",
                text=json.dumps(data, indent=2, default=str),
            )]
        except Exception as e:
            return [types.TextContent(type="text", text=f"Query error: {e}")]

    elif name == "list_tables":
        async with pool.acquire() as conn:
            rows = await conn.fetch("""
                SELECT table_name
                FROM information_schema.tables
                WHERE table_schema = 'public'
                ORDER BY table_name
            """)
        tables = [row["table_name"] for row in rows]
        return [types.TextContent(type="text", text="\n".join(tables))]

    elif name == "describe_table":
        table_name = arguments.get("table_name", "")
        if table_name not in ALLOWED_TABLES:
            return [types.TextContent(
                type="text",
                text=f"Access denied: '{table_name}' is not in the allowed tables list."
            )]
        async with pool.acquire() as conn:
            rows = await conn.fetch("""
                SELECT column_name, data_type, is_nullable, column_default
                FROM information_schema.columns
                WHERE table_name = $1 AND table_schema = 'public'
                ORDER BY ordinal_position
            """, table_name)
        schema = [dict(row) for row in rows]
        return [types.TextContent(
            type="text",
            text=json.dumps(schema, indent=2),
        )]

    raise ValueError(f"Unknown tool: {name}")


async def main():
    async with stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            app.create_initialization_options(),
        )
    if db_pool:
        await db_pool.close()


if __name__ == "__main__":
    asyncio.run(main())
```
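One subtlety in `is_safe_query` above: a plain substring check on keywords also rejects harmless queries such as `SELECT * FROM updated_orders`, because `UPDATE` appears inside the table name. A word-boundary version using the standard `re` module avoids that false positive (still a heuristic, not a SQL parser; for real deployments a read-only database role is the stronger guarantee):

```python
import re

DANGEROUS = ["DROP", "DELETE", "UPDATE", "INSERT", "ALTER", "TRUNCATE", "EXEC"]


def is_safe_query(sql: str) -> bool:
    """Allow only SELECT statements; match dangerous keywords on word boundaries."""
    sql_upper = sql.strip().upper()
    if not sql_upper.startswith("SELECT"):
        return False
    # \b ensures "UPDATE" matches only as a whole word, not inside "UPDATED_ORDERS".
    pattern = r"\b(" + "|".join(DANGEROUS) + r")\b"
    return re.search(pattern, sql_upper) is None


print(is_safe_query("SELECT * FROM updated_orders"))  # True
print(is_safe_query("DROP TABLE users"))              # False
print(is_safe_query("SELECT 1; DELETE FROM users"))   # False
```

Note the last case: the naive check already catches stacked statements only because `DELETE` appears somewhere in the string; the word-boundary version keeps that property.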
🔗 For async database patterns: Building a High-Performance Async API with FastAPI and PostgreSQL
Authentication and Security for MCP Servers
Environment Variables for Credentials
Never hardcode credentials in your MCP server. Use environment variables:
```python
import os
from dotenv import load_dotenv

load_dotenv()

DATABASE_URL = os.environ["DATABASE_URL"]
API_KEY = os.environ.get("INTERNAL_API_KEY")
```
In Claude Desktop config:
```json
{
  "mcpServers": {
    "my-server": {
      "command": "uv",
      "args": ["run", "python", "server.py"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@host/db",
        "INTERNAL_API_KEY": "sk-..."
      }
    }
  }
}
```
Principle of Least Privilege
Your MCP server should only have the permissions it needs:
```python
import os
from asyncio import Semaphore

# Use a read-only database user for query tools
DATABASE_URL = os.environ["DATABASE_READ_URL"]  # read-only credentials

# Rate limit expensive operations
query_semaphore = Semaphore(5)  # max 5 concurrent queries


@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "query_database":
        async with query_semaphore:
            return await execute_query(arguments)  # your query handler
```
Input Sanitization
Always sanitize inputs from the AI — treat them as untrusted, just like user input in a web API:
```python
import re


def sanitize_table_name(name: str) -> str:
    """Only allow alphanumeric and underscore characters in table names."""
    if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', name):
        raise ValueError(f"Invalid table name: {name}")
    return name
```
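When you do have to interpolate an identifier into SQL (placeholders like `$1` only work for values, not table or column names), pair the allowlist check with standard SQL identifier quoting: double any embedded quote character and wrap the name in double quotes. A generic sketch, not tied to any particular driver:

```python
def quote_identifier(name: str) -> str:
    # Standard SQL identifier quoting: embedded " becomes "", whole name wrapped in "".
    return '"' + name.replace('"', '""') + '"'


table = "orders"
sql = f"SELECT * FROM {quote_identifier(table)} LIMIT 10"
print(sql)  # SELECT * FROM "orders" LIMIT 10
```

Quoting is a second line of defense; the allowlist check should still run first so unexpected names are rejected outright.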
🔗 For secure authentication patterns: JWT Authentication in Django and FastAPI: A Secure Implementation Guide
Testing MCP Servers with the MCP Inspector
Anthropic provides the MCP Inspector — a browser-based tool for testing MCP servers without connecting to a full AI host.
Install and Run
```bash
npx @modelcontextprotocol/inspector uv run python server.py
```
This opens a web UI at http://localhost:5173 where you can:
- View all tools, resources, and prompts your server exposes
- Call tools with custom arguments and see the output
- Read resources
- Test prompts
Automated Testing
Write tests for your MCP server by driving it through the SDK's own client classes. `stdio_client` takes a `StdioServerParameters` object and launches the server subprocess for you:
```python
# tests/test_server.py
import pytest

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Requires pytest-asyncio (e.g. asyncio_mode = "auto" in your pytest config).


@pytest.fixture
async def mcp_client():
    """Launch the MCP server as a subprocess and connect a client session."""
    server_params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            yield session


@pytest.mark.asyncio
async def test_list_tools(mcp_client):
    tools = await mcp_client.list_tools()
    tool_names = [t.name for t in tools.tools]
    assert "query_database" in tool_names
    assert "list_tables" in tool_names


@pytest.mark.asyncio
async def test_query_database_blocks_unsafe_sql(mcp_client):
    result = await mcp_client.call_tool(
        "query_database",
        {"sql": "DROP TABLE users"},
    )
    assert "Error" in result.content[0].text
```
Deploying MCP Servers
Local Deployment (Stdio)
For tools running on the user's machine, stdio is the standard:
```json
{
  "mcpServers": {
    "my-server": {
      "command": "uv",
      "args": ["--directory", "/path/to/project", "run", "server.py"]
    }
  }
}
```
Remote Deployment (SSE/HTTP)
For servers that need to be accessible over the internet:
```python
# remote_server.py
from starlette.applications import Starlette
from starlette.routing import Mount, Route

from mcp.server.sse import SseServerTransport

from server import app  # the mcp.server.Server instance defined earlier

transport = SseServerTransport("/messages/")


async def handle_sse(request):
    # request._send is private Starlette API, but this is the pattern
    # the MCP SDK's own SSE examples use.
    async with transport.connect_sse(
        request.scope, request.receive, request._send
    ) as streams:
        await app.run(
            streams[0], streams[1], app.create_initialization_options()
        )


starlette_app = Starlette(
    routes=[
        Route("/sse", endpoint=handle_sse),
        Mount("/messages/", app=transport.handle_post_message),
    ]
)

# Run with: uvicorn remote_server:starlette_app
```
🔗 For deployment on cloud platforms, see the upcoming Deploying Python APIs to Production: Railway, Fly.io, and Render
🐳 For Docker setup: Dockerizing Django and FastAPI Applications
Conclusion: The Future of AI Tool Integration
MCP is quickly becoming the standard way to extend AI assistants with real-world capabilities. As of 2026, the protocol has broad adoption across major AI tools, and the Python SDK makes server development straightforward.
The key principles for building good MCP servers:
- Security first: Validate all inputs, use least-privilege credentials, sanitize SQL and commands
- Clear tool descriptions: The AI uses your descriptions to decide which tool to call — write them carefully
- Structured outputs: Return well-formatted data the AI can reason about
- Handle errors gracefully: Return informative error messages rather than raising exceptions
- Test with the MCP Inspector: Catch issues before connecting to a live AI host
Whether you're building a database query tool, a code analysis server, or an integration with your internal systems, MCP gives you a clean, standardized way to make those capabilities available to any MCP-compatible AI assistant.
🔗 Related: Building an AI-Powered Code Generator with OpenAI API | Running Lightweight Open-Source LLMs Locally
📌 Up next: Building AI Agents with LangChain and FastAPI: A Complete Guide