December 2025 Update: MCP is now supported by Claude, Cursor, Windsurf, and many other AI tools. This module includes production-ready server examples.

What is MCP?

Model Context Protocol (MCP) is an open standard for connecting AI models to external data sources and tools. Developed by Anthropic, it provides a unified way for LLMs to interact with:
  • Databases (PostgreSQL, MongoDB, SQLite)
  • APIs (GitHub, Slack, Notion)
  • File systems (local, S3, Google Drive)
  • Development tools (Git, Docker, Kubernetes)
  • Any external service
Think of MCP as USB for AI — a standard interface that lets any AI model connect to any tool, and any tool connect to any AI model.

Why MCP Matters in 2025

Before MCP                      With MCP
Custom integration per tool     Standard protocol for all
Tight coupling to one model     Model-agnostic
Rebuild for each project        Reusable servers
Limited community sharing       Open ecosystem

Who’s Using MCP?

  • Claude Desktop - Native MCP support
  • Cursor - IDE with MCP integrations
  • Windsurf - AI coding assistant
  • Continue - Open-source AI coding
  • Zed - Next-gen code editor

Architecture

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   AI Model      │     │   MCP Client    │     │   MCP Server    │
│   (Claude,      │◄───►│   (in your      │◄───►│   (provides     │
│    GPT, etc)    │     │    app)         │     │    tools)       │
└─────────────────┘     └─────────────────┘     └────────┬────────┘
                                                         │
                                                         ▼
                                                ┌─────────────────┐
                                                │  External       │
                                                │  Resources      │
                                                │  (DB, API, etc) │
                                                └─────────────────┘
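
Under the hood, the client and server exchange JSON-RPC 2.0 messages over a transport such as stdio or HTTP. The sketch below shows, as Python dicts, roughly what a tool-listing exchange looks like on the wire for the get_weather tool built later in this module; the method names follow the MCP spec, while the payload details are simplified for illustration.

# Illustrative only: the JSON-RPC 2.0 messages behind a tools/list exchange,
# shown as Python dicts. Method names follow the MCP spec; payloads are trimmed.
import json

# Client -> Server: ask which tools are available
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Server -> Client: the advertised tools with their JSON Schema inputs
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Get current weather for a location",
                "inputSchema": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            }
        ]
    },
}

# Over the stdio transport, each message is serialized as a line of JSON
print(json.dumps(list_tools_request))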

Building an MCP Server

Basic Server Structure

# server.py
from mcp.server import Server
from mcp.types import Tool, TextContent
import mcp.server.stdio

# Create server
server = Server("my-tools-server")

# Define tools
@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="get_weather",
            description="Get current weather for a location",
            inputSchema={
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["location"]
            }
        ),
        Tool(
            name="search_database",
            description="Search the product database",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "limit": {"type": "integer", "default": 10}
                },
                "required": ["query"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "get_weather":
        location = arguments["location"]
        # In production, call real weather API
        return [TextContent(
            type="text",
            text=f"Weather in {location}: 22°C, Sunny"
        )]
    
    elif name == "search_database":
        query = arguments["query"]
        limit = arguments.get("limit", 10)
        # In production, query real database
        return [TextContent(
            type="text",
            text=f"Found {limit} results for '{query}'"
        )]
    
    raise ValueError(f"Unknown tool: {name}")

# Run server
async def main():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            server.create_initialization_options()
        )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
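
The low-level Server API above gives full control over the protocol. The official Python SDK also ships a higher-level FastMCP interface that derives tool schemas from type hints and docstrings. A minimal sketch of the same weather tool, assuming a recent `mcp` package where FastMCP is available:

# fast_server.py: the same weather tool on the higher-level FastMCP API.
# Sketch only; assumes a recent `mcp` Python SDK that includes FastMCP.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-tools-server")

@mcp.tool()
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    # In production, call a real weather API here
    return f"Weather in {location}: 22°C, Sunny"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default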

Database MCP Server

# db_server.py
from mcp.server import Server
from mcp.types import Tool, TextContent, Resource
import mcp.server.stdio
import psycopg2
import json
import os

server = Server("postgres-server")

# Database connection (DATABASE_URL comes from the MCP client config below)
conn = psycopg2.connect(os.environ.get("DATABASE_URL", "postgresql://user:pass@localhost/mydb"))

@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="query_database",
            description="Execute a read-only SQL query",
            inputSchema={
                "type": "object",
                "properties": {
                    "sql": {
                        "type": "string",
                        "description": "SQL SELECT query"
                    }
                },
                "required": ["sql"]
            }
        ),
        Tool(
            name="list_tables",
            description="List all tables in the database",
            inputSchema={"type": "object", "properties": {}}
        ),
        Tool(
            name="describe_table",
            description="Get schema of a table",
            inputSchema={
                "type": "object",
                "properties": {
                    "table_name": {"type": "string"}
                },
                "required": ["table_name"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "query_database":
        sql = arguments["sql"]
        
        # Security: naive guard that only allows statements starting with SELECT.
        # Prefer a read-only database role in production.
        if not sql.strip().upper().startswith("SELECT"):
            return [TextContent(type="text", text="Error: Only SELECT queries allowed")]

        try:
            with conn.cursor() as cur:
                cur.execute(sql)
                columns = [desc[0] for desc in cur.description]
                rows = cur.fetchall()
            result = [dict(zip(columns, row)) for row in rows]
            return [TextContent(type="text", text=json.dumps(result, indent=2, default=str))]
        except psycopg2.Error as e:
            conn.rollback()  # clear the aborted transaction so later queries still work
            return [TextContent(type="text", text=f"Query error: {e}")]
    
    elif name == "list_tables":
        with conn.cursor() as cur:
            cur.execute("""
                SELECT table_name FROM information_schema.tables
                WHERE table_schema = 'public'
            """)
            tables = [row[0] for row in cur.fetchall()]
            return [TextContent(type="text", text=json.dumps(tables))]
    
    elif name == "describe_table":
        table = arguments["table_name"]
        with conn.cursor() as cur:
            cur.execute("""
                SELECT column_name, data_type, is_nullable
                FROM information_schema.columns
                WHERE table_name = %s
            """, (table,))
            
            columns = [
                {"name": row[0], "type": row[1], "nullable": row[2]}
                for row in cur.fetchall()
            ]
            return [TextContent(type="text", text=json.dumps(columns, indent=2))]

    raise ValueError(f"Unknown tool: {name}")

# Resources: Expose data as readable resources
@server.list_resources()
async def list_resources():
    return [
        Resource(
            uri="db://tables",
            name="Database Tables",
            description="List of all tables"
        )
    ]

@server.read_resource()
async def read_resource(uri: str):
    if uri == "db://tables":
        with conn.cursor() as cur:
            cur.execute("""
                SELECT table_name FROM information_schema.tables
                WHERE table_schema = 'public'
            """)
            return json.dumps([row[0] for row in cur.fetchall()])

    raise ValueError(f"Unknown resource: {uri}")

# Run this server with the same stdio main() entry point shown in server.py
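
Since this server holds a single long-lived connection, a production version would typically add pooling and force read-only, time-limited sessions. A minimal sketch using psycopg2's built-in connection pool (the pool sizes and timeout value are illustrative):

# Hedged sketch: connection pooling plus a read-only, time-limited session.
import os
from contextlib import contextmanager

import psycopg2
from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(
    minconn=1,
    maxconn=5,
    dsn=os.environ.get("DATABASE_URL", "postgresql://user:pass@localhost/mydb"),
)

@contextmanager
def read_only_cursor(timeout_ms: int = 5000):
    """Borrow a pooled connection, force read-only mode, and cap query time."""
    conn = pool.getconn()
    try:
        conn.set_session(readonly=True, autocommit=True)
        with conn.cursor() as cur:
            cur.execute("SET statement_timeout = %s", (timeout_ms,))
            yield cur
    finally:
        pool.putconn(conn)

# Usage inside a tool handler:
# with read_only_cursor() as cur:
#     cur.execute("SELECT 1")
#     rows = cur.fetchall()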

Using MCP with Claude Desktop

Configure in claude_desktop_config.json:
{
  "mcpServers": {
    "my-database": {
      "command": "python",
      "args": ["path/to/db_server.py"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost/mydb"
      }
    },
    "my-tools": {
      "command": "python", 
      "args": ["path/to/server.py"]
    }
  }
}

Building an MCP Client

# client.py
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import asyncio

async def main():
    # Describe how to launch the MCP server as a subprocess
    server_params = StdioServerParameters(
        command="python",
        args=["server.py"]
    )

    # Connect to the server over stdio
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the protocol handshake
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Call a tool
            result = await session.call_tool(
                "get_weather",
                arguments={"location": "Tokyo"}
            )
            print("Result:", result.content[0].text)

asyncio.run(main())

Integrating MCP with LangChain

from langchain_core.tools import StructuredTool
from mcp import ClientSession

class MCPToolkit:
    def __init__(self, session: ClientSession):
        self.session = session

    async def get_langchain_tools(self) -> list[StructuredTool]:
        """Convert MCP tools to LangChain tools"""
        mcp_tools = await self.session.list_tools()

        langchain_tools = []
        for tool in mcp_tools.tools:
            # Bind session and name as defaults so each closure keeps its own tool
            async def call_mcp(session=self.session, name=tool.name, **kwargs):
                result = await session.call_tool(name, arguments=kwargs)
                return result.content[0].text

            langchain_tools.append(StructuredTool(
                name=tool.name,
                description=tool.description or "",
                coroutine=call_mcp,           # async tools go in `coroutine`, not `func`
                args_schema=tool.inputSchema  # JSON schema; older langchain-core may require a Pydantic model
            ))

        return langchain_tools
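
To tie the pieces together, here is a hedged usage sketch that reuses the server.py from earlier and the StdioServerParameters-based connection shown in the client section; the resulting tools can then be handed to any LangChain agent or tool-calling model.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def demo():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            toolkit = MCPToolkit(session)
            tools = await toolkit.get_langchain_tools()
            print([t.name for t in tools])  # e.g. ['get_weather', 'search_database']

            # Pass `tools` to any LangChain agent or bind them to a
            # model that supports tool calling.

asyncio.run(demo())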

Common MCP Servers

The MCP ecosystem has grown rapidly. Here are the most popular servers:

Filesystem

Read/write files, list directories. npx @modelcontextprotocol/server-filesystem

PostgreSQL

Query databases, list tables. npx @modelcontextprotocol/server-postgres

GitHub

Manage repos, issues, PRs. npx @modelcontextprotocol/server-github

Slack

Send messages, read channels. npx @modelcontextprotocol/server-slack

Google Drive

Read/write documents. npx @modelcontextprotocol/server-gdrive

Puppeteer

Web scraping, automation. npx @modelcontextprotocol/server-puppeteer

Memory

Persistent knowledge graph. npx @modelcontextprotocol/server-memory

Brave Search

Web search integration. npx @modelcontextprotocol/server-brave-search

Quick Install for Claude Desktop

Config file location: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows).

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-server-filesystem", "/path/to/allowed/dir"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}

Best Practices

Security

  • Validate all inputs
  • Use read-only database connections when possible
  • Implement rate limiting (see the sketch at the end of this section)
  • Log all tool calls

Error handling

Return errors as tool output instead of letting exceptions crash the server:

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    try:
        result = await execute_tool(name, arguments)
        return [TextContent(type="text", text=result)]
    except ValueError as e:
        return [TextContent(type="text", text=f"Invalid input: {e}")]
    except Exception as e:
        return [TextContent(type="text", text=f"Error: {e}")]

Resources

  • Use connection pooling for databases
  • Implement timeouts for external calls
  • Clean up resources on shutdown

Documentation

  • Write clear tool descriptions
  • Document expected inputs and outputs
  • Include examples in descriptions
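
A minimal sketch of the rate-limiting and timeout points above, using only the standard library; the concurrency cap and timeout value are illustrative, and execute_tool is the same placeholder used in the error-handling example.

import asyncio

# Illustrative limits: at most 5 concurrent tool calls, 10-second timeout each
_concurrency_limit = asyncio.Semaphore(5)
CALL_TIMEOUT_SECONDS = 10

async def guarded_call(coro):
    """Run a tool coroutine with a concurrency cap and a hard timeout."""
    async with _concurrency_limit:
        return await asyncio.wait_for(coro, timeout=CALL_TIMEOUT_SECONDS)

# Usage inside call_tool():
# result = await guarded_call(execute_tool(name, arguments))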

MCP vs Function Calling

Aspect          MCP                       OpenAI Function Calling
Standard        Open protocol             Proprietary
Reusability     High (server-based)       Per-application
Multi-model     Yes                       OpenAI only
Complexity      Higher initial setup      Simpler
Ecosystem       Growing                   Mature

Next Steps

Agentic Architecture

Design patterns for building multi-agent systems