
The OpenAI Agents SDK provides a powerful way to build AI applications with agent capabilities. One of its key features is support for the Model Context Protocol (MCP), which standardizes how applications provide context and tools to Large Language Models (LLMs). This article walks you through connecting the OpenAI Agents SDK to MCP servers, with detailed steps and sample code.
Understanding Model Context Protocol (MCP)
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a "USB-C port" for AI applications - it provides a standardized way to connect AI models to different data sources and tools. Just as USB-C connects your devices to various peripherals, MCP connects AI models to different tools and data sources.
Types of MCP Servers
The MCP specification defines two types of servers based on their transport mechanism:
- Stdio Servers: These run as a subprocess of your application, essentially running "locally".
- SSE Servers: These run remotely, and you connect to them via a URL using HTTP over Server-Sent Events (SSE).
The OpenAI Agents SDK provides a corresponding class for each server type (a brief construction sketch follows; full, runnable examples appear later in this article):
- `MCPServerStdio` for local stdio servers
- `MCPServerSse` for remote SSE servers
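Both classes take a `name` and a `params` dictionary and are typically used as async context managers. As a quick orientation, here's a minimal construction sketch (the directory path and URL are placeholders):

```python
from agents.mcp import MCPServerSse, MCPServerStdio

# Local stdio server: launched as a subprocess of your application
stdio_server = MCPServerStdio(
    name="Filesystem Server",
    params={"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "./sample_data"]},
)

# Remote SSE server: reached over HTTP at a URL (placeholder shown here)
sse_server = MCPServerSse(
    name="Weather Service",
    params={"url": "https://example.com/mcp/sse"},
)
```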
Prerequisites
Before you begin, make sure you have:
- Python 3.10 or higher installed
- The OpenAI Agents SDK installed: `pip install openai-agents`
- For local stdio servers, additional tooling such as `npx` (used by JavaScript-based MCP servers)
Connecting to an MCP Stdio Server
Let's start with connecting to a local stdio MCP server. We'll use the filesystem MCP server as an example.
Step 1: Install Required Dependencies
If you plan to use the filesystem MCP server, you'll need Node.js and NPX:
```bash
# Install Node.js (if not already installed)

# For Ubuntu/Debian
sudo apt update
sudo apt install nodejs npm

# For macOS
brew install node

# Verify installation
node --version
npx --version
```
Step 2: Setup Your Project Structure
Create a basic project structure:
```
my_agent_project/
├── sample_data/
│   ├── file1.txt
│   └── file2.md
├── main.py
└── README.md
```
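If you don't already have sample files, here's a small optional sketch that creates the `sample_data` directory and two placeholder files matching the layout above (the file contents are arbitrary and only there to give the agent something to read):

```python
import os

# Create the sample_data directory and placeholder files for the demo.
os.makedirs("my_agent_project/sample_data", exist_ok=True)

with open("my_agent_project/sample_data/file1.txt", "w") as f:
    f.write("This is a plain-text sample file for the filesystem MCP server demo.\n")

with open("my_agent_project/sample_data/file2.md", "w") as f:
    f.write("# Sample Markdown\n\nThis markdown file gives the agent something to summarize.\n")
```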
Step 3: Connect to an MCP Stdio Server
Here's a complete example of connecting to the filesystem MCP server using `MCPServerStdio`:
```python
import asyncio
import os
import shutil

from agents import Agent, Runner, gen_trace_id, trace
from agents.mcp import MCPServerStdio


async def run_agent_with_mcp():
    # Get the directory path for sample data
    current_dir = os.path.dirname(os.path.abspath(__file__))
    sample_data_dir = os.path.join(current_dir, "sample_data")

    # Create and configure the MCP server
    async with MCPServerStdio(
        name="Filesystem Server",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", sample_data_dir],
        },
    ) as mcp_server:
        # Create an agent that uses the MCP server
        agent = Agent(
            name="FileAssistant",
            instructions="Use the filesystem tools to read and analyze files in the sample_data directory.",
            mcp_servers=[mcp_server],
        )

        # Run the agent with a user query
        message = "List all files in the directory and summarize their contents."
        print(f"Running query: {message}\n")

        # Generate a trace ID for debugging
        trace_id = gen_trace_id()
        with trace(workflow_name="MCP Filesystem Example", trace_id=trace_id):
            print(f"View trace: https://platform.openai.com/traces/{trace_id}\n")
            result = await Runner.run(starting_agent=agent, input=message)
            print(result.final_output)


if __name__ == "__main__":
    # Check that npx is installed before launching the stdio server
    if not shutil.which("npx"):
        raise RuntimeError("npx is not installed. Please install it with `npm install -g npx`.")

    asyncio.run(run_agent_with_mcp())
```
In this example:
- We create an `MCPServerStdio` instance that runs the filesystem MCP server as a subprocess.
- We pass this server to the `Agent` constructor via the `mcp_servers` parameter.
- When the agent runs, it automatically calls `list_tools()` on the MCP server to make the LLM aware of the available tools.
- If the LLM decides to use any of the MCP tools, the SDK calls `call_tool()` on the server.
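If you want to confirm what the server advertises before handing it to an agent, you can call `list_tools()` on the server object yourself. This is a small debugging sketch; it assumes you're still inside the `async with MCPServerStdio(...) as mcp_server:` block from the example above:

```python
# Debugging sketch: inspect the tools the MCP server advertises.
# Assumes `mcp_server` is the connected server from the example above.
tools = await mcp_server.list_tools()
print("Available tools:", [tool.name for tool in tools])
```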
Connecting to an MCP SSE Server
Now let's look at how to connect to a remote MCP server using SSE:
Step 1: Understanding SSE MCP Servers
SSE (Server-Sent Events) MCP servers run remotely and expose their functionality via HTTP endpoints. Unlike stdio servers, they don't run as subprocesses of your application.
Step 2: Connect to an MCP SSE Server
Here's sample code for connecting to an MCP SSE server:
```python
import asyncio

from agents import Agent, Runner, gen_trace_id, trace
from agents.mcp import MCPServerSse
from agents.model_settings import ModelSettings


async def run_agent_with_remote_mcp():
    # Create and configure the SSE MCP server connection
    async with MCPServerSse(
        name="Weather Service",
        params={
            "url": "https://example.com/mcp/sse",
            # Optional authentication parameters
            "headers": {
                "Authorization": "Bearer your_api_key_here"
            },
        },
    ) as mcp_server:
        # Create an agent using the remote MCP server
        agent = Agent(
            name="WeatherAssistant",
            instructions="Use the weather tools to answer questions about current weather conditions.",
            mcp_servers=[mcp_server],
            # Force the agent to use tools when available
            model_settings=ModelSettings(tool_choice="required"),
        )

        # Run the agent with a user query
        message = "What's the weather like in Tokyo today?"
        print(f"Running query: {message}\n")

        trace_id = gen_trace_id()
        with trace(workflow_name="Weather MCP Example", trace_id=trace_id):
            print(f"View trace: https://platform.openai.com/traces/{trace_id}\n")
            result = await Runner.run(starting_agent=agent, input=message)
            print(result.final_output)


if __name__ == "__main__":
    asyncio.run(run_agent_with_remote_mcp())
```
Creating a Simple Local MCP SSE Server
To better understand how MCP works, it helps to sketch a simple server yourself. The example below is a deliberately simplified SSE server that exposes a couple of tools; it illustrates the request/response flow but does not implement the full MCP protocol:
```python
import json
from typing import Any, Dict

from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

# Define the tools that our server will provide
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers together",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number", "description": "First number"},
                    "b": {"type": "number", "description": "Second number"},
                },
                "required": ["a", "b"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"],
            },
        },
    },
]


# Implement the actual tool functionality
async def call_tool(tool_name: str, parameters: Dict[str, Any]) -> Dict[str, Any]:
    if tool_name == "add":
        return {"result": parameters["a"] + parameters["b"]}
    elif tool_name == "get_weather":
        # In a real implementation, you'd call an actual weather API
        weather_data = {
            "Tokyo": {"condition": "Sunny", "temperature": 25},
            "New York": {"condition": "Cloudy", "temperature": 18},
            "London": {"condition": "Rainy", "temperature": 15},
        }
        location = parameters["location"]
        if location in weather_data:
            return {"weather": weather_data[location]}
        return {"error": f"Weather data not available for {location}"}
    return {"error": f"Unknown tool: {tool_name}"}


async def sse_event_generator(request: Request):
    # Read the request body
    body_bytes = await request.body()
    body = json.loads(body_bytes)

    # Handle different operations
    if body["action"] == "list_tools":
        yield f"data: {json.dumps({'tools': TOOLS})}\n\n"
    elif body["action"] == "call_tool":
        tool_name = body["tool_name"]
        parameters = body["parameters"]
        result = await call_tool(tool_name, parameters)
        yield f"data: {json.dumps({'result': result})}\n\n"


@app.post("/sse")
async def sse_endpoint(request: Request):
    return StreamingResponse(
        sse_event_generator(request),
        media_type="text/event-stream",
    )


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```
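To try the toy server, run it (for example with `python server.py`, assuming you saved it as `server.py`) and point the SDK at its `/sse` endpoint. Keep in mind this is only a sketch: the simplified server above does not implement the full MCP handshake, so a production integration should use a proper MCP server implementation. The localhost URL below is an assumption for illustration:

```python
# Sketch: pointing the SDK at a locally running SSE endpoint.
# Assumes the toy server above is listening on port 8000; it does not
# speak the complete MCP protocol, so treat this as illustrative only.
import asyncio

from agents.mcp import MCPServerSse


async def connect_to_local_server():
    async with MCPServerSse(
        name="Local Test Server",
        params={"url": "http://localhost:8000/sse"},
    ) as mcp_server:
        # Use mcp_server with an Agent, as in the earlier examples
        ...


if __name__ == "__main__":
    asyncio.run(connect_to_local_server())
```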
Advanced Features
Caching Tool Lists
To improve performance, you can cache the tool list from MCP servers:
```python
# Create an MCP server with tool caching
async with MCPServerSse(
    name="Weather Service",
    params={"url": "https://example.com/mcp/sse"},
    cache_tools_list=True,  # Enable caching
) as mcp_server:
    # Use the server as before...
    ...
```
When `cache_tools_list=True` is set, the SDK calls `list_tools()` on the MCP server only once and reuses the result for subsequent agent runs. This reduces latency, especially for remote servers.
To invalidate the cache if needed:
```python
mcp_server.invalidate_tools_cache()
```
Tracing MCP Operations
The OpenAI Agents SDK includes built-in tracing capabilities that automatically capture MCP operations:
- Calls to the MCP server to list tools
- MCP-related information on function calls
You can view these traces at https://platform.openai.com/traces/ when you use the `trace` context manager, as shown in the examples above.
Using Multiple MCP Servers
You can connect your agent to multiple MCP servers simultaneously, giving it access to a broader range of tools:
```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerSse, MCPServerStdio


async def run_with_multiple_servers():
    async with MCPServerStdio(
        name="Filesystem",
        params={"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "./data"]},
    ) as fs_server, MCPServerSse(
        name="Weather API",
        params={"url": "https://example.com/weather/mcp/sse"},
    ) as weather_server:
        # Create an agent with both MCP servers
        agent = Agent(
            name="MultiToolAssistant",
            instructions="Use all available tools to help the user.",
            mcp_servers=[fs_server, weather_server],
        )

        # Run the agent
        result = await Runner.run(
            starting_agent=agent,
            input="First check the weather in Tokyo, then read the contents of the report.txt file.",
        )
        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(run_with_multiple_servers())
```
Error Handling and Debugging
When working with MCP servers, you might encounter various issues. Here are some common problems and how to handle them:
Connection Issues
If your MCP server is not responding:
```python
try:
    async with MCPServerSse(
        name="Weather Service",
        params={"url": "https://example.com/mcp/sse"},
    ) as mcp_server:
        # Use the server...
        ...
except Exception as e:
    print(f"Failed to connect to MCP server: {e}")
    # Implement a fallback strategy
```
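If transient network failures are a concern, one possible fallback strategy is a simple retry loop around the connection attempt. This is a sketch only; the retry count and delay are arbitrary choices, not SDK defaults:

```python
import asyncio

from agents.mcp import MCPServerSse


async def connect_with_retries(url: str, attempts: int = 3, delay: float = 2.0):
    """Try to reach the MCP server a few times before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            async with MCPServerSse(name="Weather Service", params={"url": url}) as mcp_server:
                # Connection succeeded; run your agent here
                ...
                return
        except Exception as e:
            print(f"Attempt {attempt} failed: {e}")
            if attempt < attempts:
                await asyncio.sleep(delay)
    print("Could not reach the MCP server; falling back to an agent without MCP tools.")
```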
Tool Execution Errors
When a tool execution fails, handle it gracefully:
```python
try:
    result = await Runner.run(starting_agent=agent, input=user_query)
    print(result.final_output)
except Exception as e:
    print(f"Error during agent execution: {e}")
    # Check the trace logs for detailed error information
```
Conclusion
The OpenAI Agents SDK's support for MCP allows you to extend your agents with a wide range of tools and capabilities. Whether you're using local stdio servers or remote SSE endpoints, the integration is straightforward and powerful.
By connecting to MCP servers, your agents can access file systems, weather APIs, databases, and virtually any external tool or data source that exposes an MCP interface. This flexibility makes the OpenAI Agents SDK a powerful foundation for building sophisticated AI applications.
Remember to leverage features like tool caching to optimize performance, and use the built-in tracing capabilities to debug and monitor your agent's interactions with MCP servers.
Happy coding with OpenAI Agents and MCP!
