Meta Description: Model Context Protocol (MCP) lets AI agents interact with your API. Learn how to expose your API as MCP tools for Claude, GPT, and other LLMs.
Keywords: model context protocol, mcp, ai api integration, llm tools, ai agents, claude api, function calling
AI agents need to interact with external systems. They need to fetch data, create records, and perform actions through APIs.
Model Context Protocol (MCP) standardizes how AI models access external tools and data sources. Instead of writing custom integrations for each AI model, you expose your API through MCP once, and all MCP-compatible models can use it.
Here's how MCP works and how to integrate it with your API.
What Is MCP?
MCP is a protocol that defines how AI models discover and use external tools. It's like OpenAPI for AI agents.
The Problem MCP Solves
Without MCP, each AI model needs custom integration:
Your API ──┬──> Custom Claude integration
           ├──> Custom GPT integration
           ├──> Custom Gemini integration
           └──> Custom local model integration
You write the same integration logic multiple times.
With MCP, you expose tools once:
Your API ──> MCP Server ──┬──> Claude
                          ├──> GPT
                          ├──> Gemini
                          └──> Any MCP client
AI models discover and use your tools through the standard MCP protocol.
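On the wire, MCP is JSON-RPC 2.0. A tool-discovery exchange looks roughly like this (abbreviated sketch; field names follow the MCP specification, and the comments are annotations, not valid JSON):

```json
// Client -> server: ask which tools are available
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Server -> client: the tool catalog
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_pet",
        "description": "Retrieve information about a pet by ID",
        "inputSchema": { "type": "object", "properties": { "petId": { "type": "string" } } }
      }
    ]
  }
}
```

The client then invokes a tool with a `tools/call` request naming the tool and its arguments.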
MCP Architecture
┌─────────────────┐
│    AI Model     │
│  (Claude, GPT)  │
└────────┬────────┘
         │ MCP Protocol
         │
┌────────▼────────┐
│   MCP Server    │
│   (Your API)    │
└────────┬────────┘
         │
┌────────▼────────┐
│   Backend API   │
│   (REST, etc.)  │
└─────────────────┘
The MCP server exposes your API as tools. AI models call these tools to perform actions.
MCP Tool Definition
Tools are defined with JSON Schema:
{
  "name": "get_pet",
  "description": "Retrieve information about a pet by ID",
  "inputSchema": {
    "type": "object",
    "properties": {
      "petId": {
        "type": "string",
        "description": "The unique identifier of the pet"
      }
    },
    "required": ["petId"]
  }
}
This tells AI models:

- What the tool does
- What parameters it accepts
- Which parameters are required
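Because inputSchema is plain JSON Schema, a server can validate arguments before touching the backend and return a precise error the model can act on. Here's a minimal sketch of such a check (`validateArgs` is a hypothetical helper; production servers typically use a full JSON Schema validator like Ajv):

```typescript
// Minimal sketch: check tool arguments against an inputSchema's
// "required" list and property types before calling the backend.
// Only covers primitive types; a real validator handles much more.
type InputSchema = {
  properties: Record<string, { type: string }>;
  required?: string[];
};

function validateArgs(
  schema: InputSchema,
  args: Record<string, unknown>
): string[] {
  const errors: string[] = [];

  // Every required parameter must be present
  for (const key of schema.required ?? []) {
    if (args[key] === undefined) {
      errors.push(`Missing required parameter: ${key}`);
    }
  }

  // Supplied parameters must match their declared primitive type
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (prop && typeof value !== prop.type) {
      errors.push(`Parameter ${key} should be a ${prop.type}`);
    }
  }

  return errors;
}
```

Returning these messages to the model (rather than throwing) lets it correct its own call and retry.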
Building an MCP Server
Let's build an MCP server for the PetStore API.
Install MCP SDK
npm install @modelcontextprotocol/sdk
Define Tools
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  {
    name: 'petstore-mcp-server',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// Advertise the available tools so MCP clients can discover them
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: 'get_pet',
        description: 'Retrieve information about a pet by ID',
        inputSchema: {
          type: 'object',
          properties: {
            petId: {
              type: 'string',
              description: 'The unique identifier of the pet',
            },
          },
          required: ['petId'],
        },
      },
      {
        name: 'list_pets',
        description: 'List available pets with optional filters',
        inputSchema: {
          type: 'object',
          properties: {
            species: {
              type: 'string',
              enum: ['DOG', 'CAT', 'BIRD', 'RABBIT'],
              description: 'Filter by species',
            },
            status: {
              type: 'string',
              enum: ['AVAILABLE', 'PENDING', 'ADOPTED'],
              description: 'Filter by adoption status',
            },
            limit: {
              type: 'number',
              description: 'Maximum number of results',
              default: 20,
            },
          },
        },
      },
      {
        name: 'create_adoption_application',
        description: 'Submit an adoption application for a pet',
        inputSchema: {
          type: 'object',
          properties: {
            petId: {
              type: 'string',
              description: 'ID of the pet to adopt',
            },
            applicantName: {
              type: 'string',
              description: 'Name of the applicant',
            },
            email: {
              type: 'string',
              description: 'Email address',
            },
            housingSituation: {
              type: 'string',
              enum: ['HOUSE_WITH_YARD', 'HOUSE_NO_YARD', 'APARTMENT', 'OTHER'],
              description: 'Housing situation',
            },
            experience: {
              type: 'string',
              description: 'Previous pet ownership experience',
            },
          },
          required: ['petId', 'applicantName', 'email', 'housingSituation'],
        },
      },
    ],
  };
});
Implement Tool Handlers
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  try {
    switch (name) {
      case 'get_pet': {
        const response = await fetch(
          `https://api.petstoreapi.com/v1/pets/${encodeURIComponent(args.petId)}`,
          {
            headers: {
              'Authorization': `Bearer ${process.env.PETSTORE_API_KEY}`,
            },
          }
        );

        if (!response.ok) {
          return {
            content: [
              {
                type: 'text',
                text: `Error: Pet not found (${response.status})`,
              },
            ],
            isError: true,
          };
        }

        const pet = await response.json();

        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(pet, null, 2),
            },
          ],
        };
      }

      case 'list_pets': {
        const params = new URLSearchParams();
        if (args.species) params.append('species', args.species);
        if (args.status) params.append('status', args.status);
        if (args.limit) params.append('limit', args.limit.toString());

        const response = await fetch(
          `https://api.petstoreapi.com/v1/pets?${params}`,
          {
            headers: {
              'Authorization': `Bearer ${process.env.PETSTORE_API_KEY}`,
            },
          }
        );

        if (!response.ok) {
          return {
            content: [
              { type: 'text', text: `Error: request failed (${response.status})` },
            ],
            isError: true,
          };
        }

        const data = await response.json();

        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(data, null, 2),
            },
          ],
        };
      }

      case 'create_adoption_application': {
        const response = await fetch(
          'https://api.petstoreapi.com/v1/adoptions/applications',
          {
            method: 'POST',
            headers: {
              'Authorization': `Bearer ${process.env.PETSTORE_API_KEY}`,
              'Content-Type': 'application/json',
            },
            body: JSON.stringify({
              petId: args.petId,
              applicant: {
                name: args.applicantName,
                email: args.email,
                housingSituation: args.housingSituation,
                experience: args.experience,
              },
            }),
          }
        );

        if (!response.ok) {
          return {
            content: [
              { type: 'text', text: `Error: application failed (${response.status})` },
            ],
            isError: true,
          };
        }

        const application = await response.json();

        return {
          content: [
            {
              type: 'text',
              text: `Application submitted successfully! Application ID: ${application.id}`,
            },
          ],
        };
      }

      default:
        return {
          content: [
            {
              type: 'text',
              text: `Unknown tool: ${name}`,
            },
          ],
          isError: true,
        };
    }
  } catch (error) {
    return {
      content: [
        {
          type: 'text',
          text: `Error: ${error.message}`,
        },
      ],
      isError: true,
    };
  }
});
Start the Server
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error('PetStore MCP server running on stdio');
}

main().catch((error) => {
  console.error('Fatal error:', error);
  process.exit(1);
});
Using MCP with Claude
Configure Claude Desktop to use your MCP server:
{
  "mcpServers": {
    "petstore": {
      "command": "node",
      "args": ["/path/to/petstore-mcp-server/index.js"],
      "env": {
        "PETSTORE_API_KEY": "your-api-key"
      }
    }
  }
}
Now Claude can use your tools:
User: "Show me available dogs"
Claude: calls list_pets tool with species=DOG, status=AVAILABLE
User: "Tell me about pet 123"
Claude: calls get_pet tool with petId=123
User: "I want to adopt Max. My name is John Doe, email john@example.com, I live in a house with a yard"
Claude: calls create_adoption_application tool with the provided information
MCP Best Practices
1. Clear Tool Descriptions
Write descriptions that help AI models understand when to use each tool:
Bad:
{
  "name": "get_pet",
  "description": "Gets a pet"
}
Good:
{
  "name": "get_pet",
  "description": "Retrieve detailed information about a specific pet including name, species, breed, age, status, and medical history. Use this when the user asks about a specific pet by ID or name."
}
2. Use Enums for Constrained Values
Help AI models choose valid values:
{
  "species": {
    "type": "string",
    "enum": ["DOG", "CAT", "BIRD", "RABBIT"],
    "description": "The species of pet"
  }
}
3. Provide Examples
Include examples in descriptions:
{
  "description": "Filter pets by age range. Example: {\"min\": 1, \"max\": 5} for pets between 1 and 5 years old"
}
4. Handle Errors Gracefully
Return helpful error messages:
if (!response.ok) {
  if (response.status === 404) {
    return {
      content: [{
        type: 'text',
        text: 'Pet not found. Please check the pet ID and try again.'
      }]
    };
  }
  if (response.status === 401) {
    return {
      content: [{
        type: 'text',
        text: 'Authentication failed. Please check your API key.'
      }]
    };
  }
}
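To keep these messages consistent across every tool, you might centralize the mapping in a small helper (`errorMessageFor` is a hypothetical name; adjust the wording to fit your API):

```typescript
// Sketch: map common HTTP status codes to messages an AI model can
// relay directly to the user. Unknown codes fall through to a
// generic message that still surfaces the status for debugging.
function errorMessageFor(status: number): string {
  switch (status) {
    case 401:
      return 'Authentication failed. Please check your API key.';
    case 404:
      return 'Pet not found. Please check the pet ID and try again.';
    case 429:
      return 'Rate limit exceeded. Please wait a moment and retry.';
    default:
      return `Request failed with status ${status}.`;
  }
}
```

Each tool handler can then return `errorMessageFor(response.status)` instead of repeating the copy inline.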
5. Rate Limiting
Implement rate limiting to prevent abuse:
const rateLimiter = new Map(); // userId -> { count, resetTime }

function checkRateLimit(userId: string): boolean {
  const now = Date.now();
  const limit = rateLimiter.get(userId);

  // First request, or the previous window has expired: start a new window
  if (!limit || limit.resetTime < now) {
    rateLimiter.set(userId, { count: 1, resetTime: now + 60000 });
    return true;
  }

  // Over the per-minute budget
  if (limit.count >= 100) {
    return false;
  }

  limit.count++;
  return true;
}
MCP vs Function Calling
MCP is similar to OpenAI's function calling but standardized across models.
Function Calling (OpenAI-specific):
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [...],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_pet',
        description: 'Get pet information',
        parameters: { ... }
      }
    }
  ]
});
MCP (model-agnostic):
// Works with Claude, GPT, Gemini, and any MCP-compatible model
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return { tools: [...] };
});
MCP provides a standard protocol that works across different AI models without model-specific code.
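The two formats are close enough that, if you already maintain MCP tool definitions, bridging to OpenAI's tools parameter is mostly a field rename. A sketch (`toOpenAiTool` is a hypothetical helper, not part of either SDK):

```typescript
// Sketch: convert an MCP tool definition into the shape expected by
// OpenAI's `tools` parameter. The MCP side follows the definitions
// used earlier in this article; both carry the same JSON Schema,
// just under different keys (inputSchema vs. parameters).
type McpTool = {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
};

function toOpenAiTool(tool: McpTool) {
  return {
    type: 'function' as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.inputSchema,
    },
  };
}
```

This means maintaining one canonical tool catalog in MCP and deriving model-specific formats from it, rather than the reverse.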
When to Use MCP
Use MCP when:

- Building tools for multiple AI models
- Creating reusable AI integrations
- Exposing your API to AI agents
- Building AI-powered workflows
Don't use MCP when:

- You only support one AI model (use native function calling)
- Your API is simple (direct API calls might be easier)
- You need fine-grained control over AI behavior
Conclusion
MCP standardizes how AI models interact with external tools. By exposing your API through MCP, you make it accessible to any MCP-compatible AI model.
The Modern PetStore API includes MCP support, letting AI agents:

- Search for pets
- View pet details
- Submit adoption applications
- Track adoption status
This opens new possibilities for AI-powered pet adoption experiences.
Ready to add MCP to your API? Check out the MCP documentation and start building AI-powered integrations today.