How to Build and Deploy a Custom MCP Server in 10 Minutes

This guide will walk you through creating and deploying a custom MCP server in just 10 minutes. Yes, 10 minutes! Let's get started.

Introduction

Model Context Protocol (MCP) represents a significant advancement in the AI ecosystem: an open standard for connecting AI applications to models, tools, and data sources. Rather than each AI platform implementing its own unique message format, MCP aims to provide a consistent interface for prompts, responses, and tool calling across various models and platforms.

While the protocol itself is evolving, building a basic MCP-compatible server can be straightforward. In this guide, I'll walk you through creating a simple, yet functional server that adheres to the core principles of message handling in modern AI systems. Our implementation will focus on creating a foundation that you can later extend with more advanced features.

Is 10 minutes enough time? For a production-ready system, certainly not. But for a working prototype that demonstrates the key concepts? Absolutely. Let's get started!

Prerequisites

Before we begin, you'll need:

  • Node.js (v18 or later; the test script in Step 4 uses Node's built-in fetch)
  • Basic knowledge of JavaScript/TypeScript
  • Familiarity with Express.js or similar web frameworks
  • A code editor (VS Code recommended)
  • Terminal/command line access
  • npm or yarn package manager

Step 1: Setting Up Your Project (2 minutes)

First, let's create a new directory and initialize our project:

mkdir mcp-server
cd mcp-server
npm init -y

Now, install the necessary dependencies:

npm install express cors
npm install --save-dev typescript ts-node nodemon @types/node @types/express @types/cors

Create a TypeScript configuration file:

npx tsc --init

Edit the generated tsconfig.json to include these essential settings:

{
  "compilerOptions": {
    "target": "es2018",
    "module": "commonjs",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"]
}

Update your package.json scripts section:

"scripts": {
  "start": "node dist/index.js",
  "dev": "nodemon --exec ts-node src/index.ts",
  "build": "tsc"
}

Step 2: Creating the Core Server (3 minutes)

Create your source directory and main server file:

mkdir -p src/handlers
touch src/index.ts
touch src/handlers/messageHandler.ts
touch src/types.ts

Let's define our types first in src/types.ts:

// Basic message structure. The index signature lets content items carry
// extra fields (e.g. tool_call and tool_result data) without tripping
// TypeScript's excess-property checks in strict mode.
export interface Content {
  type: string;
  text?: string;
  [key: string]: unknown;
}

export interface Message {
  role: "user" | "assistant" | "system";
  content: Content[] | string;
}

// Request and response structures
export interface ModelRequest {
  messages: Message[];
  max_tokens?: number;
  temperature?: number;
  stream?: boolean;
}

export interface ModelResponse {
  message: Message;
}

// Tool calling interfaces
export interface Tool {
  name: string;
  description: string;
  input_schema: {
    type: string;
    properties: Record<string, any>;
    required?: string[];
  };
}

export interface ToolCall {
  type: "tool_call";
  id: string;
  name: string;
  input: Record<string, any>;
}

export interface ToolResult {
  type: "tool_result";
  tool_call_id: string;
  content: string;
}
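
Concretely, a request body matching these interfaces might look like the following. This is a hand-written illustration of the shapes above, not output from any real client; note that content may be either a plain string or an array of typed blocks, and the handler we write next accepts both:

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    {
      "role": "user",
      "content": [{ "type": "text", "text": "What time is it?" }]
    }
  ],
  "max_tokens": 256,
  "temperature": 0.7
}
```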

Now, implement the basic server in src/index.ts:

import express from 'express';
import cors from 'cors';
import { handleMessageRequest } from './handlers/messageHandler';

const app = express();
const PORT = process.env.PORT || 3000;

// Middleware
app.use(cors());
app.use(express.json({ limit: '10mb' }));

// Main endpoint for message processing
app.post('/v1/messages', handleMessageRequest);

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok' });
});

// Start server
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
  console.log(`Health check: http://localhost:${PORT}/health`);
  console.log(`Messages endpoint: http://localhost:${PORT}/v1/messages`);
});

Next, implement the message handler in src/handlers/messageHandler.ts:

import { Request, Response } from 'express';
import { ModelRequest, ModelResponse, Message, Content } from '../types';

export async function handleMessageRequest(req: Request, res: Response) {
  try {
    const request = req.body as ModelRequest;
    
    // Basic validation
    if (!request.messages || !Array.isArray(request.messages) || request.messages.length === 0) {
      return res.status(400).json({ error: 'Invalid request format. Messages array is required.' });
    }
    
    // Log the incoming request (for debugging)
    console.log('Received request with', request.messages.length, 'messages');
    
    // Process the messages
    const response = processMessages(request.messages);
    
    // Return the response
    return res.status(200).json(response);
  } catch (error) {
    console.error('Error processing request:', error);
    return res.status(500).json({ error: 'Internal server error' });
  }
}

function processMessages(messages: Message[]): ModelResponse {
  // Extract the last user message
  const lastUserMessage = findLastUserMessage(messages);
  
  if (!lastUserMessage) {
    return createErrorResponse("No user message found in the conversation");
  }
  
  const userQuery = extractTextContent(lastUserMessage);
  
  // Simple response generation logic
  let responseText = "";
  
  if (userQuery.toLowerCase().includes('hello') || userQuery.toLowerCase().includes('hi')) {
    responseText = "Hello! How can I assist you today?";
  } else if (userQuery.toLowerCase().includes('weather')) {
    responseText = "I don't have access to real-time weather data, but I can help you understand weather patterns in general.";
  } else if (userQuery.toLowerCase().includes('time')) {
    responseText = `The current server time is ${new Date().toLocaleTimeString()}.`;
  } else {
    responseText = "I received your message. This is a simple model server response.";
  }
  
  // Construct and return the response
  return {
    message: {
      role: "assistant",
      content: [{ 
        type: "text", 
        text: responseText 
      }]
    }
  };
}

function findLastUserMessage(messages: Message[]): Message | undefined {
  // Find the last message with role 'user'
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === 'user') {
      return messages[i];
    }
  }
  return undefined;
}

function extractTextContent(message: Message): string {
  if (typeof message.content === 'string') {
    return message.content;
  } else if (Array.isArray(message.content)) {
    return message.content
      .filter(item => item.type === 'text' && item.text)
      .map(item => item.text)
      .join(' ');
  }
  return '';
}

function createErrorResponse(errorMessage: string): ModelResponse {
  return {
    message: {
      role: "assistant",
      content: [{ 
        type: "text", 
        text: `Error: ${errorMessage}` 
      }]
    }
  };
}

Step 3: Adding Tool Calling Capability (3 minutes)

Create a new file for tool definitions and implementations:

touch src/tools.ts

Implement some basic tools in src/tools.ts:

import { Tool } from './types';

// Tool definitions
export const availableTools: Tool[] = [
  {
    name: "get_current_time",
    description: "Get the current server time",
    input_schema: {
      type: "object",
      properties: {
        timezone: {
          type: "string",
          description: "Optional timezone (defaults to server timezone)"
        }
      }
    }
  },
  {
    name: "calculate",
    description: "Perform a mathematical calculation",
    input_schema: {
      type: "object",
      properties: {
        expression: {
          type: "string",
          description: "Mathematical expression to evaluate"
        }
      },
      required: ["expression"]
    }
  }
];

// Tool implementations
export function executeToolCall(name: string, params: Record<string, any>): string {
  switch (name) {
    case "get_current_time":
      return getTime(params.timezone);
    case "calculate":
      return calculate(params.expression);
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}

function getTime(timezone?: string): string {
  const options: Intl.DateTimeFormatOptions = { 
    hour: '2-digit', 
    minute: '2-digit', 
    second: '2-digit',
    timeZoneName: 'short' 
  };
  
  try {
    if (timezone) {
      options.timeZone = timezone;
    }
    return new Date().toLocaleTimeString('en-US', options);
  } catch (error) {
    return `${new Date().toLocaleTimeString()} (Server time)`;
  }
}

function calculate(expression: string): string {
  try {
    // CAUTION: In a real application, you should use a safer evaluation method
    // This is a simplified example for demonstration only
    const sanitizedExpression = expression.replace(/[^0-9+\-*/().\s]/g, '');
    const result = eval(sanitizedExpression);
    return `${expression} = ${result}`;
  } catch (error) {
    return `Error calculating ${expression}: ${error instanceof Error ? error.message : String(error)}`;
  }
}
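
If you want to avoid eval entirely, a small recursive-descent parser can evaluate the same arithmetic safely. The sketch below is a hypothetical drop-in replacement for the calculate function above (the name safeCalculate is mine, not part of the tutorial's code); it supports +, -, *, /, parentheses, and decimal numbers, but not unary minus:

```typescript
// A dependency-free arithmetic evaluator that avoids eval().
// Grammar: expression := term (('+'|'-') term)*
//          term       := factor (('*'|'/') factor)*
//          factor     := '(' expression ')' | number
function safeCalculate(expression: string): number {
  const input = expression.replace(/\s+/g, '');
  let pos = 0;

  function parseExpression(): number {
    let value = parseTerm();
    while (input[pos] === '+' || input[pos] === '-') {
      const op = input[pos++];
      const rhs = parseTerm();
      value = op === '+' ? value + rhs : value - rhs;
    }
    return value;
  }

  function parseTerm(): number {
    let value = parseFactor();
    while (input[pos] === '*' || input[pos] === '/') {
      const op = input[pos++];
      const rhs = parseFactor();
      value = op === '*' ? value * rhs : value / rhs;
    }
    return value;
  }

  function parseFactor(): number {
    if (input[pos] === '(') {
      pos++; // consume '('
      const value = parseExpression();
      if (input[pos] !== ')') throw new Error('Expected closing parenthesis');
      pos++; // consume ')'
      return value;
    }
    const match = /^\d+(\.\d+)?/.exec(input.slice(pos));
    if (!match) throw new Error(`Unexpected character at position ${pos}`);
    pos += match[0].length;
    return parseFloat(match[0]);
  }

  const result = parseExpression();
  if (pos !== input.length) throw new Error('Unexpected trailing input');
  return result;
}

console.log(safeCalculate('2 + 3 * 4'));   // 14
console.log(safeCalculate('(1 + 2) * 3')); // 9
```

Because the parser only ever consumes digits, operators, and parentheses, malformed or malicious input raises an error instead of being executed.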

Now, update the message handler to support tool calling by modifying src/handlers/messageHandler.ts:

// Add these imports at the top
import { availableTools, executeToolCall } from '../tools';
import { ToolCall, ToolResult } from '../types';

// Update the handleMessageRequest function
export async function handleMessageRequest(req: Request, res: Response) {
  try {
    const request = req.body as ModelRequest;
    
    // Basic validation
    if (!request.messages || !Array.isArray(request.messages) || request.messages.length === 0) {
      return res.status(400).json({ error: 'Invalid request format. Messages array is required.' });
    }
    
    // Check if this is a request with tool results
    const lastMessage = request.messages[request.messages.length - 1];
    if (lastMessage.role === 'assistant' && Array.isArray(lastMessage.content)) {
      const toolCalls = lastMessage.content.filter(item => 
        item.type === 'tool_call') as ToolCall[];
      
      if (toolCalls.length > 0) {
        // This is a follow-up with tool results
        return handleToolResultsResponse(request, res);
      }
    }
    
    // Process as a regular message
    const response = processMessages(request.messages);
    
    // Return the response
    return res.status(200).json(response);
  } catch (error) {
    console.error('Error processing request:', error);
    return res.status(500).json({ error: 'Internal server error' });
  }
}

// Add this function to handle tool calls
function handleToolResultsResponse(request: ModelRequest, res: Response) {
  const messages = request.messages;
  const lastAssistantMessage = messages[messages.length - 1];
  
  if (lastAssistantMessage.role !== 'assistant' || !Array.isArray(lastAssistantMessage.content)) {
    return res.status(400).json({ error: 'Invalid tool results format' });
  }
  
  // Find tool calls and results
  const toolCalls = lastAssistantMessage.content.filter(
    item => item.type === 'tool_call'
  ) as ToolCall[];
  
  const toolResults = lastAssistantMessage.content.filter(
    item => item.type === 'tool_result'
  ) as ToolResult[];
  
  // Process the results
  let finalResponse = "I've processed the following information:\n\n";
  
  toolResults.forEach(result => {
    const relatedCall = toolCalls.find(call => call.id === result.tool_call_id);
    if (relatedCall) {
      finalResponse += `- ${relatedCall.name}: ${result.content}\n`;
    }
  });
  
  return res.status(200).json({
    message: {
      role: "assistant",
      content: [{ type: "text", text: finalResponse }]
    }
  });
}

// Modify the processMessages function to handle potential tool calls
function processMessages(messages: Message[]): ModelResponse {
  const lastUserMessage = findLastUserMessage(messages);
  
  if (!lastUserMessage) {
    return createErrorResponse("No user message found in the conversation");
  }
  
  const userQuery = extractTextContent(lastUserMessage);
  
  // Look for keywords that might trigger tool use
  if (userQuery.toLowerCase().includes('time')) {
    return {
      message: {
        role: "assistant",
        content: [
          { type: "tool_call", id: "call_001", name: "get_current_time", input: {} },
          { 
            type: "text", 
            text: "I'll check the current time for you." 
          }
        ]
      }
    };
  } else if (userQuery.toLowerCase().match(/calculate|compute|what is \d+[\+\-\*\/]/)) {
    // Extract a potential calculation
    const expression = userQuery.match(/(\d+[\+\-\*\/\(\)\.]*\d+)/)?.[0] || "1+1";
    
    return {
      message: {
        role: "assistant",
        content: [
          { 
            type: "tool_call", 
            id: "call_002", 
            name: "calculate", 
            input: { expression } 
          },
          { 
            type: "text", 
            text: `I'll calculate ${expression} for you.` 
          }
        ]
      }
    };
  }
  
  // Default response for other queries
  return {
    message: {
      role: "assistant",
      content: [{ 
        type: "text", 
        text: `I received your message: "${userQuery}". How can I help you further?` 
      }]
    }
  };
}
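
To make the round trip concrete, here is one possible exchange under this simplified protocol (the values are illustrative; the id comes from the hard-coded call_001 in processMessages, and the time string is made up). The client receives the tool_call, executes it (for example via executeToolCall), appends a tool_result to the assistant message, and posts the whole conversation back:

```json
{
  "messages": [
    {
      "role": "user",
      "content": [{ "type": "text", "text": "What time is it?" }]
    },
    {
      "role": "assistant",
      "content": [
        { "type": "tool_call", "id": "call_001", "name": "get_current_time", "input": {} },
        { "type": "tool_result", "tool_call_id": "call_001", "content": "09:15:32 AM EST" },
        { "type": "text", "text": "I'll check the current time for you." }
      ]
    }
  ]
}
```

handleToolResultsResponse then pairs each tool_result with its tool_call by id and summarizes the results in a final text reply.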

Step 4: Testing and Deployment (2 minutes)

Let's create a simple test script to check our server. Create a file test.js in the root directory. It uses the fetch API built into Node 18+; on older Node versions, run npm install node-fetch@2 and add const fetch = require('node-fetch'); at the top:

async function testServer() {
  const url = 'http://localhost:3000/v1/messages';
  
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              text: "What time is it right now?"
            }
          ]
        }
      ]
    })
  });
  
  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

testServer().catch(console.error);

To test your server:

# Start the server
npm run dev

# In another terminal, run the test
node test.js

For quick deployment, let's create a simple Dockerfile:

FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .
RUN npm run build

EXPOSE 3000

CMD ["npm", "start"]

Build and run the container:

docker build -t mcp-server .
docker run -p 3000:3000 mcp-server

Conclusion

In just 10 minutes, we've built a basic server that implements the key concepts of modern AI message protocols. Our server can:

  1. Process structured message requests
  2. Respond in a standard format
  3. Handle basic tool calling functionality
  4. Process and respond to tool results

While this implementation is simplified, it provides a solid foundation for further development. To extend this server for production use, consider:

  • Adding authentication and rate limiting
  • Implementing proper error handling and validation
  • Connecting to actual AI models for processing
  • Adding more sophisticated tools
  • Implementing streaming responses
  • Adding comprehensive logging and monitoring
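
As a taste of the streaming item, a common approach is Server-Sent Events: instead of one JSON body, the handler writes data: chunks as text becomes available. The sketch below is dependency-free — a minimal structural type stands in for the parts of Express's Response it uses — and the sseChunk helper and content_delta event name are my own assumptions, not part of any official protocol:

```typescript
// Minimal stand-in for the parts of Express's Response used here.
interface StreamableResponse {
  setHeader(name: string, value: string): void;
  write(chunk: string): void;
  end(): void;
}

// Format one text delta as a Server-Sent Events chunk.
function sseChunk(delta: string): string {
  return `data: ${JSON.stringify({ type: 'content_delta', text: delta })}\n\n`;
}

// Stream a response word by word, then signal completion.
function streamText(res: StreamableResponse, fullText: string): void {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  for (const word of fullText.split(' ')) {
    res.write(sseChunk(word + ' '));
  }
  res.write('data: [DONE]\n\n'); // conventional end-of-stream marker
  res.end();
}
```

In the real server you would pass Express's res object directly (it satisfies this shape) and branch into streamText inside the /v1/messages handler when request.stream is true.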

Remember that while our implementation follows general principles of modern AI messaging protocols, specific implementations like OpenAI's API or Anthropic's Claude API may have additional requirements or slight variations in their expected formats. Always consult the official documentation for the specific service you're integrating with.

The field of AI messaging protocols is rapidly evolving, so stay updated with the latest developments and be prepared to adapt your implementation as standards evolve. By understanding the core concepts demonstrated in this guide, you'll be well-equipped to work with whatever new advancements emerge in this exciting field.