Microservices API Gateway Pattern

When you're building modern applications, you'll quickly realize that managing dozens of microservices can get messy. Each service has its own API, authentication requirements, and rate limits. That's where API gateways come in—they're the traffic controllers of your microservices architecture.

What Is an API Gateway?

Think of an API gateway as a single entry point for all your client requests. Instead of clients calling individual microservices directly, they talk to the gateway, which then routes requests to the appropriate services.

Here's a simple example of what happens without a gateway:

// Client making direct calls to multiple services
const userResponse = await fetch('https://users-service.com/api/users/123');
const ordersResponse = await fetch('https://orders-service.com/api/orders?userId=123');
const inventoryResponse = await fetch('https://inventory-service.com/api/products/456');

With an API gateway, it looks like this:

// Client making calls through a gateway
const userResponse = await fetch('https://api.myapp.com/users/123');
const ordersResponse = await fetch('https://api.myapp.com/orders?userId=123');
const inventoryResponse = await fetch('https://api.myapp.com/products/456');

The gateway handles routing these requests to the correct microservices behind the scenes.
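Under the hood, that routing step is just a prefix match against a table. Here's a minimal sketch in JavaScript (service names and ports are hypothetical):

```javascript
// Minimal routing sketch: match the request path against a prefix
// table and rewrite the URL to the matching upstream service.
const routes = [
  { prefix: '/users',    upstream: 'http://users-service:8001' },
  { prefix: '/orders',   upstream: 'http://orders-service:8002' },
  { prefix: '/products', upstream: 'http://inventory-service:8003' },
];

// Resolve an incoming path to the upstream URL it should be proxied to.
function resolveUpstream(path) {
  const match = routes.find((r) => path.startsWith(r.prefix));
  if (!match) return null; // no route: the gateway would return 404
  return match.upstream + path;
}
```

A real gateway adds header- and method-based matching on top, but the core idea is exactly this lookup.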

Core Functions of an API Gateway

1. Request Routing

Routing is the most fundamental job of an API gateway. It examines incoming requests and forwards them to the right microservice based on the URL path, headers, or other criteria.

Here's how you might configure routing in Kong:

services:
  - name: user-service
    url: http://users-service:8001
    routes:
      - name: user-route
        paths:
          - /users
        methods:
          - GET
          - POST
          - PUT
          - DELETE

  - name: order-service
    url: http://orders-service:8002
    routes:
      - name: order-route
        paths:
          - /orders
        methods:
          - GET
          - POST

With this configuration, any request to /users gets routed to the user service, while /orders requests go to the order service.

2. Authentication and Authorization

Instead of implementing authentication in every microservice, you can centralize it at the gateway level. This means you write the authentication logic once and it protects all your services.

Here's an example using JWT authentication in Kong:

plugins:
  - name: jwt
    service: user-service
    config:
      key_claim_name: kid
      secret_is_base64: false
      claims_to_verify:
        - exp

And here's how a client would authenticate:

const response = await fetch('https://api.myapp.com/users/123', {
  headers: {
    'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...'
  }
});

The gateway validates the JWT before forwarding the request to the user service. If the token is invalid or expired, the request never reaches your microservices.

3. Rate Limiting

Rate limiting prevents abuse and ensures fair usage of your API. You can set limits per user, per IP address, or globally.

Here's a Kong rate limiting configuration:

plugins:
  - name: rate-limiting
    config:
      minute: 100
      hour: 1000
      policy: local
      fault_tolerant: true

This allows 100 requests per minute and 1,000 per hour. When a client exceeds the limit, Kong returns a 429 Too Many Requests response, sets rate-limit headers (including Retry-After), and sends a body like:

{
  "message": "API rate limit exceeded"
}
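On the client side, a well-behaved consumer backs off when it sees a 429 rather than hammering the gateway. A minimal retry sketch (`doRequest` stands in for any fetch-style call returning a Response-like object):

```javascript
// Retry on 429, waiting the server-suggested delay (Retry-After,
// in seconds) or a simple backoff, up to maxAttempts tries.
async function fetchWithRetry(doRequest, maxAttempts = 3) {
  let res;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    res = await doRequest();
    if (res.status !== 429 || attempt === maxAttempts) return res;
    const delay = Number(res.headers.get('Retry-After') ?? attempt) * 1000;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  return res;
}
```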

You can also implement more sophisticated rate limiting based on user tiers:

plugins:
  - name: rate-limiting
    config:
      minute: 1000
      hour: 10000
      limit_by: consumer
      policy: redis
      redis_host: redis-cluster
      redis_port: 6379
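Conceptually, the gateway is keeping a per-consumer counter for each time window. A fixed-window sketch of that bookkeeping (Kong's actual implementation differs; the redis policy, for instance, shares counters across gateway nodes):

```javascript
// Fixed-window rate limiter: count requests per consumer per window
// and reject once the count reaches the limit.
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.counters = new Map(); // consumer -> { windowStart, count }
  }

  allow(consumer, now = Date.now()) {
    const windowStart = Math.floor(now / this.windowMs) * this.windowMs;
    const entry = this.counters.get(consumer);
    if (!entry || entry.windowStart !== windowStart) {
      // First request in a fresh window: reset the counter.
      this.counters.set(consumer, { windowStart, count: 1 });
      return true;
    }
    if (entry.count >= this.limit) return false; // over quota: 429
    entry.count++;
    return true;
  }
}
```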

4. Service Discovery

In a dynamic microservices environment, services come and go. They scale up and down, get deployed to different hosts, and sometimes fail. Service discovery helps the gateway find healthy service instances automatically.

Here's how you might configure service discovery with Consul and Kong:

# Kong declarative config: upstream targets are Consul DNS names, so
# Kong resolves to whichever instances are currently registered.
# Services route to these upstreams by using the upstream name as host.
upstreams:
  - name: user-service
    algorithm: least-connections
    targets:
      - target: user-service.service.consul:8001

  - name: order-service
    algorithm: round-robin
    targets:
      - target: order-service.service.consul:8002

When a new instance of the user service starts, it registers with Consul, and Kong automatically starts routing traffic to it.
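The load-balancing half of that story can be sketched as a round-robin picker over whatever instances the registry currently reports (`lookup` is a stand-in for the real Consul query):

```javascript
// Round-robin balancer over dynamically discovered instances.
// `lookup(serviceName)` is assumed to return the current list of
// healthy addresses, e.g. from a Consul DNS or HTTP query.
function makeBalancer(lookup) {
  let i = 0;
  return async function pick(serviceName) {
    const instances = await lookup(serviceName);
    if (instances.length === 0) throw new Error(`no healthy ${serviceName}`);
    return instances[i++ % instances.length]; // rotate through instances
  };
}
```

Because the instance list is re-fetched on each pick, newly registered instances start receiving traffic without any gateway reconfiguration.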

Comparing API Gateway Solutions

Kong

Kong is an open-source API gateway built on Nginx and Lua. It's highly extensible with a plugin architecture.

Pros:
- Massive plugin ecosystem (authentication, rate limiting, logging, transformations)
- High performance (handles 100k+ requests per second)
- Can run anywhere (on-premises, cloud, Kubernetes)
- Active open-source community

Cons:
- Requires additional infrastructure (PostgreSQL or Cassandra for clustering)
- Plugin development requires Lua knowledge
- Enterprise features cost money

Here's a complete Kong configuration example:

_format_version: "3.0"

services:
  - name: petstore-api
    url: http://petstore-backend:8080
    routes:
      - name: pets-route
        paths:
          - /api/v1/pets
        strip_path: false
    plugins:
      - name: jwt
        config:
          claims_to_verify:
            - exp
      - name: rate-limiting
        config:
          minute: 100
          policy: redis
          redis_host: redis
      - name: cors
        config:
          origins:
            - https://petstore.com
          methods:
            - GET
            - POST
            - PUT
            - DELETE
          credentials: true
      - name: request-transformer
        config:
          add:
            headers:
              - X-Gateway-Version:1.0

AWS API Gateway

AWS API Gateway is a fully managed service that integrates seamlessly with other AWS services.

Pros:
- Zero infrastructure management
- Native AWS integrations (Lambda, DynamoDB, S3)
- Built-in CloudWatch monitoring
- Automatic scaling

Cons:
- Vendor lock-in
- Can get expensive at scale
- Less flexible than self-hosted solutions
- Cold start issues with Lambda integrations

Here's an AWS API Gateway configuration using CloudFormation:

Resources:
  PetStoreAPI:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: PetStore API
      Description: API for pet store operations

  PetsResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId: !Ref PetStoreAPI
      ParentId: !GetAtt PetStoreAPI.RootResourceId
      PathPart: pets

  GetPetsMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref PetStoreAPI
      ResourceId: !Ref PetsResource
      HttpMethod: GET
      AuthorizationType: AWS_IAM
      Integration:
        Type: AWS_PROXY
        IntegrationHttpMethod: POST
        Uri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${GetPetsFunction.Arn}/invocations
      MethodResponses:
        - StatusCode: 200
          ResponseModels:
            application/json: Empty

You can also add rate limiting with usage plans:

  UsagePlan:
    Type: AWS::ApiGateway::UsagePlan
    Properties:
      UsagePlanName: Basic Plan
      # A usage plan has no effect until attached to a deployed stage
      # (this assumes a stage named "prod" exists)
      ApiStages:
        - ApiId: !Ref PetStoreAPI
          Stage: prod
      Throttle:
        RateLimit: 100
        BurstLimit: 200
      Quota:
        Limit: 10000
        Period: MONTH

Nginx

Nginx is a battle-tested web server that can function as an API gateway with the right configuration.

Pros:
- Extremely fast and lightweight
- Mature and stable
- Flexible configuration
- Free and open source

Cons:
- Requires manual configuration for advanced features
- No built-in service discovery
- Limited plugin ecosystem compared to Kong
- Configuration can get complex

Here's an Nginx configuration for API gateway functionality:

upstream user_service {
    least_conn;
    server user-service-1:8001 max_fails=3 fail_timeout=30s;
    server user-service-2:8001 max_fails=3 fail_timeout=30s;
    server user-service-3:8001 max_fails=3 fail_timeout=30s;
}

upstream order_service {
    least_conn;
    server order-service-1:8002;
    server order-service-2:8002;
}

# Rate limiting zone
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;
    server_name api.petstore.com;

    # JWT authentication (the auth_jwt directives require NGINX Plus;
    # open-source NGINX needs a module such as lua-resty-jwt instead)
    auth_jwt "API Gateway";
    auth_jwt_key_file /etc/nginx/jwt_key.pem;

    # Rate limiting
    limit_req zone=api_limit burst=20 nodelay;

    # User service routes
    location /api/v1/users {
        proxy_pass http://user_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Timeouts
        proxy_connect_timeout 5s;
        proxy_send_timeout 10s;
        proxy_read_timeout 10s;
    }

    # Order service routes
    location /api/v1/orders {
        proxy_pass http://order_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # CORS headers
        add_header Access-Control-Allow-Origin https://petstore.com always;
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE" always;
        add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
    }

    # Health check endpoint
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }
}

Advanced Gateway Patterns

Circuit Breaking

Circuit breakers prevent cascading failures when a downstream service is struggling. Kong's open-source distribution doesn't bundle a circuit-breaker plugin, but third-party plugins implement the pattern with configuration along these lines (field names are illustrative):

plugins:
  - name: circuit-breaker  # third-party plugin; not bundled with Kong OSS
    config:
      threshold: 10
      window_size: 60
      timeout: 30

This opens the circuit after 10 failures within a 60-second window and probes the service again after 30 seconds.
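The logic behind those three numbers can be sketched directly: count failures in a rolling window, trip open at the threshold, and allow a probe after the timeout:

```javascript
// Circuit-breaker sketch: open after `threshold` failures inside a
// rolling window, stay open for `timeoutMs`, then allow one trial
// request through (the half-open state).
class CircuitBreaker {
  constructor({ threshold = 10, windowMs = 60000, timeoutMs = 30000 } = {}) {
    this.threshold = threshold;
    this.windowMs = windowMs;
    this.timeoutMs = timeoutMs;
    this.failures = [];   // timestamps of recent failures
    this.openedAt = null; // when the circuit tripped, or null if closed
  }

  canRequest(now = Date.now()) {
    if (this.openedAt === null) return true;      // closed: allow
    return now - this.openedAt >= this.timeoutMs; // half-open after timeout
  }

  recordFailure(now = Date.now()) {
    this.failures = this.failures.filter((t) => now - t < this.windowMs);
    this.failures.push(now);
    if (this.failures.length >= this.threshold) this.openedAt = now;
  }

  recordSuccess() {
    this.failures = [];
    this.openedAt = null; // close the circuit again
  }
}
```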

Request Transformation

Sometimes you need to modify requests before they reach your services:

plugins:
  - name: request-transformer
    config:
      remove:
        headers:
          - X-Internal-Header
      add:
        headers:
          - X-Service-Version:2.0
        querystring:
          - source:gateway
      replace:
        uri: /v2/pets

Response Caching

Reduce load on your services by caching responses at the gateway:

plugins:
  - name: proxy-cache
    config:
      strategy: memory
      content_type:
        - application/json
      cache_ttl: 300
      cache_control: true
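The mechanism is simple: store each response keyed by URL with an expiry. A tiny TTL-cache sketch mirroring the 300-second cache_ttl above:

```javascript
// In-memory TTL cache: entries are keyed by URL and treated as
// misses once their expiry has passed.
class ResponseCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
  }

  get(key, now = Date.now()) {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= now) return undefined; // miss or stale
    return entry.value;
  }

  set(key, value, now = Date.now()) {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```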

Choosing the Right Gateway

Choose Kong if:
- You need extensive plugin support
- You want flexibility to run anywhere
- You have complex routing and transformation needs
- You're comfortable managing infrastructure

Choose AWS API Gateway if:
- You're already on AWS
- You want zero infrastructure management
- You're using Lambda functions
- Cost isn't your primary concern

Choose Nginx if:
- You need maximum performance
- You have simple routing requirements
- You want full control over configuration
- You have Nginx expertise in-house

Best Practices

Keep gateways stateless: Store session data in Redis or a database, not in the gateway itself.

Monitor everything: Track latency, error rates, and throughput at the gateway level.

Use health checks: Configure the gateway to automatically remove unhealthy service instances from rotation.

Implement timeouts: Don't let slow services drag down your entire system.

Version your APIs: Use path-based versioning (/v1/pets, /v2/pets) to support multiple API versions.

Secure internal communication: Even though the gateway handles external authentication, secure service-to-service communication with mutual TLS.
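The timeout advice in particular is easy to sketch: wrap every upstream call so a slow service fails fast instead of tying up the gateway:

```javascript
// Reject if `promise` doesn't settle within `ms` milliseconds, so a
// slow upstream surfaces as a fast error instead of a hung request.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('upstream timeout')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```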

Real-World Example: PetStore API Gateway

Let's put it all together with a complete PetStore API gateway setup using Kong:

_format_version: "3.0"

services:
  - name: pets-service
    url: http://pets-backend:8080
    retries: 3
    connect_timeout: 5000
    write_timeout: 10000
    read_timeout: 10000
    routes:
      - name: list-pets
        paths:
          - /api/v1/pets
        methods:
          - GET
      - name: create-pet
        paths:
          - /api/v1/pets
        methods:
          - POST
      - name: get-pet
        paths:
          - ~/api/v1/pets/(?<id>\d+)
        methods:
          - GET
    plugins:
      - name: jwt
      - name: rate-limiting
        config:
          minute: 100
          hour: 1000
          policy: redis
          redis_host: redis
      - name: cors
        config:
          origins:
            - https://petstore.com
          credentials: true
      - name: proxy-cache
        config:
          # The open-source proxy-cache plugin supports only the memory
          # strategy; Redis-backed caching requires the Enterprise
          # proxy-cache-advanced plugin
          strategy: memory
          content_type:
            - application/json
          cache_ttl: 300
      - name: prometheus
        config:
          per_consumer: true

  - name: orders-service
    url: http://orders-backend:8081
    routes:
      - name: orders
        paths:
          - /api/v1/orders
    plugins:
      - name: jwt
      - name: rate-limiting
        config:
          minute: 50
          hour: 500

This configuration gives you authentication, rate limiting, caching, CORS support, and monitoring—all without writing a single line of code in your microservices.

Wrapping Up

API gateways are essential for managing microservices at scale. They centralize cross-cutting concerns like authentication, rate limiting, and monitoring, making your architecture cleaner and more maintainable.

Kong offers the most flexibility and features, AWS API Gateway provides the easiest setup for AWS users, and Nginx delivers raw performance with full control. Choose based on your specific needs, team expertise, and infrastructure constraints.

The key is to start simple and add complexity only when you need it. A basic gateway with routing and authentication will take you far, and you can always add more sophisticated features as your system grows.