Streaming API Responses with Server-Sent Events

Meta Description: Stream large datasets, real-time updates, and AI-generated content with Server-Sent Events. Learn SSE implementation with examples.

Keywords: server sent events, streaming api, sse protocol, real-time streaming, api streaming, progressive responses

Word Count: ~2,100 words


Your API returns a large dataset. The client waits 10 seconds for the full response. Users see a loading spinner. The experience feels slow.

Or you're generating AI content. The model takes 30 seconds to complete. Users wait with no feedback.

Streaming solves this. Send data as it becomes available. Users see progress immediately.

Server-Sent Events (SSE) makes streaming simple.

Why Stream API Responses?

Use Case 1: Large Datasets

Instead of waiting for all 10,000 records:

// Non-streaming (slow)
const response = await fetch('/api/pets');
const pets = await response.json(); // Wait for all 10,000 records
displayPets(pets);

Stream records as they're fetched:

// Streaming (fast)
const eventSource = new EventSource('/api/pets/stream');
eventSource.onmessage = (event) => {
  const pet = JSON.parse(event.data);
  displayPet(pet); // Show immediately
};

Users see results instantly instead of waiting.

Use Case 2: AI Content Generation

AI models generate text token by token. Stream tokens as they're generated:

const eventSource = new EventSource('/api/ai/generate');
eventSource.onmessage = (event) => {
  const token = event.data;
  appendToOutput(token); // Show each word as it's generated
};

Users see content appear in real-time, like ChatGPT.

Use Case 3: Progress Updates

Long-running operations can stream progress:

const eventSource = new EventSource('/api/reports/generate');
eventSource.onmessage = (event) => {
  const progress = JSON.parse(event.data);
  updateProgressBar(progress.percent); // Show progress
};

Users know the operation is working, not stuck.

Use Case 4: Real-Time Monitoring

Stream metrics, logs, or events:

const eventSource = new EventSource('/api/metrics/stream');
eventSource.onmessage = (event) => {
  const metric = JSON.parse(event.data);
  updateDashboard(metric); // Update in real-time
};

Dashboards update automatically without polling.

SSE Basics

SSE is an HTTP-based protocol for server-to-client streaming.

Client Code

const eventSource = new EventSource('https://api.petstoreapi.com/v1/pets/stream');

eventSource.onopen = () => {
  console.log('Connection opened');
};

eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Received:', data);
};

eventSource.onerror = (error) => {
  console.error('Error:', error);
  eventSource.close();
};

// Close when done, e.g. from your completion handler, not right away:
// eventSource.close();
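
In addition to onmessage, EventSource supports named event types. When the server prefixes an event with an event: line, the client can subscribe to that type directly; untyped events still arrive through onmessage:

eventSource.addEventListener('pet', (event) => {
  // Fires only for events the server tagged with an "event: pet" line
  const pet = JSON.parse(event.data);
  console.log('Pet event:', pet);
});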

Server Response Format

HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive

data: {"id":"123","name":"Max","species":"DOG"}

data: {"id":"456","name":"Bella","species":"CAT"}

data: {"id":"789","name":"Charlie","species":"BIRD"}

Each event:
- Starts with a data: prefix
- Contains the payload
- Ends with two newlines (\n\n)
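
On the server, a small helper keeps this format consistent (a minimal sketch; sendEvent is our name, not a library function):

// Write one SSE event; the id and event lines are optional
function sendEvent(res, data, { id, event } = {}) {
  if (id) res.write(`id: ${id}\n`);
  if (event) res.write(`event: ${event}\n`);
  res.write(`data: ${JSON.stringify(data)}\n\n`);
}

// Usage
sendEvent(res, { id: '123', name: 'Max', species: 'DOG' }, { event: 'pet' });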

Streaming Pet Listings

Let's stream pet listings from a database.

Server Implementation (Node.js)

const express = require('express');
const app = express();

app.get('/v1/pets/stream', async (req, res) => {
  // Set SSE headers
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.setHeader('X-Accel-Buffering', 'no'); // Disable nginx buffering
  res.flushHeaders(); // Send headers immediately so the client's onopen fires

  // Parse filters
  const filters = {
    species: req.query.species,
    status: req.query.status,
  };

  try {
    // Stream pets from database
    const stream = db.pets.stream(filters);

    stream.on('data', (pet) => {
      // Send each pet as an SSE event
      res.write(`data: ${JSON.stringify(pet)}\n\n`);
    });

    stream.on('end', () => {
      // Send completion event
      res.write('data: {"done":true}\n\n');
      res.end();
    });

    stream.on('error', (error) => {
      console.error('Stream error:', error);
      res.write(`data: {"error":"${error.message}"}\n\n`);
      res.end();
    });

    // Handle client disconnect
    req.on('close', () => {
      stream.destroy();
      res.end();
    });
  } catch (error) {
    res.write(`data: {"error":"${error.message}"}\n\n`);
    res.end();
  }
});
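
The db.pets.stream call above is a placeholder for whatever your database client provides. Many clients expose async iterators rather than Node event streams; with one of those, the handler body collapses to a loop (a sketch, assuming a hypothetical cursor-style db.pets.iterate):

// SSE headers as above, then:
let clientGone = false;
req.on('close', () => { clientGone = true; });

for await (const pet of db.pets.iterate(filters)) {
  if (clientGone) break; // stop querying once the client disconnects
  res.write(`data: ${JSON.stringify(pet)}\n\n`);
}

res.write('data: {"done":true}\n\n');
res.end();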

Client Implementation

function streamPets(filters = {}) {
  const params = new URLSearchParams(filters);
  const eventSource = new EventSource(`/v1/pets/stream?${params}`);

  const pets = [];

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);

    if (data.done) {
      console.log('Stream complete:', pets.length, 'pets received');
      eventSource.close();
      return;
    }

    if (data.error) {
      console.error('Stream error:', data.error);
      eventSource.close();
      return;
    }

    // Add pet to list and display
    pets.push(data);
    displayPet(data);
  };

  eventSource.onerror = (error) => {
    console.error('Connection error:', error);
    eventSource.close();
  };

  return eventSource;
}

// Usage
const stream = streamPets({ species: 'DOG', status: 'AVAILABLE' });

// Stop streaming
// stream.close();

Streaming AI-Generated Content

Stream AI responses token by token. EventSource only supports GET requests, so this endpoint accepts a POST body and the client reads the stream with fetch instead.

Server Implementation

app.post('/v1/ai/chat/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  // Requires app.use(express.json()) so req.body is populated
  const { messages, model = 'PET_ADVISOR_1' } = req.body;

  try {
    // Call AI API with streaming
    const stream = await ai.chat.completions.create({
      model: model,
      messages: messages,
      stream: true,
    });

    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content;

      if (content) {
        // Send each token
        res.write(`data: ${JSON.stringify({ content })}\n\n`);
      }
    }

    // Send completion
    res.write('data: {"done":true}\n\n');
    res.end();
  } catch (error) {
    res.write(`data: {"error":"${error.message}"}\n\n`);
    res.end();
  }
});

Client Implementation

async function streamAIChat(messages) {
  const response = await fetch('/v1/ai/chat/stream', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ messages }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();

    if (done) break;

    buffer += decoder.decode(value, { stream: true });

    // Process complete events
    const events = buffer.split('\n\n');
    buffer = events.pop(); // Keep incomplete event in buffer

    for (const event of events) {
      if (!event.trim()) continue;

      const line = event.replace(/^data: /, '');
      const data = JSON.parse(line);

      if (data.done) {
        console.log('Stream complete');
        return;
      }

      if (data.error) {
        console.error('Error:', data.error);
        return;
      }

      if (data.content) {
        // Append token to output
        appendToken(data.content);
      }
    }
  }
}

// Usage
streamAIChat([
  { role: 'user', content: 'What breed of dog is best for apartments?' }
]);
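
The appendToken function is whatever your UI needs; a minimal DOM version (assuming a hypothetical element with id "output") is just:

function appendToken(token) {
  // Append each token to the visible output as it arrives
  document.getElementById('output').textContent += token;
}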

Streaming Progress Updates

Stream progress for long-running operations.

Server Implementation

app.post('/v1/reports/generate/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  const { reportType, filters } = req.body;

  try {
    // Generate report with progress updates
    const generator = generateReport(reportType, filters);

    for await (const progress of generator) {
      res.write(`data: ${JSON.stringify(progress)}\n\n`);
    }

    res.end();
  } catch (error) {
    res.write(`data: {"error":"${error.message}"}\n\n`);
    res.end();
  }
});

async function* generateReport(type, filters) {
  yield { stage: 'fetching', percent: 0 };

  const data = await fetchData(filters);
  yield { stage: 'fetching', percent: 30 };

  yield { stage: 'processing', percent: 30 };
  const processed = await processData(data);
  yield { stage: 'processing', percent: 60 };

  yield { stage: 'generating', percent: 60 };
  const report = await generatePDF(processed);
  yield { stage: 'generating', percent: 90 };

  yield { stage: 'uploading', percent: 90 };
  const url = await uploadReport(report);
  yield { stage: 'complete', percent: 100, url };
}

Client Implementation

async function generateReportWithProgress(reportType, filters) {
  const response = await fetch('/v1/reports/generate/stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ reportType, filters }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const events = buffer.split('\n\n');
    buffer = events.pop();

    for (const event of events) {
      if (!event.trim()) continue;

      const line = event.replace(/^data: /, '');
      const progress = JSON.parse(line);

      // Update UI
      updateProgressBar(progress.percent);
      updateStageLabel(progress.stage);

      if (progress.stage === 'complete') {
        console.log('Report ready:', progress.url);
        downloadReport(progress.url);
      }
    }
  }
}
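
Both fetch-based clients above repeat the same buffer-splitting loop. An async generator factors it out (a sketch; readSSE is our name):

// Parse a fetch response body into one JSON payload per SSE event
async function* readSSE(response) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const events = buffer.split('\n\n');
    buffer = events.pop(); // keep any incomplete event

    for (const event of events) {
      if (!event.trim()) continue;
      yield JSON.parse(event.replace(/^data: /, ''));
    }
  }
}

// Usage
// for await (const data of readSSE(response)) { updateProgressBar(data.percent); }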

SSE Best Practices

1. Set Proper Headers

res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
res.setHeader('X-Accel-Buffering', 'no'); // Disable nginx buffering

2. Handle Client Disconnects

req.on('close', () => {
  // Clean up resources
  stream.destroy();
  res.end();
});

3. Send Heartbeats

Keep connections alive with periodic comments. Lines starting with a colon are ignored by EventSource but stop proxies and load balancers from closing an idle connection:

const heartbeat = setInterval(() => {
  res.write(': heartbeat\n\n');
}, 30000); // Every 30 seconds

req.on('close', () => {
  clearInterval(heartbeat);
});

4. Use Event IDs for Resumption

let eventId = 0;

stream.on('data', (data) => {
  eventId++;
  res.write(`id: ${eventId}\n`);
  res.write(`data: ${JSON.stringify(data)}\n\n`);
});

Clients can resume from the last event. On automatic reconnects the browser resends the last received ID in a Last-Event-ID header; to resume across sessions, persist the ID yourself:

const lastEventId = localStorage.getItem('lastEventId');
const eventSource = new EventSource(`/stream?lastEventId=${lastEventId}`);

// Store each ID so a later session can pick up where this one stopped
eventSource.onmessage = (event) => localStorage.setItem('lastEventId', event.lastEventId);
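
On the server, resumption means replaying anything newer than the client's last ID before streaming live events (a sketch, assuming a hypothetical recentEvents buffer):

app.get('/stream', (req, res) => {
  // SSE headers as above, then:
  // the browser resends the ID in a Last-Event-ID header on auto-reconnect;
  // the query parameter covers brand-new sessions
  const lastId = Number(req.headers['last-event-id'] || req.query.lastEventId || 0);

  for (const { id, data } of recentEvents.filter((e) => e.id > lastId)) {
    res.write(`id: ${id}\n`);
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  }

  // ...then continue streaming live events
});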

5. Implement Backpressure

Don't overwhelm clients:

stream.on('data', (data) => {
  const canWrite = res.write(`data: ${JSON.stringify(data)}\n\n`);

  if (!canWrite) {
    stream.pause();
    res.once('drain', () => stream.resume());
  }
});
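
The pause/resume dance can also be delegated to Node's pipeline utility by formatting events in a Transform stream (a sketch, reusing the article's hypothetical db.pets.stream):

const { pipeline, Transform } = require('stream');

pipeline(
  db.pets.stream(filters),
  new Transform({
    writableObjectMode: true, // accept pet objects in, emit SSE strings out
    transform(pet, _encoding, callback) {
      callback(null, `data: ${JSON.stringify(pet)}\n\n`);
    },
  }),
  res,
  (error) => {
    if (error) console.error('Stream failed:', error);
  }
);

pipeline propagates backpressure between the stages and destroys all of them if any stage fails or the client disconnects.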

When to Use SSE

Use SSE when:
- You need server-to-client streaming
- You want automatic reconnection
- You're streaming text data
- You need HTTP compatibility

Don't use SSE when:
- You need bidirectional communication (use WebSocket)
- You're streaming binary data (SSE is text-only; use chunked transfer encoding)
- You need very low-latency, high-frequency messaging (WebSocket has less per-message overhead)

Conclusion

SSE makes streaming simple. It uses standard HTTP, reconnects automatically, and works in every modern browser.

Use it for:
- Large dataset delivery
- AI content generation
- Progress updates
- Real-time monitoring

The Modern PetStore API uses SSE for streaming pet listings, AI chat responses, and report generation progress.

Start streaming your API responses today.