Back to Blog

Next.js Streaming Responses: Server-Sent Events, AI Response Streaming, and Route Handlers

Stream responses from Next.js route handlers using Server-Sent Events. Covers ReadableStream setup, AI completion streaming with the Vercel AI SDK, chunked transfer encoding, client-side EventSource, and abort handling.

Viprasol Tech Team
May 12, 2027
12 min read

Streaming is how modern AI UIs feel fast: the response appears token by token instead of the user staring at a spinner for five seconds. But streaming is useful beyond AI; real-time log tailing, live CSV export progress, and notification feeds all benefit from the same pattern.

Next.js route handlers support streaming natively via ReadableStream. Server-Sent Events (SSE) give you a standardized protocol over a regular HTTP connection: no WebSocket handshake required, it works through proxies, and the browser auto-reconnects on disconnect.
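On the wire, each SSE event is just text: an optional event: line (and optionally an id: line), one or more data: lines, and a terminating blank line. Multi-line payloads become multiple data: lines that EventSource rejoins with "\n". A small formatter (a hypothetical helper, not part of any library) makes the format concrete:

```typescript
// sse-format.ts — illustrates the SSE wire format.
// Each frame: optional "event:" / "id:" lines, then "data:" lines,
// terminated by a blank line.
export function formatSSE(data: string, event?: string, id?: string): string {
  let frame = "";
  if (event) frame += `event: ${event}\n`;
  if (id) frame += `id: ${id}\n`;
  // Split multi-line payloads into multiple "data:" lines,
  // which the browser rejoins with "\n" on receipt.
  for (const line of data.split("\n")) {
    frame += `data: ${line}\n`;
  }
  return frame + "\n";
}
```

For example, formatSSE('{"step":1}', "progress") produces `event: progress\ndata: {"step":1}\n\n`, which fires a "progress" listener on the client rather than the default onmessage.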

Basic SSE Route Handler

// app/api/stream/route.ts
import { NextRequest } from "next/server";

export const runtime = "edge"; // Optional: run at edge for lower latency

export async function GET(req: NextRequest) {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    async start(controller) {
      // SSE format: each event is "data: <payload>\n\n"
      function send(data: string, event?: string) {
        let chunk = "";
        if (event) chunk += `event: ${event}\n`;
        chunk += `data: ${data}\n\n`;
        controller.enqueue(encoder.encode(chunk));
      }

      try {
        // Simulate streaming data (replace with real logic)
        for (let i = 1; i <= 5; i++) {
          send(JSON.stringify({ step: i, message: `Processing step ${i}` }));
          await new Promise((r) => setTimeout(r, 500));
        }
        send(JSON.stringify({ done: true }), "complete");
      } catch (err) {
        send(JSON.stringify({ error: "Stream failed" }), "error");
      } finally {
        controller.close();
      }
    },

    cancel() {
      // Client disconnected: clean up any resources (timers, DB handles, etc.)
      console.log("Stream cancelled by client");
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type":  "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      "Connection":    "keep-alive",
      // Allow CORS for cross-origin SSE (if needed)
      "Access-Control-Allow-Origin": "*",
    },
  });
}
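One limitation worth knowing: EventSource only issues GET requests and can't send custom headers or a body. To consume the same stream from a POST endpoint, read the response body with fetch and parse frames incrementally. Here is a sketch of such a parser (a hypothetical helper; it assumes "\n" line endings and handles only the event: and data: fields):

```typescript
// sse-parser.ts — incremental parser for "data: ...\n\n" frames.
// Feed it decoded chunks as they arrive; it returns completed events
// and buffers any partial frame until the next chunk.
export function createSSEParser() {
  let buffer = "";
  return function feed(chunk: string): { event: string; data: string }[] {
    buffer += chunk;
    const events: { event: string; data: string }[] = [];
    let sep: number;
    // A blank line ("\n\n") terminates each event
    while ((sep = buffer.indexOf("\n\n")) !== -1) {
      const raw = buffer.slice(0, sep);
      buffer = buffer.slice(sep + 2);
      let event = "message";
      const data: string[] = [];
      for (const line of raw.split("\n")) {
        if (line.startsWith("event: ")) event = line.slice(7);
        else if (line.startsWith("data: ")) data.push(line.slice(6));
      }
      if (data.length) events.push({ event, data: data.join("\n") });
    }
    return events;
  };
}
```

Pair it with `res.body.getReader()` and a TextDecoder, passing each decoded chunk to `feed()` and dispatching the returned events.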

Client-Side EventSource Consumer

// hooks/use-event-source.ts
"use client";

import { useEffect, useRef, useCallback, useState } from "react";

interface UseEventSourceOptions {
  url:          string;
  enabled?:     boolean;
  onMessage?:   (data: string) => void;
  onError?:     (event: Event) => void;
  onOpen?:      () => void;
  // Named event handlers
  events?:      Record<string, (data: string) => void>;
}

export function useEventSource({
  url,
  enabled = true,
  onMessage,
  onError,
  onOpen,
  events = {},
}: UseEventSourceOptions) {
  const [status, setStatus] = useState<"connecting" | "open" | "closed">("closed");
  const esRef = useRef<EventSource | null>(null);

  const close = useCallback(() => {
    if (esRef.current) {
      esRef.current.close();
      esRef.current = null;
      setStatus("closed");
    }
  }, []);

  useEffect(() => {
    if (!enabled) return;

    const es = new EventSource(url);
    esRef.current = es;
    setStatus("connecting");

    es.onopen = () => {
      setStatus("open");
      onOpen?.();
    };

    es.onmessage = (event) => {
      onMessage?.(event.data);
    };

    es.onerror = (event) => {
      setStatus("closed");
      onError?.(event);
      // EventSource auto-reconnects unless you close it
    };

    // Named event listeners (cast needed: TS types custom SSE events as plain Event)
    Object.entries(events).forEach(([eventName, handler]) => {
      es.addEventListener(eventName, (e) => handler((e as MessageEvent).data));
    });

    return () => {
      es.close();
      setStatus("closed");
    };
    // Handlers are intentionally excluded from deps so the connection
    // isn't torn down on every render; keep them referentially stable.
  }, [url, enabled]);

  return { status, close };
}

🌐 Looking for a Dev Team That Actually Delivers?

Most agencies sell you a project manager and assign juniors. Viprasol is different: senior engineers only, direct Slack access, and a 5.0★ Upwork record across 100+ projects.

  • React, Next.js, Node.js, TypeScript: production-grade stack
  • Fixed-price contracts with no surprise invoices
  • Full source code ownership from day one
  • 90-day post-launch support included

AI Completion Streaming with Vercel AI SDK

// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { NextRequest } from "next/server";
import { auth } from "@/auth";
import { prisma } from "@/lib/prisma";
import { z } from "zod";

const RequestSchema = z.object({
  messages: z.array(
    z.object({
      role:    z.enum(["user", "assistant", "system"]),
      content: z.string().max(10000),
    })
  ).min(1).max(50),
});

export async function POST(req: NextRequest) {
  const session = await auth();
  if (!session?.user) {
    return new Response("Unauthorized", { status: 401 });
  }

  const body = await req.json();
  const parsed = RequestSchema.safeParse(body);
  if (!parsed.success) {
    return new Response("Invalid request", { status: 400 });
  }

  const result = await streamText({
    model: openai("gpt-4o-mini"),
    // Keep the template literal on one line: indented continuation lines
    // would bake leading whitespace into the prompt
    system: `You are a helpful assistant for ${session.user.name}. Today is ${new Date().toISOString().split("T")[0]}.`,
    messages: parsed.data.messages,
    maxTokens: 2000,
    temperature: 0.7,
    // Called when streaming completes; good for logging/billing
    onFinish: async ({ text, usage }) => {
      // Log to database for billing/analytics
      await logAIUsage({
        userId:     session.user.id,
        model:      "gpt-4o-mini",
        inputTokens:  usage.promptTokens,
        outputTokens: usage.completionTokens,
        content:    text,
      });
    },
  });

  // toDataStreamResponse() returns the streaming response in Vercel AI SDK format
  return result.toDataStreamResponse();
}

async function logAIUsage(params: {
  userId: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  content: string;
}) {
  // Insert into ai_usage table for cost tracking
  // Cost: gpt-4o-mini = $0.15/1M input tokens, $0.60/1M output tokens
  // Note: per-call Math.round floors sub-cent requests to 0 cents;
  // track a finer unit if per-request precision matters.
  const costUsdCents =
    Math.round((params.inputTokens / 1_000_000) * 15) +
    Math.round((params.outputTokens / 1_000_000) * 60);

  await prisma.aiUsageLog.create({ data: { ...params, costUsdCents } });
}
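Whole-cent rounding floors most small completions to zero, so daily totals drift low. One way to keep per-request precision (a sketch under the same pricing figures as above; the helper name and micro-dollar unit are my own) is to store cost in micro-dollars:

```typescript
// Cost per token in micro-dollars (1 USD = 1,000,000 micro-dollars):
// $0.15 / 1M input tokens  = 0.15 µ$ per input token
// $0.60 / 1M output tokens = 0.60 µ$ per output token
const INPUT_MICRO_USD_PER_TOKEN = 0.15;
const OUTPUT_MICRO_USD_PER_TOKEN = 0.6;

// Returns an integer micro-dollar amount, safe to sum in a DB column.
export function costMicroUsd(inputTokens: number, outputTokens: number): number {
  return Math.round(
    inputTokens * INPUT_MICRO_USD_PER_TOKEN +
      outputTokens * OUTPUT_MICRO_USD_PER_TOKEN
  );
}
```

A request of 10,000 input and 2,000 output tokens costs 2,700 µ$ (about $0.0027), where the whole-cent version would log 0; convert to cents only when displaying aggregated totals.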

Chat UI with useChat Hook

// components/chat/chat-window.tsx
"use client";

import { useChat } from "ai/react";
import { useRef, useEffect } from "react";
import { Send, Loader2 } from "lucide-react";

export function ChatWindow() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, stop } =
    useChat({
      api:             "/api/chat",
      initialMessages: [],
      onError: (err) => {
        console.error("Chat error:", err);
      },
    });

  const bottomRef = useRef<HTMLDivElement>(null);

  // Auto-scroll to bottom as new tokens arrive
  useEffect(() => {
    bottomRef.current?.scrollIntoView({ behavior: "smooth" });
  }, [messages]);

  return (
    <div className="flex flex-col h-[600px] bg-white rounded-2xl border border-gray-200">
      {/* Message list */}
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.length === 0 && (
          <p className="text-sm text-gray-400 text-center pt-8">
            Ask me anything…
          </p>
        )}
        {messages.map((msg) => (
          <div
            key={msg.id}
            className={`flex ${msg.role === "user" ? "justify-end" : "justify-start"}`}
          >
            <div
              className={`
                max-w-[80%] px-4 py-3 rounded-2xl text-sm
                ${msg.role === "user"
                  ? "bg-blue-600 text-white rounded-br-sm"
                  : "bg-gray-100 text-gray-900 rounded-bl-sm"}
              `}
            >
              {/* Render markdown-like content */}
              <p className="whitespace-pre-wrap">{msg.content}</p>
            </div>
          </div>
        ))}

        {/* Streaming indicator */}
        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-gray-100 px-4 py-3 rounded-2xl rounded-bl-sm">
              <Loader2 className="w-4 h-4 animate-spin text-gray-400" />
            </div>
          </div>
        )}

        <div ref={bottomRef} />
      </div>

      {/* Input */}
      <form
        onSubmit={handleSubmit}
        className="flex items-center gap-2 p-4 border-t border-gray-100"
      >
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Message…"
          disabled={isLoading}
          className="flex-1 text-sm border border-gray-200 rounded-xl px-4 py-2.5 focus:outline-none focus:ring-2 focus:ring-blue-500 disabled:opacity-50"
        />
        {isLoading ? (
          <button
            type="button"
            onClick={stop}
            className="px-4 py-2.5 bg-gray-100 text-gray-600 text-sm font-medium rounded-xl hover:bg-gray-200"
          >
            Stop
          </button>
        ) : (
          <button
            type="submit"
            disabled={!input.trim()}
            className="p-2.5 bg-blue-600 text-white rounded-xl hover:bg-blue-700 disabled:opacity-50"
          >
            <Send className="w-4 h-4" />
          </button>
        )}
      </form>
    </div>
  );
}

🚀 Senior Engineers. No Junior Handoffs. Ever.

You get the senior developer, not a project manager who relays your requirements to someone you never meet. Every Viprasol project has a senior lead from kickoff to launch.

  • MVPs in 4–8 weeks, full platforms in 3–5 months
  • Lighthouse 90+ performance scores standard
  • Works across US, UK, AU timezones
  • Free 30-min architecture review, no commitment

Custom SSE: Job Progress Streaming

For long-running jobs (CSV export, report generation):

// app/api/jobs/[jobId]/stream/route.ts
import { NextRequest } from "next/server";
import { auth } from "@/auth";
import { prisma } from "@/lib/prisma";

export async function GET(
  req: NextRequest,
  { params }: { params: { jobId: string } }
) {
  const session = await auth();
  if (!session?.user) return new Response("Unauthorized", { status: 401 });

  const job = await prisma.exportJob.findFirst({
    where: { id: params.jobId, workspaceId: session.user.workspaceId },
  });
  if (!job) return new Response("Not found", { status: 404 });

  const encoder = new TextEncoder();
  let interval: NodeJS.Timeout;
  let closed = false;

  const stream = new ReadableStream({
    start(controller) {
      function send(data: object) {
        // Guard against enqueueing after close/cancel: the async poll
        // below may resolve after the client has disconnected.
        if (closed) return;
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(data)}\n\n`));
      }

      function finish() {
        if (closed) return;
        closed = true;
        clearInterval(interval);
        controller.close();
      }

      // Poll job status every second
      interval = setInterval(async () => {
        const current = await prisma.exportJob.findUnique({
          where:  { id: params.jobId },
          select: { status: true, progress: true, downloadUrl: true, error: true },
        });

        if (!current) {
          finish();
          return;
        }

        send({ status: current.status, progress: current.progress });

        if (current.status === "completed") {
          send({ status: "completed", downloadUrl: current.downloadUrl });
          finish();
        } else if (current.status === "failed") {
          send({ status: "failed", error: current.error });
          finish();
        }
      }, 1000);
    },

    cancel() {
      // Client disconnected
      closed = true;
      clearInterval(interval);
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type":  "text/event-stream",
      "Cache-Control": "no-cache",
      "Connection":    "keep-alive",
    },
  });
}

// components/export-progress.tsx
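Proxies and load balancers often kill connections that sit idle for 30–60 seconds, which matters when a job poll sends nothing new for a while. The SSE protocol allows comment lines (any line starting with ":"), which EventSource silently ignores, so a periodic heartbeat keeps the connection alive without touching your event handlers. A sketch (hypothetical helpers; the 15-second default is an assumption, tune it to your infrastructure):

```typescript
// heartbeat.ts — SSE comment frames for proxy keep-alive.
// Lines beginning with ":" are ignored by the EventSource client.
export function heartbeatFrame(): string {
  return `: ping ${Date.now()}\n\n`;
}

// Wire this up inside ReadableStream start(), next to the status poll;
// call the returned function from cancel() to stop the timer.
export function startHeartbeat(
  enqueue: (chunk: string) => void,
  intervalMs = 15_000
): () => void {
  const timer = setInterval(() => enqueue(heartbeatFrame()), intervalMs);
  return () => clearInterval(timer);
}
```

In the job-progress route above, `enqueue` would be a closure over `controller.enqueue` plus the TextEncoder, the same shape as its `send()` helper.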
"use client";

import { useState, useEffect } from "react";
import { useEventSource } from "@/hooks/use-event-source";

export function ExportProgress({ jobId }: { jobId: string }) {
  const [progress, setProgress]     = useState(0);
  const [status, setStatus]         = useState("processing");
  const [downloadUrl, setDownloadUrl] = useState<string | null>(null);
  const [enabled, setEnabled]       = useState(true);

  const { close } = useEventSource({
    url:     `/api/jobs/${jobId}/stream`,
    enabled,
    onMessage(data) {
      const parsed = JSON.parse(data);
      // Functional update avoids reading a stale `progress` from the closure
      setProgress((prev) => parsed.progress ?? prev);
      setStatus(parsed.status);
      if (parsed.downloadUrl) {
        setDownloadUrl(parsed.downloadUrl);
        setEnabled(false); // Stop streaming once complete
      }
    },
  });

  return (
    <div className="space-y-3">
      <div className="flex items-center justify-between text-sm text-gray-700">
        <span>Generating export…</span>
        <span>{progress}%</span>
      </div>
      <div className="w-full bg-gray-100 rounded-full h-2">
        <div
          className="h-2 bg-blue-500 rounded-full transition-all duration-300"
          style={{ width: `${progress}%` }}
        />
      </div>
      {downloadUrl && (
        <a
          href={downloadUrl}
          download
          className="block text-center px-4 py-2 bg-blue-600 text-white text-sm font-semibold rounded-lg hover:bg-blue-700"
        >
          Download CSV →
        </a>
      )}
    </div>
  );
}
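When EventSource reconnects after a drop, it replays its last seen id: value in a Last-Event-ID request header, and a retry: line in the stream tells the browser how many milliseconds to wait before reconnecting. The server can use this to resume instead of restarting. A sketch (both helpers and the monotonic-integer id scheme are illustrative, not from the routes above):

```typescript
// Resumable SSE frames: "retry:" sets the reconnect delay, "id:" marks
// the event so the browser can replay it via Last-Event-ID.
export function resumableFrame(
  data: string,
  id: number,
  retryMs?: number
): string {
  let frame = "";
  if (retryMs !== undefined) frame += `retry: ${retryMs}\n`;
  frame += `id: ${id}\ndata: ${data}\n\n`;
  return frame;
}

// Server side: decide which event to start from on (re)connect,
// given req.headers.get("Last-Event-ID").
export function resumeFrom(lastEventIdHeader: string | null): number {
  if (!lastEventIdHeader) return 0; // fresh connection
  const n = Number(lastEventIdHeader);
  return Number.isInteger(n) && n >= 0 ? n + 1 : 0;
}
```

This is most useful for feeds where missed events matter (notifications, logs); for the progress stream above, simply resending the current state on reconnect is usually enough.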

Cost and Timeline Estimates

Scope                                       | Team  | Timeline   | Cost Range
Basic SSE route handler                     | 1 dev | Half a day | $150–300
AI chat with streaming + useChat            | 1 dev | 1–2 days   | $400–800
Job progress streaming (poll-based)         | 1 dev | 1–2 days   | $400–800
+ Abort handling + reconnect logic + typing | 1 dev | 1 day      | $300–600

Working With Viprasol

Streaming responses transform AI integrations from slow to snappy. Our team implements SSE route handlers with proper text/event-stream headers, the Vercel AI SDK streamText + toDataStreamResponse() pattern for AI completions, useChat hook integration on the client, and custom job progress streaming for long-running exports.

What we deliver:

  • ReadableStream SSE route handler with encoder, event format, and cancel cleanup
  • useEventSource hook with status tracking and named event handlers
  • /api/chat route with streamText, auth guard, token logging, and cost tracking
  • ChatWindow with useChat, auto-scroll, stop button, and streaming indicator
  • Job progress SSE with 1-second poll interval, completion/failure detection, and download URL delivery

Talk to our team about your AI feature roadmap →

Or explore our AI and machine learning services.

About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading

Need a Modern Web Application?

From landing pages to complex SaaS platforms: we build it all with Next.js and React.

Free consultation • No commitment • Response within 24 hours

Viprasol · Web Development

Need a custom web application built?

We build React and Next.js web applications with Lighthouse ≥90 scores, mobile-first design, and full source code ownership. Senior engineers only, from architecture through deployment.