WebSockets vs SSE vs Long Polling: Choosing Real-Time Communication for Your App
Compare real-time communication strategies in 2026 — WebSockets for bidirectional messaging, Server-Sent Events for server push, and long polling as a fallback.
Real-time features — live notifications, collaborative editing, chat, live dashboards — all require pushing data from server to client. The choice of protocol matters for both implementation complexity and scaling behavior.
The right choice is often SSE, not WebSockets. WebSockets are more powerful but also more complex to scale and operate.
Quick Decision Guide
| Use | When |
|---|---|
| SSE (Server-Sent Events) | Server → client only; notifications, live feeds, progress updates |
| WebSockets | Bidirectional; chat, collaborative editing, multiplayer, real-time sync |
| Long Polling | Fallback for firewalls/proxies that block WebSockets; simple infrequent updates |
| WebRTC | Peer-to-peer audio/video/data; low latency media |
Server-Sent Events (SSE)
SSE is an HTTP-based protocol — the client makes one HTTP GET request, and the server keeps the connection open, streaming newline-delimited events. It uses the browser's built-in EventSource API.
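To make the framing concrete, here is a rough sketch of how one SSE event block decodes. The browser's `EventSource` does this for you; `parseSSEBlock` is a simplified illustration (it skips comment lines and multi-field edge cases covered by the full spec), not something you'd need in production.

```typescript
// Each SSE event is a block of `field: value` lines terminated by a blank line.
type SSEEvent = { id?: string; event?: string; data: string };

function parseSSEBlock(block: string): SSEEvent {
  const result: SSEEvent = { data: '' };
  const dataLines: string[] = [];
  for (const line of block.split('\n')) {
    const colon = line.indexOf(':');
    if (colon <= 0) continue; // skip comment lines (": ping") and blanks
    const field = line.slice(0, colon);
    const value = line.slice(colon + 1).trimStart();
    if (field === 'id') result.id = value;
    else if (field === 'event') result.event = value;
    else if (field === 'data') dataLines.push(value); // multiple data lines join with \n
  }
  result.data = dataLines.join('\n');
  return result;
}

const parsed = parseSSEBlock('id: 42\nevent: notification\ndata: {"title":"hi"}');
// parsed.event === 'notification', parsed.data === '{"title":"hi"}'
```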
When SSE is the right choice:
- Notifications (new messages, order updates, alerts)
- Live dashboards (metrics, analytics)
- Background job progress
- Activity feeds
- Anything where data flows server → client
SSE advantages over WebSockets for these cases:
- Works over standard HTTP/2 (multiplexed with other requests)
- Automatic reconnection built into EventSource
- No need for a separate WebSocket server
- Works through HTTP proxies and load balancers that might block WebSocket upgrades
Server implementation (Fastify):
```typescript
// server/routes/events.ts
import type { FastifyInstance } from 'fastify';

export async function eventsRoutes(app: FastifyInstance) {
  app.get('/api/events', {
    schema: {
      querystring: { type: 'object', properties: { lastEventId: { type: 'string' } } },
    },
  }, async (request, reply) => {
    const userId = request.user.id; // Populated by auth middleware (e.g. from the token query param — EventSource can't set headers)
    const { lastEventId } = request.query as { lastEventId?: string };

    // SSE headers
    reply.raw.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
      'X-Accel-Buffering': 'no', // Disable Nginx buffering
    });

    // Replay any events missed since lastEventId
    if (lastEventId) {
      const missed = await getMissedEvents(userId, lastEventId);
      for (const event of missed) {
        reply.raw.write(formatSSEEvent(event));
      }
    }

    // Send an initial comment to confirm the connection
    reply.raw.write(': ping\n\n');

    // Subscribe to events for this user
    const unsubscribe = eventBus.subscribe(userId, (event) => {
      reply.raw.write(formatSSEEvent(event));
    });

    // Keepalive comment every 30 seconds (proxies close idle connections)
    const keepalive = setInterval(() => {
      reply.raw.write(': keepalive\n\n');
    }, 30_000);

    // Clean up on disconnect, and keep the connection open — never call reply.send()
    await new Promise<void>((resolve) => {
      request.socket.on('close', () => {
        unsubscribe();
        clearInterval(keepalive);
        resolve();
      });
    });
  });
}

function formatSSEEvent(event: { id: string; type: string; data: unknown }): string {
  return [
    `id: ${event.id}`,
    `event: ${event.type}`,
    `data: ${JSON.stringify(event.data)}`,
    '', // Blank line terminates the event
    '',
  ].join('\n');
}
```
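The `getMissedEvents` helper referenced in the route isn't shown; one possible backing store is a bounded in-memory buffer of recent events per user, keyed by event id (a sketch — in production you'd likely use Redis Streams or a database so replay survives restarts).

```typescript
// Hypothetical in-memory store backing getMissedEvents: keep the last N events
// per user so reconnecting clients can replay from Last-Event-ID.
type StoredEvent = { id: string; type: string; data: unknown };

const MAX_BUFFER = 100;
const recentEvents = new Map<string, StoredEvent[]>();

function recordEvent(userId: string, event: StoredEvent): void {
  const buffer = recentEvents.get(userId) ?? [];
  buffer.push(event);
  if (buffer.length > MAX_BUFFER) buffer.shift(); // drop the oldest
  recentEvents.set(userId, buffer);
}

function getMissedEvents(userId: string, lastEventId: string): StoredEvent[] {
  const buffer = recentEvents.get(userId) ?? [];
  const index = buffer.findIndex((e) => e.id === lastEventId);
  // Unknown id (evicted or bogus): replay nothing rather than everything
  return index === -1 ? [] : buffer.slice(index + 1);
}
```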
Client implementation:
```typescript
// lib/sse.ts
export function createSSEConnection(token: string) {
  // EventSource doesn't support custom headers — pass the token as a query param
  // (or use cookie-based auth, which EventSource sends automatically)
  const url = `/api/events?token=${encodeURIComponent(token)}`;
  const es = new EventSource(url);

  es.addEventListener('notification', (event) => {
    const notification = JSON.parse(event.data);
    showNotification(notification);
  });

  es.addEventListener('order_update', (event) => {
    const order = JSON.parse(event.data);
    updateOrderInStore(order);
  });

  es.onopen = () => console.log('SSE connected');
  es.onerror = (e) => console.warn('SSE error', e);
  // EventSource auto-reconnects on error — no manual retry needed

  return () => es.close(); // Cleanup function
}

// React hook
export function useSSE() {
  const { token } = useAuth();
  useEffect(() => {
    if (!token) return;
    return createSSEConnection(token); // Cleanup closes the connection on unmount
  }, [token]);
}
```
🌐 Looking for a Dev Team That Actually Delivers?
Most agencies sell you a project manager and assign juniors. Viprasol is different — senior engineers only, direct Slack access, and a 5.0★ Upwork record across 100+ projects.
- React, Next.js, Node.js, TypeScript — production-grade stack
- Fixed-price contracts — no surprise invoices
- Full source code ownership from day one
- 90-day post-launch support included
WebSockets
WebSockets provide a persistent, full-duplex TCP connection. Both client and server can send messages at any time.
When WebSockets are necessary:
- Chat (client sends messages, server distributes)
- Collaborative document editing (operational transforms, CRDT sync)
- Multiplayer games (low latency, bidirectional)
- Presence/cursor sharing (many clients, bidirectional updates)
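Unlike EventSource, WebSocket clients must handle reconnection themselves. A common policy is exponential backoff with a cap and jitter; here's a small sketch of the delay calculation (the constants are illustrative, not prescriptive).

```typescript
// Delay before the Nth reconnect attempt: exponential growth, capped, with
// "full jitter" so many clients don't reconnect in lockstep after an outage.
function reconnectDelay(attempt: number, baseMs = 1_000, capMs = 30_000): number {
  const exp = Math.min(baseMs * 2 ** attempt, capMs);
  return Math.floor(Math.random() * exp); // uniform in [0, exp)
}
```

A reconnect loop would call `reconnectDelay(attempt)` after each failed connection, resetting `attempt` to 0 once a connection stays open.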
Server (Fastify + ws):
```typescript
// server/websocket.ts
import Fastify from 'fastify';
import fastifyWebsocket from '@fastify/websocket';
import type { WebSocket } from 'ws';

const app = Fastify();
await app.register(fastifyWebsocket);

// Connection registry: userId → Set<WebSocket>
const connections = new Map<string, Set<WebSocket>>();

app.get('/ws', { websocket: true }, async (socket, request) => {
  const token = new URL(request.url!, 'http://x').searchParams.get('token');
  const user = token ? await verifyJWT(token) : null;
  if (!user) {
    socket.close(4001, 'Unauthorized');
    return;
  }

  // Register the connection
  if (!connections.has(user.id)) connections.set(user.id, new Set());
  connections.get(user.id)!.add(socket);

  // Heartbeat: detect and close stale connections
  const heartbeat = setInterval(() => {
    if (socket.readyState !== socket.OPEN) {
      clearInterval(heartbeat);
      return;
    }
    socket.ping();
  }, 30_000);

  socket.on('message', async (raw) => {
    let message;
    try {
      message = JSON.parse(raw.toString());
    } catch {
      return; // Ignore malformed frames
    }
    switch (message.type) {
      case 'chat:send': {
        const saved = await saveMessage(user.id, message.payload);
        // Broadcast to all participants
        for (const participantId of message.payload.participants) {
          broadcastToUser(participantId, { type: 'chat:message', data: saved });
        }
        break;
      }
      case 'presence:update': {
        // broadcastToRoom (not shown) mirrors broadcastToUser, keyed by room membership
        broadcastToRoom(message.payload.roomId, {
          type: 'presence:cursor',
          data: { userId: user.id, ...message.payload.cursor },
        }, user.id);
        break;
      }
    }
  });

  socket.on('close', () => {
    connections.get(user.id)?.delete(socket);
    if (connections.get(user.id)?.size === 0) connections.delete(user.id);
    clearInterval(heartbeat);
  });
});

function broadcastToUser(userId: string, message: unknown) {
  const userSockets = connections.get(userId);
  if (!userSockets) return;
  const payload = JSON.stringify(message);
  for (const ws of userSockets) {
    if (ws.readyState === ws.OPEN) ws.send(payload);
  }
}
```
Long Polling
Long polling holds the HTTP request open until an event occurs (or a timeout). It's the fallback when WebSockets are blocked by corporate proxies.
```typescript
// server/routes/poll.ts
app.get('/api/poll', async (request, reply) => {
  const userId = request.user.id;
  const { cursor } = request.query as { cursor: string };

  // Check for immediately available events
  const immediate = await getNewEvents(userId, cursor);
  if (immediate.length > 0) {
    return reply.send({ events: immediate, cursor: immediate[immediate.length - 1].id });
  }

  // Wait up to 30 seconds for a new event
  const event = await waitForEvent(userId, 30_000);
  if (event) {
    return reply.send({ events: [event], cursor: event.id });
  }

  // Timeout — client should reconnect immediately
  return reply.send({ events: [], cursor });
});
```

```typescript
// Client
async function longPoll(cursor: string): Promise<void> {
  try {
    const response = await fetch(`/api/poll?cursor=${cursor}`);
    const { events, cursor: newCursor } = await response.json();
    for (const event of events) processEvent(event);
    // Reconnect immediately (async recursion doesn't grow the call stack)
    longPoll(newCursor);
  } catch (e) {
    // Reconnect after a backoff
    await sleep(2000);
    longPoll(cursor);
  }
}
```
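The `waitForEvent` helper used by the route isn't shown above; one minimal single-process sketch uses an `EventEmitter` and races the next event against a timeout (in a multi-instance deployment this would sit on top of the Redis Pub/Sub bus described below — `pollBus` here is a hypothetical local stand-in).

```typescript
import { EventEmitter } from 'node:events';

// Hypothetical local event source for the sketch
const pollBus = new EventEmitter();

// Resolve with the next event published for the user, or null after timeoutMs
function waitForEvent(userId: string, timeoutMs: number): Promise<unknown> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      pollBus.off(`user:${userId}`, onEvent);
      resolve(null);
    }, timeoutMs);
    function onEvent(event: unknown) {
      clearTimeout(timer);
      resolve(event);
    }
    pollBus.once(`user:${userId}`, onEvent);
  });
}
```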
🚀 Senior Engineers. No Junior Handoffs. Ever.
You get the senior developer, not a project manager who relays your requirements to someone you never meet. Every Viprasol project has a senior lead from kickoff to launch.
- MVPs in 4–8 weeks, full platforms in 3–5 months
- Lighthouse 90+ performance scores standard
- Works across US, UK, AU timezones
- Free 30-min architecture review, no commitment
Scaling Real-Time Connections
The problem: with multiple server instances, a WebSocket or SSE connection lands on one server. An event for that user must reach their specific server.
Solution: Redis Pub/Sub as the message bus:
```typescript
// lib/event-bus.ts
import { Redis } from 'ioredis';

const publisher = new Redis(process.env.REDIS_URL!);
const subscriber = new Redis(process.env.REDIS_URL!);

// Callbacks registered on this server instance, keyed by channel
const localSubscribers = new Map<string, Set<(event: unknown) => void>>();

subscriber.on('message', (channel, message) => {
  const subscribers = localSubscribers.get(channel);
  if (!subscribers) return;
  const event = JSON.parse(message);
  for (const cb of subscribers) cb(event);
});

export const eventBus = {
  // Publish an event — Redis distributes it to all server instances
  async publish(userId: string, event: unknown): Promise<void> {
    await publisher.publish(`user:${userId}`, JSON.stringify(event));
  },

  // Subscribe on this server instance (called when an SSE/WS connection opens)
  subscribe(userId: string, callback: (event: unknown) => void): () => void {
    const channel = `user:${userId}`;
    if (!localSubscribers.has(channel)) {
      localSubscribers.set(channel, new Set());
      subscriber.subscribe(channel);
    }
    localSubscribers.get(channel)!.add(callback);
    return () => {
      localSubscribers.get(channel)?.delete(callback);
      if (localSubscribers.get(channel)?.size === 0) {
        localSubscribers.delete(channel);
        subscriber.unsubscribe(channel);
      }
    };
  },
};
```
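For local development and tests, the same publish/subscribe interface can be backed by a plain in-process map instead of Redis. This is a hypothetical stand-in (not part of the original), but keeping the interface identical means route handlers don't need to know which implementation they're using.

```typescript
// Single-process stand-in for the Redis-backed eventBus, same interface
type Callback = (event: unknown) => void;
const channels = new Map<string, Set<Callback>>();

const inMemoryEventBus = {
  async publish(userId: string, event: unknown): Promise<void> {
    for (const cb of channels.get(`user:${userId}`) ?? []) cb(event);
  },
  subscribe(userId: string, callback: Callback): () => void {
    const channel = `user:${userId}`;
    if (!channels.has(channel)) channels.set(channel, new Set());
    channels.get(channel)!.add(callback);
    return () => {
      channels.get(channel)?.delete(callback);
      if (channels.get(channel)?.size === 0) channels.delete(channel);
    };
  },
};
```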
Comparison Table
| | SSE | WebSockets | Long Polling |
|---|---|---|---|
| Direction | Server → client | Bidirectional | Server → client |
| Protocol | HTTP/1.1, HTTP/2 | WS (TCP upgrade) | HTTP |
| Auto-reconnect | ✅ Built in | ❌ Manual | ❌ Manual |
| Proxy compatibility | ✅ Works everywhere | ⚠️ Some proxies block | ✅ Works everywhere |
| Scaling | Redis Pub/Sub | Redis Pub/Sub | No persistent connection |
| Complexity | Low | Medium | Low |
| Latency | Low | Very low | Medium (polling interval) |
| Max concurrent (per server) | ~100K (HTTP/2) | ~50K | ~10K |
Working With Viprasol
We build real-time features — SSE notification systems, WebSocket chat and collaboration, Redis Pub/Sub scaling layers, and presence indicators. Real-time is a first-class product feature, not an afterthought.
→ Talk to our team about real-time architecture and implementation.
See Also
- WebSocket Scalability — deeper dive on scaling WebSocket connections
- API Rate Limiting — rate limiting SSE and WebSocket endpoints
- Redis and Caching — Redis Pub/Sub for event distribution
- Node.js API Development — Fastify server setup
- Web Development Services — real-time feature development
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.
Need a Modern Web Application?
From landing pages to complex SaaS platforms — we build it all with Next.js and React.
Free consultation • No commitment • Response within 24 hours