
Redis Use Cases: Caching, Pub/Sub, Rate Limiting, and Session Storage

Redis use cases in 2026 — caching strategies, pub/sub messaging, rate limiting, session storage, distributed locks, and leaderboards with production TypeScript

Viprasol Tech Team
April 27, 2026
12 min read


Redis is an in-memory data structure store that's been part of the production web stack since 2009. It's not just a cache — it's a versatile tool for session storage, pub/sub messaging, rate limiting, distributed locking, leaderboards, and real-time analytics.

This guide covers the patterns that actually get used in production, with working code for each.


Pattern 1: Caching

The most common Redis use case. Cache expensive database queries, API responses, or computed results.

import { createClient } from 'redis';

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

// Cache-aside pattern with automatic expiry
async function getCachedOrFetch<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds: number = 300
): Promise<T> {
  // Try cache first
  const cached = await redis.get(key);
  if (cached) {
    return JSON.parse(cached) as T;
  }

  // Cache miss — fetch from source
  const data = await fetcher();

  // Store with TTL (fire and forget — don't block response)
  redis.setEx(key, ttlSeconds, JSON.stringify(data)).catch(console.error);

  return data;
}

// Usage
async function getProductWithReviews(productId: string) {
  return getCachedOrFetch(
    `product:${productId}:with-reviews`,
    () => db('products')
      .join('reviews', 'reviews.product_id', 'products.id')
      .where('products.id', productId)
      .select('products.*', db.raw('json_agg(reviews.*) as reviews'))
      .groupBy('products.id')
      .first(),
    600  // Cache for 10 minutes
  );
}

// Invalidate on write
async function updateProduct(productId: string, data: Partial<Product>) {
  await db('products').where({ id: productId }).update(data);

  // Invalidate all related cache keys.
  // Note: KEYS scans the entire keyspace and blocks Redis while it runs;
  // at scale, prefer a SCAN-based sweep or tracking keys in a set.
  const keys = await redis.keys(`product:${productId}:*`);
  if (keys.length > 0) {
    await redis.del(keys);
  }
}
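One refinement worth noting: when many keys are written at the same moment (a deploy, a cache warm-up), identical TTLs make them all expire, and miss, together. Adding a little random jitter to each TTL spreads the misses out. A minimal sketch; the `jitteredTtl` helper is ours, not part of the code above:

```typescript
// Spread expiries out by randomising each TTL within ±spread of the base.
// baseSeconds=600, spread=0.1 → a value somewhere in [540, 660].
function jitteredTtl(baseSeconds: number, spread = 0.1): number {
  const delta = baseSeconds * spread;
  return Math.round(baseSeconds - delta + Math.random() * 2 * delta);
}

// Usage with the cache helper above:
// redis.setEx(key, jitteredTtl(600), JSON.stringify(data));
```
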

Cache Stampede Prevention

Without protection, a cache expiry causes all concurrent requests to hit the database simultaneously:

// Probabilistic early expiration prevents stampedes
async function getWithStampedeProtection<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds: number
): Promise<T> {
  const cached = await redis.get(key);
  const ttl = await redis.ttl(key);
  
  if (cached) {
    // Probabilistically recompute before expiry
    // XFetch algorithm: recompute if within a random window of expiry
    const beta = 1.0;  // Tunable: >1 recomputes earlier, <1 closer to expiry
    const randomFactor = -Math.log(Math.random()) * beta;
    const shouldRecompute = randomFactor > (ttl / ttlSeconds);
    
    if (!shouldRecompute) {
      return JSON.parse(cached) as T;
    }
  }

  // Recompute — use a lock to prevent multiple simultaneous fetches
  const lockKey = `lock:${key}`;
  const locked = await redis.set(lockKey, '1', { NX: true, EX: 10 });
  
  if (!locked && cached) {
    // Another process is fetching — return stale data
    return JSON.parse(cached) as T;
  }

  try {
    const data = await fetcher();
    await redis.setEx(key, ttlSeconds, JSON.stringify(data));
    return data;
  } finally {
    // Only release the lock if this process actually acquired it
    if (locked) {
      await redis.del(lockKey);
    }
  }
}
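To get a feel for how aggressive the check above is: since `-Math.log(Math.random())` is exponentially distributed with mean 1, the chance of an early recompute works out to `e^(-fraction/beta)`, where fraction is the share of TTL still remaining. With `beta = 1` that is roughly 37% even on a freshly cached key, so a smaller beta is often appropriate. A small helper (ours, for illustration) makes the curve easy to inspect:

```typescript
// Probability that the XFetch-style check above recomputes early,
// given the fraction of TTL remaining (0 = about to expire, 1 = fresh).
function recomputeProbability(ttlFraction: number, beta = 1.0): number {
  return Math.exp(-ttlFraction / beta);
}

// recomputeProbability(0)      → 1       (expired: always recompute)
// recomputeProbability(1)      → ~0.37   (fresh key, beta = 1)
// recomputeProbability(1, 0.2) → ~0.007  (fresh key, gentler beta)
```
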

Pattern 2: Session Storage

Store user sessions in Redis — shared across all application instances, evicted automatically.

import session from 'express-session';
import { createClient } from 'redis';
import connectRedis from 'connect-redis';

const redisClient = createClient({ url: process.env.REDIS_URL });
await redisClient.connect();

const RedisStore = connectRedis(session);  // connect-redis v6 API (v7+ exports the store class directly)

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET!,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: process.env.NODE_ENV === 'production',
    httpOnly: true,
    maxAge: 7 * 24 * 60 * 60 * 1000,  // 7 days
    sameSite: 'strict',
  },
  // Redis session key format: sess:${sessionId}
}));

The same pattern in Python, using Flask-Session with a Redis backend:

# Flask-Session backed by Redis
from flask import Flask, session
from flask_session import Session
import redis

app = Flask(__name__)
app.config['SECRET_KEY'] = 'your-secret-key'
app.config['SESSION_TYPE'] = 'redis'
app.config['SESSION_REDIS'] = redis.from_url('redis://localhost:6379')
app.config['SESSION_PERMANENT'] = True  # Needed for PERMANENT_SESSION_LIFETIME to apply
app.config['PERMANENT_SESSION_LIFETIME'] = 604800  # 7 days

Session(app)

🌐 Looking for a Dev Team That Actually Delivers?

Most agencies sell you a project manager and assign juniors. Viprasol is different — senior engineers only, direct Slack access, and a 5.0★ Upwork record across 100+ projects.

  • React, Next.js, Node.js, TypeScript — production-grade stack
  • Fixed-price contracts — no surprise invoices
  • Full source code ownership from day one
  • 90-day post-launch support included

Pattern 3: Rate Limiting

Redis is the standard backend for rate limiting — atomic operations prevent race conditions that plague database-backed rate limiters.

// Sliding window rate limiter using sorted sets
async function isRateLimited(
  identifier: string,   // IP address, user ID, API key
  limit: number,
  windowMs: number
): Promise<{ limited: boolean; remaining: number; resetAt: number }> {
  const key = `ratelimit:${identifier}`;
  const now = Date.now();
  const windowStart = now - windowMs;

  // Atomic pipeline: remove old entries + count + add new entry
  const pipeline = redis.multi();
  pipeline.zRemRangeByScore(key, 0, windowStart);    // Remove expired entries
  pipeline.zCard(key);                                // Count current entries
  pipeline.zAdd(key, { score: now, value: `${now}:${Math.random()}` }); // Suffix avoids same-millisecond member collisions
  pipeline.expire(key, Math.ceil(windowMs / 1000));  // Set key expiry

  // node-redis returns an array of replies directly (not [err, reply] tuples)
  const results = await pipeline.exec();
  const currentCount = results[1] as number;

  const limited = currentCount >= limit;
  const remaining = Math.max(0, limit - currentCount - 1);
  const resetAt = now + windowMs;

  return { limited, remaining, resetAt };
}

// Express middleware
app.use('/api/', async (req, res, next) => {
  const identifier = req.user?.id ?? req.ip;
  const { limited, remaining, resetAt } = await isRateLimited(
    identifier, 
    100,           // 100 requests
    60 * 1000      // per minute
  );

  res.setHeader('X-RateLimit-Limit', '100');
  res.setHeader('X-RateLimit-Remaining', String(remaining));
  res.setHeader('X-RateLimit-Reset', String(Math.floor(resetAt / 1000)));

  if (limited) {
    return res.status(429).json({
      error: 'Rate limit exceeded',
      retryAfter: Math.ceil((resetAt - Date.now()) / 1000),
    });
  }
  next();
});
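An alternative to the sliding window is a token bucket, usually run as a single Lua script so the read-modify-write is atomic in Redis. The refill arithmetic at its core is plain math and worth seeing in isolation (function and parameter names here are illustrative, not from the code above):

```typescript
// Token-bucket refill: given the stored token count and the time of the
// last refill, compute how many tokens are available now. A request is
// allowed when the result is >= 1 (and one token is then deducted).
function refillTokens(
  storedTokens: number,
  lastRefillMs: number,
  nowMs: number,
  ratePerSecond: number,
  capacity: number
): number {
  const elapsedSeconds = (nowMs - lastRefillMs) / 1000;
  return Math.min(capacity, storedTokens + elapsedSeconds * ratePerSecond);
}
```

In Redis the state is typically two fields in a hash (token count and last refill time), read, recomputed with this formula, and written back inside one EVAL call so concurrent requests cannot interleave.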

Pattern 4: Pub/Sub Messaging

Redis pub/sub enables real-time messaging between application instances.

// Publisher (any service can publish)
async function publishUserActivity(userId: string, activity: UserActivity) {
  await redis.publish(
    `user-activity:${userId}`,
    JSON.stringify({ ...activity, timestamp: Date.now() })
  );
}

// Subscriber (e.g., notification service)
const subscriber = redis.duplicate();
await subscriber.connect();

// Pattern channels need pSubscribe: plain subscribe() matches exact names only
await subscriber.pSubscribe('user-activity:*', (message, channel) => {
  const userId = channel.split(':')[1];
  const activity = JSON.parse(message);

  // Process activity (send push notification, update presence, etc.)
  processUserActivity(userId, activity);
});

Important limitation: Redis pub/sub doesn't persist messages — if no subscriber is listening when a message is published, it's lost. For durable messaging, use Redis Streams or a proper message broker (Kafka, SQS).

Redis Streams (Durable Pub/Sub)

// Producer: append to stream (messages persist until explicitly deleted)
await redis.xAdd(
  'order-events',
  '*',            // Auto-generate ID
  {
    orderId: order.id,
    type: 'OrderPlaced',
    userId: order.userId,
    total: String(order.total),
  }
);

// Consumer group: each message is delivered to exactly one consumer in the group
await redis.xGroupCreate('order-events', 'fulfillment-service', '0', { MKSTREAM: true })
  .catch(() => {});  // Throws BUSYGROUP if the group already exists; safe to ignore

// Process messages
while (true) {
  const messages = await redis.xReadGroup(
    'fulfillment-service',
    'worker-1',
    [{ key: 'order-events', id: '>' }],  // '>' = new messages only
    { COUNT: 10, BLOCK: 5000 }            // Block up to 5 seconds
  );

  for (const stream of messages ?? []) {
    for (const message of stream.messages) {
      await processOrderEvent(message.message);
      await redis.xAck('order-events', 'fulfillment-service', message.id);
    }
  }
}

🚀 Senior Engineers. No Junior Handoffs. Ever.

You get the senior developer, not a project manager who relays your requirements to someone you never meet. Every Viprasol project has a senior lead from kickoff to launch.

  • MVPs in 4–8 weeks, full platforms in 3–5 months
  • Lighthouse 90+ performance scores standard
  • Works across US, UK, AU timezones
  • Free 30-min architecture review, no commitment

Pattern 5: Distributed Locks

Prevent race conditions in distributed systems when multiple instances might process the same job.

// Single-instance lock with automatic expiry (the full Redlock algorithm spans multiple Redis nodes)
async function withLock<T>(
  resource: string,
  ttlMs: number,
  fn: () => Promise<T>
): Promise<T | null> {
  const lockKey = `lock:${resource}`;
  const lockValue = crypto.randomUUID();  // Unique per lock holder

  // Acquire lock
  const acquired = await redis.set(lockKey, lockValue, {
    NX: true,   // Only set if key doesn't exist
    PX: ttlMs,  // Expire after TTL (prevents dead locks)
  });

  if (!acquired) {
    return null;  // Resource is locked — skip or retry
  }

  try {
    return await fn();
  } finally {
    // Release lock — only if we still own it (Lua script for atomicity)
    const script = `
      if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
      else
        return 0
      end
    `;
    await redis.eval(script, { keys: [lockKey], arguments: [lockValue] });
  }
}

// Usage: only one instance processes a scheduled job
async function runDailyReport() {
  const result = await withLock('daily-report', 60000, async () => {
    return generateDailyReport();
  });

  if (result === null) {
    console.log('Report already being generated by another instance');
  }
}
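`withLock` returns null when the resource is held, which suits skip-if-busy jobs like the report above. Callers that must eventually run instead retry with exponential backoff; the delay schedule itself is pure arithmetic (the helper name is ours):

```typescript
// Exponential backoff: 50ms, 100ms, 200ms, ... capped at maxMs.
// In practice you would add random jitter to avoid synchronized retry herds.
function backoffDelayMs(attempt: number, baseMs = 50, maxMs = 2000): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

// Retry loop sketch:
// for (let attempt = 0; attempt < 5; attempt++) {
//   const result = await withLock('daily-report', 60000, generateDailyReport);
//   if (result !== null) break;
//   await new Promise(r => setTimeout(r, backoffDelayMs(attempt)));
// }
```
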

Pattern 6: Leaderboards and Counters

Redis sorted sets are perfect for real-time leaderboards.

// Add/update score
async function recordGameScore(userId: string, score: number): Promise<void> {
  await redis.zAdd('game:leaderboard', {
    score,
    value: userId,
  }, { GT: true });  // Only update if new score is higher
}

// Top 10 players
async function getTopPlayers(limit = 10) {
  const results = await redis.zRangeWithScores(
    'game:leaderboard',
    0, limit - 1,
    { REV: true }  // Highest scores first
  );
  
  return results.map(({ value: userId, score }, index) => ({
    rank: index + 1,
    userId,
    score,
  }));
}

// Player's rank and score
async function getPlayerRank(userId: string) {
  const [rank, score] = await Promise.all([
    redis.zRevRank('game:leaderboard', userId),
    redis.zScore('game:leaderboard', userId),
  ]);
  
  return { rank: rank !== null ? rank + 1 : null, score };
}

// Atomic increment counter (thread-safe across instances)
async function incrementPageView(pageId: string): Promise<number> {
  return redis.incr(`pageviews:${pageId}`);
}

// Bit manipulation: track daily active users efficiently
// 1 bit per user per day = 100M users = 12.5MB
async function trackDailyActive(userId: number, date: string): Promise<void> {
  await redis.setBit(`dau:${date}`, userId, 1);
}

async function getDailyActiveCount(date: string): Promise<number> {
  return redis.bitCount(`dau:${date}`);
}
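The 12.5MB figure in the comment above is easy to verify: a bitmap allocates one bit per possible user ID, so memory scales with the highest ID set, not with how many users were active. A quick check (the helper is ours):

```typescript
// Memory for a Redis bitmap addressed by numeric user ID:
// one bit per ID up to the highest ID used.
function bitmapMegabytes(maxUserId: number): number {
  return maxUserId / 8 / 1_000_000;
}

// 100 million user IDs → 12.5 MB
```

One caveat: because memory scales with the highest offset, a single SETBIT at a very large offset forces Redis to allocate the whole bitmap up to that point at once.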

Memory Management

# Check memory usage
redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human|mem_fragmentation_ratio"

# Set memory limit and eviction policy
redis-cli CONFIG SET maxmemory 2gb
redis-cli CONFIG SET maxmemory-policy allkeys-lru  # Evict least recently used when full

# Eviction policies:
# noeviction       — return error when full (for durable data)
# allkeys-lru      — evict least recently used from any key (for cache)
# volatile-lru     — evict LRU from keys with TTL (for session + cache mix)
# allkeys-lfu      — evict least frequently used (often better than LRU)

Redis vs Alternatives (2026)

| Use case               | Redis      | Memcached     | PostgreSQL        | Kafka         |
|------------------------|------------|---------------|-------------------|---------------|
| Simple caching         | ✅ Best    | ✅ Good       | ❌ Too slow       | ❌ Wrong tool |
| Session storage        | ✅ Best    | ✅ OK         | —                 | —             |
| Pub/sub (fire-forget)  | ✅ Better  | —             | —                 | —             |
| Durable messaging      | ✅ Streams | —             | —                 | ✅ Better     |
| Rate limiting          | ✅ Best    | ❌ No atomics | —                 | —             |
| Leaderboards           | ✅ Best    | —             | —                 | —             |
| Distributed locks      | ✅         | —             | ✅ Advisory locks | —             |

Working With Viprasol

We integrate Redis into application architectures — caching layers, session stores, rate limiters, pub/sub systems, and real-time leaderboards.

Architecture consultation →
Software Scalability →
Web Development Services →



About the Author


Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading

Need a Modern Web Application?

From landing pages to complex SaaS platforms — we build it all with Next.js and React.

Free consultation • No commitment • Response within 24 hours
