
Advanced Caching Strategies: Write-Through, Write-Behind, Cache Stampede Prevention, and Redis Cluster

Master advanced caching patterns in 2026 — write-through vs write-behind vs cache-aside, cache stampede and thundering herd prevention, probabilistic early expiration, and Redis Cluster.

Viprasol Tech Team
June 24, 2026
13 min read

Most teams implement cache-aside (read from cache; miss → fetch from DB → write to cache) and call it done. That works for simple cases but falls apart under high load with popular keys, during deployments, and when write patterns matter.


The Four Cache Write Patterns

| Pattern | Write Flow | Consistency | Complexity | Best For |
|---|---|---|---|---|
| Cache-aside | App writes DB directly; cache populated on read miss | Eventual | Low | Read-heavy, can tolerate stale data |
| Write-through | App writes cache; cache synchronously writes DB | Strong | Medium | Mixed read/write, strong consistency needed |
| Write-behind | App writes cache; cache asynchronously writes DB | Eventual | High | Write-heavy, can tolerate brief async lag |
| Read-through | App reads cache; cache fetches from DB on miss | Eventual | Medium | Read-heavy with transparent population |

Cache-Aside (Most Common)

// lib/cache-aside.ts
import { Redis } from 'ioredis';
import { db } from './db';

const redis = new Redis(process.env.REDIS_URL!);

export async function getUser(userId: string): Promise<User | null> {
  const cacheKey = `user:${userId}`;

  // 1. Check cache
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // 2. Miss → fetch from DB
  const user = await db.users.findUnique({ where: { id: userId } });
  if (!user) return null;

  // 3. Populate cache (TTL: 5 minutes)
  await redis.setex(cacheKey, 300, JSON.stringify(user));

  return user;
}

// Invalidate on write
export async function updateUser(userId: string, data: Partial<User>): Promise<User> {
  const user = await db.users.update({ where: { id: userId }, data });
  await redis.del(`user:${userId}`);  // Invalidate cache
  return user;
}

Weakness: After invalidation, the first read triggers a DB query. Under high load, many concurrent readers can all miss and all query the DB simultaneously — the cache stampede problem.



Write-Through Cache

// lib/write-through.ts
// Write to the DB, then synchronously update the cache in the same call path;
// not truly atomic, but the cache is fresh by the time the write returns

export async function updateUserWriteThrough(userId: string, data: Partial<User>): Promise<User> {
  const user = await db.users.update({ where: { id: userId }, data });

  // Write to cache immediately after DB write
  // Cache is always up-to-date after writes
  const cacheKey = `user:${userId}`;
  await redis.setex(cacheKey, 300, JSON.stringify(user));

  return user;
}

// Read: almost always a cache hit (since writes populate cache)
export async function getUserWriteThrough(userId: string): Promise<User | null> {
  const cached = await redis.get(`user:${userId}`);
  if (cached) return JSON.parse(cached);

  // Only misses on the first read, after TTL expiry, or after a Redis flush
  const user = await db.users.findUnique({ where: { id: userId } });
  if (user) {
    await redis.setex(`user:${userId}`, 300, JSON.stringify(user));
  }
  return user;
}
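
Write-behind appears in the table above but has no example. Here is a minimal in-process sketch (hypothetical `WriteBehindBuffer`; a production version would back the pending queue with something durable such as Redis Streams or a log, since an in-memory buffer loses unflushed writes on crash):

```typescript
// lib/write-behind.ts (sketch, not production-ready)
// Writes land in an in-memory buffer immediately; a periodic flush pushes
// them to the backing store in batches. A crash loses buffered writes.
type FlushFn<T> = (batch: Array<[string, T]>) => Promise<void>;

export class WriteBehindBuffer<T> {
  private pending = new Map<string, T>();

  constructor(private flushToDb: FlushFn<T>, private maxBatch = 100) {}

  // Fast path: the caller never waits on the database
  set(key: string, value: T): void {
    this.pending.set(key, value);
  }

  get(key: string): T | undefined {
    return this.pending.get(key);
  }

  // Run on a timer (e.g. setInterval every 500 ms) or when the buffer fills
  async flush(): Promise<number> {
    const batch = Array.from(this.pending.entries()).slice(0, this.maxBatch);
    if (batch.length === 0) return 0;
    await this.flushToDb(batch);                  // one batched DB write
    for (const [key] of batch) this.pending.delete(key);
    return batch.length;
  }
}
```

Repeated writes to the same key coalesce in the map before the flush, which is exactly the write-heavy win: only the last value reaches the DB.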

Cache Stampede Prevention

The stampede problem: a popular cache key expires, and thousands of concurrent requests all miss the cache and hit the DB simultaneously.

Solution 1: Mutex lock (one reloader at a time)

// lib/cache-with-lock.ts
import { Redis } from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);

export async function getWithLock<T>(
  key: string,
  ttlSeconds: number,
  fetchFn: () => Promise<T>,
): Promise<T> {
  // Check cache first
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as T;

  const lockKey = `lock:${key}`;
  const lockTTL = 10;  // 10-second lock — if reloader hangs, lock expires

  // Try to acquire lock (SET NX = only if not exists)
  const locked = await redis.set(lockKey, '1', 'EX', lockTTL, 'NX');

  if (locked) {
    // We have the lock — fetch and populate cache
    try {
      const data = await fetchFn();
      await redis.setex(key, ttlSeconds, JSON.stringify(data));
      return data;
    } finally {
      await redis.del(lockKey);
    }
  } else {
    // Another process is reloading — wait briefly and retry from cache
    await new Promise(resolve => setTimeout(resolve, 50));
    const retryCache = await redis.get(key);
    if (retryCache) return JSON.parse(retryCache) as T;
    // If still no cache, fetch directly (avoids deadlock)
    return fetchFn();
  }
}
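
One refinement on the waiter path: instead of a single 50 ms sleep, poll the cache a few times with increasing backoff before falling back to a direct fetch, which narrows the window in which many waiters all fetch. A sketch (hypothetical `pollCache` helper; the getter is injected so it works with any client):

```typescript
// Poll the cache with linear backoff; resolve null if the key never appears.
// `get` is any async key lookup, e.g. (k) => redis.get(k).
async function pollCache(
  get: (key: string) => Promise<string | null>,
  key: string,
  attempts = 3,
  baseDelayMs = 50,
): Promise<string | null> {
  for (let i = 1; i <= attempts; i++) {
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * i));
    const hit = await get(key);
    if (hit !== null) return hit;
  }
  return null;  // caller falls back to fetchFn() directly, as above
}
```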

Solution 2: Probabilistic Early Expiration (XFetch algorithm)

Instead of all requests stampeding at expiry, each request has a small probability of reloading before expiry, based on remaining TTL and recomputation cost:

// lib/early-expiration.ts
// XFetch: probabilistic early cache refresh — avoids stampedes entirely

interface CachedValue<T> {
  data: T;
  fetchDurationMs: number;  // How long it took to compute this value
  expiresAt: number;        // Unix timestamp (ms)
}

export async function getXFetch<T>(
  key: string,
  ttlSeconds: number,
  fetchFn: () => Promise<T>,
  beta: number = 1.0,  // Higher beta = more aggressive early refresh
): Promise<T> {
  const raw = await redis.get(key);

  if (raw) {
    const entry: CachedValue<T> = JSON.parse(raw);
    const now = Date.now();
    const remainingMs = entry.expiresAt - now;

    // XFetch formula: refresh early with probability based on:
    //   - How long the value took to compute (fetchDurationMs)
    //   - How close we are to expiry
    //   - Beta parameter (tuning knob)
    const shouldRefreshEarly =
      -entry.fetchDurationMs * beta * Math.log(Math.random()) >= remainingMs;

    if (!shouldRefreshEarly) {
      return entry.data;
    }
    // Fall through to refresh (this request does it; others still get cached value)
  }

  // Fetch fresh data, recording how long it takes
  const start = Date.now();
  const data = await fetchFn();
  const fetchDurationMs = Date.now() - start;

  const entry: CachedValue<T> = {
    data,
    fetchDurationMs,
    expiresAt: Date.now() + ttlSeconds * 1000,
  };

  // Pad the Redis TTL past the logical expiry (entry.expiresAt) so early
  // refreshes always have a stale value to serve while one request recomputes
  await redis.setex(key, ttlSeconds * 2, JSON.stringify(entry));
  return data;
}
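
The decision rule inside `getXFetch` can be pulled out as a pure function, which makes the beta knob easier to reason about and test (hypothetical helper; `rand` is injected for determinism):

```typescript
// Refresh early iff  -fetchDurationMs * beta * ln(rand) >= remainingMs.
// ln(rand) is negative for rand in (0, 1), so the left side is positive:
// expensive values (large fetchDurationMs) and near-expiry entries win sooner.
function shouldRefreshEarly(
  fetchDurationMs: number,
  remainingMs: number,
  beta = 1.0,
  rand: () => number = Math.random,
): boolean {
  return -fetchDurationMs * beta * Math.log(rand()) >= remainingMs;
}
```

With `rand` pinned at e^-1 the left side collapses to `fetchDurationMs * beta`, so a value that took 1 second to compute triggers a refresh whenever less than `beta` seconds remain.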


Tag-Based Cache Invalidation

When you need to invalidate a group of related cache keys (e.g., "all cache entries for tenant X"):

// lib/tag-cache.ts
// Store cache keys by tag for group invalidation

export async function setWithTags(
  key: string,
  value: unknown,
  ttlSeconds: number,
  tags: string[],
): Promise<void> {
  const pipeline = redis.pipeline();

  // Store the value
  pipeline.setex(key, ttlSeconds, JSON.stringify(value));

  // Register the key under each tag (using Redis sets)
  for (const tag of tags) {
    pipeline.sadd(`tag:${tag}`, key);
    pipeline.expire(`tag:${tag}`, ttlSeconds * 2);  // Tags expire after double the value TTL
  }

  await pipeline.exec();
}

export async function invalidateTag(tag: string): Promise<void> {
  const keys = await redis.smembers(`tag:${tag}`);
  if (keys.length === 0) return;

  const pipeline = redis.pipeline();
  for (const key of keys) pipeline.del(key);
  pipeline.del(`tag:${tag}`);
  await pipeline.exec();
}

// Usage:
await setWithTags(
  `user:${userId}:profile`,
  userProfile,
  300,
  [`user:${userId}`, `tenant:${tenantId}`],
);

// When user is updated — invalidate all their cache entries
await invalidateTag(`user:${userId}`);

Redis Cluster for Scale

A single Redis instance executes commands on one core, so it typically tops out around 100K ops/second, and it is bounded by one machine's RAM (often cited as ~100GB in practice). Beyond that, shard with Redis Cluster:

Redis Cluster: 6 nodes (3 primary + 3 replica)
  - Primary 1: hash slots 0–5460      (keys: a, b, c... hashing to these slots)
  - Primary 2: hash slots 5461–10922
  - Primary 3: hash slots 10923–16383
  Each primary has one replica for failover
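
Using the rule-of-thumb limits above (~100K ops/s and ~100GB per node are assumptions, not hard limits), the primary count is a quick max-of-two-ceilings calculation (hypothetical helper):

```typescript
// Primaries needed = whichever constraint (throughput or memory) demands more
function primariesNeeded(
  peakOpsPerSec: number,
  dataSizeGb: number,
  perNodeOps = 100_000,
  perNodeGb = 100,
): number {
  return Math.max(
    Math.ceil(peakOpsPerSec / perNodeOps),
    Math.ceil(dataSizeGb / perNodeGb),
  );
}
```

For example, 250K ops/s over 150GB of data needs max(3, 2) = 3 primaries, the layout sketched above.
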
// lib/redis-cluster.ts
import { Cluster } from 'ioredis';

export const redis = new Cluster([
  { host: 'redis-node-1', port: 6379 },
  { host: 'redis-node-2', port: 6379 },
  { host: 'redis-node-3', port: 6379 },
], {
  redisOptions: {
    password: process.env.REDIS_PASSWORD,
    tls: {},  // TLS required in production
  },
  // Read from replicas (reduces read load on primaries)
  scaleReads: 'slave',
  // Retry on MOVED/ASK redirects (cluster resharding)
  maxRedirections: 16,
});

// Important: multi-key operations (MGET, pipeline) require keys in the same slot
// Use hash tags to force related keys to same slot:
// user:{123}:profile and user:{123}:sessions both hash to slot of "123"
const userId = '123';
const profileKey = `user:{${userId}}:profile`;   // Hash tag: {123}
const sessionsKey = `user:{${userId}}:sessions`; // Same hash tag: {123}
// These are guaranteed to be on the same cluster node → pipeline safe

const pipeline = redis.pipeline();
pipeline.get(profileKey);
pipeline.get(sessionsKey);
// exec() resolves to [error, result] tuples, one per queued command
const results = await pipeline.exec();
const profile = results?.[0]?.[1];
const sessions = results?.[1]?.[1];
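
The hash-tag rule itself is simple: per the Redis Cluster spec, if a key contains a `{...}` section with a non-empty body, only that body is fed to the slot hash (the real slot is CRC16 of this string mod 16384). A sketch of the extraction rule (hypothetical `hashTagPortion` helper):

```typescript
// Return the substring Redis Cluster actually hashes for a given key:
// the body of the first non-empty {...}, or the whole key if none exists.
function hashTagPortion(key: string): string {
  const open = key.indexOf('{');
  if (open === -1) return key;                          // no hash tag
  const close = key.indexOf('}', open + 1);
  if (close === -1 || close === open + 1) return key;   // unclosed or empty {}
  return key.slice(open + 1, close);
}
```

`user:{123}:profile` and `user:{123}:sessions` both reduce to `123`, which is why they are guaranteed to land in the same slot.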

Cache Metrics to Monitor

// Track cache hit rate and latency in production.
// `metrics` is assumed to be your stats client (e.g. a StatsD/Datadog wrapper).
export async function getWithMetrics<T>(
  key: string,
  fetchFn: () => Promise<T>,
  ttl: number,
): Promise<T> {
  const start = Date.now();
  const cached = await redis.get(key);
  const cacheLatency = Date.now() - start;

  if (cached) {
    metrics.increment('cache.hit', { key_prefix: key.split(':')[0] });
    metrics.timing('cache.latency', cacheLatency);
    return JSON.parse(cached) as T;
  }

  metrics.increment('cache.miss', { key_prefix: key.split(':')[0] });

  const dbStart = Date.now();
  const data = await fetchFn();
  metrics.timing('cache.db_fetch_latency', Date.now() - dbStart);

  await redis.setex(key, ttl, JSON.stringify(data));
  return data;
}

// Alert if hit rate drops below 80%
// Cache hit rate = cache.hit / (cache.hit + cache.miss)
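
The alerting rule above is one line of arithmetic, worth a guard for the empty window (hypothetical helper):

```typescript
// Hit rate over a window of counter values; an empty window reads as healthy
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 1 : hits / total;
}

const HIT_RATE_ALERT_THRESHOLD = 0.8;
// alert when cacheHitRate(hits, misses) < HIT_RATE_ALERT_THRESHOLD
```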

Working With Viprasol

We design and implement caching architectures — Redis Cluster setup, stampede prevention, write-through patterns for strong consistency, tag-based invalidation, and cache performance monitoring.

Talk to our team about caching strategy and high-performance backend architecture.


About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
