Caching Strategies: CDN, Application, and Database Layer Caching Explained
Master multi-layer caching — CDN edge caching, application-level Redis caching, database query caching, and cache invalidation strategies, with real TypeScript code examples.
Caching is the most leveraged performance optimization available. A cache hit that takes 1ms replaces a database query that takes 50ms — and unlike most optimizations, the benefit compounds across every user who triggers the same request.
The challenge is knowing which layer to cache at, what invalidation strategy to use, and how to avoid the cache coherence bugs that make engineers distrust caches in the first place.
This guide covers all three cache layers with real implementation examples.
The Three Cache Layers
```
User Request
      │
      ▼
┌─────────────┐
│  CDN Edge   │  Layer 1: Geographic distribution, static assets,
│ (CloudFront,│           edge-rendered HTML
│   Fastly)   │           Hit rate target: 85–95% for public content
└──────┬──────┘
       │ Cache miss
       ▼
┌─────────────┐
│ Application │  Layer 2: In-memory (Redis), computed results,
│    Cache    │           user-specific data, rate limit counters
│   (Redis)   │           Hit rate target: 60–85% for API responses
└──────┬──────┘
       │ Cache miss
       ▼
┌─────────────┐
│  Database   │  Layer 3: Query result cache, materialized views,
│ (Postgres)  │           connection pool
│             │           Hit rate target: 90%+ for hot data (pg buffer cache)
└─────────────┘
```
Each layer has different characteristics: CDN caches by URL + headers, application caches by arbitrary key, database caches by query pattern.
Layer 1: CDN Caching
CDN caching is the highest-leverage starting point. A CDN serves cached responses from edge nodes close to users — eliminating round-trips to your origin server entirely.
CloudFront cache control headers (Next.js):
```typescript
// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  async headers() {
    return [
      {
        // Static assets — cache forever, versioned by hash
        source: '/_next/static/:path*',
        headers: [
          { key: 'Cache-Control', value: 'public, max-age=31536000, immutable' },
        ],
      },
      {
        // API responses — short CDN cache, allow stale
        source: '/api/products/:path*',
        headers: [
          { key: 'Cache-Control', value: 'public, s-maxage=60, stale-while-revalidate=300' },
        ],
      },
      {
        // Blog posts — ISR-style CDN cache
        source: '/blog/:path*',
        headers: [
          { key: 'Cache-Control', value: 'public, s-maxage=3600, stale-while-revalidate=86400' },
        ],
      },
      {
        // User-specific API responses — never CDN-cached
        source: '/api/user/:path*',
        headers: [
          { key: 'Cache-Control', value: 'private, no-cache' },
        ],
      },
    ];
  },
};

export default nextConfig;
```
s-maxage controls CDN TTL (shared cache). max-age controls browser cache. stale-while-revalidate lets the CDN serve stale content while fetching a fresh version in the background — great for content that can tolerate slight staleness.
Cache-by-header for authenticated content:
// CloudFront Terraform config — vary cache by Authorization header
```hcl
// CloudFront Terraform config — vary cache by Authorization header
resource "aws_cloudfront_cache_policy" "api" {
  name = "api-cache-policy"

  parameters_in_cache_key_and_forwarded_to_origin {
    headers_config {
      header_behavior = "whitelist"
      headers {
        items = ["Authorization", "Accept-Language"]
      }
    }
    cookies_config {
      cookie_behavior = "none"
    }
    query_strings_config {
      query_string_behavior = "all"
    }
  }

  default_ttl = 60
  max_ttl     = 3600
  min_ttl     = 0
}
```
When CDN caching doesn't apply: Any response that varies per authenticated user should be private (browser cache only) or no-store. Never CDN-cache responses with user-specific data — you'll serve one user's data to another.
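One way to make the "private by default" rule hard to violate is to centralize header selection in a small helper, so a forgotten flag fails safe rather than leaking one user's data into a shared cache. A minimal sketch (the policy names are hypothetical, and the values mirror the config above):

```typescript
// Hypothetical helper: pick a Cache-Control value by response class.
// 'user' is the default branch, so anything unclassified stays private.
type CachePolicy = 'static' | 'public-api' | 'user';

export function cacheControlFor(policy: CachePolicy): string {
  switch (policy) {
    case 'static':
      return 'public, max-age=31536000, immutable';
    case 'public-api':
      return 'public, s-maxage=60, stale-while-revalidate=300';
    case 'user':
    default:
      return 'private, no-cache';
  }
}
```

Call it from one middleware rather than setting headers ad hoc per route, so the safe default applies everywhere.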
Layer 2: Application Cache with Redis
Application-level caching stores computed results in Redis, keyed by the inputs that determine the result. The three patterns — cache-aside, write-through, and write-behind — each suit different access patterns.
Cache-Aside (Most Common)
Read from cache; on miss, read from DB and populate cache:
```typescript
// lib/cache.ts
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);

export async function withCache<T>(
  key: string,
  ttlSeconds: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  // Try cache first
  const cached = await redis.get(key);
  if (cached !== null) {
    return JSON.parse(cached) as T;
  }

  // Cache miss — fetch from source
  const data = await fetchFn();

  // Store in cache (fire-and-forget — don't await to avoid adding latency)
  redis.setex(key, ttlSeconds, JSON.stringify(data)).catch(err =>
    console.error('Cache write failed:', err)
  );

  return data;
}

// Usage (db is your database client, e.g. Prisma)
export async function getProduct(productId: string) {
  return withCache(
    `product:${productId}`,
    300, // 5-minute TTL
    () => db.product.findUnique({
      where: { id: productId },
      include: { category: true, variants: true },
    })
  );
}

export async function getProductList(categoryId: string, page: number) {
  return withCache(
    `products:category:${categoryId}:page:${page}`,
    60, // 1-minute TTL — product lists change more often
    () => db.product.findMany({
      where: { categoryId },
      skip: (page - 1) * 20,
      take: 20,
    })
  );
}
```
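Cache-aside has one sharp edge worth guarding against: when a hot key expires, every concurrent request misses at once and hits the database together (a cache stampede). A minimal single-flight wrapper, sketched here as an in-process utility not tied to any library, deduplicates concurrent fetches for the same key:

```typescript
// In-flight fetches keyed by cache key. Concurrent callers for the same key
// share one promise instead of each hitting the database.
const inFlight = new Map<string, Promise<unknown>>();

export async function singleFlight<T>(
  key: string,
  fetchFn: () => Promise<T>
): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;

  const promise = fetchFn().finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}
```

Wrap the `fetchFn` you pass to `withCache` in `singleFlight`. Note this only deduplicates within one process; protecting against cross-instance stampedes requires a distributed lock, for example Redis `SET NX` with a short TTL.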
Write-Through
Write to cache and DB simultaneously. Ensures cache is always warm but adds write latency:
```typescript
export async function updateProduct(
  productId: string,
  data: Partial<Product>
): Promise<Product> {
  const updated = await db.product.update({
    where: { id: productId },
    data,
  });

  // Update cache immediately — no stale window
  await redis.setex(
    `product:${productId}`,
    300,
    JSON.stringify(updated)
  );

  // Invalidate any list caches that include this product
  // (KEYS is O(n) and blocks Redis — prefer SCAN in production)
  const keys = await redis.keys(`products:category:${updated.categoryId}:*`);
  if (keys.length > 0) await redis.del(...keys);

  return updated;
}
```
Stale-While-Revalidate at Application Level
Serve stale data while revalidating in the background — similar to CDN SWR but for arbitrary cached values:
```typescript
interface CacheEntry<T> {
  data: T;
  cachedAt: number;
  ttlMs: number;
}

export async function withStaleWhileRevalidate<T>(
  key: string,
  freshTtlMs: number,
  staleTtlMs: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  const raw = await redis.get(key);
  if (raw) {
    const entry: CacheEntry<T> = JSON.parse(raw);
    const age = Date.now() - entry.cachedAt;
    if (age < freshTtlMs) {
      return entry.data; // Fresh — return immediately
    }
    if (age < staleTtlMs) {
      // Stale but acceptable — return stale data and revalidate in background
      fetchFn()
        .then(fresh => {
          const newEntry: CacheEntry<T> = {
            data: fresh,
            cachedAt: Date.now(),
            ttlMs: freshTtlMs,
          };
          return redis.setex(key, Math.ceil(staleTtlMs / 1000), JSON.stringify(newEntry));
        })
        .catch(console.error);
      return entry.data;
    }
  }

  // No cache or expired — fetch synchronously
  const data = await fetchFn();
  const entry: CacheEntry<T> = { data, cachedAt: Date.now(), ttlMs: freshTtlMs };
  await redis.setex(key, Math.ceil(staleTtlMs / 1000), JSON.stringify(entry));
  return data;
}
```
Cache Invalidation Strategies
Cache invalidation is the hard part. The three approaches, in order of complexity:
1. TTL-Based (Simplest)
Set a TTL. Data becomes stale at most TTL seconds after the last update. Acceptable when slight staleness is tolerable.
```typescript
redis.setex('product:123', 300, JSON.stringify(product)); // Stale after 5 min
```
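One refinement worth adding: when many keys are written at the same moment with the same TTL (after a deploy or a bulk import, say), they all expire together and the resulting misses arrive as a spike. Adding random jitter to the TTL spreads expirations out. A small sketch:

```typescript
// Returns a TTL randomized within ±jitterFraction of the base value,
// so keys written together don't all expire together.
export function jitteredTtl(baseSeconds: number, jitterFraction = 0.1): number {
  const delta = baseSeconds * jitterFraction;
  return Math.round(baseSeconds - delta + Math.random() * 2 * delta);
}

// e.g. redis.setex(key, jitteredTtl(300), payload) — TTL lands between 270 and 330
```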
2. Event-Based Invalidation
When data changes, explicitly delete or update the cache entry:
```typescript
// After updating a product
await redis.del(`product:${productId}`);

// After updating product list source data
const pattern = `products:category:${categoryId}:*`;
const keys = await redis.keys(pattern); // ⚠️ Don't use KEYS in production — use SCAN
if (keys.length) await redis.del(...keys);

// Better — use SCAN for large keyspaces
async function deleteByPattern(pattern: string) {
  let cursor = '0';
  do {
    const [nextCursor, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    cursor = nextCursor;
    if (keys.length) await redis.del(...keys);
  } while (cursor !== '0');
}
```
3. Cache Tags
Tag cache entries and invalidate by tag — most flexible but requires tracking:
```typescript
// Store cache entry, then record its key under the tags it belongs to.
// Note: tag sets must contain concrete keys — DEL does not expand glob
// patterns, so storing 'products:category:1:*' in a tag set would not work.
await redis.setex(`product:${productId}`, 300, JSON.stringify(product));
await redis.sadd(`tag:product:${productId}`, `product:${productId}`);

// When caching a list page, register the concrete page key under the category tag
await redis.sadd(
  `tag:category:${categoryId}`,
  `products:category:${categoryId}:page:${page}`
);

// Invalidate by tag — deletes all entries associated with the tag, plus the tag set itself
async function invalidateByTag(tag: string) {
  const keys = await redis.smembers(`tag:${tag}`);
  await redis.del(`tag:${tag}`, ...keys);
}

// On product update:
await invalidateByTag(`product:${productId}`);
await invalidateByTag(`category:${product.categoryId}`);
```
Layer 3: Database-Level Caching
PostgreSQL maintains a shared buffer cache (shared_buffers) — frequently accessed pages stay in memory. But you can also use materialized views and partial indexes as explicit caching mechanisms.
Materialized view for expensive aggregations:
```sql
-- Expensive query: total revenue per product category (runs every page load)
-- ❌ Without cache: 800ms on 10M rows
SELECT
  c.name,
  SUM(oi.quantity * oi.unit_price_cents) AS revenue_cents,
  COUNT(DISTINCT o.id) AS order_count
FROM order_items oi
JOIN orders o ON o.id = oi.order_id
JOIN products p ON p.id = oi.product_id
JOIN categories c ON c.id = p.category_id
WHERE o.status = 'paid'
GROUP BY c.id, c.name;

-- ✅ Materialized view — query takes 2ms, refresh takes 800ms (run hourly)
CREATE MATERIALIZED VIEW category_revenue AS
SELECT
  c.id AS category_id,
  c.name,
  SUM(oi.quantity * oi.unit_price_cents) AS revenue_cents,
  COUNT(DISTINCT o.id) AS order_count,
  NOW() AS last_refreshed_at
FROM order_items oi
JOIN orders o ON o.id = oi.order_id
JOIN products p ON p.id = oi.product_id
JOIN categories c ON c.id = p.category_id
WHERE o.status = 'paid'
GROUP BY c.id, c.name;

-- Unique index required for CONCURRENTLY refresh
CREATE UNIQUE INDEX ON category_revenue (category_id);

-- Refresh hourly (or on-demand after bulk order processing)
REFRESH MATERIALIZED VIEW CONCURRENTLY category_revenue;
-- CONCURRENTLY allows reads during refresh — no exclusive lock
```
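If the pg_cron extension is available in your environment (it is offered as an optional extension on RDS and several other managed Postgres services; treating its availability here as an assumption), the hourly refresh can be scheduled inside the database itself rather than from an external job runner:

```sql
-- Schedule the refresh hourly via pg_cron
SELECT cron.schedule(
  'refresh-category-revenue',   -- job name
  '0 * * * *',                  -- every hour, on the hour
  'REFRESH MATERIALIZED VIEW CONCURRENTLY category_revenue'
);
```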
PostgreSQL has no built-in query result cache, so the practical pattern is to cache the materialized view read at the application layer, with a TTL matched to the refresh cadence:
```typescript
// For queries that don't change often but are called frequently
export async function getCategoryRevenue(): Promise<CategoryRevenue[]> {
  return withCache(
    'category-revenue',
    3600, // 1 hour — matches materialized view refresh
    () => db.$queryRaw<CategoryRevenue[]>`
      SELECT * FROM category_revenue ORDER BY revenue_cents DESC
    `
  );
}
```
Measuring Cache Effectiveness
Track these metrics to know if your cache is working:
```typescript
// Instrument withCache to track hit/miss rates.
// `metrics` is assumed to be your metrics client (a StatsD/Datadog-style API).
export async function withCacheInstrumented<T>(
  key: string,
  ttlSeconds: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  const cached = await redis.get(key);
  if (cached !== null) {
    metrics.increment('cache.hit', { key_prefix: key.split(':')[0] });
    metrics.histogram('cache.latency_ms', Date.now() - start, { result: 'hit' });
    return JSON.parse(cached) as T;
  }

  metrics.increment('cache.miss', { key_prefix: key.split(':')[0] });
  const data = await fetchFn();
  redis.setex(key, ttlSeconds, JSON.stringify(data)).catch(() => {});
  metrics.histogram('cache.latency_ms', Date.now() - start, { result: 'miss' });
  return data;
}
```
Target metrics:
- Cache hit rate: > 70% for application cache, > 85% for CDN
- P99 cache hit latency: < 5ms (Redis local), < 20ms (Redis remote)
- Cache error rate: < 0.1% (Redis errors should be gracefully handled, not fatal)
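Turning the raw counters into the hit-rate targets above is a one-line computation, hits divided by total lookups; a small helper for dashboards or alerting (names are illustrative, not from any library):

```typescript
// Hit rate = hits / (hits + misses); 0 when there is no traffic yet.
export function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

// Alert when a key prefix drops below its target (0.7 for application cache)
export function belowTarget(hits: number, misses: number, target = 0.7): boolean {
  return cacheHitRate(hits, misses) < target;
}
```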
Cost Comparison
| Cache Layer | Option | Monthly Cost | Hit Rate |
|---|---|---|---|
| CDN | CloudFront (10TB transfer) | ~$850 | 85–95% |
| CDN | Cloudflare Pro | $20 flat | 85–95% |
| Application | ElastiCache Redis (r7g.large) | $185 | 60–85% |
| Application | Redis Cloud (5GB) | $70 | 60–85% |
| Application | Upstash Redis (serverless) | $0.2/100K commands | 60–85% |
| Database | RDS buffer cache (included) | $0 extra | 90%+ |
Working With Viprasol
We design and implement multi-layer caching architectures as part of our performance engineering work. For e-commerce clients, proper CDN + Redis caching has reduced origin server load by 80% and cut API response times from 300ms to under 20ms for common requests.
→ Talk to our performance team about caching your application.
See Also
- Redis Use Cases — deep dive on Redis data structures and patterns
- PostgreSQL Performance — database-level optimization
- Next.js Performance — Next.js App Router caching and ISR
- Cloud Cost Optimization — reducing costs with effective caching
- Web Development Services — backend performance engineering
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.