
Node.js Performance Profiling: V8 Profiler, Flame Graphs, Memory Leaks, and Heap Snapshots

Profile Node.js applications in production: V8 CPU profiling with flame graphs, memory leak detection with heap snapshots, garbage collection tuning, clinic.js diagnostics, and async bottleneck identification.

Viprasol Tech Team
December 13, 2026
13 min read

A Node.js API that was fast at 100 req/s can become unusable at 1,000 req/s if there's a CPU bottleneck, a memory leak that grows over days, or a synchronous operation blocking the event loop. The hard part isn't fixing the problem; it's finding it. Node.js has production-grade profiling tools built in via the V8 engine, but most developers don't know how to use them.

This post covers the full profiling toolkit: CPU profiling with flame graphs, heap snapshots for memory leaks, event loop lag monitoring, garbage collection instrumentation, and clinic.js for automated diagnostics.

The Three Performance Problems

1. CPU bottleneck:   Request takes 500ms; profiler shows JSON.parse() taking 400ms
2. Memory leak:      RSS grows 50MB/hour; heap snapshot shows retained closures
3. Event loop block: p99 latency spikes to 5s; blocked by sync fs.readFileSync()
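The third failure mode is easy to reproduce: any synchronous CPU work delays every pending timer and I/O callback. This sketch (function names are illustrative) shows a 10ms timer firing roughly 200ms late because of a busy loop:

```typescript
// Demonstrates an event-loop block: 200ms of synchronous work delays a 10ms timer.
function busyWait(ms: number): void {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // synchronous CPU work; nothing else can run
}

export async function measureTimerDelay(): Promise<number> {
  const scheduled = Date.now();
  const fired = new Promise<number>((resolve) =>
    setTimeout(() => resolve(Date.now() - scheduled), 10)
  );
  busyWait(200); // the 10ms timer cannot fire until this returns
  return fired;
}
```

The timer's measured delay is roughly the length of the blocking work, not the 10ms it asked for; this is exactly what an event loop lag monitor detects.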

1. CPU Profiling with the V8 Profiler

// src/profiling/cpu-profiler.ts
import v8Profiler from 'v8-profiler-next';
import { writeFileSync } from 'fs';

// Profile a specific endpoint for 30 seconds
export async function profileEndpoint<T>(
  label: string,
  fn: () => Promise<T>
): Promise<T> {
  v8Profiler.startProfiling(label, true); // true = record samples (needed for flame graphs)

  const result = await fn();

  const profile = v8Profiler.stopProfiling(label);

  // Export in Chrome DevTools format
  profile.export((error, data) => {
    if (error) {
      console.error('Profile export failed:', error);
      return;
    }
    writeFileSync(`profiles/${label}-${Date.now()}.cpuprofile`, data!);
    profile.delete();
  });

  return result;
}

// Timed profile: capture CPU during peak load window
export async function captureProfile(durationMs: number, label = 'profile'): Promise<string> {
  const filename = `${label}-${Date.now()}.cpuprofile`;

  v8Profiler.startProfiling(label, true);
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  const profile = v8Profiler.stopProfiling(label);

  return new Promise((resolve, reject) => {
    profile.export((error, result) => {
      if (error) return reject(error);
      writeFileSync(`profiles/${filename}`, result!);
      profile.delete();
      resolve(filename);
    });
  });
}

// src/routes/admin/profiling.ts (admin-only profiling endpoint)
import Fastify from 'fastify';

const adminRouter = Fastify();

adminRouter.get('/admin/profile/start', {
  schema: { querystring: { type: 'object', properties: { duration: { type: 'integer' } } } },
}, async (req, reply) => {
  if (req.headers['x-admin-key'] !== process.env.ADMIN_KEY) {
    return reply.status(403).send({ error: 'Forbidden' });
  }

  const duration = (req.query as { duration?: number }).duration ?? 30_000;

  // Don't await: returns immediately, profile runs in background
  captureProfile(duration, 'on-demand')
    .then((filename) => console.log(`Profile saved: ${filename}`))
    .catch((err) => console.error('Profiling failed:', err));

  return { status: 'profiling', duration, message: `Will save in ${duration}ms` };
});

Reading Flame Graphs

Open the profile in Chrome DevTools (Performance tab → Load profile):

Bottom-up view (most useful for CPU bottlenecks):
  JSON.parse           [42%] ← hotspot
    ├── parseBody      [40%]
    │   └── middleware [40%]
    └── parseResponse  [2%]

Self time = time spent IN this function (not callees)
Total time = time in this function + everything it calls

Flame graph width = time spent; wider = more CPU
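The same kind of profile can be captured without a third-party package using Node's built-in inspector module. This is a sketch of a dependency-free equivalent of captureProfile above (the function name is illustrative):

```typescript
import { Session } from 'node:inspector';

// Capture a CPU profile in Chrome DevTools format via the built-in inspector.
export function captureInspectorProfile(durationMs: number): Promise<object> {
  const session = new Session();
  session.connect();

  return new Promise((resolve, reject) => {
    session.post('Profiler.enable', () => {
      session.post('Profiler.start', () => {
        // Sample for the requested window, then stop and collect the profile
        setTimeout(() => {
          session.post('Profiler.stop', (err: Error | null, params: any) => {
            session.disconnect();
            if (err) return reject(err);
            resolve(params.profile); // JSON you can save with a .cpuprofile extension
          });
        }, durationMs);
      });
    });
  });
}
```

Write the resolved object to a file ending in .cpuprofile and load it in DevTools exactly as above; the output format is the same.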

๐ŸŒ Looking for a Dev Team That Actually Delivers?

Most agencies sell you a project manager and assign juniors. Viprasol is different: senior engineers only, direct Slack access, and a 5.0★ Upwork record across 100+ projects.

  • React, Next.js, Node.js, TypeScript: a production-grade stack
  • Fixed-price contracts, no surprise invoices
  • Full source code ownership from day one
  • 90-day post-launch support included

2. Heap Snapshots for Memory Leaks

// src/profiling/heap-profiler.ts
import v8 from 'v8';
import { createWriteStream } from 'fs';

// Take a heap snapshot; pauses GC briefly (use in off-peak hours for prod)
export function captureHeapSnapshot(filename: string): void {
  // writeHeapSnapshot returns the path it wrote to, not a stream
  const writtenPath = v8.writeHeapSnapshot(`snapshots/${filename}`);
  console.log(`Heap snapshot written: ${writtenPath}`);
}

// Monitor heap growth over time
export function monitorHeapGrowth(intervalMs = 60_000): NodeJS.Timeout {
  let lastHeap = process.memoryUsage().heapUsed;

  return setInterval(() => {
    const mem = process.memoryUsage();
    const heapGrowthMB = (mem.heapUsed - lastHeap) / 1024 / 1024;
    lastHeap = mem.heapUsed;

    const stats = {
      heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
      heapTotalMB: Math.round(mem.heapTotal / 1024 / 1024),
      rssMB: Math.round(mem.rss / 1024 / 1024),
      externalMB: Math.round(mem.external / 1024 / 1024),
      growthMB: Math.round(heapGrowthMB * 10) / 10,
    };

    // Warn if growing > 10MB/minute
    if (heapGrowthMB > 10) {
      console.warn('Memory leak suspected:', stats);
      // Optionally: capture snapshot automatically
      if (heapGrowthMB > 50) {
        captureHeapSnapshot(`auto-leak-${Date.now()}.heapsnapshot`);
      }
    } else {
      console.log('Heap stats:', stats);
    }
  }, intervalMs);
}

Common Memory Leak Patterns

// โŒ Pattern 1: Event listener accumulation
// Each request adds a listener but never removes it
class DataProcessor extends EventEmitter {
  processRequest() {
    // BUG: adds a listener on every call; never removed
    this.on('data', (chunk) => this.handleChunk(chunk));
  }
}

// ✅ Fix: use once(), or remove the listener explicitly
class DataProcessor extends EventEmitter {
  processRequest() {
    const handler = (chunk: Buffer) => this.handleChunk(chunk);
    this.once('data', handler); // Automatically removed after first emission
    // Or: this.on('data', handler); ... later: this.off('data', handler);
  }
}

// โŒ Pattern 2: Unbounded cache
const cache = new Map<string, LargeObject>();

function processItem(id: string) {
  if (!cache.has(id)) {
    cache.set(id, expensiveOperation(id)); // Cache grows forever
  }
  return cache.get(id);
}

// ✅ Fix: LRU cache with max size
import { LRUCache } from 'lru-cache'; // v7+ exports a named class, not a default export
const cache = new LRUCache<string, LargeObject>({ max: 1000, ttl: 1000 * 60 * 5 });
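If adding a dependency is not an option, a bounded LRU can be hand-rolled on top of Map's insertion-order guarantee. A minimal sketch (SimpleLRU is an illustrative name, not a library class):

```typescript
// Minimal LRU using Map's insertion order: first key is always least recently used.
class SimpleLRU<K, V> {
  constructor(private max: number, private map = new Map<K, V>()) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark as most recently used
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.max) {
      // Evict the least recently used entry (first in insertion order)
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}
```

This trades the TTL support and fine-tuned eviction of lru-cache for zero dependencies; for most "stop the cache growing forever" fixes, a size bound alone is enough.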

// โŒ Pattern 3: Closure retaining large scope
function createHandler(largeBuffer: Buffer) {
  // largeBuffer is retained as long as the handler exists
  return function handler() {
    return largeBuffer.slice(0, 10); // Only needs 10 bytes
  };
}

// ✅ Fix: copy only what you need out of the closure scope
function createHandler(largeBuffer: Buffer) {
  // Note: Buffer slices share the underlying memory, so Buffer.from is
  // needed to make a real copy; otherwise largeBuffer stays retained
  const preview = Buffer.from(largeBuffer.subarray(0, 10));
  // largeBuffer can now be GC'd once no other references remain
  return function handler() {
    return preview;
  };
}

// โŒ Pattern 4: setInterval without clearInterval
function startProcessing() {
  // Interval never cleared โ€” leaks on service restart or test teardown
  setInterval(() => processQueue(), 1000);
}

// ✅ Fix: always return and clear intervals
function startProcessing(): NodeJS.Timeout {
  return setInterval(() => processQueue(), 1000);
}
// In cleanup: clearInterval(timer);

3. Event Loop Lag Monitoring

// src/monitoring/event-loop.ts
// Measure how long tasks wait in the event loop queue

export function measureEventLoopLag(): () => number {
  let lastCheck = Date.now();
  let currentLag = 0;

  // Schedule a check every 1 second; if it fires late, the loop was busy
  const timer = setInterval(() => {
    const now = Date.now();
    const expected = lastCheck + 1000;
    currentLag = Math.max(0, now - expected);
    lastCheck = now;

    if (currentLag > 100) {
      console.warn(`Event loop lag: ${currentLag}ms; something is blocking`);
    }
  }, 1000);

  timer.unref(); // Don't prevent process exit

  return () => currentLag;
}

// Integration with Prometheus metrics
import { Histogram, Gauge } from 'prom-client';
import type { FastifyInstance } from 'fastify';

const eventLoopLag = new Gauge({
  name: 'nodejs_event_loop_lag_ms',
  help: 'Event loop lag in milliseconds',
});

const httpDuration = new Histogram({
  name: 'http_request_duration_ms',
  help: 'HTTP request duration',
  labelNames: ['method', 'route', 'status'],
  buckets: [5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000],
});

// Fastify hook: measure per-request
export function registerMetrics(app: FastifyInstance) {
  // Keep the lag gauge current by sampling the monitor above
  const getLag = measureEventLoopLag();
  setInterval(() => eventLoopLag.set(getLag()), 5000).unref();

  app.addHook('onRequest', async (req) => {
    (req as any).startTime = Date.now();
  });

  app.addHook('onSend', async (req, reply) => {
    const duration = Date.now() - (req as any).startTime;
    const route = req.routeOptions.url ?? req.url;

    httpDuration.labels(req.method, route, reply.statusCode.toString()).observe(duration);
  });
}

🚀 Senior Engineers. No Junior Handoffs. Ever.

You get the senior developer, not a project manager who relays your requirements to someone you never meet. Every Viprasol project has a senior lead from kickoff to launch.

  • MVPs in 4–8 weeks, full platforms in 3–5 months
  • Lighthouse 90+ performance scores standard
  • Works across US, UK, AU timezones
  • Free 30-min architecture review, no commitment

4. clinic.js Automated Diagnostics

npm install -g clinic

# Doctor: automated bottleneck diagnosis
clinic doctor -- node dist/server.js

# Flame: CPU flame graph
clinic flame -- node dist/server.js

# Bubbleprof: async profiling (find I/O bottlenecks)
clinic bubbleprof -- node dist/server.js

# While server is running, generate load:
npx autocannon -c 100 -d 30 http://localhost:3000/api/products

// Generate load for profiling (use autocannon programmatically)
import autocannon from 'autocannon';

export async function benchmarkEndpoint(url: string, options?: {
  connections?: number;
  duration?: number;
}): Promise<autocannon.Result> {
  return autocannon({
    url,
    connections: options?.connections ?? 50,
    duration: options?.duration ?? 30,
    headers: {
      'Authorization': `Bearer ${process.env.TEST_TOKEN}`,
    },
  });
}

5. Production-Safe Profiling

// src/profiling/production-profiler.ts
// Profile in production without impacting all requests

let isProfilingActive = false;

export async function profileIfIdle<T>(
  label: string,
  fn: () => Promise<T>
): Promise<T> {
  // Only profile 1 in 1000 requests, and only if no active profile
  if (Math.random() > 0.001 || isProfilingActive) {
    return fn();
  }

  isProfilingActive = true;
  try {
    return await profileEndpoint(label, fn);
  } finally {
    isProfilingActive = false;
  }
}
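Math.random() gating is simple but can cluster or starve samples on low-traffic services. A deterministic 1-in-N counter gives an exact sampling rate (makeSampler is an illustrative name, not part of any library):

```typescript
// Deterministic 1-in-N sampler: returns true exactly once every n calls.
export function makeSampler(n: number): () => boolean {
  let count = 0;
  return () => ++count % n === 0;
}

// Usage sketch: const shouldProfile = makeSampler(1000);
// if (shouldProfile()) { /* profile this request */ }
```

The trade-off: the counter is per-process, so with multiple workers the combined rate is still 1-in-N per worker, and the sampled requests are evenly spaced rather than random.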

// Memory snapshot triggered by an admin API call or an alert
export async function triggerDiagnostics(): Promise<{
  heapMB: number;
  rssMB: number;
  snapshotFile: string | null;
}> {
  const mem = process.memoryUsage();
  const heapMB = Math.round(mem.heapUsed / 1024 / 1024);
  const rssMB = Math.round(mem.rss / 1024 / 1024);

  let snapshotFile: string | null = null;

  // Only capture snapshot if heap > 500MB (avoid overhead otherwise)
  if (heapMB > 500) {
    const filename = `heap-${Date.now()}.heapsnapshot`;
    captureHeapSnapshot(filename);
    snapshotFile = filename;
  }

  return { heapMB, rssMB, snapshotFile };
}

Profiling Checklist

Symptom                   | Tool                                          | Fix
High CPU, slow responses  | V8 CPU profile + flame graph                  | Optimize hot function
Growing memory over time  | Heap snapshot comparison (baseline vs leaky)  | Remove unbounded cache / listener
p99 latency spikes        | Event loop lag monitor                        | Move blocking ops to worker thread
Slow DB queries           | APM traces (Datadog/New Relic)                | Add index, optimize query
High GC pauses            | --expose-gc + gc-stats                        | Reduce allocations in hot path
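For the last row, GC pauses can also be observed without --expose-gc or a native module: Node emits 'gc' performance entries that a built-in PerformanceObserver can watch. A sketch (the function name and 50ms threshold are illustrative choices):

```typescript
import { PerformanceObserver } from 'node:perf_hooks';

// Log garbage collection pauses longer than a threshold, using built-in 'gc' entries.
export function watchGcPauses(thresholdMs = 50): PerformanceObserver {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.duration is the GC pause length in milliseconds
      if (entry.duration > thresholdMs) {
        console.warn(`Long GC pause: ${entry.duration.toFixed(1)}ms`);
      }
    }
  });
  observer.observe({ entryTypes: ['gc'] });
  return observer;
}
```

Frequent long pauses here usually point back at allocation-heavy hot paths found in the CPU profile, so the two tools are worth running together.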


Working With Viprasol

Node.js API getting slower under load, leaking memory, or having mysterious p99 latency spikes? We run structured profiling (CPU flame graphs, heap snapshot comparisons, event loop lag analysis), identify the root cause, and fix it. Most performance issues live in a handful of hot paths; finding them is the hard part.

Talk to our team → | See our web development services →


About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading

Need a Modern Web Application?

From landing pages to complex SaaS platforms, we build it all with Next.js and React.

Free consultation • No commitment • Response within 24 hours
