PostgreSQL EXPLAIN ANALYZE in 2026: Reading Query Plans, Identifying Slow Queries, and Index Fixes
Master PostgreSQL EXPLAIN ANALYZE: reading query plan nodes, identifying seq scans and hash joins, using pg_stat_statements, adding indexes, and fixing the most common slow query patterns.
Every slow PostgreSQL query has a plan: a tree of operations the planner chose to execute your query. Reading that plan is the difference between adding a random index and adding the right one. Most developers run EXPLAIN ANALYZE once, see a wall of text, and give up.
This post teaches you to read query plans systematically: what each node type means, why the planner chooses sequential scans over indexes, how to find your worst queries with pg_stat_statements, and the index patterns that fix the most common problems.
The Anatomy of EXPLAIN ANALYZE Output
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.id, o.total, u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending'
AND o.created_at > now() - INTERVAL '7 days'
ORDER BY o.created_at DESC
LIMIT 20;
Limit  (cost=142.50..8234.21 rows=20 width=52)
       (actual time=12.431..847.231 rows=20 loops=1)
  Output: o.id, o.total, u.email
  Buffers: shared hit=4 read=2183
  ->  Hash Join  (cost=142.50..183523.30 rows=465 width=52)
                 (actual time=12.431..847.229 rows=20 loops=1)
        Hash Cond: (o.user_id = u.id)
        ->  Index Scan Backward using orders_created_at_idx on orders o
              (cost=0.43..183421.00 rows=465 width=36)
              (actual time=0.108..843.198 rows=20 loops=1)
              Index Cond: (created_at > (now() - '7 days'::interval))
              Filter: (status = 'pending')
              Rows Removed by Filter: 48392
        ->  Hash  (cost=102.30..102.30 rows=5230 width=24)
              (actual time=12.290..12.291 rows=5230 loops=1)
              ->  Seq Scan on users u  (cost=0.00..102.30 rows=5230 width=24)
                    (actual time=0.012..6.231 rows=5230 loops=1)
Planning Time: 0.834 ms
Execution Time: 847.412 ms
Reading the numbers
cost=142.50..8234.21: estimated cost (startup..total) in arbitrary planner units
rows=20: estimated row count
actual time=12.431..847.231: real time in ms (startup..total)
actual rows=20: real row count
loops=1: how many times this node executed (times and rows are per loop; multiply by loops for totals)
Buffers: shared hit=4 read=2183
hit=4: 4 pages served from the shared buffer cache (memory access, microseconds each)
read=2183: 2183 pages read from outside the cache; at up to ~10ms per random read on a spinning disk, that is potentially ~22 seconds, so these reads dominate the runtime
The most important signal: Rows Removed by Filter: 48392. The index scan had to read 48,412 rows to produce the 20 that passed the status = 'pending' filter; 48,392 were discarded. This index is doing 99.96% wasted work.
The Fix: Composite Index
-- Current index: only on created_at
CREATE INDEX orders_created_at_idx ON orders(created_at);

-- Fix: composite index matching both WHERE conditions
CREATE INDEX CONCURRENTLY idx_orders_status_created_at
ON orders(status, created_at DESC)
WHERE status = 'pending'; -- Partial index: only pending rows
-- After adding the index, re-run EXPLAIN ANALYZE:
-- Index Scan using idx_orders_status_created_at on orders o
-- Index Cond: ((status = 'pending') AND (created_at > ...))
-- Rows Removed by Filter: 0 (perfect)
-- Execution Time: 0.892 ms (down from 847 ms)
Looking for a Dev Team That Actually Delivers?
Most agencies sell you a project manager and assign juniors. Viprasol is different: senior engineers only, direct Slack access, and a 5.0★ Upwork record across 100+ projects.
- React, Next.js, Node.js, TypeScript: production-grade stack
- Fixed-price contracts: no surprise invoices
- Full source code ownership from day one
- 90-day post-launch support included
Node Types: What to Look For
Sequential Scan (Seq Scan)
-- Seq Scan on orders (cost=0.00..45230.00 rows=1200000)
-- Not always bad: fast for small tables or when fetching a large fraction (>~10%) of rows
-- Bad sign: actual rows returned << total table rows = likely missing index
-- Fix: add index on the filter column(s)
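To check whether a Seq Scan is justified, compare what the query actually returns against the planner's estimate of the table size. A quick sketch, reusing the orders example from above (table and filter are illustrative):

```sql
-- Approximate row count from planner statistics (cheap; no full table scan)
SELECT relname, reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'orders';

-- If this returns far fewer rows than approx_rows, the filter is selective
-- and an index on status should pay off
SELECT count(*) FROM orders WHERE status = 'pending';
```

If the filtered count is a tiny fraction of approx_rows, the sequential scan is wasted work; if it is a large fraction, the seq scan is likely the right plan.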
Index Scan vs Bitmap Index Scan
-- Index Scan: good for small result sets, uses index to fetch rows one-by-one
-- Bitmap Index Scan: good for larger result sets, collects rowids then fetches pages
-- Bitmap Heap Scan: follows rowids from Bitmap Index Scan
-- When you see a Bitmap Heap Scan with many Index Recheck removals:
-- Recheck Cond: (status = 'active')
-- Rows Removed by Index Recheck: 12000
-- The bitmap went lossy (degraded to page-level, usually because work_mem
-- was too small) or the index type is inherently lossy, so every row on
-- matching pages must be rechecked against the condition
Nested Loop vs Hash Join vs Merge Join
-- Nested Loop: good for small inner tables (< few thousand rows)
-- bad for large outer × large inner = O(n×m)
-- Hash Join: best for larger tables, builds hash table of smaller side
-- watch for: batches > 1 = spilling to disk (increase work_mem)
-- Sort (cost=... rows=... width=...)
-- Sort Method: external merge  Disk: 42784kB  -- spilling to disk: increase work_mem
-- vs
-- Sort Method: quicksort  Memory: 256kB  -- in memory: good
-- Set work_mem for a session to fix sort/hash spills:
SET work_mem = '256MB';
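Note that work_mem is a per-sort, per-hash budget, so a single complex query can consume several multiples of it. A more targeted pattern is to scope the increase to one transaction with SET LOCAL; the query below is illustrative:

```sql
BEGIN;
SET LOCAL work_mem = '256MB';  -- Reverts automatically at COMMIT or ROLLBACK
EXPLAIN (ANALYZE, BUFFERS)
SELECT user_id, sum(total) AS lifetime_total
FROM orders
GROUP BY user_id
ORDER BY lifetime_total DESC;
COMMIT;
```

This avoids leaving an oversized work_mem in effect for every subsequent query on a pooled connection.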
Finding Slow Queries: pg_stat_statements
-- Enable in postgresql.conf:
-- shared_preload_libraries = 'pg_stat_statements'
-- pg_stat_statements.track = all
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Top 10 slowest queries by total time
SELECT
LEFT(query, 120) AS query_preview,
calls,
ROUND(total_exec_time::numeric, 2) AS total_ms,
ROUND(mean_exec_time::numeric, 2) AS avg_ms,
ROUND(stddev_exec_time::numeric, 2) AS stddev_ms,
ROUND((total_exec_time / SUM(total_exec_time) OVER ()) * 100, 1) AS pct_total
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
-- Queries with high mean time (individual slow executions)
SELECT
LEFT(query, 120) AS query_preview,
calls,
ROUND(mean_exec_time::numeric, 2) AS avg_ms,
ROUND(max_exec_time::numeric, 2) AS max_ms,
ROUND(rows::numeric / NULLIF(calls, 0), 0) AS rows_per_call
FROM pg_stat_statements
WHERE calls > 100 -- Only frequent queries
AND mean_exec_time > 100 -- Averaging over 100ms
ORDER BY mean_exec_time DESC
LIMIT 20;
-- Queries dominated by disk reads (high cache miss rate)
SELECT
LEFT(query, 120) AS query_preview,
calls,
ROUND(rows::numeric / NULLIF(calls, 0), 0) AS rows_per_call,
shared_blks_read,
shared_blks_hit,
ROUND(
shared_blks_read::numeric / NULLIF(shared_blks_read + shared_blks_hit, 0) * 100,
1
) AS cache_miss_pct
FROM pg_stat_statements
WHERE calls > 10
AND shared_blks_read > 1000
ORDER BY cache_miss_pct DESC
LIMIT 10;
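These counters accumulate since the last reset, so old traffic can drown out a recent regression. To measure a clean window (for example, around a deploy or a load test), reset the view first:

```sql
-- Clears all pg_stat_statements entries; run the reports above again
-- after representative traffic has accumulated
SELECT pg_stat_statements_reset();
```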
Senior Engineers. No Junior Handoffs. Ever.
You get the senior developer, not a project manager who relays your requirements to someone you never meet. Every Viprasol project has a senior lead from kickoff to launch.
- MVPs in 4-8 weeks, full platforms in 3-5 months
- Lighthouse 90+ performance scores standard
- Works across US, UK, AU timezones
- Free 30-min architecture review, no commitment
Index Usage Analysis
-- Find indexes never used (candidates for removal)
SELECT
schemaname,
relname AS tablename,
indexrelname AS indexname,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
idx_scan AS scans_since_reset
FROM pg_stat_user_indexes
WHERE idx_scan = 0
AND indexrelname NOT LIKE '%pkey%' -- Keep primary keys
ORDER BY pg_relation_size(indexrelid) DESC;
-- Find tables dominated by sequential scans (likely missing or unused indexes)
SELECT
schemaname,
relname AS tablename,
seq_scan,
seq_tup_read,
idx_scan,
ROUND(seq_scan::numeric / NULLIF(seq_scan + idx_scan, 0) * 100, 1) AS seq_pct
FROM pg_stat_user_tables
WHERE seq_scan > 100
AND seq_scan > idx_scan
ORDER BY seq_tup_read DESC
LIMIT 20;
-- Index usage summary (for true bloat measurement, use the pgstattuple extension)
SELECT
relname AS tablename,
indexrelname AS indexname,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
idx_scan,
idx_tup_read,
CASE WHEN idx_scan = 0 THEN 'UNUSED'
WHEN idx_scan < 50 THEN 'RARELY USED'
ELSE 'OK'
END AS status
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 20;
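Before acting on an "unused" verdict, remember the counters only go back to the last statistics reset, and an index may exist solely for an infrequent job such as a monthly report. Once you are confident, the drop can be done without blocking writes (the index name below is illustrative):

```sql
-- Check how far back the usage counters go
SELECT stats_reset FROM pg_stat_database WHERE datname = current_database();

-- CONCURRENTLY avoids an exclusive table lock, but cannot run inside a transaction
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_legacy_status;
```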
Auto-Explain: Log Slow Query Plans
-- postgresql.conf (or ALTER SYSTEM):
-- shared_preload_libraries = 'auto_explain'
-- auto_explain.log_min_duration = 1000 -- Log plans for queries > 1s
-- auto_explain.log_analyze = true
-- auto_explain.log_buffers = true
-- auto_explain.log_nested_statements = true
-- Session-level (no restart needed, for debugging):
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 100; -- Log queries > 100ms
SET auto_explain.log_analyze = true;
-- Plans appear in the PostgreSQL log; tail with:
-- tail -f /var/log/postgresql/postgresql.log | grep -A 30 "duration:"
Common Slow Query Patterns and Fixes
1. OR prevents index use
-- Slow: OR across different columns often can't use a single index
-- (the planner may fall back to a seq scan or an expensive BitmapOr)
SELECT * FROM orders WHERE status = 'pending' OR user_id = $1;
-- Fix: UNION ALL (each branch can use its own index)
SELECT * FROM orders WHERE status = 'pending'
UNION ALL
SELECT * FROM orders WHERE user_id = $1 AND status != 'pending';
2. Function on indexed column disables index
-- Slow: a function wrapping the indexed column prevents use of a plain index
SELECT * FROM users WHERE LOWER(email) = $1;
-- Fix: expression (functional) index
CREATE INDEX idx_users_email_lower ON users(LOWER(email));
SELECT * FROM users WHERE LOWER(email) = $1; -- Now uses the index
3. Leading wildcard in LIKE
-- Slow: leading % defeats a B-tree index
SELECT * FROM products WHERE name LIKE '%widget%';
-- Fix 1: pg_trgm for substring search
CREATE EXTENSION pg_trgm;
CREATE INDEX idx_products_name_trgm ON products USING gin(name gin_trgm_ops);
SELECT * FROM products WHERE name LIKE '%widget%'; -- Uses trgm index
-- Fix 2: Full-text search for word-based matching
CREATE INDEX idx_products_fts ON products USING gin(to_tsvector('english', name));
SELECT * FROM products WHERE to_tsvector('english', name) @@ plainto_tsquery('widget');
4. OFFSET pagination kills performance
-- Slow: OFFSET 10000 scans and discards 10,000 rows before returning any
SELECT * FROM orders ORDER BY created_at DESC LIMIT 20 OFFSET 10000;
-- Fix: keyset (cursor) pagination
-- First page:
SELECT * FROM orders ORDER BY created_at DESC, id DESC LIMIT 20;
-- Next page (using last row's values as cursor):
SELECT * FROM orders
WHERE (created_at, id) < ($last_created_at, $last_id)
ORDER BY created_at DESC, id DESC
LIMIT 20;
-- Requires composite index: (created_at DESC, id DESC)
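One way to create that supporting index, sketched here for the orders example (a plain ascending index on the same columns also works, since PostgreSQL can scan it backward):

```sql
CREATE INDEX CONCURRENTLY idx_orders_created_at_id
ON orders (created_at DESC, id DESC);
```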
EXPLAIN Options Cheatsheet
-- Basic plan (no execution)
EXPLAIN SELECT ...;
-- Plan + actual time + row counts
EXPLAIN ANALYZE SELECT ...;
-- Plan + actual + buffer usage (disk vs cache)
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
-- JSON format (for pgBadger, explain.depesz.com)
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) SELECT ...;
-- Verbose: shows column output lists
EXPLAIN (ANALYZE, VERBOSE) SELECT ...;
-- Settings: shows which GUC settings affected the plan
EXPLAIN (ANALYZE, SETTINGS) SELECT ...;
-- WAL: shows write-ahead log activity
EXPLAIN (ANALYZE, WAL) UPDATE orders SET status = 'shipped' WHERE id = $1;
Cost and Timeline
| Task | Timeline | Cost (USD) |
|---|---|---|
| Slow query audit (pg_stat_statements review) | 0.5-1 day | $400-$800 |
| Index audit (unused + missing) | 0.5 day | $300-$500 |
| Per-query EXPLAIN ANALYZE + fix | 0.5-1 day each | $400-$800 each |
| Full query performance audit + index strategy | 1-2 weeks | $5,000-$10,000 |
See Also
- PostgreSQL Connection Pooling: connection overhead in query timing
- PostgreSQL Materialized Views: pre-compute expensive aggregations
- PostgreSQL Full-Text Search: GIN index for text queries
- PostgreSQL Window Functions: optimizing analytic queries
Working With Viprasol
We audit PostgreSQL query performance for production SaaS databases, from single slow query investigations through full schema and index reviews. Our team has taken slow queries from hundreds of milliseconds down to sub-millisecond by adding the right composite or partial index.
What we deliver:
- pg_stat_statements audit: top 20 queries by total time, avg time, cache miss rate
- EXPLAIN ANALYZE interpretation with fix recommendation
- Index strategy: composite, partial, functional, GIN/GiST
- auto_explain configuration for ongoing slow query logging
- Before/after performance benchmarks
Explore our web development services or contact us to speed up your PostgreSQL queries.
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.
Need a Modern Web Application?
From landing pages to complex SaaS platforms, we build it all with Next.js and React.
Free consultation • No commitment • Response within 24 hours
Need a custom web application built?
We build React and Next.js web applications with Lighthouse ≥90 scores, mobile-first design, and full source code ownership. Senior engineers only, from architecture through deployment.