
PostgreSQL Performance Tuning: Indexes, Query Optimization, and Configuration

PostgreSQL performance tuning in 2026 — index types, EXPLAIN ANALYZE, query optimization, connection pooling, autovacuum, and configuration settings that make a measurable difference.

Viprasol Tech Team
April 26, 2026
13 min read

PostgreSQL is fast. Most PostgreSQL performance problems are not PostgreSQL's fault — they're missing indexes, unoptimized queries, or misconfiguration. This guide covers the diagnostics and fixes that consistently move the needle.


Step 1: Find the Slow Queries

Never optimize blind. Find what's actually slow first.

-- Enable pg_stat_statements (add to postgresql.conf, then restart)
-- shared_preload_libraries = 'pg_stat_statements'

-- Top 20 slowest queries by total execution time
SELECT
  LEFT(query, 120)          AS query_preview,
  calls,
  ROUND(total_exec_time::numeric / 1000, 2)  AS total_secs,
  ROUND(mean_exec_time::numeric, 2)           AS avg_ms,
  ROUND(stddev_exec_time::numeric, 2)         AS stddev_ms,
  rows / NULLIF(calls, 0)                     AS avg_rows
FROM pg_stat_statements
WHERE calls > 50
ORDER BY total_exec_time DESC
LIMIT 20;
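After fixing the worst offenders, zero the counters so the next measurement window starts clean. The reset function ships with the same extension:

```sql
-- Clear accumulated pg_stat_statements counters; subsequent readings
-- reflect only queries run after the reset
SELECT pg_stat_statements_reset();
```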

-- Queries with highest cache miss rate (reads from disk vs memory)
SELECT
  LEFT(query, 100) AS query,
  calls,
  shared_blks_hit,
  shared_blks_read,
  ROUND(shared_blks_hit::numeric / NULLIF(shared_blks_hit + shared_blks_read, 0) * 100, 1) AS cache_hit_pct
FROM pg_stat_statements
WHERE calls > 10
  AND (shared_blks_hit + shared_blks_read) > 1000
ORDER BY cache_hit_pct ASC
LIMIT 20;

-- Current long-running queries (> 30 seconds)
SELECT
  pid,
  now() - query_start AS duration,
  state,
  LEFT(query, 200) AS query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > INTERVAL '30 seconds'
ORDER BY duration DESC;
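When a runaway query from that list has to be stopped, the pid column feeds straight into the built-in cancellation functions (a sketch; 12345 is a placeholder pid from the query above):

```sql
-- Politely cancel the running query; the connection stays open
SELECT pg_cancel_backend(12345);

-- Forcefully terminate the backend if the cancel is ignored
SELECT pg_terminate_backend(12345);
```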

Step 2: Read EXPLAIN ANALYZE

EXPLAIN ANALYZE runs the query and shows the actual execution plan with real row counts and timing.

-- Always use ANALYZE, BUFFERS to see actual data
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT
  o.id,
  o.total,
  u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending'
  AND o.created_at > NOW() - INTERVAL '7 days'
ORDER BY o.created_at DESC
LIMIT 50;

What to look for:

Seq Scan on orders  (cost=0.00..45823.00 rows=123456 width=64)
                    (actual time=0.124..892.431 rows=123456 loops=1)
  Filter: (status = 'pending' AND created_at > ...)
  Rows Removed by Filter: 876544
  Buffers: shared hit=12453 read=21089

Red flags:

  • Seq Scan on a large table — almost always means a missing index
  • Rows Removed by Filter >> actual rows — index would help
  • Buffers: read= is high — data not in memory, disk I/O bound
  • actual rows >> estimated rows — stale statistics, run ANALYZE
  • Hash Join with large hash batches — may need work_mem increase

Green flags:

  • Index Scan or Index Only Scan — using an index
  • Buffers: shared hit= dominates over read — data in cache
  • Estimated and actual rows are close — good statistics
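Two of the red flags above have one-line fixes worth keeping at hand (a sketch, using the orders table from the example; the column and target values are illustrative):

```sql
-- Stale statistics (actual rows far from estimated): refresh the planner's stats
ANALYZE orders;

-- If the planner keeps mis-estimating one column, raise its sampling detail
ALTER TABLE orders ALTER COLUMN status SET STATISTICS 500;
ANALYZE orders;
```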


Step 3: Create the Right Indexes

B-tree Indexes (Default — Most Common)

-- Single column: equality and range queries
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders(user_id);
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders(created_at);

-- Composite: when you always filter by both columns
-- Order matters: equality-filtered columns first, the range/sort column last
CREATE INDEX CONCURRENTLY idx_orders_status_created 
ON orders(status, created_at DESC);
-- Supports: WHERE status = 'pending' ORDER BY created_at DESC

-- Partial index: only index a subset of rows (smaller, faster)
CREATE INDEX CONCURRENTLY idx_orders_pending 
ON orders(created_at DESC)
WHERE status = 'pending';
-- 10x smaller than full index if 10% of orders are pending

Covering Indexes (Index Only Scan)

When the query only reads columns in the index, PostgreSQL never touches the table:

-- Query: SELECT id, email FROM users WHERE email LIKE 'john%'
-- Covering index: includes all SELECT columns
-- Note: prefix LIKE only uses a B-tree under the C collation;
-- otherwise index with text_pattern_ops
CREATE INDEX CONCURRENTLY idx_users_email_covering
ON users(email text_pattern_ops) INCLUDE (id, display_name);
-- Index Only Scan — table never touched

GIN Indexes for JSONB and Arrays

-- JSONB column with GIN index — supports @>, ?, ?| operators
CREATE INDEX CONCURRENTLY idx_products_attributes_gin
ON products USING GIN (attributes);

-- Query: products where attributes contains "color": "red"
SELECT * FROM products WHERE attributes @> '{"color": "red"}';

-- Array column
CREATE INDEX CONCURRENTLY idx_posts_tags_gin ON posts USING GIN (tags);
SELECT * FROM posts WHERE tags @> ARRAY['postgresql', 'performance'];

Text Search Indexes

-- Full-text search with tsvector
ALTER TABLE articles ADD COLUMN search_vector tsvector
  GENERATED ALWAYS AS (
    setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(body, '')), 'B')
  ) STORED;

CREATE INDEX CONCURRENTLY idx_articles_search ON articles USING GIN (search_vector);

-- Query
SELECT title, ts_rank(search_vector, query) AS rank
FROM articles, to_tsquery('english', 'postgresql & performance') query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 10;

Step 4: Rewrite Problematic Query Patterns

N+1 Queries — Fix with JOINs or CTEs

-- ❌ N+1: For each order, fetch user separately
-- Application code doing this in a loop is a common ORM problem

-- ✅ Single query with JOIN
SELECT
  o.id,
  o.total,
  o.created_at,
  u.email,
  u.display_name
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending'
ORDER BY o.created_at DESC
LIMIT 100;

Avoid Functions on Indexed Columns

-- ❌ Function on indexed column → index unusable
SELECT * FROM users WHERE LOWER(email) = 'john@example.com';

-- ✅ Option 1: Functional index
CREATE INDEX CONCURRENTLY idx_users_email_lower ON users(LOWER(email));

-- ✅ Option 2: Store pre-lowercased and enforce it with a CHECK constraint
ALTER TABLE users ADD CONSTRAINT email_lowercase CHECK (email = LOWER(email));
-- The application lowercases on write; plain WHERE email = ... then uses the index

Efficient Pagination

-- ❌ OFFSET pagination: scans all skipped rows
SELECT * FROM orders ORDER BY id LIMIT 20 OFFSET 10000;
-- PostgreSQL scans 10,020 rows, returns 20

-- ✅ Keyset (cursor-based): jumps straight past the last seen ID via the index,
-- so cost per page stays constant regardless of depth
SELECT * FROM orders
WHERE id > $last_seen_id   -- Pass last ID from previous page
ORDER BY id
LIMIT 20;
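When the sort key is not unique (e.g. created_at timestamps), the cursor needs a tie-breaker or rows can be skipped or repeated at page boundaries. A row-value comparison keeps the pattern index-friendly (a sketch; the $last_* placeholders come from the previous page):

```sql
-- Composite cursor: (created_at, id), newest first
SELECT * FROM orders
WHERE (created_at, id) < ($last_created_at, $last_id)
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Backed by a matching composite index
CREATE INDEX CONCURRENTLY idx_orders_created_id
ON orders (created_at DESC, id DESC);
```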

Window Functions Instead of Correlated Subqueries

-- ❌ Correlated subquery: executes once per row
SELECT
  o.id,
  o.user_id,
  o.total,
  (SELECT SUM(total) FROM orders WHERE user_id = o.user_id) AS user_lifetime_value
FROM orders o
WHERE o.status = 'completed';

-- ✅ Window function: single pass
SELECT
  id,
  user_id,
  total,
  SUM(total) OVER (PARTITION BY user_id) AS user_lifetime_value
FROM orders
WHERE status = 'completed';


Step 5: Configuration Tuning

The PostgreSQL defaults are conservative. These settings have the biggest impact:

# postgresql.conf — production settings for a 16GB RAM server

# Memory
shared_buffers = 4GB              # 25% of total RAM
effective_cache_size = 12GB       # 75% of total RAM (hint to planner)
work_mem = 64MB                   # Per-operation memory (sorts, hash joins)
maintenance_work_mem = 1GB        # For VACUUM, index builds

# WAL / Write performance
wal_buffers = 64MB
checkpoint_completion_target = 0.9
max_wal_size = 4GB

# Connection settings (use PgBouncer for connection pooling)
max_connections = 200

# Statistics — helps planner make better decisions
default_statistics_target = 200   # Default 100; increase for complex queries

# Logging — identify slow queries
log_min_duration_statement = 1000 # Log queries > 1 second
log_checkpoints = on
log_lock_waits = on
# Apply PGTune recommendations for your server size
# https://pgtune.leopard.in.ua/

-- Check current settings
SELECT name, setting, unit FROM pg_settings
WHERE name IN (
  'shared_buffers', 'effective_cache_size', 'work_mem',
  'max_connections', 'default_statistics_target'
);
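work_mem applies per sort or hash operation, so a global bump multiplies across every operation in every connection. For one heavy report it is safer to raise it locally (a sketch; the aggregate query is illustrative):

```sql
-- Scoped to this transaction only; reverts automatically at COMMIT/ROLLBACK
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT user_id, SUM(total) FROM orders GROUP BY user_id;  -- big sort/hash aggregate
COMMIT;
```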

Step 6: Monitor Table Bloat and Autovacuum

PostgreSQL uses MVCC — deleted/updated rows leave dead tuples that accumulate and slow queries.

-- Tables with highest bloat
SELECT
  schemaname,
  tablename,
  pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
  n_dead_tup,
  n_live_tup,
  ROUND(n_dead_tup::numeric / NULLIF(n_live_tup + n_dead_tup, 0) * 100, 1) AS dead_pct,
  last_autovacuum,
  last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

-- Manual vacuum on high-bloat tables
VACUUM ANALYZE orders;

-- Verbose vacuum: reports how many dead tuples were found and removed
VACUUM (ANALYZE, VERBOSE) orders;
-- Tune autovacuum for high-write tables
-- (per-table with ALTER TABLE ... SET, or globally in postgresql.conf)
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.01,  -- Vacuum when 1% of rows are dead (default 20%)
  autovacuum_analyze_scale_factor = 0.005
);

Quick Performance Wins Checklist

  • Missing foreign key indexes. How: pg_stat_user_tables + pg_constraint. Impact: high.
  • Tables relying on seq scans. How: pg_stat_user_tables.seq_scan > 100. Impact: high.
  • Unused indexes. How: pg_stat_user_indexes.idx_scan = 0. Impact: medium (saves space and write overhead).
  • Stale statistics. How: last_autoanalyze > 3 days ago. Impact: medium.
  • High dead tuple ratio. How: n_dead_tup / (n_live_tup + n_dead_tup) > 10%. Impact: medium.
  • Low cache hit rate. How: pg_stat_database hit ratio < 95%. Impact: high (add RAM).
  • work_mem too low. How: EXPLAIN shows disk sorts ("Sort Method: external merge"). Impact: medium.
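The foreign-key and unused-index checks can be scripted against the system catalogs. A sketch (the leading-column test is an approximation that only checks the FK's first column; verify candidates before creating or dropping anything):

```sql
-- Foreign keys whose first column is not the leading column of any index
SELECT c.conrelid::regclass          AS table_name,
       c.conname                     AS fk_name,
       pg_get_constraintdef(c.oid)   AS definition
FROM pg_constraint c
WHERE c.contype = 'f'
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.conrelid
      AND i.indkey[0] = c.conkey[1]  -- indkey is 0-based, conkey is 1-based
  );

-- Indexes never scanned since stats were last reset (removal candidates)
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```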

Working With Viprasol

We run PostgreSQL performance audits — finding slow queries, identifying missing indexes, tuning configuration, and implementing schema changes that improve performance by 5–50×.

Database performance audit →
Software Development Services →



About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
