
PostgreSQL Performance: PgBouncer, VACUUM, Autovacuum Tuning, and Table Partitioning

Tune PostgreSQL for production in 2026 — PgBouncer connection pooling setup, VACUUM and autovacuum configuration, bloat monitoring, and table partitioning for time-series data.

Viprasol Tech Team
June 17, 2026
13 min read


PostgreSQL defaults are conservative — suitable for a wide range of hardware but not optimal for any specific workload. Production PostgreSQL requires configuration tuning, connection management, and an understanding of MVCC's maintenance requirements.

These are the changes that produce the largest improvements per hour invested.


Connection Pooling with PgBouncer

PostgreSQL spawns a new OS process per connection. Each connection costs ~5–10MB of RAM. At 500 concurrent connections, that's 2.5–5GB just for connection overhead — before any actual query work.

The solution: PgBouncer sits between your application and PostgreSQL, maintaining a small pool of actual database connections and multiplexing hundreds of application connections across them.

Application (1000 connections) → PgBouncer (30 real DB connections) → PostgreSQL
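The memory arithmetic above reduces to a one-liner. A sketch, using the midpoint of the ~5–10MB range (the 7.5MB figure and the helper name are illustrative, not measured values):

```typescript
// Rough per-connection overhead model; 7.5 MB is the midpoint of the
// ~5–10 MB range cited above, purely for illustration.
function connectionOverheadMB(connections: number, perConnMB = 7.5): number {
  return connections * perConnMB;
}

const direct = connectionOverheadMB(1000); // 1000 app connections straight to PostgreSQL
const pooled = connectionOverheadMB(30);   // 30 real connections behind PgBouncer
console.log(`direct: ${direct} MB, pooled: ${pooled} MB`);
// direct: 7500 MB, pooled: 225 MB
```

The point of the diagram: the pool size, not the app's connection count, determines PostgreSQL's memory bill.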

PgBouncer pooling modes:

Mode        | How It Works                                        | Use When
Session     | One DB connection per app session until disconnect  | App uses SET, advisory locks, prepared statements
Transaction | DB connection held only during a transaction        | Most web apps — best throughput
Statement   | DB connection released after each statement         | Can't use multi-statement transactions

Transaction mode is the right choice for most Next.js/Fastify apps. The caveat: the server connection changes between transactions, so session state (SET, advisory locks, named prepared statements) doesn't carry over — apps that depend on those need session mode.

Docker Compose PgBouncer setup:

# docker-compose.yml
services:
  pgbouncer:
    image: bitnami/pgbouncer:latest
    environment:
      POSTGRESQL_HOST: postgres
      POSTGRESQL_PORT: 5432
      POSTGRESQL_DATABASE: app_production
      POSTGRESQL_USERNAME: app
      POSTGRESQL_PASSWORD: ${DB_PASSWORD}
      PGBOUNCER_PORT: 6432
      PGBOUNCER_DATABASE: app_production
      PGBOUNCER_POOL_MODE: transaction
      PGBOUNCER_MAX_CLIENT_CONN: 1000   # Max app connections to PgBouncer
      PGBOUNCER_DEFAULT_POOL_SIZE: 25   # Real DB connections per database
      PGBOUNCER_MIN_POOL_SIZE: 5        # Keep minimum connections alive
      PGBOUNCER_RESERVE_POOL_SIZE: 5    # Emergency reserve connections
      PGBOUNCER_RESERVE_POOL_TIMEOUT: 5
      PGBOUNCER_SERVER_IDLE_TIMEOUT: 600
    ports:
      - "6432:6432"
    depends_on:
      - postgres

# pgbouncer.ini (alternative: direct file config)
[databases]
app_production = host=postgres port=5432 dbname=app_production

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5      # Use scram-sha-256 if PostgreSQL uses SCRAM password encryption (its modern default)
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 5
server_idle_timeout = 600
log_connections = 0  # Disable in production (noisy)
log_disconnections = 0

Monitor PgBouncer:

-- Connect to pgbouncer admin console (port 6432, db "pgbouncer")
SHOW POOLS;
-- sv_active: DB connections currently executing queries
-- sv_idle:   DB connections idle and available
-- cl_active: App connections using a DB connection
-- cl_waiting: App connections waiting for a pool slot (>0 means pool is saturated)

SHOW STATS;
-- total_query_time, total_wait_time, total_requests
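If you scrape SHOW POOLS programmatically, the one number worth alerting on is cl_waiting. A minimal sketch — the row shape mirrors the columns above, but the interface and function names are mine:

```typescript
// Subset of SHOW POOLS columns relevant to saturation alerting.
interface PoolRow {
  database: string;
  cl_active: number;
  cl_waiting: number;
  sv_active: number;
  sv_idle: number;
}

// A pool is saturated when clients are queued waiting for a server connection.
function saturatedPools(rows: PoolRow[]): string[] {
  return rows.filter((r) => r.cl_waiting > 0).map((r) => r.database);
}
```

Sustained cl_waiting > 0 means you should raise default_pool_size — or find the slow queries hogging server connections.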

PostgreSQL Configuration Tuning

PostgreSQL's most impactful settings (for a dedicated 16GB RAM server):

# postgresql.conf — key settings

# Memory
shared_buffers = 4GB              # 25% of total RAM — PostgreSQL's main cache
effective_cache_size = 12GB       # 75% of RAM — planner hint, not actual allocation
work_mem = 64MB                   # Memory per sort/hash operation per connection
                                  # With 100 connections: can use 100 × 64MB = 6.4GB
                                  # Set conservatively; raise per-query with SET if needed
maintenance_work_mem = 1GB        # For VACUUM, CREATE INDEX — raise for faster maintenance

# WAL (Write-Ahead Log)
wal_buffers = 64MB                # Buffer for WAL writes (default 4MB is too small)
checkpoint_completion_target = 0.9 # Spread checkpoint I/O across 90% of checkpoint interval
max_wal_size = 4GB                # Allow larger WAL before forced checkpoint
min_wal_size = 1GB

# Parallelism
max_parallel_workers_per_gather = 4     # Parallel query workers
max_parallel_maintenance_workers = 4   # Parallel VACUUM and CREATE INDEX workers
max_worker_processes = 16

# Connections
max_connections = 100             # Keep low when using PgBouncer (PgBouncer handles multiplexing)

# Statistics
track_io_timing = on              # Enables I/O time in EXPLAIN ANALYZE BUFFERS
track_functions = pl              # Track function call counts

# Logging (production)
log_min_duration_statement = 1000  # Log queries over 1 second
log_lock_waits = on               # Log when a query waits on a lock
deadlock_timeout = 1s
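The 25%/75% memory rules and the work_mem worst case above are simple arithmetic, so they're easy to re-derive for other machine sizes. A sketch (the function name is mine; the ratios come from the config comments above):

```typescript
// Apply the rules of thumb: shared_buffers ≈ 25% of RAM,
// effective_cache_size ≈ 75%, and work_mem's worst case is
// max_connections × work_mem (every connection sorting at once).
function sizeMemory(totalRamGB: number, maxConnections: number, workMemMB: number) {
  return {
    sharedBuffersGB: totalRamGB * 0.25,
    effectiveCacheSizeGB: totalRamGB * 0.75,
    worstCaseWorkMemGB: (maxConnections * workMemMB) / 1024,
  };
}

console.log(sizeMemory(16, 100, 64));
// → { sharedBuffersGB: 4, effectiveCacheSizeGB: 12, worstCaseWorkMemGB: 6.25 }
// (≈ the "6.4GB" quoted above, which rounds 1GB to 1000MB)
```

Note that a single query can use multiples of work_mem — one allocation per sort or hash node in the plan — so the worst case is a floor, not a ceiling.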


VACUUM and Autovacuum

PostgreSQL uses MVCC (Multi-Version Concurrency Control). When you UPDATE or DELETE a row, the old version is kept for transactions that started before your change. Dead tuples accumulate and must be cleaned up by VACUUM.

What happens without regular VACUUM:

  1. Table bloat — dead tuples take space, making table scans slower
  2. Index bloat — index entries for dead rows
  3. Transaction ID wraparound — after ~2 billion transactions, PostgreSQL will stop accepting writes to prevent corruption

Autovacuum runs automatically, but its defaults are often too conservative for high-write tables:

-- Check tables needing vacuum attention
SELECT
  relname AS table,
  n_live_tup AS live_rows,
  n_dead_tup AS dead_rows,
  round(n_dead_tup * 100.0 / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct,
  last_autovacuum,
  last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

-- If dead_pct > 10% on active tables, autovacuum is not keeping up
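The dead_pct expression from that query is worth mirroring in application-side monitoring. A direct translation (the helper name is mine):

```typescript
// Mirrors: round(n_dead_tup * 100.0 / nullif(n_live_tup + n_dead_tup, 0), 1)
function deadPct(liveRows: number, deadRows: number): number {
  const total = liveRows + deadRows;
  if (total === 0) return 0; // stand-in for SQL's nullif divide-by-zero guard
  return Math.round((deadRows * 100 / total) * 10) / 10;
}

console.log(deadPct(900_000, 100_000)); // → 10, right at the "not keeping up" threshold
```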

Tune autovacuum for high-write tables:

-- Per-table autovacuum settings (overrides postgresql.conf)
-- For a high-write table like 'events' (millions of inserts/day)
ALTER TABLE events SET (
  autovacuum_vacuum_scale_factor = 0.01,   -- Vacuum when 1% of table is dead (default 20%)
  autovacuum_analyze_scale_factor = 0.005, -- Analyze when 0.5% of table is new/changed
  autovacuum_vacuum_cost_delay = 2,        -- ms pause between I/O bursts (default 2ms since PG 12; 20ms before)
  autovacuum_vacuum_threshold = 100        -- Minimum dead rows before triggering
);

-- For time-series tables where you mostly insert (rarely update/delete),
-- there is little dead-tuple work, so you can disable autovacuum and vacuum
-- manually off-peak. If you do, you MUST schedule those manual vacuums —
-- anti-wraparound freezing still has to happen:
ALTER TABLE metrics SET (autovacuum_enabled = false);
-- Then schedule: VACUUM ANALYZE metrics; (in a cron job at 3am)

Manual VACUUM for immediate relief:

-- Non-blocking VACUUM (safe to run in production)
VACUUM (ANALYZE, VERBOSE) orders;

-- VACUUM FULL reclaims disk space but locks the table — use during maintenance window only
-- For large tables, use pg_repack instead (doesn't lock)
-- pg_repack: https://github.com/reorg/pg_repack

Table Partitioning

Partitioning splits a large table into smaller physical pieces based on a key (usually a timestamp or tenant ID). PostgreSQL's query planner can skip partitions that don't match the query — "partition pruning."

When to partition:

  • Table > 50GB and growing
  • Queries almost always filter by the partition key
  • You need fast bulk deletes (drop a partition instead of deleting rows)
  • You want to move old data to cheaper storage

Range partitioning by month (time-series data):

-- Create partitioned table
CREATE TABLE events (
  id          UUID NOT NULL DEFAULT gen_random_uuid(),
  tenant_id   UUID NOT NULL,
  event_type  TEXT NOT NULL,
  payload     JSONB,
  created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
) PARTITION BY RANGE (created_at);

-- Create monthly partitions
CREATE TABLE events_2026_01 PARTITION OF events
  FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');

CREATE TABLE events_2026_02 PARTITION OF events
  FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');

-- Continue monthly...

-- Each partition gets its own indexes (alternatively, create the index once
-- on the parent table — PostgreSQL 11+ propagates it to every partition)
CREATE INDEX ON events_2026_01 (tenant_id, created_at DESC);
CREATE INDEX ON events_2026_02 (tenant_id, created_at DESC);

-- Queries with created_at filter use only relevant partition(s)
EXPLAIN SELECT * FROM events
WHERE tenant_id = '...' AND created_at >= '2026-01-01' AND created_at < '2026-02-01';
-- → Scans only events_2026_01, not all partitions

Automate partition creation:

// scripts/create-monthly-partition.ts
// Run monthly via cron job (first day of the month)
import { db } from '../lib/db';

async function createNextMonthPartition(): Promise<void> {
  const now = new Date();
  const next = new Date(now.getFullYear(), now.getMonth() + 1, 1);
  const after = new Date(now.getFullYear(), now.getMonth() + 2, 1);

  const fromDate = next.toISOString().slice(0, 10);   // 2026-07-01
  const toDate = after.toISOString().slice(0, 10);     // 2026-08-01
  const suffix = fromDate.slice(0, 7).replace('-', '_');  // 2026_07

  // Prisma's $executeRawUnsafe runs a single statement per call, so issue two
  await db.$executeRawUnsafe(`
    CREATE TABLE IF NOT EXISTS events_${suffix} PARTITION OF events
      FOR VALUES FROM ('${fromDate}') TO ('${toDate}')
  `);
  await db.$executeRawUnsafe(`
    CREATE INDEX IF NOT EXISTS idx_events_${suffix}_tenant_created
      ON events_${suffix} (tenant_id, created_at DESC)
  `);

  console.log(`Created partition: events_${suffix}`);
}

createNextMonthPartition().catch((err) => {
  console.error('Partition creation failed:', err);
  process.exit(1); // Non-zero exit so the cron job's failure is visible
});

Dropping old partitions (fast, no table scan):

-- Drop a partition that's over retention period (instant, no row-by-row delete)
ALTER TABLE events DETACH PARTITION events_2024_01;
DROP TABLE events_2024_01;
-- vs DELETE FROM events WHERE created_at < '2024-02-01' (could take hours on large table)
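The detach-and-drop pattern pairs naturally with an automated retention sweep. A sketch that selects expired partition suffixes, using the naming convention from the script above (the helper itself is hypothetical):

```typescript
// Given partition suffixes like "2024_01", return those whose month falls
// before the retention cutoff (retentionMonths before `now`).
function expiredPartitions(suffixes: string[], now: Date, retentionMonths: number): string[] {
  const cutoff = new Date(now.getFullYear(), now.getMonth() - retentionMonths, 1);
  return suffixes.filter((s) => {
    const [year, month] = s.split("_").map(Number);
    return new Date(year, month - 1, 1) < cutoff;
  });
}

// With 12-month retention in June 2026, only the 2024 partition expires:
console.log(expiredPartitions(["2024_01", "2025_12", "2026_05"], new Date(2026, 5, 17), 12));
// → [ '2024_01' ]
```

Each returned suffix would then get the DETACH PARTITION / DROP TABLE pair shown above.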


Monitoring Bloat and Table Size

-- Table and index sizes with dead-tuple counts
-- (pg_stat_user_tables exposes relid/relname, not tablename)
SELECT
  relname AS table,
  pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
  pg_size_pretty(pg_relation_size(relid)) AS table_size,
  pg_size_pretty(pg_indexes_size(relid)) AS index_size,
  n_live_tup AS live_rows,
  n_dead_tup AS dead_rows
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 20;

-- Transaction ID age (wraparound risk — should be well under 1.5 billion)
SELECT
  relname,
  age(relfrozenxid) AS xid_age,
  pg_size_pretty(pg_total_relation_size(oid)) AS size
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
-- If xid_age > 1.5 billion: VACUUM FREEZE immediately
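Those thresholds translate into a trivial severity check for alerting. The 1.5-billion critical line comes from the comment above; the 1-billion "warn" tier is my own illustrative buffer:

```typescript
// Wraparound hits near ~2.1 billion transactions; the article treats
// 1.5 billion as the emergency line. The 1-billion warn tier is an
// assumed early-warning buffer, not a PostgreSQL-defined threshold.
function xidRisk(xidAge: number): "ok" | "warn" | "critical" {
  if (xidAge > 1_500_000_000) return "critical"; // VACUUM FREEZE now
  if (xidAge > 1_000_000_000) return "warn";     // schedule an aggressive vacuum
  return "ok";
}

console.log(xidRisk(200_000_000)); // → ok
```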

Working With Viprasol

We tune PostgreSQL for production SaaS workloads — PgBouncer connection pooling, autovacuum configuration, table partitioning, index optimization, and ongoing performance monitoring.

Talk to our team about database performance and architecture.



About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
