PostgreSQL Table Partitioning: Range, List, Hash, and pg_partman Automation
Master PostgreSQL table partitioning: range partitioning for time-series data, list partitioning by region or tenant, hash partitioning for even distribution, partition pruning, and pg_partman for automated maintenance.
A single PostgreSQL table with 500 million rows is manageable. A billion rows starts to show cracks: vacuum takes hours, index bloat accumulates, and queries that should prune by date still scan large portions of the table. Partitioning splits the physical storage while keeping the logical table interface intact. Queries targeting a date range touch only the relevant partitions; vacuum runs per partition in minutes.
This post covers the three partitioning strategies with production examples, partition pruning mechanics, index design for partitioned tables, and pg_partman for fully automated partition lifecycle management.
When to Partition
Partitioning adds operational complexity. Use it when:
- Table exceeds 500M rows or 100GB
- You have clear time-based or categorical access patterns
- Bulk deletes by date are a regular operation (log retention, GDPR cleanup)
- Vacuum and autovacuum are taking too long on a single table
Don't partition prematurely; a well-indexed 50M-row table with good query patterns outperforms a poorly designed partitioned table.
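Before committing, it helps to measure where a candidate table actually stands against these thresholds. A minimal diagnostic using only the standard statistics views (the table name `events` is a placeholder for your own table):

```sql
-- Size, row estimate, and vacuum pressure for a candidate table
SELECT
    relname,
    pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
    n_live_tup,
    n_dead_tup,
    last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'events';  -- placeholder: your candidate table
```

If the total size is well under 100GB and autovacuum is keeping dead tuples in check, partitioning is probably premature.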
1. Range Partitioning (Time-Series)
Range partitioning is ideal for append-mostly data with time-based queries and retention policies.
Creating a Range-Partitioned Table
-- Parent table: declares the structure and partition key
CREATE TABLE events (
id UUID NOT NULL DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
event_type TEXT NOT NULL,
user_id UUID,
payload JSONB NOT NULL DEFAULT '{}',
occurred_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY (id, occurred_at) -- Partition key MUST be in primary key
) PARTITION BY RANGE (occurred_at);
-- Child partitions: one per month
CREATE TABLE events_2026_10
PARTITION OF events
FOR VALUES FROM ('2026-10-01') TO ('2026-11-01');
CREATE TABLE events_2026_11
PARTITION OF events
FOR VALUES FROM ('2026-11-01') TO ('2026-12-01');
CREATE TABLE events_2026_12
PARTITION OF events
FOR VALUES FROM ('2026-12-01') TO ('2027-01-01');
-- Default partition: catches rows outside all defined ranges
CREATE TABLE events_default
PARTITION OF events
DEFAULT;
-- Indexes created on the parent are automatically created on every partition
CREATE INDEX ON events (tenant_id, occurred_at DESC);
CREATE INDEX ON events (user_id, occurred_at DESC) WHERE user_id IS NOT NULL;
CREATE INDEX ON events USING GIN (payload);
Partition Pruning in Action
-- PostgreSQL eliminates partitions that can't match the WHERE clause
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT count(*), event_type
FROM events
WHERE occurred_at >= '2026-11-01'
AND occurred_at < '2026-12-01'
AND tenant_id = 'tenant-uuid-here'
GROUP BY event_type;
-- Output shows: Partitions: events_2026_11 (and only that one)
-- Without partitioning: full table scan
-- Verify pruning is enabled (on by default since PostgreSQL 11)
SHOW enable_partition_pruning; -- should be 'on'
Partition Pruning Requirements
-- ✅ Pruning works: constant in WHERE clause
SELECT * FROM events WHERE occurred_at >= '2026-11-01';
-- ✅ Pruning works: NOW() is STABLE, so pruning happens at execution time (runtime pruning)
SELECT * FROM events WHERE occurred_at >= NOW() - INTERVAL '7 days';
-- ❌ Pruning does NOT work: column referenced in function
SELECT * FROM events WHERE DATE_TRUNC('month', occurred_at) = '2026-11-01';
-- Fix: rewrite as range
SELECT * FROM events
WHERE occurred_at >= '2026-11-01' AND occurred_at < '2026-12-01';
-- ❌ Pruning does NOT work: partition key compared to another column
SELECT * FROM events e
JOIN subscriptions s ON e.occurred_at BETWEEN s.start_date AND s.end_date;
-- A nested loop join can prune partitions per outer row at runtime; check the EXPLAIN output
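The same execution-time mechanism applies to prepared statements. When the partition key is a bind parameter, a generic plan can't prune at planning time, but the executor still prunes before scanning; EXPLAIN reports this as "Subplans Removed". A sketch (plan output varies by version and by whether a custom or generic plan is chosen):

```sql
-- Runtime pruning with a parameterized query
PREPARE recent_events(timestamptz) AS
    SELECT count(*) FROM events WHERE occurred_at >= $1;

-- After several executions PostgreSQL may switch to a generic plan;
-- EXPLAIN then shows "Subplans Removed: N" under the Append node --
-- pruning happened at execution time instead of plan time, but still happened
EXPLAIN (ANALYZE) EXECUTE recent_events('2026-11-01');
```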
2. List Partitioning (By Category)
List partitioning works when the partition key has a known finite set of values: region, status, tenant type.
-- Partition by region for data residency compliance
CREATE TABLE customer_data (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
region TEXT NOT NULL, -- 'us-east', 'eu-west', 'ap-south'
data JSONB NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
) PARTITION BY LIST (region);
CREATE TABLE customer_data_us
PARTITION OF customer_data
FOR VALUES IN ('us-east', 'us-west', 'us-central');
CREATE TABLE customer_data_eu
PARTITION OF customer_data
FOR VALUES IN ('eu-west', 'eu-central', 'eu-north');
CREATE TABLE customer_data_ap
PARTITION OF customer_data
FOR VALUES IN ('ap-south', 'ap-east', 'ap-southeast');
CREATE TABLE customer_data_other
PARTITION OF customer_data
DEFAULT;
Multi-Level Partitioning (Range + List)
-- Partition by month, then by region within each month
CREATE TABLE metrics (
id UUID NOT NULL DEFAULT gen_random_uuid(),
region TEXT NOT NULL,
metric_name TEXT NOT NULL,
value NUMERIC NOT NULL,
recorded_at TIMESTAMPTZ NOT NULL,
PRIMARY KEY (id, recorded_at, region)
) PARTITION BY RANGE (recorded_at);
CREATE TABLE metrics_2026_11
PARTITION OF metrics
FOR VALUES FROM ('2026-11-01') TO ('2026-12-01')
PARTITION BY LIST (region);
CREATE TABLE metrics_2026_11_us
PARTITION OF metrics_2026_11
FOR VALUES IN ('us-east', 'us-west');
CREATE TABLE metrics_2026_11_eu
PARTITION OF metrics_2026_11
FOR VALUES IN ('eu-west', 'eu-central');
3. Hash Partitioning (Even Distribution)
Hash partitioning distributes rows evenly when there's no natural range or list boundary. Good for horizontally scaling write-heavy tables by tenant.
-- Distribute write load across 8 partitions by tenant_id hash
CREATE TABLE api_logs (
id UUID NOT NULL DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
endpoint TEXT NOT NULL,
method TEXT NOT NULL,
status_code SMALLINT NOT NULL,
duration_ms INTEGER NOT NULL,
logged_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY (id, tenant_id)
) PARTITION BY HASH (tenant_id);
-- 8 partitions: modulus=8, remainder=0..7
CREATE TABLE api_logs_p0 PARTITION OF api_logs FOR VALUES WITH (MODULUS 8, REMAINDER 0);
CREATE TABLE api_logs_p1 PARTITION OF api_logs FOR VALUES WITH (MODULUS 8, REMAINDER 1);
CREATE TABLE api_logs_p2 PARTITION OF api_logs FOR VALUES WITH (MODULUS 8, REMAINDER 2);
CREATE TABLE api_logs_p3 PARTITION OF api_logs FOR VALUES WITH (MODULUS 8, REMAINDER 3);
CREATE TABLE api_logs_p4 PARTITION OF api_logs FOR VALUES WITH (MODULUS 8, REMAINDER 4);
CREATE TABLE api_logs_p5 PARTITION OF api_logs FOR VALUES WITH (MODULUS 8, REMAINDER 5);
CREATE TABLE api_logs_p6 PARTITION OF api_logs FOR VALUES WITH (MODULUS 8, REMAINDER 6);
CREATE TABLE api_logs_p7 PARTITION OF api_logs FOR VALUES WITH (MODULUS 8, REMAINDER 7);
Important: Hash partitioning does NOT prune on range queries. It's for write distribution, not read optimization. For time-based queries on hash-partitioned tables, add a range-based index, not another partition level.
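One way to keep time-range reads tolerable on a hash-partitioned table is a plain btree index on the timestamp column; created on the parent, it cascades to all eight partitions. EXPLAIN will still list every partition (no pruning on `logged_at`), but each one does an index range scan instead of a sequential scan. The index name here is arbitrary:

```sql
-- Index on the parent cascades to every hash partition
CREATE INDEX idx_api_logs_logged_at ON api_logs (logged_at DESC);

-- All 8 partitions appear in the plan, but each uses its local
-- logged_at index for the range condition
EXPLAIN SELECT * FROM api_logs
WHERE logged_at >= NOW() - INTERVAL '1 hour';
```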
4. pg_partman: Automated Partition Management
Creating monthly partitions manually is error-prone. pg_partman automates creation, maintenance, and retention.
Installation
# Install pg_partman extension
sudo apt install postgresql-16-partman
# Or via Docker:
# Use the standard postgres image + compile from source, or use supabase/postgres
-- Enable in PostgreSQL
CREATE SCHEMA partman;
CREATE EXTENSION pg_partman SCHEMA partman;
Setup Automated Range Partitioning
-- Create the parent table (pg_partman manages child partitions)
CREATE TABLE events (
id UUID NOT NULL DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
event_type TEXT NOT NULL,
payload JSONB NOT NULL DEFAULT '{}',
occurred_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY (id, occurred_at)
) PARTITION BY RANGE (occurred_at);
-- Tell pg_partman to manage this table
SELECT partman.create_parent(
p_parent_table => 'public.events',
p_control => 'occurred_at', -- Partition key column
p_type => 'range',
p_interval => '1 month', -- One partition per month
p_premake => 4, -- Create 4 future partitions in advance
p_start_partition => '2026-01-01 00:00:00+00'
);
-- Configure retention (auto-drop old partitions)
UPDATE partman.part_config
SET
retention = '12 months', -- Keep 12 months of data
retention_keep_table = false, -- Actually DROP the partition
retention_keep_index = false,
infinite_time_partitions = true, -- Keep creating future partitions even when no new data arrives
premake = 4
WHERE parent_table = 'public.events';
Maintenance Job
-- Run maintenance (creates new partitions, drops old ones per retention)
-- Call this via cron every hour
SELECT partman.run_maintenance(p_analyze => false);
-- Full maintenance with ANALYZE
SELECT partman.run_maintenance(p_analyze => true);
# /etc/cron.d/pg-partman
0 * * * * postgres psql -U postgres -d myapp -c "SELECT partman.run_maintenance();" >> /var/log/pg_partman.log 2>&1
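One common failure mode is rows silently landing in the default partition, which then blocks pg_partman from creating a child covering that range. pg_partman ships a `check_default()` function to detect this, and a `partition_data_proc()` procedure to move stranded rows into proper partitions (shown for pg_partman 5.x, installed in the `partman` schema as above):

```sql
-- Returns any partition sets whose default partition contains rows
SELECT * FROM partman.check_default(p_exact_count := true);

-- If rows are found, relocate them into the correct child partitions:
CALL partman.partition_data_proc('public.events');
```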
Monitor Partition Status
-- Check partition sets managed by pg_partman
SELECT
parent_table,
partition_type,
partition_interval,
premake,
retention,
infinite_time_partitions
FROM partman.part_config
ORDER BY parent_table;
-- List all partitions with row counts and sizes
SELECT
child.relname AS partition_name,
pg_size_pretty(pg_relation_size(child.oid)) AS size,
pg_size_pretty(pg_total_relation_size(child.oid)) AS total_size,
-- split_part with negative field positions requires PostgreSQL 16+
to_timestamp(split_part(child.relname, '_', -2) || split_part(child.relname, '_', -1), 'YYYYMM') AS partition_month
FROM pg_inherits
JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
JOIN pg_class child ON pg_inherits.inhrelid = child.oid
WHERE parent.relname = 'events'
ORDER BY partition_month DESC;
5. Attaching and Detaching Partitions
Detaching a partition doesn't move any data, so it completes almost instantly, though it briefly takes an ACCESS EXCLUSIVE lock on the parent (PostgreSQL 14+ supports DETACH PARTITION CONCURRENTLY to avoid blocking queries). This is how you archive old data to cold storage.
-- Detach old partition: no data is moved, but it takes a brief lock on the parent
-- (PostgreSQL 14+: add CONCURRENTLY to avoid blocking concurrent queries)
ALTER TABLE events DETACH PARTITION events_2025_01;
-- The detached partition becomes a standalone table
-- You can now:
-- 1. Archive it to S3 via pg_dump
-- 2. Move it to a tablespace on slower storage
-- 3. Keep it for ad-hoc historical queries
-- Archive to S3 and drop
\! pg_dump -t events_2025_01 myapp | gzip > /backups/events_2025_01.sql.gz
DROP TABLE events_2025_01;
-- Attach an existing table as a new partition
-- (PostgreSQL scans it to validate the bounds unless a matching CHECK constraint exists)
ALTER TABLE events ATTACH PARTITION events_imported
FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
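The ATTACH above must verify that every row in `events_imported` satisfies the new bounds, which means a full scan while holding a lock on `events`. Adding an equivalent CHECK constraint beforehand lets PostgreSQL skip that validation scan; the constraint name here is arbitrary:

```sql
-- Run BEFORE the ATTACH: proves the bounds without locking the parent
ALTER TABLE events_imported
ADD CONSTRAINT events_imported_bounds
CHECK (occurred_at >= '2024-01-01' AND occurred_at < '2024-02-01');

-- ... run the ATTACH PARTITION statement as above ...

-- Run AFTER the ATTACH: the partition bounds now enforce the same rule
ALTER TABLE events_imported DROP CONSTRAINT events_imported_bounds;
```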
6. Index Strategies for Partitioned Tables
-- PostgreSQL has no true global indexes; an index created on the parent
-- is a partitioned index that cascades to ALL partitions
CREATE INDEX idx_events_tenant_time
ON events (tenant_id, occurred_at DESC);
-- This creates:
-- idx_events_2026_10_tenant_time
-- idx_events_2026_11_tenant_time
-- ... one per partition
-- For UNIQUE constraints: partition key MUST be included
CREATE UNIQUE INDEX ON events (id, occurred_at);
-- ✅ Works: occurred_at, the partition key, is included
CREATE UNIQUE INDEX ON events (id);
-- ❌ Fails: PostgreSQL can't enforce uniqueness across partitions without the partition key
-- Partial index per partition (more selective = faster)
CREATE INDEX idx_events_2026_11_errors
ON events_2026_11 (tenant_id, occurred_at DESC)
WHERE event_type LIKE 'error.%';
Performance Impact
-- Benchmark: query performance before and after partitioning
-- Setup: 500M row events table, partitioned monthly (60 partitions)
-- Query: events for one tenant, last 30 days
-- BEFORE partitioning:
-- Seq Scan on events (cost=0.00..8500000.00 rows=500000000)
-- Planning time: 8ms, Execution time: 47,000ms
-- AFTER partitioning (2 partitions touched):
-- Seq Scan on events_2026_10 (cost=0.00..284000.00 rows=8333333)
-- Seq Scan on events_2026_11 (cost=0.00..284000.00 rows=8333333)
-- Planning time: 12ms, Execution time: 320ms
-- 147x improvement on the query
-- Slight overhead on planning (scanning partition catalog)
Cost Reference: Partitioning by Table Size
| Table Size | Strategy | Partition Size Target | Partitions |
|---|---|---|---|
| < 100GB | No partitioning | n/a | n/a |
| 100GB–1TB | Monthly range | ~10–50GB | 12–120 |
| 1TB–10TB | Weekly range | ~10–50GB | 52–520 |
| > 10TB | Daily range + pg_partman | ~5–20GB | 365+ |
| Any + multi-region | List by region + range | Varies | Depends |
See Also
- PostgreSQL Performance: Indexes, Query Plans, and Connection Pooling
- PostgreSQL JSONB Patterns for Semi-Structured Data
- PostgreSQL Extensions: PostGIS, pg_trgm, and pgvector
- PostgreSQL Full-Text Search
- Zero-Downtime Database Migrations
Working With Viprasol
Working with a PostgreSQL table that's grown past the point where standard indexes solve the performance problem? We design and implement partitioning strategies (range, list, hash, or hybrid) with pg_partman automation, retention policies, and zero-downtime migration from unpartitioned tables.
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.