PostgreSQL Partitioning in 2026: Range, List, and Hash Partitioning with Partition Pruning
Deep-dive into PostgreSQL partitioning: range partitioning for time series, list partitioning for tenants, hash partitioning for even distribution, partition pruning, and maintenance automation.
Partitioning splits a large table into smaller physical segments that PostgreSQL manages as a single logical table. Done right, it makes queries against recent data orders of magnitude faster and makes data retention (DROP old partitions) an O(1) operation instead of a table-blocking DELETE.
Done wrong, it adds complexity without benefit. Partitioning makes sense for tables above ~100GB or tables where you consistently query a narrow slice (current month, specific tenant, specific status). Below that threshold, proper indexing is both simpler and faster.
When to Partition
| Scenario | Partition Strategy | Value |
|---|---|---|
| Time-series data (logs, events, metrics) | Range on timestamp | Fast recent-data queries, fast retention |
| Multi-tenant SaaS | List on tenant_id or Hash on tenant_id | Tenant isolation, faster per-tenant queries |
| Large lookup tables with even access | Hash | Even distribution, no hot spots |
| Status-based routing | List on status | Fast "pending only" queries |
Don't partition tables under ~100GB: the planning overhead and operational complexity outweigh the benefits. Start with proper indexes, and consider partitioning only when queries remain slow despite them.
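Before committing to a strategy, check the table's actual on-disk footprint. A quick size check (the table name is a placeholder for your own):

```sql
-- Total size including indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('audit_events')) AS total_size;
```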
Range Partitioning (Time Series)
-- Parent table: partitioned by month
CREATE TABLE audit_events (
id UUID NOT NULL DEFAULT gen_random_uuid(),
workspace_id UUID NOT NULL,
action TEXT NOT NULL,
occurred_at TIMESTAMPTZ NOT NULL DEFAULT now(),
payload JSONB,
PRIMARY KEY (occurred_at, id) -- Include partition key in PK
) PARTITION BY RANGE (occurred_at);
-- Create partitions for current and future months
CREATE TABLE audit_events_2027_01 PARTITION OF audit_events
FOR VALUES FROM ('2027-01-01') TO ('2027-02-01');
CREATE TABLE audit_events_2027_02 PARTITION OF audit_events
FOR VALUES FROM ('2027-02-01') TO ('2027-03-01');
CREATE TABLE audit_events_2027_03 PARTITION OF audit_events
FOR VALUES FROM ('2027-03-01') TO ('2027-04-01');
-- DEFAULT partition catches rows that don't match any range
-- (Useful during transition; remove once all partitions are in place)
CREATE TABLE audit_events_default PARTITION OF audit_events DEFAULT;
-- Indexes are created per partition (or inherit from parent)
CREATE INDEX idx_audit_2027_02_workspace
ON audit_events_2027_02 (workspace_id, occurred_at DESC);
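With the partitions above in place, PostgreSQL routes inserts to the matching partition automatically; `tableoid::regclass` reveals which physical partition holds each row. A quick sanity check:

```sql
-- Routed to audit_events_2027_02 based on occurred_at
INSERT INTO audit_events (workspace_id, action, occurred_at)
VALUES (gen_random_uuid(), 'login', '2027-02-15');

-- Show the physical partition each row landed in
SELECT tableoid::regclass AS partition, action, occurred_at
FROM audit_events
WHERE occurred_at >= '2027-02-01' AND occurred_at < '2027-03-01';
```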
Partition Pruning: The Core Benefit
-- Query with the partition key in WHERE: only the matching partition is scanned
EXPLAIN SELECT * FROM audit_events
WHERE occurred_at >= '2027-02-01'
AND occurred_at < '2027-03-01'
AND workspace_id = '2b0d2f3e-5c1a-4f6b-9d8e-0a1b2c3d4e5f';
-- EXPLAIN output:
-- Append
-- -> Index Scan on audit_events_2027_02   (only the February partition!)
-- Index Cond: (workspace_id = '...' AND occurred_at >= ... AND ...)
-- Without the partition key in WHERE, every partition is scanned (no pruning)
EXPLAIN SELECT * FROM audit_events
WHERE workspace_id = '2b0d2f3e-5c1a-4f6b-9d8e-0a1b2c3d4e5f';
-- Result: Seq Scan on ALL partitions (expensive)
-- Fix: add workspace_id to the partition key (sub-partitioning),
-- or ensure queries always include an occurred_at range
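Pruning also happens at execution time for parameterized queries (PostgreSQL 11+), so prepared statements benefit too. A minimal sketch; the statement name is illustrative:

```sql
PREPARE recent_events (timestamptz, timestamptz) AS
SELECT * FROM audit_events
WHERE occurred_at >= $1 AND occurred_at < $2;

-- "Subplans Removed: N" in the output indicates run-time pruning kicked in
EXPLAIN (ANALYZE) EXECUTE recent_events ('2027-02-01', '2027-03-01');
```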
Automating Monthly Partition Creation
-- Function to create the next month's partition automatically
CREATE OR REPLACE FUNCTION create_monthly_partition(
parent_table TEXT,
partition_date DATE DEFAULT (date_trunc('month', now() + INTERVAL '1 month'))::date
)
RETURNS void LANGUAGE plpgsql AS $$
DECLARE
partition_name TEXT;
start_date DATE := date_trunc('month', partition_date);
end_date DATE := start_date + INTERVAL '1 month';
BEGIN
partition_name := format('%s_%s_%s',
parent_table,
to_char(start_date, 'YYYY'),
to_char(start_date, 'MM')
);
IF NOT EXISTS (
SELECT 1 FROM pg_class WHERE relname = partition_name
) THEN
EXECUTE format(
'CREATE TABLE %I PARTITION OF %I FOR VALUES FROM (%L) TO (%L)',
partition_name, parent_table, start_date, end_date
);
-- Create indexes on the new partition
EXECUTE format(
'CREATE INDEX idx_%s_workspace ON %I (workspace_id, occurred_at DESC)',
partition_name, partition_name
);
RAISE NOTICE 'Created partition: %', partition_name;
ELSE
RAISE NOTICE 'Partition % already exists', partition_name;
END IF;
END;
$$;
-- Call via pg_cron (or a Lambda cron job) on the 25th of each month
-- SELECT create_monthly_partition('audit_events');
-- SELECT create_monthly_partition('usage_events');
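The scheduling itself can live in the database. A sketch assuming the pg_cron extension is installed (`CREATE EXTENSION pg_cron`); the job name and schedule are illustrative:

```sql
-- Run at 03:00 on the 25th of each month
SELECT cron.schedule(
    'create-next-partitions',
    '0 3 25 * *',
    $$SELECT create_monthly_partition('audit_events');
      SELECT create_monthly_partition('usage_events');$$
);
```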
Retention: Dropping Old Partitions
-- Drop partitions older than 24 months (O(1) โ no row scanning)
CREATE OR REPLACE FUNCTION drop_old_partitions(
parent_table TEXT,
retain_months INTEGER DEFAULT 24
)
RETURNS void LANGUAGE plpgsql AS $$
DECLARE
partition_name TEXT;
cutoff_date DATE := date_trunc('month', now() - make_interval(months => retain_months));
BEGIN
FOR partition_name IN
SELECT relname FROM pg_class c
JOIN pg_inherits i ON i.inhrelid = c.oid
JOIN pg_class p ON i.inhparent = p.oid
WHERE p.relname = parent_table
AND c.relname < format('%s_%s_%s',
parent_table,
to_char(cutoff_date, 'YYYY'),
to_char(cutoff_date, 'MM'))
LOOP
-- Optionally archive before dropping, e.g. detach the partition and
-- export it (pg_dump, or aws_s3.query_export_to_s3 on RDS/Aurora)
EXECUTE format('DROP TABLE IF EXISTS %I', partition_name);
RAISE NOTICE 'Dropped partition: %', partition_name;
END LOOP;
END;
$$;
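Typical usage of the function above; the retention windows shown are illustrative:

```sql
-- Keep the default 24 months of audit events
SELECT drop_old_partitions('audit_events');

-- Shorter window for high-volume usage data
SELECT drop_old_partitions('usage_events', retain_months => 12);
```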
List Partitioning (Multi-Tenant)
-- Partition by tenant region or plan tier
-- Use when partition key has low cardinality (< 20 values)
CREATE TABLE workspace_data (
id UUID NOT NULL DEFAULT gen_random_uuid(),
workspace_id UUID NOT NULL,
region TEXT NOT NULL CHECK (region IN ('us', 'eu', 'apac')),
payload JSONB,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (region, id) -- PK must include the partition key
) PARTITION BY LIST (region);
CREATE TABLE workspace_data_us PARTITION OF workspace_data FOR VALUES IN ('us');
CREATE TABLE workspace_data_eu PARTITION OF workspace_data FOR VALUES IN ('eu');
CREATE TABLE workspace_data_apac PARTITION OF workspace_data FOR VALUES IN ('apac');
-- Physical storage: EU data in EU tablespace (compliance)
-- CREATE TABLE workspace_data_eu PARTITION OF workspace_data
-- FOR VALUES IN ('eu') TABLESPACE eu_tablespace;
-- Queries that filter on region scan only that partition
SELECT * FROM workspace_data WHERE region = 'eu' AND workspace_id = $1;
-- Only the workspace_data_eu partition is scanned
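Adding a new region later means extending the CHECK constraint and creating the partition. A sketch; the constraint name follows PostgreSQL's default `table_column_check` naming but may differ in your schema:

```sql
-- Allow the new value first, then add its partition
ALTER TABLE workspace_data DROP CONSTRAINT workspace_data_region_check;
ALTER TABLE workspace_data ADD CONSTRAINT workspace_data_region_check
    CHECK (region IN ('us', 'eu', 'apac', 'latam'));
CREATE TABLE workspace_data_latam PARTITION OF workspace_data
    FOR VALUES IN ('latam');
```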
Hash Partitioning (Even Distribution)
-- Hash partitioning: even spread across N shards based on workspace_id hash
-- Use when you have many tenants and want even I/O distribution
CREATE TABLE events (
id UUID NOT NULL DEFAULT gen_random_uuid(),
workspace_id UUID NOT NULL,
event_type TEXT NOT NULL,
payload JSONB,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (workspace_id, created_at, id)
) PARTITION BY HASH (workspace_id);
-- 8 hash buckets (hashing spreads rows evenly for any modulus;
-- a power of 2 just makes future bucket splits tidier)
CREATE TABLE events_p0 PARTITION OF events FOR VALUES WITH (MODULUS 8, REMAINDER 0);
CREATE TABLE events_p1 PARTITION OF events FOR VALUES WITH (MODULUS 8, REMAINDER 1);
CREATE TABLE events_p2 PARTITION OF events FOR VALUES WITH (MODULUS 8, REMAINDER 2);
CREATE TABLE events_p3 PARTITION OF events FOR VALUES WITH (MODULUS 8, REMAINDER 3);
CREATE TABLE events_p4 PARTITION OF events FOR VALUES WITH (MODULUS 8, REMAINDER 4);
CREATE TABLE events_p5 PARTITION OF events FOR VALUES WITH (MODULUS 8, REMAINDER 5);
CREATE TABLE events_p6 PARTITION OF events FOR VALUES WITH (MODULUS 8, REMAINDER 6);
CREATE TABLE events_p7 PARTITION OF events FOR VALUES WITH (MODULUS 8, REMAINDER 7);
-- Hash partition pruning works ONLY for equality predicates on the key:
-- WHERE workspace_id = '...' prunes to a single bucket.
-- Range predicates on workspace_id cannot be pruned, and queries
-- without workspace_id scan every bucket.
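To confirm writes are actually spreading evenly, count rows per bucket via `tableoid`:

```sql
-- Row count per hash bucket; counts should be roughly equal
SELECT tableoid::regclass AS bucket, count(*) AS row_count
FROM events
GROUP BY 1
ORDER BY 1;
```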
Sub-Partitioning (Composite Strategy)
-- Range by month, then hash by workspace_id within each month
-- Enables both time-based pruning AND even write distribution
-- NOTE: a table has exactly one top-level strategy, so this assumes an
-- events table declared PARTITION BY RANGE (created_at) -- a different
-- design from the hash-partitioned events above, not an addition to it
CREATE TABLE events_2027_02 PARTITION OF events
FOR VALUES FROM ('2027-02-01') TO ('2027-03-01')
PARTITION BY HASH (workspace_id);
CREATE TABLE events_2027_02_p0 PARTITION OF events_2027_02
FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE events_2027_02_p1 PARTITION OF events_2027_02
FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE events_2027_02_p2 PARTITION OF events_2027_02
FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE events_2027_02_p3 PARTITION OF events_2027_02
FOR VALUES WITH (MODULUS 4, REMAINDER 3);
-- Query: the time range prunes to the month partition, then the hash
-- key routes to a single bucket within it
SELECT * FROM events
WHERE created_at >= '2027-02-01' AND created_at < '2027-03-01'
AND workspace_id = '2b0d2f3e-5c1a-4f6b-9d8e-0a1b2c3d4e5f';
-- Execution plan touches only events_2027_02_p{N} (one sub-partition)
Monitoring Partition Health
-- Row count and size per partition
SELECT
child.relname AS partition_name,
pg_size_pretty(pg_relation_size(child.oid)) AS partition_size,
pg_size_pretty(pg_total_relation_size(child.oid)) AS total_size_with_indexes,
(SELECT reltuples::BIGINT FROM pg_class WHERE oid = child.oid) AS estimated_rows
FROM pg_inherits
JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
JOIN pg_class child ON pg_inherits.inhrelid = child.oid
WHERE parent.relname = 'audit_events'
ORDER BY child.relname;
-- Rows in the default partition signal a missing monthly partition
SELECT COUNT(*) FROM audit_events_default;
-- Should be 0 once all partitions are in place
Attach/Detach Partitions
-- Detach a partition for archival (non-blocking in PG 14+)
ALTER TABLE audit_events DETACH PARTITION audit_events_2025_01 CONCURRENTLY;
-- Now audit_events_2025_01 is a standalone table
-- Copy to S3, then:
DROP TABLE audit_events_2025_01;
-- Attach a pre-populated table as a new partition
ALTER TABLE audit_events ATTACH PARTITION audit_events_2027_04
FOR VALUES FROM ('2027-04-01') TO ('2027-05-01');
-- ATTACH validates every row against the bounds; add a matching CHECK
-- constraint to the table first so PostgreSQL can skip the full scan
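The fast-attach pattern looks like this; the constraint name is illustrative:

```sql
-- Pre-validate with a CHECK matching the partition bounds
ALTER TABLE audit_events_2027_04 ADD CONSTRAINT audit_events_2027_04_bounds
CHECK (occurred_at >= '2027-04-01' AND occurred_at < '2027-05-01');

-- ATTACH now skips the row-by-row validation scan
ALTER TABLE audit_events ATTACH PARTITION audit_events_2027_04
FOR VALUES FROM ('2027-04-01') TO ('2027-05-01');

-- The constraint is redundant once the partition bounds enforce it
ALTER TABLE audit_events_2027_04 DROP CONSTRAINT audit_events_2027_04_bounds;
```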
Cost and Timeline
| Task | Timeline | Cost (USD) |
|---|---|---|
| Partition strategy design | 0.5 day | $400โ$600 |
| Initial partition migration (existing table) | 1โ3 days | $800โ$2,500 |
| Partition automation (pg_cron / Lambda) | 0.5โ1 day | $400โ$800 |
| Retention automation | 0.5 day | $300โ$500 |
| Full partitioning system | 1โ2 weeks | $5,000โ$10,000 |
See Also
- PostgreSQL Partitioning: fundamentals and first partition setup
- PostgreSQL Schema Migrations: migrating existing tables to partitioned
- PostgreSQL Row-Level Security: RLS with partitioned tables
- AWS Lambda Scheduled: automating partition creation via Lambda cron
Working With Viprasol
We design and implement PostgreSQL partitioning strategies for high-volume SaaS databases, from time-series audit logs to multi-tenant event stores. Our team has migrated production tables with billions of rows to partitioned schemas with zero downtime.
What we deliver:
- Partition strategy selection (range/list/hash/sub-partition) based on your query patterns
- Partition migration plan for existing large tables (non-blocking)
- Automated partition creation and retention scripts
- Index strategy per partition
- Query analysis to confirm partition pruning is working
Explore our web development services or contact us to partition your large PostgreSQL tables.
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.