
Database Development: PostgreSQL, Schema Design, and Performance

Database development in 2026 — PostgreSQL schema design, indexing strategy, query optimization, migrations, connection pooling, and choosing the right database

Viprasol Tech Team
March 25, 2026
12 min read



The database is the part of the application that's hardest to change after launch. The data model you design at the start of a project — the tables, relationships, constraints, and indexes — shapes what queries are possible, what reports you can generate, and how the system performs at scale. Getting it wrong means expensive migrations at the worst possible time.

This guide covers database design principles, PostgreSQL-specific patterns that matter in production, indexing strategy, query optimization, and the migration patterns that keep your database schema evolving safely.


Choosing the Right Database for Your Use Case

The default in 2026 for most web applications: PostgreSQL. It handles relational data, JSON documents (via JSONB), full-text search, time series (with TimescaleDB extension), geospatial queries (PostGIS), and vector embeddings (pgvector) — all in one system. The case for adding other databases to your stack should require a specific capability PostgreSQL can't match.

| Database | Use Case | When to Add It |
| --- | --- | --- |
| PostgreSQL | Primary data store | Default for all relational + document needs |
| Redis | Cache, sessions, pub/sub, queues | When you need sub-millisecond reads for specific hot data |
| ClickHouse | Analytical queries on large datasets | >100M rows with complex aggregations that are slow in PostgreSQL |
| Elasticsearch | Full-text search, log analytics | Advanced search features beyond PostgreSQL's tsvector |
| InfluxDB/TimescaleDB | Time series at scale | IoT, high-frequency metrics (>1M writes/day) |
| Pinecone/pgvector | Vector similarity search | RAG systems, recommendation engines |

Schema Design: The Principles That Matter

Use UUIDs, Not Sequential IDs

-- Sequential IDs expose business metrics (user 1 vs user 100,000)
-- and make merging data from multiple sources difficult
CREATE TABLE users (
  id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email      TEXT NOT NULL UNIQUE,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

gen_random_uuid() (built in since PostgreSQL 13) is more efficient than uuid_generate_v4() from the uuid-ossp extension. For write-heavy tables where index fragmentation from random UUIDs matters, consider ULIDs (sortable, time-ordered identifiers) via a custom function.
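To illustrate what "sortable, time-ordered" means, here is a minimal ULID-style sketch in application code: a 48-bit millisecond timestamp followed by 80 random bits, hex-encoded so lexicographic order matches creation order. The `ulid_like` helper is hypothetical; a real deployment would use a vetted library or a SQL function.

```python
import os
import time

def ulid_like() -> str:
    """Generate a 128-bit, time-ordered identifier:
    a 48-bit millisecond timestamp followed by 80 random bits.
    Hex encoding means lexicographic order matches creation order."""
    ts_ms = int(time.time() * 1000)
    ts_bytes = ts_ms.to_bytes(6, "big")   # 48-bit timestamp prefix
    rand_bytes = os.urandom(10)           # 80 random bits
    return (ts_bytes + rand_bytes).hex()

a = ulid_like()
time.sleep(0.002)
b = ulid_like()
assert a < b  # later IDs always sort after earlier ones
```

Because new values share a monotonically increasing prefix, inserts land at the right edge of the B-tree index instead of fragmenting it the way fully random UUIDs do.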

Soft Deletes with Care

-- Soft delete: mark as deleted, keep the row
ALTER TABLE projects ADD COLUMN deleted_at TIMESTAMPTZ;

-- Every query needs the filter — easy to forget
SELECT * FROM projects WHERE deleted_at IS NULL AND user_id = $1;

-- Partial index: exclude deleted rows from unique constraint
CREATE UNIQUE INDEX projects_slug_unique 
  ON projects (slug) 
  WHERE deleted_at IS NULL;  -- allows same slug on a deleted + active project

Soft deletes carry ongoing costs: every query needs WHERE deleted_at IS NULL, foreign keys keep pointing at "deleted" rows, and the table grows indefinitely. Ask whether you actually need soft deletes, or whether you need an audit log. Often the audit log is the right answer.
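One way to make the filter hard to forget is to centralize it in a data-access helper so every read path goes through it. A minimal sketch, not tied to any ORM (`scoped_select` is a hypothetical helper):

```python
def scoped_select(table: str, where: str = "") -> str:
    """Build a SELECT that always excludes soft-deleted rows,
    so the deleted_at filter can't be forgotten at call sites."""
    clauses = ["deleted_at IS NULL"]
    if where:
        clauses.append(f"({where})")
    return f"SELECT * FROM {table} WHERE " + " AND ".join(clauses)

print(scoped_select("projects", "user_id = $1"))
# SELECT * FROM projects WHERE deleted_at IS NULL AND (user_id = $1)
```

Most ORMs offer the same idea as a "default scope" or global query filter; the point is that the exclusion lives in one place, not in every query.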

Timestamps — Always With Time Zone

-- Never use TIMESTAMP — always TIMESTAMPTZ
-- TIMESTAMP stores time without timezone offset, causes bugs with DST and international users

created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()

-- Auto-update updated_at trigger
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_updated_at
  BEFORE UPDATE ON projects
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();

Constraints at the Database Level

Don't rely on application code to enforce data integrity — enforce it in the database. Application bugs get deployed; database constraints don't get bypassed.

CREATE TABLE orders (
  id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id         UUID NOT NULL REFERENCES users(id) ON DELETE RESTRICT,
  status          TEXT NOT NULL CHECK (status IN ('pending', 'paid', 'shipped', 'cancelled')),
  amount_cents    BIGINT NOT NULL CHECK (amount_cents > 0),   -- never zero or negative
  currency        CHAR(3) NOT NULL DEFAULT 'USD',
  created_at      TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Partial unique constraint: one active subscription per customer
CREATE UNIQUE INDEX one_active_subscription_per_customer
  ON subscriptions (customer_id)
  WHERE status IN ('trialing', 'active', 'past_due');

🌐 Looking for a Dev Team That Actually Delivers?

Most agencies sell you a project manager and assign juniors. Viprasol is different — senior engineers only, direct Slack access, and a 5.0★ Upwork record across 100+ projects.

  • React, Next.js, Node.js, TypeScript — production-grade stack
  • Fixed-price contracts — no surprise invoices
  • Full source code ownership from day one
  • 90-day post-launch support included

Indexing Strategy

The right indexes make queries fast. Too many indexes slow down writes and waste storage. The wrong indexes don't help at all.

Understanding the query planner:

-- EXPLAIN ANALYZE shows exactly what PostgreSQL does to execute your query
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.id, u.email, p.name as project_name
FROM users u
JOIN projects p ON p.user_id = u.id
WHERE u.created_at > NOW() - INTERVAL '30 days'
  AND p.status = 'active'
ORDER BY u.created_at DESC
LIMIT 20;

-- Look for:
-- Seq Scan on large tables = missing index
-- actual rows far from estimated rows = stale statistics (run ANALYZE)
-- high-cost nodes = optimization opportunity
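With EXPLAIN (FORMAT JSON), plans can also be checked programmatically, for example in a CI job that fails when a hot query falls back to a sequential scan. A hedged sketch (`find_seq_scans` is a hypothetical helper; the field names "Node Type", "Plans", and "Relation Name" follow PostgreSQL's JSON plan output):

```python
def find_seq_scans(node: dict) -> list:
    """Recursively collect relation names hit by a Seq Scan
    in an EXPLAIN (FORMAT JSON) plan tree."""
    hits = []
    if node.get("Node Type") == "Seq Scan":
        hits.append(node.get("Relation Name"))
    for child in node.get("Plans", []):   # nested plan nodes
        hits.extend(find_seq_scans(child))
    return hits

# Abbreviated example of a plan tree's shape
plan = {
    "Node Type": "Nested Loop",
    "Plans": [
        {"Node Type": "Seq Scan", "Relation Name": "projects"},
        {"Node Type": "Index Scan", "Relation Name": "users"},
    ],
}
print(find_seq_scans(plan))  # ['projects']
```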

Index types and when to use them:

-- B-tree (default): equality and range queries
CREATE INDEX idx_projects_user_id ON projects (user_id);
CREATE INDEX idx_orders_created_at ON orders (created_at DESC);

-- Composite index: multiple conditions (order matters; queries can use
-- any left prefix, so put equality-filter columns first)
CREATE INDEX idx_projects_user_status 
  ON projects (user_id, status)   -- serves (user_id) and (user_id, status) lookups
  WHERE deleted_at IS NULL;       -- partial index excludes deleted rows

-- GIN index: JSONB, arrays, full-text search
CREATE INDEX idx_projects_tags ON projects USING GIN (tags);
CREATE INDEX idx_documents_tsv ON documents USING GIN (to_tsvector('english', content));

-- BRIN index: sequential data (time series, auto-increment IDs)
-- Much smaller than B-tree, effective for data inserted in order
CREATE INDEX idx_events_created_at_brin ON events USING BRIN (created_at);

Identifying missing indexes:

-- Tables with high sequential scan rates (candidates for indexes)
SELECT
  schemaname,
  relname AS table_name,
  seq_scan,
  idx_scan,
  n_live_tup AS row_count,
  pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
WHERE seq_scan > idx_scan
  AND n_live_tup > 10000
ORDER BY seq_scan DESC;

Query Optimization Patterns

Avoid N+1 Queries with JOINs

-- Bad: fetch projects, then for each project fetch its owner (N+1 in application code)
SELECT * FROM projects WHERE user_id = $1;
-- Then for each project: SELECT * FROM users WHERE id = $project.owner_id

-- Good: single JOIN
SELECT
  p.id,
  p.name,
  p.status,
  u.id   AS owner_id,
  u.name AS owner_name,
  u.email AS owner_email
FROM projects p
JOIN users u ON u.id = p.owner_id
WHERE p.user_id = $1
  AND p.deleted_at IS NULL;

Pagination: Cursor-Based Over Offset

-- Bad: OFFSET-based pagination is slow at high offsets
SELECT * FROM projects ORDER BY created_at DESC LIMIT 20 OFFSET 10000;
-- PostgreSQL still reads 10,020 rows and discards the first 10,000

-- Good: cursor-based (keyset pagination)
SELECT * FROM projects 
WHERE created_at < $cursor_timestamp   -- from previous page's last row
ORDER BY created_at DESC 
LIMIT 20;
-- Only scans the 20 rows returned — fast regardless of position
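On the API side, the cursor is usually handed to clients as an opaque token rather than a raw timestamp. A minimal sketch (`encode_cursor`/`decode_cursor` are hypothetical helpers; the token is just base64 of the last row's timestamp):

```python
import base64
from datetime import datetime, timezone

def encode_cursor(last_created_at: datetime) -> str:
    """Turn the last row's timestamp into an opaque page token."""
    return base64.urlsafe_b64encode(
        last_created_at.isoformat().encode()
    ).decode()

def decode_cursor(token: str) -> datetime:
    """Recover the timestamp to feed back into the WHERE clause."""
    return datetime.fromisoformat(
        base64.urlsafe_b64decode(token.encode()).decode()
    )

ts = datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)
token = encode_cursor(ts)
assert decode_cursor(token) == ts
# The decoded value plugs into:
#   WHERE created_at < $cursor_timestamp ORDER BY created_at DESC LIMIT 20
```

In practice the cursor should carry a tiebreaker as well, e.g. the tuple (created_at, id), so rows sharing a timestamp are neither skipped nor repeated across pages.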

CTEs for Readability, Subqueries for Performance

-- Before PostgreSQL 12, CTEs (WITH clauses) were always materialized,
-- acting as an optimization fence. PG 12+ inlines them when safe;
-- control it explicitly with MATERIALIZED / NOT MATERIALIZED.

WITH active_users AS NOT MATERIALIZED (  -- inlined into the outer query
  SELECT id FROM users WHERE last_active_at > NOW() - INTERVAL '30 days'
)
SELECT p.name, p.status
FROM projects p
JOIN active_users au ON au.id = p.owner_id
WHERE p.status = 'active';

🚀 Senior Engineers. No Junior Handoffs. Ever.

You get the senior developer, not a project manager who relays your requirements to someone you never meet. Every Viprasol project has a senior lead from kickoff to launch.

  • MVPs in 4–8 weeks, full platforms in 3–5 months
  • Lighthouse 90+ performance scores standard
  • Works across US, UK, AU timezones
  • Free 30-min architecture review, no commitment

Database Migrations: Zero-Downtime Patterns

Migrations on a live production database require care. Adding a column or index can lock the table and cause downtime.

Adding a column safely:

-- Safe: add nullable column (no lock)
ALTER TABLE orders ADD COLUMN tracking_number TEXT;

-- Then backfill in batches (not one UPDATE on millions of rows).
-- Note: UPDATE has no ORDER BY/LIMIT of its own, so select each
-- batch of ids in a subquery.
DO $$
DECLARE
  batch_size   INT := 1000;
  rows_updated INT;
BEGIN
  LOOP
    UPDATE orders SET tracking_number = generate_tracking()
    WHERE id IN (
      SELECT id FROM orders
      WHERE tracking_number IS NULL
      LIMIT batch_size
    );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    EXIT WHEN rows_updated = 0;
    PERFORM pg_sleep(0.1);  -- throttle: don't hammer the database
  END LOOP;
END $$;
-- Caveat: a DO block runs in a single transaction. For very large
-- tables, run the batches from a script or a procedure that COMMITs
-- between batches so row locks and bloat stay bounded.

-- Then add NOT NULL (after all rows are backfilled). On large tables,
-- first add CHECK (tracking_number IS NOT NULL) NOT VALID and
-- VALIDATE it; PostgreSQL 12+ can then SET NOT NULL without a full scan.
ALTER TABLE orders ALTER COLUMN tracking_number SET NOT NULL;

Adding an index without locking:

-- Standard CREATE INDEX locks the table for writes during build
-- CREATE INDEX CONCURRENTLY does not — takes longer, but safe for production

CREATE INDEX CONCURRENTLY idx_orders_tracking 
  ON orders (tracking_number)
  WHERE tracking_number IS NOT NULL;

Expand-Contract pattern for renaming columns:

Step 1: Add new column (old column still exists)
Step 2: Deploy code that writes to BOTH old and new column
Step 3: Backfill new column from old
Step 4: Deploy code that reads from new column
Step 5: Deploy code that writes to new column only
Step 6: Drop old column (safe — no code reads it)

This pattern allows zero-downtime renames that would otherwise require locking the table.
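Step 2, the dual write, lives in application code. A hypothetical sketch, assuming an old column tracking_no being renamed to tracking_number and psycopg-style %s placeholders:

```python
def dual_write_update(order_id: str, value: str) -> tuple:
    """During the expand phase, write every change to BOTH the old
    and the new column, so code reading either column stays consistent.
    (Column names here are illustrative.)"""
    sql = (
        "UPDATE orders "
        "SET tracking_no = %s, tracking_number = %s "  # old + new column
        "WHERE id = %s"
    )
    return sql, (value, value, order_id)

sql, params = dual_write_update("abc-123", "TRK-9")
assert params == ("TRK-9", "TRK-9", "abc-123")
```

Once step 5 ships (writes go to the new column only), this helper collapses back to a single-column UPDATE and the old column can be dropped.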


Connection Pooling

PostgreSQL has a per-connection overhead of ~5–10MB RAM. At 100 connections, that's 500MB–1GB dedicated to connection management. PgBouncer (connection pooler) sits between your application and PostgreSQL, multiplexing many application connections into fewer database connections.

# pgbouncer.ini
[databases]
myapp = host=postgres-primary port=5432 dbname=myapp

[pgbouncer]
pool_mode = transaction       # transaction pooling — most efficient
max_client_conn = 1000        # max connections from application
default_pool_size = 25        # connections to PostgreSQL per database
min_pool_size = 5
reserve_pool_size = 5

For ECS/Kubernetes deployments: each task/pod has its own connection pool. With 10 tasks each holding 25 connections, that's 250 connections to PostgreSQL. Set max_connections in PostgreSQL accordingly (default is 100 — almost always too low for production).
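The sizing arithmetic above is worth making explicit, for example as a deploy-time sanity check. A small sketch (`connection_budget` is a hypothetical helper; the reserved headroom covers superuser and maintenance sessions):

```python
def connection_budget(tasks: int, pool_size: int,
                      max_connections: int, reserved: int = 5) -> bool:
    """Check that app-side pools fit under PostgreSQL's max_connections,
    leaving headroom for superuser/admin sessions."""
    needed = tasks * pool_size
    return needed <= max_connections - reserved

# 10 tasks x 25 connections = 250: blows past the default of 100
assert not connection_budget(tasks=10, pool_size=25, max_connections=100)
assert connection_budget(tasks=10, pool_size=25, max_connections=300)
```

With PgBouncer in transaction mode the check applies to PgBouncer's pool toward PostgreSQL (default_pool_size per database), while app tasks count against max_client_conn instead.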


Cost Ranges for Database Development

| Project Type | Scope | Cost Range |
| --- | --- | --- |
| Schema design (new product) | Data model + migrations + indexes | $8K–$25K |
| Database performance audit | Query analysis + index recommendations | $8K–$20K |
| Database migration (technology change) | e.g., MySQL → PostgreSQL | $15K–$50K |
| Ongoing DBA support | Query review + maintenance | $3K–$8K/month |

Working With Viprasol

Database design is integrated into every project we deliver — we design schemas during discovery, set up proper indexing from day one, use EXPLAIN ANALYZE to validate query performance, and implement zero-downtime migration patterns. We use PostgreSQL as the default and add other databases only when genuinely warranted.

Our web application development services include database design and optimization as standard deliverables.

Need database development expertise? Viprasol Tech designs and optimizes databases for startups and enterprises. Contact us.


See also: Custom Web Application Development · SaaS Development Services · Software Architecture Consulting

Sources: PostgreSQL Documentation · PgBouncer Documentation · Use the Index, Luke — SQL Indexing Guide

About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
