PostgreSQL Upsert Patterns: ON CONFLICT, Idempotent Writes, and Merge Operations
Master PostgreSQL upsert with ON CONFLICT DO UPDATE. Covers conflict targets, partial updates with EXCLUDED, idempotent write patterns, conditional upserts, bulk upserts, and common pitfalls like lost updates.
INSERT ... ON CONFLICT DO UPDATE — PostgreSQL's upsert — is one of the most useful and most misused features in the database. Used correctly, it enables idempotent writes (safe to retry), efficient synchronization of external data, and atomic check-and-set operations. Used carelessly, it silently loses updates and creates subtle race conditions.
This guide covers the patterns that work and the pitfalls to avoid.
Basic Syntax
-- Insert or update: conflict on primary key
INSERT INTO users (id, email, name, updated_at)
VALUES ($1, $2, $3, NOW())
ON CONFLICT (id) DO UPDATE SET
email = EXCLUDED.email,
name = EXCLUDED.name,
updated_at = NOW();
-- EXCLUDED refers to the row proposed for insertion (the values you tried to insert)
-- Common mistake: writing "VALUES.email" instead of EXCLUDED.email (a syntax error)
Conflict Targets
-- Conflict on primary key (implicit)
INSERT INTO items (id, name) VALUES ($1, $2)
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;
-- Conflict on unique column
INSERT INTO users (email, name) VALUES ($1, $2)
ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name;
-- Conflict on composite unique constraint
INSERT INTO workspace_members (workspace_id, user_id, role)
VALUES ($1, $2, $3)
ON CONFLICT (workspace_id, user_id) DO UPDATE SET
role = EXCLUDED.role,
updated_at = NOW();
-- Conflict on partial unique index
-- Repeat the index columns AND its WHERE predicate in the conflict target;
-- ON CONFLICT ON CONSTRAINT only works for named constraints, and a partial
-- unique index is an index, not a constraint
CREATE UNIQUE INDEX idx_active_slug ON posts(workspace_id, slug)
WHERE deleted_at IS NULL;
INSERT INTO posts (workspace_id, slug, title)
VALUES ($1, $2, $3)
ON CONFLICT (workspace_id, slug) WHERE deleted_at IS NULL DO UPDATE SET
title = EXCLUDED.title;
DO NOTHING vs DO UPDATE
-- DO NOTHING: insert if not exists, ignore if exists
-- Returns nothing for conflicting rows (can't tell if inserted or skipped)
INSERT INTO event_log (event_id, type, payload)
VALUES ($1, $2, $3)
ON CONFLICT (event_id) DO NOTHING;
-- DO UPDATE RETURNING: tells you which row came back and whether it was new
INSERT INTO idempotency_keys (key, response_body, created_at)
VALUES ($1, $2, NOW())
ON CONFLICT (key) DO UPDATE SET
last_seen_at = NOW() -- Just touch it
RETURNING
key,
(xmax = 0) AS inserted, -- xmax = 0 means the row was inserted, not updated
-- Note: the xmax trick leans on MVCC internals; it is widely used but not
-- formally documented, so treat it as a convention rather than a guarantee
response_body;
Conditional Update (Don't Overwrite Newer Data)
-- Only update if the incoming data is newer
INSERT INTO user_profiles (user_id, bio, avatar_url, updated_at)
VALUES ($1, $2, $3, $4::timestamptz)
ON CONFLICT (user_id) DO UPDATE SET
bio = EXCLUDED.bio,
avatar_url = EXCLUDED.avatar_url,
updated_at = EXCLUDED.updated_at
WHERE
-- Only apply update if incoming timestamp is newer than current
EXCLUDED.updated_at > user_profiles.updated_at;
-- "WHERE" clause in ON CONFLICT filters which rows get updated
-- Rows that fail the WHERE: conflict is detected but update is skipped
-- Returns nothing for skipped rows
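Timestamps can tie when two writers send the same value, so some systems compare an explicit version number instead. A minimal sketch, assuming a hypothetical documents table with an integer version column:

```sql
-- Last-writer-wins by version instead of timestamp (documents is illustrative)
INSERT INTO documents (id, content, version)
VALUES ($1, $2, $3)
ON CONFLICT (id) DO UPDATE SET
  content = EXCLUDED.content,
  version = EXCLUDED.version
WHERE EXCLUDED.version > documents.version;
```

A monotonically increasing version sidesteps clock skew between writers entirely.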
Partial Update: Only Set Provided Fields
-- Only update fields that are actually changing (don't null out missing fields)
INSERT INTO settings (workspace_id, key, value, updated_at)
VALUES ($1, $2, $3, NOW())
ON CONFLICT (workspace_id, key) DO UPDATE SET
value = EXCLUDED.value,
updated_at = NOW();
-- For multi-field settings: use COALESCE to preserve existing values
INSERT INTO workspace_config (workspace_id, theme, language, timezone)
VALUES ($1, $2, $3, $4)
ON CONFLICT (workspace_id) DO UPDATE SET
theme = COALESCE(EXCLUDED.theme, workspace_config.theme),
language = COALESCE(EXCLUDED.language, workspace_config.language),
timezone = COALESCE(EXCLUDED.timezone, workspace_config.timezone),
updated_at = NOW();
-- If EXCLUDED.theme is NULL, keep existing theme
Incrementing Counters Atomically
-- Upsert a counter: create with 1 if not exists, increment if exists
INSERT INTO page_views (page, date, views)
VALUES ($1, CURRENT_DATE, 1)
ON CONFLICT (page, date) DO UPDATE SET
views = page_views.views + 1;
-- Note: "page_views.views + 1" NOT "EXCLUDED.views + page_views.views"
-- EXCLUDED.views = 1 (the attempted insert value), not meaningful for counters
-- Concurrency note: ON CONFLICT DO UPDATE locks the conflicting row before
-- updating it, so concurrent increments serialize correctly on their own;
-- no separate SELECT ... FOR UPDATE is needed
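The same shape extends to pre-aggregated counts: if the application batches increments before writing, the delta can ride in through EXCLUDED. A sketch against the same page_views table:

```sql
-- $2 carries a batched delta instead of a constant 1
INSERT INTO page_views (page, date, views)
VALUES ($1, CURRENT_DATE, $2)
ON CONFLICT (page, date) DO UPDATE SET
  views = page_views.views + EXCLUDED.views;
```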
Bulk Upsert with unnest
// lib/db/bulk-upsert.ts
import { prisma } from "@/lib/prisma";
interface MetricRecord {
workspaceId: string;
date: string;
metricName: string;
value: number;
}
// Upsert thousands of rows in a single query using unnest
export async function bulkUpsertMetrics(records: MetricRecord[]): Promise<void> {
if (!records.length) return;
const workspaceIds = records.map((r) => r.workspaceId);
const dates = records.map((r) => r.date);
const metricNames = records.map((r) => r.metricName);
const values = records.map((r) => r.value);
await prisma.$executeRaw`
INSERT INTO daily_metrics (workspace_id, date, metric_name, value, updated_at)
SELECT
UNNEST(${workspaceIds}::uuid[]),
UNNEST(${dates}::date[]),
UNNEST(${metricNames}::text[]),
UNNEST(${values}::numeric[]),
NOW()
ON CONFLICT (workspace_id, date, metric_name) DO UPDATE SET
value = EXCLUDED.value,
updated_at = NOW()
`;
}
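Very large arrays can run into driver message-size or memory limits, so it is worth capping batch size before calling bulkUpsertMetrics. A minimal sketch; the chunk helper and the 5000 batch size are our own choices, not part of the original code:

```typescript
// Split a large array into fixed-size batches
export function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error("size must be positive");
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage sketch:
// for (const batch of chunk(records, 5000)) {
//   await bulkUpsertMetrics(batch);
// }
```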
Upsert with JSON for Flexible Attributes
-- Merge JSONB attributes: don't overwrite, merge/patch
INSERT INTO product_attributes (product_id, attributes, updated_at)
VALUES ($1, $2::jsonb, NOW())
ON CONFLICT (product_id) DO UPDATE SET
-- Merge: existing attributes + new attributes (new wins on key conflict)
attributes = product_attributes.attributes || EXCLUDED.attributes,
updated_at = NOW();
-- Example:
-- Existing: {"color": "red", "size": "M"}
-- Incoming: {"size": "L", "weight": "200g"}
-- Result: {"color": "red", "size": "L", "weight": "200g"}
-- Deep merge (requires custom function or use jsonb_set per key)
-- For most use cases, || is sufficient
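For the deep-merge case mentioned above, one option is a small recursive function. This is a sketch, not a built-in: jsonb_deep_merge is our own name, it returns SQL NULL when both sides are empty objects, and arrays are replaced rather than merged, so test it against your data before relying on it.

```sql
-- Sketch: recursive deep merge; objects merge key-by-key, anything else b wins
CREATE OR REPLACE FUNCTION jsonb_deep_merge(a jsonb, b jsonb)
RETURNS jsonb LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
  IF jsonb_typeof(a) = 'object' AND jsonb_typeof(b) = 'object' THEN
    RETURN (
      SELECT jsonb_object_agg(
               COALESCE(ka.key, kb.key),
               CASE
                 WHEN ka.value IS NULL THEN kb.value
                 WHEN kb.value IS NULL THEN ka.value
                 ELSE jsonb_deep_merge(ka.value, kb.value)
               END)
      FROM jsonb_each(a) ka
      FULL JOIN jsonb_each(b) kb ON ka.key = kb.key
    );
  END IF;
  RETURN b;
END;
$$;

-- Then in the upsert:
-- attributes = jsonb_deep_merge(product_attributes.attributes, EXCLUDED.attributes)
```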
Idempotency Key Pattern
-- Safely implement idempotent API endpoints
CREATE TABLE idempotency_keys (
key TEXT PRIMARY KEY,
status TEXT NOT NULL DEFAULT 'processing', -- 'processing' | 'complete' | 'failed'
request_hash TEXT NOT NULL,
response_body JSONB,
last_seen_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), -- touched on every retry
expires_at TIMESTAMPTZ NOT NULL DEFAULT NOW() + INTERVAL '24 hours',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_idempotency_expires ON idempotency_keys(expires_at);
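The expires_at index above exists for cleanup. A periodic job (cron, pg_cron, or a background worker) keeps the table from growing without bound:

```sql
-- Run periodically; idx_idempotency_expires makes this scan cheap
DELETE FROM idempotency_keys WHERE expires_at < NOW();
```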
// lib/idempotency.ts
import { prisma } from "@/lib/prisma";
import { createHash } from "crypto";
export async function withIdempotency<T>(
key: string,
requestBody: unknown,
fn: () => Promise<T>
): Promise<{ result: T; cached: boolean }> {
const requestHash = createHash("sha256")
.update(JSON.stringify(requestBody))
.digest("hex");
// Try to insert the key; if it exists, return cached response
const existing = await prisma.$queryRaw<Array<{ response_body: unknown; status: string; request_hash: string; inserted: boolean }>>`
INSERT INTO idempotency_keys (key, request_hash)
VALUES (${key}, ${requestHash})
ON CONFLICT (key) DO UPDATE SET
last_seen_at = NOW()
RETURNING response_body, status, request_hash, (xmax = 0) AS inserted
`;
const row = existing[0];
if (!row.inserted) {
// Key already existed: guard against key reuse with a different body
if (row.request_hash !== requestHash) {
throw new Error("Idempotency key reused with a different request body");
}
if (row.status === "complete" && row.response_body) {
return { result: row.response_body as T, cached: true };
}
if (row.status === "processing") {
throw new Error("Request is still processing — retry after a moment");
}
if (row.status === "failed") {
throw new Error("Previous attempt failed — use a new idempotency key");
}
}
// Execute the operation
try {
const result = await fn();
await prisma.$executeRaw`
UPDATE idempotency_keys
SET status = 'complete', response_body = ${JSON.stringify(result)}::jsonb
WHERE key = ${key}
`;
return { result, cached: false };
} catch (err) {
await prisma.$executeRaw`
UPDATE idempotency_keys SET status = 'failed' WHERE key = ${key}
`;
throw err;
}
}
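One caveat with hashing JSON.stringify(requestBody): serialization follows object key insertion order, so two logically identical bodies can produce different hashes and defeat the cache. A hedged sketch of a canonical hash; canonicalize and stableRequestHash are our own names, not part of any library:

```typescript
import { createHash } from "crypto";

// Serialize with object keys sorted so key order cannot change the hash
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(canonicalize).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
      .map(([k, v]) => `${JSON.stringify(k)}:${canonicalize(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

export function stableRequestHash(body: unknown): string {
  return createHash("sha256").update(canonicalize(body)).digest("hex");
}
```

Array order still matters (as it should), only object key order is normalized.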
Common Pitfalls
-- ❌ PITFALL 1: Bumping updated_at even when nothing changed
-- This makes row-level change detection unreliable
INSERT INTO items (id, name) VALUES ($1, $2)
ON CONFLICT (id) DO UPDATE SET
name = EXCLUDED.name,
updated_at = NOW(); -- Sets even if name didn't change
-- ✅ FIX: Only bump updated_at if something actually changed
INSERT INTO items (id, name) VALUES ($1, $2)
ON CONFLICT (id) DO UPDATE SET
name = EXCLUDED.name,
updated_at = CASE
WHEN EXCLUDED.name IS DISTINCT FROM items.name
THEN NOW()
ELSE items.updated_at
END;
-- ❌ PITFALL 2: Lost update with concurrent upserts
-- Transaction 1 and 2 both check "if not exists, insert"
-- Both get "not found", both try to insert — one fails or overwrites the other
-- Solution: use ON CONFLICT, not separate SELECT + INSERT
-- ❌ PITFALL 3: ON CONFLICT DO NOTHING loses the RETURNING
-- If you need the row after insert-or-skip, use DO UPDATE with a no-op
INSERT INTO tags (workspace_id, name)
VALUES ($1, $2)
ON CONFLICT (workspace_id, name) DO UPDATE SET
name = tags.name -- No-op update (sets to current value)
RETURNING id; -- Now always returns the row
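The no-op update works, but it still writes a new row version and takes a row lock on every call. An alternative is a data-modifying CTE that reads the existing row instead. One caveat: if a concurrent, uncommitted transaction inserted the row, this can return zero rows, so retry on an empty result or fall back to the no-op form:

```sql
WITH ins AS (
  INSERT INTO tags (workspace_id, name)
  VALUES ($1, $2)
  ON CONFLICT (workspace_id, name) DO NOTHING
  RETURNING id
)
SELECT id FROM ins
UNION ALL
SELECT id FROM tags WHERE workspace_id = $1 AND name = $2
LIMIT 1;
```

The outer SELECT cannot see the CTE's freshly inserted row (same snapshot), so the UNION ALL never returns duplicates.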
-- ❌ PITFALL 4: Forgetting the unique constraint on the conflict column
-- ON CONFLICT requires a matching unique constraint or unique index
-- Without one, the statement fails at runtime (not a syntax error) with:
-- ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
TypeScript Prisma Upsert
// Prisma upsert — uses ON CONFLICT under the hood
const user = await prisma.user.upsert({
where: { email: "alice@example.com" },
create: {
email: "alice@example.com",
name: "Alice",
role: "member",
},
update: {
name: "Alice Updated", // Only updates these fields on conflict
updatedAt: new Date(),
},
});
// For conditional upserts (Prisma doesn't support WHERE clause on DO UPDATE):
// Use prisma.$queryRaw for complex conditional upserts
await prisma.$executeRaw`
INSERT INTO user_sessions (user_id, token, expires_at)
VALUES (${userId}::uuid, ${token}, ${expiresAt}::timestamptz)
ON CONFLICT (token) DO UPDATE SET
expires_at = EXCLUDED.expires_at
WHERE EXCLUDED.expires_at > user_sessions.expires_at
`;
Cost and Timeline Estimates
This is a database pattern — the "cost" is developer time to implement correctly:
| Pattern | Complexity | Time to Implement |
|---|---|---|
| Basic upsert | Low | 30 min |
| Conditional update (WHERE clause) | Low | 1 hour |
| Bulk upsert with unnest | Medium | 2–3 hours |
| Idempotency key system | Medium | 1–2 days |
| JSONB merge upsert | Low | 1 hour |
Performance: a single upsert costs about the same as a plain INSERT, typically well under a millisecond against an indexed table. A bulk unnest upsert of 1K rows usually completes in tens of milliseconds, far faster than issuing 1K individual upserts, since it avoids 1K network round trips.
See Also
- PostgreSQL Full-Text Search Advanced
- PostgreSQL JSONB Patterns and Indexing
- PostgreSQL Triggers and Audit Tables
- SaaS Webhook System with Idempotent Delivery
- Stripe Webhook Handling with Idempotency
Working With Viprasol
Upsert is one of those features where 10 minutes of correct usage saves hours of debugging race conditions and duplicate records. Our team uses ON CONFLICT DO UPDATE throughout our data pipelines — for metric ingestion, event processing, sync endpoints, and idempotent API operations — and we know the edge cases that cause silent data loss.
What we deliver:
- Upsert patterns for your specific conflict scenarios
- Conditional update with WHERE clause to avoid overwriting newer data
- Bulk upsert via unnest for high-throughput ingestion
- Idempotency key system for safe API retries
- JSONB merge upsert for flexible attribute storage
Talk to our team about your database write patterns →
Or explore our cloud and database services.
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.