
PostgreSQL Connection Pooling in 2026: PgBouncer, RDS Proxy, and Pool Sizing

Master PostgreSQL connection pooling in 2026: PgBouncer transaction mode, RDS Proxy for serverless, pool sizing formulas, connection leak detection, and Prisma/node-postgres configuration.

Viprasol Tech Team
January 30, 2027
13 min read

PostgreSQL Connection Pooling in 2026: PgBouncer, RDS Proxy, and Pool Sizing

PostgreSQL has a hard limit on simultaneous connections: typically 100 on small instances, 500–5000 on large ones. Each connection consumes ~5–10 MB of RAM on the server regardless of whether it's doing anything. A Next.js app with 10 serverless instances, each with a 10-connection pool, hits 100 connections before you've handled a single user request.

Connection pooling is the solution: a proxy sits between your application and PostgreSQL, reusing connections efficiently. This post covers the three-layer pooling stack used in production in 2026, PgBouncer vs RDS Proxy trade-offs, pool sizing math, and how to configure node-postgres and Prisma correctly.


The Connection Problem

Without pooling:
  10 Next.js instances × 10 DB connections each = 100 connections
  + 5 worker processes × 5 connections = 25 connections
  + 3 cron jobs × 2 connections = 6 connections
  Total: 131 connections on a db.t3.medium (100 max) → FATAL: too many connections

With PgBouncer:
  All app instances → PgBouncer (manages a pool of 20 server connections)
  PgBouncer → PostgreSQL (20 connections max)
  App sees: unlimited virtual connections
  PostgreSQL sees: 20 connections
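The capacity arithmetic above is worth scripting into a quick sanity check before deploys. A minimal sketch (the fleet numbers are the example's, not a recommendation):

```typescript
// Sum the connections a fleet would open without a pooler.
type Fleet = { instances: number; connectionsEach: number };

function totalConnections(fleets: Fleet[]): number {
  return fleets.reduce((sum, f) => sum + f.instances * f.connectionsEach, 0);
}

const demand = totalConnections([
  { instances: 10, connectionsEach: 10 }, // Next.js instances
  { instances: 5, connectionsEach: 5 },   // worker processes
  { instances: 3, connectionsEach: 2 },   // cron jobs
]);

console.log(demand, demand > 100 ? "exceeds max_connections" : "fits");
// prints: 131 exceeds max_connections
```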

Layer 1: Application-Level Pool (node-postgres)

// lib/db/pool.ts
import { Pool, PoolConfig } from "pg";

// Pool configuration for a long-running server (ECS, EC2)
const poolConfig: PoolConfig = {
  connectionString: process.env.DATABASE_URL,
  
  // Max connections from this application instance to PgBouncer/DB
  // Formula: (num_cores * 2) + effective_spindle_count
  // For a 2-core instance: (2*2)+1 = 5 connections is often sufficient
  max: parseInt(process.env.DB_POOL_MAX ?? "10"),
  
  // Minimum pool size (keep these warm)
  min: parseInt(process.env.DB_POOL_MIN ?? "2"),
  
  // How long a connection can sit idle before being closed (ms)
  idleTimeoutMillis: 30_000,    // 30 seconds
  
  // How long to wait for a connection from the pool (ms)
  connectionTimeoutMillis: 5_000, // 5 seconds, fail fast
  
  // Note: pg-pool has no option to cap the waiting queue; rely on
  // connectionTimeoutMillis above to fail fast when the pool is exhausted.

  // Enable keep-alive to detect dead connections
  keepAlive: true,
  keepAliveInitialDelayMillis: 10_000,

  // Statement timeout โ€” prevent long queries from blocking pool connections
  statement_timeout: 30_000,     // 30 seconds
  
  // SSL for managed databases
  ssl: process.env.DATABASE_SSL === "true"
    ? { rejectUnauthorized: true }
    : undefined,
};

export const pool = new Pool(poolConfig);

// Monitor pool health
pool.on("error", (err, client) => {
  console.error("Unexpected pool client error:", err);
});

pool.on("connect", () => {
  if (process.env.NODE_ENV === "development") {
    console.debug(`Pool total: ${pool.totalCount}, idle: ${pool.idleCount}, waiting: ${pool.waitingCount}`);
  }
});

// Graceful shutdown
process.on("SIGTERM", async () => {
  await pool.end();
});

// Health check
export async function checkDatabaseHealth(): Promise<{ healthy: boolean; latencyMs: number }> {
  const start = Date.now();
  try {
    const client = await pool.connect();
    await client.query("SELECT 1");
    client.release();
    return { healthy: true, latencyMs: Date.now() - start };
  } catch {
    return { healthy: false, latencyMs: Date.now() - start };
  }
}
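A helper worth layering on this pool: every hand-written connect()/release() pair risks leaking a connection on the error path, so it pays to centralise the pattern. A minimal sketch (withClient is our name, not a pg API; it is typed structurally so it accepts pg's Pool without importing it here):

```typescript
// Acquire a client, run the callback, and always release, even when it throws.
interface Releasable {
  query(text: string): Promise<unknown>;
  release(): void;
}
interface MinimalPool {
  connect(): Promise<Releasable>;
}

export async function withClient<T>(
  pool: MinimalPool,
  fn: (client: Releasable) => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    return await fn(client);
  } finally {
    client.release(); // runs on both success and throw, so nothing leaks
  }
}
```

Callers then write `await withClient(pool, (c) => c.query("SELECT 1"))` and can no longer forget the release.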

Prisma connection pool configuration:

// lib/db/prisma.ts
import { PrismaClient } from "@prisma/client";

const globalForPrisma = globalThis as unknown as { prisma: PrismaClient };

export const db =
  globalForPrisma.prisma ||
  new PrismaClient({
    log:
      process.env.NODE_ENV === "development"
        ? ["query", "warn", "error"]
        : ["warn", "error"],

    // Datasource configuration
    datasources: {
      db: {
        url: process.env.DATABASE_URL,
      },
    },
  });

// Prevent multiple Prisma instances in development (hot reload)
if (process.env.NODE_ENV !== "production") globalForPrisma.prisma = db;

# .env: Prisma connection pool URL parameters
# connection_limit: max connections per Prisma instance
# pool_timeout: max wait time for a connection (seconds)
# connect_timeout: TCP connection timeout
# pgbouncer=true: required when connecting through PgBouncer in transaction
#   mode (tells Prisma not to rely on prepared statements)
DATABASE_URL="postgresql://user:password@pgbouncer:5432/mydb?connection_limit=5&pool_timeout=10&connect_timeout=10&pgbouncer=true"

๐ŸŒ Looking for a Dev Team That Actually Delivers?

Most agencies sell you a project manager and assign juniors. Viprasol is different โ€” senior engineers only, direct Slack access, and a 5.0โ˜… Upwork record across 100+ projects.

  • React, Next.js, Node.js, TypeScript โ€” production-grade stack
  • Fixed-price contracts โ€” no surprise invoices
  • Full source code ownership from day one
  • 90-day post-launch support included

Layer 2: PgBouncer

PgBouncer is the industry-standard connection pooler for PostgreSQL. Run it as a sidecar container alongside your application or as a dedicated service.

PgBouncer modes:

  • Session: one server connection is held for the client's entire session. All session features work (SET, prepared statements, LISTEN/NOTIFY, advisory locks), but pooling is least effective.
  • Transaction: a server connection is assigned per transaction and returned to the pool after COMMIT/ROLLBACK. ✅ Best for most apps; session-level state does not survive across transactions.
  • Statement: the server connection is returned after each statement. Incompatible with multi-statement transactions; rarely used.

Use transaction mode for most Node.js/Python applications; session mode holds one server connection per connected client and wastes capacity.
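The cost of transaction mode is that session-level state is unreliable: two statements from the same client may run on different server connections. The standard workaround is to pin state to a transaction with SET LOCAL. A sketch of the pattern (withTimeout and the minimal Queryable interface are ours, shown without a live database):

```typescript
// Run work with a per-transaction statement_timeout under PgBouncer
// transaction pooling. SET LOCAL only lasts until COMMIT/ROLLBACK, so it
// never leaks onto whichever server connection the next client gets.
interface Queryable {
  query(text: string): Promise<unknown>;
}

export async function withTimeout<T>(
  client: Queryable,
  timeoutMs: number,
  run: () => Promise<T>
): Promise<T> {
  await client.query("BEGIN");
  try {
    await client.query(`SET LOCAL statement_timeout = ${timeoutMs}`);
    const result = await run();
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  }
}
```

SET LOCAL is reset at transaction end, which is exactly the granularity at which PgBouncer reassigns server connections.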

# pgbouncer.ini
[databases]
# Wildcard entry โ€” any database name maps to the same PostgreSQL server
* = host=postgres-primary port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 5432

# Authentication
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt

# Pooling mode
pool_mode = transaction

# Max server connections to PostgreSQL across all pools (tune to DB capacity)
# Guideline: max_connections * 0.8, leaving headroom for admin and replication
max_db_connections = 80

# Default pool size per database+user pair
default_pool_size = 20

# Maximum client connections (your app can have many virtual connections)
max_client_conn = 2000

# Min pool size (keep N connections warm)
min_pool_size = 5

# Idle timeout for server connections (seconds)
server_idle_timeout = 300

# Client connection timeout
client_login_timeout = 10

# Health check query
server_check_query = SELECT 1

# Logging
# Disable per-connection logging in production (too verbose)
log_connections = 0
log_disconnections = 0
log_pooler_errors = 1
stats_period = 60

# Admin interface
admin_users = pgbouncer_admin

Docker Compose for local development:

# docker-compose.yml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    ports:
      - "5433:5432"   # Direct access (for migrations)

  pgbouncer:
    image: bitnami/pgbouncer:latest
    environment:
      POSTGRESQL_HOST: postgres
      POSTGRESQL_PORT: 5432
      POSTGRESQL_DATABASE: mydb
      POSTGRESQL_USERNAME: myuser
      POSTGRESQL_PASSWORD: mypassword
      PGBOUNCER_POOL_MODE: transaction
      PGBOUNCER_MAX_CLIENT_CONN: 1000
      PGBOUNCER_DEFAULT_POOL_SIZE: 10
    ports:
      - "5432:6432"   # App connects here
    depends_on:
      - postgres
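One detail worth making explicit with this setup: point application traffic at PgBouncer (host port 5432) but run migrations against PostgreSQL directly (host port 5433), since migration tools often depend on session-level features such as advisory locks. A hypothetical .env matching the ports above (hostnames and credentials are the compose example's, not a recommendation):

```ini
# App traffic goes through PgBouncer (transaction mode)
DATABASE_URL=postgresql://myuser:mypassword@localhost:5432/mydb
# Migrations bypass PgBouncer and talk to PostgreSQL directly
DIRECT_DATABASE_URL=postgresql://myuser:mypassword@localhost:5433/mydb
```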

Layer 3: RDS Proxy (for Serverless / Lambda)

RDS Proxy is AWS's managed connection pooler. It is essential for Lambda, where every concurrent execution environment opens its own database connections and a traffic spike can exhaust the database in seconds:

# terraform/rds-proxy.tf

resource "aws_db_proxy" "main" {
  name                   = "${var.name}-${var.environment}"
  debug_logging          = false
  engine_family          = "POSTGRESQL"
  idle_client_timeout    = 1800  # 30 minutes
  require_tls            = true
  role_arn               = aws_iam_role.rds_proxy.arn
  vpc_security_group_ids = [aws_security_group.rds_proxy.id]
  vpc_subnet_ids         = var.private_subnet_ids

  auth {
    auth_scheme = "SECRETS"
    iam_auth    = "REQUIRED"
    secret_arn  = aws_secretsmanager_secret.db_credentials.arn
  }

  tags = var.common_tags
}

resource "aws_db_proxy_default_target_group" "main" {
  db_proxy_name = aws_db_proxy.main.name

  connection_pool_config {
    connection_borrow_timeout    = 120  # Wait up to 2 min for a connection
    max_connections_percent      = 80   # Use max 80% of DB max_connections
    max_idle_connections_percent = 50   # Keep 50% idle connections warm
  }
}

resource "aws_db_proxy_target" "main" {
  db_instance_identifier = aws_db_instance.main.id
  db_proxy_name          = aws_db_proxy.main.name
  target_group_name      = aws_db_proxy_default_target_group.main.name
}

Lambda connecting via RDS Proxy:

// Lambda function using RDS Proxy with IAM auth
import { Pool } from "pg";
import { Signer } from "@aws-sdk/rds-signer";

let pool: Pool | null = null;

async function getPool(): Promise<Pool> {
  if (pool) return pool;

  // Generate IAM auth token (expires in 15 minutes)
  const signer = new Signer({
    hostname: process.env.RDS_PROXY_ENDPOINT!,
    port: 5432,
    region: process.env.AWS_REGION!,
    username: process.env.DB_USERNAME!,
  });

  const token = await signer.getAuthToken();

  pool = new Pool({
    host: process.env.RDS_PROXY_ENDPOINT,
    port: 5432,
    database: process.env.DB_NAME,
    user: process.env.DB_USERNAME,
    password: token,
    ssl: { rejectUnauthorized: true },
    max: 2,          // Lambda: keep the pool tiny (1-2 per execution environment)
    min: 0,
    idleTimeoutMillis: 0,  // 0 disables the idle timeout; the warm container keeps its connection
    connectionTimeoutMillis: 10_000,
  });

  return pool;
}

export const handler = async (event: any) => {
  const db = await getPool();
  const client = await db.connect();
  try {
    const result = await client.query("SELECT NOW()");
    return { statusCode: 200, body: JSON.stringify(result.rows[0]) };
  } finally {
    client.release();
  }
};
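One caveat in the handler above: the IAM token is generated once and cached with the pool, but RDS IAM auth tokens are only valid for 15 minutes. Existing connections keep working, yet any new connection the warm container opens after that window will fail to authenticate. A small guard, assuming the 15-minute TTL (tokenExpired is our helper, not an SDK API):

```typescript
// RDS IAM auth tokens are valid for 15 minutes; refresh slightly early.
const TOKEN_TTL_MS = 15 * 60 * 1000;
const REFRESH_MARGIN_MS = 60 * 1000;

export function tokenExpired(issuedAtMs: number, nowMs: number): boolean {
  return nowMs - issuedAtMs >= TOKEN_TTL_MS - REFRESH_MARGIN_MS;
}

// In getPool(): record issuedAt when signing the token, then replace
//   if (pool) return pool;
// with
//   if (pool && !tokenExpired(issuedAt, Date.now())) return pool;
// ending the stale pool with pool.end() before rebuilding it.
```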

🚀 Senior Engineers. No Junior Handoffs. Ever.

You get the senior developer, not a project manager who relays your requirements to someone you never meet. Every Viprasol project has a senior lead from kickoff to launch.

  • MVPs in 4–8 weeks, full platforms in 3–5 months
  • Lighthouse 90+ performance scores standard
  • Works across US, UK, AU timezones
  • Free 30-min architecture review, no commitment

Pool Sizing Formula

# PostgreSQL server max_connections guideline:
# max_connections = (RAM_GB * 25)
# Example: 4GB RDS instance → max_connections ≈ 100

# PgBouncer max_db_connections:
# max_db_connections = max_connections * 0.8  (leave headroom)
# Example: 100 * 0.8 = 80

# Application pool size per instance:
# pool_max = max_server_conn / num_app_instances
# Example: 80 connections / 10 instances = 8 per instance

# Lambda (via RDS Proxy):
# pool_max = 1-2 per Lambda instance
# RDS Proxy handles the actual pooling

# Rule of thumb for web apps:
# - Requests per second × avg query time in seconds = connections needed
# - Example: 100 req/s × 0.05s avg query time = 5 connections sufficient
#   (but add overhead for bursts: 5 × 2 = 10 connections)
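The same formulas, collected into one helper so the numbers in this section can be reproduced (these are guidelines, not hard rules, and the function names are ours):

```typescript
// Pool sizing guidelines from the formulas above.
export function serverMaxConnections(ramGb: number): number {
  return ramGb * 25; // e.g. 4 GB instance -> 100
}

export function pgbouncerServerConnections(maxConnections: number): number {
  return Math.floor(maxConnections * 0.8); // leave headroom for admin/replication
}

export function perInstancePoolMax(serverConns: number, appInstances: number): number {
  return Math.floor(serverConns / appInstances);
}

// Throughput rule of thumb with 2x burst headroom.
export function connectionsForLoad(reqPerSec: number, avgQuerySec: number): number {
  return Math.ceil(reqPerSec * avgQuerySec * 2);
}

console.log(
  serverMaxConnections(4),         // 100
  pgbouncerServerConnections(100), // 80
  perInstancePoolMax(80, 10),      // 8
  connectionsForLoad(100, 0.05)    // 10
);
```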

Detecting Connection Leaks

// Detect connections that are checked out but never released.
// Note: pg-pool emits "acquire" on every checkout; "connect" fires only
// when a brand-new physical connection is created.
pool.on("acquire", (client) => {
  const acquired = new Error("Connection acquired here"); // captures the call stack
  const originalRelease = client.release;

  // Alert if the connection is held for more than 30 seconds without release
  const timer = setTimeout(() => {
    console.error(
      "CONNECTION LEAK DETECTED: connection not released after 30s",
      acquired.stack
    );
  }, 30_000);

  client.release = (err?: Error | boolean) => {
    clearTimeout(timer);
    client.release = originalRelease; // restore so wrappers don't stack up
    return client.release(err);
  };
});

// Monitor pool metrics (expose as Prometheus metrics or CloudWatch)
setInterval(() => {
  const metrics = {
    totalConnections: pool.totalCount,
    idleConnections: pool.idleCount,
    waitingRequests: pool.waitingCount,
  };

  if (metrics.waitingRequests > 5) {
    console.warn("Pool pressure:", metrics);
  }
}, 10_000);

Cost Estimates

  • PgBouncer (self-managed, ECS sidecar): 1–2 days setup; ~$0/month beyond task overhead
  • PgBouncer (dedicated t3.micro): ~$8/month
  • RDS Proxy: 2–4 hours setup; $0.015 per vCPU-hour × the DB instance's vCPUs
  • RDS Proxy on db.t3.medium (2 vCPU): ~$21/month
  • RDS Proxy on db.r6g.xlarge (4 vCPU): ~$43/month
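The RDS Proxy rows follow directly from the per-vCPU rate, assuming roughly 730 hours in a month (check current AWS pricing for your region; AWS has also historically billed a minimum number of vCPUs):

```typescript
// RDS Proxy monthly cost: rate per vCPU-hour x DB vCPUs x hours in a month.
const HOURS_PER_MONTH = 730;

export function rdsProxyMonthlyUsd(vcpus: number, ratePerVcpuHour = 0.015): number {
  return vcpus * ratePerVcpuHour * HOURS_PER_MONTH;
}

console.log(rdsProxyMonthlyUsd(2).toFixed(2)); // "21.90" (db.t3.medium, 2 vCPU)
console.log(rdsProxyMonthlyUsd(4).toFixed(2)); // "43.80" (db.r6g.xlarge, 4 vCPU)
```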


Working With Viprasol

We configure PostgreSQL connection pooling for SaaS products, from PgBouncer deployment through RDS Proxy for Lambda-heavy architectures. Our team has resolved connection exhaustion incidents and optimized pool configurations for products handling thousands of concurrent users.

What we deliver:

  • PgBouncer configuration and deployment (Docker/ECS sidecar)
  • RDS Proxy setup with IAM auth for serverless workloads
  • Pool sizing analysis based on actual traffic patterns
  • Connection leak detection and monitoring
  • Prisma and node-postgres pool configuration review

Explore our web development services or contact us to optimize your database connection pooling.

About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading

Need a Modern Web Application?

From landing pages to complex SaaS platforms: we build it all with Next.js and React.

Free consultation • No commitment • Response within 24 hours
