Cloud Database Options in 2026: RDS vs Aurora vs PlanetScale vs Neon vs Supabase
The managed database market has fragmented dramatically. In 2020, most teams chose between RDS and Aurora and called it a day. In 2026, you have RDS (still fine), Aurora Serverless v2 (genuinely useful for spiky workloads), PlanetScale (MySQL with Git-like branching — and now Postgres support), Neon (serverless Postgres with branching), and Supabase (Postgres with a full BaaS layer on top).
Each solves different problems. Choosing wrong costs you migration pain later. This post gives you the comparison matrix, concrete pricing examples, and the questions that determine which one fits your situation.
Quick Comparison Matrix
| | RDS PostgreSQL | Aurora Serverless v2 | PlanetScale | Neon | Supabase |
|---|---|---|---|---|---|
| Engine | PostgreSQL | PostgreSQL / MySQL | MySQL / Postgres | PostgreSQL | PostgreSQL |
| Serverless scaling | ❌ | ✅ | ✅ | ✅ | ❌ (fixed instance) |
| Scale to zero | ❌ | ✅ (auto-pause) | ✅ | ✅ | ❌ |
| Database branching | ❌ | ❌ | ✅ | ✅ | ❌ |
| Connection pooling | Manual (PgBouncer) | Manual | Built-in | Built-in (Neon proxy) | Built-in (pgBouncer) |
| Read replicas | ✅ Manual | ✅ Auto | ✅ | ✅ | ✅ |
| Vector search | pgvector ext. | pgvector ext. | ❌ | pgvector ext. | pgvector ext. |
| Auth / BaaS | ❌ | ❌ | ❌ | ❌ | ✅ Full BaaS |
| Extensions | Most | Most | Limited | Most | Most |
| Multi-region | ✅ | ✅ | ✅ Global | ❌ | ❌ |
| PITR | 35 days | 35 days | ✅ | 7–30 days | 7–30 days |
| Pricing model | Instance hours | ACU hours | Cluster size | Compute-seconds | Compute-hours |
AWS RDS PostgreSQL
RDS is the safe, predictable choice. You pick an instance size, pay by the hour, and get automated backups, OS patching, and Multi-AZ failover. No surprises.
When to choose RDS
- Predictable, sustained load (>200 connections consistently)
- You need specific PostgreSQL extensions not available elsewhere
- Regulatory requirements for data residency in specific AWS regions
- Team already deep in AWS ecosystem
Sizing and pricing (us-east-1, 2026)
| Instance | vCPU | RAM | Cost/month | Suitable for |
|---|---|---|---|---|
| db.t4g.medium | 2 | 4 GB | ~$50 | Dev/staging |
| db.t4g.large | 2 | 8 GB | ~$100 | Small prod (<50 connections) |
| db.r8g.large | 2 | 16 GB | ~$175 | Mid prod |
| db.r8g.xlarge | 4 | 32 GB | ~$350 | Larger prod |
| db.r8g.2xlarge | 8 | 64 GB | ~$700 | High-throughput prod |
```hcl
# terraform/rds.tf
resource "aws_db_instance" "primary" {
  identifier     = "myapp-prod"
  engine         = "postgres"
  engine_version = "17.2"
  instance_class = "db.r8g.large"

  allocated_storage     = 100  # GB
  max_allocated_storage = 1000 # Auto-scale storage up to 1 TB
  storage_type          = "gp3"
  storage_encrypted     = true
  kms_key_id            = aws_kms_key.rds.arn

  db_name  = "myapp"
  username = "myapp_admin"
  password = random_password.db.result

  vpc_security_group_ids = [aws_security_group.rds.id]
  db_subnet_group_name   = aws_db_subnet_group.main.name

  multi_az                = true # Standby in a different AZ
  backup_retention_period = 35   # Days
  deletion_protection     = true

  performance_insights_enabled = true
  monitoring_interval          = 60 # Enhanced monitoring, seconds

  parameter_group_name = aws_db_parameter_group.postgres17.name
}

resource "aws_db_parameter_group" "postgres17" {
  family = "postgres17"

  parameter {
    name         = "shared_preload_libraries"
    value        = "pg_stat_statements,auto_explain"
    apply_method = "pending-reboot" # static parameter; requires a reboot
  }

  parameter {
    name  = "log_min_duration_statement"
    value = "1000" # Log queries taking longer than 1 second (ms)
  }

  parameter {
    name  = "work_mem"
    value = "65536" # 64 MB (value is in KB) per sort/hash operation
  }
}
```
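Since `work_mem` is allocated per sort or hash operation rather than per connection, it's worth sanity-checking the worst case before raising it. A rough sketch (the helper and its default multiplier are illustrative, not part of the Terraform above):

```typescript
// Rough worst-case memory estimate for a work_mem setting.
// work_mem is granted per sort/hash node, so one complex query
// can consume several multiples of it at once.
function worstCaseWorkMemMb(
  workMemKb: number,        // e.g. 65536 (64 MB)
  activeConnections: number,
  opsPerQuery = 2,          // sort/hash nodes per query (assumption)
): number {
  return (workMemKb / 1024) * activeConnections * opsPerQuery;
}

// 64 MB work_mem with 50 active connections and ~2 operations each
// could demand ~6400 MB on top of shared_buffers in the worst case.
const estimate = worstCaseWorkMemMb(65536, 50);
console.log(estimate); // 6400
```

On a 16 GB instance like the db.r8g.large above, that worst case leaves little room for shared_buffers, so size `work_mem` against your realistic concurrent query count, not `max_connections`.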
Aurora Serverless v2
Aurora Serverless v2 scales in ACUs (Aurora Capacity Units) between a minimum and maximum in sub-second increments. This makes it attractive for workloads with significant peaks (batch jobs, end-of-month reporting, traffic spikes).
When to choose Aurora Serverless v2
- Variable load with 3x+ peak-to-average ratio
- Need to scale up quickly without manual instance resizing
- Want to start small (0.5 ACU minimum) and grow automatically
- Global tables (Aurora Global Database) for multi-region
Pricing example
At 0.5 ACU min / 16 ACU max, Aurora PostgreSQL in us-east-1 (~$0.12 per ACU-hour; confirm current pricing):
- Off-peak (0.5 ACU, 20 hours/day): ~300 ACU-hours ≈ $36/month
- Peak (8 ACU average, 4 hours/day): ~960 ACU-hours ≈ $115/month
- Total compute: ~$151/month vs ~$175 for a fixed db.r8g.large, with burst headroom you would otherwise have to provision permanently
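The arithmetic behind an estimate like this is a one-liner; the rate constant below is an assumption, so plug in the current AWS price list before relying on it:

```typescript
// Back-of-the-envelope Aurora Serverless v2 compute cost.
// pricePerAcuHour is an assumed rate — check current AWS pricing.
function monthlyAcuCost(
  avgAcus: number,
  hoursPerDay: number,
  pricePerAcuHour = 0.12, // assumed us-east-1 Aurora PostgreSQL rate
  daysPerMonth = 30,
): number {
  return avgAcus * hoursPerDay * daysPerMonth * pricePerAcuHour;
}

// A steady 1 ACU around the clock comes to roughly $86/month at
// this rate; a workload idling at 0.5 ACU most of the day is far less.
const alwaysOn = monthlyAcuCost(1, 24);
console.log(alwaysOn.toFixed(2)); // "86.40"
```

Summing one call per load band (off-peak, peak) reproduces the estimate above and makes it easy to compare against a fixed-instance quote.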
```hcl
resource "aws_rds_cluster" "aurora" {
  cluster_identifier = "myapp-aurora"
  engine             = "aurora-postgresql"
  engine_version     = "17.2"
  database_name      = "myapp"
  master_username    = "myapp_admin"
  master_password    = random_password.db.result

  storage_encrypted       = true
  backup_retention_period = 35

  serverlessv2_scaling_configuration {
    min_capacity = 0.5 # ACUs (minimum when idle)
    max_capacity = 64  # ACUs (maximum under load)
    # 1 ACU ≈ 2 GB RAM; 64 ACUs = 128 GB RAM
  }
}

resource "aws_rds_cluster_instance" "writer" {
  identifier         = "myapp-aurora-writer"
  cluster_identifier = aws_rds_cluster.aurora.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.aurora.engine
  engine_version     = aws_rds_cluster.aurora.engine_version
}

resource "aws_rds_cluster_instance" "reader" {
  identifier         = "myapp-aurora-reader"
  cluster_identifier = aws_rds_cluster.aurora.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.aurora.engine
  engine_version     = aws_rds_cluster.aurora.engine_version
  promotion_tier     = 2
}
```
Neon: Serverless PostgreSQL with Branching
Neon separates storage from compute. Compute scales to zero when idle; storage is persistent. This architecture enables database branching — creating a copy-on-write branch from any point in time in milliseconds.
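Conceptually, a copy-on-write branch stores only the pages that diverge from its parent, which is why creation is near-instant regardless of database size. A toy illustration of the idea (not Neon's actual implementation):

```typescript
// Toy copy-on-write branch: reads of unmodified pages fall
// through to the parent; writes store only the branch's delta.
class Branch {
  private pages = new Map<number, string>();
  constructor(private parent?: Branch) {}

  read(pageNo: number): string | undefined {
    return this.pages.get(pageNo) ?? this.parent?.read(pageNo);
  }

  write(pageNo: number, data: string): void {
    this.pages.set(pageNo, data); // only the changed page is stored
  }
}

const main = new Branch();
main.write(1, "users v1");

const preview = new Branch(main); // "created" instantly — nothing is copied
console.log(preview.read(1));     // users v1  (falls through to parent)
preview.write(1, "users v2");
console.log(preview.read(1));     // users v2  (branch's own copy)
console.log(main.read(1));        // users v1  (parent unchanged)
```

The real system versions pages by LSN so a branch can be taken from any point in time, but the storage economics are the same: a branch costs only its delta.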
When to choose Neon
- You want serverless PostgreSQL that scales to zero (dev environments, staging)
- Database branching for CI/CD (each PR gets its own database branch)
- You're building with serverless compute (Lambda, Vercel, Cloudflare Workers)
- Cost-sensitive early-stage product with unpredictable load
Neon branching for CI/CD
```yaml
# .github/workflows/test.yml — Neon database branching
name: Test with Neon Branch

on:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Create Neon branch
        id: create-branch
        uses: neondatabase/create-branch-action@v5
        with:
          project_id: ${{ secrets.NEON_PROJECT_ID }}
          api_key: ${{ secrets.NEON_API_KEY }}
          branch_name: preview/pr-${{ github.event.number }}
          parent: main

      - name: Run migrations on branch
        env:
          DATABASE_URL: ${{ steps.create-branch.outputs.db_url }}
        run: npx prisma migrate deploy

      - name: Run tests
        env:
          DATABASE_URL: ${{ steps.create-branch.outputs.db_url }}
        run: npm test

      - name: Delete Neon branch
        if: always()
        uses: neondatabase/delete-branch-action@v3
        with:
          project_id: ${{ secrets.NEON_PROJECT_ID }}
          api_key: ${{ secrets.NEON_API_KEY }}
          branch_id: ${{ steps.create-branch.outputs.branch_id }}
```
Neon pricing (2026)
| Plan | Compute | Storage | Cost |
|---|---|---|---|
| Free | 0.25 vCPU, 1 GB RAM, auto-suspend | 0.5 GB | $0 |
| Launch | Up to 4 vCPU, 16 GB RAM | 10 GB included | $19/month |
| Scale | Up to 8 vCPU, 32 GB RAM | 50 GB included | $69/month |
| Business | Up to 10 vCPU, 64 GB RAM | 500 GB included | $700/month |
Supabase: PostgreSQL + BaaS
Supabase gives you a full-featured PostgreSQL database plus an auth system, real-time subscriptions, file storage, and auto-generated REST/GraphQL APIs — all in one platform. It's the fastest way to get a backend running.
When to choose Supabase
- Indie hacker / fast MVP where speed-to-launch matters most
- You want auth, storage, and real-time out of the box
- Your team has frontend engineers doing backend work
- Building with the Supabase JS client in React/Next.js
Supabase Row Level Security (production pattern)
```sql
-- Enable RLS on every table
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

-- Policy: users can only read their own posts
CREATE POLICY "Users can read own posts" ON posts
  FOR SELECT USING (auth.uid() = user_id);

-- Policy: users can insert their own posts
CREATE POLICY "Users can create own posts" ON posts
  FOR INSERT WITH CHECK (auth.uid() = user_id);

-- Policy: users can update their own posts
-- (WITH CHECK also stops a user reassigning a row to someone else)
CREATE POLICY "Users can update own posts" ON posts
  FOR UPDATE USING (auth.uid() = user_id)
  WITH CHECK (auth.uid() = user_id);

-- Policy: admins can read all posts
CREATE POLICY "Admins can read all posts" ON posts
  FOR SELECT USING (
    EXISTS (SELECT 1 FROM user_roles WHERE user_id = auth.uid() AND role = 'admin')
  );
```
```typescript
// src/lib/supabase.ts
import { createClient } from '@supabase/supabase-js';
import type { Database } from './database.types'; // Auto-generated by Supabase CLI

export const supabase = createClient<Database>(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
);

type Post = Database['public']['Tables']['posts']['Row'];

// Real-time subscription example
export function subscribeToNewPosts(onInsert: (post: Post) => void) {
  return supabase
    .channel('public:posts')
    .on(
      'postgres_changes',
      { event: 'INSERT', schema: 'public', table: 'posts' },
      (payload) => onInsert(payload.new as Post),
    )
    .subscribe();
}
```
Supabase pricing (2026)
| Plan | Database | Bandwidth | Auth | Cost |
|---|---|---|---|---|
| Free | 500 MB, 2 CPUs | 5 GB/month | 50K MAU | $0 |
| Pro | 8 GB, dedicated | 250 GB/month | 100K MAU | $25/month |
| Team | Dedicated | 1 TB/month | Unlimited | $599/month |
Decision Flowchart
```text
Do you need auth, real-time, or file storage built in?
├─ Yes → Supabase (especially for MVPs and indie projects)
└─ No → continue

Is your team already deep in AWS?
├─ Yes → RDS or Aurora
│   ├─ Spiky load (>3x peak-to-avg)? → Aurora Serverless v2
│   └─ Steady load or specific extensions? → RDS PostgreSQL
└─ No → continue

Do you want branching for CI/CD preview environments?
├─ Yes → Neon
└─ No → continue

Are you on MySQL and need Vitess-style horizontal sharding?
├─ Yes → PlanetScale
└─ No → PostgreSQL with a basic managed service → Neon or RDS
```
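The same flow can be captured as a function, which is a handy way to document the decision in an architecture repo (all names here are illustrative):

```typescript
type Db =
  | 'Supabase'
  | 'Aurora Serverless v2'
  | 'RDS PostgreSQL'
  | 'Neon'
  | 'PlanetScale';

interface Workload {
  needsBaas: boolean;      // auth, real-time, or file storage built in
  onAws: boolean;          // team already deep in AWS
  spikyLoad: boolean;      // >3x peak-to-average ratio
  wantsBranching: boolean; // CI/CD preview environments
  mysqlSharding: boolean;  // Vitess-style horizontal sharding
}

function pickDatabase(w: Workload): Db {
  if (w.needsBaas) return 'Supabase';
  if (w.onAws) return w.spikyLoad ? 'Aurora Serverless v2' : 'RDS PostgreSQL';
  if (w.wantsBranching) return 'Neon';
  if (w.mysqlSharding) return 'PlanetScale';
  return 'Neon'; // default managed Postgres outside AWS; RDS also fits
}

const rec = pickDatabase({
  needsBaas: false,
  onAws: true,
  spikyLoad: true,
  wantsBranching: false,
  mysqlSharding: false,
});
console.log(rec); // Aurora Serverless v2
```

Like the flowchart, this is a first-pass filter, not a substitute for workload analysis.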
Migration Considerations
Step 1: Validate schema compatibility
- PlanetScale MySQL → PostgreSQL: check enum syntax, JSON types, `AUTO_INCREMENT` vs `SERIAL`
- Any → Neon: standard PostgreSQL DDL works; check extension availability
- Any → Supabase: enable RLS on all tables, or your data is publicly readable via the auto-generated REST API

Step 2: Export and import the data

```shell
pg_dump -Fc -h old-host -U user -d mydb -f backup.dump
pg_restore -h new-host -U user -d mydb backup.dump
```

Step 3: Dual-write window (for zero-downtime migration)

Write to both the old and new databases, keep reads on the old one, validate the new copy, then cut reads over.
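The dual-write step can be sketched as a thin wrapper around two clients. `DbClient` here is a hypothetical interface standing in for your real drivers, and the error handling is deliberately minimal:

```typescript
interface DbClient {
  write(table: string, row: Record<string, unknown>): Promise<void>;
  read(table: string, id: string): Promise<Record<string, unknown> | null>;
}

// During the migration window: write to both databases, keep reads
// on the old one, and log shadow-write failures instead of failing
// the user's request.
class DualWriteStore {
  constructor(private oldDb: DbClient, private newDb: DbClient) {}

  async write(table: string, row: Record<string, unknown>): Promise<void> {
    await this.oldDb.write(table, row); // old DB remains the source of truth
    try {
      await this.newDb.write(table, row); // best-effort shadow write
    } catch (err) {
      console.error('shadow write failed — alert, do not fail request', err);
    }
  }

  async read(table: string, id: string) {
    return this.oldDb.read(table, id); // reads stay on old until cutover
  }
}
```

Cutover then becomes a one-line change: point `read` at `newDb` once your validation job reports the two copies agree.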
Working With Viprasol
We help teams select, configure, and migrate to the right cloud database for their workload — from Aurora Serverless capacity planning to Neon branching CI/CD pipeline setup.
What we deliver:
- Cloud database selection based on workload analysis and cost modeling
- RDS/Aurora configuration and Terraform setup
- Neon branching CI/CD pipeline for preview environments
- Supabase RLS policy design and auth integration
- Zero-downtime database migrations between providers
→ Discuss your database architecture → Cloud infrastructure services
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.