
AWS Cost Optimization: Reserved Instances, Savings Plans, S3 Lifecycle, and RDS Right-Sizing

Reduce AWS costs 40–70% — EC2 Reserved Instances vs Savings Plans, S3 Intelligent-Tiering and lifecycle policies, RDS right-sizing, ECS Fargate Spot, and data transfer cost reduction.

Viprasol Tech Team
May 25, 2026
12 min read


AWS bills are often 40–60% higher than they need to be. The excess comes from on-demand pricing for workloads that run continuously, oversized instances, S3 storage that never gets accessed, and data transfer charges that no one noticed.

This guide covers the changes that have the highest return per hour of engineering time.


Where the Money Goes

For a typical SaaS product, the bill breaks down roughly like this:

| Service                 | % of Bill | Top Optimization                             |
|-------------------------|-----------|----------------------------------------------|
| EC2 / ECS / EKS compute | 40–55%    | Reserved Instances or Savings Plans          |
| RDS databases           | 20–30%    | Reserved + right-size + storage optimization |
| S3 storage              | 5–15%     | Lifecycle policies + Intelligent-Tiering     |
| Data transfer           | 5–20%     | CloudFront, VPC endpoints, region planning   |
| ElastiCache             | 5–10%     | Reserved instances                           |
| Other                   | 5–10%     | Service-specific                             |

Commitment Discounts: Savings Plans vs Reserved Instances

|             | Savings Plans                      | Reserved Instances                   |
|-------------|------------------------------------|--------------------------------------|
| Flexibility | Any EC2/Fargate/Lambda, any region | Specific instance family and size    |
| Discount    | Up to 66% vs on-demand             | Up to 72% vs on-demand               |
| Term        | 1 or 3 years                       | 1 or 3 years                         |
| Payment     | All upfront, partial, or none      | All upfront, partial, or none        |
| Best for    | Varying instance types, Fargate    | Stable workloads, specific instances |

Which to choose:

  • Compute Savings Plan: Most flexible — covers any EC2, Fargate, or Lambda in any region. Choose this for most teams.
  • EC2 Instance Savings Plan: Covers specific instance family in one region (e.g., m5 in us-east-1). Higher discount (~72%) but less flexible.
  • Reserved Instances: Still worth it for RDS and ElastiCache (Savings Plans don't cover these).

How to buy Savings Plans:

# Step 1: Check your usage with AWS Cost Explorer
# Look at: Cost Explorer → Savings Plans → Recommendations
# AWS calculates the optimal commitment based on your last 7/30/60 days

# Step 2: Calculate commitment amount
# Don't commit to 100% of baseline — leave room for variability
# Typical: commit to 70–80% of steady-state usage

# Step 3: Purchase via Console or CLI
aws savingsplans create-savings-plan \
  --savings-plan-offering-id <id-from-recommendations> \
  --commitment 0.50 \
  --purchase-time 2026-06-01T00:00:00Z
# --commitment is the hourly commitment in dollars ($0.50/hour here)
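Step 2's 70–80% rule can be sketched as a small calculation. This is a hypothetical helper, not an AWS API: `hourly_spend` stands in for the hourly on-demand cost series you would export from Cost Explorer.

```python
# Sizing the hourly commitment per the 70-80% rule of thumb above.
# 'hourly_spend' is assumed to come from Cost Explorer's hourly data.
def recommended_commitment(hourly_spend, coverage=0.75):
    """Commit to a fraction of the steady-state (minimum) hourly spend."""
    baseline = min(hourly_spend)  # steady-state floor, $/hour
    return round(baseline * coverage, 2)

hourly_spend = [0.72, 0.68, 0.81, 0.70, 0.69]  # sample week, $/hour
print(recommended_commitment(hourly_spend))     # 0.51
```

Committing below the observed floor means the commitment stays fully utilized even in quiet weeks; anything above it is billed at on-demand rates, which is the safe direction to err.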

Savings example:

  • On-demand: 4 ECS Fargate tasks running 24/7 ≈ $480/month (illustrative figures)
  • Compute Savings Plan (1-year, no upfront): ~$320/month (33% savings)
  • Compute Savings Plan (1-year, all upfront): ~$280/month (42% savings)
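The percentages in the example above are straightforward to verify. A minimal check, using the illustrative monthly figures from the bullets (real numbers come from your own Cost Explorer data):

```python
# Verify the savings percentages quoted in the example above.
on_demand_monthly = 480.0  # 4 Fargate tasks, 24/7 (illustrative)
sp_no_upfront = 320.0      # 1-year Compute SP, no upfront
sp_all_upfront = 280.0     # 1-year Compute SP, all upfront

def savings_pct(on_demand, committed):
    """Percent saved versus on-demand, rounded to whole percent."""
    return round((1 - committed / on_demand) * 100)

print(savings_pct(on_demand_monthly, sp_no_upfront))   # 33
print(savings_pct(on_demand_monthly, sp_all_upfront))  # 42
```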

☁️ Is Your Cloud Costing Too Much?

Most teams overspend 30–40% on cloud — wrong instance types, no reserved pricing, bloated storage. We audit, right-size, and automate your infrastructure.

  • AWS, GCP, Azure certified engineers
  • Infrastructure as Code (Terraform, CDK)
  • Docker, Kubernetes, GitHub Actions CI/CD
  • Typical audit recovers $500–$3,000/month in savings

RDS Cost Optimization

RDS is often the second-largest line item and has multiple optimization levers:

Right-Sizing

# Find oversized RDS instances (CloudWatch metrics)
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name CPUUtilization \
  --dimensions Name=DBInstanceIdentifier,Value=production-postgres \
  --start-time 2026-04-01T00:00:00Z \
  --end-time 2026-05-01T00:00:00Z \
  --period 86400 \
  --statistics Average Maximum

# If CPU avg < 10% and max < 40%: you're probably overprovisioned
# Check also: FreeableMemory, ReadIOPS, WriteIOPS

Instance family stepping down (example):

db.r6g.4xlarge (128GB RAM, $1,400/mo) → db.r6g.2xlarge (64GB RAM, $730/mo)
Justification: FreeableMemory never drops below 30GB
Savings: $670/month ($8,040/year)
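The downsize decision above follows a simple rule of thumb. Here is a hypothetical sketch of that check, assuming you have already fetched the CloudWatch datapoints (e.g. with the get-metric-statistics call earlier); the thresholds and the `looks_overprovisioned` helper are illustrative, not an AWS API.

```python
# Rule-of-thumb right-sizing check on CloudWatch CPU datapoints.
def looks_overprovisioned(cpu_datapoints, min_freeable_gb, instance_ram_gb):
    """Flag an RDS instance as a downsize candidate."""
    avg = sum(d["Average"] for d in cpu_datapoints) / len(cpu_datapoints)
    peak = max(d["Maximum"] for d in cpu_datapoints)
    # Thresholds from the guide: avg CPU < 10%, peak < 40%,
    # and at least ~20% of RAM always free.
    return avg < 10 and peak < 40 and min_freeable_gb > instance_ram_gb * 0.2

points = [{"Average": 6.2, "Maximum": 31.0},
          {"Average": 7.8, "Maximum": 28.5}]
# db.r6g.4xlarge (128 GB RAM) with FreeableMemory never below 30 GB:
print(looks_overprovisioned(points, min_freeable_gb=30, instance_ram_gb=128))  # True
```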

Reserved Instances for RDS

# Purchase RDS Reserved Instance
aws rds purchase-reserved-db-instances-offering \
  --reserved-db-instances-offering-id <offering-id> \
  --reserved-db-instance-id production-postgres-ri \
  --db-instance-count 1
# Typical savings: 36–42% vs on-demand for 1-year no-upfront

Aurora Serverless v2 for Variable Workloads

# terraform/rds.tf
resource "aws_rds_cluster" "aurora_serverless" {
  cluster_identifier      = "api-db"
  engine                  = "aurora-postgresql"
  engine_mode             = "provisioned"
  engine_version          = "15.4"
  database_name           = "app"
  master_username         = "dbadmin"  # "admin" is a reserved word in PostgreSQL
  master_password         = var.db_password

  serverlessv2_scaling_configuration {
    min_capacity = 0.5   # 0.5 ACUs minimum (~$0.06/hr at minimum)
    max_capacity = 4     # Scale up to 4 ACUs under load
  }
}

resource "aws_rds_cluster_instance" "aurora_serverless" {
  cluster_identifier = aws_rds_cluster.aurora_serverless.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.aurora_serverless.engine
  engine_version     = aws_rds_cluster.aurora_serverless.engine_version
}
# Cost: $0.12/ACU-hour (us-east-1). At the 0.5 ACU floor: ~$44/month
# vs an always-on db.t3.medium at roughly $50–60/month plus storage
# Best for dev/staging and spiky workloads that idle most of the day
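The ACU arithmetic behind those comments is worth seeing once. A back-of-envelope sketch using the $0.12/ACU-hour rate quoted above (us-east-1 list price; check current pricing for your region):

```python
# Back-of-envelope Aurora Serverless v2 monthly cost.
ACU_HOUR = 0.12        # $/ACU-hour, us-east-1 (assumed from the text)
HOURS_PER_MONTH = 730

def monthly_cost(avg_acus):
    """Approximate monthly compute cost for a sustained ACU level."""
    return avg_acus * ACU_HOUR * HOURS_PER_MONTH

print(round(monthly_cost(0.5)))  # 44  -> idle floor at min_capacity = 0.5
print(round(monthly_cost(2.0)))  # 175 -> sustained 2-ACU load
```

If the workload sits near `max_capacity` most of the day, a provisioned instance (plus a Reserved Instance) is usually cheaper; Serverless v2 wins when the average stays well below the peak.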

S3 Cost Optimization

S3 has multiple storage classes. Most teams keep everything in Standard when they should be using tiered storage:

| Storage Class        | Cost/GB/mo          | Retrieval                | Use For                       |
|----------------------|---------------------|--------------------------|-------------------------------|
| Standard             | $0.023              | Free, immediate          | Frequently accessed data      |
| Intelligent-Tiering  | $0.023, auto-tiers  | Small monitoring fee     | Unknown access patterns       |
| Standard-IA          | $0.0125             | $0.01/GB, immediate      | Infrequent, need rapid access |
| Glacier Instant      | $0.004              | $0.03/GB, immediate      | Archives accessed quarterly   |
| Glacier Flexible     | $0.0036             | Minutes to hours         | Long-term archives            |
| Glacier Deep Archive | $0.00099            | Hours                    | 7+ year compliance archives   |
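The per-GB rates in the table make the tiering payoff easy to quantify. A quick sketch using those list prices (us-east-1; retrieval and request fees excluded):

```python
# Monthly storage cost per class, using the per-GB rates from the table.
RATES = {
    "STANDARD": 0.023,
    "STANDARD_IA": 0.0125,
    "GLACIER_IR": 0.004,
    "GLACIER_FLEXIBLE": 0.0036,
    "DEEP_ARCHIVE": 0.00099,
}

def monthly_storage_cost(gb, storage_class):
    """Storage-only monthly cost; ignores retrieval and request fees."""
    return gb * RATES[storage_class]

# 10 TB of old logs: Standard vs Deep Archive
print(round(monthly_storage_cost(10_000, "STANDARD"), 2))      # 230.0
print(round(monthly_storage_cost(10_000, "DEEP_ARCHIVE"), 2))  # 9.9
```

For cold data the spread is more than 20×, which is why the lifecycle rules below are usually the highest-leverage S3 change.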

Lifecycle policy:

{
  "Rules": [
    {
      "ID": "IntelligentTieringForUserUploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "uploads/" },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "INTELLIGENT_TIERING"
        }
      ]
    },
    {
      "ID": "ArchiveOldLogs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER_INSTANT_RETRIEVAL" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "Expiration": { "Days": 2555 }
    },
    {
      "ID": "DeleteIncompleteMultipartUploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
# Apply the lifecycle policy (2555 days ≈ 7 years; JSON allows no comments)
aws s3api put-bucket-lifecycle-configuration \
  --bucket your-bucket \
  --lifecycle-configuration file://lifecycle.json

⚙️ DevOps Done Right — Zero Downtime, Full Automation

Ship faster without breaking things. We build CI/CD pipelines, monitoring stacks, and auto-scaling infrastructure that your team can actually maintain.

  • Staging + production environments with feature flags
  • Automated security scanning in the pipeline
  • Uptime monitoring + alerting + runbook automation
  • On-call support handover docs included

ECS Fargate Spot

Fargate Spot runs tasks on spare AWS capacity at up to a 70% discount. Suitable for fault-tolerant, interruptible workloads:

# terraform/ecs.tf
resource "aws_ecs_service" "worker" {
  name            = "background-worker"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.worker.arn
  desired_count   = 5

  capacity_provider_strategy {
    capacity_provider = "FARGATE"
    weight            = 1
    base              = 1  # At least 1 on-demand task (safety)
  }

  capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 4  # 4× more Spot than on-demand (80% Spot)
  }
}

Don't use Spot for your API service — tasks get only a two-minute warning before interruption, and capacity drops while replacements launch. Use Spot for workers, batch jobs, CI builds, and data processing.
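The 1:4 weighting in the Terraform above implies a blended rate you can estimate up front. A hypothetical sketch, assuming Spot runs ~70% below on-demand (the actual discount varies over time and by region):

```python
# Effective cost per unit of compute for a mixed on-demand/Spot fleet.
def blended_rate(on_demand_rate, spot_discount, on_demand_share):
    """Weighted average rate for the given on-demand share of tasks."""
    spot_rate = on_demand_rate * (1 - spot_discount)
    return on_demand_share * on_demand_rate + (1 - on_demand_share) * spot_rate

# base=1 of 5 tasks on-demand (20%), the rest on Spot at a 70% discount
print(round(blended_rate(1.0, 0.70, 0.20), 2))  # 0.44 -> ~56% off overall
```

The `base = 1` on-demand task caps the downside: even if every Spot task is reclaimed at once, the service never drops to zero.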


Data Transfer Cost Reduction

Some AWS data transfer is free (same-AZ traffic over private IPs, S3 access within a region), but cross-AZ, cross-region, and internet egress charges add up fast:

# Common data transfer charges to eliminate:

# 1. Use VPC endpoints for S3/DynamoDB (free transfer within VPC)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxx \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-xxx
# Saves the NAT gateway data-processing charge (~$0.045/GB) on S3 traffic
# that previously went through NAT

# 2. Put CloudFront in front of S3 (cheaper egress, and S3 → CloudFront
#    origin fetches are free)
# S3 → internet: $0.09/GB
# CloudFront → internet: $0.085/GB at the first tier, dropping toward
#    $0.02/GB at high-volume tiers

# 3. Keep services in the same AZ when possible
# Cross-AZ data transfer: $0.01/GB each way
# Same-AZ: free
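Item 1 above is easy to put a number on. A sketch using the ~$0.045/GB NAT gateway data-processing rate (us-east-1 list price; the NAT gateway's hourly charge is separate and unaffected):

```python
# Monthly saving from routing S3 traffic through a gateway VPC endpoint
# instead of a NAT gateway.
NAT_PROCESSING_PER_GB = 0.045  # us-east-1 data-processing rate (assumed)

def nat_savings(gb_per_month):
    """Gateway endpoints for S3/DynamoDB have no hourly or per-GB charge,
    so the whole NAT processing fee on that traffic is avoided."""
    return gb_per_month * NAT_PROCESSING_PER_GB

print(round(nat_savings(2_000), 2))  # 90.0 -> 2 TB/month of S3 traffic
```

At a few TB/month this one-line route-table change often pays for an hour of setup many times over.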

Cost Monitoring: AWS Cost Anomaly Detection

# Set up cost anomaly detection via AWS CLI
aws ce create-anomaly-monitor \
  --anomaly-monitor '{
    "MonitorName": "Service Monitor",
    "MonitorType": "DIMENSIONAL",
    "MonitorDimension": "SERVICE"
  }'

# Subscribe to alerts (email when anomaly detected)
aws ce create-anomaly-subscription \
  --anomaly-subscription '{
    "SubscriptionName": "DailyAnomalyAlert",
    "MonitorArnList": ["arn:aws:ce::xxx:anomalymonitor/xxx"],
    "Subscribers": [
      { "Address": "engineering@yourcompany.com", "Type": "EMAIL" }
    ],
    "Threshold": 20,
    "Frequency": "DAILY"
  }'

Terraform for budget alerts:

resource "aws_budgets_budget" "monthly" {
  name         = "Monthly AWS Budget"
  budget_type  = "COST"
  limit_amount = "5000"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["engineering@yourcompany.com"]
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = ["engineering@yourcompany.com", "cto@yourcompany.com"]
  }
}

Working With Viprasol

We conduct AWS cost optimization audits — identifying savings opportunities, purchasing commitment discounts, implementing S3 lifecycle policies, right-sizing databases, and setting up cost monitoring. Most audits find 30–50% in immediate savings.

Talk to our cloud team about AWS cost optimization.



About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

Tags: MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
