
AWS Lambda Optimization: Cold Starts, SnapStart, Memory Tuning, and Powertools

Optimize AWS Lambda functions for production: eliminate cold starts with SnapStart and provisioned concurrency, right-size memory with Lambda Power Tuning, use Lambda Layers for shared dependencies, and instrument with AWS Lambda Powertools for TypeScript.

Viprasol Tech Team
October 15, 2026
13 min read

Lambda cold starts happen when a new execution environment is initialized — runtime loaded, code imported, connections established. For TypeScript functions with heavy dependencies (Prisma, AWS SDK, etc.), this can add 1–5 seconds to the first request after a period of inactivity.

The optimization path: minimize initialization work (what you load at module level), right-size memory (more memory = more CPU = faster init), use SnapStart where the runtime supports it (Java, and on newer runtimes Python and .NET; it is not available for Node.js), and use provisioned concurrency only when you've exhausted other options.


Cold Start Anatomy

Cold start = Init duration + Handler duration

Init duration: everything that runs before handler()
  - Lambda runtime initialization (~50–200ms, you can't control this)
  - Function code import (~50ms to 3s, you control this)
  - Top-level code execution (DB connections, SDK clients, you control this)

Handler duration: your actual handler logic
  - Database queries
  - External API calls
  - Business logic
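A rough local way to see the import component for a specific dependency is to time a dynamic import. This is an approximation only (your laptop is not a Lambda sandbox, and authoritative numbers come from Lambda's own REPORT metrics); `timeImport` is a hypothetical helper, not a Lambda API:

```typescript
// Hypothetical helper: measure how long a module takes to load.
// The first import pays the full parse/evaluate cost; repeats hit the module cache.
async function timeImport(specifier: string): Promise<number> {
  const start = performance.now();
  await import(specifier);
  return performance.now() - start;
}

async function main() {
  // Compare a cheap built-in against your heavy dependencies
  const ms = await timeImport("node:zlib");
  console.log(`node:zlib loaded in ${ms.toFixed(1)}ms`);
}
main();
```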
// What runs during init (module-level) vs handler

// COLD START CODE — runs once per execution environment
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient } from "@aws-sdk/lib-dynamodb";
import type { APIGatewayProxyEvent } from "aws-lambda";

// Reuse clients across invocations — DO initialize at module level
const dynamo = DynamoDBDocumentClient.from(
  new DynamoDBClient({ region: process.env.AWS_REGION })
);

// HANDLER CODE — runs on every invocation
export async function handler(event: APIGatewayProxyEvent) {
  // Don't re-create clients here — use the module-level instance
  const result = await dynamo.send(/* ... */);
  return { statusCode: 200, body: JSON.stringify(result) };
}
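In production, you can quantify cold starts with a CloudWatch Logs Insights query over the function's log group. Lambda's REPORT lines populate fields such as @initDuration, which is only present on cold-start invocations (a sketch; adjust field list to taste):

```
filter @type = "REPORT"
| stats count(*) as invocations,
        count(@initDuration) as coldStarts,
        avg(@initDuration) as avgInitMs,
        pct(@duration, 95) as p95DurationMs
```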

Minimizing Cold Start Time

// 1. Import only what you need — avoid barrel imports

// BAD: imports the entire AWS SDK
import * as AWS from "aws-sdk";
const s3 = new AWS.S3();

// GOOD: v3 modular SDK — only imports S3
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({ region: process.env.AWS_REGION });

// 2. Lazy-load heavy dependencies not needed on every invocation
let puppeteerMod: typeof import("puppeteer") | null = null;

async function generatePDF(html: string): Promise<Buffer> {
  // Loaded on first use only; later invocations reuse the cached module
  if (!puppeteerMod) {
    puppeteerMod = await import("puppeteer");
  }
  const browser = await puppeteerMod.default.launch();
  try {
    const page = await browser.newPage();
    await page.setContent(html);
    return Buffer.from(await page.pdf());
  } finally {
    await browser.close();
  }
}

// 3. Defer non-critical initialization (warm caches, feature flags, config
// fetches) until the first invocation that needs it, instead of module level

// 4. esbuild bundling — bundle everything into a single file
// No require() overhead at runtime
// package.json build scripts using esbuild
// Note: Node.js layer zips must place dependencies under nodejs/node_modules
{
  "scripts": {
    "build": "esbuild src/handler.ts --bundle --platform=node --target=node22 --outfile=dist/handler.js --minify --external:@aws-sdk/*",
    "build:layer": "npm ci --omit=dev && mkdir -p layer/nodejs && cp -r node_modules layer/nodejs/ && cd layer && zip -r ../layer.zip nodejs"
  }
}


Lambda Layers for Shared Dependencies

# template.yaml (SAM) — Lambda Layer for shared dependencies
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31

Resources:
  # Layer: shared node_modules across functions
  DependenciesLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: production-dependencies
      ContentUri: layers/dependencies/
      CompatibleRuntimes:
        - nodejs22.x
      RetentionPolicy: Retain  # Keep old versions (functions may reference them)
    Metadata:
      BuildMethod: nodejs22.x
      BuildProperties:
        External:
          - "@aws-sdk/*"  # Already in Lambda runtime — don't bundle

  # Function uses the layer
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: dist/handler.handler
      Runtime: nodejs22.x
      MemorySize: 512
      Timeout: 30
      Layers:
        - !Ref DependenciesLayer
      Environment:
        Variables:
          NODE_PATH: "/opt/nodejs/node_modules"  # Layer path
      Events:
        Api:
          Type: HttpApi
          Properties:
            Path: /{proxy+}
            Method: ANY

Memory Right-Sizing with Lambda Power Tuning

Lambda memory controls both RAM and CPU allocation. More memory = faster execution — often reducing cost even though price-per-ms is higher:

# Deploy Lambda Power Tuning (open source tool by AWS)
# https://github.com/alexcasalboni/aws-lambda-power-tuning

# Run tuning across memory sizes: 128MB to 3008MB
aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:us-east-1:123456789:stateMachine:powerTuningMachine \
  --input '{
    "lambdaARN": "arn:aws:lambda:us-east-1:123456789:function:my-function",
    "powerValues": [128, 256, 512, 1024, 1769, 3008],
    "num": 20,
    "payload": {"path": "/api/health"},
    "parallelInvocation": true,
    "strategy": "cost"
  }'

Typical tuning results for a Node.js API handler (illustrative numbers):

Memory  | Avg Duration | Cost/invocation | Recommendation
128MB   | 850ms        | $0.0000014      | ❌ Too slow
256MB   | 420ms        | $0.0000014      | Same cost, 2× faster
512MB   | 210ms        | $0.0000014      | Same cost, 4× faster  ← Sweet spot
1024MB  | 105ms        | $0.0000014      | Same cost, 8× faster  ← If latency matters
1769MB  | 62ms         | $0.0000014      | 1 vCPU, good for CPU-bound work
3008MB  | 35ms         | $0.0000024      | More expensive

Rule: CPU-bound functions benefit from more memory.
      I/O-bound functions (waiting on DB) see diminishing returns above ~512MB.
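The sweet-spot pattern falls straight out of the pricing model: compute cost is GB-seconds times a fixed rate, so doubling memory while halving duration costs the same. A quick sketch (`costPerInvocation` is a hypothetical helper; the rate assumes us-east-1 x86 on-demand compute and excludes the per-request charge):

```typescript
// Lambda compute cost = GB-seconds × price per GB-second
const PRICE_PER_GB_SECOND = 0.0000166667; // us-east-1, x86 (assumption)

function costPerInvocation(memoryMb: number, durationMs: number): number {
  const gbSeconds = (memoryMb / 1024) * (durationMs / 1000);
  return gbSeconds * PRICE_PER_GB_SECOND;
}

// Doubling memory while halving duration leaves compute cost unchanged:
console.log(costPerInvocation(512, 210).toFixed(9));  // same GB-seconds...
console.log(costPerInvocation(1024, 105).toFixed(9)); // ...same cost
```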


Provisioned Concurrency

# For latency-sensitive endpoints that can't tolerate cold starts
# Cost: ~$0.0000041667 per GB-second provisioned (us-east-1), billed
# continuously whether or not the warm capacity is invoked

ApiFunction:
  Type: AWS::Serverless::Function
  Properties:
    MemorySize: 1024
    AutoPublishAlias: live  # Required: provisioned concurrency attaches to the alias
    # Provisioned concurrency — keeps N environments warm
    ProvisionedConcurrencyConfig:
      ProvisionedConcurrentExecutions: 10  # Keep 10 warm

# Scale provisioned concurrency with Application Auto Scaling
ConcurrencyScalingTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MinCapacity: 2
    MaxCapacity: 50
    ResourceId: !Sub "function:${ApiFunction}:live"
    ScalableDimension: lambda:function:ProvisionedConcurrency
    ServiceNamespace: lambda

ConcurrencyScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: lambda-pc-target-tracking
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref ConcurrencyScalingTarget
    TargetTrackingScalingPolicyConfiguration:
      TargetValue: 0.7  # Scale up when 70% of provisioned capacity is in use
      PredefinedMetricSpecification:
        PredefinedMetricType: LambdaProvisionedConcurrencyUtilization

AWS Lambda Powertools for TypeScript

// src/handler.ts — production-grade Lambda with Powertools

import { Logger } from "@aws-lambda-powertools/logger";
import { Tracer } from "@aws-lambda-powertools/tracer";
import { Metrics, MetricUnit } from "@aws-lambda-powertools/metrics";
import { injectLambdaContext } from "@aws-lambda-powertools/logger/middleware";
import { captureLambdaHandler } from "@aws-lambda-powertools/tracer/middleware";
import { logMetrics } from "@aws-lambda-powertools/metrics/middleware";
import middy from "@middy/core";
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

// Initialize at module level — shared across invocations
const logger = new Logger({
  serviceName: "api-service",
  logLevel: process.env.LOG_LEVEL ?? "INFO",
});

const tracer = new Tracer({ serviceName: "api-service" });

const metrics = new Metrics({
  namespace: "ViprasolPlatform",
  serviceName: "api-service",
});

// Business logic — pure, testable
async function processRequest(
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> {
  const path = event.rawPath;
  const method = event.requestContext.http.method;

  logger.info("Processing request", { path, method });

  // Add custom segment for X-Ray tracing
  const segment = tracer.getSegment();
  const subsegment = segment?.addNewSubsegment("database-query");

  try {
    const result = await queryDatabase(event); // your data-access helper (not shown)

    subsegment?.close();

    // Emit business metric
    metrics.addMetric("RequestProcessed", MetricUnit.Count, 1);
    metrics.addMetadata("path", path);

    return {
      statusCode: 200,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(result),
    };
  } catch (error) {
    subsegment?.addError(error as Error);
    subsegment?.close();

    logger.error("Request processing failed", { error, path });
    metrics.addMetric("RequestFailed", MetricUnit.Count, 1);

    return {
      statusCode: 500,
      body: JSON.stringify({ error: "Internal server error" }),
    };
  }
}

// Handler with Powertools middleware (Middy)
export const handler = middy(processRequest)
  .use(injectLambdaContext(logger, { clearState: true }))
  .use(captureLambdaHandler(tracer))
  .use(logMetrics(metrics, { captureColdStartMetric: true }));

Lambda Cost Estimates

Configuration          | Invocations/month | Avg duration | Monthly cost
128MB                  | 1M                | 100ms        | ~$0.21
512MB                  | 1M                | 100ms        | ~$0.83
1024MB                 | 1M                | 50ms         | ~$0.83
512MB                  | 10M               | 100ms        | ~$8.34
1024MB                 | 10M               | 50ms         | ~$8.34
Provisioned (10 units) | N/A               | N/A          | ~$46/month base

Free tier: 1M requests + 400,000 GB-seconds per month
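At typical API durations, the request cap usually binds before the compute cap. A quick check under assumed 512MB / 100ms invocations (illustrative, not an AWS API):

```typescript
// Free tier: 1M requests + 400,000 GB-seconds per month
const FREE_REQUESTS = 1_000_000;
const FREE_GB_SECONDS = 400_000;

// Assumption for illustration: 512MB memory, 100ms average duration
const gbSecondsPerInvocation = (512 / 1024) * (100 / 1000); // 0.05 GB-s

const computeCovered = FREE_GB_SECONDS / gbSecondsPerInvocation; // ~8,000,000
const freeInvocations = Math.min(computeCovered, FREE_REQUESTS);

console.log(freeInvocations); // 1000000 (the request cap binds first)
```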


Working With Viprasol

Lambda performance tuning requires measuring first, then optimizing. We profile cold start contributors, right-size memory with Lambda Power Tuning, implement Powertools for structured observability, and design connection pooling architectures that work with serverless concurrency models.


About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
