
AWS Lambda Container Images in 2026: Custom Runtimes, Large Dependencies, and ECR Deployment

Deploy AWS Lambda with container images: custom runtimes, large ML dependencies, multi-stage Dockerfile, ECR deployment, Lambda image caching, and Terraform configuration.

Viprasol Tech Team
February 2, 2027
13 min read

Lambda's standard deployment packages are limited to 50MB zipped (250MB unzipped). That's fine for most Node.js or Python functions, but falls apart the moment you need large ML libraries (a CPU-only PyTorch install alone runs to roughly 800MB), ffmpeg binaries, or a custom runtime. Container images solve this: Lambda supports images up to 10GB.

Container images also enable any runtime (Rust, Go, Java with custom JVM, Ruby, PHP, custom Python versions) and give you a reproducible build environment. This post covers multi-stage Dockerfiles for Lambda, custom runtimes using the Lambda Runtime Interface Client, ECR deployment, image caching, and Terraform.


When to Use Container Images vs ZIP

| Use case | ZIP package | Container image |
| --- | --- | --- |
| Node.js/Python standard functions | ✅ Preferred (faster deploy) | ❌ Overkill |
| Large ML libraries (PyTorch, TensorFlow) | ❌ Too large | ✅ Required |
| ffmpeg / ImageMagick / headless Chrome | ❌ Size limits | ✅ Required |
| Custom runtime (Rust, PHP, custom Go) | ✅ Custom runtime layer | ✅ Easier |
| Reproducible builds | ❌ Depends on CI | ✅ Dockerfile |
| Cold start performance | Faster (no container init) | Slightly slower (first cold start) |
| Deployment speed | Fast (~seconds) | Slower (image push to ECR) |
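The size cutoffs in the table are hard limits, not guidelines. A small helper (hypothetical, not part of any AWS SDK) that encodes the documented 50MB/250MB/10GB thresholds makes the decision mechanical:

```python
# Hypothetical helper encoding Lambda's documented package size limits:
# 50MB zipped / 250MB unzipped for ZIP packages, 10GB for container images.

ZIP_MAX_ZIPPED = 50 * 1024**2
ZIP_MAX_UNZIPPED = 250 * 1024**2
IMAGE_MAX = 10 * 1024**3

def deployment_type(zipped_bytes: int, unzipped_bytes: int) -> str:
    """Return the simplest packaging option that fits the artifact."""
    if zipped_bytes <= ZIP_MAX_ZIPPED and unzipped_bytes <= ZIP_MAX_UNZIPPED:
        return "zip"
    if unzipped_bytes <= IMAGE_MAX:
        return "image"
    raise ValueError("artifact exceeds Lambda's 10GB container image limit")

# A typical Node.js bundle fits in a ZIP; a PyTorch install does not:
print(deployment_type(8 * 1024**2, 30 * 1024**2))     # zip
print(deployment_type(300 * 1024**2, 800 * 1024**2))  # image
```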

Multi-Stage Dockerfile (Node.js)

# Dockerfile — Node.js Lambda function with large dependencies
# Stage 1: Build
# Note: Lambda base images are glibc (AL2023). If you compile native addons,
# build on a glibc image such as node:22-slim instead of Alpine/musl
FROM node:22-alpine AS builder

WORKDIR /app

# Copy package files first (cache layer)
COPY package*.json ./
RUN npm ci  # full install: the TypeScript compiler is a dev dependency

# Copy source
COPY tsconfig.json .
COPY src ./src

# Build TypeScript
RUN npx tsc --outDir dist

# Prune dev dependencies
RUN npm prune --omit=dev

# Stage 2: Lambda runtime
FROM public.ecr.aws/lambda/nodejs:22 AS runtime

# Copy built artifacts
WORKDIR ${LAMBDA_TASK_ROOT}

COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json .

# Lambda handler — must match function_name.handler format
CMD ["dist/handler.handler"]
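AWS base images bundle the Lambda Runtime Interface Emulator, so the image can be exercised locally before pushing: start it with `docker run -p 9000:8080 <image>` and POST a test event to the emulator's invoke endpoint. A small Python client for that endpoint (the `localhost:9000` mapping is an assumption from the `docker run` command above):

```python
# Invoke a locally running Lambda container through the Runtime Interface
# Emulator, which AWS base images expose on port 8080 inside the container.
import json
import urllib.request

RIE_URL = "http://localhost:9000/2015-03-31/functions/function/invocations"

def invoke_local(event: dict, url: str = RIE_URL) -> dict:
    """POST an event to the emulator and return the decoded response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

With the container running, `invoke_local({"order_id": "test-123", "amount": 100})` returns exactly what the deployed function would, without a round trip to AWS.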


Multi-Stage Dockerfile (Python with ML Libraries)

# Dockerfile — Python Lambda with PyTorch (large dependencies)
FROM python:3.12-slim AS builder

WORKDIR /build

# Install CPU-only PyTorch (smaller than CUDA version)
COPY requirements.txt .
RUN pip install --no-cache-dir \
    --extra-index-url https://download.pytorch.org/whl/cpu \
    -t /packages \
    -r requirements.txt

# Stage 2: Lambda runtime
FROM public.ecr.aws/lambda/python:3.12

# Copy installed packages
COPY --from=builder /packages ${LAMBDA_TASK_ROOT}

# Copy function code
COPY src/ ${LAMBDA_TASK_ROOT}/

CMD ["handler.lambda_handler"]

# requirements.txt
torch==2.2.0+cpu
torchvision==0.17.0+cpu
transformers==4.38.0
numpy==1.26.4
pillow==10.2.0
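The `CMD ["handler.lambda_handler"]` above expects a module `handler.py` with a `lambda_handler` function in the task root. A sketch of that shape follows; the model-loading line is a placeholder (a real function would deserialize torch weights fetched from S3), but the structural point is real: heavy initialization belongs at module scope, where it runs once per container during the cold start rather than on every invocation.

```python
# src/handler.py — illustrative sketch of the ML handler shape.
import json

def _load_model():
    # Placeholder: a real implementation would e.g. torch.load() weights
    # downloaded from S3 during the cold start.
    return lambda features: {"label": "positive", "score": 0.99}

MODEL = _load_model()  # module scope: executed once per container

def lambda_handler(event, context):
    prediction = MODEL(event.get("input"))
    return {"statusCode": 200, "body": json.dumps(prediction)}
```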

Custom Runtime (Rust)

For maximum performance, Rust on Lambda via container image:

# Dockerfile — Rust Lambda with custom runtime
# Note: Debian's glibc may be newer than Amazon Linux 2023's; if the binary
# fails to start, build in an Amazon Linux container or target musl instead
FROM rust:1.76-slim AS builder

WORKDIR /app

# Cache dependencies: compile once with a stub main so this layer is
# reused until Cargo.toml/Cargo.lock change
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo 'fn main() {}' > src/main.rs
RUN cargo build --release
RUN rm -f target/release/bootstrap target/release/deps/bootstrap*

# Build actual function (touch forces cargo to recompile the stubbed crate)
COPY src ./src
RUN touch src/main.rs && cargo build --release

# Stage 2: Minimal runtime (no Rust toolchain needed)
FROM public.ecr.aws/lambda/provided:al2023

COPY --from=builder /app/target/release/bootstrap ${LAMBDA_RUNTIME_DIR}/bootstrap

# The handler string is ignored by the Rust runtime client, but set one anyway
CMD ["handler"]

// src/main.rs — Rust Lambda handler
use lambda_runtime::{run, service_fn, tracing, Error, LambdaEvent};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct Request {
    order_id: String,
    amount: i64,
}

#[derive(Serialize)]
struct Response {
    order_id: String,
    processed: bool,
    timestamp: String,
}

async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    let (request, _context) = event.into_parts();
    
    // Ultra-fast processing with Rust
    Ok(Response {
        order_id: request.order_id,
        processed: true,
        timestamp: chrono::Utc::now().to_rfc3339(),
    })
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing::init_default_subscriber();
    run(service_fn(handler)).await
}
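Stage 2 of the Dockerfile copies `target/release/bootstrap`, so the crate's binary must actually be named `bootstrap`. A Cargo.toml sketch that makes this explicit (the crate name and dependency versions are illustrative, chosen to match the imports in `src/main.rs`):

```toml
[package]
name = "lambda-function"
version = "0.1.0"
edition = "2021"

# Name the binary "bootstrap" so the Dockerfile's COPY path matches
[[bin]]
name = "bootstrap"
path = "src/main.rs"

[dependencies]
lambda_runtime = { version = "0.11", features = ["tracing"] }
serde = { version = "1", features = ["derive"] }
tokio = { version = "1", features = ["macros"] }
chrono = "0.4"
```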


ECR Repository and Image Build

# terraform/ecr.tf

resource "aws_ecr_repository" "lambda_function" {
  name                 = "${var.name}-${var.environment}-lambda"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true  # Automatic vulnerability scanning
  }

  tags = var.common_tags
}

# Lifecycle policies are a separate resource, not an argument on the repository
resource "aws_ecr_lifecycle_policy" "lambda_function" {
  repository = aws_ecr_repository.lambda_function.name

  # Keep last 10 tagged images, clean up untagged after 1 day
  policy = jsonencode({
    rules = [
      {
        rulePriority = 1
        description  = "Remove untagged images after 1 day"
        selection = {
          tagStatus   = "untagged"
          countType   = "sinceImagePushed"
          countUnit   = "days"
          countNumber = 1
        }
        action = { type = "expire" }
      },
      {
        rulePriority = 2
        description  = "Keep last 10 tagged images"
        selection = {
          tagStatus     = "tagged"
          tagPrefixList = ["v"]
          countType     = "imageCountMoreThan"
          countNumber   = 10
        }
        action = { type = "expire" }
      }
    ]
  })
}

# Lambda function using container image
resource "aws_lambda_function" "ml_processor" {
  function_name = "${var.name}-${var.environment}-ml-processor"
  role          = aws_iam_role.lambda.arn

  # Container image — no runtime/handler needed
  package_type = "Image"
  image_uri    = "${aws_ecr_repository.lambda_function.repository_url}:${var.image_tag}"

  architectures = ["arm64"]   # Graviton for cost savings
  memory_size   = 3008        # ML workloads need more RAM
  timeout       = 60          # Allow time for model inference
  
  # Provisioned concurrency for ML (avoids cold start on every request)
  # Optional: only if latency is critical

  environment {
    variables = {
      MODEL_BUCKET = aws_s3_bucket.models.bucket
      LOG_LEVEL    = var.environment == "production" ? "WARN" : "DEBUG"
    }
  }

  image_config {
    # Override CMD from Dockerfile if needed
    # command = ["dist/other-handler.handler"]
  }

  vpc_config {
    subnet_ids         = var.private_subnet_ids
    security_group_ids = [aws_security_group.lambda.id]
  }

  tags = var.common_tags

  lifecycle {
    ignore_changes = [image_uri]  # Managed by CI/CD, not Terraform
  }
}
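The provisioned concurrency comment above can be made concrete. Provisioned concurrency cannot target `$LATEST`, so the function needs `publish = true` plus an alias to point at. A sketch (resource names and the concurrency count are illustrative):

```hcl
# Optional: provisioned concurrency for latency-critical ML inference.
# Requires publish = true on the function so a version exists to alias.
resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = aws_lambda_function.ml_processor.function_name
  function_version = aws_lambda_function.ml_processor.version
}

resource "aws_lambda_provisioned_concurrency_config" "ml_processor" {
  function_name                     = aws_lambda_function.ml_processor.function_name
  qualifier                         = aws_lambda_alias.live.name
  provisioned_concurrent_executions = 2
}
```

Note that provisioned concurrency is billed per GB-hour whether or not requests arrive, so size it to your steady-state traffic rather than your peak.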

CI/CD Pipeline for Container Images

# .github/workflows/deploy-lambda.yml
name: Deploy Lambda Container

on:
  push:
    branches: [main]
    paths:
      - 'functions/ml-processor/**'
      - '.github/workflows/deploy-lambda.yml'

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: myapp-production-lambda
  FUNCTION_NAME: myapp-production-ml-processor

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # OIDC for AWS auth
      contents: read

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-deploy
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Set up QEMU (ubuntu-latest runners are amd64; the image is arm64)
        uses: docker/setup-qemu-action@v3

      - name: Build, tag, and push image
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build \
            --platform linux/arm64 \
            --cache-from $ECR_REGISTRY/$ECR_REPOSITORY:latest \
            --build-arg BUILDKIT_INLINE_CACHE=1 \
            -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG \
            -t $ECR_REGISTRY/$ECR_REPOSITORY:latest \
            functions/ml-processor/

          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest

          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Update Lambda function
        run: |
          aws lambda update-function-code \
            --function-name $FUNCTION_NAME \
            --image-uri ${{ steps.build-image.outputs.image }} \
            --architectures arm64

          # Wait for update to complete
          aws lambda wait function-updated \
            --function-name $FUNCTION_NAME

      - name: Smoke test
        run: |
          RESULT=$(aws lambda invoke \
            --function-name $FUNCTION_NAME \
            --payload '{"order_id":"test-123","amount":100}' \
            --cli-binary-format raw-in-base64-out \
            /tmp/response.json \
            --query 'StatusCode' \
            --output text)
          
          if [ "$RESULT" != "200" ]; then
            echo "Lambda invocation failed with status $RESULT"
            cat /tmp/response.json
            exit 1
          fi
          echo "Smoke test passed"

Image Caching and Build Optimization

# syntax=docker/dockerfile:1.4
# Optimized Dockerfile with BuildKit caching

FROM node:22-alpine AS deps

WORKDIR /app

# Use BuildKit cache mount — node_modules cached between builds
RUN --mount=type=cache,target=/root/.npm \
    --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    npm ci --omit=dev

FROM public.ecr.aws/lambda/nodejs:22

WORKDIR ${LAMBDA_TASK_ROOT}

COPY --from=deps /app/node_modules ./node_modules

# Source files (most frequently changed — last layer)
COPY dist/ ./dist/

CMD ["dist/handler.handler"]

Cold Start Comparison

| Setup | Typical cold start |
| --- | --- |
| ZIP, Node.js 22, arm64, 1024MB | 200–400ms |
| Container, Node.js 22, arm64, 1024MB (first ever) | 500–1,500ms |
| Container, Node.js 22, arm64, 1024MB (subsequent) | 200–600ms (image cached) |
| Container, Python + PyTorch, 3008MB | 2,000–5,000ms |
| Container, Python + PyTorch + provisioned concurrency | <50ms |
| Container, Rust, arm64, 512MB | 50–200ms |

Lambda caches container image layers after the first cold start in each availability zone; subsequent cold starts in that AZ pull from the cache and are much faster.


Cost Comparison

| Approach | Cost per 1M invocations (1s, 1GB) | Notes |
| --- | --- | --- |
| ZIP, x86_64 | $16.70 | Baseline |
| ZIP, arm64 | $13.35 | 20% cheaper |
| Container, arm64 | $13.35 | Same as ZIP |
| ECR storage | $0.10/GB/month | ~$0.10–$1.00 for typical images |

Container images add no Lambda compute charge over ZIP packages; the only extra cost is ECR storage.
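The per-invocation numbers above can be reproduced, to within a few cents, from Lambda's published us-east-1 GB-second rates (prices as of writing; check the current pricing page). The duration charge dominates, and the $0.20 per million requests fee is identical for both architectures, so it is omitted here:

```python
# Reproduce the cost table from published Lambda us-east-1 duration rates.
PRICE_PER_GB_SECOND = {"x86_64": 0.0000166667, "arm64": 0.0000133334}

def duration_cost(invocations: int, seconds: float, gb: float, arch: str) -> float:
    """Duration charge only; the request fee ($0.20/1M) is arch-independent."""
    return invocations * seconds * gb * PRICE_PER_GB_SECOND[arch]

x86 = duration_cost(1_000_000, 1.0, 1.0, "x86_64")  # ~$16.67
arm = duration_cost(1_000_000, 1.0, 1.0, "arm64")   # ~$13.33
print(f"x86_64: ${x86:.2f}, arm64: ${arm:.2f}, savings: {1 - arm / x86:.0%}")
```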


Working With Viprasol

We build and deploy Lambda container functions for ML inference, custom runtimes, and large dependency workloads. Our cloud team has shipped Lambda containers running PyTorch models, ffmpeg video processing, and custom Rust runtimes in production.

What we deliver:

  • Multi-stage Dockerfile optimized for Lambda cold start
  • ECR repository with lifecycle policies and vulnerability scanning
  • GitHub Actions / CodePipeline CI/CD for image build and deploy
  • Provisioned concurrency for latency-sensitive ML workloads
  • Cost analysis: Lambda containers vs ECS Fargate for your workload

See our cloud infrastructure services or contact us to deploy your Lambda container functions.

About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
