AWS Lambda Layers: Shared Dependencies, Custom Runtimes, and Terraform IaC

Master AWS Lambda Layers: package shared Node.js dependencies as layers, build custom runtimes with bootstrap scripts, version and manage layers with Terraform, and reduce cold start times.

Viprasol Tech Team
November 19, 2026
13 min read

Lambda Layers solve the problem of shared code in serverless. Without layers, every function that uses the AWS SDK, Prisma client, or your internal utility library must bundle its own copy — bloating deployment packages, slowing cold starts, and making library updates a multi-function operation. A layer is a ZIP archive that Lambda mounts at /opt before your function code runs. All functions that reference the layer share a single copy.

This post covers the practical use cases: packaging Node.js dependencies as layers, creating a custom Node.js runtime layer for non-supported versions, managing layers with Terraform, and measuring the cold start impact.

When to Use Layers

Use layers for:

  • Large shared dependencies (Prisma client, AWS SDK v3, sharp for image processing)
  • Internal shared libraries used across 5+ functions
  • Custom runtimes (non-AWS-supported Node.js versions, Bun, Deno)
  • Binary executables (ffmpeg, chromium for Puppeteer)

Don't use layers for:

  • Function-specific code
  • Small utilities (<1MB) — just bundle them
  • Secrets — use Secrets Manager or Parameter Store instead

1. Node.js Dependency Layer

Packaging the Layer

Lambda expects Node.js dependencies in nodejs/node_modules/ within the ZIP.
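Unpacked, the archive the build script below produces should look like this (a sketch; Lambda extracts it under /opt, so the modules land at /opt/nodejs/node_modules, which is on the Node.js runtime's module resolution path):

```text
nodejs-deps.zip
└── nodejs/
    ├── package.json
    ├── package-lock.json
    └── node_modules/
        ├── @aws-sdk/
        └── ...
```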

#!/usr/bin/env bash
# scripts/build-deps-layer.sh

set -euo pipefail

LAYER_DIR="layers/nodejs-deps"
OUTPUT="layers/nodejs-deps.zip"

echo "Building Node.js dependency layer..."

# Clean previous build
rm -rf "$LAYER_DIR" "$OUTPUT"
mkdir -p "$LAYER_DIR/nodejs"

# Copy only package files
cp package.json package-lock.json "$LAYER_DIR/nodejs/"

# Install production dependencies only (no devDependencies)
cd "$LAYER_DIR/nodejs"
npm ci --omit=dev --ignore-scripts

# Prune unnecessary files to reduce layer size
npx modclean --run --patterns="default:safe" --no-progress

# Back to the repo root (we are three directories deep)
cd ../../..

# Create ZIP (Lambda requires specific directory structure)
cd "$LAYER_DIR"
zip -r "../../$OUTPUT" . --quiet

cd ../..
SIZE=$(du -sh "$OUTPUT" | cut -f1)
echo "✅ Layer built: $OUTPUT ($SIZE)"

Terraform: Layer Resource

# infrastructure/lambda/layers.tf

# Build the layer ZIP before Terraform (use null_resource or external)
resource "null_resource" "build_deps_layer" {
  triggers = {
    package_hash = filemd5("${path.root}/package-lock.json")
  }

  provisioner "local-exec" {
    command = "bash ${path.root}/scripts/build-deps-layer.sh"
  }
}

# Node.js dependencies layer
resource "aws_lambda_layer_version" "nodejs_deps" {
  layer_name   = "${var.project}-nodejs-deps"
  description  = "Shared Node.js production dependencies"

  filename         = "${path.root}/layers/nodejs-deps.zip"
  source_code_hash = filebase64sha256("${path.root}/layers/nodejs-deps.zip")

  compatible_runtimes      = ["nodejs22.x"]
  compatible_architectures = ["arm64"]  # Graviton2 = cheaper + faster

  depends_on = [null_resource.build_deps_layer]

  lifecycle {
    create_before_destroy = true
  }
}

# Internal utilities layer
resource "aws_lambda_layer_version" "internal_utils" {
  layer_name   = "${var.project}-internal-utils"
  description  = "Internal shared utilities and helpers"

  filename         = "${path.root}/layers/internal-utils.zip"
  source_code_hash = filebase64sha256("${path.root}/layers/internal-utils.zip")

  compatible_runtimes = ["nodejs22.x"]
  compatible_architectures = ["arm64"]

  lifecycle {
    create_before_destroy = true
  }
}

# Lambda function that uses the layers
resource "aws_lambda_function" "api_handler" {
  function_name = "${var.project}-api-handler"
  role          = aws_iam_role.lambda.arn
  runtime       = "nodejs22.x"
  handler       = "dist/handler.handler"
  architectures = ["arm64"]

  filename         = "${path.root}/dist/api-handler.zip"
  source_code_hash = filebase64sha256("${path.root}/dist/api-handler.zip")

  # Attach layers — order matters: later layers override earlier ones
  layers = [
    aws_lambda_layer_version.nodejs_deps.arn,
    aws_lambda_layer_version.internal_utils.arn,
  ]

  memory_size = 512
  timeout     = 30

  environment {
    variables = {
      NODE_ENV    = var.environment
      LOG_LEVEL   = var.environment == "prod" ? "warn" : "debug"
      # NOT secrets — use Secrets Manager for those
    }
  }
}
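Layers are invisible to application code: Lambda extracts them under /opt, and /opt/nodejs/node_modules is on Node's module resolution path, so imports work unchanged. A small diagnostic helper (hypothetical, not part of any AWS API) can confirm at runtime whether a module actually resolved from a layer rather than the function bundle:

```typescript
import { createRequire } from 'node:module';
import { join } from 'node:path';

// Anchor resolution at the working directory so this sketch also runs
// outside Lambda; in a real handler you would anchor at the handler file.
const req = createRequire(join(process.cwd(), 'anchor.js'));

// Hypothetical diagnostic: true if `specifier` resolved from a layer
// (anything Lambda mounted under /opt), false if it came from the
// function's own bundle or is a Node built-in.
export function resolvedFromLayer(specifier: string): boolean {
  try {
    return req.resolve(specifier).startsWith('/opt/');
  } catch {
    return false; // not resolvable at all
  }
}
```

Logging this once at cold start is a cheap way to catch the classic failure mode where a dependency is accidentally bundled in both the layer and the function ZIP.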

☁️ Is Your Cloud Costing Too Much?

Most teams overspend 30–40% on cloud — wrong instance types, no reserved pricing, bloated storage. We audit, right-size, and automate your infrastructure.

  • AWS, GCP, Azure certified engineers
  • Infrastructure as Code (Terraform, CDK)
  • Docker, Kubernetes, GitHub Actions CI/CD
  • Typical audit recovers $500–$3,000/month in savings

2. Prisma Client Layer

Prisma's generated client bundles platform-specific query engine binaries, so the client must be generated for the Lambda target platform and then packaged as a layer.

#!/usr/bin/env bash
# scripts/build-prisma-layer.sh

set -euo pipefail

LAYER_DIR="layers/prisma"
OUTPUT="layers/prisma-layer.zip"

rm -rf "$LAYER_DIR" "$OUTPUT"
mkdir -p "$LAYER_DIR/nodejs"

# Copy schema
cp -r prisma "$LAYER_DIR/nodejs/"

# Install Prisma in the layer directory
cd "$LAYER_DIR/nodejs"
cat > package.json << 'EOF'
{
  "dependencies": {
    "@prisma/client": "^6.0.0",
    "prisma": "^6.0.0"
  }
}
EOF

npm install --omit=dev

# Generate for the Lambda target. schema.prisma must list a matching
# binaryTargets entry (e.g. "linux-arm64-openssl-3.0.x" for arm64)
npx prisma generate --schema=prisma/schema.prisma

# Remove prisma CLI binaries (only needed at build time)
rm -rf node_modules/prisma/build
rm -rf node_modules/.bin/prisma

cd ../../..

cd "$LAYER_DIR"
zip -r "../../$OUTPUT" . --quiet
cd ../..

echo "✅ Prisma layer built: $OUTPUT ($(du -sh $OUTPUT | cut -f1))"

Terraform: Prisma Layer

# Prisma layer
resource "aws_lambda_layer_version" "prisma" {
  layer_name   = "${var.project}-prisma-client"
  description  = "Prisma client with query engine for Linux arm64"

  filename         = "${path.root}/layers/prisma-layer.zip"
  source_code_hash = filebase64sha256("${path.root}/layers/prisma-layer.zip")

  compatible_runtimes      = ["nodejs22.x"]
  compatible_architectures = ["arm64"]
}
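For the generated engine to match the Lambda environment, the generator block in schema.prisma needs the right binaryTargets. A sketch (the target name assumes arm64 on Amazon Linux 2023 with OpenSSL 3; check Prisma's binary target list for your runtime):

```prisma
generator client {
  provider      = "prisma-client-js"
  // "native" keeps local development working;
  // the linux target is what actually ships in the layer
  binaryTargets = ["native", "linux-arm64-openssl-3.0.x"]
}
```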

3. Custom Runtime Layer

AWS provides runtimes for Node.js 18/20/22, Python 3.x, etc. For anything else (Bun, Deno, Node.js canary), you need a custom runtime via a bootstrap executable.

Custom Bootstrap for Bun Runtime

#!/usr/bin/env bash
# layers/bun-runtime/bootstrap

# This file must be executable: chmod +x bootstrap

BUN_VERSION="1.1.38"

# /opt is read-only at invocation time, so the binary should ship inside the
# layer ZIP itself (e.g. at bun/bun, mounted as /opt/bun/bun). As a fallback
# for experiments, download to /tmp, which is writable and survives warm starts.
BUN_DIR="/opt/bun"
if [ ! -x "$BUN_DIR/bun" ]; then
  BUN_DIR="/tmp/bun"
fi

if [ ! -x "$BUN_DIR/bun" ]; then
  mkdir -p "$BUN_DIR"
  curl -fsSL "https://github.com/oven-sh/bun/releases/download/bun-v${BUN_VERSION}/bun-linux-aarch64.zip" \
    -o /tmp/bun.zip
  unzip -q /tmp/bun.zip -d /tmp/bun-extract
  cp /tmp/bun-extract/bun-linux-aarch64/bun "$BUN_DIR/bun"
  chmod +x "$BUN_DIR/bun"
fi

# Lambda runtime loop
while true; do
  # Get the next invocation. The request ID arrives as a response HEADER,
  # not in the body, so dump headers to a file with -D
  HEADERS_FILE=/tmp/invocation-headers
  EVENT_DATA=$(curl -sS -D "$HEADERS_FILE" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")

  REQUEST_ID=$(grep -i "Lambda-Runtime-Aws-Request-Id" "$HEADERS_FILE" | \
    awk '{print $2}' | tr -d '\r')

  # Execute the handler
  RESPONSE=$("$BUN_DIR/bun" run "$LAMBDA_TASK_ROOT/handler.ts" <<< "$EVENT_DATA" 2>&1)
  EXIT_CODE=$?

  if [ $EXIT_CODE -eq 0 ]; then
    # Report success
    curl -sS -X POST \
      "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response" \
      -d "$RESPONSE"
  else
    # Report error
    curl -sS -X POST \
      "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/error" \
      -H "Content-Type: application/json" \
      -d "{\"errorMessage\":\"$RESPONSE\",\"errorType\":\"RuntimeError\"}"
  fi
done

Terraform: Custom Runtime Layer

# Custom runtime layer
resource "aws_lambda_layer_version" "bun_runtime" {
  layer_name   = "${var.project}-bun-runtime"
  description  = "Bun runtime for Lambda"

  filename         = "${path.root}/layers/bun-runtime.zip"
  source_code_hash = filebase64sha256("${path.root}/layers/bun-runtime.zip")

  compatible_runtimes      = ["provided.al2023"]
  compatible_architectures = ["arm64"]
}

# Function using custom runtime
resource "aws_lambda_function" "bun_handler" {
  function_name = "${var.project}-bun-handler"
  role          = aws_iam_role.lambda.arn
  runtime       = "provided.al2023"  # Use custom runtime
  handler       = "bootstrap"        # Arbitrary for custom runtimes; Lambda passes it to the bootstrap as _HANDLER
  architectures = ["arm64"]

  layers = [aws_lambda_layer_version.bun_runtime.arn]

  filename         = "${path.root}/dist/bun-handler.zip"
  source_code_hash = filebase64sha256("${path.root}/dist/bun-handler.zip")
}
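The bootstrap above pipes the event JSON to the handler on stdin and posts whatever the handler prints to stdout. A matching handler.ts could look like this (a sketch of that stdin/stdout contract; the response shape is an assumption, not a Lambda requirement):

```typescript
// Pure function so the contract is easy to test:
// raw event JSON in, response JSON out.
export function handle(rawEvent: string): string {
  const event = JSON.parse(rawEvent || '{}');
  return JSON.stringify({
    statusCode: 200,
    body: JSON.stringify({ ok: true, received: event }),
  });
}

// In the real handler file, a small entry point would read all of stdin,
// call handle(), and write the result to stdout for the bootstrap to POST
// back to the Runtime API.
```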

⚙️ DevOps Done Right — Zero Downtime, Full Automation

Ship faster without breaking things. We build CI/CD pipelines, monitoring stacks, and auto-scaling infrastructure that your team can actually maintain.

  • Staging + production environments with feature flags
  • Automated security scanning in the pipeline
  • Uptime monitoring + alerting + runbook automation
  • On-call support handover docs included

4. Binary Layer (Chromium for Puppeteer)

#!/usr/bin/env bash
# scripts/build-chromium-layer.sh

set -euo pipefail

OUTPUT="layers/chromium-layer.zip"
LAYER_DIR="layers/chromium"

rm -rf "$LAYER_DIR" "$OUTPUT"
mkdir -p "$LAYER_DIR/nodejs"

# Use @sparticuz/chromium — pre-built for Lambda
cd "$LAYER_DIR/nodejs"
cat > package.json << 'EOF'
{ "dependencies": { "@sparticuz/chromium": "^131.0.0" } }
EOF
npm install --omit=dev

cd ../../..
cd "$LAYER_DIR"
zip -r "../../$OUTPUT" . --quiet
cd ../..

echo "✅ Chromium layer: $(du -sh $OUTPUT | cut -f1)"

Using the Layer From a Handler

// src/functions/screenshot/handler.ts
import chromium from '@sparticuz/chromium';
import puppeteer from 'puppeteer-core';

export const handler = async (event: { url: string }) => {
  const browser = await puppeteer.launch({
    args: chromium.args,
    defaultViewport: chromium.defaultViewport,
    executablePath: await chromium.executablePath('/opt/nodejs/node_modules/@sparticuz/chromium/bin'),
    headless: chromium.headless,
  });

  const page = await browser.newPage();
  await page.goto(event.url, { waitUntil: 'networkidle2' });
  const screenshot = await page.screenshot({ type: 'png', encoding: 'base64' });

  await browser.close();

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'image/png' },
    body: screenshot,
    isBase64Encoded: true,
  };
};

5. Layer Version Management

# infrastructure/lambda/layer-aliases.tf

# Pin functions to specific layer versions (avoid accidental breakage)
locals {
  # These track the most recently published versions; hardcode a number
  # here instead to pin functions while a new layer bakes in staging
  deps_layer_version   = aws_lambda_layer_version.nodejs_deps.version
  prisma_layer_version = aws_lambda_layer_version.prisma.version
}

# Get ARN with specific version
data "aws_lambda_layer_version" "pinned_deps" {
  layer_name = "${var.project}-nodejs-deps"
  version    = local.deps_layer_version
}

# Output ARNs for reference in other modules
output "deps_layer_arn" {
  value = aws_lambda_layer_version.nodejs_deps.arn
}

output "prisma_layer_arn" {
  value = aws_lambda_layer_version.prisma.arn
}
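Note that the `layers` attribute on a function always takes fully versioned ARNs; there is no layer-level alias mechanism, so the version is always the final ARN component (account ID and names below are placeholders):

```text
arn:aws:lambda:us-east-1:123456789012:layer:myproject-nodejs-deps:42
```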

CI/CD Layer Update Pipeline

# .github/workflows/update-lambda-layers.yml
name: Update Lambda Layers

on:
  push:
    paths:
      - 'package-lock.json'
      - 'prisma/schema.prisma'

jobs:
  build-and-publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE }}
          aws-region: us-east-1

      - uses: actions/setup-node@v4
        with: { node-version: '22' }

      - name: Build dependency layer
        run: bash scripts/build-deps-layer.sh

      - name: Build Prisma layer
        run: bash scripts/build-prisma-layer.sh

      - name: Publish layers via Terraform
        run: |
          cd infrastructure
          terraform init
          terraform apply \
            -target=aws_lambda_layer_version.nodejs_deps \
            -target=aws_lambda_layer_version.prisma \
            -auto-approve

Cold Start Impact

| Deployment size | Cold start time | Approach |
|---|---|---|
| 50MB (all bundled) | ~800ms | No layers |
| 5MB function + 45MB layer | ~450ms | Layer (cached after first invocation) |
| 2MB function + 45MB layer (SnapStart) | ~100ms | SnapStart (Java/Python/.NET only) |
| Bun runtime layer | ~180ms | Custom runtime |

Layers reduce cold starts because Lambda caches layer content in the execution environment — subsequent invocations reuse the mounted /opt directory without downloading or decompressing.
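You can see this in your own metrics with a common trick: module scope executes once per execution environment, so a module-level flag separates cold from warm invocations in the logs (a minimal sketch):

```typescript
// Module scope runs once per execution environment, i.e. on cold start.
let coldStart = true;

export const handler = async (): Promise<{ coldStart: boolean }> => {
  const wasCold = coldStart;
  coldStart = false; // every later invocation in this environment is warm
  return { coldStart: wasCold };
};
```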


Cost Reference

| Usage | Monthly cost | Notes |
|---|---|---|
| Layer storage | $0.023/GB/month | Negligible — layers are small |
| Layer requests | No extra charge | Included in Lambda invocation pricing |
| Lambda invocations | $0.20/million | Same regardless of layers |
| Lambda compute | $0.0000166667/GB-second | Layers don't add memory |
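Plugging those per-invocation and per-GB-second rates into a quick sanity check (rates copied from the table above; the workload numbers are illustrative, and the free tier is ignored):

```typescript
const REQUEST_RATE = 0.20 / 1_000_000; // $ per invocation
const GB_SECOND_RATE = 0.0000166667;   // $ per GB-second

// Monthly cost for a given invocation count, average duration, and memory size.
export function monthlyLambdaCost(
  invocations: number,
  avgDurationMs: number,
  memoryMb: number,
): number {
  const gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  return invocations * REQUEST_RATE + gbSeconds * GB_SECOND_RATE;
}

// e.g. 1M invocations/month at 100ms on 512MB:
// requests $0.20 + compute (50,000 GB-s) ≈ $0.83, so roughly $1.03/month
```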


Working With Viprasol

Running Lambda functions with bloated deployment packages, slow cold starts, or duplicated dependencies across 20 functions? We restructure Lambda deployments with layers, right-size memory allocation, and implement SnapStart or custom runtimes — reducing cold starts by 40–70% and deployment package sizes by 80%+.

Talk to our team → | Explore our cloud solutions →

About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
