
Docker Multi-Stage Builds: Layer Caching, Minimal Images, Distroless, and BuildKit Secrets

Build minimal, secure Docker images with multi-stage builds: layer caching optimization, distroless base images, BuildKit secret mounts for npm tokens, and production Dockerfiles for Node.js and Go.

Viprasol Tech Team
November 12, 2026
13 min read

A naive FROM node:20 Dockerfile produces an image that's 1.2GB — the full Node.js development environment, build tools, and your production app all bundled together. A well-designed multi-stage build produces 80–150MB with only what runs in production. Smaller images mean faster pulls, reduced attack surface, and lower ECR/artifact storage costs.

This post covers the techniques that actually move the needle: multi-stage separation of build and runtime, layer cache ordering, distroless base images, BuildKit cache mounts for package managers, and secret mounts for private registries.

The Problem with Single-Stage Builds

# ❌ Single-stage: 1.2GB image, includes build tools, source maps, test deps
FROM node:20

WORKDIR /app
COPY . .
RUN npm install          # Includes devDependencies
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/server.js"]

# Problems:
# - node_modules includes devDependencies (jest, typescript, eslint, etc.)
# - Source code AND compiled output both present
# - Full node:20 image = Debian + gcc + python + Node dev tools
# - Any vulnerability in build tools applies to prod container

1. Basic Multi-Stage Build (Node.js)

# Stage 1: Dependencies
FROM node:22-alpine AS deps
WORKDIR /app

# Copy only package files first (cache layer unless deps change)
COPY package.json package-lock.json ./
RUN npm ci               # npm ci always installs from the lockfile (no --frozen-lockfile flag needed; that's yarn/pnpm)

# Stage 2: Builder
FROM node:22-alpine AS builder
WORKDIR /app

# Bring in node_modules from deps stage
COPY --from=deps /app/node_modules ./node_modules
COPY . .

RUN npm run build
# Prune devDependencies after build
RUN npm prune --omit=dev

# Stage 3: Runtime (minimal)
FROM node:22-alpine AS runner
WORKDIR /app

# Security: don't run as root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs

# Copy only what the runtime needs
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/package.json ./

EXPOSE 3000
ENV NODE_ENV=production
ENV PORT=3000

CMD ["node", "dist/server.js"]

Result: 180MB vs 1.2GB — a 6.7x reduction.
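A quick way to verify numbers like these on your own builds (the `my-app` tags below are placeholders for whatever you tagged):

```shell
# Compare overall image sizes
docker images my-app --format '{{.Tag}}\t{{.Size}}'

# Break one image down layer by layer to find remaining bloat
docker history my-app:latest --format '{{.Size}}\t{{.CreatedBy}}'
```

`docker history` is also a good sanity check that build-only layers (devDependencies, source, compilers) never made it into the final stage.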



2. Next.js Production Dockerfile

# Next.js with standalone output mode
FROM node:22-alpine AS base

# Stage 1: Install dependencies
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: Build the application
FROM base AS builder
WORKDIR /app

COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Disable telemetry during build
ENV NEXT_TELEMETRY_DISABLED=1

RUN npm run build

# Stage 3: Production runtime
FROM base AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# Copy public assets
COPY --from=builder /app/public ./public

# next.config.ts: output: 'standalone' generates a self-contained bundle
# Standalone output includes only required node_modules (~30MB vs 200MB+)
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]

// next.config.ts — enable standalone output
const nextConfig = {
  output: 'standalone',
};
export default nextConfig;

Result: ~120MB for a typical Next.js app vs 800MB without standalone.


3. Distroless Base Images

Distroless images contain only the application runtime: no shell, no package manager, no OS utilities. An attacker who compromises the process finds no shell to exec into and no package manager to pull tools with, and scanners have far fewer OS packages to flag CVEs against.

# Node.js with distroless (Google's maintained distroless images)
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev

# Distroless Node.js runtime (no shell, no OS utilities)
FROM gcr.io/distroless/nodejs22-debian12 AS runner

WORKDIR /app

# Copy compiled output and prod dependencies
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

# User 65532 = nonroot in distroless
USER 65532

ENV NODE_ENV=production
EXPOSE 3000

CMD ["dist/server.js"]
# Note: CMD uses node implicitly in distroless/nodejs

Debug Variant for Troubleshooting

# Use :debug tag to get busybox shell when you need to inspect
FROM gcr.io/distroless/nodejs22-debian12:debug AS debug-runner
# Has /busybox/sh — use for debugging only, never in production


4. Layer Caching Optimization

Docker reuses a cached layer until the instruction or the files it references change. Once one layer is invalidated, every layer after it is rebuilt. The key rule: put what changes least at the top, what changes most at the bottom.

# ❌ Bad: COPY . . before npm install — any file change busts the cache
FROM node:22-alpine
WORKDIR /app
COPY . .                     # Invalidated on every code change
RUN npm ci                   # Re-runs on every code change (slow!)
RUN npm run build

# ✅ Good: package files first, source files last
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./  # Only invalidated when deps change
RUN npm ci                               # Cached unless deps change
COPY . .                                 # Source change doesn't re-run npm ci
RUN npm run build

BuildKit Cache Mounts

Cache mounts persist the package manager cache across builds without it being in the image layer.

# syntax=docker/dockerfile:1
FROM node:22-alpine AS builder

WORKDIR /app

COPY package.json package-lock.json ./

# --mount=type=cache: npm cache persists between builds (not in image)
RUN --mount=type=cache,target=/root/.npm \
    npm ci

COPY . .
RUN --mount=type=cache,target=/root/.npm \
    npm run build

# Python: pip cache mount
FROM python:3.12-slim AS builder

WORKDIR /app
COPY requirements.txt ./

# Note: no --no-cache-dir here — that flag would disable the very cache the mount persists
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

COPY . .

# Enable BuildKit (default in Docker 23+, explicit in older versions)
export DOCKER_BUILDKIT=1
docker build -t myapp .

# For older docker-compose, export these in the environment:
# COMPOSE_DOCKER_CLI_BUILD=1
# DOCKER_BUILDKIT=1

5. BuildKit Secret Mounts

Never put secrets (npm tokens, private registry credentials) in ENV or ARG — they appear in docker history. Use BuildKit secret mounts: the secret is available only during that RUN step and never written to any layer.
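To see the leak for yourself: anything that went in via ARG or ENV is recoverable from image metadata by anyone who can pull the image (the `myapp` tag here is hypothetical):

```shell
# Every ARG/ENV value used by a layer shows up in the layer history
docker history --no-trunc myapp:latest | grep -i token

# ENV values also live in the image config
docker inspect myapp:latest --format '{{.Config.Env}}'
```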

# syntax=docker/dockerfile:1
FROM node:22-alpine AS builder

WORKDIR /app
COPY package.json package-lock.json ./

# --mount=type=secret: secret available only during this RUN, not in image
RUN --mount=type=secret,id=npm_token \
    npm config set //registry.npmjs.org/:_authToken "$(cat /run/secrets/npm_token)" && \
    npm ci && \
    npm config delete //registry.npmjs.org/:_authToken

# Build, taking the secret from an environment variable
docker build \
  --secret id=npm_token,env=NPM_TOKEN \
  -t myapp .

# Or from a file containing only the token (path is illustrative)
docker build \
  --secret id=npm_token,src="$HOME/.npm_token" \
  -t myapp .

SSH Mount for Private Git Dependencies

# syntax=docker/dockerfile:1
FROM node:22-alpine AS builder

WORKDIR /app
COPY package.json package-lock.json ./

# Use SSH agent for private GitHub packages
RUN --mount=type=ssh \
    npm ci

# Build with SSH forwarding
docker build \
  --ssh default=$SSH_AUTH_SOCK \
  -t myapp .

6. Go Multi-Stage Build (Reference)

Go compiles to a static binary — the final image can be scratch (literally empty).

# syntax=docker/dockerfile:1

# Stage 1: Build
FROM golang:1.23-alpine AS builder

WORKDIR /app

# Leverage cache for module downloads
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download

COPY . .

# Build static binary (no CGO, fully self-contained)
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -ldflags="-w -s" -o /app/server ./cmd/server

# Stage 2: Minimal runtime
# Option A: scratch (absolute minimum — 8MB)
FROM scratch AS runner
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]

# Option B: distroless/static (adds CA certs, tzdata, and a nonroot user)
# FROM gcr.io/distroless/static-debian12
# COPY --from=builder /app/server /server
# ENTRYPOINT ["/server"]

Result: Go binary + scratch = 8–25MB image (vs 800MB+ with golang:1.23).
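One gotcha with scratch: if the binary accidentally ends up dynamic (CGO left enabled), the container dies at startup with a cryptic "no such file or directory" because the dynamic linker doesn't exist in the image. Worth checking before shipping:

```shell
# Run in the builder stage or locally after the build
file server   # expect: "statically linked"
ldd server    # expect: "not a dynamic executable"
```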


7. .dockerignore

A missing or incomplete .dockerignore sends your entire repo as the build context, slowing every build.

# .dockerignore
.git
.gitignore
.github
.env*
!.env.example

# Node.js
node_modules
npm-debug.log
.npm

# Build outputs (for Next.js/TypeScript — let Docker generate these)
.next
dist
build
out

# Test files
__tests__
*.test.ts
*.spec.ts
coverage
.nyc_output

# IDE
.vscode
.idea
*.swp
*.swo

# Documentation
README.md
CHANGELOG.md
docs/

# Docker files themselves (avoid circular context)
Dockerfile*
docker-compose*
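To get a feel for how much the context shrinks, you can approximate the excludes with tar. This is only an approximation (GNU tar patterns are close to, but not identical with, .dockerignore semantics; negations like `!.env.example` are not understood), and every file name below is made up for the demo:

```shell
# Simulate a small repo in a temp dir
cd "$(mktemp -d)"
mkdir -p node_modules/dep .git
head -c 100000 /dev/zero > node_modules/dep/big.js
echo 'console.log("hi")' > index.js
printf 'node_modules\n.git\n' > .dockerignore

# Context size in bytes, with and without the excludes
full=$(tar -cf - . | wc -c)
pruned=$(tar --exclude-from=.dockerignore -cf - . | wc -c)
echo "full=${full} pruned=${pruned}"
```

The real measurement is simpler still: BuildKit prints "transferring context: N" at the start of every build.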

CI/CD: GitHub Actions with BuildKit Cache

# .github/workflows/docker-build.yml
name: Build and Push Docker Image

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write  # For OIDC auth to AWS ECR

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ECR_ROLE_ARN }}
          aws-region: us-east-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ${{ steps.login-ecr.outputs.registry }}/myapp:latest
            ${{ steps.login-ecr.outputs.registry }}/myapp:${{ github.sha }}
          
          # Layer cache: pull from registry, push updated cache
          cache-from: type=registry,ref=${{ steps.login-ecr.outputs.registry }}/myapp:buildcache
          cache-to: type=registry,ref=${{ steps.login-ecr.outputs.registry }}/myapp:buildcache,mode=max
          
          # Build secrets
          secrets: |
            npm_token=${{ secrets.NPM_TOKEN }}
          
          # Build args (non-secret)
          build-args: |
            BUILD_DATE=${{ github.event.head_commit.timestamp }}
            GIT_SHA=${{ github.sha }}
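If you'd rather not maintain a buildcache tag in the registry, BuildKit's gha cache backend stores layers in the GitHub Actions cache instead. Swap the two cache lines above for the following (same step otherwise; subject to the repository's Actions cache quota):

```yaml
          cache-from: type=gha
          cache-to: type=gha,mode=max
```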

Image Size Comparison

Approach                         | Final Image Size | Build Cache | Security
---------------------------------|------------------|-------------|---------
Single-stage node:20             | ~1.2GB           | Poor        | Low
Multi-stage + node:22-alpine     | ~180MB           | Good        | Medium
Multi-stage + Next.js standalone | ~120MB           | Good        | Medium
Multi-stage + distroless         | ~90MB            | Good        | High
Go + scratch                     | ~8–25MB          | Excellent   | Highest


Working With Viprasol

Shipping containers that are 1GB+ and wondering why ECR bills keep climbing? We audit and redesign your Dockerfiles with multi-stage builds, distroless runtimes, and BuildKit cache optimization — typically cutting image sizes by 5–10x and CI build times by 40–60%.

Talk to our team → | Explore our cloud solutions →
