Docker Best Practices: Production-Grade Container Security and Optimization

Docker best practices in 2026 — multi-stage builds, security hardening, image size optimization, Docker Compose for development, and production container patterns.

Viprasol Tech Team
April 12, 2026
12 min read

Most developers can write a Dockerfile. Fewer write Dockerfiles that are secure, efficient, and maintainable in production. The gap matters: a badly constructed container image is 1GB instead of 80MB, runs as root, contains dozens of known vulnerabilities, and rebuilds from scratch on every code change.

This guide covers the practices that distinguish production containers from development experiments.


Multi-Stage Builds

The single highest-impact Dockerfile improvement. Separate the build environment from the runtime environment — your production image doesn't need compilers, build tools, dev dependencies, or source files.

Node.js Multi-Stage Build

# syntax=docker/dockerfile:1

# --- Stage 1: Install dependencies ---
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
# npm ci: deterministic, fails on lockfile mismatch.
# Install everything, including dev dependencies — the build stage needs them.
RUN npm ci

# --- Stage 2: Build ---
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# --- Stage 3: Production runtime ---
FROM node:20-alpine AS runtime

# Security: non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

WORKDIR /app

# Only copy what's needed at runtime
COPY --from=builder /app/dist ./dist
COPY --from=deps /app/node_modules ./node_modules
COPY package.json ./

# Strip the dev dependencies that the deps stage installed
RUN npm prune --omit=dev

# Set ownership
RUN chown -R appuser:appgroup /app
USER appuser

# Exec form so Node runs as PID 1 and receives signals directly
# (add an init like tini if you need zombie reaping)
EXPOSE 3000
CMD ["node", "dist/server.js"]

Result: Final image contains only the Alpine base, Node.js runtime, production node_modules, and compiled output. Typically 80–150MB vs. 600–900MB for a single-stage build.

Python Multi-Stage Build

# syntax=docker/dockerfile:1

# --- Stage 1: Build dependencies ---
FROM python:3.12-slim AS builder

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc libpq-dev \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt

# --- Stage 2: Runtime ---
FROM python:3.12-slim AS runtime

RUN groupadd -r appgroup && useradd -r -g appgroup appuser

WORKDIR /app

# Copy installed packages from builder (--chown so appuser can read them)
COPY --from=builder --chown=appuser:appgroup /root/.local /home/appuser/.local

# Copy application code
COPY --chown=appuser:appgroup . .

USER appuser

ENV PATH=/home/appuser/.local/bin:$PATH
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Security Hardening

Never Run as Root

# ❌ BAD: Default root user
FROM node:20-alpine
WORKDIR /app
COPY . .
CMD ["node", "server.js"]
# Process runs as root — if exploited, attacker has root on the container

# ✅ GOOD: Explicit non-root user
FROM node:20-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY --chown=appuser:appgroup . .
USER appuser
CMD ["node", "server.js"]
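The non-root user baked into the image can be reinforced with runtime restrictions. A sketch of common hardening flags (the flags are standard Docker options; the image name is illustrative):

```shell
# Drop all Linux capabilities, forbid privilege escalation,
# and mount the root filesystem read-only (tmpfs for scratch space)
docker run \
  --cap-drop=ALL \
  --security-opt=no-new-privileges \
  --read-only \
  --tmpfs /tmp \
  -p 3000:3000 \
  myapp:latest
```

If the app writes only to /tmp, a read-only root filesystem turns many file-dropping exploits into immediate failures.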

Minimal Base Images

Base image              Compressed size    Attack surface
ubuntu:22.04            ~30MB              Large (full OS)
debian:bookworm-slim    ~29MB              Medium
alpine:3.19             ~3.5MB             Small
node:20-alpine          ~50MB              Small
distroless/nodejs20     ~35MB              Minimal (no shell)
scratch                 0MB                None (static binaries only)

For most production Node.js/Python apps, Alpine is the right tradeoff. Distroless is excellent for security-sensitive workloads: no shell means no shell injection, and no package manager makes post-exploitation and lateral movement much harder.

# Distroless Node.js — no shell, no package manager, minimal attack surface
FROM node:20-alpine AS builder
# ... build steps ...

FROM gcr.io/distroless/nodejs20-debian12 AS runtime
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
USER nonroot
CMD ["dist/server.js"]
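The flip side of no shell is no docker exec debugging. The distroless project publishes :debug tag variants that add a BusyBox shell for troubleshooting; one approach is a separate debug target (a sketch, never to be shipped as the production image):

```dockerfile
# Debug-only stage — identical layout, but with a BusyBox shell
FROM gcr.io/distroless/nodejs20-debian12:debug AS debug
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# Shell lives at /busybox/sh in the :debug variants
CMD ["dist/server.js"]
```

Build it with docker build --target debug when you need to exec into a running container; production deploys keep using the shell-less runtime stage.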

Scan for Vulnerabilities

# Trivy — free, comprehensive vulnerability scanner
trivy image --severity HIGH,CRITICAL myapp:latest

# Or in CI/CD (GitHub Actions)
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: '${{ steps.build.outputs.image }}'
    format: 'table'
    exit-code: '1'           # Fail build on critical vulnerabilities
    severity: 'CRITICAL,HIGH'
    ignore-unfixed: true     # Only fail on vulnerabilities with fixes available
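Findings you have reviewed and accepted can be suppressed so they stop failing the build. Trivy reads a .trivyignore file from the scan directory; a sketch (the CVE ID is a placeholder):

```
# .trivyignore — one CVE ID per line; lines starting with # are comments
# Accepted: affects a dev-only dependency that is not shipped
CVE-2023-12345
```

Keep the justification comment next to each entry so the suppression list stays auditable.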

Don't Embed Secrets

# ❌ NEVER: Secrets in Dockerfile or build args
ARG DATABASE_URL
ENV DATABASE_URL=${DATABASE_URL}   # Visible in image history!

# ✅ CORRECT: Secrets at runtime via environment variables
# In docker run:
docker run -e DATABASE_URL=$DATABASE_URL myapp:latest

# In docker-compose:
environment:
  - DATABASE_URL   # Reads from host environment

# In Kubernetes:
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: database-url

# In AWS ECS:
secrets:
  - name: DATABASE_URL
    valueFrom: arn:aws:ssm:us-east-1:123456789:parameter/myapp/database-url

# Check image for accidentally included secrets
docker history myapp:latest --no-trunc | grep -i "secret\|password\|key\|token"

# Verify no secret files ended up in layers
docker save myapp:latest | tar -tvf - | grep -i "\.env\|secret\|credentials"
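Build-time secrets (e.g., a private registry token) need the same care: BuildKit secret mounts expose the value only during a single RUN step, so it never lands in a layer or in the build history. A sketch (the secret id npm_token and the token file path are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
# Secret is mounted at /run/secrets/npm_token for this step only —
# your .npmrc can reference it via ${NPM_TOKEN}
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN=$(cat /run/secrets/npm_token) npm ci
```

Pass the value at build time with docker build --secret id=npm_token,src=./npm_token.txt . — unlike ARG, it is not recoverable from docker history.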


Layer Caching Optimization

Docker layer caching is the most impactful build speed optimization. Structure your Dockerfile so that frequently-changed layers come last.

# ❌ SLOW: Code copied before dependencies installed
FROM node:20-alpine
WORKDIR /app
COPY . .                          # Changes every commit → cache bust here
RUN npm ci                        # Re-runs every build even if deps unchanged

# ✅ FAST: Dependencies installed before code copy
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./             # Only changes when deps change
RUN npm ci                        # Cached until package.json changes
COPY . .                          # Code changes don't invalidate dep layer
RUN npm run build

Result: With properly ordered layers, a code-only change skips the npm ci step (2–4 minutes) and only runs the build step (~30 seconds).

BuildKit Cache Mounts

# syntax=docker/dockerfile:1

FROM node:20-alpine AS builder
WORKDIR /app

# Mount npm cache between builds (--mount=type=cache)
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci

COPY . .
RUN npm run build

BuildKit is the default builder in Docker Engine 23.0 and later. On older versions, enable it with DOCKER_BUILDKIT=1 docker build . or in the daemon config: { "features": { "buildkit": true } }.
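The same cache-mount pattern applies to the Python build earlier in this guide; pip's download cache lives under /root/.cache/pip by default. A sketch:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
# Reuse pip's download cache across builds. Note: no --no-cache-dir here —
# the cache mount (which never enters the image) is what makes rebuilds fast
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --user -r requirements.txt
```

Because the mount is excluded from the final layers, you get fast rebuilds without the image bloat that --no-cache-dir normally guards against.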


Docker Compose for Development

# docker-compose.yml — development environment
services:
  api:
    build:
      context: .
      target: builder   # Use builder stage for dev (includes dev deps + hot reload)
    volumes:
      - .:/app           # Mount source for hot reload
      - /app/node_modules  # Prevent host node_modules from overwriting container's
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://dev:dev@postgres:5432/devdb
      - REDIS_URL=redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    command: npm run dev  # Hot reload command

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: devdb
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

# Start dev environment
docker compose up -d

# View logs
docker compose logs -f api

# Run migrations
docker compose exec api npm run db:migrate

# Shell into container
docker compose exec api sh

# Rebuild after Dockerfile changes
docker compose up --build api
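When one Compose project has to serve both development and production, per-environment override files keep the base file clean. A minimal sketch (the filename follows the Compose -f convention; service names match the example above):

```yaml
# docker-compose.prod.yml — merged on top of docker-compose.yml
services:
  api:
    build:
      target: runtime            # production stage instead of builder
    environment:
      - NODE_ENV=production
    command: ["node", "dist/server.js"]
    restart: unless-stopped
```

Start production with docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d; later files override matching keys in earlier ones.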


.dockerignore

Always include a .dockerignore — it keeps large directories out of the build context:

# .dockerignore
node_modules/
.git/
.gitignore
dist/
coverage/
*.log
.env
.env.*
.DS_Store
Dockerfile
docker-compose*.yml
README.md
docs/
.github/

Without .dockerignore, node_modules (potentially 500MB+) gets sent to the build context on every build.


Health Checks

# Requires curl in the image (e.g. apk add --no-cache curl on Alpine)
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
  CMD curl -f http://localhost:3000/health/live || exit 1

Health checks tell the runtime and orchestrators that honor them (Docker Swarm, ECS) when a container is ready for traffic and when to restart it. Kubernetes is the notable exception: it ignores the Dockerfile HEALTHCHECK and uses its own probes.
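For Kubernetes the same endpoint has to be declared in the pod spec, since the kubelet runs its own probes rather than the Dockerfile HEALTHCHECK. A sketch (the endpoint path matches the example above; the thresholds are illustrative):

```yaml
# Pod spec excerpt — Kubernetes probes replace Dockerfile HEALTHCHECK
livenessProbe:
  httpGet:
    path: /health/live
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /health/live
    port: 3000
  periodSeconds: 10
```

Liveness failures restart the container; readiness failures only remove it from the Service endpoints, which is usually what you want during slow startups.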


Image Tagging Strategy

# ❌ BAD: Always 'latest' — no traceability
docker build -t myapp:latest .

# ✅ GOOD: Git SHA + semantic version + latest
GIT_SHA=$(git rev-parse --short HEAD)
docker build \
  -t myapp:${GIT_SHA} \
  -t myapp:v2.3.1 \
  -t myapp:latest \
  .

Use Git SHAs for immutable references in deployment; semantic versions for release tracking.
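In CI the same scheme can be generated automatically. With GitHub Actions, docker/metadata-action derives tags from the Git context; a sketch (image name and tag patterns are illustrative):

```yaml
- name: Derive image tags from Git context
  uses: docker/metadata-action@v5
  with:
    images: myapp
    # sha = immutable short-SHA tag; semver fires on release tags;
    # latest is applied only on the default branch
    tags: |
      type=sha,format=short
      type=semver,pattern=v{{version}}
      type=raw,value=latest,enable={{is_default_branch}}
```

Feed the action's outputs into your build-push step so every image carries the full tag set without hand-written shell.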


Working With Viprasol

We build production-grade container infrastructure — Dockerfiles, CI/CD pipeline integration, registry setup, and Kubernetes/ECS deployment configuration.

Container infrastructure →
DevOps as a Service →
Cloud Solutions →


About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading

