
CI/CD Pipeline Setup: A Production Guide for GitHub Actions, Docker, and AWS

CI/CD pipeline setup in 2026 — how to build production-grade continuous integration and deployment pipelines using GitHub Actions, Docker, AWS ECS, and Terraform

Viprasol Tech Team
March 26, 2026
13 min read

A production-grade CI/CD pipeline is the difference between a team that ships confidently every day and one that treats deployments as high-risk events scheduled on Friday afternoons.

This guide walks through a complete pipeline implementation: GitHub Actions for CI, Docker for containerization, AWS ECS Fargate for deployment, and Terraform for infrastructure. Every code sample is production-ready — not pseudocode.


What a Complete Pipeline Looks Like

Developer pushes code
        ↓
GitHub Actions triggered
        ↓
┌─────────────────────────────────────────┐
│  CI Stage                               │
│  1. Install dependencies                │
│  2. Lint + type check                   │
│  3. Unit tests (with coverage gates)    │
│  4. Integration tests                   │
│  5. Security scan (npm audit, Trivy)    │
└─────────────────────────────────────────┘
        ↓ (all pass)
┌─────────────────────────────────────────┐
│  Build Stage                            │
│  6. Build Docker image                  │
│  7. Tag with git SHA                    │
│  8. Push to ECR                         │
└─────────────────────────────────────────┘
        ↓
┌─────────────────────────────────────────┐
│  Deploy to Staging                      │
│  9. Update ECS task definition          │
│  10. Rolling deploy to staging cluster  │
│  11. Run smoke tests                    │
└─────────────────────────────────────────┘
        ↓ (staging smoke tests pass)
┌─────────────────────────────────────────┐
│  Deploy to Production                   │
│  12. Require manual approval (optional) │
│  13. Rolling deploy to prod cluster     │
│  14. Health check verification          │
│  15. Notify Slack on success/failure    │
└─────────────────────────────────────────┘
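Images in step 7 are tagged with the commit's git SHA. A small shell sketch of what those tags look like (the SHA value below is a made-up example; 7 characters is the conventional short length, and what docker/metadata-action's type=sha,format=short emits by default):

```shell
# Derive image tags from a commit SHA, as the build stage does.
GITHUB_SHA="3b1f9a8c0d2e4f6a8b0c2d4e6f8a0b2c4d6e8f0a"  # sample; set by GitHub Actions in CI

FULL_TAG="$GITHUB_SHA"                     # what type=sha,format=long produces
SHORT_TAG=$(printf '%.7s' "$GITHUB_SHA")   # what type=sha,format=short produces

echo "my-api:$SHORT_TAG"   # my-api:3b1f9a8
```

Whichever format you pick, the deploy stage must reference the image by the exact same tag the build stage pushed, or the pull will fail with an image-not-found error.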

Step 1: Dockerfile (Production-Grade)

# Multi-stage build: builder stage + minimal runtime image
FROM node:20-alpine AS builder

WORKDIR /app

# Copy package files first for layer caching
COPY package*.json ./
# npm ci installs devDependencies too by default (needed for the build step)
RUN npm ci

COPY . .
RUN npm run build

# --- Runtime stage ---
FROM node:20-alpine AS runtime

# Security: run as non-root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app

# Copy only built output + production dependencies
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json

# Tighten filesystem permissions
RUN chown -R appuser:appgroup /app
USER appuser

EXPOSE 3000

# Use exec form (not shell form) so signals propagate correctly
CMD ["node", "dist/server.js"]

Key decisions:

  • Multi-stage build keeps the runtime image lean (no dev dependencies, no source files, no build tools)
  • Alpine base — node:20-alpine is roughly 130MB vs ~1GB for the full node:20 image; smaller attack surface
  • Non-root user — required by most container security policies
  • npm ci instead of npm install — deterministic, fails if package-lock.json is out of sync

☁️ Is Your Cloud Costing Too Much?

Most teams overspend 30–40% on cloud — wrong instance types, no reserved pricing, bloated storage. We audit, right-size, and automate your infrastructure.

  • AWS, GCP, Azure certified engineers
  • Infrastructure as Code (Terraform, CDK)
  • Docker, Kubernetes, GitHub Actions CI/CD
  • Typical audit recovers $500–$3,000/month in savings

Step 2: GitHub Actions CI Workflow

# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  NODE_VERSION: "20"

jobs:
  ci:
    name: Test & Lint
    runs-on: ubuntu-latest
    
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
        ports: ["5432:5432"]
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Lint
        run: npm run lint

      - name: Type check
        run: npm run typecheck

      - name: Unit tests
        run: npm run test:unit -- --coverage

      - name: Integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb

      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          fail_ci_if_error: false

      - name: Security audit
        run: npm audit --audit-level=high

Step 3: Docker Build & ECR Push

# .github/workflows/build.yml
name: Build & Push

on:
  push:
    branches: [main]

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: my-api

jobs:
  build:
    name: Build Docker Image
    runs-on: ubuntu-latest
    # Runs as its own workflow; use branch protection to require CI to pass before merge

    outputs:
      image-tag: ${{ steps.meta.outputs.version }}
      # build-push-action exposes imageid/digest/metadata, not a full image reference
      image-digest: ${{ steps.build-push.outputs.digest }}

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}
          # Tag with the full commit SHA so deploy jobs can reference it directly
          tags: |
            type=sha,prefix=,format=long
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push
        id: build-push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          # Build args for embedding version info
          build-args: |
            BUILD_VERSION=${{ github.sha }}
            BUILD_DATE=${{ github.event.head_commit.timestamp }}

GitHub Actions cache for Docker layers (cache-from: type=gha) cuts build times from 4–8 minutes down to 45–90 seconds for typical Node.js apps. One further hardening step: prefer GitHub's OIDC integration (role-to-assume in aws-actions/configure-aws-credentials) over long-lived access keys stored as repository secrets.


⚙️ DevOps Done Right — Zero Downtime, Full Automation

Ship faster without breaking things. We build CI/CD pipelines, monitoring stacks, and auto-scaling infrastructure that your team can actually maintain.

  • Staging + production environments with feature flags
  • Automated security scanning in the pipeline
  • Uptime monitoring + alerting + runbook automation
  • On-call support handover docs included

Step 4: ECS Fargate Deployment

# .github/workflows/deploy.yml
name: Deploy

on:
  workflow_run:
    workflows: ["Build & Push"]
    types: [completed]
    branches: [main]

jobs:
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    environment: staging

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Download task definition
        run: |
          aws ecs describe-task-definition \
            --task-definition my-api-staging \
            --query taskDefinition > task-definition.json

      - name: Update ECS task definition image
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: api
          # In workflow_run events, use the triggering run's commit, not github.sha
          image: ${{ secrets.ECR_REGISTRY }}/my-api:${{ github.event.workflow_run.head_sha }}

      - name: Deploy to ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: my-api-staging
          cluster: staging-cluster
          wait-for-service-stability: true
          # Rolls back automatically when the deployment circuit breaker is enabled on the service

      - name: Smoke test staging
        run: |
          sleep 15
          curl --fail https://staging.api.example.com/health || exit 1

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: deploy-staging
    environment: production   # Requires manual approval in GitHub

    steps:
      - uses: actions/checkout@v4
      
      - name: Configure AWS
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.PROD_AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.PROD_AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Download task definition
        run: |
          aws ecs describe-task-definition \
            --task-definition my-api-production \
            --query taskDefinition > task-definition.json

      - name: Update ECS task definition image
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: api
          # In workflow_run events, use the triggering run's commit, not github.sha
          image: ${{ secrets.ECR_REGISTRY }}/my-api:${{ github.event.workflow_run.head_sha }}

      - name: Deploy to Production
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: my-api-production
          cluster: production-cluster
          wait-for-service-stability: true

      - name: Notify Slack on success
        if: success()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            { "text": "✅ *${{ github.repository }}* deployed to production\nCommit: ${{ github.sha }}\nBy: ${{ github.actor }}" }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

      - name: Notify Slack on failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            { "text": "🚨 *${{ github.repository }}* production deploy FAILED\nCommit: ${{ github.sha }}\nBy: ${{ github.actor }}" }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
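The staging smoke test above makes a single curl attempt after a fixed sleep. A more forgiving variant, sketched here as a standalone POSIX shell function (the URL, attempt count, and delay are illustrative), polls until the service answers or a retry budget runs out:

```shell
#!/bin/sh
# Poll a health endpoint until it responds successfully or retries are exhausted.
smoke_test() {
  url=$1
  attempts=${2:-10}   # how many times to try
  delay=${3:-5}       # seconds between tries
  i=1
  while [ "$i" -le "$attempts" ]; do
    if curl --fail --silent --max-time 5 "$url" > /dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "gave up after $attempts attempts" >&2
  return 1
}
```

Wired into the staging job as `smoke_test "https://staging.api.example.com/health" 12 10`, this gives the service up to two minutes to become healthy instead of one shot 15 seconds after the deploy settles.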

Step 5: Terraform Infrastructure

# infrastructure/ecs.tf

resource "aws_ecs_cluster" "main" {
  name = "${var.app_name}-${var.environment}"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

resource "aws_ecs_task_definition" "api" {
  family                   = "${var.app_name}-api-${var.environment}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "512"
  memory                   = "1024"
  execution_role_arn       = aws_iam_role.ecs_execution.arn
  task_role_arn            = aws_iam_role.ecs_task.arn

  container_definitions = jsonencode([{
    name  = "api"
    # ":latest" is only the bootstrap image; the pipeline pins deploys to a git SHA tag.
    # Consider lifecycle ignore_changes on the service's task_definition so that
    # terraform apply does not roll back the running image.
    image = "${aws_ecr_repository.api.repository_url}:latest"

    portMappings = [{
      containerPort = 3000
      protocol      = "tcp"
    }]

    environment = [
      { name = "NODE_ENV", value = var.environment },
      { name = "PORT", value = "3000" }
    ]

    secrets = [
      { name = "DATABASE_URL", valueFrom = aws_ssm_parameter.db_url.arn },
      { name = "JWT_SECRET", valueFrom = aws_ssm_parameter.jwt_secret.arn }
    ]

    logConfiguration = {
      logDriver = "awslogs"
      options = {
        awslogs-group         = "/ecs/${var.app_name}-${var.environment}"
        awslogs-region        = var.aws_region
        awslogs-stream-prefix = "api"
      }
    }

    healthCheck = {
      # node:20-alpine does not ship curl; busybox wget is available instead
      command     = ["CMD-SHELL", "wget -q -O /dev/null http://localhost:3000/health/live || exit 1"]
      interval    = 30
      timeout     = 5
      retries     = 3
      startPeriod = 60
    }
  }])
}

resource "aws_ecs_service" "api" {
  name            = "${var.app_name}-api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api.arn
  desired_count   = var.environment == "production" ? 3 : 1
  launch_type     = "FARGATE"

  deployment_controller {
    type = "ECS"
  }

  deployment_circuit_breaker {
    enable   = true
    rollback = true   # Auto-rollback on deployment failure
  }

  network_configuration {
    subnets          = var.private_subnet_ids
    security_groups  = [aws_security_group.ecs_tasks.id]
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.api.arn
    container_name   = "api"
    container_port   = 3000
  }
}

Secrets Management

Never store secrets as plain environment variables in task definitions; they are visible in the AWS console and in describe-task-definition output. Use SSM Parameter Store or Secrets Manager instead:

# Store secrets in SSM (run once during environment setup)
aws ssm put-parameter \
  --name "/myapp/production/database-url" \
  --value "postgresql://user:pass@host:5432/db" \
  --type "SecureString" \
  --key-id "alias/aws/ssm"

# IAM policy: ECS task execution role needs GetParameter permission
# Attach this policy to the ECS execution role:
# {
#   "Effect": "Allow",
#   "Action": ["ssm:GetParameters", "ssm:GetParameter"],
#   "Resource": "arn:aws:ssm:us-east-1:*:parameter/myapp/production/*"
# }
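The policy in the comment above can be materialized and sanity-checked before attaching it. A sketch (the file name is arbitrary; attaching it to the execution role would then be a single aws iam put-role-policy call):

```shell
# Write the SSM read policy for the ECS execution role and verify it parses.
cat > ssm-read-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameters", "ssm:GetParameter"],
      "Resource": "arn:aws:ssm:us-east-1:*:parameter/myapp/production/*"
    }
  ]
}
EOF
python3 -m json.tool ssm-read-policy.json > /dev/null && echo "policy parses as valid JSON"
```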

Pipeline Cost Breakdown

Component                                                 Monthly Cost (estimate)
---------------------------------------------------------------------------------
GitHub Actions (2,000 min/month free, then $0.008/min)    $0–$50
ECR storage (1–5 GB)                                      $1–$5
ECS Fargate (0.25 vCPU, 0.5 GB, staging only)             $10–$20
ECS Fargate (0.5 vCPU, 1 GB × 3 tasks, production)        $80–$120
Application Load Balancer                                 $18–$25
Total infrastructure                                      $110–$220/month
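The staging Fargate row can be sanity-checked against AWS's published per-hour rates (us-east-1 x86 list prices at the time of writing, roughly $0.04048 per vCPU-hour and $0.004445 per GB-hour; the table's ranges leave headroom for NAT, data transfer, and CloudWatch logs):

```shell
# Back-of-envelope monthly cost for one always-on staging task (0.25 vCPU, 0.5 GB).
awk -v vcpu=0.25 -v mem_gb=0.5 -v hours=730 \
    -v vcpu_rate=0.04048 -v gb_rate=0.004445 \
    'BEGIN {
       printf "staging task: ~$%.2f/month\n",
              (vcpu * vcpu_rate + mem_gb * gb_rate) * hours
     }'
# prints: staging task: ~$9.01/month
```

The same arithmetic, scaled to the production task size and count, gives the compute baseline for the production row.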

Implementation cost (one-time): $8,000–$25,000 depending on complexity. Most teams recover this within 2–3 months through reduced deployment-related engineering time.


Working With Viprasol

We design and implement CI/CD pipelines for teams that want to ship faster with less risk — from simple single-service pipelines through multi-environment, multi-region deployment systems.

Build your pipeline with us →
DevOps as a Service →
Cloud Solutions →


About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
