
Kubernetes Ingress NGINX: TLS Termination, Rate Limiting, Canary Deployments, and Annotations

Configure Kubernetes Ingress NGINX in production: TLS termination with cert-manager, rate limiting by IP and user, canary deployments with traffic splitting, custom error pages, and Helm values.

Viprasol Tech Team
December 1, 2026
13 min read

Ingress NGINX is the most widely deployed Kubernetes ingress controller. It handles TLS termination, path-based and host-based routing, rate limiting, authentication, and canary deployments entirely through annotations and Ingress resources. No custom sidecar proxies, and no service mesh required for most use cases.

This post covers the production setup: Helm installation, cert-manager for automatic TLS, rate limiting annotations, canary traffic splitting, custom error pages, and the annotations you'll actually use.

1. Installation with Helm

# Install ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --values ingress-nginx-values.yaml
# ingress-nginx-values.yaml
controller:
  replicaCount: 2  # At least 2 for HA

  resources:
    requests:
      cpu: 100m
      memory: 90Mi
    limits:
      cpu: 500m
      memory: 300Mi

  # Pod disruption budget: keep at least 1 running during upgrades
  minAvailable: 1

  # Affinity: spread replicas across nodes
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
          topologyKey: kubernetes.io/hostname

  # Service: expose via LoadBalancer (AWS NLB)
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

  # Config: global NGINX settings
  config:
    # Security headers
    add-headers: ingress-nginx/custom-headers
    server-tokens: "false"
    # Timeouts
    proxy-connect-timeout: "10"
    proxy-send-timeout: "60"
    proxy-read-timeout: "60"
    # Body size
    proxy-body-size: "10m"
    # Real IP from NLB
    use-forwarded-headers: "true"
    forwarded-for-header: "X-Forwarded-For"
    compute-full-forwarded-for: "true"
    # HTTP/2
    use-http2: "true"
    # Brotli compression (the default controller image ships with the Brotli module)
    enable-brotli: "true"
    brotli-level: "6"

  # Metrics for Prometheus
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true  # If using Prometheus Operator

  # Fallback certificate for hosts that don't define their own TLS secret
  extraArgs:
    default-ssl-certificate: "default/wildcard-tls"
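
The `add-headers` setting above points at a ConfigMap named `custom-headers` in the ingress-nginx namespace that isn't shown. A minimal sketch of that ConfigMap (header names as keys, header values as strings); the specific headers chosen here are illustrative:

```yaml
# infrastructure/k8s/ingress-nginx/custom-headers.yaml (hypothetical path)
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  # Each entry becomes an add_header in every response the controller serves
  X-Content-Type-Options: "nosniff"
  X-Frame-Options: "DENY"
  Referrer-Policy: "strict-origin-when-cross-origin"
```

Headers set globally here apply to every Ingress behind the controller, so anything defined in this ConfigMap doesn't need repeating in per-Ingress snippets.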

2. cert-manager for Automatic TLS

# Install cert-manager
helm repo add jetstack https://charts.jetstack.io
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true \  # installCRDs=true on older chart versions
  --set prometheus.enabled=true
# infrastructure/k8s/cert-manager/cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@viprasol.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      # HTTP-01 challenge (works for most domains)
      - http01:
          ingress:
            ingressClassName: nginx
      # DNS-01 challenge (required for wildcard certs)
      - dns01:
          route53:
            region: us-east-1
            hostedZoneID: Z1234567890
        selector:
          dnsZones:
            - viprasol.com

---
# Wildcard certificate (optional — covers *.viprasol.com)
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-tls
  namespace: default
spec:
  secretName: wildcard-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - viprasol.com
    - "*.viprasol.com"


3. Production Ingress Resource

# infrastructure/k8s/ingress/api-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: default
  annotations:
    # Ingress class is set via spec.ingressClassName below; the legacy
    # kubernetes.io/ingress.class annotation is deprecated
    cert-manager.io/cluster-issuer: letsencrypt-prod

    # Security headers (snippet annotations must be enabled in the controller
    # config: allow-snippet-annotations is off by default in recent releases)
    nginx.ingress.kubernetes.io/configuration-snippet: |
      add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
      add_header X-Frame-Options "DENY" always;
      add_header X-Content-Type-Options "nosniff" always;
      add_header Referrer-Policy "strict-origin-when-cross-origin" always;
      add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

    # CORS (for API endpoints)
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.viprasol.com,https://viprasol.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, PATCH, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-headers: "Authorization, Content-Type, X-Request-ID"
    nginx.ingress.kubernetes.io/cors-max-age: "86400"

    # Proxy timeouts
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"

    # Body size limit
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"

spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.viprasol.com
      secretName: api-tls  # cert-manager creates this

  rules:
    - host: api.viprasol.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 3000
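
The intro lists authentication among the controller's features; for simple cases it can be handled entirely in annotations with basic auth. A sketch, assuming a secret created with htpasswd (the `internal-auth` secret, `admin` user, `admin.viprasol.com` host, and `admin-service` backend are all illustrative names):

```yaml
# Create the secret first (illustrative user/secret names):
#   htpasswd -c auth admin
#   kubectl create secret generic internal-auth --from-file=auth
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: admin-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: internal-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
    - host: admin.viprasol.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 3000
```

For anything beyond a shared password, the `auth-url` annotation can delegate to an external auth service instead.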

4. Rate Limiting

# infrastructure/k8s/ingress/rate-limited-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-rate-limited
  annotations:
    # Rate limit: 100 requests per second per IP
    nginx.ingress.kubernetes.io/limit-rps: "100"

    # Rate limit: 1000 requests per minute per IP
    nginx.ingress.kubernetes.io/limit-rpm: "1000"

    # Burst: allow short spikes (2x the rate limit)
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "2"

    # Rate limit by concurrent connection count
    nginx.ingress.kubernetes.io/limit-connections: "50"

    # Whitelist: skip rate limiting for internal IPs
    nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8,172.16.0.0/12"

    # Return 429 instead of the default 503 when the limit is exceeded
    nginx.ingress.kubernetes.io/configuration-snippet: |
      limit_req_status 429;

    # Per-user rate limiting (e.g. keyed on the Authorization header) can't be
    # expressed with extra annotations here: annotation keys must be unique,
    # and limit_req_zone is only valid in the http context, so the zone has to
    # be defined in the controller's http-snippet config. The reference would
    # then be merged into the configuration-snippet above:
    #   limit_req zone=by_auth burst=60 nodelay;
spec:
  ingressClassName: nginx
  rules:
    - host: api.viprasol.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 3000
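
The `limit_req_zone` directive used for per-user limiting is only valid in NGINX's http context, so it belongs in the controller-wide config rather than a per-Ingress annotation. A sketch of the corresponding Helm values, assuming the zone name `by_auth`:

```yaml
# ingress-nginx-values.yaml (excerpt)
controller:
  config:
    # Injected once into the http context of the generated nginx.conf;
    # per-Ingress snippets can then reference the zone with limit_req
    http-snippet: |
      limit_req_zone $http_authorization zone=by_auth:10m rate=30r/s;
```

Keying on the raw Authorization header means each distinct token gets its own bucket; 10m of shared memory holds on the order of tens of thousands of keys.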


5. Canary Deployments

# Deploy new version to canary (10% of traffic)

# Step 1: Main ingress (stable, receives 90%)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-stable
spec:
  ingressClassName: nginx
  rules:
    - host: api.viprasol.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service-stable    # v1.2.0
                port:
                  number: 3000

---
# Step 2: Canary ingress (receives 10%)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-canary
  annotations:

    # Mark as canary
    nginx.ingress.kubernetes.io/canary: "true"

    # Weight-based: 10% of traffic to canary
    nginx.ingress.kubernetes.io/canary-weight: "10"

    # OR: header-based (always route if header present)
    # nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    # nginx.ingress.kubernetes.io/canary-by-header-value: "true"

    # OR: cookie-based
    # nginx.ingress.kubernetes.io/canary-by-cookie: "canary"

spec:
  ingressClassName: nginx
  rules:
    - host: api.viprasol.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service-canary   # v1.3.0-rc1
                port:
                  number: 3000
# Gradually increase canary weight via kubectl patch
kubectl patch ingress api-canary -n default --type='json' \
  -p='[{"op": "replace", "path": "/metadata/annotations/nginx.ingress.kubernetes.io~1canary-weight", "value": "50"}]'

# Full rollout: promote the new image on stable first, then remove the canary
# (deleting the canary first would briefly send 100% of traffic to the old version)
kubectl set image deployment/api-stable api=viprasol/api:v1.3.0
kubectl rollout status deployment/api-stable
kubectl delete ingress api-canary
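
To confirm the split is taking effect, loop requests against the host and tally which version answers. This assumes the API exposes its version string somewhere (the `/version` endpoint is a hypothetical example); the tally pipeline itself is demonstrated on canned responses:

```shell
# Against a live cluster (hypothetical /version endpoint):
#   for i in $(seq 1 100); do curl -s https://api.viprasol.com/version; echo; done | sort | uniq -c
# The sort | uniq -c tally, shown here on canned responses:
printf 'v1.2.0\nv1.3.0-rc1\nv1.2.0\nv1.2.0\n' | sort | uniq -c
```

With canary-weight set to "10", roughly 10 of every 100 requests should land on the canary; weight-based splitting is probabilistic per request, so small samples will vary.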

6. Custom Error Pages

# ConfigMap with custom error pages
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-error-pages
  namespace: ingress-nginx
data:
  "404": |
    <html>
      <head><title>Page Not Found</title></head>
      <body>
        <h1>404 — Page Not Found</h1>
        <p><a href="https://viprasol.com">Go to homepage</a></p>
      </body>
    </html>
  "503": |
    <html>
      <head><title>Service Unavailable</title></head>
      <body>
        <h1>We'll be right back</h1>
        <p>Maintenance in progress. Check <a href="https://status.viprasol.com">status page</a>.</p>
      </body>
    </html>
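
On its own this ConfigMap is inert: ingress-nginx only serves custom error pages through a default backend, with the error codes to intercept listed in the controller config. A sketch of the Helm values wiring, using the upstream nginx-errors example image (the tag shown is a placeholder; pin a real release):

```yaml
# ingress-nginx-values.yaml (excerpt)
controller:
  config:
    # Intercept these upstream status codes and hand them to the default backend
    custom-http-errors: "404,503"
defaultBackend:
  enabled: true
  image:
    registry: registry.k8s.io
    image: ingress-nginx/nginx-errors
    tag: "v1.0.0"   # placeholder; pin to a real release
  extraVolumes:
    - name: custom-error-pages
      configMap:
        name: custom-error-pages
        items:
          - key: "404"
            path: "404.html"
          - key: "503"
            path: "503.html"
  extraVolumeMounts:
    - name: custom-error-pages
      # The nginx-errors image serves pages from /www
      mountPath: /www
```

Note that `custom-http-errors` makes the controller intercept those status codes for every Ingress; backends that return their own 404 bodies will have them replaced.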

Cost Reference

| Cluster size         | Ingress replicas | Monthly cost | Notes                   |
|----------------------|------------------|--------------|-------------------------|
| Small (< 10 nodes)   | 2                | ~$30–60      | 2 × t3.small nodes      |
| Medium (10–50 nodes) | 3                | ~$80–150     | Dedicated node pool     |
| Large (50+ nodes)    | 5+               | ~$200–500    | HA with dedicated nodes |
| cert-manager         | n/a              | $0           | Free (Let's Encrypt)    |

Working With Viprasol

Setting up Kubernetes Ingress NGINX for the first time or struggling with TLS, rate limiting, or canary deploys? We configure production-grade ingress with cert-manager automation, rate limiting policies, canary deployment workflows, and Prometheus metrics — with Helm values checked into your GitOps repo so every change is auditable.

Talk to our team → | Explore our cloud solutions →
