
Cloud-Native Security in 2026: Container Scanning, Pod Security, and OPA/Gatekeeper

Secure Kubernetes workloads with container image scanning, Pod Security Standards, OPA/Gatekeeper policies, network policies, and supply chain security with SLSA and Sigstore.

Viprasol Tech Team
July 27, 2026
13 min read

Kubernetes gives developers tremendous power to deploy workloads, and equal power to deploy them insecurely. A container running as root, a pod with host network access, an image with 300 unpatched CVEs, a service that can talk to every other service in the cluster: together these form the attack surface of the modern cloud-native stack.

Cloud-native security is layered: secure the container image, secure the pod configuration, secure the network, secure the supply chain, and enforce all of it with policy-as-code that runs in CI and admission control. This post covers each layer with production-ready configurations.


The Cloud-Native Security Layers

| Layer | What can go wrong | Controls |
| --- | --- | --- |
| Image | Vulnerable base OS, known CVEs, secrets in layers | Image scanning (Trivy), distroless base images |
| Pod | Root containers, host path mounts, excess capabilities | Pod Security Standards, SecurityContext |
| Network | Unrestricted pod-to-pod traffic | Kubernetes NetworkPolicy, Istio AuthorizationPolicy |
| Supply chain | Tampered images, unsigned artifacts | Sigstore/cosign, SLSA provenance |
| Admission | Misconfigured deployments slip through | OPA/Gatekeeper, Kyverno |
| Runtime | Unexpected syscalls, container escape | Falco, seccomp profiles |

Layer 1: Container Image Security

Distroless Base Images

# ❌ Full Ubuntu — large attack surface, many CVEs
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nodejs npm
COPY . .
CMD ["node", "server.js"]

# ✅ Distroless — no shell, no package manager, minimal attack surface
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci                # install all deps; the build step needs devDependencies
COPY . .
RUN npm run build
RUN npm prune --omit=dev  # drop devDependencies before copying node_modules to the final image

# Final image: only runtime, no build tools
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
# Distroless images define a "nonroot" user; never run as root
# (Dockerfile does not support trailing comments on non-RUN instructions)
USER nonroot
EXPOSE 8080
CMD ["dist/index.js"]
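
A quick sanity check after building is to confirm the user and size actually baked into the image. A sketch, assuming the image was tagged myapp:dev locally:

```shell
# Show the configured user (should be "nonroot", not empty or root)
docker inspect --format '{{.Config.User}}' myapp:dev

# Compare image sizes; distroless images are typically a fraction of a full-OS base
docker images myapp:dev --format '{{.Repository}}:{{.Tag}} {{.Size}}'
```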

Image Scanning with Trivy

# .github/workflows/security.yml
name: Container Security Scan
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master  # pin a tagged release in production pipelines
        with:
          image-ref: myapp:${{ github.sha }}
          format: sarif
          output: trivy-results.sarif
          severity: CRITICAL,HIGH
          exit-code: 1        # Fail CI on CRITICAL/HIGH CVEs
          ignore-unfixed: true  # Only fail on CVEs with available fixes

      - name: Upload SARIF to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: trivy-results.sarif

      # Also scan for secrets in image layers
      - name: Scan for secrets
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          scanners: secret
          exit-code: 1
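
The same gates can be run locally before pushing. A sketch, assuming the Trivy CLI is installed and the image was built locally as myapp:dev:

```shell
# Fail (exit 1) on fixable CRITICAL/HIGH CVEs, mirroring the CI gate above
trivy image --severity CRITICAL,HIGH --ignore-unfixed --exit-code 1 myapp:dev

# Scan image layers for embedded secrets only
trivy image --scanners secret --exit-code 1 myapp:dev
```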

Continuous Registry Scanning

# ECR: Enable automated scanning on push
resource "aws_ecr_repository" "api" {
  name = "myapp-api"

  image_scanning_configuration {
    scan_on_push = true  # basic scan on push; the aws.inspector2 rule below also requires Inspector enhanced scanning enabled for the registry
  }

  encryption_configuration {
    encryption_type = "AES256"
  }
}

# Alert when critical CVEs found
resource "aws_cloudwatch_event_rule" "ecr_scan_findings" {
  name        = "ecr-critical-findings"
  event_pattern = jsonencode({
    source      = ["aws.inspector2"]
    detail-type = ["Inspector2 Finding"]
    detail = {
      severity = ["CRITICAL"]
      status   = ["ACTIVE"]
      resources = { type = ["AWS_ECR_CONTAINER_IMAGE"] }
    }
  })
}
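
The event rule above needs a target to actually deliver findings anywhere. A sketch wiring it to an SNS topic (resource names are placeholders):

```hcl
resource "aws_sns_topic" "security_alerts" {
  name = "ecr-critical-findings"
}

resource "aws_cloudwatch_event_target" "to_sns" {
  rule = aws_cloudwatch_event_rule.ecr_scan_findings.name
  arn  = aws_sns_topic.security_alerts.arn
}
```

From there, subscribe email, Slack, or a ticketing webhook to the topic.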

Layer 2: Pod Security Standards

Kubernetes Pod Security Standards (PSS), enforced by the built-in Pod Security Admission controller, replaced PodSecurityPolicy, which was removed in Kubernetes 1.25. There are three levels: privileged (no restrictions), baseline (blocks known privilege escalations), and restricted (current pod hardening best practice).

# Enforce restricted PSS on all production namespaces
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Enforce = reject pods that violate; Warn = allow but warn; Audit = log only
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.29
    pod-security.kubernetes.io/warn: restricted
  pod-security.kubernetes.io/audit: restricted
---
# Compliant deployment under restricted PSS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
spec:
  template:
    spec:
      # No service account token automount unless needed
      automountServiceAccountToken: false

      # No privilege escalation at pod level
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault   # Apply default seccomp profile

      containers:
        - name: api
          image: myregistry/api:v1.4.2   # pin a tag or digest, never :latest
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true   # Can't write to / — must use volumes
            runAsNonRoot: true
            capabilities:
              drop: ["ALL"]              # Drop all Linux capabilities
              # Only add back what's needed (usually nothing for web apps)
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          volumeMounts:
            - name: tmp
              mountPath: /tmp            # Writable tmp since root is read-only

      volumes:
        - name: tmp
          emptyDir: {}
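
To see the enforcement side, here is a minimal pod the restricted profile rejects at admission (names are illustrative). It runs as root, allows privilege escalation, and omits the seccomp profile and capability drop the profile requires:

```yaml
# Rejected by pod-security.kubernetes.io/enforce: restricted
apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
  namespace: production
spec:
  containers:
    - name: app
      image: myregistry/api:v1.4.2
      securityContext:
        runAsUser: 0                    # root; violates runAsNonRoot
        allowPrivilegeEscalation: true  # restricted requires this to be false
```

With the enforce label set, kubectl apply fails immediately; with only warn/audit labels, the pod is admitted but flagged.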

Layer 3: OPA/Gatekeeper — Policy as Code

OPA (Open Policy Agent) with Gatekeeper enforces custom policies as Kubernetes admission webhooks. Every resource creation/update is validated against your policies.

Install Gatekeeper

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.16.0/deploy/gatekeeper.yaml

Policy: Require Resource Limits

# constraints/require-resource-limits.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredResources
metadata:
  name: require-resource-limits
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet", "DaemonSet"]
    excludedNamespaces: ["kube-system", "gatekeeper-system"]
  parameters:
    limits: ["cpu", "memory"]
    requests: ["cpu", "memory"]

# constraint-templates/k8s-required-resources.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredresources
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredResources
      validation:
        openAPIV3Schema:
          type: object
          properties:
            limits:
              type: array
              items: { type: string }
            requests:
              type: array
              items: { type: string }
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredresources

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          resource := input.parameters.limits[_]
          not container.resources.limits[resource]
          msg := sprintf("Container '%v' missing resource limit: %v", [container.name, resource])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          resource := input.parameters.requests[_]
          not container.resources.requests[resource]
          msg := sprintf("Container '%v' missing resource request: %v", [container.name, resource])
        }
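
Gatekeeper ships a CLI, gator, that evaluates templates and constraints against manifests offline, so policies can run in CI before anything touches the cluster. A sketch, assuming gator is installed, the files are saved under the paths in the comments above, and k8s/deployment.yaml is a manifest of yours (hypothetical path):

```shell
# Evaluate the deployment against the template + constraint; prints violations
gator test \
  -f constraint-templates/k8s-required-resources.yaml \
  -f constraints/require-resource-limits.yaml \
  -f k8s/deployment.yaml
```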

Policy: No Latest Tag

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8snolatesttag
spec:
  crd:
    spec:
      names:
        kind: K8sNoLatestTag
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8snolatesttag

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          endswith(container.image, ":latest")
          msg := sprintf("Container '%v' uses ':latest' tag; pin a specific tag or digest", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          not contains(container.image, ":")
          msg := sprintf("Container '%v' has no image tag; pin a specific tag or digest", [container.name])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNoLatestTag
metadata:
  name: no-latest-tag
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet"]
    excludedNamespaces: ["kube-system"]
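
The same check is cheap to run in CI before anything reaches admission control. A rough grep-based sketch (not Gatekeeper itself; the manifest content is illustrative):

```shell
# Write a sample manifest and flag any image pinned to :latest
cat > /tmp/deploy.yaml <<'EOF'
containers:
  - name: api
    image: myregistry/api:latest
EOF

# In a real pipeline this grep would fail the build (exit 1) on a match
if grep -Eq 'image:[^@]*:latest[[:space:]]*$' /tmp/deploy.yaml; then
  echo "policy violation: :latest image found"
fi
```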

Policy: Require Non-Root User

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8snonrootcontainer
spec:
  crd:
    spec:
      names:
        kind: K8sNonRootContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8snonrootcontainer

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          not container.securityContext.runAsNonRoot
          not input.review.object.spec.template.spec.securityContext.runAsNonRoot
          msg := sprintf("Container '%v' must set runAsNonRoot: true", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          container.securityContext.runAsUser == 0
          msg := sprintf("Container '%v' must not run as root (runAsUser: 0)", [container.name])
        }


Layer 4: Network Policies

By default, all pods can communicate with all other pods. NetworkPolicy restricts traffic to explicitly allowed connections. Note that enforcement requires a CNI plugin that implements NetworkPolicy (Calico, Cilium, and most managed-cluster defaults do); on a CNI without support, policies are silently ignored.

# Default deny all ingress and egress in production namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}      # Applies to all pods
  policyTypes:
    - Ingress
    - Egress
---
# Allow orders-service: receive from gateway, call postgres + kafka
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-service-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: orders-service
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
      ports:
        - port: 8080
  egress:
    # PostgreSQL
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - port: 5432
    # Kafka
    - to:
        - podSelector:
            matchLabels:
              app: kafka
      ports:
        - port: 9092
    # DNS resolution (required for all pods)
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP

Layer 5: Supply Chain Security with Sigstore

# Sign container image after build (in CI)
# Install cosign
curl -Lo cosign https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64
chmod +x cosign

# Sign the image with keyless signing (uses OIDC from GitHub Actions)
cosign sign \
  --yes \
  ghcr.io/myorg/myapp@${IMAGE_DIGEST}

# In the cluster: verify signature before pulling (via Policy Controller)
# Or verify manually:
cosign verify \
  --certificate-identity="https://github.com/myorg/myapp/.github/workflows/build.yml@refs/heads/main" \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
  ghcr.io/myorg/myapp:latest

# Sigstore Policy Controller: reject unsigned images
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-signed-images
spec:
  images:
    - glob: "ghcr.io/myorg/**"
  authorities:
    - keyless:
        url: https://fulcio.sigstore.dev
        identities:
          - issuer: https://token.actions.githubusercontent.com
            subjectRegExp: "https://github.com/myorg/.*/.github/workflows/build.yml@refs/heads/main"
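
Keyless signing in GitHub Actions requires the workflow to be able to mint an OIDC token. A sketch of the relevant workflow fragment (step ids and the digest output are illustrative, assuming a prior build/push step with id "push"):

```yaml
# Workflow fragment: permissions and signing step for keyless cosign
permissions:
  contents: read
  packages: write
  id-token: write   # lets cosign obtain a short-lived OIDC identity

steps:
  - uses: sigstore/cosign-installer@v3
  - name: Sign pushed image by digest
    run: cosign sign --yes "ghcr.io/myorg/myapp@${IMAGE_DIGEST}"
    env:
      IMAGE_DIGEST: ${{ steps.push.outputs.digest }}
```

Signing by digest rather than tag ensures the signature covers the exact bytes that were pushed.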

Security Scanning in CI: Full Pipeline

# .github/workflows/security-full.yml
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # SAST: static analysis
      - name: Run Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/owasp-top-ten
            p/nodejs
            p/typescript

      # Dependency vulnerabilities
      - name: npm audit
        run: npm audit --audit-level=high

      # Container scan
      - name: Build and scan image
        run: |
          docker build -t app:scan .
          docker run --rm \
            -v /var/run/docker.sock:/var/run/docker.sock \
            aquasec/trivy image --exit-code 1 --severity CRITICAL app:scan

      # OPA policy test: a server-side dry-run sends manifests through the
      # admission webhooks, so Gatekeeper constraints are evaluated without
      # deploying anything (requires kubeconfig for a cluster running Gatekeeper)
      - name: Validate k8s manifests against policies
        run: |
          kubectl apply --dry-run=server -f k8s/

      # Check for hardcoded secrets
      - name: Secret scan
        uses: trufflesecurity/trufflehog@main
        with:
          path: ./
          base: ${{ github.event.repository.default_branch }}

Security Cost Estimates

| Security control | Setup time | Monthly cost | Risk reduction |
| --- | --- | --- | --- |
| Trivy image scanning in CI | 1 day | $0 (OSS) | High |
| Pod Security Standards | 1–2 days | $0 | High |
| OPA/Gatekeeper policies | 1–2 weeks | $0 (OSS) | High |
| Network policies | 1 week | $0 | Medium–High |
| Sigstore image signing | 2–3 days | $0 | Medium |
| Falco runtime security | 1 week | $0–$200 (infra) | Medium |
| Full security audit | 2–4 weeks | $15K–$40K (one-time) | Very High |

Working With Viprasol

Our platform engineering team implements cloud-native security controls — from CI image scanning through OPA/Gatekeeper policy enforcement and network segmentation.

What we deliver:

  • Dockerfile hardening (distroless, non-root, read-only root filesystem)
  • Trivy integration in CI with SARIF reporting
  • Pod Security Standards enforcement per namespace
  • OPA/Gatekeeper constraint library for your policies
  • Kubernetes NetworkPolicy for service-to-service segmentation
  • Sigstore image signing for supply chain security


