Backstage in Production: Software Catalog, Golden Path Templates, and TechDocs
Backstage, open-sourced by Spotify in 2020 and now a CNCF incubating project, has become the de facto standard for Internal Developer Portals (IDPs). At its core, it solves the "who owns what?" problem that plagues every engineering organization beyond ~20 people.
But Backstage's catalog is just the foundation. The real value comes from Scaffolder (golden path templates that provision new services in minutes), TechDocs (docs-as-code living next to the service they document), and the plugin ecosystem that surfaces information from GitHub, PagerDuty, SonarQube, and 100+ other tools in one place.
This post covers production Backstage deployment — not a local demo, but a real installation on Kubernetes with PostgreSQL, GitHub integration, and custom templates.
When Backstage Makes Sense
Backstage has real setup cost (~2–4 weeks for a production installation). It pays off when:
| Condition | Backstage Useful? |
|---|---|
| >20 engineers | ✅ Catalog prevents "who owns X?" |
| >10 services / repos | ✅ Discovery becomes hard without it |
| New services created frequently | ✅ Templates pay off fast |
| Multiple platforms (AWS + GCP + k8s) | ✅ Unified view |
| <10 engineers, <5 services | ❌ Overhead not worth it |
| Single monolith | ❌ Minimal catalog benefit |
Production Deployment on Kubernetes
Helm Chart Setup
```yaml
# values.yaml for the Backstage Helm chart
backstage:
  image:
    registry: ghcr.io
    repository: your-org/backstage  # your custom-built image
    tag: latest
    pullPolicy: Always
  extraEnvVars:
    - name: NODE_ENV
      value: production
    - name: LOG_LEVEL
      value: info
  extraEnvVarsSecret: backstage-secrets  # contains GITHUB_TOKEN, PAGERDUTY_TOKEN, etc.
  appConfig:
    app:
      baseUrl: https://backstage.internal.myapp.com
    backend:
      baseUrl: https://backstage.internal.myapp.com
      database:
        client: pg
        connection:
          host: ${POSTGRES_HOST}
          port: 5432
          user: backstage
          password: ${POSTGRES_PASSWORD}
          database: backstage
          ssl:
            require: true
    integrations:
      github:
        - host: github.com
          apps:
            - appId: ${GITHUB_APP_ID}
              clientId: ${GITHUB_CLIENT_ID}
              clientSecret: ${GITHUB_CLIENT_SECRET}
              privateKey: ${GITHUB_PRIVATE_KEY}
              webhookSecret: ${GITHUB_WEBHOOK_SECRET}

postgresql:
  enabled: false  # use external managed PostgreSQL

ingress:
  enabled: true
  className: kong
  annotations:
    konghq.com/plugins: "cf-access-verify"  # Cloudflare Access protection
  host: backstage.internal.myapp.com
  tls:
    - hosts: [backstage.internal.myapp.com]
      secretName: backstage-tls
```
Custom Backstage Docker Image
Backstage requires a custom Docker image that bundles your plugins:
```dockerfile
# Dockerfile (in backstage project root)
FROM node:20-bookworm-slim AS build
WORKDIR /app
COPY --chown=node:node . .
RUN yarn install --frozen-lockfile

# Build frontend
RUN yarn workspace app build

# Build backend bundle
RUN yarn workspace backend build
RUN yarn workspace backend bundle \
    --pack-integrity \
    --minify

FROM node:20-bookworm-slim
# Native build tooling needed by node-gyp dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 \
    g++ \
    make \
    && rm -rf /var/lib/apt/lists/*
# Switch to the unprivileged node user before WORKDIR so /app is owned by node
USER node
WORKDIR /app
COPY --chown=node:node --from=build /app/packages/backend/dist/bundle.tar.gz bundle.tar.gz
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz
COPY --chown=node:node app-config.yaml app-config.production.yaml ./
CMD ["node", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]
```
Software Catalog: The Foundation
catalog-info.yaml
Every service, API, library, and website should have a catalog-info.yaml in its repository root:
```yaml
# catalog-info.yaml (in orders-service repo)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: orders-service
  title: Orders Service
  description: |
    Manages order lifecycle: creation, payment, fulfillment, and cancellation.
    Primary owner: payments-team. On-call: via PagerDuty.
  annotations:
    # GitHub integration: shows PRs, branches, releases
    github.com/project-slug: myorg/orders-service
    # PagerDuty integration: shows on-call schedule, recent incidents
    pagerduty.com/service-id: PABC123
    # ArgoCD integration: shows deployment status
    argocd/app-name: orders-service-prod
    # Grafana dashboard
    grafana/dashboard-url: https://grafana.internal/d/orders-overview
    # SonarQube code quality
    sonarqube.org/project-key: myorg_orders-service
    # Custom annotations for your own tooling
    myorg.com/runbook-url: https://wiki.internal/runbooks/orders
    myorg.com/slo-target: "99.9"
    myorg.com/team-slack: "#payments-eng"
  links:
    - url: https://orders.internal.myapp.com
      title: Production API
      icon: web
    - url: https://github.com/myorg/orders-service/blob/main/docs/architecture.md
      title: Architecture Doc
      icon: docs
  tags:
    - nodejs
    - postgresql
    - kafka
    - typescript
spec:
  type: service
  lifecycle: production
  owner: group:payments-team
  system: ecommerce-platform
  # Dependencies (render in the Backstage dependency graph)
  dependsOn:
    - component:postgres-primary
    - component:kafka-cluster
    - resource:s3-orders-bucket
  # APIs this service exposes
  providesApis:
    - orders-api
  # APIs this service consumes
  consumesApis:
    - payments-api
    - inventory-api
```
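Once entities like this are in the catalog, the "who owns X?" question becomes a simple index over `spec.owner`. A minimal sketch in TypeScript, assuming entities already fetched from the catalog; the trimmed `CatalogEntity` shape and the `buildOwnerIndex` name are illustrative, not part of the Backstage API:

```typescript
// Trimmed entity shape: only the fields this sketch reads.
interface CatalogEntity {
  kind: string;
  metadata: { name: string; namespace?: string };
  spec?: { owner?: string };
}

// Build a lookup from component name to its owning team's entity ref.
function buildOwnerIndex(entities: CatalogEntity[]): Map<string, string> {
  const index = new Map<string, string>();
  for (const e of entities) {
    if (e.kind === "Component" && e.spec?.owner) {
      index.set(e.metadata.name, e.spec.owner);
    }
  }
  return index;
}
```

In practice the entity list comes from the Backstage backend's catalog REST API rather than being constructed by hand.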
Discovery: Auto-Register All Repos
Instead of manually adding catalog-info.yaml to every repo, configure GitHub discovery:
```yaml
# app-config.yaml
catalog:
  providers:
    github:
      # Scan all repos in the org for catalog-info.yaml
      myOrg:
        organization: myorg
        catalogPath: '/catalog-info.yaml'
        filters:
          branch: 'main'
          repository: '.*'  # all repos (use a regex to filter)
        schedule:
          frequency:
            minutes: 30
          timeout:
            minutes: 3
```
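Discovered entities are then queryable through the backend's catalog REST API (`GET /api/catalog/entities`, where the `filter` parameter takes comma-separated `key=value` pairs that are ANDed together). A small helper for building such query URLs; the `catalogQueryUrl` name is illustrative:

```typescript
// Build a catalog API query URL, e.g. "all components owned by payments-team".
function catalogQueryUrl(baseUrl: string, filter: Record<string, string>): string {
  const pairs = Object.entries(filter)
    .map(([k, v]) => `${k}=${v}`)
    .join(",");
  return `${baseUrl}/api/catalog/entities?filter=${encodeURIComponent(pairs)}`;
}
```

For example, `catalogQueryUrl("https://backstage.internal.myapp.com", { kind: "component", "spec.owner": "group:payments-team" })` yields a URL that returns every component the payments team owns.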
Scaffolder: Golden Path Templates
Scaffolder templates let developers create a new service with all best-practice infrastructure pre-configured — no searching for the right Dockerfile, no copy-pasting CI pipelines.
```yaml
# templates/nodejs-service-template.yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: nodejs-service
  title: Node.js Microservice
  description: Creates a production-ready Node.js service with Fastify, TypeScript, PostgreSQL, and CI/CD
  tags:
    - nodejs
    - typescript
    - fastify
    - recommended
spec:
  owner: group:platform-team
  type: service
  parameters:
    - title: Service Information
      required: [name, description, owner]
      properties:
        name:
          title: Service Name
          type: string
          description: Kebab-case name (e.g., "inventory-service")
          pattern: '^[a-z][a-z0-9-]*[a-z0-9]$'
          ui:autofocus: true
        description:
          title: Description
          type: string
          description: What does this service do?
        owner:
          title: Owner Team
          type: string
          description: Team responsible for this service
          ui:field: OwnerPicker
          ui:options:
            allowedKinds: [Group]
    - title: Database
      properties:
        needsDatabase:
          title: Needs PostgreSQL?
          type: boolean
          default: true
          description: Create a database migration setup and connection pool
    - title: Repository
      required: [repoUrl]
      properties:
        repoUrl:
          title: Repository Location
          type: string
          ui:field: RepoUrlPicker
          ui:options:
            allowedHosts: [github.com]
            allowedOwners: [myorg]
  steps:
    - id: fetch-template
      name: Fetch Template
      action: fetch:template
      input:
        url: ./skeleton  # template skeleton directory
        values:
          name: ${{ parameters.name }}
          description: ${{ parameters.description }}
          owner: ${{ parameters.owner }}
          needsDatabase: ${{ parameters.needsDatabase }}
    - id: publish
      name: Publish to GitHub
      action: publish:github
      input:
        allowedHosts: [github.com]
        description: ${{ parameters.description }}
        repoUrl: ${{ parameters.repoUrl }}
        defaultBranch: main
        requireCodeOwnerReviews: true
        dismissStaleReviews: true
        requiredStatusCheckContexts:
          - 'CI / test'
          - 'CI / build'
    - id: create-argocd-app
      name: Create ArgoCD Application
      action: argocd:create-resources
      input:
        appName: ${{ parameters.name }}-prod
        argoInstance: production
        namespace: default
        repoUrl: ${{ steps['publish'].output.remoteUrl }}
        path: k8s/overlays/production
    - id: register-catalog
      name: Register in Catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml
    - id: create-pagerduty-service
      name: Create PagerDuty Service
      action: pagerduty:service:create
      input:
        name: ${{ parameters.name }}
        description: ${{ parameters.description }}
        # Scaffolder expressions use Nunjucks syntax, not JavaScript
        escalationPolicyId: ${{ 'PD_PAYMENTS' if parameters.owner == 'payments-team' else 'PD_DEFAULT' }}
  output:
    links:
      - title: Repository
        url: ${{ steps['publish'].output.remoteUrl }}
      - title: Catalog Entry
        icon: catalog
        entityRef: ${{ steps['register-catalog'].output.entityRef }}
      - title: ArgoCD App
        url: https://argocd.internal/applications/${{ parameters.name }}-prod
```
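The `pattern` constraint on the name parameter is worth enforcing outside the form too (for example in a pre-commit check or a CLI wrapper), so bad names fail fast before a scaffolder run. A sketch using the same regex as the template above; the function name is illustrative:

```typescript
// Same kebab-case rule as the template's `pattern` field:
// starts with a lowercase letter, then lowercase alphanumerics and hyphens,
// and must end with an alphanumeric (so no trailing hyphen, minimum 2 chars).
const SERVICE_NAME = /^[a-z][a-z0-9-]*[a-z0-9]$/;

function isValidServiceName(name: string): boolean {
  return SERVICE_NAME.test(name);
}
```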
Template Skeleton
```text
templates/nodejs-service-template/skeleton/
  .github/
    workflows/
      ci.yml             ← pre-configured CI with test/build/security scan
      release.yml        ← semantic versioning + container build
    CODEOWNERS           ← auto-set to ${{ values.owner }}
  k8s/
    base/
      deployment.yaml    ← Kubernetes Deployment with probes + resource limits
      service.yaml
      kustomization.yaml
    overlays/
      staging/
      production/
  src/
    index.ts             ← Fastify app bootstrap
    routes/
      health.ts          ← /health and /ready endpoints
  catalog-info.yaml      ← pre-filled with ${{ values.name }}, ${{ values.owner }}
  Dockerfile
  package.json
  tsconfig.json
  .env.example
  README.md
```
TechDocs: Docs That Stay Up to Date
TechDocs renders MkDocs documentation from your repository into Backstage — so docs live next to the code they document.
```yaml
# mkdocs.yml (in each service repo)
site_name: Orders Service
site_description: Orders Service documentation
nav:
  - Home: index.md
  - Architecture: architecture.md
  - API Reference: api.md
  - Runbook: runbook.md
  - Incidents: incidents.md
plugins:
  - techdocs-core  # Backstage TechDocs plugin
```
```yaml
# catalog-info.yaml: enable TechDocs
metadata:
  annotations:
    backstage.io/techdocs-ref: dir:.  # docs in repo root
# Now visible at: backstage.internal/docs/default/component/orders-service
```
TechDocs Build Configuration (S3-backed)
```yaml
# app-config.production.yaml
techdocs:
  builder: 'external'  # CI builds docs, not Backstage
  generator:
    runIn: local
  publisher:
    type: awsS3
    awsS3:
      bucketName: myapp-techdocs
      region: us-east-1
      # Uses IAM roles for authentication (no static credentials)
```
```yaml
# .github/workflows/techdocs.yml
name: Publish TechDocs
on:
  push:
    branches: [main]
    paths: ['docs/**', 'mkdocs.yml', 'catalog-info.yaml']
permissions:
  id-token: write  # required for OIDC auth to AWS
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install mkdocs-techdocs-core
      # Assume the publishing role via GitHub's OIDC provider
      # (static AWS keys are not needed on the runner)
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.TECHDOCS_ROLE_ARN }}
          aws-region: us-east-1
      - name: Build and Publish
        run: |
          npx @techdocs/cli generate --no-docker
          npx @techdocs/cli publish \
            --publisher-type awsS3 \
            --storage-name myapp-techdocs \
            --entity default/Component/orders-service
```
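The `--entity` flag takes a `<namespace>/<kind>/<name>` triplet, which also determines where the generated site lands in storage. If you template this workflow across many repos, deriving the triplet from the entity is straightforward; a sketch, with an illustrative function name:

```typescript
// Derive the <namespace>/<kind>/<name> triplet that techdocs-cli expects,
// defaulting the namespace to "default" as the catalog does.
function techdocsEntityTriplet(entity: {
  kind: string;
  metadata: { name: string; namespace?: string };
}): string {
  const ns = entity.metadata.namespace ?? "default";
  return `${ns}/${entity.kind}/${entity.metadata.name}`;
}
```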
Key Plugins to Install Day One
| Plugin | What It Adds | Install Complexity |
|---|---|---|
| GitHub | PRs, commits, actions, branch status | Low (built-in) |
| PagerDuty | On-call schedule, incident list per service | Low |
| ArgoCD | Deployment status, sync state | Low |
| Grafana | Embed dashboards in service page | Medium |
| SonarQube | Code quality metrics per service | Medium |
| Cost Insights | Cloud cost per team (AWS/GCP) | High |
| Kubernetes | Pod status, CPU/memory per service | Medium |
| Dynatrace / Datadog | APM metrics in catalog | Medium |
Adoption Metrics
Track Backstage adoption to prove platform team value:
```typescript
// Measure adoption over time
const metrics = {
  catalogCoverage: catalogComponents / totalServices,              // % of services with a catalog entry
  templateAdoption: servicesCreatedViaTemplate / totalNewServices, // % of new services from a template
  techDocsPages: docsPageViews,                                    // documentation is being read
  searchQueries: backstageSearches / engineerCount,                // devs finding answers in Backstage
};
```
Target: 80%+ catalog coverage within 3 months, 90%+ of new services created via template within 6 months.
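Those targets can be checked mechanically from the same counts, e.g. in a weekly report job. A sketch, assuming you already collect the raw counts somewhere (the `adoptionReport` name and input shape are illustrative):

```typescript
interface AdoptionCounts {
  catalogComponents: number;
  totalServices: number;
  servicesCreatedViaTemplate: number;
  totalNewServices: number;
}

// Compute the two headline ratios and compare against the stated targets.
function adoptionReport(c: AdoptionCounts) {
  const catalogCoverage = c.catalogComponents / c.totalServices;
  const templateAdoption = c.servicesCreatedViaTemplate / c.totalNewServices;
  return {
    catalogCoverage,
    templateAdoption,
    meetsCoverageTarget: catalogCoverage >= 0.8,  // 80%+ within 3 months
    meetsTemplateTarget: templateAdoption >= 0.9, // 90%+ within 6 months
  };
}
```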
Working With Viprasol
Our platform engineering team deploys and customizes Backstage for engineering organizations — from initial Kubernetes installation through custom Scaffolder templates and plugin integration.
What we deliver:
- Production Backstage deployment (Kubernetes + PostgreSQL + S3 TechDocs)
- Software catalog setup with GitHub auto-discovery
- 3–5 golden path Scaffolder templates (Node.js, Python, React, data pipelines)
- TechDocs pipeline (GitHub Actions → S3 → Backstage)
- Plugin integration: PagerDuty, ArgoCD, Grafana, SonarQube
→ Talk about your internal developer portal → Platform engineering and DevOps services
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.