Engineering Metrics: DORA, SPACE, and Measuring Developer Productivity

Measure software engineering productivity correctly: DORA metrics (deployment frequency, lead time, MTTR, change failure rate), the SPACE framework, and avoiding vanity metrics.

Viprasol Tech Team
April 27, 2026
11 min read

Engineering teams are hard to measure. Unlike sales (revenue), marketing (MQLs), or support (ticket resolution), software engineering output is non-linear, depends on problem complexity, and has long feedback loops. A team that spends a month paying down technical debt may appear unproductive by naive metrics while actually becoming significantly faster.

The frameworks covered here — DORA and SPACE — are the most research-validated approaches to measuring engineering teams without creating perverse incentives.


The Problem with Naive Metrics

Metrics that seem reasonable but create bad behavior:

| Metric | What goes wrong |
|---|---|
| Lines of code | Incentivizes verbose solutions, discourages refactoring |
| PRs merged per developer | Incentivizes tiny PRs, discourages collaborative design |
| Tickets closed | Incentivizes gaming the ticket system |
| Sprint velocity | Incentivizes inflating estimates, punishes honest assessment |
| Hours worked | Incentivizes presence over output, discourages efficiency |
| Bug count | Incentivizes underreporting, discourages honest QA |

The common failure mode: measure something visible and immediately controllable, miss the outcome you actually care about.


DORA Metrics

The DORA (DevOps Research and Assessment) team at Google studied thousands of engineering organizations and identified four metrics that predict high-performing teams:

1. Deployment Frequency

How often does your team deploy to production?

| Performance level | Frequency |
|---|---|
| Elite | Multiple times per day |
| High | Once per day to once per week |
| Medium | Once per week to once per month |
| Low | Once per month or less |

High deployment frequency is a leading indicator of team health — it means small batches, low risk per deployment, and fast feedback loops.

2. Lead Time for Changes

Time from code committed to running in production.

| Performance level | Lead time |
|---|---|
| Elite | < 1 hour |
| High | 1 day to 1 week |
| Medium | 1 week to 1 month |
| Low | 1–6 months |

Long lead times indicate review bottlenecks, manual approval gates, infrequent deployments, or large batch sizes.

3. Change Failure Rate

Percentage of deployments that cause production incidents requiring rollback or hotfix.

| Performance level | Failure rate |
|---|---|
| Elite | 0–5% |
| High | 5–10% |
| Medium | 10–15% |
| Low | > 15% |

High change failure rate indicates insufficient testing, missing feature flags, or deployment process gaps.
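
The band classification above reduces to a few comparisons. A minimal sketch; the function name and return shape are illustrative, chosen to mirror the other snippets in this post:

```python
def change_failure_rate(total_deploys: int, failed_deploys: int) -> dict:
    """Classify change failure rate against the DORA bands above.

    A 'failed' deploy is one that required a rollback or hotfix.
    """
    if total_deploys == 0:
        return {"error": "No deployments in window"}
    rate = failed_deploys / total_deploys * 100
    if rate <= 5:
        level = "elite"
    elif rate <= 10:
        level = "high"
    elif rate <= 15:
        level = "medium"
    else:
        level = "low"
    return {"percentage": round(rate, 1), "level": level}
```

For example, 2 failed deploys out of 40 is a 5% change failure rate, which lands in the elite band.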

4. Mean Time to Recovery (MTTR)

How long does it take to recover from a production incident?

| Performance level | MTTR |
|---|---|
| Elite | < 1 hour |
| High | < 1 day |
| Medium | 1 day to 1 week |
| Low | > 1 week |

MTTR measures incident response capability: monitoring quality, runbook completeness, and team alertness.
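
If your incident tracker exposes created and resolved timestamps (PagerDuty, Opsgenie, and similar tools do), MTTR reduces to a median over resolved incidents. A minimal sketch, assuming a list of incident dicts with `status`, `created_at`, and `last_status_change_at` fields in the PagerDuty style; the field names are an assumption you should check against your tracker's API:

```python
import statistics
from datetime import datetime


def _parse(ts: str) -> datetime:
    # Timestamps assumed to look like "2026-04-01T12:00:00Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def mttr_hours(incidents: list[dict]) -> dict:
    """Median time-to-recovery for resolved incidents, classified
    against the DORA bands above."""
    durations = [
        (_parse(i["last_status_change_at"]) - _parse(i["created_at"])).total_seconds() / 3600
        for i in incidents
        if i.get("status") == "resolved"
    ]
    if not durations:
        return {"error": "No resolved incidents"}
    median = statistics.median(durations)
    if median < 1:
        level = "elite"
    elif median < 24:
        level = "high"
    elif median < 168:  # 1 week
        level = "medium"
    else:
        level = "low"
    return {"median_hours": round(median, 2), "level": level}
```

Pair this with a query to your tracker's incidents endpoint, filtered to resolved incidents in the measurement window.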



Measuring DORA with Your Existing Tools

```python
# Calculate DORA metrics from the GitHub API
import statistics
from datetime import datetime, timedelta, timezone

import requests

GITHUB_TOKEN = "ghp_..."
ORG = "yourorg"
REPO = "your-app"

headers = {"Authorization": f"Bearer {GITHUB_TOKEN}"}

def get_deployment_frequency(days: int = 30) -> dict:
    """Calculate production deployments per day for the last N days."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")

    response = requests.get(
        f"https://api.github.com/repos/{ORG}/{REPO}/deployments",
        params={"environment": "production", "per_page": 100},
        headers=headers,
    )
    deployments = response.json()

    # ISO-8601 UTC timestamps compare correctly as strings
    production_deploys = [d for d in deployments if d["created_at"] >= since]

    per_day = len(production_deploys) / days
    return {
        "total": len(production_deploys),
        "per_day": per_day,
        "level": classify_deploy_frequency(per_day),
    }

def classify_deploy_frequency(per_day: float) -> str:
    if per_day >= 1.0:
        return "elite"
    elif per_day >= 1 / 7:
        return "high"
    elif per_day >= 1 / 30:
        return "medium"
    else:
        return "low"

def get_lead_time(days: int = 30) -> dict:
    """Approximate lead time as PR opened -> merged.

    True DORA lead time runs from first commit to production deploy;
    PR cycle time is a practical proxy when commits and deploys are
    hard to correlate.
    """
    since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")

    # Recently closed PRs targeting main (single page; paginate for more history)
    response = requests.get(
        f"https://api.github.com/repos/{ORG}/{REPO}/pulls",
        params={"state": "closed", "per_page": 50, "base": "main"},
        headers=headers,
    )
    prs = [pr for pr in response.json() if pr.get("merged_at")]

    lead_times_hours = []
    for pr in prs:
        if pr["created_at"] < since:
            continue
        created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
        lead_times_hours.append((merged - created).total_seconds() / 3600)

    if not lead_times_hours:
        return {"error": "No data"}

    median = statistics.median(lead_times_hours)
    return {
        "median_hours": median,
        "p90_hours": sorted(lead_times_hours)[int(len(lead_times_hours) * 0.9)],
        "level": classify_lead_time(median),
    }

def classify_lead_time(hours: float) -> str:
    if hours < 1:
        return "elite"
    elif hours < 168:  # 1 week
        return "high"
    elif hours < 720:  # ~1 month
        return "medium"
    else:
        return "low"
```

The SPACE Framework

DORA measures delivery performance. SPACE (GitHub research, 2021) provides a broader view of developer productivity:

| Dimension | What it measures |
|---|---|
| Satisfaction & Wellbeing | Developer job satisfaction, burnout risk |
| Performance | Code quality, reliability, customer outcomes |
| Activity | Commits, PRs, code reviews (use cautiously) |
| Communication & Collaboration | PR review time, doc quality, cross-team work |
| Efficiency & Flow | Interruptions, context switching, meeting load |

Key SPACE insight: No single metric captures productivity. Use a balanced set across dimensions.

Practical SPACE measurements:


Monthly Engineering Health Check

Satisfaction (survey — anonymous)

  • "I can do my best work most days" (1–5 scale)
  • "My work is sustainable long-term" (1–5 scale)
  • "I have the tools and context I need" (1–5 scale)

Performance (automated)

  • Customer-reported bugs per release
  • P99 API latency (SLA compliance)
  • Test coverage % (trending up or down)

Activity (automated — use as context, not evaluation)

  • Deploy frequency (DORA)
  • PR cycle time (DORA lead time proxy)
  • Code review turnaround time

Communication (partially automated)

  • Average PR review wait time (< 4h target)
  • PR description quality (human assessment, quarterly)
  • Architecture decision records written

Efficiency (survey + calendar analysis)

  • Meeting hours per week (target < 10h for ICs)
  • Estimated uninterrupted focus blocks per day
  • On-call alert noise (false positive %)
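
One way to roll a check like this up is to normalize each dimension to a score where higher is better, then reduce scores to a traffic-light summary. A sketch with illustrative names and thresholds (both are assumptions, not recommendations):

```python
def health_check(
    scores: dict[str, float],
    thresholds: dict[str, tuple[float, float]],
) -> dict[str, str]:
    """Roll per-dimension scores into green/yellow/red statuses.

    Each threshold is (green_min, yellow_min); scores are assumed
    to be normalized so that higher is better. Anything below
    yellow_min is red.
    """
    statuses = {}
    for dimension, value in scores.items():
        green_min, yellow_min = thresholds[dimension]
        if value >= green_min:
            statuses[dimension] = "green"
        elif value >= yellow_min:
            statuses[dimension] = "yellow"
        else:
            statuses[dimension] = "red"
    return statuses
```

A red dimension is a prompt for a conversation with the team, not a score to be managed upward.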

Building an Engineering Dashboard

```typescript
// engineering-dashboard/src/metrics.ts
// Aggregates DORA metrics from GitHub, PagerDuty, and Datadog

interface DORAMetrics {
  deployFrequency: {
    perDay: number;
    level: 'elite' | 'high' | 'medium' | 'low';
    trend: 'up' | 'down' | 'stable';
  };
  leadTime: {
    medianHours: number;
    level: 'elite' | 'high' | 'medium' | 'low';
  };
  changeFailureRate: {
    percentage: number;
    level: 'elite' | 'high' | 'medium' | 'low';
  };
  mttr: {
    medianHours: number;
    level: 'elite' | 'high' | 'medium' | 'low';
  };
}

// Aggregate and store metrics daily
// Serve via API to Grafana or internal dashboard

Existing tools that calculate DORA:


What to Actually Do With Metrics

DORA metrics diagnose organizational health, not individual performance. Use them to:

  1. Identify bottlenecks: Long lead time → look at PR review process, CI speed, approval gates
  2. Track improvement: Did the new CI pipeline improve lead time? Did on-call rotation reduce MTTR?
  3. Set team goals: "Improve deployment frequency from weekly to daily by Q3"
  4. Compare to industry: DORA publishes annual benchmarks — see where your team sits
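
Point 2 (tracking improvement) can be as simple as comparing medians before and after a process change. The sketch below is illustrative and skips the significance testing you would want for small samples:

```python
import statistics


def compare_lead_time(before_hours: list[float], after_hours: list[float]) -> dict:
    """Compare median lead time before and after a process change.

    Inputs are per-PR lead times in hours from each period.
    """
    before = statistics.median(before_hours)
    after = statistics.median(after_hours)
    return {
        "before_median_h": before,
        "after_median_h": after,
        "change_pct": round((after - before) / before * 100, 1),
        "improved": after < before,
    }
```

For example, medians dropping from 20h to 10h after a CI overhaul would report a −50% change, which is the kind of concrete before/after evidence these metrics are good for.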

Never use DORA metrics for:

  • Individual performance reviews
  • Team comparison/ranking
  • Executive KPIs disconnected from context

Under organizational pressure, metrics become games: teams optimize the number instead of the outcome (Goodhart's law).


Working With Viprasol

We help engineering teams set up metrics infrastructure, identify bottlenecks in their delivery pipeline, and implement improvements — from CI speed to deployment automation to on-call processes. Better metrics lead to better decisions.

Talk to our engineering team about improving your delivery metrics.


About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
