Workflows · Article · December 12, 2025 · 9 min read

DORA Metrics Miss Maintenance: What to Track Instead

DORA metrics measure deployment performance but ignore maintenance overhead. Learn what additional metrics reveal about true engineering efficiency.

DORA metrics have become the gold standard for measuring engineering team performance: deployment frequency, lead time for changes, change failure rate, and time to restore service. These four metrics, backed by years of research, correlate strongly with organizational performance.

But DORA metrics have a blind spot: they measure how well you ship, not how much of your capacity goes to shipping versus maintaining.

A team could post elite DORA scores while spending 40% of its capacity on maintenance. It deploys frequently and recovers quickly, yet nearly half its engineering time never reaches feature work.

DORA tells you how efficient your deployment pipeline is. It doesn't tell you how efficient your engineering organization is.

What DORA Measures

The four DORA metrics:

Deployment Frequency

How often does your organization deploy code to production?

  • Elite: On-demand (multiple deploys per day)
  • High: Between once per day and once per week
  • Medium: Between once per week and once per month
  • Low: Between once per month and once every six months

Lead Time for Changes

How long does it take to go from code committed to code successfully running in production?

  • Elite: Less than one hour
  • High: Between one day and one week
  • Medium: Between one week and one month
  • Low: Between one month and six months

Change Failure Rate

What percentage of deployments cause a failure in production?

  • Elite: 0-15%
  • High: 16-30%
  • Medium: 31-45%
  • Low: 46-60%

Time to Restore Service

How long does it take to restore service when a failure occurs?

  • Elite: Less than one hour
  • High: Less than one day
  • Medium: Between one day and one week
  • Low: Between one week and one month

What DORA Doesn't Measure

DORA metrics tell you about deployment performance. They don't tell you about:

Maintenance Overhead

What percentage of engineering capacity goes to maintenance versus features?

A team deploying daily with elite lead times might be spending 30% of capacity on dependency updates, test fixes, and tech debt. DORA can't see this.

Code Health

Is the codebase getting healthier or accumulating debt?

Fast deployments from an increasingly fragile codebase aren't sustainable. DORA measures the pipeline, not the payload.

Capacity Utilization

How much of your engineering investment produces customer value?

If half your team is doing maintenance, you're getting half the feature output for your engineering spend. DORA doesn't capture this efficiency.

Sustainability

Can the team maintain current performance indefinitely?

A team might hit elite DORA metrics through heroic effort that can't be sustained. Or by deferring maintenance that will eventually demand attention.

The Missing Metrics

To get a complete picture of engineering efficiency, supplement DORA with maintenance-aware metrics.

Maintenance Ratio

Definition: Percentage of completed work that's maintenance versus feature development.

Calculation:

Maintenance Ratio = Maintenance Work / Total Work

Where work can be measured as:
- Story points
- PRs
- Engineering hours
- Tickets completed
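
For illustration, here's a minimal Python sketch that computes the ratio from categorized work items. The `WorkItem` structure and `category` labels are hypothetical stand-ins for whatever your tracker exports.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    title: str
    category: str  # "maintenance" or "feature" (hypothetical labels)
    points: int    # story points, hours, or 1 per PR/ticket

def maintenance_ratio(items: list[WorkItem]) -> float:
    """Fraction of completed work that is maintenance."""
    total = sum(item.points for item in items)
    if total == 0:
        return 0.0
    maintenance = sum(i.points for i in items if i.category == "maintenance")
    return maintenance / total

completed = [
    WorkItem("Add checkout flow", "feature", 8),
    WorkItem("Bump lodash to 4.17.21", "maintenance", 1),
    WorkItem("Fix flaky auth test", "maintenance", 2),
]
print(f"Maintenance ratio: {maintenance_ratio(completed):.0%}")  # -> 27%
```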

Benchmarks:

  • Below 15%: Excellent—most capacity goes to features
  • 15-25%: Healthy—sustainable maintenance level
  • 25-35%: Elevated—maintenance is consuming significant capacity
  • Above 35%: Critical—maintenance is crowding out feature work

Why it matters: High DORA scores combined with a high maintenance ratio mean you're fast but inefficient. You're running to stand still.

Dependency Freshness

Definition: Average age of dependencies relative to latest available versions.

Calculation:

Dependency Freshness = Average(Age) across all dependencies

Age = Release date of latest available version - Release date of the version you're running
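
If your tooling can surface release dates (from a registry or lockfile metadata), the average lag is a few lines of Python. The `deps` mapping below is made-up input, assuming one (installed, latest) release-date pair per dependency.

```python
from datetime import date

# Made-up input: (release date of installed version, release date of latest)
deps = {
    "requests": (date(2024, 5, 20), date(2025, 6, 9)),
    "numpy":    (date(2025, 1, 10), date(2025, 2, 1)),
}

def dependency_freshness(deps: dict[str, tuple[date, date]]) -> float:
    """Average lag in days between the installed and latest versions."""
    lags = [(latest - installed).days for installed, latest in deps.values()]
    return sum(lags) / len(lags)

print(f"Average dependency age: {dependency_freshness(deps):.0f} days")
```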

Benchmarks:

  • Below 30 days: Excellent—dependencies are current
  • 30-90 days: Healthy—reasonably current
  • 90-180 days: Elevated—starting to fall behind
  • Above 180 days: Critical—significant update debt

Why it matters: Stale dependencies indicate deferred maintenance. They correlate with security vulnerabilities, performance issues, and eventual upgrade crises.

Technical Debt Ratio

Definition: Estimated remediation cost of code issues relative to codebase size.

Calculation:

Technical Debt Ratio = Remediation Cost / Development Cost

Measured using static analysis tools (SonarQube, CodeClimate, etc.)
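
As one example, SonarQube exposes remediation effort (`sqale_index`, in minutes) and a debt ratio (`sqale_debt_ratio`) through its measures API. The sketch below assumes a reachable server and a valid token; the URL and project key are placeholders, and you should check the metric keys against your SonarQube version.

```python
import requests  # third-party: pip install requests

# Placeholders: point these at your SonarQube server and project.
BASE_URL = "https://sonarqube.example.com"
PROJECT_KEY = "my-project"

resp = requests.get(
    f"{BASE_URL}/api/measures/component",
    params={"component": PROJECT_KEY,
            "metricKeys": "sqale_index,sqale_debt_ratio"},
    auth=("YOUR_TOKEN", ""),  # SonarQube tokens go in the username field
)
resp.raise_for_status()
measures = {m["metric"]: m["value"]
            for m in resp.json()["component"]["measures"]}

# sqale_index is remediation effort in minutes; sqale_debt_ratio is a percent.
print(f"Remediation effort: {int(measures['sqale_index']) / 60:.0f} hours")
print(f"Technical debt ratio: {measures['sqale_debt_ratio']}%")
```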

Benchmarks:

  • Below 5%: Excellent—healthy codebase
  • 5-10%: Acceptable—manageable debt
  • 10-20%: Elevated—debt is accumulating
  • Above 20%: Critical—debt is significant drag

Why it matters: High debt ratios predict future maintenance overhead. Today's debt is tomorrow's mandatory work.

Maintenance PR Ratio

Definition: Ratio of maintenance PRs to feature PRs.

Calculation:

Maintenance PR Ratio = Maintenance PRs / Total PRs

Maintenance PRs include:
- Dependency updates
- Refactoring
- Test fixes
- Documentation updates
- Tech debt paydown
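
A rough way to get this number without changing your workflow is to classify PR titles by keyword. The sketch below assumes Conventional-Commit-style prefixes; the keyword list is a guess you'd tune to your repo, and labels, if you use them, are more reliable.

```python
# Keyword heuristic for tagging PRs as maintenance; the prefixes are
# assumptions loosely based on Conventional Commits.
MAINTENANCE_PREFIXES = ("chore", "fix", "deps", "refactor", "test", "docs")

def is_maintenance(pr_title: str) -> bool:
    return pr_title.lower().startswith(MAINTENANCE_PREFIXES)

titles = [
    "feat: add SSO login",
    "chore(deps): bump urllib3 from 1.26 to 2.2",
    "refactor: extract billing service",
    "docs: update onboarding guide",
]
maintenance = sum(is_maintenance(t) for t in titles)
print(f"Maintenance PR ratio: {maintenance / len(titles):.0%}")  # -> 75%
```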

Benchmarks:

  • Below 20%: Excellent—most PRs are features
  • 20-35%: Healthy—balanced workload
  • 35-50%: Elevated—significant maintenance volume
  • Above 50%: Critical—more maintenance than features

Why it matters: PR ratios are objective and easy to measure. They reveal whether engineering effort is going to new value or existing maintenance.

Time to First Production Impact

Definition: How long until new engineers contribute production code.

Calculation:

Time to First Production Impact = Days from start date to first production PR merged
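
This one is simple arithmetic once you have the two dates. A minimal sketch, with made-up hires as input:

```python
from datetime import date

def days_to_first_impact(start: date, first_merge: date | None) -> int | None:
    """Days from start date to first production PR merged (None if not yet)."""
    return (first_merge - start).days if first_merge else None

# Made-up hires: start date and date of first merged production PR.
hires = {
    "alice": (date(2025, 9, 1), date(2025, 9, 12)),
    "bob":   (date(2025, 10, 6), None),  # no production PR merged yet
}
for name, (start, merged) in hires.items():
    print(name, days_to_first_impact(start, merged))
```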

Benchmarks:

  • Below 14 days: Excellent—fast onboarding
  • 14-30 days: Healthy—reasonable ramp-up
  • 30-60 days: Elevated—onboarding friction
  • Above 60 days: Critical—significant onboarding debt

Why it matters: Slow onboarding often indicates documentation debt, code complexity, or unclear architecture—all maintenance-related issues.

Creating a Complete Dashboard

Combine DORA metrics with maintenance metrics for a complete view:

Deployment Performance (DORA)

| Metric | Target | Current |
|--------|--------|---------|
| Deployment Frequency | Daily | |
| Lead Time | < 1 day | |
| Change Failure Rate | < 15% | |
| Time to Restore | < 1 hour | |

Maintenance Health

| Metric | Target | Current |
|--------|--------|---------|
| Maintenance Ratio | < 25% | |
| Dependency Freshness | < 60 days | |
| Technical Debt Ratio | < 10% | |
| Maintenance PR Ratio | < 35% | |
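
To make the dashboard actionable, you can score each maintenance metric against its target. A small sketch, with thresholds mirroring the table above and made-up current values:

```python
# Thresholds mirror the targets in the table above; the "current" values
# are made-up examples.
TARGETS = {
    "Maintenance Ratio (%)": 25,
    "Dependency Freshness (days)": 60,
    "Technical Debt Ratio (%)": 10,
    "Maintenance PR Ratio (%)": 35,
}
current = {
    "Maintenance Ratio (%)": 32,
    "Dependency Freshness (days)": 45,
    "Technical Debt Ratio (%)": 12,
    "Maintenance PR Ratio (%)": 28,
}

for metric, target in TARGETS.items():
    value = current[metric]
    status = "OK" if value <= target else "NEEDS ATTENTION"
    print(f"{metric:30} target < {target:<4} current {value:<4} {status}")
```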

Overall Assessment

Scenario 1: Elite DORA + Healthy Maintenance = Genuinely high-performing team. Fast deployments from a sustainable codebase.

Scenario 2: Elite DORA + Poor Maintenance = Fast but fragile. Shipping quickly while accumulating debt. Unsustainable.

Scenario 3: Poor DORA + Healthy Maintenance = Pipeline problem. The code is healthy but deployment process needs work.

Scenario 4: Poor DORA + Poor Maintenance = Systemic issues. Both pipeline and codebase need attention.

Using Metrics to Drive Improvement

Metrics are only valuable if they drive action.

If Maintenance Ratio Is High

Investigate where maintenance time is going:

  • Dependency updates? → Automate with AI-powered tools
  • Test maintenance? → Improve test reliability
  • Bug fixes in old code? → Consider refactoring or rewriting problem areas
  • Documentation? → Automate documentation sync

If Dependency Freshness Is Poor

  • Enable automated dependency updates
  • Configure weekly/daily update PRs
  • Prioritize security updates for auto-merge
  • Schedule major version updates quarterly

If Technical Debt Ratio Is Rising

  • Allocate capacity explicitly for debt reduction
  • Identify and address highest-debt areas
  • Automate pattern enforcement to prevent new debt
  • Review architecture for systemic issues

If Maintenance PR Ratio Is High

  • Automate repetitive maintenance PRs
  • Batch maintenance into fewer, larger PRs
  • Investigate root causes of maintenance volume
  • Consider whether maintenance is actually valuable (vs. busywork)

The Automation Connection

Many maintenance metrics improve through automation:

Dependency Freshness

Manual approach: Occasional update sprints, falling behind between sprints
Automated approach: Continuous updates, always fresh

Maintenance Ratio

Manual approach: Human time spent on maintenance tasks
Automated approach: AI handles maintenance, humans review PRs

Maintenance PR Ratio

Manual approach: Every maintenance change needs human creation
Automated approach: AI creates maintenance PRs, humans just approve

Automation shifts maintenance from active human work to passive human oversight. The maintenance still happens—it just doesn't consume as much capacity.

Reporting to Leadership

Executives understand DORA because it's industry-standard. Help them understand maintenance metrics too.

Frame in Business Terms

Instead of: "Our maintenance ratio is 35%" Say: "For every $3 we spend on engineering, $1 goes to keeping existing code working instead of building new features"

Instead of: "Dependency freshness is 120 days" Say: "We're four months behind on updates, which means we're accumulating security risk and missing performance improvements"

Instead of: "Technical debt ratio is 18%" Say: "We'll need to spend 18% of future development time just addressing accumulated code issues"

Show Trends, Not Snapshots

A single measurement is less meaningful than a trend:

  • "Maintenance ratio has increased from 20% to 35% over six months"
  • "Dependency freshness improved from 90 days to 30 days since automation"

Trends show whether interventions are working.

Connect to Business Outcomes

Link metrics to outcomes leadership cares about:

  • "High maintenance ratio means fewer features shipped per quarter"
  • "Poor dependency freshness increases security incident risk"
  • "Rising technical debt means slowing velocity over time"

The Balanced View

DORA metrics aren't wrong—they're incomplete. They measure important things well. They just don't measure everything important.

High-performing teams need:

  • Fast deployment (DORA captures this)
  • Low failure rate (DORA captures this)
  • Quick recovery (DORA captures this)
  • Sustainable maintenance (DORA misses this)
  • Healthy codebase (DORA misses this)
  • Efficient capacity use (DORA misses this)

Track both. Optimize both. A team with elite DORA metrics and healthy maintenance metrics is genuinely high-performing. A team with elite DORA metrics and poor maintenance metrics is sprinting toward a wall.


FAQ

Should we stop tracking DORA metrics?

No. DORA metrics are valuable for what they measure. Just recognize their limitations and supplement with maintenance metrics for a complete picture.

How do I measure maintenance ratio if we don't categorize work?

Start simple. Ask developers monthly what percentage of their time was maintenance. Or categorize PRs as feature/maintenance based on content. Approximate measurement is better than no measurement.

What if leadership only wants to see DORA?

Include DORA in your reports and add maintenance metrics as supplementary. Frame maintenance metrics as "efficiency metrics that explain why we're hitting or missing DORA targets." Over time, leadership will recognize their value.

Are there industry benchmarks for maintenance metrics?

Less established than DORA, but general patterns exist. 20-30% maintenance ratio is common for healthy teams. Dependency freshness of 60-90 days is typical. Technical debt ratios vary widely but under 10% is generally healthy.

Can one team have different metrics than another?

Yes, and they often should. A greenfield team might have very low maintenance ratios. A team managing legacy systems might legitimately have higher maintenance needs. Compare trends over time, not absolute numbers across teams.