
The Productivity Metric Nobody Tracks: Maintenance Overhead

Engineering teams track velocity, cycle time, and deployment frequency, but ignore maintenance overhead. Learn why this hidden metric matters and how to measure it.

Engineering teams track everything. Story points completed. Cycle time. Deployment frequency. Pull request throughput. Code coverage percentage.

But ask most teams how much time they spend on maintenance, and you'll get blank stares. Or rough guesses. Or uncomfortable subject changes.

Maintenance overhead—the percentage of engineering capacity consumed by keeping existing code healthy—is the productivity metric that matters most and gets tracked least.

Why Maintenance Overhead Matters

Every hour spent on maintenance is an hour not spent on features. This isn't controversial. What's controversial is admitting how many hours that actually is.

Industry estimates suggest maintenance consumes 20-40% of engineering capacity. That's one to two days per week per developer spent on:

  • Dependency updates
  • Bug fixes in existing code
  • Test maintenance
  • Documentation updates
  • Security patches
  • Code cleanup and refactoring
  • Addressing technical debt

For a team of 10 engineers, that's 2-4 full-time equivalent positions spent on maintenance. For a team of 50, it's 10-20.

That's not a rounding error. That's a strategic reality.

Why Teams Don't Track It

If maintenance overhead matters so much, why don't teams track it?

It's Invisible in Standard Metrics

Story points measure what you planned to do. Cycle time measures how fast you did it. Neither captures the maintenance work that happens around and between planned work.

Developers fix a flaky test blocking their PR. They update a dependency that's causing a warning. They clean up code they're touching anyway. None of this shows up in Jira tickets or sprint metrics.

It's Embarrassing

Nobody wants to report that their team spent 30% of the sprint on maintenance. It feels like failure—like you should have prevented the maintenance need in the first place.

So teams don't report it. They absorb it into feature estimates. They work extra hours. They defer maintenance until it becomes a crisis.

It's Hard to Measure

What counts as maintenance? Updating a dependency because a feature needs it is different from updating it proactively. Fixing a bug in code you wrote yesterday is different from fixing one in code written years ago. Refactoring as part of a feature is different from refactoring as standalone cleanup.

The categorization is genuinely hard. So teams don't bother.

It Requires Honest Conversation

Tracking maintenance overhead forces teams to confront uncomfortable questions:

  • Are we accumulating too much tech debt?
  • Should we be investing more in code quality?
  • Are we understaffed relative to our codebase size?
  • Is our architecture creating ongoing maintenance burden?

It's easier to track metrics that make everyone feel good.

How to Measure Maintenance Overhead

Despite the challenges, maintenance overhead can be measured. Here's how.

Method 1: Time Tracking

The most direct approach: track time spent on different work types.

Create categories:

  • Feature Development: New functionality
  • Bug Fixes: Correcting defective behavior
  • Maintenance: Keeping existing code healthy (updates, refactoring, cleanup)
  • Infrastructure: Tooling, CI/CD, environment improvements

Have developers log time against these categories weekly. It doesn't need to be precise—rough percentages work.

Pros: Direct measurement, captures reality
Cons: Requires discipline, developers hate time tracking
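A minimal sketch of the weekly aggregation, assuming a simple (developer, category, hours) log; the format and sample numbers are illustrative, not a prescribed tool:

```python
from collections import defaultdict

# One entry per developer per category per week: rough hours are fine
logs = [
    ("alice", "feature", 24), ("alice", "maintenance", 10),
    ("alice", "bug", 4),      ("alice", "infrastructure", 2),
    ("bob",   "feature", 18), ("bob",   "maintenance", 16),
    ("bob",   "bug", 6),
]

totals = defaultdict(float)
for _, category, hours in logs:
    totals[category] += hours

grand_total = sum(totals.values())
for category, hours in sorted(totals.items()):
    print(f"{category:>15}: {hours / grand_total:.0%}")
```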

Method 2: Ticket Classification

Classify Jira tickets (or equivalent) by work type and track the distribution.

Tag tickets as feature, bug, maintenance, or infrastructure. At sprint end, calculate what percentage of completed points fell into each category.

Pros: Uses existing systems, objective
Cons: Misses untracked work, classification debates
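The sprint-end calculation is straightforward. A sketch with hypothetical ticket records rather than a real Jira export:

```python
# Completed tickets tagged by work type, with story points
completed = [
    {"key": "ENG-101", "type": "feature", "points": 8},
    {"key": "ENG-102", "type": "maintenance", "points": 3},
    {"key": "ENG-103", "type": "bug", "points": 2},
    {"key": "ENG-104", "type": "feature", "points": 5},
]

total = sum(t["points"] for t in completed)
for work_type in ("feature", "bug", "maintenance", "infrastructure"):
    points = sum(t["points"] for t in completed if t["type"] == work_type)
    print(f"{work_type:>15}: {points / total:.0%}")
```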

Method 3: PR Analysis

Analyze pull request content to infer work type.

  • PRs that only touch dependencies: maintenance
  • PRs that only modify tests: likely maintenance
  • PRs flagged by bots (Dependabot, etc.): maintenance
  • PRs with "fix", "cleanup", "refactor" in titles: likely maintenance

Pros: Automatic, objective, no developer effort
Cons: Imprecise, misses context
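As a rough illustration, here's what those heuristics look like as code. This is a sketch assuming a simple dict-per-PR export; the field names (title, author, files) and keyword list are assumptions, not any particular platform's API:

```python
MAINTENANCE_WORDS = ("fix", "cleanup", "refactor", "bump", "upgrade")
DEP_FILES = ("package.json", "package-lock.json", "requirements.txt",
             "go.mod", "Cargo.toml")

def classify_pr(pr: dict) -> str:
    if pr["author"].endswith("[bot]"):     # Dependabot, Renovate, etc.
        return "maintenance"
    files = pr["files"]
    if files and all(f.split("/")[-1] in DEP_FILES for f in files):
        return "maintenance"               # dependency-only change
    if files and all("test" in f for f in files):
        return "maintenance"               # test-only change (likely)
    if any(w in pr["title"].lower() for w in MAINTENANCE_WORDS):
        return "maintenance"               # title signal (likely)
    return "feature"

prs = [
    {"title": "Bump lodash to 4.17.21", "author": "dependabot[bot]",
     "files": ["package.json"]},
    {"title": "Add CSV export", "author": "alice",
     "files": ["src/export.py", "tests/test_export.py"]},
]
share = sum(classify_pr(p) == "maintenance" for p in prs) / len(prs)
print(f"maintenance share of PRs: {share:.0%}")
```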

Method 4: Survey-Based

Survey developers monthly: "What percentage of your time last month was spent on maintenance?"

Average the responses. Track trends over time.

Pros: Simple, captures perception
Cons: Subjective, depends on honest responses

Recommended Approach

Combine methods. Use PR analysis for an objective baseline, ticket classification for tracked work, and quarterly surveys as a qualitative check.

The goal isn't perfect measurement—it's approximate measurement that enables improvement.

What Good Looks Like

What maintenance overhead should you target?

Below 15%: Exceptional

Very low maintenance overhead. Usually indicates:

  • Relatively new codebase
  • Strong engineering practices
  • Heavy investment in automation
  • Possibly understating maintenance (not tracking it properly)

15-25%: Healthy

Sustainable maintenance overhead. The codebase is healthy enough that maintenance doesn't dominate, but realistic enough to acknowledge it exists.

25-35%: Elevated

Maintenance is consuming significant capacity. Worth investigating why:

  • Aging codebase with accumulated debt?
  • Recent rapid growth outpacing practices?
  • Infrastructure or architecture issues?

Above 35%: Critical

Maintenance is crowding out feature work. This typically indicates:

  • Severe technical debt
  • Underinvestment in code quality
  • Architecture fundamentally unsuited to current needs
  • Team may be in "keep the lights on" mode
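For teams wiring this into a dashboard, the bands above reduce to a few thresholds. A minimal helper encoding the article's cutoffs; the labels are yours to adapt:

```python
def overhead_band(pct: float) -> str:
    if pct < 15:
        return "exceptional (or under-reported)"
    if pct < 25:
        return "healthy"
    if pct < 35:
        return "elevated: investigate drivers"
    return "critical: maintenance is crowding out features"

print(overhead_band(28))  # -> "elevated: investigate drivers"
```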

What Drives Maintenance Overhead

Once you're measuring maintenance overhead, you can investigate what's driving it.

Codebase Age

Older codebases have more accumulated decisions that need revisiting. Patterns that made sense five years ago may not make sense now. Dependencies that were current then are outdated now.

This is normal. The question is whether you're managing it or ignoring it.

Team Growth Rate

Rapid team growth often increases maintenance overhead. New developers write code before fully understanding existing patterns. Knowledge becomes fragmented. Inconsistency creeps in.

Architecture Decisions

Some architectures create ongoing maintenance burden. Tight coupling means changes cascade. Shared mutable state means more bugs. Poor abstraction means more code to maintain.

Architecture debt compounds faster than code debt.

Dependency Count

More dependencies means more to keep updated. More potential for conflicts. More security patches to apply.

Teams often underestimate the ongoing cost of each dependency they add.

Test Health

Flaky or slow tests create ongoing maintenance work. Developers spend time debugging false failures. They wait for slow CI runs. They work around unreliable tests.

Test maintenance is maintenance.

Documentation Debt

Missing or outdated documentation creates ongoing overhead. Developers spend time investigating how things work. They make mistakes from incorrect assumptions. They ask questions that docs should answer.

Reducing Maintenance Overhead

Measuring maintenance overhead is step one. Reducing it is step two.

Automate Repetitive Maintenance

The biggest wins come from automating maintenance that happens repeatedly:

  • Dependency updates: AI-powered tools can handle the full cycle—identifying updates, assessing impact, generating changes, creating PRs.
  • Code formatting and linting: Automated on commit so developers never spend time on style issues.
  • Test maintenance: AI can identify and fix flaky tests, update tests for refactored code.
  • Security patches: Automated scanning and patching for known vulnerabilities.

Every hour of maintenance you automate is an hour returned to feature work.

Reduce Maintenance Surface Area

Less code means less to maintain.

  • Delete unused code aggressively
  • Remove unnecessary dependencies
  • Simplify architecture where complexity isn't paying off
  • Avoid building what you can buy or use from libraries

The best maintenance is maintenance that doesn't need to happen.

Improve Code Quality Upstream

Some maintenance is caused by poor initial quality. Fix the source:

  • Stronger code review for architectural decisions
  • Better testing requirements for new code
  • Documentation requirements for complex code
  • Consistency enforcement through automation

Prevention is cheaper than repair.

Allocate Capacity Honestly

If maintenance genuinely requires 25% of capacity, allocate 25%. Don't pretend it takes 10% and wonder why features are always late.

Honest capacity allocation lets you plan accurately and identify when maintenance overhead is growing.
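A minimal sketch of what honest allocation looks like at sprint planning, with illustrative numbers: reserve the measured maintenance share before committing feature points.

```python
team_capacity_points = 80    # total points the team can complete
maintenance_overhead = 0.25  # measured, not wished-for

maintenance_budget = team_capacity_points * maintenance_overhead
feature_budget = team_capacity_points - maintenance_budget
print(f"plan {feature_budget:.0f} feature points, "
      f"reserve {maintenance_budget:.0f} for maintenance")
```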

Using Maintenance Overhead Strategically

Maintenance overhead isn't just an operational metric. It's a strategic input.

Hiring Decisions

If maintenance overhead is 30% and growing, your effective feature capacity is shrinking even at constant headcount, and you may need to add engineers faster than feature-driven revenue alone would justify. That changes hiring urgency.

Architecture Investments

High maintenance overhead in specific areas points to architecture problems worth fixing. The metric helps prioritize where to invest.

Build vs. Buy Decisions

Before building a component, consider ongoing maintenance. Buying often has lower maintenance overhead even at higher initial cost.

Technical Debt Prioritization

Track maintenance overhead by area of the codebase. Where overhead is highest, debt paydown has the biggest impact.
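One way to do this, assuming you already classify PRs as maintenance (see Method 3): attribute each maintenance PR to the top-level directory it touches most. A sketch with hypothetical file paths:

```python
from collections import Counter

maintenance_prs = [
    {"files": ["billing/invoice.py", "billing/tax.py"]},
    {"files": ["billing/legacy.py"]},
    {"files": ["search/index.py"]},
]

by_area = Counter()
for pr in maintenance_prs:
    top_dirs = [f.split("/")[0] for f in pr["files"]]
    # Credit the PR to the directory it touches most
    by_area[Counter(top_dirs).most_common(1)[0][0]] += 1

for area, count in by_area.most_common():
    print(f"{area}: {count} maintenance PRs")
```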

Making Maintenance Overhead Visible

For maintenance overhead to matter organizationally, it needs to be visible to leadership.

Executive Dashboard

Include maintenance overhead alongside other engineering metrics. When leadership sees "35% of capacity spent on maintenance," they understand why feature velocity isn't higher.

Trend Charts

Show maintenance overhead over time. Is it stable? Growing? The trend matters more than any single measurement.

Comparison to Industry

Benchmark against industry norms. "Our maintenance overhead is 40% versus industry average of 25%" is a compelling case for investment.

Cost Translation

Translate overhead to dollars. For finance-minded stakeholders, "We spent $1.2M on maintenance last quarter" resonates more than any percentage.
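The arithmetic is simple enough to sanity-check in a few lines; the headcount and fully loaded cost below are illustrative assumptions:

```python
engineers = 50
fully_loaded_cost = 200_000  # per engineer per year, assumed
overhead = 0.30              # measured maintenance share

annual = engineers * fully_loaded_cost * overhead
print(f"maintenance cost: ${annual / 1e6:.1f}M/year, "
      f"${annual / 4 / 1e6:.2f}M/quarter")
# -> maintenance cost: $3.0M/year, $0.75M/quarter
```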

The Feedback Loop

Tracking maintenance overhead creates a feedback loop:

  1. Measure current maintenance overhead
  2. Investigate what's driving it
  3. Invest in reducing the biggest drivers
  4. Measure the impact of investments
  5. Repeat

Without measurement, you can't know if investments are working. You can't prioritize effectively. You can't make the case for change.

The teams that track maintenance overhead are the teams that control it. The teams that ignore it are controlled by it.


FAQ

How do I get my team to track maintenance overhead?

Start simple. A monthly survey asking "what percentage of your time was maintenance?" takes five minutes. Once you have baseline data, the conversation about improving it becomes concrete.

What if leadership doesn't care about this metric?

Translate it to impact they do care about. "We could ship 30% more features if we reduced maintenance overhead to industry average" connects maintenance to outcomes.

Isn't some maintenance overhead inevitable?

Yes. The goal isn't zero—it's optimal. Some maintenance is the cost of having a working product. The question is whether you're at 20% (manageable) or 40% (problematic).

How is this different from tracking technical debt?

Technical debt is the backlog of maintenance you haven't done. Maintenance overhead is the maintenance you are doing. Both matter, but they measure different things.

Should maintenance overhead be a KPI for developers?

No. Individual developers don't control organizational maintenance overhead. It's a team/org metric influenced by architecture, tooling, and investment decisions. Blaming individuals will just make them hide maintenance work.