Most agile metrics dashboards are vanity. They look impressive in a quarterly review, they sit in a Confluence page no one opens, and they have approximately zero impact on whether the team ships the right thing on time.
That’s not because metrics are useless. It’s because most teams measure what’s easy to measure, not what’s worth measuring.
Here’s the honest read — the four metrics that actually help, the five that don’t, and how to use them without turning your team into a number-chasing factory.
What metrics are for
Before listing any, be clear about what you want metrics to do.
A useful agile metric does one of three things:
- Tell you when something has changed so you can investigate (signal).
- Give the team a shared reality for planning conversations (calibration).
- Give stakeholders enough visibility to leave the team alone (transparency).
That’s it. Metrics that don’t do one of those three things are decoration.
The trap most teams fall into: tracking metrics for accountability — “are people working hard enough?” The moment a metric becomes a performance review input, it stops being a measurement and becomes a target. And targets get gamed.
Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. This is the single most important sentence in this article.
The four metrics that actually help
1. Velocity (with the right framing)
Velocity is the number of story points a team completes in a sprint. It’s the most over-analysed and most misunderstood metric in agile.
What it’s good for:
- Forecasting — if the next 30 backlog items total 200 points and the team averages 40 points a sprint, that’s roughly 5 sprints (sketched in code after this list). Useful for stakeholder conversations.
- Detecting change — if velocity drops 30% over three sprints, something is going on. People left, scope quality changed, dependencies are blocking, or the estimates have inflated. Investigate.
- Capacity planning — pair it with focus factor (more on that later) to plan a realistic sprint.
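A minimal sketch of that forecast math, assuming nothing beyond a backlog total and the team’s own average velocity (the function name and the round-up are mine, not any tool’s):

```python
import math

def sprints_to_finish(backlog_points: float, avg_velocity: float) -> int:
    """Rough forecast: sprints needed to burn through the backlog.

    avg_velocity must be this team's own recent average --
    never another team's number.
    """
    if avg_velocity <= 0:
        raise ValueError("average velocity must be positive")
    return math.ceil(backlog_points / avg_velocity)

# The example from above: 200 points at ~40 points per sprint.
print(sprints_to_finish(200, 40))  # -> 5
```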
What it’s bad for:
- Comparing teams — story points are calibrated per-team. One team’s 5 might be another team’s 13. Comparing velocities across teams is meaningless and corrosive.
- Performance reviews — the moment you tell a team “ship more points next quarter,” they will. They’ll inflate estimates. The points number goes up, the actual output doesn’t change.
- Goals — “increase velocity by 20%” is a directive to inflate estimates. Nothing else.
The right framing: velocity is the team’s own ruler. It’s only ever compared to that team’s own previous sprints, and only as a forecast input — not a target.
2. Sprint goal hit rate
Are you hitting the sprint goal you set in planning? Yes / no / partial. Track it sprint over sprint.
This is underrated and almost no team measures it. It’s also the single best signal of whether your sprints are working.
If you hit the sprint goal 80% of the time, the team is in a healthy place. They’re committing realistically and protecting the goal during the sprint.
If you hit it 30% of the time, something is broken — usually one of:
- Goals are being set too ambitiously and nobody’s pushing back in planning.
- The team is being interrupted mid-sprint with new priorities.
- The “sprint goal” is a vague aspiration, not a clear deliverable.
- Estimates are wildly off and refinement is being skipped.
Goal hit rate forces an honest retrospective conversation. Velocity might be steady while you miss the goal every sprint — because you’re shipping the easy tickets and skipping the hard ones that actually matter.
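Tracking it costs almost nothing. A minimal sketch, assuming a partial counts as half a hit (that weighting is my choice, not a standard; score it however your team agrees):

```python
# One entry per sprint, oldest first: did we hit the sprint goal?
outcomes = ["yes", "yes", "partial", "no", "yes",
            "yes", "yes", "no", "yes", "partial"]

# "partial" scored as half a hit -- a judgment call, not a standard.
scores = {"yes": 1.0, "partial": 0.5, "no": 0.0}
hit_rate = sum(scores[o] for o in outcomes) / len(outcomes)

print(f"goal hit rate over {len(outcomes)} sprints: {hit_rate:.0%}")  # -> 70%
```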
3. Cycle time (per ticket)
Cycle time is how long a ticket takes from “in progress” to “done.” Measure it per ticket, as a distribution.
This is the most underused metric in agile and the one that pays the biggest dividends.
What it tells you:
- Median cycle time — how long a typical ticket takes. Shorter is better. If yours is creeping up, scope is creeping up.
- The long tail — the 95th percentile is where the real pain lives. Tickets that take 3 weeks instead of 2 days. They’re the ones blowing up your sprints. Find them, ask why, fix the pattern.
- WIP correlation — when work-in-progress goes up, cycle time goes up. Almost always. Cycle time is the metric that catches a team trying to do too much at once.
The honest version: most teams have a few tickets blowing up the average. Look at the distribution, not the mean. If the 95th percentile is 6x the median, you have a “stuck ticket” problem. Address it.
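Seeing this in your own data takes a few lines. A sketch with illustrative numbers (the cycle times are made up, and the nearest-rank percentile is a deliberate simplification):

```python
import statistics

# Cycle times in days for recently closed tickets (illustrative data).
cycle_times = [1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 5, 14, 20, 30]

median = statistics.median(cycle_times)

# Simple nearest-rank 95th percentile -- fine at this sample size.
ordered = sorted(cycle_times)
p95 = ordered[int(0.95 * (len(ordered) - 1))]

print(f"median: {median} days, p95: {p95} days")  # -> median: 3, p95: 20
if p95 > 6 * median:
    print("stuck-ticket problem: investigate the long tail, not the mean")
```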
4. Escape rate (bugs found in production vs in QA)
Of all the bugs found in the last quarter, what fraction reached users in production versus being caught before release?
Low escape rate (10-20%) means quality processes are working. High escape rate (60%+) means you’re shipping bugs and burning trust.
This is the metric that reveals tech debt earlier than any other. Cycle time goes up because debt makes everything slower. Escape rate goes up because there’s no time to test properly. They move together.
It’s also a metric stakeholders actually care about — they don’t care about velocity, they care about whether the product is working when they use it.
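The ratio itself is trivial. A sketch, with made-up quarterly numbers:

```python
def escape_rate(found_in_production: int, caught_before_release: int) -> float:
    """Fraction of all bugs in the period that escaped to production."""
    total = found_in_production + caught_before_release
    return found_in_production / total if total else 0.0

# Illustrative quarter: 12 bugs reached users, 68 were caught before release.
print(f"{escape_rate(12, 68):.0%}")  # -> 15%, inside the healthy 10-20% band
```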
The five that don’t help
These are popular and largely useless. Don’t track them — or if you have to, don’t show them to the team as a goal.
1. Lines of code / commits / PRs per developer
This is industrial-era thinking applied to creative work. It optimises for typing volume, not for shipping the right thing.
The first time someone is graded on commit count, they’ll start splitting one logical commit into ten. Lines of code go up when people write verbose code. PR count goes up when they open tiny PRs. None of it makes the product better.
Worse: it punishes the senior engineer who deletes 800 lines of dead code and replaces them with 30 cleaner ones. By this metric, they had a “negative output” week. Absurd.
2. Story points per developer
Same problem as velocity-as-target, but worse. Story points were never meant to be assigned to individuals — they’re a team measure. Slicing them per-developer creates competition over credit (who gets the big tickets?) and demotivates collaboration (why pair when only one of us gets the points?).
If your tooling shows you “Alice did 23 points this sprint, Bob did 8,” ignore it. Bob might have spent two sprints on the gnarliest bug in the codebase that no one else could touch.
3. Hours logged
Time tracking on agile work is the most expensive theatre in the industry. It produces a number, the number gets used in a board report, the board feels reassured, and the team is one minute closer to quitting.
Hours don’t measure output. They measure presence. If you genuinely need to know capacity, ask the team in planning — they know better than any tracker.
The only legitimate use case for time tracking is client billing. Internal team management isn’t one.
4. “Burndown chart shape”
Burndown charts are useful, but the shape is over-interpreted. There is no perfect burndown line. Tickets get done in chunks because real work happens in chunks. A perfectly straight diagonal would actually be suspicious — it would mean someone’s gaming when they mark things “done.”
A flat-then-cliff burndown isn’t a “broken team,” it’s normal — work was done, then closed out at end of sprint. Stop reading tea leaves in the chart shape.
(For more on this, our piece on burndown charts covers the four ways they lie.)
5. Number of tickets closed
Ticket count is meaningless without size. A team that closes 50 trivial tickets isn’t outperforming a team that closes 5 substantial ones. This is why we have story points — to abstract away from raw count.
If your dashboard headlines “tickets closed this week,” you’re measuring busyness, not output.
Focus factor: the metric most teams should track but don’t
Focus factor is the percentage of theoretical capacity a team actually spends on sprint work, after meetings, support work, interruptions, and context switching.
For a 6-person team nominally working 40-hour weeks, theoretical capacity is 240 hours a week. The focus factor is rarely above 60%, and often closer to 40%.
Why this matters: planning a sprint based on theoretical capacity is the single biggest reason teams over-commit. If you know your team’s actual focus factor is 50%, you plan for 50% of the theoretical capacity. You hit the sprint goal more often. Stakeholders are happier. The team is less burned out.
Tracking focus factor: not as one number, but as a planning input. Ask in retros: “what ate our focus this sprint?” The answers are usually:
- Unplanned support work
- Cross-team dependency meetings
- Production incidents
- Onboarding / documentation
- Context switching between sprint work and “urgent” requests
Most of these are reducible if you can name them. That’s the value of the metric.
(Our sprint capacity calculator does this math automatically — input team size, focus factor, and sprint length, get realistic capacity in points.)
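If you’d rather see the arithmetic, a minimal sketch of the same idea (the names and the 40-hour default are mine, not the calculator’s):

```python
def realistic_capacity_hours(team_size: int, sprint_weeks: float,
                             focus_factor: float,
                             hours_per_person_per_week: float = 40.0) -> float:
    """Theoretical sprint hours, scaled by the share actually spent on sprint work."""
    theoretical = team_size * hours_per_person_per_week * sprint_weeks
    return theoretical * focus_factor

# 6 people, 2-week sprint, 50% focus factor: plan for 240 of 480 hours.
print(realistic_capacity_hours(6, 2, 0.5))  # -> 240.0
```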
How to actually use these
Don’t bolt all of these onto a dashboard at once. Start with one signal per category:
- Forecast: velocity, with focus-factor-adjusted capacity.
- Health: sprint goal hit rate, sprint over sprint.
- Flow: cycle time distribution, especially the 95th percentile.
- Quality: escape rate, quarterly.
Look at them in retros. Talk about what changed and why. Don’t set them as targets. Don’t grade individuals on them. Don’t put them in a quarterly review without the team’s interpretation alongside.
Metrics are conversation starters. The conversation is the value, not the number.
What stakeholders actually want
If you’re being pushed for “more metrics” by leadership, the underlying ask is almost never “I want a dashboard.” It’s:
- “I want to know if we’re on track.”
- “I want to know if I should be worried.”
- “I want to be able to report up without making things up.”
You can answer all three with: velocity trend (forecast confidence), goal hit rate (are we doing what we said), and a one-line risk callout. That’s a 30-second status, not a 30-tab dashboard.
If you give them that consistently, the demand for more metrics goes away. The demand was for trust, not data.
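If it helps to make that concrete: the entire report fits in one hypothetical function, and the numbers below are illustrative.

```python
def thirty_second_status(velocity_trend: str, goal_hit_rate: float, risk: str) -> str:
    """The whole status: forecast trend, goal hit rate, one named risk."""
    return (f"Forecast: velocity {velocity_trend}. "
            f"Goal hit rate: {goal_hit_rate:.0%} over the last 6 sprints. "
            f"Risk: {risk}.")

print(thirty_second_status("steady", 0.83, "payments API dependency may slip"))
```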
The honest summary
Track four things: velocity, sprint goal hit rate, cycle time, escape rate. Use them to inform conversation, not to grade people. Treat focus factor as a planning input, not a KPI. Stop tracking lines of code, hours, and points-per-person. They cost more than they pay.
Metrics are a tool for the team to see itself clearly. The moment they become a tool for someone else to judge the team, they stop working — by mathematical inevitability, not by bad management.
The teams that actually ship reliably aren’t the ones with the most metrics. They’re the ones with the right four, used well, in conversations the team owns.
Tools to help:
- Sprint Velocity Calculator — track velocity over the last 5 sprints, free, no signup.
- Sprint Capacity Calculator — realistic capacity using focus factor.
- Sprint Health Check — quick assessment of sprint health across multiple signals.
- Sprint Maturity Self-Assessment — see where your team sits on the maturity curve.
SprintFlint computes velocity, cycle time, and goal hit rate automatically as tickets move. Free for the first 300 tickets — no card, no setup.