10 yes/no questions. Get a score, a tier label, and three things to ship next sprint. No vague consultancy fluff.
1. We track sprint velocity over the last 5+ sprints
2. We hold a retrospective every sprint and act on at least one item
3. Stories are sized (story points or t-shirt) before sprint planning
4. Every sprint has a written sprint goal everyone can quote
5. We have a written definition of done that is referenced (not just a wall poster)
6. Top backlog items are described, sized, and ready before planning
7. Daily standup has a clear format and rarely runs over time
8. We finish 80%+ of committed work most sprints
9. Blockers are visible (board column or tag) and resolved within 1-2 days
10. Engineers self-pull tickets rather than being assigned by a lead
SprintFlint computes velocity, predictability, and cycle time on every sprint. The maturity score climbs by itself.
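For reference, the three metrics can be computed from sprint data using their common agile definitions. This is a sketch using those standard formulas, not SprintFlint's published internals, and the field names (`committed_points`, `completed_points`, `started`, `done`) are illustrative:

```python
from datetime import date

# Common-definition sketch; not SprintFlint's actual formulas.
# Field names (committed_points, completed_points, started, done) are assumed.

def velocity(sprints):
    """Mean completed story points per sprint."""
    return sum(s["completed_points"] for s in sprints) / len(sprints)

def predictability(sprints):
    """Mean share of committed points actually completed."""
    return sum(s["completed_points"] / s["committed_points"] for s in sprints) / len(sprints)

def cycle_time_days(tickets):
    """Mean calendar days from work started to done."""
    return sum((t["done"] - t["started"]).days for t in tickets) / len(tickets)

sprints = [
    {"committed_points": 30, "completed_points": 27},
    {"committed_points": 32, "completed_points": 24},
]
print(velocity(sprints))        # 25.5
print(predictability(sprints))  # 0.825
```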
They cover the six dimensions that make or break a sprint cadence: measurement (velocity, predictability), feedback (retros), planning (estimation, sprint goal, refinement), quality (definition of done, blockers), cadence (standup format), and autonomy (self-pull). Other agile assessments use 50+ questions and produce noise. Ten weighted questions give a stable signal in 90 seconds.
0-39 = Sprint Survival, 40-64 = Working Sprints, 65-84 = High-Functioning, 85-100 = Mature Agile. Most teams new to scrum land at 30-50. Teams running for a year typically land at 55-75. Anything above 80 is real.
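The weighting and tier mapping can be sketched in a few lines. The per-question weights below are illustrative assumptions that sum to 100, not SprintFlint's actual values; only the tier cutoffs come from the text above:

```python
# One weight per question, in order. These weights are made-up placeholders
# that sum to 100; SprintFlint's real weights are not published here.
WEIGHTS = [12, 12, 10, 10, 10, 10, 8, 12, 8, 8]

# Tier cutoffs from the scoring bands above, checked highest-first.
TIERS = [(85, "Mature Agile"), (65, "High-Functioning"),
         (40, "Working Sprints"), (0, "Sprint Survival")]

def score(answers):
    """answers: list of 10 booleans, one per question in order."""
    assert len(answers) == len(WEIGHTS)
    total = sum(w for w, yes in zip(WEIGHTS, answers) if yes)
    tier = next(label for cutoff, label in TIERS if total >= cutoff)
    return total, tier

# Example: yes to everything except questions 8 and 10
print(score([True] * 7 + [False, True, False]))  # (80, 'High-Functioning')
```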
Yes — and ideally, have each team member fill it in independently first, then compare. The most useful conversation is "why did we disagree on the retro question?" not the average score itself.
No. It's a 90-second self-diagnostic. A coach observing your team will catch dynamics no questionnaire can. But this is a useful starting point for "where should we focus next?"
Every quarter. More often than that and noise dominates the signal; less often and you miss drift. Pin the score in a Slack channel and watch the trend over 4-6 quarters.