Definition of Ready: 5 entry criteria that prevent half-baked sprints

Most teams have a Definition of Done. Few have a Definition of Ready, and that's why stories enter sprints in wildly different states. This post covers five entry criteria that catch most surprises (clear why, testable acceptance criteria, estimable size, dependencies identified, owner assignable), what to refuse adding, how DoR fits with refinement and planning, and how to introduce it without turning refinement into a gatekeeping ceremony.

May 5, 2026  ·  11 min read  ·  SprintFlint Team

Most teams have a Definition of Done. Far fewer have a Definition of Ready. The result: stories enter sprints in wildly different states of preparedness, sprint planning becomes triage, and mid-sprint surprises eat the goal.

A Definition of Ready (DoR) is the upstream sibling of Definition of Done. DoD says when work can leave the sprint. DoR says when work can enter it. It’s the cheapest piece of agile infrastructure most teams skip.

This is the practical version: what DoR actually does, five entry criteria that catch most preventable surprises, the trap of the over-engineered DoR, and how to introduce it without turning refinement into a gatekeeping ceremony.

What DoR is for

A Definition of Ready answers one question: is this story safe to commit to in sprint planning?

When stories enter a sprint half-baked, three things happen:

  1. Estimates are wrong. Engineers see hidden complexity mid-sprint that they couldn’t see at planning. The points were a guess at a different story.
  2. Goals get watered down. Mid-sprint discoveries pull effort away from goal-aligned work, and the sprint goal becomes “did some of it”.
  3. Refinement happens during the sprint. Engineers chase down acceptance criteria, hunt for designs, ping stakeholders for missing decisions — work that should have happened before commitment.

DoR is the structural fix. Stories that don’t meet it don’t enter the sprint. Period.

Five entry criteria that catch most surprises

Most DoRs balloon to a 12-item checklist that nobody actually checks. Here’s the lean version that catches the issues that matter:

1. The story has a clear “why”

Either the user value is obvious from the title, or the story spells it out. “Add caching layer” without context fails this test. “Reduce API latency on the dashboard p95 from 800ms to 200ms” passes.

The test: can someone who didn’t write the story explain in one sentence why it matters?

2. Acceptance criteria exist and are testable

At least a minimum viable set (three bullets), each of which anyone can judge as met or not met. (See acceptance criteria formats for the full guide.)

The test: read the criteria. Are they statements anyone could check, or are they aspirations?

3. The story is small enough to estimate

If the team can’t agree on points within a couple of rounds of planning poker, the story isn’t ready. It’s either too large (split it — see story splitter), or too unclear (more refinement, or convert to a spike).

The test: can the team produce a confident estimate, not a “shrug, maybe an 8”?

4. External dependencies are identified and unblocked

If the work depends on a design from another team, an API key, a data source, or a decision from a stakeholder — those need to be resolved (or explicitly accepted as risks) before commitment.

The test: list the dependencies. For each, is the answer “we have it” or “we’re choosing to take the risk”?

5. Someone owns it

Either an engineer or a clear group of engineers can take this story end-to-end. “We’ll figure out who when we start” doesn’t pass — that’s the discovery process leaking into the sprint.

The test: in standup on day 1, who would say “I’m working on this”?

That’s it. Five tests. Most stories that pass these don’t surprise the team mid-sprint. Most stories that fail them are the ones that blow up.

What DoR is not

DoR is the most over-engineered agile artefact. Common bloat to refuse:

  • “All technical design must be complete.” No. Design happens during the work for most stories. If a story needs upfront design, it’s a spike, and the spike is the predecessor.
  • “All UX mockups approved by all stakeholders.” No. UX needs enough definition to start, not full sign-off. Sign-off blocks shipping; “enough to start” enables it.
  • “Estimated by the whole team in a planning poker session.” No. Some teams do this; some teams have a smaller refinement crew estimate, then validate at planning. Don’t bake your process into your DoR.
  • “Linked to a JIRA epic with parent-child mapping.” No. That’s a tooling preference. DoR is about whether the story is ready, not whether the ticket has filled in 14 fields.
  • “Has a fully decomposed task list.” No. The team should be allowed to discover tasks as they work. Mandating decomposition encourages fake decomposition (engineers writing tasks they’ll ignore).

The pattern: DoR should test outcomes (can this be safely committed to?), not outputs (has this artefact been produced?). Each item that demands an artefact is a tax. Each item that asks a real question is signal.

Where DoR lives in the workflow

The flow:

  1. Story enters backlog (loose, half-baked, that’s fine).
  2. Refinement brings it toward Ready. Refinement is the meeting where DoR criteria get satisfied.
  3. Story is marked Ready.
  4. Sprint planning picks from Ready stories. Stories not yet Ready are not eligible.
  5. Sprint runs. DoD applies to leaving.

The structural rule: refinement makes things ready. Planning picks from ready things. If your team plans from “things that look interesting”, DoR has nothing to bite on.

(Pairs with the backlog refinement runbook for the meeting format.)
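The flow above is effectively a small state machine, and the structural rule (“planning picks from Ready things”) is just a transition check. A minimal sketch; the status names are the ones from the flow, not a prescribed schema:

```python
# Allowed status transitions for a story, mirroring the five-step flow.
TRANSITIONS = {
    "backlog": {"refining"},
    "refining": {"ready", "backlog"},   # may bounce back if still unclear
    "ready": {"in-sprint"},             # planning picks only from here
    "in-sprint": {"done"},              # DoD governs this last transition
}

def move(status: str, new_status: str) -> str:
    """Enforce the structural rule: only Ready stories enter a sprint."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status
```

The point of the sketch: there is no edge from "backlog" to "in-sprint". If your workflow allows that edge, DoR has nothing to bite on.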

How to introduce DoR without making refinement a gatekeeping ceremony

Two failure modes when teams adopt DoR:

Failure mode 1: DoR becomes pure overhead. Every story gets debated for 20 minutes. The PM grows resentful. Engineers check items mechanically without thinking. Refinement gets longer; nothing else improves.

Failure mode 2: DoR is ignored. Team agrees to a DoR. Stories enter sprints not meeting it. Nobody enforces. The DoR is decorative.

The fix for both: the team can override DoR consciously, but the override is recorded.

Concretely:

  • A story that doesn’t meet DoR can be committed to if the team explicitly accepts the risk and writes down what’s missing.
  • At retro, the team reviews overrides: did the missing piece bite? If yes, what should DoR catch next time? If no, was the criterion necessary?

This converts DoR from a gate into a learning instrument. The team gets to be pragmatic; the team also can’t pretend the same surprises keep happening by accident.

When DoR should be tighter or looser

The standard 5-item DoR fits most teams. Deviations:

Tighter DoR (add criteria) when:

  • The team’s incident rate is high — surprises are biting hard
  • Distributed/async — fewer chances to fix mid-sprint
  • Heavy-compliance domain (healthcare, finance, regulated) — change is expensive
  • Multiple specialisations involved (backend + frontend + ML) — coordination cost is real

Add things like: “design review attached”, “API contract documented”, “test plan present”.

Looser DoR (drop criteria) when:

  • Very small team, very fast iteration — DoR overhead exceeds benefit
  • High discovery work (early-stage product, R&D) — pretending stories are ready is fiction
  • Teams with strong individual ownership — engineers fill gaps as they go

Drop things like the explicit owner, the full set of acceptance criteria, and the dependency check. Keep, at minimum, a clear “why” and a rough sense of size.

DoR for different sprint types

Not every sprint runs on the same kind of work. Adjust:

Feature-development sprint — full 5-item DoR. The whole point is committing to clearly-scoped feature work.

Bug-fix sprint — looser. Many bugs are inherently exploratory. DoR for a bug = “reproduces consistently OR has a clear reproduction step we’ll validate first”.

Discovery sprint — much looser. Most stories are spikes. DoR = “timeboxed and has a clear question to answer”.

Tech-debt sprint — tighter on outcomes, looser on user “why”. DoR = “the after-state is observable” + “the team agrees this is worth doing now”.

Don’t apply the same checklist to all sprint types. The point is sprint outcomes matching commitment, and the questions that protect that vary by sprint type.
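The per-sprint-type adjustments above amount to different criterion sets keyed by sprint type. A hedged sketch of that mapping; the criterion wordings follow the article, the structure itself is illustrative:

```python
# The full five-criterion DoR for feature-development sprints.
FULL_DOR = [
    "clear why",
    "testable acceptance criteria",
    "estimable size",
    "dependencies unblocked",
    "owner assignable",
]

# Which questions protect the commitment, keyed by sprint type.
DOR_BY_SPRINT_TYPE = {
    "feature": FULL_DOR,
    "bug-fix": ["reproduces consistently, or has a reproduction step to validate first"],
    "discovery": ["timeboxed", "clear question to answer"],
    "tech-debt": ["after-state is observable", "team agrees it's worth doing now"],
}

def criteria_for(sprint_type: str) -> list:
    """Unknown sprint types fall back to the full checklist."""
    return DOR_BY_SPRINT_TYPE.get(sprint_type, FULL_DOR)
```

Falling back to the full checklist for unknown sprint types is a conservative default: when in doubt, ask all five questions.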

What goes wrong without DoR

Three patterns to watch for, all symptoms of missing DoR:

“We discovered this only after we started”

The story didn’t have its “why” or its acceptance criteria pinned down. Mid-sprint, the team realises the work is bigger or different. By then, the sprint goal has already slipped.

Fix: criterion 1 and 2 of DoR.

“We were waiting on Marketing/Design/Legal/another team”

The dependency wasn’t identified or wasn’t resolved before commitment. Now the engineer is blocked, the sprint runs out of useful work for them, points get re-shuffled.

Fix: criterion 4 of DoR.

“It turned into an epic”

The story was always too large; the team estimated it as a 5 by averaging guesses. Mid-sprint it becomes obvious it’s a 13+. Either it doesn’t ship, or other work gets dropped to make it ship.

Fix: criterion 3 of DoR (with story splitter when oversized).

If you see two of these in any single retro, your team is missing DoR. The fix is structural, not motivational.

DoR for distributed/async teams

DoR matters more when the team is async. Reasons:

  • Fewer chances to clarify mid-sprint without burning a day waiting for time-zone overlap
  • Hidden context can’t be fixed by tapping someone on the shoulder
  • “We’ll figure it out as we go” turns into “we lost two days because we couldn’t sync”

For async/distributed teams, add at minimum: “the story has all context written down — no oral tradition required to understand it”. This catches the trap where the requester has the context in their head but never wrote it down.

Bottom line

Definition of Done says when work can leave the sprint. Definition of Ready says when work can enter. Most teams have the first; most teams need the second.

Five criteria — clear why, testable acceptance criteria, estimable size, dependencies identified, owner assignable — catch the surprises that wreck most sprints. Don’t bloat the list. Don’t pretend it’s a hard gate. Use overrides consciously and review them at retro.

The teams that adopt DoR seriously usually find they’ve cut sprint chaos by a third in two cycles, and the cost was a slightly tighter refinement meeting.

That’s the trade. Take it.

Stop estimating in hours.

SprintFlint runs your sprints with story points, velocity, capacity, and retros built in. First 300 tickets free, no credit card.