User stories & acceptance criteria
Acceptance criteria are a must for every user story. Write them consistently and most of the ambiguity, and most of the back-and-forth, disappears before the sprint even starts.
A user story is not a specification. It is not a task. It is a reminder to have a conversation.
Most PMs treat user stories as miniature PRDs — cramming requirements, edge cases, and business logic into a two-sentence template. Then they wonder why engineers come back three days later with twelve questions, or worse, build something technically correct but completely wrong.
The problem is not the format. The problem is that nobody taught you what a user story is supposed to do.
What a user story actually is
A user story is a building block. Think of Lego bricks — small, standardised units that combine into larger structures. Multiple user stories combine into an epic. Multiple epics make a release. The story is the smallest unit of user-visible value.
The canonical format:
As a [specific user persona], I want to [action] so that [outcome/value].
Three parts. Each does real work:
- As a forces you to specify which user. Not “a user.” Not “the customer.” A specific persona with specific needs.
- I want to describes the action from their perspective, not yours.
- So that explains the why — the value they get. This is the part most PMs skip. It is also the part that matters most.
Sprint planning. The PM is presenting user stories for a payment flow.
PM: “As a customer, I want to pay through one-click checkout so that I do not waste time filling out card details.”
Dev Lead: “Which customer? A new customer? A returning customer? Do both get one-click?”
PM: “Good point. Returning customers with saved cards.”
QA Lead: “What about customers with multiple saved cards? Do they pick, or do we default to the last used?”
PM: “I... let me update the story.”
Three minutes of questions found two critical gaps. This is the conversation a user story is designed to trigger.
The story is not the deliverable. The conversation it starts is.
The INVEST checklist
Before you consider a story ready for sprint planning, run it through INVEST. This is not optional — it is the quality gate.
| Letter | Criterion | What it means | Red flag if missing |
|---|---|---|---|
| I | Independent | Can be built without depending on another story | Blocked stories pile up in sprint |
| N | Negotiable | Implementation details are open for discussion | Engineers feel like order-takers |
| V | Valuable | Delivers visible value to the user or business | Devs question why they are building it |
| E | Estimable | Team can estimate effort within a reasonable range | Story is too vague or too large |
| S | Small | Fits within a single sprint | Story drags across sprints, never “done” |
| T | Testable | You can write a test to verify it works | QA cannot close the story, arguments follow |
The one teams fail most on is Testable. If you cannot describe how to verify the story is done, you have not finished writing it.
Acceptance criteria: the contract
A user story without acceptance criteria is a wish. Acceptance criteria are the contract between the PM, the developer, and QA. They define “done” — not in abstract terms, but in verifiable conditions.
Two formats work. Pick one and stay consistent across your team.
Format 1: Checklist style
Simple, direct, and good enough for 80% of stories.
Story: As a returning customer with a saved card,
I want to complete checkout in one tap
so that I do not re-enter payment details.
Acceptance Criteria:
- [ ] One-tap checkout button visible only for users with 1+ saved cards
- [ ] If multiple cards saved, default to last-used card
- [ ] Card selection dropdown available on the checkout screen
- [ ] Transaction completes within 3 seconds on 4G connection
- [ ] Order confirmation screen shows masked card number used
- [ ] Works on Chrome, Safari, and Samsung Internet (Android)
Each line is testable. QA can open the app and verify each one independently. No ambiguity.
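The same point can be made in code. Here is a toy sketch, assuming Python; the `CheckoutState` model and the criteria names are invented for illustration, not any real QA framework. It treats each checklist line as an independent, machine-checkable predicate:

```python
# Toy model: each acceptance criterion becomes a yes/no predicate over app state.
# All names here (CheckoutState, criteria) are illustrative, not a real framework.
from dataclasses import dataclass, field

@dataclass
class CheckoutState:
    saved_cards: list = field(default_factory=list)  # e.g. ["**** 4242"]
    last_used_card: str = ""
    one_tap_visible: bool = False
    default_card: str = ""

def criteria(state: CheckoutState) -> dict:
    """One entry per AC line; QA (or CI) can check each one independently."""
    return {
        # "One-tap checkout button visible only for users with 1+ saved cards"
        "one-tap only with saved cards":
            state.one_tap_visible == (len(state.saved_cards) >= 1),
        # "If multiple cards saved, default to last-used card"
        "default to last-used card":
            len(state.saved_cards) < 2 or state.default_card == state.last_used_card,
    }

state = CheckoutState(
    saved_cards=["**** 4242", "**** 1881"],
    last_used_card="**** 1881",
    one_tap_visible=True,
    default_card="**** 1881",
)
assert all(criteria(state).values())
```

In practice a team would wire checks like these into automated UI or API tests; the point is only that every well-written AC line reduces to a check that passes or fails, with no judgment call in between.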
Format 2: Gherkin (Given-When-Then)
More formal. Use this for complex business logic where the sequence of events matters — payment flows, eligibility rules, state transitions.
Scenario: Returning customer with one saved card
Given I am a logged-in customer with exactly one saved card
When I tap "Buy Now" on the product page
Then the order is placed using my saved card
And I see a confirmation screen within 3 seconds
Scenario: Returning customer with multiple saved cards
Given I am a logged-in customer with 2+ saved cards
When I tap "Buy Now" on the product page
Then I see a card selector defaulting to my last-used card
And I can switch cards before confirming
Gherkin has a real advantage: it maps directly to automated test scripts. If your QA team uses Cucumber or similar tools, writing Gherkin acceptance criteria means your stories double as test specifications. That is not extra work — that is removing a handoff.
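To make that mapping concrete, here is a hand-rolled toy step matcher in Python. It is a sketch only; a real team would use Cucumber, behave, or pytest-bdd rather than anything like this, and the step functions and card data are invented for illustration:

```python
import re

# Toy Given/When/Then registry. Illustrative only -- real projects use
# Cucumber, behave, or pytest-bdd instead of hand-rolling a matcher.
STEPS = []

def step(pattern):
    """Register a function to run when a Gherkin line matches `pattern`."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"I am a logged-in customer with exactly one saved card")
def one_saved_card(ctx):
    ctx["cards"] = ["**** 4242"]

@step(r'I tap "Buy Now"')
def tap_buy_now(ctx):
    ctx["order_card"] = ctx["cards"][0]  # stub: place the order with the saved card

@step(r"the order is placed using my saved card")
def assert_order_placed(ctx):
    assert ctx["order_card"] == ctx["cards"][0]

def run(scenario_lines):
    """Execute each line's matching step; unmatched lines are skipped (toy behaviour)."""
    ctx = {}
    for line in scenario_lines:
        text = line.split(" ", 1)[1]  # drop the Given/When/Then/And keyword
        for pattern, fn in STEPS:
            if pattern.search(text):
                fn(ctx)
                break
    return ctx

ctx = run([
    "Given I am a logged-in customer with exactly one saved card",
    'When I tap "Buy Now" on the product page',
    "Then the order is placed using my saved card",
])
```

Each Gherkin line binds to exactly one function, which is why acceptance criteria written this way translate into test automation with almost no extra interpretation.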
Notice what happened in the sprint-planning exchange earlier. The developer found a gap. The PM made a decision in real time. QA updated their test plan on the spot. The acceptance criteria grew during the conversation, and that is fine. Stories are living documents, not contracts carved in stone.
The five anti-patterns
I have reviewed thousands of user stories across Pragmatic Leaders cohorts and my own teams. These five mistakes appear in at least half of them.
1. The vague persona
Bad: “As a user, I want to search products so that I can find what I need.”
Which user? A first-time visitor browsing casually? A repeat buyer who knows the exact SKU? A wholesale purchaser filtering by MOQ? Each of these people needs a different search experience. “A user” is not a persona. It is a placeholder for thinking you have not done.
Better: “As a wholesale buyer on TradeKart, I want to filter products by minimum order quantity so that I only see items I can purchase at my required volume.”
2. The technical story disguised as a user story
Bad: “As a developer, I want to migrate the database to PostgreSQL so that we have better query performance.”
This is a technical task. It is valid work, but it is not a user story. Users do not care about your database. If the migration has user-facing impact, describe that impact: “As a seller, I want my product search results to load within 1 second so that I do not lose buyers to slow page speed.” Track the PostgreSQL migration as an engineering task linked to this story.
3. The missing “so that”
Bad: “As a seller, I want to upload product images.”
So that what? So that buyers can see what they are purchasing? So that listings rank higher on search? So that they meet marketplace compliance requirements? The “so that” clause determines priority, scope, and success metrics. Without it, the developer builds the upload feature and the PM complains it does not have image compression — because in their head, the goal was search ranking, which requires optimised images.
4. The epic pretending to be a story
Bad: “As a new user, I want to complete onboarding so that I can start using the product.”
That is an epic, not a story. Onboarding might involve account creation, email verification, profile setup, a product tour, and initial data import. Each of those is a story. If a story cannot be built, tested, and deployed within a sprint, it is too big.
5. No acceptance criteria at all
Disturbingly common. The PM writes the story, moves it to the sprint backlog, and assumes the developer “gets it.” The developer makes twenty micro-decisions the PM never considered. QA has no way to verify the output. Arguments erupt on demo day.
Every story gets acceptance criteria. No exceptions.
Practice: rewrite these stories
Take these poorly written user stories and rewrite them with proper personas, clear value statements, and at least three acceptance criteria each.
- “As a user, I want notifications so I know what’s happening.”
- “As an admin, I want a dashboard.”
- “As a customer, I want to track my order so that I can see it.”
For each rewrite, ask yourself:
- Can the developer build this without asking me a single clarifying question?
- Can QA write test cases from the acceptance criteria alone?
- Does the “so that” clause explain why this matters to this person?
If the answer to any of these is no, rewrite again.
Stories that ship vs stories that sit
The gap between a written story and a shipped story is not engineering effort. It is clarity. Here is what separates stories that move through the pipeline from stories that stall.
Stories that ship have a single, verifiable outcome. One story, one behaviour change, one thing QA can test. They reference specific screens, specific flows, specific data states.
Stories that sit are bundled. They say “and also” three times. They reference “the user experience” without defining what changes. Engineers estimate them as 8+ story points, which means nobody actually knows how long it will take.
A practical test: if your story requires more than one pull request to implement, split it.
The story-writing workflow
This is the process I use and teach. It works for teams of 4 and teams of 40.
1. Start from the user journey. Map the flow first. Each step in the journey is a candidate for one or more stories.
2. Write the “so that” first. Before you write the persona or the action, write the value. If you cannot articulate the value, the story is not ready.
3. Add acceptance criteria before grooming. Never bring a story to sprint planning without AC. The conversation during grooming should refine the AC, not create them from scratch.
4. Size against the INVEST checklist. If a story fails any of the six criteria, fix it before it enters the backlog.
5. Let the AC evolve. New edge cases will surface during development. Update the AC. Communicate the change. This is normal, not a failure of planning.
Test yourself
You are the PM for a food delivery app in Bangalore. Your designer hands you a mockup for a new “Reorder” feature. Your tech lead asks for user stories before sprint planning tomorrow morning. You have 2 hours.
You look at the mockup. It shows a “Reorder” button on past orders, a confirmation screen, and a payment step. How do you approach writing the stories?
You are PM at Zoho's CRM team. The head of enterprise sales has been asking for a detailed spec document for every story — 2-3 pages covering all edge cases, field validations, error states, and integration behaviors — before engineering starts work. She argues that Zoho's enterprise customers have zero tolerance for inconsistency and that thin user stories produce bugs. Your engineering lead argues that writing exhaustive specs before discovery produces the wrong details — developers end up implementing documented edge cases that real users never hit, while missing edge cases that matter. Sprint planning is in two days.
The call: Do you write detailed upfront specs as the sales head demands, or maintain thin user stories with acceptance criteria and resolve edge cases during development?
Where to go next
- Write the document that holds the stories: Writing PRDs That Engineers Read
- Understand how to prioritise which stories matter: Prioritization Frameworks
- Learn the metrics that tell you if the story worked: Metrics & KPIs
- See how stories connect to the bigger picture: Product Vision & Strategy