
continuous discovery

Discovery is not something you finish. It is something you do every week, or you are building on assumptions that are getting older by the day.
Talvinder Singh, from a Pragmatic Leaders workshop on product discovery

There is a lie that product teams tell themselves. It goes like this: “We will do research first, then we will build.” A neat, sequential process. Discovery phase, then delivery phase. Research report, then PRD.

This is how most product teams in India still operate. A quarterly “research sprint” where someone talks to fifteen users, writes a deck, presents findings, and then the team builds for three months based on a document that is already going stale.

The problem is obvious once you say it out loud: by the time you finish a four-week research sprint, the market has moved. Competitors have shipped. User expectations have shifted. The insights you gathered in week one are already decaying by week four. And the team has started building based on assumptions anyway, because nobody can sit idle for a month waiting for a research report.

“Discovery phase” is a project management concept applied to a product management problem. It does not work.

Why quarterly research fails

I have seen this pattern destroy product outcomes at companies of every size. Here is how it plays out:

Weeks 1-4: The PM runs a research sprint. Talks to users, analyses data, writes a findings document. The team is waiting, or worse, building something else in parallel that has nothing to do with the research.

Week 5: The PM presents findings. Stakeholders debate. The roadmap gets “updated” — which usually means the same features get reordered, not replaced.

Weeks 6-16: The team builds. No user contact. No validation of assumptions made during the build. When questions come up (“should the filter be on the left or right?”), they are answered by the loudest person in the room, not by evidence.

Week 17: The feature launches. Adoption is low. The PM says “we need to do more research.” The cycle repeats.

The failure is structural, not individual. When you separate discovery from delivery in time, you create a gap where assumptions grow unchecked. A PM who talked to users in January and ships in April is building on three-month-old understanding. In Indian tech, where markets shift quarterly, that is a lifetime.

Continuous discovery is a weekly habit

The alternative is not “more research.” It is research that never stops and never takes over.

Continuous discovery means the PM has direct contact with users or customers every single week. Not a formal research project. Not a sixty-minute structured interview with a screener and a discussion guide and a consent form. A fifteen-minute conversation. A quick screen-share. A five-minute follow-up on a support ticket.

The rhythm looks like this:

  • Monday: Review last week’s signals — support tickets, NPS comments, sales call notes, product analytics.
  • Tuesday or Wednesday: One or two fifteen-minute user conversations. Targeted questions, not open-ended “tell me about your experience” sessions.
  • Thursday: Update your opportunity map with anything new you learned.
  • Friday: Feed validated opportunities into the delivery backlog for next sprint.

This is not a burden. This is thirty minutes a day. Less time than most PMs spend in status meetings that produce no insight.

The Opportunity Solution Tree

Teresa Torres popularised the Opportunity Solution Tree, and it is the single most useful framework for organising continuous discovery work. Not because it is clever — because it forces you to connect research to outcomes.

The structure is simple:

Outcome (top of the tree): What business or product result are we trying to achieve? Not a feature. A measurable change. “Increase 30-day retention from 40% to 55%.”

Opportunities (second level): What user problems, needs, or desires could we address to move that outcome? These come from your weekly discovery — the patterns you see in user conversations, support data, and behavioural analytics. “New users do not understand the value within the first session.” “Power users churn when they hit the collaboration ceiling.”

Solutions (third level): What could we build or change to address each opportunity? Multiple solutions per opportunity. “Guided onboarding flow.” “Interactive product tour.” “Simplified first-run experience.” You want at least two or three solutions per opportunity so you are choosing, not defaulting.

Experiments (fourth level): How do we test each solution cheaply before committing engineering time? A prototype test, a fake door, a concierge version, a survey. The experiment validates the solution before you build it at full scale.

The tree is not a one-time exercise. You update it weekly. New opportunities get added as you learn from users. Old opportunities get deprioritised when evidence shows they do not matter. Solutions get pruned when experiments fail. The tree is alive — it is your map of what you know and what you are testing.
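The four levels read naturally as a small tree data structure. Here is a minimal sketch in Python using the retention example from this section — illustrative only, not a prescribed tool; the class names and the pruning rule are my own:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    description: str            # e.g. "prototype test with five users"
    result: str = "pending"     # "pending", "passed", or "failed"

@dataclass
class Solution:
    name: str
    experiments: list = field(default_factory=list)

@dataclass
class Opportunity:
    problem: str                # a user problem statement, from weekly discovery
    solutions: list = field(default_factory=list)

@dataclass
class Outcome:
    metric: str                 # a measurable change, never a feature
    opportunities: list = field(default_factory=list)

tree = Outcome(
    metric="Increase 30-day retention from 40% to 55%",
    opportunities=[
        Opportunity(
            problem="New users do not understand the value within the first session",
            solutions=[
                Solution("Guided onboarding flow",
                         [Experiment("Prototype test with five users")]),
                Solution("Simplified first-run experience",
                         [Experiment("Fake door on the current first screen")]),
            ],
        ),
    ],
)

def prune(outcome: Outcome) -> None:
    """Weekly update: drop solutions whose experiments have all failed."""
    for opp in outcome.opportunities:
        opp.solutions = [
            s for s in opp.solutions
            if not s.experiments
            or any(e.result != "failed" for e in s.experiments)
        ]
```

The point is the weekly update, not the code: new opportunities get appended as you learn, failed solutions get pruned, and the outcome at the top stays put.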

What breaks about OSTs in Indian startups

I teach OSTs in every Pragmatic Leaders cohort. They work. But they break in specific, predictable ways in the Indian context, and you need to know where the fractures happen.

Problem 1: PMs do not have direct user access. In many Indian startups, the founder controls the customer relationship. Or the sales team guards their accounts. The PM is told to “talk to sales” instead of talking to users. This is a structural barrier, and wishing it away does not help.

The workaround: You do not need formal user interviews to do continuous discovery. Support tickets are discovery data. NPS verbatims are discovery data. Customer success call recordings are discovery data. Sales call recordings — especially the objections — are some of the richest discovery data you will ever find. The objections tell you what the product is failing to communicate or deliver. Start there.

// thread: ##product-discovery — PM sharing a workaround for continuous discovery when direct user access is limited
PM I mapped last month's support tickets by theme. Top 3: confusion about pricing tiers (34 tickets), failed CSV imports (28), and 'how do I add team members' (22).
CS Lead Those are just support issues though, not product insights.
PM 22 people could not figure out how to add team members. That IS a product insight. Our collaboration onboarding is broken.
CS Lead Fair point. We also get that question on every single onboarding call.
PM Can I listen to three of those call recordings this week? I want to hear where exactly they get stuck.
CS Lead Sure, I'll share the Zoom links.
PM This is our continuous discovery pipeline now. Support themes → targeted listening → opportunity map updates. Every week.

Problem 2: The outcome is handed down, not chosen. In Western product orgs, the PM often has autonomy to pick which outcome to pursue. In many Indian companies, the CEO or VP says “increase revenue 30%” and the PM is expected to figure out the rest. This means the top of your OST is fixed. That is fine — the tree still works. You just start from the given outcome and discover opportunities underneath it.

Problem 3: “We do not have time for experiments.” This is the most common objection, and it is almost always wrong. You do not have time for three-month experiments. You do have time for a two-day prototype test, a twenty-user survey, or a fake door that takes an afternoon to build. Experiments at the bottom of the OST should be small and fast — days, not weeks.

The dual-track setup

Continuous discovery works best when the team operates in two parallel tracks. Not two separate teams — the same team, switching between two modes.

Discovery track: The PM (and often a designer) runs weekly user touchpoints, analyses data, updates the opportunity tree, and designs small experiments. This produces validated opportunities — problems worth solving, with evidence.

Delivery track: The engineering team builds solutions for opportunities that have already been validated. They are not building on assumptions. They are building on evidence from the discovery track.

The two tracks feed each other. Discovery produces opportunities. Delivery produces shipped solutions that generate new usage data, which feeds back into discovery.

// scene:

Sprint planning. The PM is proposing a change to how the team allocates time.

PM: “I want to reserve two hours every Wednesday for user conversations. Fifteen minutes each, four users. The designer joins me.”

Engineering Manager: “We tried user research before. It turned into a two-month project and we shipped nothing.”

PM: “This is not a research project. This is a weekly habit. Two hours. Four conversations. I update the opportunity tree on Thursday, and anything validated goes into the next sprint.”

Engineering Manager: “What if you don't learn anything useful?”

PM: “Then we have lost two hours and gained confidence that we are building the right thing. That is not nothing.”

Engineering Manager: “And what if you learn something that contradicts what we are building right now?”

PM: “Then we caught it in a week instead of discovering it after launch. Which would you prefer?”

The EM pauses. He was on the team that shipped the redesign nobody wanted six months ago. He agrees to try it for one month.

// tension:

The PM is not asking for permission to do research. She is asking for two hours a week to prevent the team from wasting two months.

The key discipline: the discovery track is always one to two sprints ahead of the delivery track. You are not discovering and building the same thing simultaneously. You are discovering what comes next while building what was validated last cycle.

What discovery looks like at each career stage

If you are 0-2 years in: Your job is to sit in on discovery. Join user calls. Listen to support recordings. Read NPS comments. Do not run the discovery system yet — learn what good looks like. Your contribution: take notes, spot patterns, bring them to your senior PM. The habit of weekly user contact starts here, even if you are just observing.

If you are 3-5 years in: You run your own weekly touchpoints. You own an opportunity tree for your product area. You recruit users, run conversations, synthesise findings, and feed opportunities into sprint planning. You are the bridge between user reality and engineering capacity. The mistake at this stage: running discovery but not connecting it to delivery. An opportunity tree that does not change the backlog is an art project.

If you are 5+ years in: You design the discovery system for your team or organisation. You decide how many user touchpoints per week, who runs them, how findings get shared, and how opportunities flow into planning. You coach junior PMs on interview technique and synthesis. You fight the organisational battles — convincing leadership that weekly user contact is not a waste of engineering-adjacent time, but the thing that prevents wasted engineering time.

Sales calls as a discovery proxy

This is the hack that most Indian PMs underuse.

Your sales team talks to potential customers every day. Those conversations are full of discovery gold — but nobody treats them that way. The sales team hears objections and tries to overcome them. The PM should hear those same objections and ask: what is our product failing to do?

Three things to listen for in sales call recordings:

1. The objection that keeps recurring. If every third prospect says “we already use Excel for this,” that is not a sales problem. That is a product positioning problem. Your product is not communicating why it is better than the status quo.

2. The feature question that signals a gap. “Can it do X?” asked by multiple prospects means X is table stakes in your market and you are missing it. Or it means your marketing promises something that the product does not deliver.

3. The “almost bought” moment. The prospect who went through the entire demo, loved it, and then did not buy. The reason they gave the sales team is rarely the real reason. But the recording often reveals it — a moment of hesitation, a question that was not answered well, a competitor comparison that landed.

Ask your sales team to flag three calls a week that you can listen to. Thirty minutes of listening will teach you more than a hundred survey responses.

Test yourself

// interactive:
The Feature Request Avalanche

You are a PM at a B2B SaaS company. Your VP walks into your Monday standup and says: “We have 40 feature requests from customers in the last quarter. I need you to validate them and come back with a prioritised list by end of month.” You have three weeks.

Your backlog has 40 unstructured feature requests from sales, support, and customer success. Some are one-liners. Some have detailed descriptions. None have been verified with the actual users who requested them.

Build your first Opportunity Solution Tree

// exercise: · 30 min
Your first OST

Pick one business outcome your team is currently trying to move. Write it at the top of a page or whiteboard.

Step 1: Define the outcome precisely. Not “improve retention.” Something measurable: “Increase 30-day retention for users who signed up via organic search from 32% to 45%.”

Step 2: List three opportunities. What user problems, needs, or pain points could you address to move that outcome? Draw on what you already know — support tickets, user feedback, analytics, sales objections. Write each as a user problem statement: “New users from organic search expect [X] but experience [Y].”

Step 3: For each opportunity, brainstorm two solutions. Not just the obvious one. Force yourself to think of an alternative. If the first solution is “add a guided tour,” the second might be “simplify the first screen so a tour is unnecessary.”

Step 4: For one solution, design a fast experiment. Something you could run in under a week. A prototype test with five users, a fake door, a survey targeting the specific assumption.

You now have a living document. Next week, update it with what you learned from the experiment and from any user contact. The tree grows with your understanding.

// learn the judgment

You are PM for Koo's creator tools. You have set up weekly user interviews with top creators. After 4 weeks, you notice that every creator session drifts into the same topic: payment delays on creator monetization. You weren't supposed to focus on payments—your squad owns creator tools, not payments.

The call: Do you keep ignoring payments in your interviews (not your scope) or do something else?

