Data-informed decision making
The best product decisions I have seen were not made by people who had the most data. They were made by people who knew which data to ignore.
Most PMs fall into one of two camps. The first camp worships data. They will not move without statistical significance, they build dashboards for everything, and they confuse measurement with understanding. The second camp ignores data. They trust their gut, cite Steve Jobs, and make confident decisions based on vibes.
Both camps ship bad products. The first camp optimises for what they can measure and misses what matters. The second camp gets lucky sometimes and goes catastrophically wrong the rest of the time.
The PMs who consistently make good decisions operate in a third space. They are data-informed, not data-obsessed. They use data as one input alongside judgment, user context, and strategic direction. And they know — before they look at a single dashboard — what kind of decision they are making and how much data it actually requires.
“Data-informed” is not a euphemism for “less rigorous”
When I say data-informed instead of data-driven, I am not softening the language. I am making a substantive claim about how good decisions actually work.
Data-driven implies that data decides. You collect the numbers, the numbers point to an answer, and you follow them. This sounds disciplined. In practice, it produces three failure modes:
Analysis paralysis. The team will not ship because the A/B test has not reached significance. The test needs 40,000 users and you have 3,000. So you wait. And wait. Meanwhile, a competitor launches something similar and captures the market. I have seen this pattern at no fewer than six Indian startups. Razorpay did not wait for perfect data before building payment links. They talked to merchants, understood the pain, and shipped fast.
Measuring what is easy, not what matters. If your decisions are “driven” by data, you will gravitate toward decisions where data exists. Feature optimisation, button colour, checkout flow tweaks. These are measurable. But the decisions that determine whether your company wins or loses — which market to enter, which user segment to bet on, whether to build a platform or stay vertical — these rarely have clean data. A data-driven PM avoids them. A data-informed PM makes them anyway, using the best available evidence plus judgment.
Goodhart’s Law, everywhere. “When a measure becomes a target, it ceases to be a good measure.” If your culture says decisions must be data-driven, people will optimise the numbers rather than the outcomes. Swiggy’s delivery executives discovered that the app tracked “time to accept order” but not “time standing idle at the restaurant.” So they accepted orders instantly and waited at restaurants. The metric looked great. The system did not improve.
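The first of those failure modes, analysis paralysis, is easy to quantify. Here is a back-of-envelope sketch of the standard two-proportion sample-size formula (the 10% baseline and the one-percentage-point lift are illustrative assumptions, not figures from any company named above):

```python
import math

def users_per_variant(p1: float, p2: float) -> int:
    """Approximate users needed per arm for a two-proportion z-test
    (alpha = 0.05 two-sided, power = 0.80)."""
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    n = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting a lift from 10% to 11% conversion needs roughly 14,700 users
# per variant -- around 29,000 in total, an endless wait for a product
# that has 3,000 users.
print(users_per_variant(0.10, 0.11))
```

When the arithmetic says the wait will be measured in months, the honest move is to reclassify the decision, not to keep waiting.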
Data-informed means data is a critical input, but it is not the only input. Your judgment matters. Your understanding of the user matters. Your strategic context matters. And knowing which of these to weight more heavily for a given decision is the actual skill.
The three types of product decisions
Not every decision needs the same amount of data. The mistake is treating all decisions the same way. Here is how I teach this:
| Decision type | Data weight | Judgment weight | Examples |
|---|---|---|---|
| Optimisation decisions | High (70-80%) | Low (20-30%) | Pricing changes, A/B test conclusions, checkout flow improvements, notification timing |
| Allocation decisions | Medium (40-60%) | Medium (40-60%) | Feature prioritisation, team staffing, resource allocation between projects, quarterly roadmap |
| Bet decisions | Low (20-30%) | High (70-80%) | New market entry, platform pivots, building for a new user segment, vision-level product direction |
Optimisation decisions are about making something you already have work better. The problem is defined. The metric is clear. The solution space is constrained. Here, data should dominate. Run the experiment. Read the numbers. Act on what they say. If your A/B test shows a 12% improvement in checkout completion with p < 0.05, ship it. This is where the “data-driven” mindset works.
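For a decision like that, the read-out can be almost mechanical. A minimal sketch using statsmodels, with counts invented to mirror the 12% example rather than real checkout data:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: checkout completions in variant vs control.
completions = [2240, 2000]     # 11.2% vs 10.0%: a 12% relative lift
sessions = [20_000, 20_000]

z_stat, p_value = proportions_ztest(completions, sessions)
lift = completions[0] / sessions[0] / (completions[1] / sessions[1]) - 1

print(f"relative lift = {lift:.1%}, p = {p_value:.4f}")
if p_value < 0.05 and lift > 0:
    print("Ship it.")   # statistically and practically significant
```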
Allocation decisions are about where to invest limited resources. Data can inform these — usage numbers, market sizing, cost estimates — but the data is always incomplete. You are making tradeoffs between incommensurable things. Should you invest in retention or acquisition? The data can tell you where each metric stands. It cannot tell you which one matters more for your company at this stage. That is a strategic judgment.
Bet decisions are about creating something new. When PhonePe decided to build a merchant payments ecosystem, there was no A/B test to run. When Dream11 bet on fantasy cricket before the IPL became a cultural phenomenon, there was no dashboard to consult. These decisions require conviction formed from pattern recognition, user empathy, and market understanding. Data plays a supporting role — market size estimates, early signal from analogous products, qualitative research. But the decision is fundamentally a judgment call.
The PM’s first job in any decision is to classify it. Are you optimising, allocating, or betting? Get this wrong and you will either paralyse a bet with demands for data that cannot exist, or wing an optimisation decision that has a clear empirical answer.
Product review at a B2B SaaS company in Bangalore. The VP wants to build an analytics module.
VP Product: “Our enterprise customers keep asking for built-in analytics. I want to prioritise this for Q3.”
PM: “I ran the numbers. Only 14% of enterprise accounts have requested this in support tickets. Our current retention rate is 91% without it. The data does not support prioritising this over the onboarding improvements we scoped.”
VP Product: “The data is showing you what happened. I am telling you what is about to happen. Three deals this quarter lost to competitors who have analytics. Our sales pipeline is shifting.”
PM: “That is anecdotal. Three deals is not a pattern.”
VP Product: “It is a pattern when those three deals were our largest prospects and the same objection came up in all three.”
The PM was treating an allocation decision like an optimisation decision. The VP was not ignoring data — she was reading a weak signal that the PM’s dashboard could not capture.
The PM had the data. The VP had the market context. Neither was wrong — but the PM’s framework for the decision was.
The error is a common one: applying optimisation-level evidence standards to a strategic allocation decision. 14% of support tickets is not the right dataset. Competitive loss reasons from your three largest prospects are a qualitative signal that demands attention, even without statistical significance.
When your data is lying to you
The Data Interpretation chapter covers the statistical traps in detail — Simpson’s paradox, survivorship bias, base rate neglect. Here I want to focus on the judgment failures that happen even when the data is technically correct.
Your power users are not representative. This is the most common trap in product analytics. You survey your users about a feature. 78% love it. But who responds to in-app surveys? Power users. The people who use your product enough to encounter the survey prompt, care enough to respond, and are engaged enough to have an opinion. The casual user who opened the app twice and found the feature confusing? They are not in your sample. At Meesho, early product decisions based on power-seller feedback almost missed the needs of first-time sellers from tier-3 towns — a segment that turned out to be their growth engine.
You are optimising the metric, not the outcome. A team at a lending app optimised for “loan application completion rate.” They simplified the form, removed fields, auto-filled data. Completion rate went up 23%. Approval rate went down 31%. They had made it easier for unqualified borrowers to complete the application, which increased load on the underwriting team and did not improve disbursements. The metric improved. The business did not.
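One way to institutionalise this lesson is to attach guardrail metrics to every experiment and block the ship decision when a guardrail regresses. A minimal sketch; the metric names and thresholds are hypothetical, modelled on the lending-app story:

```python
# Relative changes vs control, from a hypothetical experiment read-out.
experiment_results = {
    "application_completion_rate": +0.23,    # primary metric: up 23%
    "approval_rate": -0.31,                  # guardrail: down 31%
}
guardrail_floors = {"approval_rate": -0.02}  # largest tolerated drop

def ship_decision(results: dict, floors: dict) -> str:
    breaches = [m for m, floor in floors.items() if results.get(m, 0.0) < floor]
    return f"HOLD (guardrail breach: {breaches})" if breaches else "SHIP"

print(ship_decision(experiment_results, guardrail_floors))
# HOLD (guardrail breach: ['approval_rate']) -- the metric improved,
# the business did not.
```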
Short-term data hides long-term damage. You can boost daily active users by sending aggressive push notifications. DAU goes up for two weeks. Then uninstalls spike. Then your app store rating drops. Then organic acquisition slows. The 14-day data said “this works.” The 90-day data said “this destroyed trust.” Most teams review data on weekly or fortnightly cycles. The damage shows up on quarterly cycles.
Qualitative signals beat quantitative ones for new problems. When CRED was trying to understand why high-net-worth users were not adopting their rewards marketplace, the quantitative data showed low click-through rates on reward cards. That told them the symptom. It took user interviews to discover the cause: these users found the rewards aspirationally mismatched. They did not want Rs 200 off on a pizza delivery. They wanted concierge experiences. No amount of A/B testing on card layouts would have surfaced this insight.
The HiPPO problem (and the actual fix)
HiPPO stands for Highest Paid Person’s Opinion. It describes what happens when a senior leader overrides data with their gut feeling, and the team complies because of the power dynamic.
The standard advice is “show them the data.” This does not work. Here is why: the HiPPO is not making a data-free decision. They have their own data — years of pattern recognition, relationships with customers, context about the market that does not live in your dashboard. When they override your A/B test, they are not being irrational. They are weighting different evidence than you are.
The fix is not more dashboards. The fix is framing the decision as an explicit bet.
A bet has four components:
- The hypothesis. “We believe X will cause Y.”
- The evidence for. What data, research, or experience supports this?
- The evidence against. What data, research, or experience contradicts this?
- The reversibility. If we are wrong, how quickly can we course-correct, and what does it cost?
When a VP says “I think we should build feature X despite the data suggesting otherwise,” do not argue about who is right. Instead, say: “Let us write this down as a bet. What are we betting on, what evidence supports it, what would tell us we are wrong, and how long should we run it before we check?”
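Written down, the analytics-module bet from the earlier scene might look like this. A minimal sketch: the structure and the example values are illustrative, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Bet:
    hypothesis: str            # "We believe X will cause Y"
    evidence_for: list[str]
    evidence_against: list[str]
    reversibility: str         # how fast we can course-correct, and the cost
    failure_criteria: str      # the pre-agreed signal that we were wrong
    review_date: date          # when we check, no matter what

analytics_bet = Bet(
    hypothesis="Built-in analytics will stop enterprise losses to competitors",
    evidence_for=["Three largest prospects cited missing analytics as the loss reason"],
    evidence_against=["Only 14% of enterprise accounts requested it",
                      "91% retention without it"],
    reversibility="One squad for a quarter; the module can be sunset behind a flag",
    failure_criteria="No lift in competitive win rate after two quarters",
    review_date=date(2026, 1, 15),
)
```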
This works for three reasons. First, it removes ego from the conversation. You are not challenging the VP’s authority — you are helping structure the decision. Second, it creates accountability. If the bet fails, the pre-agreed failure criteria make that clear without assigning blame. Third, it often improves the decision. The act of writing down assumptions forces the VP to articulate the signal they are reading, which sometimes reveals that the signal is weaker than they thought.
I have seen this approach defuse dozens of HiPPO situations at companies in the Pragmatic Leaders network. It does not always prevent bad decisions. But it always makes the reasoning transparent.
The India data problem
If you are building products for the Indian market using a Silicon Valley analytics playbook, you are flying partly blind. The Indian market has specific data challenges that most analytics frameworks do not account for.
WhatsApp is a data-dark channel. A significant share of Indian commerce — from real estate discovery to grocery ordering to B2B procurement — happens on WhatsApp. Your attribution model cannot see it. Your funnel analytics cannot track it. When a Flipkart seller shares a product link on WhatsApp and it drives 200 orders, your dashboard attributes those to “direct traffic.” You are making decisions about channel effectiveness with a massive blind spot.
Multiple payment methods break attribution. A single transaction might start with a UPI intent, fail, switch to a debit card, fail again, and complete via net banking. Your payment analytics might count this as three failed attempts and one success. Or it might count it as one success with the attribution going to net banking, even though the user’s intent was UPI. At Razorpay, building accurate payment attribution required custom instrumentation that treated the entire payment session as one unit — not something off-the-shelf analytics handles.
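In spirit, that instrumentation groups every attempt in a checkout session under one session ID and derives both the intended and the completing method. A toy sketch; the event shape is invented, not Razorpay’s actual schema:

```python
from collections import defaultdict

# Hypothetical event log for one checkout session: UPI intent fails,
# debit card fails, net banking completes.
events = [
    {"session_id": "s1", "method": "upi",        "status": "failed"},
    {"session_id": "s1", "method": "debit_card", "status": "failed"},
    {"session_id": "s1", "method": "netbanking", "status": "success"},
]

sessions = defaultdict(list)
for event in events:
    sessions[event["session_id"]].append(event)

for session_id, attempts in sessions.items():
    succeeded = any(a["status"] == "success" for a in attempts)
    intended = attempts[0]["method"]   # the user's first choice
    completed_via = next((a["method"] for a in attempts if a["status"] == "success"), None)
    print(session_id, "success" if succeeded else "failed",
          f"intended={intended}", f"completed_via={completed_via}")

# One session, one success -- with both the intended method and the
# completing method preserved, instead of three disconnected attempts.
```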
Shared devices skew user metrics. In many Indian households, one phone serves multiple family members. Your “daily active user” is actually three people. Your “single user journey” is actually three interleaved journeys. Your personalisation model is confused because the same device searches for cricket scores, saree designs, and school textbooks in the same session. Zepto and other quick-commerce apps in India have had to build household-level models alongside individual user models for this reason.
Festive season spikes break ML models. Diwali, Navratri, Eid, Onam — India’s festival calendar creates demand spikes that do not exist in Western markets. ML models trained on “normal” data produce wildly wrong predictions during these periods. Flipkart’s Big Billion Days demand is 10-20x normal. If your recommendation model or inventory forecasting model does not have festival-specific adjustments, it will fail exactly when it matters most.
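The standard mitigation is to give the model explicit calendar features, so festival weeks stop looking like anomalies. A minimal pandas sketch; the dates are placeholders, not a real festival calendar:

```python
import pandas as pd

festivals = pd.to_datetime(["2024-10-31", "2024-11-01"])  # placeholder dates

df = pd.DataFrame({"date": pd.date_range("2024-10-20", periods=20, freq="D")})
df["is_festival"] = df["date"].isin(festivals).astype(int)
# Days until the next festival (-1 once the window has passed), so the
# model can learn the pre-festival demand ramp, not just the spike day.
df["days_to_next_festival"] = df["date"].apply(
    lambda d: min(((f - d).days for f in festivals if f >= d), default=-1)
)
```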
Tier-2/3 user behaviour is structurally different. Network latency is higher. Device memory is lower. Data plans are limited. Users are more likely to be first-time internet users. The onboarding flow that works in Bangalore does not work in Bhopal. Your aggregate metrics hide this because Bangalore generates more events. When Meesho focused specifically on tier-3 and tier-4 metrics — separated from metro data — they discovered that their checkout abandonment rate was 3x higher in those tiers, driven almost entirely by page load times.
The lesson: segment everything by tier, device, and connectivity. National aggregates are fiction in a market this diverse.
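In practice, the segmentation is one groupby away. A toy illustration with invented numbers that mirror the shape of the Meesho finding:

```python
import pandas as pd

sessions = pd.DataFrame({
    "city_tier":   ["metro"] * 6 + ["tier3"] * 3,
    "page_load_s": [1.2, 1.4, 1.1, 1.6, 1.3, 1.5, 6.8, 7.4, 5.9],
    "abandoned":   [0, 0, 1, 0, 0, 0, 1, 1, 0],
})

# The national aggregate blends two very different populations...
print(f"overall abandonment: {sessions['abandoned'].mean():.0%}")

# ...segmenting by tier surfaces the real story: abandonment tracks load time.
print(sessions.groupby("city_tier").agg(
    abandonment_rate=("abandoned", "mean"),
    median_load_s=("page_load_s", "median"),
))
```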
How to use data at each career stage
The relationship between a PM and data changes as you grow. What looks like good practice at one level is a limitation at the next.
0-2 years: Learn to ask for data before every decision.
Your instinct at this stage is to build what stakeholders ask for. The first discipline is to pause and ask: what does the data say? Before scoping a feature, check usage data for the problem it solves. Before prioritising a bug fix, check how many users it affects. Before redesigning a flow, check where users actually drop off.
You will not always get the data you need. That is fine. The habit of asking is what matters. After six months of asking, you will know what your team measures well, what it measures poorly, and where the blind spots are. That map of blind spots is more valuable than any dashboard.
3-5 years: Learn to challenge data before trusting it.
By now you have the habit of looking at data. The new skill is questioning it. When someone presents a number in a meeting, you should be the person who asks: “What is the denominator? What changed during this period? Who is excluded from this dataset?” This is not cynicism. It is rigour.
This is also the stage where you learn that qualitative data — user interviews, support ticket themes, sales call recordings — often contains stronger signal than quantitative data for product direction decisions. The PM who only trusts numbers will miss the user frustration that does not show up in any metric until it shows up as churn.
5+ years: Learn when to override data with judgment.
This is the hardest transition. You have built a career on being rigorous with data. Now you need to learn when to act against what the data says.
Zepto did not wait for data showing that 10-minute grocery delivery was viable. PhonePe did not have A/B test results proving that offline merchant QR codes would scale. Dream11 did not have retention data for fantasy sports in India because the market did not exist yet. These were judgment calls informed by deep market understanding, not dashboards.
At senior levels, your value is not in reading data better than anyone else — your analysts do that. Your value is in synthesising data with market context, competitive dynamics, and strategic direction to make calls that no spreadsheet can make.
An exercise: look at the last five product decisions your team made (shipped features, killed features, changed priorities, adjusted pricing, entered new segments — anything).
For each decision, write down:
- Was this an optimisation, allocation, or bet decision?
- What data was used? (Be specific — name the metric, dataset, or research.)
- What judgment was applied on top of the data?
- In hindsight, was the balance between data and judgment correct?
Most teams discover that they treat every decision like an optimisation decision — demanding quantitative evidence for calls that are fundamentally judgment-based. If you find this pattern, it explains why your team is slow to make strategic moves.
Test yourself
You are a PM at an Indian ed-tech company. You ran an A/B test on two versions of your course recommendation algorithm. Version A (personalised by learning history) beats Version B (curated by instructors) by 3.2% on course enrolment rate, with a p-value of 0.04. But your qualitative research — 15 user interviews conducted last week — shows that 11 out of 15 users found Version A’s recommendations “random” and “confusing”. Three users said they enrolled in courses from Version A just to see what they were, not because they intended to complete them. Course completion rate data will not be available for 6 weeks.
Your head of product wants a recommendation by end of day. What do you propose?
Your path
Razorpay’s PM has A/B test results showing that a redesigned checkout flow increases conversion by 4.2% (p < 0.01, 2-week test). The head of design hates the visual direction of the winning variant and wants to run another test with a different visual approach.
The call: Do you ship the winning variant, run the head of design’s preferred test, or do something else?
Where to go next
- Build the statistical foundation for reading data correctly: Data Interpretation
- Learn which metrics are worth tracking in the first place: Metrics and KPIs
- Design experiments that actually prove causation: Experimentation
- When data points at a problem, learn to diagnose it: Diagnosing Metric Drops
- Present data-informed recommendations to leadership: Presenting to Leadership