Metrics & analytics cases
Without data, and without properly instrumented products, you cannot tell which things you need to improve. Many of your roadmap items will come from simply instrumenting the funnel and then optimizing it.
Metrics questions are the fastest way to separate PMs who think from PMs who memorize. In an interview, you get “revenue dropped 20% — diagnose it.” On the job, you get the same question except nobody tells you the percentage, nobody hands you a dashboard, and the CEO wants the answer by Friday.
This page is five worked cases. Each one is a real scenario — the kind you encounter at Indian startups where data infrastructure is patchy, the business model is still evolving, and the answer is never just “check the funnel.” Work through them sequentially. The diagnostic pattern will start repeating, and that repetition is the point.
The diagnostic framework you will use repeatedly
Before the cases, one framework. Every metrics problem follows the same skeleton:
1. Clarify the metric. What exactly is being measured? How is it calculated? Over what time window? A “20% drop in revenue” means nothing until you know whether it is daily, weekly, or monthly, and whether it is gross or net.
2. Establish the baseline. Is this drop unusual? What does the normal range look like? Seasonality, day-of-week effects, and holiday patterns kill more investigations than bugs do.
3. Segment before you hypothesize. Break the metric by platform (iOS vs Android vs web), geography, user cohort (new vs returning), acquisition channel, and product line. The aggregate number lies. The segments tell the truth.
4. Isolate internal vs external. Did you change something (deploy, campaign, pricing)? Or did the world change (competitor launch, regulation, seasonality)?
5. Find the layer. Metrics are stacked. Revenue = Users x Transactions per User x Average Order Value. A revenue drop lives in one of those layers. Find which one before you start fixing (a worked sketch follows below).
This is not a framework to memorize. It is a checklist to run. Every case below uses it.
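To make step 5 concrete, here is a minimal sketch of layer attribution using a log decomposition. For a multiplicative metric, the log ratios of the layers sum to the log ratio of the total, so each layer's share of the change falls out directly. All numbers and layer names below are illustrative, not taken from any case on this page.

```python
import math

def attribute_change(baseline: dict, current: dict) -> dict:
    """Attribute a change in a multiplicative metric to its layers.

    If revenue = users * txns_per_user * aov, then
    ln(R1/R0) = sum of the per-layer log ratios, so each layer's
    share of the total change is its log ratio / total log ratio.
    Assumes the total actually changed (total log ratio != 0).
    """
    total_log = sum(math.log(current[k] / baseline[k]) for k in baseline)
    return {k: math.log(current[k] / baseline[k]) / total_log
            for k in baseline}

# Hypothetical numbers, for illustration only.
baseline = {"users": 100_000, "txns_per_user": 1.8, "aov": 520}
current  = {"users":  96_000, "txns_per_user": 1.4, "aov": 545}

for layer, share in attribute_change(baseline, current).items():
    print(f"{layer}: {share:+.0%} of the total change")
# Here transactions-per-user dominates the drop: that layer is where to dig.
```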
Case 1: The revenue drop that was not a revenue problem
Setup: You are a PM at a B2C e-commerce company in Bangalore — think Meesho or an early-stage Flipkart competitor. The Monday morning dashboard shows gross revenue dropped 22% week-over-week. The leadership Slack channel is on fire.
Monday morning war room. CEO, Head of Growth, Engineering Lead, and you (PM) are staring at the revenue dashboard.
CEO: “Revenue is down 22% from last week. What happened?”
Head of Growth: “We paused the Instagram campaign on Thursday. Could be that.”
Eng Lead: “We pushed a checkout flow update on Wednesday. But our tests passed.”
PM (you): “Before we chase causes — is this week-over-week or same-week-last-month? Last week had a flash sale.”
CEO: “...it was the end-of-season sale. Ran Monday to Sunday.”
PM (you): “So we are comparing a sale week to a non-sale week. What is revenue compared to the week before the sale?”
The comparison showed revenue was actually up 4% against the pre-sale baseline. The 22% 'drop' was a return to normal after an artificial spike.
The first question in any metric investigation is not 'what broke?' It is 'what is the correct baseline?' Comparing against an anomalous period creates phantom problems.
The decomposition:
Revenue = Traffic x Conversion Rate x Average Order Value (AOV)
| Metric | Sale week | Current week | Pre-sale week |
|---|---|---|---|
| Daily unique visitors | 185K | 120K | 115K |
| Conversion rate | 3.8% | 3.2% | 3.1% |
| AOV | Rs 680 | Rs 520 | Rs 490 |
Against the pre-sale baseline, every metric is either flat or slightly up. The “crisis” is a comparison error.
The lesson: Always ask “compared to what?” before investigating. Sale periods, festivals (Diwali, Dussehra), month-end salary cycles, and even IPL match schedules distort Indian e-commerce baselines. If you do not control for these, you will spend every Monday chasing ghosts.
India-specific context: Indian e-commerce revenue is intensely cyclical. Month-end (salary credit days, 25th-1st) sees 40-60% higher transaction volume than mid-month at many platforms. Diwali week can be 5-10x normal. If your analytics tool defaults to week-over-week comparison without seasonal adjustment, it is lying to you most of the time.
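One way to operationalize the “compared to what?” discipline is to keep an anomaly calendar and exclude those windows from the baseline. A minimal sketch, assuming your team maintains such a calendar; the dates and windows below are hypothetical:

```python
from datetime import date
from statistics import median

# Hypothetical anomaly calendar: sale and festival windows whose
# revenue should never be used as a baseline.
ANOMALY_WINDOWS = [
    (date(2024, 9, 16), date(2024, 9, 22)),   # end-of-season sale
    (date(2024, 10, 28), date(2024, 11, 3)),  # Diwali week
]

def is_anomalous(d: date) -> bool:
    return any(start <= d <= end for start, end in ANOMALY_WINDOWS)

def clean_baseline(daily_revenue: dict[date, float],
                   current_week: list[date]) -> float:
    """Median daily revenue over history, skipping anomaly windows
    and the week under investigation."""
    history = [rev for d, rev in daily_revenue.items()
               if d not in current_week and not is_anomalous(d)]
    return median(history)

# Usage: compare this week's average day against clean_baseline(...),
# not against last week, which may itself have been a sale week.
```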
Case 2: Engagement is flat but the CEO wants it up
Setup: You are a PM at an edtech platform — think Unacademy or a Pragmatic Leaders-scale operation. Monthly Active Users (MAU) has been flat at 280K for four months. The board wants 15% quarter-on-quarter growth. The CEO asks you to “fix engagement.”
The problem: “engagement is flat” is not a diagnosis. It is a symptom. And MAU is one of the laziest metrics in product management.
The decomposition:
MAU (app open) = New users who opened + Returning users who opened
But meaningful engagement = Users who completed a learning action
| Cohort | Count | % of MAU | Avg sessions/month |
|---|---|---|---|
| Bouncers (open, no action) | 95K | 34% | 1.1 |
| Browsers (viewed courses, no start) | 110K | 39% | 2.4 |
| Starters (began a course) | 55K | 20% | 5.8 |
| Completers (finished a lesson) | 20K | 7% | 12.3 |
The real insight: The platform had two completely different products hiding inside one metric. For 95K users, the app was a curiosity — opened once from an ad, never returned. For 20K users, it was a daily habit. Building features for “280K MAU” meant building for nobody specific.
The lesson: When someone says “engagement is flat,” your first job is to challenge the metric definition. MAU with a low activation bar is a vanity metric. Redefine “active” around the core value action — the thing users came for. In edtech, that is learning. In e-commerce, that is purchasing. In payments, that is transacting. The number will be smaller and more honest, and the growth trajectory will become visible.
India-specific context: Indian edtech has a massive top-of-funnel problem. Millions download apps from Google Play ads, use the free tier for a day, and vanish. If your MAU counts these users, your metric is a marketing dashboard, not a product dashboard. Pragmatic Leaders learned this early — the metric that mattered was not “how many people signed up” but “how many people completed a module.” That number was always smaller and always more useful.
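To see what the redefinition looks like in practice, here is a sketch that computes MAU twice over the same event log: once with the lazy app-open bar, once with a core-action bar. The event names and log shape are hypothetical:

```python
# Core value actions that define "active" -- edtech example.
CORE_ACTIONS = {"lesson_completed", "quiz_submitted"}

def honest_mau(events: list[dict]) -> dict[str, int]:
    """Compare app-open MAU with core-action MAU for one month.

    Each event is a dict like:
      {"user_id": "u1", "event": "app_open" | "lesson_completed" | ...}
    """
    opened, engaged = set(), set()
    for e in events:
        opened.add(e["user_id"])
        if e["event"] in CORE_ACTIONS:
            engaged.add(e["user_id"])
    return {"mau_app_open": len(opened), "mau_core_action": len(engaged)}

events = [
    {"user_id": "u1", "event": "app_open"},
    {"user_id": "u2", "event": "app_open"},
    {"user_id": "u2", "event": "lesson_completed"},
]
print(honest_mau(events))  # {'mau_app_open': 2, 'mau_core_action': 1}
```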
Case 3: Picking the right metric for a new feature
Setup: You are a PM at a fintech company building a new “savings goal” feature inside an existing UPI payments app — picture a PhonePe or Google Pay type product. Users can set a savings target (like “Goa trip — Rs 50,000”) and auto-transfer small amounts weekly from their UPI-linked account.
The feature is built. The Head of Product asks: “What is the success metric?”
This is a trap question. Not because it is hard, but because the obvious answer is wrong.
Feature review meeting. The savings goal feature is built and ready for a phased rollout.
Head of Product: “What's our north star for savings goals?”
Junior PM: “Number of goals created. If people create goals, the feature is working.”
Senior PM: “How many of those goals will still be active in 30 days?”
Junior PM: “We don't know yet. But creation is the first step.”
Senior PM: “Creation is a vanity metric for this feature. I can get 100K goals created with a push notification and an incentive. Doesn't mean anyone is saving money.”
Head of Product: “So what do we track?”
Senior PM: “Three things. One: 30-day goal retention rate — percentage of goals that receive at least 3 deposits within 30 days. Two: average monthly deposit per active goal. Three: goal completion rate at 90 days. Creation is the funnel top. Retention is the actual metric.”
The team launched with creation and all three retention metrics instrumented. Goal creation hit 200K in month one. 30-day retention was 12%. The feature was a marketing success and a product failure — which they caught early enough to redesign the deposit reminders.
The metric you pick determines what you optimize. Pick the wrong one and you will celebrate a failure.
The metric selection framework for new features:
| Metric type | What it measures | Risk if used alone |
|---|---|---|
| Activation (goals created) | Did users try it? | Inflated by curiosity and push notifications |
| Engagement (deposits made per goal) | Are they using it? | Doesn’t distinguish Rs 10 deposits from Rs 5,000 |
| Retention (goals active at 30/60/90 days) | Is the habit forming? | Slow to read — takes months to validate |
| Outcome (goals completed) | Did it work? | Very slow — goals may be 6-12 months long |
| Business impact (total savings AUM) | Does it matter for the company? | Correlated with user count, not feature quality |
The lesson: For any new feature, you need a leading indicator (did they try it), a health indicator (are they using it meaningfully), and a lagging indicator (did it create the outcome). Do not pick just one. And never let the leading indicator become the team OKR — it will be gamed within a quarter.
India-specific context: Savings features in Indian fintech face a unique challenge — the Rs 500 weekly auto-debit that feels trivial to a Bangalore tech worker is a meaningful commitment for a teacher in Patna earning Rs 30,000/month. If you track “average deposit amount” as a health metric without segmenting by income tier, you will conclude the feature is failing when it is actually serving different users differently. Segment by city tier. Always.
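As a sketch of how the savings-goal metrics from the review meeting might be computed, assume a log with one record per goal. The data shape is hypothetical; the definitions (at least 3 deposits within 30 days, completion against target at 90 days) come from the case above:

```python
from datetime import date, timedelta

# Hypothetical records: creation date, target amount in rupees,
# and a list of (deposit_date, amount) tuples per goal.
goals = [
    {"created": date(2024, 6, 1), "target": 50_000,
     "deposits": [(date(2024, 6, 8), 500), (date(2024, 6, 15), 500),
                  (date(2024, 6, 22), 500)]},
    {"created": date(2024, 6, 3), "target": 20_000, "deposits": []},
]

def retained_30d(goal) -> bool:
    """Health indicator: >= 3 deposits within 30 days of creation."""
    cutoff = goal["created"] + timedelta(days=30)
    return sum(1 for d, _ in goal["deposits"] if d <= cutoff) >= 3

def completed_90d(goal) -> bool:
    """Lagging indicator: deposits reached the target within 90 days."""
    cutoff = goal["created"] + timedelta(days=90)
    return sum(amt for d, amt in goal["deposits"] if d <= cutoff) >= goal["target"]

retention_rate = sum(map(retained_30d, goals)) / len(goals)
completion_rate = sum(map(completed_90d, goals)) / len(goals)
print(f"30-day retention: {retention_rate:.0%}, "
      f"90-day completion: {completion_rate:.0%}")
# Goal creation count is the leading indicator; these two keep it honest.
```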
Case 4: The conversion rate that lied
Setup: You are a PM at an online insurance aggregator — think PolicyBazaar. Your term insurance comparison page has a 4.2% click-to-quote conversion rate. Product leadership is happy. Then you segment the data.
The decomposition:
Overall conversion (click to purchase) = click-to-quote (4.2%) x quote-to-purchase (0.8%) = 0.034% (3.4 purchases per 10,000 visitors)
But the funnel has five stages, and the bottleneck was stage 4 — not the top of funnel that leadership was celebrating.
The lesson: A high top-of-funnel conversion rate can mask a catastrophic mid-funnel drop. Never celebrate a partial metric. Always measure the full journey from first touch to completed action. When you find the bottleneck, look at the user experience at that exact step — not the data, the experience. Go through the flow yourself on a phone. Time yourself. The data told the team “medical questions page is the problem.” But only using the product on a phone revealed why — 12 fields, tiny touch targets, no progress indicator, and a question about “family history of renal disorders” that most users cannot answer confidently.
India-specific context: Insurance purchase on mobile in India has a unique trust problem. Users filling out medical details on a phone worry about two things — data privacy (“will my health data be sold?”) and disclosure consequences (“if I say my father had diabetes, will they reject me?”). The form was not just long — it was anxiety-inducing. The redesign added a single line: “Your medical information is used only for this quote and is not shared with third parties.” That line improved form completion by 11% on its own.
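The antidote to a celebrated partial metric is a stage-by-stage funnel table. A minimal sketch; the stage names and counts are hypothetical, chosen only to reproduce the 0.034% end-to-end figure from this case:

```python
# Hypothetical weekly stage counts for the term-insurance funnel.
funnel = [
    ("landing_page_view", 100_000),
    ("quote_requested",     4_200),  # the 4.2% leadership celebrates
    ("medical_form_start",  3_150),
    ("medical_form_done",     310),  # step conversion collapses here
    ("purchase_complete",      34),  # 0.034% end to end
]

print(f"{'stage':<22}{'count':>8}{'step conv':>11}{'cumulative':>12}")
top = prev = funnel[0][1]
for stage, count in funnel:
    step = count / prev   # conversion from the previous stage
    cum = count / top     # conversion from the top of the funnel
    print(f"{stage:<22}{count:>8}{step:>10.1%}{cum:>11.3%}")
    prev = count
# The step-conversion column exposes the mid-funnel bottleneck that the
# single click-to-quote number hides.
```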
Case 5: The daily active user trap at a social commerce startup
Setup: You are a PM at an early-stage social commerce company — think early Meesho. The product lets resellers in tier-2 and tier-3 cities share product catalogues on WhatsApp and earn commissions on sales. DAU is 45K and growing 10% month-over-month. The board is pleased.
Then a data analyst pulls a cohort retention chart.
| Cohort (sign-up month) | Month 1 DAU/MAU | Month 3 | Month 6 |
|---|---|---|---|
| January | 32% | 14% | 6% |
| February | 34% | 12% | 5% |
| March | 30% | 11% | — |
DAU is growing only because acquisition is outpacing churn. The company is on a treadmill — running faster to stay in the same place. At current acquisition cost (Rs 120/user) and month-6 retention (5-6%), the unit economics are negative.
Board prep meeting. The PM presents the retention data to the Head of Product before the board sees it.
PM: “DAU looks healthy at 45K, but our 6-month retention is 5-6%. We're acquiring our way to growth.”
Head of Product: “What's the retention for resellers who made at least one sale in their first week?”
PM: “38% at month 6.”
Head of Product: “And for those who didn't make a sale in week one?”
PM: “3%.”
Head of Product: “There's your answer. The product works for people who succeed early. Everyone else churns. Our onboarding doesn't get people to a sale fast enough. That's the problem — not acquisition, not features, not pricing.”
The team redesigned onboarding to guarantee a first sale within 48 hours — pre-loaded catalogues, sample WhatsApp messages, and a Rs 50 bonus commission on the first order. Month-6 retention for new cohorts rose to 18%.
A growing top-line number can hide a leaky bucket. Cohort retention tells you the truth that DAU never will.
The lesson: DAU and MAU are aggregate numbers. They add new users and returning users into one figure and call it “active.” But for any product where retention matters (which is every product), you must look at cohort retention — what percentage of users who signed up in month X are still active in month X+3, X+6, X+12? If the cohort curves all collapse to near-zero, your growth is an illusion sustained by marketing spend. The moment you stop spending, the number collapses.
India-specific context: Social commerce in India (Meesho, DealShare, CityMall) saw this pattern play out across the entire sector. The reseller model attracted millions of sign-ups from tier-2 and tier-3 cities — homemakers, students, small shopkeepers looking for supplemental income. But the gap between “signed up” and “earned money” was enormous. The successful companies were the ones that optimised for time-to-first-sale, not sign-ups. Meesho eventually cracked this by eliminating the reseller margin concept entirely and becoming a direct e-commerce platform — a pivot driven entirely by retention data showing that the reseller model churned 95% of users.
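Here is a sketch of the cohort view that surfaces the treadmill. The user records are hypothetical; the point is that retention is computed per sign-up cohort, by months since sign-up, rather than as one aggregate DAU figure:

```python
from collections import defaultdict

# Hypothetical records: sign-up month index and the set of month
# indices in which the user was active.
users = [
    {"signup_month": 0, "active_months": {0, 1, 2}},
    {"signup_month": 0, "active_months": {0}},
    {"signup_month": 1, "active_months": {1, 2, 3}},
    {"signup_month": 1, "active_months": {1}},
]

def retention_matrix(users, horizon: int = 6):
    """retention[cohort][k] = share of the cohort active k months
    after sign-up."""
    cohort_size = defaultdict(int)
    active = defaultdict(lambda: defaultdict(int))
    for u in users:
        c = u["signup_month"]
        cohort_size[c] += 1
        for m in u["active_months"]:
            active[c][m - c] += 1
    return {c: {k: active[c][k] / cohort_size[c] for k in range(horizon)}
            for c in cohort_size}

for cohort, curve in retention_matrix(users, horizon=4).items():
    print(cohort, {k: f"{v:.0%}" for k, v in curve.items()})
# If every cohort's curve collapses toward zero, top-line DAU growth is
# just acquisition outrunning churn -- the treadmill in this case.
```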
Practice scenario 1: the OYO conversion drop
Scenario: You are a PM at OYO. The weekly booking conversion rate (app open to booking confirmed) dropped from 8.1% to 5.6% — a 31% decline. This happened gradually over 3 weeks, not overnight.
Data available to you:
- Traffic is stable (no change in app opens)
- Search-to-listing-view rate is unchanged at 45%
- Listing-view-to-booking-initiation dropped from 28% to 19%
- Booking-initiation-to-confirmation is unchanged at 65%
- Average displayed price increased 18% over the same 3 weeks
- No app updates were pushed during this period
- The period coincides with the end of wedding season
Your task:
- Where in the funnel is the drop occurring? (Be precise — which step, which metric?)
- List three hypotheses for the drop, ranked by likelihood.
- For your top hypothesis, what data would you pull to confirm or eliminate it?
- If the drop is pricing-related, what would you recommend — and what would you explicitly NOT recommend?
- What is the one question you would ask the revenue management team before making any recommendation?
Constraint: Do not default to “run an A/B test.” That is not a diagnosis — it is an abdication. Diagnose first.
Practice scenario 2: a north star for kirana SaaS
Scenario: You are a PM at an Indian B2B SaaS startup that sells inventory management software to kiranas (small neighbourhood shops). The product has been live for 8 months. The CEO wants to set a single north star metric for the company. The board meeting is in 3 days.
The CEO calls a meeting with you, the Head of Sales, and the CTO. She says: 'We need one metric that the whole company rallies around. I have been reading about north star metrics. What should ours be?' The Head of Sales immediately says 'Monthly Recurring Revenue — that is what investors care about.' The CTO says 'Daily Active Users — if kiranas use us every day, revenue follows.' Both look at you.
Your task: What do you say? Recommend a north star metric and justify it against both the MRR and DAU proposals.
The patterns across all five cases
Every metrics problem you will encounter is one of these five:
1. The baseline problem (Case 1). You are comparing against the wrong reference point. Fix the comparison before you investigate the metric.
2. The definition problem (Case 2). The metric is measuring the wrong thing, or measuring the right thing too loosely. Tighten the definition and the “problem” either disappears or becomes clearer.
3. The selection problem (Case 3). You picked a metric that is easy to move but does not correlate with the outcome you care about. Go back to first principles: what behavior, if it increased, would make this product more valuable?
4. The aggregation problem (Case 4). The top-level number looks fine but hides a bottleneck at a specific funnel stage or user segment. Segment everything.
5. The leading-vs-lagging problem (Case 5). Your growth metric is growing but your retention metric is decaying. The aggregate hides the cohort truth. Always look at cohorts.
When you are in an interview and someone says “revenue dropped 20%,” start with this list. Identify which problem type it is. Then decompose. The structure will carry you further than any memorized framework.
Paytm's PM notices that daily active users on the Fastag top-up flow are up 18% month-over-month, but transaction revenue is flat. The growth team calls this a win. The finance team calls it a problem.
The call: Which team is right, and what is the single metric you'd add to the weekly dashboard to resolve this disagreement?
Where to go next
- The systematic diagnostic process for metric drops: Diagnosing Metric Drops
- Choosing KPIs for your product: Metrics and KPIs
- Applying metrics thinking to growth: Growth Analytics
- Indian market context for these problems: Indian Market Cases