
pm benchmarks

If you can't tell me whether your number is good or bad, you don't have a metric — you have a decoration.
Talvinder Singh, PM cohort review session

Every PM asks “is this number good?” and nobody gives them a straight answer. Advisors say “it depends.” VCs say “what are your comps?” Your manager says “let’s benchmark against last quarter.” None of that helps when you are staring at 8% D30 retention and need to know whether to panic or celebrate.

This page gives you straight answers. Reference numbers by product type, stage, and market. India-adjusted, not Silicon Valley defaults copied from a16z blog posts.

Before you use these numbers

One warning, and I mean it: benchmarks are reference points, not targets.

Your context matters. A 12% D30 retention for a consumer social app is strong. The same 12% for a quick commerce app is a problem. If you blindly chase a benchmark number without understanding why it applies to your product, you will optimise for the wrong thing.

But here is the other truth: having no reference is worse than having an imperfect one. I have watched PMs present metrics to leadership with zero context for whether those numbers are healthy. They get asked “is that good?” and they shrug. That is a career-limiting moment.

Use these tables as a starting diagnosis. Then adjust for your specific product, market, and stage.

// scene:

Board meeting. The PM is presenting Q3 metrics to investors.

PM: “Our D30 retention is 11%. We've improved it from 8% last quarter.”

Investor: “Is 11% good?”

PM: “It's... better than last quarter.”

Investor: “I didn't ask if it improved. I asked if it's good. What does the market look like for your category?”

Silence. The PM had no benchmark. The improvement story collapsed because there was no frame of reference. The board moved on to the next agenda item.

// tension:

Relative improvement means nothing without an absolute reference point. The PM lost the room because they couldn't answer the most basic question about their own metric.

Retention benchmarks by product type

Retention is the metric that separates real products from leaky buckets. These numbers represent “good” — meaning top-quartile products in each category, not median. If you are at median, you have work to do.

| Product type | Good D1 | Good D7 | Good D30 | Indian context |
| --- | --- | --- | --- | --- |
| Consumer social | 35-45% | 18-25% | 10-15% | WhatsApp dominance compresses social app retention. Users default back to WhatsApp for messaging, so any social app competing for attention starts at a disadvantage. |
| Fintech (payments) | 25-35% | 15-20% | 8-12% | UPI is sticky — once a user sets up UPI on your app, switching cost is low but habit is high. Non-UPI payment flows (wallets, cards) churn 2-3x faster. |
| E-commerce | 20-30% | 10-15% | 5-8% | COD orders inflate D1 but hurt D30. A user who places a COD order shows up as “retained” on D1 but never opens the app again after delivery. Track paid-order retention separately. |
| Quick commerce | 30-40% | 20-28% | 12-18% | Habit-forming category. Frequency is the moat — Blinkit/Zepto users who order 3+ times in week one have 4x the D30 of single-order users. |
| B2B SaaS | N/A (use weekly) | 60-75% W1 | 40-55% M1 | Enterprise buyers measure monthly. SME/startup buyers measure weekly. Do not apply daily retention to B2B — the usage cadence is wrong. |
| Edtech | 25-35% | 12-18% | 5-10% | Exam-cycle dependent. Retention spikes 3-4 weeks before major exams (CAT, GATE, JEE) and collapses after. Measure retention in exam-adjusted cohorts, not calendar cohorts. |
| Gaming (casual) | 30-40% | 12-18% | 5-8% | India’s casual gaming market has massive D1 (ad-driven installs) but brutal D7 drop-off. Real retention starts after the ad-install cohort washes out. |
| Health & fitness | 20-30% | 10-15% | 4-7% | January and post-Diwali spikes. Sustained retention requires habit loops — daily streaks, social accountability. Without them, expect sub-5% D30. |

How to read this table: If your consumer fintech app has 10% D30, you are in the “good” range. If you are at 5%, you have a retention problem. If you are at 15%, you are outperforming most of the market and should focus on growth, not retention optimisation.
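The "locate yourself in the range" reading above can be encoded as a quick sanity check. A minimal Python sketch; the dictionary keys, function name, and verdict strings are illustrative assumptions, and the ranges are the D30 columns from the table:

```python
# D30 "good" ranges copied from the table above (fractions, not percent).
D30_RANGES = {
    "consumer_social": (0.10, 0.15),
    "fintech_payments": (0.08, 0.12),
    "ecommerce": (0.05, 0.08),
    "quick_commerce": (0.12, 0.18),
}

def diagnose_d30(product_type: str, d30: float) -> str:
    """Place a D30 retention number against its category's good range."""
    low, high = D30_RANGES[product_type]
    if d30 < low:
        return "below range: you have a retention problem"
    if d30 > high:
        return "above range: focus on growth, not retention optimisation"
    return "in the good range"

print(diagnose_d30("fintech_payments", 0.10))  # in the good range
```

The point is not the lookup itself but forcing every reported number through an explicit range before it reaches a slide.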

The D30/D1 ratio tells you where the leak is

D30/D1 ratio matters more than any single number. Here is why:

  • Ratio above 0.3: Your activation is working. Users who show up on D1 are finding value. Focus on frequency and engagement depth.
  • Ratio between 0.15 and 0.3: You have a “week one wall.” Users activate but lose interest. Your onboarding gets them in; your core loop does not keep them.
  • Ratio below 0.15: Your D1 is inflated by low-intent users (ad installs, referral fraud, COD browsers). Fix your acquisition quality before touching retention.
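The three ratio bands above can be sketched as one small diagnostic function. Function name and phrasing are illustrative; the thresholds are the ones from the list:

```python
def leak_diagnosis(d1: float, d30: float) -> tuple[float, str]:
    """Return the D30/D1 ratio and which leak band it falls into."""
    ratio = d30 / d1
    if ratio > 0.30:
        msg = "activation working: focus on frequency and engagement depth"
    elif ratio >= 0.15:
        msg = "week-one wall: core loop is not keeping activated users"
    else:
        msg = "inflated D1: fix acquisition quality before touching retention"
    return round(ratio, 2), msg

# 30% D1 and 11% D30 gives a ratio of 0.37: activation is working.
print(leak_diagnosis(0.30, 0.11))
```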

Activation benchmarks by product type

Activation is the moment a user gets value for the first time. Not “signed up.” Not “opened the app.” The moment they did the thing your product exists for.

| Product type | Good activation rate | Time-to-value benchmark | What counts as “activated” |
| --- | --- | --- | --- |
| Consumer app | 30-50% of signups | < 60 seconds to first value | Completed core action (sent a message, posted a photo, played a game) |
| Fintech | 20-35% (KYC friction) | < 3 minutes to first transaction | Completed first transaction (not just KYC). In India, KYC alone kills 40-60% of signups. |
| B2B SaaS | 40-60% of trials | < 1 week to first integration | Connected a data source, invited a teammate, or completed the setup wizard |
| Marketplace | 25-40% (supply side harder) | N/A (two-sided) | Buyer: first purchase; seller: first sale. Supply-side activation is always 30-50% lower than demand-side. |
| Edtech | 25-40% of registrations | < 5 minutes to first lesson completion | Completed one lesson or module, not just browsed the catalog |
| Quick commerce | 50-65% of installs | < 2 minutes to first order | First order placed. Quick commerce has the highest activation rates because the value prop is immediate and obvious. |

India-specific activation killers

Three things destroy activation rates in India that do not show up in global benchmarks:

1. KYC friction. Any fintech product requiring Aadhaar/PAN verification loses 40-60% of users during onboarding. The best Indian fintech products (PhonePe, Razorpay) have invested years in reducing KYC steps. If you require full KYC before first value, your activation ceiling is 35%.

2. Permission overload. Indian users on Android are trained to deny permissions. If your app asks for camera, location, contacts, and storage on first launch, expect 20-30% drop-off before they even see your home screen. Ask for permissions at the point of need, not at install.

3. Language and literacy. English-only onboarding in a Hindi-belt market is a 15-25% activation penalty. Vernacular onboarding is not a nice-to-have — it is a conversion multiplier.

Growth rate expectations by stage

VCs have specific expectations for month-over-month growth at each funding stage. These are India-adjusted — Indian VCs are slightly more forgiving on revenue growth at pre-seed/seed but stricter on unit economics at Series A+.

| Stage | Good MoM growth | What Indian VCs actually expect |
| --- | --- | --- |
| Pre-seed | 15-25% | Engagement signal, not revenue. “Are users coming back?” matters more than “are users paying?” Show retention curves, not GMV. |
| Seed | 15-20% | PMF signals. Can you acquire users at reasonable cost and retain them? The PULSE check: are users pulling the product into their lives, or are you pushing it? |
| Series A | 10-15% | Consistent, compounding, not paid-driven. Indian Series A investors will check your organic vs paid split. If 80%+ of growth is paid, they discount the growth rate by half. |
| Series B | 8-12% | Unit economics improving alongside growth. CAC payback under 12 months. LTV/CAC above 3x. Growth slowing is fine if margins are expanding. |
| Series C+ | 5-8% | Market leadership signals. Market share gains, pricing power, expansion into adjacent segments. Pure growth rate matters less than defensibility. |

A common mistake: Showing 30% MoM growth at seed stage and thinking it is impressive. If that growth is 100% paid acquisition with negative unit economics, Indian VCs will see through it. They have been burned by this pattern too many times (remember the 2015-2016 cash-burn era). Sustainable 15% beats unsustainable 30%.
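The Series A discount rule ("if 80%+ of growth is paid, discount the growth rate by half") can be sketched directly. The function name and threshold-as-code are illustrative assumptions:

```python
def adjusted_mom_growth(mom_growth: float, paid_share: float) -> float:
    """Apply the paid-heavy discount an Indian Series A investor applies.

    mom_growth: headline month-over-month growth, e.g. 0.30 for 30%.
    paid_share: fraction of new users from paid acquisition.
    """
    if paid_share >= 0.80:
        return mom_growth / 2  # sceptical reading of paid-driven growth
    return mom_growth

# 30% MoM growth that is 100% paid reads as 15% to the investor,
# which is exactly the "sustainable 15% beats unsustainable 30%" point.
print(adjusted_mom_growth(0.30, 1.00))  # 0.15
```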

Conversion benchmarks

| Conversion point | Good rate | Indian adjustment |
| --- | --- | --- |
| Landing page → signup | 3-8% | +1-2% if vernacular landing page. -2-3% if English-only in tier 2/3 cities. |
| Free → paid (consumer) | 2-5% | India is 30-50% lower than global. Expect 1-3%. Freemium works; premium pricing does not. |
| Free trial → paid (B2B SaaS) | 15-25% | Indian SMEs: 8-15%. Enterprise: 20-30%. The SME segment needs longer trials and more hand-holding. |
| Add to cart → purchase | 35-50% | COD availability adds 10-15% to conversion. Remove COD and watch this drop by a third. |
| App install → registration | 50-70% | Lower on Android (permission friction). Higher on iOS but smaller addressable market. |

// thread: ##growth-team — PMs from different verticals comparing their actual numbers after reading a global benchmark report

Fintech PM: That Lenny Rachitsky benchmark post says good activation is 40-60%. We're at 22% and I thought we were doing okay for India.

E-commerce PM: Global benchmarks assume no KYC, no COD, and credit card as default payment. None of that applies here.

B2B SaaS PM: Our trial-to-paid is 12% for Indian SMEs. My US counterpart at the same company gets 28%. Same product. Context is everything.

Quick Commerce PM: We get 58% install-to-first-order. But D30 is 14%. Activation isn't the problem — retention is. Different benchmarks, different diagnosis. 💯 6

Fintech PM: So basically: use global benchmarks for direction, India benchmarks for calibration, and your own cohort trends for decisions.

E-commerce PM: Someone pin that. 📌 3

Experiment velocity benchmarks

How fast your team runs experiments is itself a metric. The best product teams do not just make better decisions — they make more decisions faster.

| Team maturity | Experiments/month | What separates good from great |
| --- | --- | --- |
| No experimentation culture | 0 | Most Indian startups under 50 people. Every change is a “launch” decided by the founder’s gut. |
| Early growth team | 2-4 | Any experimentation culture at all. Running A/B tests, even badly, puts you ahead of 70% of Indian startups. |
| Maturing growth team | 8-15 | Hypothesis-driven, statistically rigorous. Kill rate above 50% (most experiments should fail — if everything wins, your bar is too low). |
| High-performing | 15-30 | Automated pipeline, rapid iteration. Feature flags, server-side testing, real-time dashboards. Teams at Flipkart, PhonePe, Dream11 operate here. |

The real benchmark is not experiments per month — it is learning velocity. A team that runs 4 experiments and changes strategy based on results beats a team that runs 20 experiments and ignores the losers. Count the decisions changed, not the tests launched.

Why Indian teams experiment less

Three structural reasons, none of which are excuses:

  1. Smaller engineering teams. You cannot dedicate engineers to experimentation infra when you have 8 engineers total. Solution: use third-party experimentation tools (VWO is Indian — start there) instead of building your own.
  2. HiPPO culture. The Highest Paid Person’s Opinion still dominates decision-making in many Indian companies. Experimentation threatens this. You need a founder who is willing to be wrong.
  3. Statistical illiteracy. Most Indian PM teams call an experiment “done” after 3 days with 200 users. That is not an experiment — that is a guess with extra steps. Learn sample size calculation. Use it.
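To make the "200 users after 3 days" point concrete: under the standard normal approximation for a two-proportion test at 95% confidence and 80% power (z values 1.96 and 0.84), detecting a small lift takes thousands of users per arm. A sketch with assumed function and parameter names:

```python
import math

def sample_size_per_arm(baseline: float, mde: float,
                        z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-arm sample size for a two-proportion A/B test.

    baseline: current conversion rate, e.g. 0.20 for 20% activation.
    mde: minimum detectable effect in absolute terms, e.g. 0.02 for +2pp.
    Defaults correspond to 95% confidence and 80% power.
    """
    p = baseline + mde / 2  # midpoint proportion for the variance term
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / mde ** 2
    return math.ceil(n)

# Detecting a 2-point lift on a 20% baseline needs thousands of users
# per arm, far more than "3 days with 200 users".
print(sample_size_per_arm(0.20, 0.02))
```

This is the simplest textbook approximation; a proper power calculator will differ slightly, but never by enough to make 200 users a valid sample.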

Why global benchmarks are 15-30% off for India

If you are reading benchmarks from Lenny’s Newsletter, Reforge, or First Round Capital and applying them directly to your Indian product, you are calibrating with the wrong ruler. Here is why:

Lower smartphone quality drives higher error rates

The median Indian smartphone has 3-4GB RAM, a mid-range processor, and is running Android 11 or older. Your app crashes more. Pages load slower. Animations stutter. This is not a UX problem — it is a physics problem. Error rates on Indian devices are 2-3x higher than on US devices, which directly suppresses activation and retention.

Adjustment: Subtract 5-10% from any global activation benchmark to account for device-driven friction.

Price sensitivity changes every conversion funnel

Indian users will spend 30 minutes comparing prices across 4 apps before buying a Rs 200 item. Free-to-paid conversion in India is structurally lower because willingness-to-pay thresholds are different. A $9.99/month subscription that converts 5% globally will convert 1-2% in India.

Adjustment: Halve any global free-to-paid conversion benchmark for consumer products. B2B enterprise is closer to global because the buyer is the company, not the individual.

Festival seasonality skews monthly metrics

Diwali, IPL season, Holi, end-of-financial-year — Indian products have 4-5 major seasonal spikes per year that do not exist in Western markets. If you measure monthly without adjusting for festivals, your October numbers look like a growth miracle and your November numbers look like a crisis. Neither is real.

Adjustment: Always compare same-period YoY, not sequential MoM, during festival months. Build a seasonality index for your category.
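The YoY-over-MoM rule can be illustrated with made-up order counts (the numbers below are invented for the example): sequential MoM shows a post-Diwali "crisis" while same-month YoY shows healthy growth.

```python
# Hypothetical monthly order counts around a Diwali spike in October.
orders = {
    ("2023", "Oct"): 120_000, ("2023", "Nov"): 90_000,
    ("2024", "Sep"): 100_000, ("2024", "Oct"): 150_000, ("2024", "Nov"): 112_000,
}

mom = orders[("2024", "Nov")] / orders[("2024", "Oct")] - 1  # looks like a crisis
yoy = orders[("2024", "Nov")] / orders[("2023", "Nov")] - 1  # the real signal

print(f"MoM: {mom:+.0%}, YoY: {yoy:+.0%}")  # MoM: -25%, YoY: +24%
```

Same November, two opposite stories; only the YoY one survives the seasonality.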

WhatsApp as acquisition channel breaks attribution

In India, a significant chunk of acquisition happens through WhatsApp forwards — shared links, group recommendations, screenshot virality. This traffic shows up as “direct” in your analytics. Your paid attribution model underestimates organic and overestimates paid contribution.

Adjustment: If your “direct” traffic is above 40%, investigate WhatsApp-driven sharing. Build UTM-tagged share links inside your product.
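Building the UTM-tagged share link is a one-liner with Python's standard library. A minimal sketch; the source/medium/campaign values and the `share_link` name are illustrative assumptions, not a fixed convention:

```python
from urllib.parse import urlencode

def share_link(base_url: str, campaign: str) -> str:
    """Attach UTM parameters so WhatsApp forwards stop landing in 'direct'."""
    params = {
        "utm_source": "whatsapp",   # assumed source label
        "utm_medium": "share",      # assumed medium label
        "utm_campaign": campaign,
    }
    return f"{base_url}?{urlencode(params)}"

print(share_link("https://example.com/offer", "diwali_referral"))
```

Generate this link inside your product's share button rather than letting users copy the bare URL; that is the only way the forward chain stays attributable.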

COD inflates top-of-funnel, deflates bottom

Cash on delivery accounts for 50-60% of e-commerce orders in India. COD orders have 2-3x the return rate of prepaid orders. They inflate your D1 retention (user placed an order) but deflate your D30 (user returned the product and never came back).

Adjustment: Track retention separately for COD and prepaid cohorts. Your “real” retention is the prepaid number.
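Splitting D30 by payment cohort can be sketched as below; the record layout and field names are assumptions for illustration:

```python
# Hypothetical order records: payment method plus whether the user
# returned by day 30.
orders = [
    {"user": "u1", "payment": "cod",     "returned_d30": True},
    {"user": "u2", "payment": "cod",     "returned_d30": False},
    {"user": "u3", "payment": "prepaid", "returned_d30": True},
    {"user": "u4", "payment": "prepaid", "returned_d30": True},
]

def d30_by_payment(rows):
    """D30 retention computed separately per payment-method cohort."""
    out = {}
    for method in ("cod", "prepaid"):
        cohort = [r for r in rows if r["payment"] == method]
        out[method] = sum(r["returned_d30"] for r in cohort) / len(cohort)
    return out

print(d30_by_payment(orders))  # the prepaid number is the "real" retention
```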

How to use benchmarks without becoming a benchmark zombie

// interactive:
The Benchmark Trap

You are the PM for a consumer fintech app in India. Your D30 retention is 8%. You have just read this benchmarks page and seen that 'good' D30 for fintech is 8-12%. Your CEO asks: should we focus on retention or growth this quarter?

You are at the bottom of the 'good' range. The CEO wants a clear recommendation.

The three-step benchmark protocol

Step 1: Locate yourself. Find your product type and stage in the tables above. Identify which range you fall into. This takes 30 seconds and gives you directional awareness.

Step 2: Adjust for context. Apply the India-specific adjustments. Apply stage-specific adjustments. Apply your unique factors (e.g., if you are in a regulated industry, subtract another 5-10% from activation benchmarks for compliance friction).

Step 3: Trend over threshold. Once you know your adjusted benchmark, stop looking at it. Track your own trend line. A product improving from 5% to 8% D30 over three months is healthier than a product sitting flat at 10%. Direction matters more than position.
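The three steps collapse into one small check: locate against the (optionally adjusted) range, then report direction. Helper name, adjustment handling, and verdict strings are illustrative assumptions:

```python
def benchmark_verdict(metric_history: list[float],
                      good_range: tuple[float, float],
                      adjustment: float = 0.0) -> str:
    """Step 1: locate. Step 2: adjust. Step 3: report trend over threshold."""
    low, high = good_range[0] + adjustment, good_range[1] + adjustment
    current = metric_history[-1]
    position = "below" if current < low else "above" if current > high else "within"
    trend = "improving" if current > metric_history[0] else "flat/declining"
    return f"{position} adjusted range, trend {trend}"

# 5% -> 8% D30 over three months against the 8-12% fintech range:
print(benchmark_verdict([0.05, 0.065, 0.08], (0.08, 0.12)))
```

The output for that example ("within adjusted range, trend improving") is exactly the shape of answer a board wants: position plus direction, in one sentence.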

The benchmarks nobody talks about

Beyond the standard retention/activation/growth metrics, here are benchmarks that separate competent PMs from great ones:

Time to first “aha” moment: For consumer apps, this should be under 30 seconds. For B2B, under one session. If your time-to-aha is longer than your competitors’, your activation will always trail theirs regardless of how good your onboarding flow looks.

Support ticket rate per 1,000 MAU: Good is under 20 for consumer, under 50 for B2B. If you are above 50 for consumer, your product has UX debt that no amount of support scaling will fix.

NPS by cohort tenure: New users should have NPS 30-40. Users at 6+ months should have NPS 50+. If NPS declines with tenure, your product has a depth problem — it impresses on first use but disappoints over time.

Feature adoption rate for new launches: A new feature should reach 20-30% of eligible users within 4 weeks. If it does not, either the feature is not valuable, not discoverable, or not well-communicated. Below 10% after 4 weeks means the feature is dead on arrival.
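The adoption bands above can be encoded as a quick post-launch check; the function name and verdict strings are illustrative:

```python
def adoption_verdict(users_adopted: int, eligible_users: int) -> str:
    """Score a feature's 4-week adoption against the 20-30% / 10% bands."""
    rate = users_adopted / eligible_users
    if rate >= 0.20:
        return "healthy adoption"
    if rate >= 0.10:
        return "lagging: check value, discoverability, communication"
    return "dead on arrival"

# 1,800 of 20,000 eligible users (9%) after 4 weeks:
print(adoption_verdict(1_800, 20_000))  # dead on arrival
```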

// exercise: · 20 min
Benchmark your own product

Pull your product’s actual metrics and score them against these benchmarks.

  1. Retention: What is your D1, D7, D30 (or W1, M1 for B2B)? Which range do you fall in? Apply India adjustments.
  2. Activation: What percentage of signups complete your core action? What is your time-to-value? Is KYC or permission friction suppressing this?
  3. Growth rate: What is your MoM growth? Is it organic or paid-driven? What would your growth rate be if you turned off paid acquisition tomorrow?
  4. Experiment velocity: How many experiments did your team run last month? How many changed a product decision?
  5. The gap: For each metric, identify the gap between your current number and the “good” benchmark. Rank the gaps by business impact. The largest gap with the highest revenue impact is your priority.

Write down one sentence: “Our biggest benchmark gap is _____, and closing it would impact _____ by _____.”

That sentence is your next quarter’s focus.

// learn the judgment

A new PM at Dunzo asks: 'What's a good D30 retention rate for a hyperlocal delivery app in India?' Their current D30 retention is 22%. The head of growth says that's terrible. The CEO says it's fine for the category.

The call: How do you determine if 22% D30 retention is a problem worth fixing urgently, or a category baseline you should accept?

