
metrics that matter

A pragmatic product leader is continuously measuring. They never stop. The measurement is always there.
Talvinder Singh, Pragmatic Leaders

Your team is tracking 47 metrics. Your weekly dashboard takes fifteen minutes to scroll through. Everyone nods at the numbers. Nobody changes their behavior because of them.

This is the metrics trap — and most product teams are stuck in it. More tracking, less clarity. More dashboards, less action.

The fix is not better tooling. The fix is discipline. You need fewer metrics, chosen with more care, reviewed with more rigor.

The difference between a metric and a KPI

People use these words interchangeably. They are not the same thing.

A metric is any number you can measure. Page views, button clicks, API response time, number of support tickets, average session duration. You can track hundreds of these. Most of them tell you nothing useful on their own.

A KPI (Key Performance Indicator) is a metric that is directly tied to a business objective. If the number moves, someone should act. If nobody acts when the KPI changes, it is not a KPI — it is noise on a dashboard.

Here is the test: if your KPI drops 20% next week, do you know exactly who needs to do what? If the answer is no, you have a vanity metric wearing a KPI’s badge.

Vanity metrics vs. actionable metrics

// scene:

Monday standup. The growth team is presenting last week's numbers.

Growth PM: “Great news — we hit 500,000 total registered users last week.”

VP Product: “How many of those logged in this month?”

Growth PM: “...about 38,000.”

VP Product: “So our real user base is 38,000. What happened to the other 462,000?”

The room went quiet. Total registered users had been on the leadership dashboard for two years. Nobody had questioned it.

// tension:

The number that made everyone feel good was hiding the number that mattered.

Total registered users is a vanity metric. It only goes up. It cannot tell you if your product is healthy. It cannot tell you if last week’s release helped or hurt.

Monthly active users is closer, but still incomplete. Active means different things for different products. A user who opens the app, stares at the home screen, and leaves is “active” in your analytics but getting zero value.

A good metric passes three tests. These come from Eric Ries and the lean startup community, and they hold up in practice:

  1. Actionable — when the number changes, you know what to do about it. “Activation rate dropped from 45% to 38%” tells you to investigate onboarding. “Total page views went up” tells you nothing.

  2. Accessible — your team can get the number without filing a data request and waiting three days. If the metric is hard to access, people will stop checking it.

  3. Auditable — you can trace the number back to real user behavior. If someone questions it, you can show them the query, the event definition, and the raw data. No black boxes.

If your metric fails any of these three, downgrade it from KPI to “interesting but not critical” and remove it from your weekly review.

The AARRR framework (Pirate Metrics)

Dave McClure of 500 Startups proposed this framework, and it became the default mental model for startups because it maps directly to the customer lifecycle. Five stages, each with its own metrics:

Acquisition — How do users find you?

This is awareness and first touch. SEO rankings, app store impressions, social media reach, paid ad clicks, referral link visits. The question is not “how many people came to our site” but “how many of the right people came to our site.”

In India, acquisition channels behave differently than in the US. WhatsApp referrals drive more installs than Facebook ads for many consumer products. Vernacular content on ShareChat or Moj reaches audiences that English-language Google ads miss entirely. Your acquisition metrics must reflect the channels your users actually use — not the channels your marketing team is comfortable with.

Activation — Do first-time users experience the core value?

This is the most commonly neglected stage. A user downloads your app, creates an account, and then… does nothing. They are “acquired” but never “activated.”

Activation means the user completed the action that makes your product useful to them. For a UPI payments app, activation is completing the first transaction — not creating an account. For a learning platform, activation is finishing the first lesson — not signing up. Define your activation event precisely, and track the percentage of acquired users who reach it within a set timeframe (usually 7 days).
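A minimal sketch of that calculation in Python, assuming a pandas DataFrame of raw events with hypothetical columns user_id, event, and timestamp (substitute your own event names):

import pandas as pd

def activation_rate(events: pd.DataFrame, activation_event: str, window_days: int = 7) -> float:
    # First signup and first activation timestamp per user.
    signup = events.loc[events["event"] == "signup"].groupby("user_id")["timestamp"].min()
    activated = events.loc[events["event"] == activation_event].groupby("user_id")["timestamp"].min()
    # Align on acquired users; anyone who never activated becomes NaT, which compares as False.
    activated = activated.reindex(signup.index)
    within_window = (activated - signup) <= pd.Timedelta(days=window_days)
    return within_window.mean()  # share of acquired users who activated inside the window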

Retention — Do they come back?

Retention separates real products from hype. A product with strong acquisition but weak retention is a leaking bucket — you pour users in, they drain out, and you spend more and more on acquisition to maintain the illusion of growth.

Track cohort retention, not aggregate retention. The question is not “how many users were active this month” but “of the users who signed up in January, what percentage are still active in March?” Cohort analysis is the single most important analytical skill a PM can develop.
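Here is a sketch of a monthly cohort retention table in pandas, under the same hypothetical event schema (user_id, timestamp):

import pandas as pd

def cohort_retention(events: pd.DataFrame) -> pd.DataFrame:
    df = events.copy()
    df["month"] = df["timestamp"].dt.to_period("M")
    # Cohort = calendar month of each user's first recorded activity.
    df["cohort"] = df.groupby("user_id")["month"].transform("min")
    # Months elapsed since the cohort month: 0, 1, 2, ...
    df["period"] = (df["month"] - df["cohort"]).apply(lambda offset: offset.n)
    counts = df.groupby(["cohort", "period"])["user_id"].nunique().unstack()
    # Divide every column by the month-0 cohort size to get retention rates.
    return counts.div(counts[0], axis=0)

The result answers the January question directly: each row is a signup cohort, each column is months since signup, so retention.loc["2025-01", 2] is the share of January signups still active in March.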

Revenue — Do they pay?

For products that monetize directly: average revenue per user (ARPU), customer lifetime value (LTV), LTV-to-CAC ratio. A healthy ratio is 3:1 or higher — you are making three rupees for every rupee you spend acquiring a customer.
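As a back-of-the-envelope check, with illustrative numbers only (whether to multiply in gross margin is a modeling choice, though most teams do):

# All numbers illustrative.
arpu = 150                # Rs of revenue per paying user per month
gross_margin = 0.80       # share of revenue kept after direct costs
lifetime_months = 10      # months before the typical customer churns
cac = 400                 # Rs spent to acquire one customer

ltv = arpu * gross_margin * lifetime_months   # Rs 1,200
ratio = ltv / cac                             # 3.0, right at the healthy 3:1 threshold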

For pre-revenue products: track the proxy metrics that will eventually convert to revenue. Daily active users, feature adoption rates, engagement depth. But be honest about the fact that these are proxies, not proof that revenue will follow.

Referral — Do they tell others?

The cheapest acquisition channel is a happy user who brings their friends. Track referral rate — the percentage of users who invite at least one other person. Track viral coefficient (K-factor) — on average, how many new users does each existing user generate?
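The K-factor itself is just two numbers multiplied together. A quick illustration with made-up values:

invites_per_user = 2.0     # average invites each existing user sends (illustrative)
invite_conversion = 0.15   # share of invites that become new users (illustrative)

k_factor = invites_per_user * invite_conversion   # 0.30
# K > 1 means each user brings in more than one new user: growth is self-sustaining.
# K = 0.30 means referrals amplify your other channels but cannot carry growth alone.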

Here is a worked example:

Stage | Metric | Value
Acquisition | Monthly website visitors | 10,000
Activation | Completed first transaction (within 7 days) | 70% → 7,000
Retention | Returned after 30 days | 20% → 1,400
Revenue | Paying users | 10% of retained → 140
Referral | Users who referred a friend | 10% of all visitors → 1,000

This simple table tells you exactly where to focus. A 70% activation rate is strong — your onboarding works. A 20% retention rate is a problem — users try the product and do not come back. Fixing retention will do more for the business than pouring money into acquisition.
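The same reading, expressed as stage-to-stage conversion so the leak is impossible to miss:

# The paying path through the table above (referral branches off separately).
funnel = [
    ("Acquisition", 10_000),
    ("Activation", 7_000),
    ("Retention", 1_400),
    ("Revenue", 140),
]
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {next_count / count:.0%}")
# Acquisition -> Activation: 70%
# Activation -> Retention: 20%
# Retention -> Revenue: 10%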

The HEART framework

HEART comes from Google’s research team (Kerry Rodden and colleagues), and it works well as a complement to AARRR: while AARRR tracks the business funnel, HEART tracks the user experience.

Happiness — How satisfied are users? Measured through surveys (NPS, CSAT), app store ratings, qualitative feedback. This is the only metric in HEART that comes from asking users directly rather than observing their behavior.

Engagement — How much are they using the product? Session frequency, session duration, actions per session. But be careful: more engagement is not always better. A user spending 40 minutes in your expense management tool might be engaged — or they might be lost. Define what healthy engagement looks like for your product.

Adoption — Are users trying new features? When you ship a feature, what percentage of eligible users try it within the first two weeks? Low adoption of a well-promoted feature is a signal that either the feature does not solve a real problem or users do not understand what it does.

Retention — Are they coming back? Same as AARRR, but HEART encourages you to measure it alongside experience metrics. A product can retain users through switching costs (they are locked in) or through value (they choose to stay). The happiness and engagement metrics help you tell the difference.

Task Success — Can users accomplish what they came to do? Task completion rate, error rate, time-to-complete. This is the most underused metric in the framework. If users cannot complete their core task efficiently, nothing else matters.

// thread: #product-analytics — The team is debating which framework to use
Priya (PM): Should we use AARRR or HEART for our Q2 metrics review?
Rohit (Data): Why not both? AARRR for the business funnel, HEART for UX quality.
Priya (PM): Because then we’re back to 30 metrics and nobody acts on any of them.
Meera (Head of Product): Pick one framework. Pick 3-5 metrics from it. Review them weekly. Change one thing based on what you see. That’s the whole process.
The framework matters less than the discipline of acting on what it tells you.

How to choose the right metric

Frameworks give you categories. They do not tell you which specific metric to track within each category. That is the hard part.

Here is the process I teach:

Step 1: Start with the business goal.

Not “improve the product” — the actual business goal. “Increase monthly recurring revenue by 40% this fiscal year” or “reduce customer acquisition cost below Rs 200” or “reach 100,000 monthly active users in tier-2 cities.”

Step 2: Work backwards to the user behavior that drives that goal.

If the goal is MRR growth, the user behaviors are: new subscriptions, upgrades, and reduced churn. If the goal is tier-2 MAU, the behaviors are: installs from tier-2 regions, activation within those cohorts, and retention.

Step 3: Pick the metric that measures that behavior most directly.

Not a proxy. Not a leading indicator of a leading indicator. The most direct measurement of the behavior you care about. “Number of tier-2 users who complete onboarding within 48 hours of install” is better than “tier-2 app installs” because installs without activation are worthless.

Step 4: Set a target and a timeframe.

“Improve activation” is not a goal. “Increase tier-2 activation from 32% to 50% by end of Q3” is a goal. Without a target and a deadline, you cannot tell if you succeeded or failed.

Step 5: Define what you will do when the metric moves.

This is the step everyone skips. If activation drops below 40%, what is the response? Who investigates? What is the escalation path? A metric without a response plan is decoration.
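One way to make the response plan real is to encode it next to the metric. A sketch, where get_activation_rate and notify are hypothetical hooks into your own warehouse and alerting:

ACTIVATION_FLOOR = 0.40  # the threshold your team agreed on

def check_activation(get_activation_rate, notify):
    rate = get_activation_rate()  # hypothetical: pulls this week's rate from your warehouse
    if rate < ACTIVATION_FLOOR:
        # A response plan names an owner and a first action, not just an alert.
        notify(
            owner="onboarding PM",
            message=(
                f"Activation is {rate:.0%}, below the {ACTIVATION_FLOOR:.0%} floor. "
                "First action: diff this week's onboarding funnel against last week's, step by step."
            ),
        )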

The metrics stack: how it fits together

Most mature products need three layers of metrics:

Layer 1: Business metrics (2-3 max). Revenue, user count, unit economics. These are for leadership. They change slowly and tell you if the company is healthy. They are lagging indicators — by the time revenue drops, the problem started weeks or months ago.

Layer 2: Product metrics (3-5 max). Activation rate, retention by cohort, feature adoption, task success rate. These are for the product team. They are leading indicators — they move before business metrics do. This is where AARRR and HEART live.

Layer 3: Feature metrics (varies). Specific to whatever you shipped this sprint. Click-through rates on the new CTA, completion rates for the redesigned flow, error rates after the migration. These are temporary — you track them during and after a launch, then archive them.

The mistake most teams make is treating Layer 3 metrics as if they were Layer 2. They fill the dashboard with feature-level metrics from six months ago that nobody looks at, and the important signals get buried.

// exercise: 15 min
Metric audit

Pull up your team’s analytics dashboard (or the dashboard of a product you use). For every metric on it, answer:

  1. What business goal does this metric connect to?
  2. If this metric dropped 20% tomorrow, what would we do?
  3. When was the last time someone made a decision based on this metric?

Any metric where you cannot answer all three questions should be removed from the main dashboard. Be ruthless. A five-metric dashboard that drives action is worth more than a fifty-metric dashboard that drives nothing.

Common mistakes

Tracking ratios without absolute numbers. A 90% activation rate sounds fantastic until you realize it is 9 out of 10 users. Always pair ratios with the absolute numbers behind them.

Optimizing a metric at the expense of the system. If you optimize for session duration by adding friction (more steps, more modals, more content to scroll through), the metric goes up but the user experience goes down. Every metric can be gamed. The question is whether the improvement reflects real value creation.

Changing metrics when the numbers look bad. This is the most tempting mistake. The retention number is ugly, so someone proposes a “better” definition of retention that makes the number look healthier. If you change your metric definition, you must acknowledge the break in your time series. You cannot compare the new number to the old one.

Measuring what is easy instead of what matters. You track page views because your analytics tool gives them to you for free. But page views do not tell you whether users accomplished their goal. Task success rate is harder to instrument but infinitely more useful.

Ignoring the denominator. “We got 500 new signups last week!” Compared to what? If you drove 50,000 visitors to the landing page, a 1% conversion rate is a problem, not a celebration.

Test yourself

// interactive:
The Dashboard Debate

You just joined a B2B SaaS company in Bangalore as a PM. The product is a project management tool for IT services companies. The CEO tells you: “Our churn is too high. Fix it.” The current dashboard shows 23 metrics. None of them are cohort retention.

You have your first team meeting tomorrow. The data analyst, the engineering lead, and the CEO will be there. What do you propose?

The operating rhythm

Metrics without rhythm are useless. Here is the cadence that works:

Daily: Glance at your 2-3 product metrics. Look for anomalies. If something spiked or dropped, investigate within 24 hours.

Weekly: Review your full metrics stack with the team. One slide per metric: current value, trend, target, actions taken. If no actions were taken on any metric, ask why you are tracking it.

Monthly: Cohort analysis. Compare this month’s cohorts to last month’s. Is activation improving? Is retention holding? Are new features getting adopted? This is where you catch slow-moving trends before they become crises.

Quarterly: Step back and ask whether you are tracking the right things. Business goals may have shifted. The product may have evolved. The metrics that mattered in Q1 may not matter in Q3. Update the dashboard, retire old metrics, add new ones if justified.

The goal is not perfect measurement. The goal is a measurement system that consistently generates action — where every number on the screen connects to a decision someone will make.

For reference benchmarks on what “good” looks like across product types, see PM Benchmarks — actual numbers for retention, activation, conversion, and growth rates, adjusted for the Indian market.

// learn the judgment

The PM at Khatabook is being asked to define the North Star metric for the business. The current candidates being debated internally: (A) DAU, (B) transactions recorded per month, (C) GMV tracked by Khatabook users, (D) credit approved through Khatabook's lending product.

The call: Which metric do you choose, and why do you reject the others?

