
Customer success metrics

When we talk about customer success, NPS needs to be tracked at every stage: the onboarding stage, the experience stage, the support stage. A separate NPS should be managed at each, and finally you track the complete success journey.
Talvinder Singh, Pragmatic Leaders

Your product has 50,000 monthly active users. Usage is up. The CEO is happy. Then a board member asks: “How many of those users are actually succeeding with the product?”

Nobody can answer. Because nobody defined what success looks like from the customer’s side.

This is the gap between usage metrics and success metrics. Usage tells you people are inside the product. Success tells you they are getting value from it. A user who logs in every day but never completes their core task is active but failing. A user who logs in twice a month but closes a deal each time looks inactive on your dashboard but is thriving.

Customer success metrics exist to close this gap. They measure whether your product is delivering on its promise — not just whether people are clicking around.

The four pillars of customer success measurement

There are dozens of customer success metrics. Most teams need four, measured well. Here they are, in order of how directly they capture success.

NPS — Net Promoter Score

NPS answers one question: “How likely are you to recommend this product to a colleague or friend?” The user responds on a 0-10 scale. 9-10 are Promoters, 7-8 are Passives, 0-6 are Detractors. Your NPS is the percentage of Promoters minus the percentage of Detractors.
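
The arithmetic is simple enough to sanity-check by hand. Here is a minimal sketch in Python (the function and sample responses are invented for illustration, not from the text):

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score from 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    # Passives (7-8) dilute the score but never enter the numerator.
    return 100 * (promoters - detractors) / len(scores)

# 20 responses: 40% promoters, 35% passives, 25% detractors -> NPS = 15
responses = [10]*5 + [9]*3 + [8]*4 + [7]*3 + [6]*2 + [4]*2 + [1]
print(nps(responses))  # 15.0
```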

The number itself matters less than the trend and the segments. An NPS of 40 that was 55 last quarter is a crisis. An NPS of 25 that was 15 last quarter is progress. And an overall NPS of 40 that hides a Detractor cluster among enterprise accounts is a churn bomb waiting to detonate.

What NPS is good at: Capturing overall sentiment in a single number. Benchmarking against competitors and industry averages. Identifying promoters you can turn into case studies and detractors you need to call immediately.

What NPS is bad at: Telling you why someone is unhappy. Capturing experience at a specific touchpoint. Reflecting the views of users who ignore surveys (which is most of them — typical response rates are 10-20%).

How to run it well:

  • Survey at meaningful moments, not random intervals. After onboarding completion, after a support ticket is resolved, after a renewal decision. Each touchpoint gets its own NPS — do not blend them into one number.
  • Always include a follow-up open-text question: “What is the primary reason for your score?” The qualitative data is where the insight lives.
  • Track NPS by cohort, plan tier, geography, and company size. An aggregate NPS is a starting point, not an answer.

In India, NPS behaves differently than in the US. Indian users tend to be more generous with high scores — a 7 that would be passive in the US context often reflects genuine satisfaction here. Calibrate your benchmarks to your market. Do not import American NPS thresholds and panic when they don’t match.

CSAT — Customer Satisfaction Score

CSAT is more granular than NPS. It measures satisfaction with a specific interaction, feature, or experience. “How satisfied were you with this support interaction?” Rated 1-5 or 1-7. Your CSAT score is the percentage of responses that are 4 or 5 (on a 5-point scale).
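
The same top-two-box arithmetic, sketched in Python (function name and sample ratings are illustrative):

```python
def csat(ratings: list[int], scale_max: int = 5) -> float:
    """Percentage of responses in the top two boxes (4-5 on a 5-point scale)."""
    satisfied = sum(1 for r in ratings if r >= scale_max - 1)
    return 100 * satisfied / len(ratings)

print(csat([5, 4, 4, 3, 5, 2, 4, 5]))  # 75.0 -- six of eight in the top two boxes
```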

// scene:

Weekly product-CS sync. The customer success lead is sharing last month's CSAT data.

CS Lead: “Overall CSAT is 4.2 out of 5. Looks solid.”

PM: “Break it down by touchpoint. What's the onboarding CSAT?”

CS Lead: “Onboarding is 3.1. Support tickets are 4.6. The overall number is pulled up by support.”

PM: “So our support team is excellent, but our product is confusing to set up. We're compensating for a bad first experience with a good rescue experience.”

The aggregate CSAT had been in the green zone for six months. Nobody had looked at the components.

// tension:

A healthy average was hiding an onboarding problem that was quietly killing activation.

When to use CSAT over NPS: When you need to diagnose a specific touchpoint. NPS tells you the patient’s overall health. CSAT tells you which organ is struggling.

The CSAT trap: Teams collect CSAT after support interactions because it is easy to instrument. So CSAT becomes a support metric, not a product metric. If you only measure CSAT on support tickets, you are measuring how good your team is at fixing problems — not whether the product created the problem in the first place. Instrument CSAT at onboarding, at first value delivery, after major feature usage, and at renewal. Those are the moments that determine success.

Retention — are they staying?

Retention is the most honest metric in product management. Users can give you a 9 on NPS out of politeness. They cannot fake coming back to your product week after week.

There are two types of retention that matter:

Logo retention — What percentage of customers (accounts) are still paying at the end of the period? If you started the quarter with 200 customers and ended with 190, your logo retention is 95%. This is the number your board cares about.

Net revenue retention (NRR) — What percentage of last period’s revenue did you retain, including expansions and contractions? If your 200 customers were paying Rs 10 lakh total, and this quarter the surviving customers are paying Rs 11 lakh (because some upgraded), your NRR is 110%. NRR above 100% means your existing customers are growing faster than your churning customers are shrinking — this is the single best indicator of product-market fit in B2B.
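
Both calculations, using the figures from the paragraphs above (a sketch; the variable names are mine):

```python
# Logo retention: share of accounts still paying at period end.
customers_start, customers_end = 200, 190
logo_retention = 100 * customers_end / customers_start   # 95.0%

# Net revenue retention: revenue from the starting cohort only,
# including upgrades and downgrades, excluding new customers.
revenue_start = 10.0   # Rs lakh, total at start of quarter
revenue_end   = 11.0   # Rs lakh, what the surviving customers pay now
nrr = 100 * revenue_end / revenue_start                  # 110.0%
```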

How to measure retention properly:

Track by cohort, always. “Our monthly retention is 85%” is meaningless without knowing which month’s users you are measuring. The January cohort might retain at 90% while the March cohort retains at 70% — perhaps because you changed your onboarding in February and broke it.

Set a retention timeframe that matches your product’s natural usage cycle. A daily-use collaboration tool should track weekly and monthly retention. A quarterly tax filing product should track annual retention. Measuring daily retention for a product people use once a month will produce ugly numbers that mean nothing.
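
As a sketch of what cohort tracking looks like in practice, here is week-4 retention by signup cohort in pandas. The events table and its column names (user_id, ts) are assumptions; adapt them to your analytics schema.

```python
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["ts"])  # one row per user event

# Cohort = the week of each user's first event.
first_week = events.groupby("user_id")["ts"].min().dt.to_period("W")
events["cohort"] = events["user_id"].map(first_week)
events["weeks_since"] = (events["ts"].dt.to_period("W") - events["cohort"]).apply(lambda d: d.n)

# W4 retention: share of each cohort still active four weeks in.
cohort_size = first_week.value_counts().sort_index()
w4_active = events.loc[events["weeks_since"] == 4].groupby("cohort")["user_id"].nunique()
w4_retention = (100 * w4_active / cohort_size).round(1)
print(w4_retention)  # one row per cohort; watch for a sudden drop between cohorts
```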

// thread: #product-metrics — Two PMs are debating how to report retention to leadership
Arjun (PM, Growth) Should we report D7, D30, or D90 retention to the board?
Kavitha (PM, Core Product) Depends on the product. What's our natural usage frequency?
Arjun (PM, Growth) Most active users come back 2-3 times a week.
Kavitha (PM, Core Product) Then W4 (week-4 retention by cohort) is your primary number. D7 is too noisy, D90 is too lagging.
Arjun (PM, Growth) The board is used to seeing monthly retention.
Kavitha (PM, Core Product) Show them W4 with a trailing 3-month trend. If you show monthly aggregate, they'll see a single number and miss the cohort story. The cohort story is the whole point.

Churn — who is leaving and why?

Churn is retention’s mirror image. If 95% of customers retained, 5% churned. But churn deserves its own analysis because the composition of churn matters as much as the rate.

Voluntary churn — The customer actively decided to leave. They cancelled, they switched to a competitor, they told your CS team they are done. This is a product or value problem.

Involuntary churn — The customer’s payment failed, their credit card expired, their procurement process lapsed. They did not choose to leave; they fell through a crack. This is an operations problem, not a product problem — and it is often 20-30% of total churn. Fix your dunning flows before blaming the product.

Revenue churn vs. logo churn — Losing one enterprise customer paying Rs 50 lakh is worse than losing ten SMBs paying Rs 50,000 each, even though logo churn looks worse in the second scenario. Always track both.

How to use churn data:

  1. Segment churn by customer type, plan tier, tenure, and last feature used. Patterns will emerge. If 80% of churn comes from customers who never completed onboarding, your churn problem is actually an onboarding problem.
  2. Build a churn prediction model — even a simple one. Customers who have not logged in for 14 days, have open support tickets older than 7 days, and scored below 6 on their last NPS survey are at risk. Flag them. Call them. Do not wait for the cancellation email. (A minimal flagging sketch follows this list.)
  3. Conduct exit interviews with every churned customer you can reach. Not a survey — a conversation. “What would have kept you?” is worth more than any metric.
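
A rule-based version of the flags from item 2, as a sketch. The Account fields are assumptions; the text combines all three signals, so the conjunction below mirrors that.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    name: str
    last_login: datetime
    oldest_open_ticket: datetime | None  # None if no open tickets
    last_nps: int | None                 # None if they never answered

def at_risk(acct: Account, now: datetime) -> bool:
    """The three flags from item 2: stale login, stale ticket, low NPS."""
    stale_login = (now - acct.last_login) > timedelta(days=14)
    stale_ticket = (acct.oldest_open_ticket is not None
                    and (now - acct.oldest_open_ticket) > timedelta(days=7))
    low_nps = acct.last_nps is not None and acct.last_nps < 6
    return stale_login and stale_ticket and low_nps
```

In production you would likely loosen the conjunction (any two of three, say) and backtest the rule against last quarter's churn list before trusting it.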

Building a customer health score

Individual metrics tell you pieces of the story. A health score combines them into a single signal that tells you which customers need attention right now.

A health score is not a magic number. It is a weighted composite — and the weights come from your data about what actually predicts retention and churn in your product.

Here is a starting framework:

Signal | Weight | Source
Product usage (weekly active users in account) | 30% | Analytics
Feature adoption (% of key features used) | 20% | Analytics
Support ticket volume and sentiment | 15% | Support tool
NPS or CSAT score (most recent) | 15% | Survey
Engagement with CS (attending QBRs, responding to outreach) | 10% | CRM
Contract value trend (expanding or contracting) | 10% | Billing

Score each signal on a 1-100 scale. Multiply by weight. Sum for a composite health score. Segment into green (70+), yellow (40-69), red (below 40).
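
Put together, the computation is a weighted sum plus a banding rule. A sketch using the starting weights from the table (the signal keys and the sample account are invented):

```python
WEIGHTS = {
    "usage": 0.30, "feature_adoption": 0.20, "support": 0.15,
    "nps_csat": 0.15, "cs_engagement": 0.10, "contract_trend": 0.10,
}

def health_score(signals: dict[str, float]) -> float:
    """Weighted composite of signals, each pre-scored on a 1-100 scale."""
    return sum(signals[name] * weight for name, weight in WEIGHTS.items())

def band(score: float) -> str:
    return "green" if score >= 70 else "yellow" if score >= 40 else "red"

acct = {"usage": 80, "feature_adoption": 55, "support": 40,
        "nps_csat": 65, "cs_engagement": 30, "contract_trend": 70}
print(health_score(acct), band(health_score(acct)))  # 60.75 yellow
```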

The weights above are a starting point. After six months, run a regression: which signals actually predicted churn in your customer base? Adjust weights accordingly. At one B2B SaaS company I worked with, we discovered that support ticket sentiment was a stronger churn predictor than ticket volume. A customer filing many tickets in a constructive tone is engaged. A customer filing one ticket in a frustrated tone is leaving.

The metrics that do not show up in dashboards

The most important customer success signals are often qualitative:

Expansion conversations. When a customer asks about upgrading, adding users, or using a new module — that is a success signal stronger than any NPS score. Track these in your CRM. If expansion conversations have dried up across the base, something has changed.

Referral behavior. A customer who refers another company to you has given you the highest possible endorsement. It is not captured in NPS (they might never take your survey) but it is the purest signal of success.

Champion changes. When your internal champion at a customer account leaves or changes roles, the account is at risk — regardless of what the health score says. Track champion tenure as a risk indicator.

// exercise: 20 min
Build your success scorecard

For a product you work on (or a product you use as a customer), design a customer success scorecard:

  1. Define what “success” means for the customer — not usage, not revenue, but the outcome they hired the product to achieve.
  2. Pick 4-6 signals that indicate whether that outcome is being achieved. At least two should come from product analytics, at least one from direct customer feedback, and at least one from behavioral signals (expansion, referrals, support patterns).
  3. Assign weights based on your intuition about what matters most. Write down why you chose those weights.
  4. Score three real or hypothetical customers. Does the scorecard produce results that match your gut feeling about those accounts? If not, adjust the weights.

The goal is not a perfect model. The goal is a model that surfaces the right accounts for attention before they churn.

Common mistakes with customer success metrics

Treating NPS as a goal instead of a signal. The moment you incentivize your team on NPS scores, they will start gaming the survey — cherry-picking happy customers, timing surveys after positive interactions, making it hard to leave low scores. NPS is useful exactly as long as it is honest. Tie incentives to retention and expansion, not to survey scores.

Measuring satisfaction without measuring outcome. A customer can be satisfied with your product and still failing at their job. CSAT tells you they like the interface. It does not tell you they are closing more deals or shipping faster. Connect satisfaction metrics to business outcomes — otherwise you are optimizing for comfort, not success.

Ignoring the silent middle. Detractors are loud. Promoters are visible. The passives — the 7s and 8s on NPS, the customers who use the product regularly but never expand, never refer, never complain — are the largest segment and the most neglected. They are one bad experience away from churning and one great experience away from becoming champions. Build a strategy for them.

Confusing correlation with causation in health scores. Your health score might show that customers who attend QBRs retain better. But is that because QBRs create value, or because customers who are already succeeding are more willing to attend meetings? Before you mandate QBRs for all accounts, test the assumption.

Test yourself

// interactive:
The Churn Crisis

You are the PM at a B2B SaaS company in Pune that sells an HR management tool to mid-size IT services companies. Monthly churn has jumped from 3% to 7% over the last quarter. The CEO wants answers by Friday. Your CS team says “customers are unhappy” but cannot pinpoint why. You have access to product analytics, NPS data (last survey was 8 weeks ago), and a list of 15 customers who churned last month.

It is Monday morning. You have four days. Where do you start?

The operating rhythm for customer success metrics

These metrics are useless if they sit in a quarterly business review deck. Here is how to make them operational:

Weekly: Review the health score dashboard. Which accounts moved from green to yellow? Which moved from yellow to red? Assign follow-up actions to CS for every red account.

Monthly: Run cohort retention analysis. Compare this month’s retention curve to the previous three months. If the curve is steepening (faster drop-off), investigate immediately — do not wait for churn to show up in the quarterly numbers.

Quarterly: NPS survey at scale. Segment results by cohort, tier, geography, tenure. Share verbatim comments with the product team — not just the scores. The score tells you the mood. The comments tell you the story.

At every renewal: CSAT survey plus a structured renewal conversation. “What value have you gotten from the product this year? What almost made you leave? What would make you expand?” These three questions, asked consistently, will teach you more about customer success than any dashboard.

// learn the judgment

Freshworks' PM for the customer success team notices that their NPS score from enterprise clients has dropped 6 points in Q3, but monthly recurring revenue is up 11%. The CS team says the NPS drop is noise. The product team says it's an early warning.

The call: Which team is right, and how do you use this data to decide what to do in the next 30 days?

