
edtech product management

The moment your edtech product measures success by time-in-app instead of what the student actually learned, you have built an entertainment product with a textbook skin. And entertainment products compete with Instagram, not Kota coaching.
Talvinder Singh, from a Pragmatic Leaders cohort on vertical PM

India’s edtech sector went through the most violent boom-bust cycle of any industry in the last decade. Byju’s raised $5.8 billion and was valued at $22 billion. Unacademy hit $3.4 billion. Vedantu raised $300 million. The story was simple: India has 260 million school students and 50 million test prep aspirants. Put education online. Scale infinitely.

Then the crash. Byju’s defaulted on loans, laid off thousands, and saw its valuation collapse to near-zero. Unacademy cut 60% of its workforce. Vedantu was fire-sold to a fraction of its peak valuation. WhiteHat Jr, once acquired by Byju’s for $300 million, shut down operations entirely.

But PhysicsWallah went public at a $2.8 billion valuation — profitably. Testbook built a sustainable test prep business. upGrad found a working model in career-transition education. The difference was not market size or funding. It was product decisions.

If you are building or joining an edtech company in India, this page covers the product problems that actually determine whether your company survives.

Engagement metrics are the wrong scoreboard

This is the central tension in edtech PM, and getting it wrong killed more companies than bad unit economics did.

Every edtech product team I have worked with starts by measuring the same things: DAU, session duration, lessons completed, videos watched. These are engagement metrics. They tell you whether people are using your product. They tell you nothing about whether people are learning.

Byju’s optimised ruthlessly for engagement. The app was designed to keep students watching — animated videos, gamified progress bars, streaks, daily challenges. The metrics looked spectacular. Students spent 70+ minutes per day in the app. Parents saw the usage dashboard and felt reassured.

But engagement and learning are not the same thing. A student can watch ten videos and retain nothing. A student can complete fifty MCQs and not understand the underlying concept. The engagement metrics created an illusion of learning that persisted until the exam results arrived. When they did — when NEET and JEE scores did not improve despite months of app usage — parents churned.

PhysicsWallah took the opposite approach. Alakh Pandey’s content was not gamified. It was not even polished. It was a teacher explaining physics in Hindi on a whiteboard, working through problems step by step. The production quality was low. The learning quality was high. Students came because they understood the material after watching. The metric that mattered was not time-in-app. It was whether the student could solve the next problem set.

// scene:

Weekly product review at an edtech startup. The team is reviewing Q3 metrics.

Growth PM: “Completion rate is up 22% this quarter. Users are finishing 4.2 modules per week on average, up from 3.4. DAU is at an all-time high.”

Head of Product: “What happened to mock test scores?”

Growth PM: “We don't track that in the product dashboard. That's the content team's problem.”

Head of Product: “Completion rate is a vanity metric. A student can complete a module by clicking through slides without reading them. If mock test scores are flat while completions are up, we've made it easier to complete, not easier to learn.”

Growth PM: “But investors look at engagement. The board deck is built around DAU and completion.”

Head of Product: “The board deck won't matter when parents see the NEET results in May and cancel their subscriptions in June. We need to add diagnostic assessments after every module and track score improvement as a first-class metric.”

The Growth PM had built a dashboard that made everyone feel good. The Head of Product wanted a dashboard that predicted retention.

// tension:

Engagement metrics measure activity. Outcome metrics measure value. The company that optimises for the first will lose to the company that optimises for the second.

The PM lesson here is uncomfortable but important: the metrics that make your board deck look good are not always the metrics that keep your customers. In edtech specifically, there is a time delay between engagement and outcomes. A student uses your product for six months. The exam happens once. If the exam result is bad, you lose the customer — and no amount of DAU will save you.

What to measure instead:

  • Diagnostic score improvement: Pre-test at the start of a module, post-test at the end. Track the delta. This is the closest you can get to measuring learning within the product.
  • Concept mastery rate: Not “did they complete the lesson” but “can they solve problems that require this concept?” Adaptive testing after each unit gives you this signal.
  • Time-to-competency: How long does it take a student to move from “cannot solve” to “can solve independently” for a given topic? Shorter is better. This captures both content quality and pedagogical design.
  • Mock test progression: For test prep products, track how mock test scores change over 4/8/12 week windows. This is the metric parents actually care about.
  • Parent-reported satisfaction: In K-12, ask parents directly whether they see improvement. This is messy and subjective. It is also the signal that determines renewal.
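The first of these metrics, the diagnostic delta, reduces to a simple aggregation once pre- and post-test scores are logged per module. A minimal sketch over hypothetical attempt records (the field names and numbers are illustrative, not from any real product):

```python
from statistics import mean

# Hypothetical attempt records: (student_id, module, pre_score, post_score)
attempts = [
    ("s1", "kinematics", 40, 72),
    ("s2", "kinematics", 55, 58),
    ("s3", "kinematics", 30, 65),
    ("s1", "optics", 50, 54),
]

def module_learning_delta(attempts, module):
    """Average post-minus-pre diagnostic delta for one module.
    A flat delta on a high-completion module is the warning sign:
    students finish it without learning from it."""
    deltas = [post - pre for _, m, pre, post in attempts if m == module]
    return mean(deltas) if deltas else None

print(module_learning_delta(attempts, "kinematics"))  # ≈ 23.3
```

Tracked per module per cohort, this one number separates content that teaches from content that merely gets completed.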

Cohort-based vs self-paced: when each model works

The Indian edtech market split into two models, and the PM decisions behind each are fundamentally different.

Self-paced works for exam prep. JEE, NEET, UPSC, CAT, banking exams — these have fixed syllabi, fixed dates, and clear scoring criteria. The student knows exactly what they need to learn and when. They need content, practice, and feedback. They do not need a cohort.

PhysicsWallah and Testbook built on this insight. A student preparing for NEET in Patna does not need to be in a cohort with students in Chennai. They need high-quality content at a price they can afford, a question bank, mock tests, and doubt resolution. Self-paced removes scheduling friction, scales to millions, and works with Indian pricing expectations (Testbook charges under Rs 500/month for full access).

Cohort-based works for career transitions and high-ticket professional education. upGrad charges Rs 3-6 lakh for executive programs. ISB online programs cost Rs 5-10 lakh. At these prices, the student is not buying content — they are buying accountability, peer learning, and a credential. A self-paced course at Rs 5 lakh would feel like a scam. The cohort justifies the price because it creates structure, deadlines, mentor access, and a peer network.

The PM implications are different for each:

  • Core PM challenge. Self-paced: content discovery and adaptive learning paths. Cohort-based: cohort engagement and completion rates.
  • Retention lever. Self-paced: content depth, practice quality, exam results. Cohort-based: peer pressure, mentor relationships, placement outcomes.
  • Pricing model. Self-paced: subscription or one-time (low ARPU, high volume). Cohort-based: upfront or EMI (high ARPU, lower volume).
  • Key metric. Self-paced: score improvement, question accuracy, mock test progression. Cohort-based: cohort completion rate, placement rate, NPS.
  • Content production. Self-paced: record once, serve forever; invest in question banks. Cohort-based: live sessions, guest lectures, project reviews; high recurring cost.
  • Tech PM focus. Self-paced: recommendation engine, adaptive testing, offline access. Cohort-based: live video infrastructure, assignment workflows, peer collaboration tools.

The hybrid trap: Unacademy tried to do both. They had self-paced test prep content and live cohort-based “Plus” subscriptions. The result was a confused product that was too expensive for mass market self-paced (Rs 30,000+/year) and not structured enough for true cohort-based learning. They ended up competing with PhysicsWallah on price (and losing) while competing with upGrad on cohort quality (and losing). The PM lesson: pick your model and build everything around it. Hybrid sounds strategic in a pitch deck. In practice, it means you build two products with one team and do both badly.

// thread: ##product-edtech — PMs from different edtech companies debating gamification in a community Slack
PM at test-prep startup: We added streaks, badges, and a leaderboard last quarter. DAU jumped 30%. The board is thrilled.
PM at cohort-ed company: We tried gamification for our data science cohort. Students gamed the leaderboard by submitting assignments early with minimal effort. Quality of submissions dropped. We ripped it out.
PM at test-prep startup: But streaks work. Duolingo proved it.
PM at K-12 edtech: Duolingo teaches vocabulary. You can learn 10 Spanish words a day in a streak. You cannot learn thermodynamics in a streak. The content type matters.
PM at cohort-ed company: Gamification works when the unit of progress is small and completable in one session. Language. Typing practice. Flashcards. It fails when the learning requires deep focus over weeks. You can't badge your way through integration by parts.
PM at K-12 edtech: We kept one gamification element: the 'explain to a friend' feature where students record a 60-second explanation of a concept. That actually improves learning because it forces retrieval practice. The gamification is incidental. The pedagogy is the product.
PM at test-prep startup: Fair point. Our streaks might be juicing engagement without improving scores. I should check the correlation between streak length and mock test performance.
PM at cohort-ed company: If there's no correlation, you've built a habit loop around opening the app, not around learning. That's engagement theatre.
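The correlation check proposed at the end of this thread is cheap to run. A minimal sketch with a hand-rolled Pearson coefficient over hypothetical per-student data (streak lengths and scores are made up for illustration):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient. A value near zero means
    streak length is not predicting mock test performance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical: longest streak (days) vs latest mock test score per student
streaks = [30, 5, 22, 14, 40, 8]
scores = [420, 455, 400, 470, 430, 415]

r = pearson(streaks, scores)
print(round(r, 2))
```

Python 3.10+ also ships `statistics.correlation`, which computes the same coefficient; the hand-rolled version is shown only to make the arithmetic explicit.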

The two-customer problem in K-12

In K-12 edtech, the PM serves two masters with different needs, different interfaces, and different definitions of success. The student uses the product. The parent pays for it. And they want different things.

The student wants:

  • Content that is not boring (they will compare you to YouTube, not to textbooks)
  • Quick doubt resolution (WhatsApp-speed, not email-speed)
  • Practice that feels productive, not punishing
  • Mobile-first experience (most Indian students access edtech on a phone, not a laptop)

The parent wants:

  • Proof that their child is learning (progress reports, score improvement, time spent)
  • Control and visibility (what is the child studying, how much, when)
  • Exam score improvement (this is the only metric that ultimately matters)
  • Value for money (comparison against tuition teacher who charges Rs 2,000/month)

The tension: If you optimise for the student, you build an engaging, visually rich experience that the student enjoys using. If you optimise for the parent, you build a tracking dashboard with test scores and study time reports. The product the student loves might look frivolous to the parent. The product the parent trusts might feel like homework to the student.

Byju’s solved this by separating the interfaces: a student-facing app with videos and games, and a parent-facing dashboard with usage stats. But they made a fatal error in the sales process. The offline sales team sold to parents using high-pressure tactics — home visits, emotional manipulation about the child’s future, EMI commitments. The product the parent bought was not the product the student experienced. When the two did not align — when the child did not enjoy the app or the scores did not improve — the parent had buyer’s remorse and a loan they could not cancel.

PhysicsWallah avoided this by making the student the buyer. At Rs 3,500 for a full JEE/NEET course, a student could buy it from their pocket money or convince a parent with a trivial ask. The parent did not need a sales team to convince them. The student’s own motivation was the acquisition channel. When the student was the buyer and the user, the product incentives aligned.

For the PM, this means:

  • K-12 (below age 16): The parent is the buyer. Build a parent dashboard as a first-class product, not an afterthought. The parent dashboard is your retention product. The student app is your engagement product. Fund both.
  • Test prep (age 16-25): The student is the buyer. The parent is an influencer, not a decision maker. Build for the student. Give parents enough visibility to not worry, but do not let parent features slow down the student experience.
  • Career education (age 22+): The student is both buyer and user. No two-customer problem. The product challenge shifts entirely to outcomes — placement, salary improvement, career transition success.

Assessment is a product, not a feature

Most edtech PMs treat assessments as a checkbox — add MCQs after each lesson, throw in some mock tests, done. This is a mistake. In India, where standardised testing determines career outcomes for hundreds of millions of people, assessment is the highest-stakes product problem in edtech.

Adaptive testing: A student who consistently gets easy questions right should not keep seeing easy questions. Adaptive testing adjusts difficulty based on performance in real time, giving a more accurate picture of what the student knows and does not know. Testbook built its competitive advantage here — their mock tests adapt to the student’s level and produce diagnostic reports that identify weak topics at a granular level. Building an adaptive testing engine requires Item Response Theory (IRT) or a simpler Elo-based model, a tagged question bank (every question tagged by topic, sub-topic, difficulty, and cognitive skill), and enough data to calibrate difficulty parameters. This is a significant engineering investment, but it is the moat.
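The Elo-based alternative mentioned above is the cheaper starting point. A minimal sketch of the idea (an illustrative model, not Testbook's actual engine): the student and every question carry ratings, both update after each attempt, and the next question served is the one whose rating is closest to the student's, where the outcome is most informative.

```python
def expected_correct(student_rating, question_rating):
    """Logistic expectation, as in chess Elo: the probability
    the student answers this question correctly."""
    return 1 / (1 + 10 ** ((question_rating - student_rating) / 400))

def update(student_rating, question_rating, correct, k=32):
    """Move student and question ratings in opposite directions
    after one attempt."""
    e = expected_correct(student_rating, question_rating)
    delta = k * ((1 if correct else 0) - e)
    return student_rating + delta, question_rating - delta

def pick_next(student_rating, question_bank):
    """Serve the question rated closest to the student: expected
    success near 50%, which reveals the most about their level."""
    return min(question_bank, key=lambda q: abs(q[1] - student_rating))

bank = [("q1", 900), ("q2", 1200), ("q3", 1500)]
s = 1000.0
s, new_q = update(s, 1200, correct=True)  # beat a harder question
print(round(s))            # 1024 -- rating rises
print(pick_next(s, bank))  # ('q1', 900), the nearest rating to ~1024
```

Full IRT adds discrimination and guessing parameters per question, but an Elo loop like this can run from day one and produces the calibration data IRT later needs.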

Question bank management: A test prep product lives or dies on its question bank. You need tens of thousands of questions, tagged, calibrated, and regularly refreshed. Questions leak. Students share screenshots. Coaching centres compile them. Your question bank is a depreciating asset unless you continuously add new questions. The PM must treat the question bank as a product with its own roadmap — new question ingestion, quality review, difficulty calibration, retirement of leaked questions.
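One piece of that roadmap, retiring leaked questions, can start as a simple heuristic: flag any question whose observed success rate has drifted well above its calibrated difficulty, a common signal that answers are circulating. A sketch under that assumption (illustrative, not any specific company's method):

```python
from dataclasses import dataclass

@dataclass
class Question:
    qid: str
    expected_p: float  # calibrated probability of a correct answer
    attempts: int
    correct: int

def leak_suspects(bank, min_attempts=200, drift=0.20):
    """Flag questions whose observed success rate exceeds their
    calibrated difficulty by more than `drift` -- a hint the
    question has leaked and students have memorised the answer."""
    flagged = []
    for q in bank:
        if q.attempts < min_attempts:
            continue  # not enough data to judge
        observed_p = q.correct / q.attempts
        if observed_p - q.expected_p > drift:
            flagged.append(q.qid)
    return flagged

bank = [
    Question("q1", 0.45, 500, 410),  # 82% observed vs 45% expected
    Question("q2", 0.60, 300, 190),  # ~63% observed: within tolerance
    Question("q3", 0.30, 50, 45),    # too few attempts to judge
]
print(leak_suspects(bank))  # ['q1']
```

Flagged questions go to review rather than automatic deletion; drift can also mean the original difficulty calibration was wrong.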

Proctoring: Online proctoring for high-stakes exams is one of the hardest product problems in edtech. NTA (National Testing Agency) moved JEE to computer-based testing, and every coaching company wanted to offer proctored mock tests. The problem: reliable proctoring requires camera access, screen monitoring, audio detection, and human review of flagged incidents. On Indian internet connections, with students using low-end Android phones, in homes with limited space and background noise — proctoring is a nightmare. Every false flag (flagging a student for “suspicious behaviour” when their sibling walked into the room) destroys trust. Every missed cheat undermines the test’s credibility. The PM must decide: how much proctoring is enough for your use case? A low-stakes practice test needs minimal proctoring. A scholarship exam with real money at stake needs more. Match the proctoring intensity to the stakes.

Plagiarism detection in assignments: For cohort-based programs (upGrad, Great Learning), assignment plagiarism is a growing problem. Students copy from each other, from ChatGPT, from online sources. The PM needs to decide: is plagiarism detection a product feature or a policy problem? The answer is both. The product should make it easy to write original work (good scaffolding, clear rubrics, intermediate checkpoints) and hard to fake it (AI-detection tools, viva-based assessment for high-stakes submissions).

Vernacular content changes the economics

PhysicsWallah’s breakout insight was not pedagogy or pricing. It was language. Alakh Pandey taught physics in Hindi — conversational, colloquial Hindi, not the formal textbook Hindi that feels as alien as English to a student in Lucknow or Jaipur.

This seems obvious in hindsight. India has 22 official languages and only 10% of the population is fluent in English. But until 2020, almost all edtech content was in English. Byju’s was in English. Unacademy started in English. The assumption was that students aspiring to engineering and medical seats would learn in English because the exams were in English.

The assumption was wrong. Understanding a concept and answering an exam in a language are different skills. A student in Varanasi can answer a NEET biology MCQ in English while having learned the concept through a Hindi explanation. The language of learning does not need to match the language of testing.

The PM challenges of vernacular content:

  • Content creation at scale. You cannot just translate English content into Hindi and call it done. Good vernacular content is created in the vernacular — the idioms, the examples, the cultural context change. This means separate content teams for each language, which means 5-10x the content creation cost.
  • Teacher supply. PhysicsWallah works because Alakh Pandey is a genuinely gifted teacher in Hindi. Finding teachers who can teach advanced concepts compellingly in Tamil, Telugu, Marathi, or Bengali is hard. The talent pool is thinner than English-medium content creators.
  • UI/UX localisation. Hindi and other Devanagari scripts have different typographic needs than Latin scripts. Tamil and Telugu have even more complex rendering requirements. Your design system needs to handle multiple scripts without breaking layouts. Most edtech companies treat this as a translation layer. It should be treated as a separate product surface.
  • Content discovery. If your platform has content in 6 languages, how does the student find the right content? Language preferences interact with topic preferences, difficulty level, and teaching style. The recommendation engine gets significantly more complex.

Doubtnut solved one specific vernacular problem brilliantly: doubt resolution. A student photographs a question, the app uses OCR to read it (in any language), and returns a video solution. The interaction is language-agnostic at the input layer and language-specific at the output layer. This is a clever product architecture that sidesteps the full vernacular content creation problem while solving the highest-frequency use case.
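That architecture can be sketched in a few lines. A toy version (all names and data hypothetical): normalise the OCR text into a language-neutral match key, then look up the solution in the student's preferred output language.

```python
def normalise(text):
    """Language-neutral match key: strip case and whitespace.
    A real system would use embeddings or symbol-level matching
    for mathematical notation; this is deliberately naive."""
    return "".join(text.lower().split())

# Hypothetical index: (question key, output language) -> solution video
solution_index = {
    ("2x+3=7.findx.", "hi"): "video_hi_001",
    ("2x+3=7.findx.", "en"): "video_en_001",
}

def resolve_doubt(ocr_text, output_language):
    """Language-agnostic input layer, language-specific output layer:
    the same photographed question routes to a Hindi or English
    solution depending only on the student's preference."""
    return solution_index.get((normalise(ocr_text), output_language))

print(resolve_doubt("2x + 3 = 7. Find x.", "hi"))  # video_hi_001
```

The product insight sits in the index, not the lookup: one canonical question entry fans out to many language-specific solutions, so vernacular coverage grows per-answer rather than per-course.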

What Byju’s got wrong — a product autopsy

Byju’s failure is usually discussed in financial terms: excessive spending, bad acquisitions, governance failures. But beneath the financial disaster was a series of product decisions that made the financial disaster inevitable.

1. Acquisition over retention. Byju’s spent Rs 3,000-5,000 to acquire each paying customer through offline sales teams. This created a product culture that optimised for conversion, not retention. Features were built to impress during the sales demo, not to serve the student during the learning journey. The 30-minute demo showing animated dinosaurs teaching fractions looked magical. The daily experience of actually using the app to prepare for an exam was mundane. The gap between the demo and the daily product was the retention gap.

2. Content as cost centre, not product. Byju’s hired expensive production teams to create cinematic video content. The production values were high but the pedagogical value was mediocre. They spent crores on animation and production while PhysicsWallah’s whiteboard videos — shot on a phone — produced better learning outcomes. The PM mistake was treating content quality as production quality rather than learning quality.

3. Offline sales distorting product decisions. When your acquisition channel is a sales team doing home visits, the product roadmap gets distorted. Features that help close sales get prioritised over features that help students learn. The parent dashboard showed impressive-looking usage stats because that helped the sales team during follow-ups. The student experience was secondary because the student was not in the room during the sale.

4. Platform sprawl instead of product depth. Byju’s acquired WhiteHat Jr (coding), Aakash (test prep), Great Learning (professional education), Epic (US reading platform), and Osmo (physical learning toys). Each acquisition was a bet on a new market. None improved the core product for existing users. The PM lesson: horizontal expansion is not a product strategy. It is a holding company strategy. If your core product is not retaining users, adding more products does not fix the problem — it multiplies it.

5. No feedback loop from outcomes to product. The most damning product failure: Byju’s never built a systematic way to connect exam outcomes back to product usage. They did not track which content actually improved test scores. They did not run controlled studies comparing learning methods. They optimised for engagement proxies because engagement was easy to measure and exam outcomes were not. A product team that cannot connect its output to user outcomes is flying blind.

// exercise: Edtech metrics audit · 20 min

Pick an edtech product you use or have used (Byju’s, Unacademy, PhysicsWallah, Coursera, upGrad, Testbook, or any other). Go through the product and list every metric it surfaces to the user — progress bars, completion percentages, scores, streaks, badges, time spent, rank.

Now sort those metrics into two buckets:

Engagement metrics (measure activity):

  • Time spent, sessions, streaks, completion rate, videos watched, badges earned

Outcome metrics (measure learning):

  • Test scores, concept mastery, skill assessments, before/after improvement, mock test rank

For each metric, answer:

  1. Does this metric help the student learn, or does it just make them feel busy?
  2. Could a student game this metric without actually learning? How?
  3. If this metric improved by 50%, would the student’s exam score improve? By how much?

If most of the product’s visible metrics are in the engagement column, the product has an outcome measurement gap. That gap is where churn hides.

// learn the judgment

You are PM at an upGrad competitor offering cohort-based data science programs at ₹2.4 lakh. Your engagement metrics are strong — 85% of cohort students complete week 4 assignments. But your placement team reports that learning outcomes are flat: students who complete the program are not clearing technical interviews at mid-sized product companies. Your content team wants to add two gamification features — a daily streak for completing modules and a student leaderboard — which they project will lift completion rates from 72% to 85%. Your data team shows zero correlation between completion rate and interview success rate in the last three cohorts.

The call: Do you approve the gamification features, or redirect the effort toward fixing learning outcomes?


Career-stage considerations

If you are 0-2 years in edtech PM: Your most important task is to understand the difference between engagement metrics and outcome metrics. It is easy to celebrate when DAU is up. It is harder to ask whether DAU translates to learning. Build the habit of asking “but did they learn?” after every metric review. Spend time with students — not just looking at dashboards, but sitting with a student as they use your product. Watch where they get confused, where they skip ahead, where they put the phone down. That observation is worth more than any analytics report.

If you are 3-5 years in edtech PM: You should be shaping the product’s assessment strategy. Push for adaptive testing if your product does not have it. Build the case for diagnostic assessments that measure actual learning, not just completion. Start connecting product usage data to learning outcomes — this is hard, requires collaboration with content and data teams, and is the most valuable thing you can do.

If you are 5+ years in edtech PM: You are making platform decisions. Cohort vs self-paced. Single language vs vernacular expansion. Assessment as a standalone product vs embedded feature. These decisions determine the company’s trajectory for years. The companies that survived the edtech crash are the ones where senior product leaders made the right structural choices early: PhysicsWallah chose vernacular and affordable pricing. Testbook chose deep exam-specific content over breadth. upGrad chose high-ticket cohort-based education. Each choice excluded a market but created a defensible one.

Test yourself

// interactive:
The Parent Churn Problem

You are the PM at a K-12 edtech app focused on CBSE classes 6-10. Your DAU is strong — 70% of subscribed students use the app daily. But parent renewal rates have dropped from 68% to 51% over two quarters. Exit surveys show parents saying 'my child uses the app but I don't see improvement in school marks.' The CEO wants a plan by Friday.

Your data team confirms: student engagement is high but there is no correlation between app usage and school exam scores. Students who use the app 60 minutes a day score the same as students who use it 15 minutes a day. The CEO is leaning toward adding more gamification to boost engagement further. What do you propose?
