
what i changed my mind on

The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.
F. Scott Fitzgerald

I have been training product managers for eight years. Over that time I have taught more than 10,000 professionals across India — in cohorts, in workshops, in one-on-one coaching sessions at startups that ranged from four people to four thousand.

Some of what I taught was right. Some of it was wrong. Not wrong in the way that a typo is wrong — wrong in the way that a confidently held belief, taught to hundreds of people, turns out to produce bad outcomes at scale.

This page is the list. Seven things I believed, taught, and eventually had to stop teaching — or fundamentally rethink — because the evidence from the field forced me to. Each one follows the same pattern: what I used to believe, what happened to change my mind, and what I believe now.

If you are reading this manual and thinking “this guy sounds pretty sure of himself” — this page is the correction. I am sure of what I have tested. I am less sure than I used to be about everything else.


1. “The PM is the CEO of the product”

I taught this framing for three years. It came from Ben Horowitz’s essay, and it felt empowering. You are the CEO. You own the product. You are responsible for its success.

The problem showed up in the field, not in the classroom.

PMs who internalised this framing walked into meetings expecting authority they did not have. They got frustrated when engineers pushed back. They made pronouncements instead of building alignment. One PM in a Bengaluru startup told me, with genuine confusion, “But I am the CEO of this product — why does the engineering lead keep overriding my decisions?”

Because you are not the CEO. The CEO can fire people. The CEO controls the budget. The CEO has the final word. A PM has none of these things. Teaching PMs that they are the CEO sets them up for an entitlement-authority gap that destroys their credibility in the first month.

The framing I use now: the PM is the person who ensures the right problem gets solved. Not the person who decides. Not the person who commands. The person who does the work of understanding the problem deeply enough that the team can make good decisions together. That is a harder sell in a training session. It is also true.

This shift runs through the entire way I teach product thinking now — problem-first, not authority-first.

2. “Jobs To Be Done works everywhere”

I was a JTBD evangelist. I taught it in the first fifty cohorts at Pragmatic Leaders. The framework is elegant: people do not buy products; they hire them for a job. Understand the job, and you understand what to build.

The theory is sound. The execution, for most teams I worked with, was brutal.

JTBD requires deep qualitative research — switching interviews, timeline mapping, understanding the forces of progress and anxiety. It requires a research ops capability that most Indian startups simply do not have. A team of three PMs shipping features on a two-week sprint cycle does not have time for twelve switching interviews per persona.

I tracked adoption across cohorts. Fewer than 20% of participants could apply JTBD within a month of the training. Not because they were bad PMs — because the framework demands infrastructure that their organisations had not built.

What works instead, especially in the Indian startup context, is what I now call the Strip-to-Core method. Simpler discovery, faster loops, actionable within a week. I still teach JTBD — it appears in the discovery section — but I teach it as an advanced tool for teams with research maturity, not as the default starting point. The default needs to be something a PM can do on Monday and act on by Friday.

3. “Always be data-informed”

I built entire training modules around this principle. Instrument everything. Set up dashboards. Let the data tell you what to build. I genuinely believed that if PMs could just look at the right numbers, they would make better decisions.

Then I spent time inside actual companies.

I watched a PM at a Series B fintech make a product bet based on a funnel analysis that had a survivorship bias so severe it was essentially fiction. The dashboard showed a 60% conversion rate at step 3. What the dashboard did not show was that 40% of users had already dropped off before step 1 was instrumented. The “60% conversion” was 60% of a self-selected subset: the users who survived long enough to be counted at all.
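
To make the distortion concrete, here is a minimal sketch with invented numbers (the real figures are not mine to share). The shape of the error is the same: a denominator that starts counting too late.

    # Hypothetical funnel numbers, for illustration only.
    total_users = 10_000            # everyone who entered the flow
    dropped_before_step_1 = 0.40    # lost before instrumentation began

    tracked = total_users * (1 - dropped_before_step_1)    # 6,000 users
    converted_at_step_3 = tracked * 0.60                   # 3,600 users

    dashboard_rate = converted_at_step_3 / tracked    # 0.60 -- what the PM saw
    true_rate = converted_at_step_3 / total_users     # 0.36 -- across all users

    print(f"dashboard: {dashboard_rate:.0%} | reality: {true_rate:.0%}")
    # dashboard: 60% | reality: 36%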

I watched another team at an edtech company spend three weeks debating whether a metric drop was real or an instrumentation artefact. It was the instrumentation. They had changed the event schema two sprints ago and nobody had updated the dashboard definition.

The problem is not data. The problem is that most teams have bad data and do not know it. Wrong instrumentation, confounding variables, metrics that measure activity instead of outcomes, dashboards built by someone who left the company two years ago. Teaching PMs to “be data-informed” without teaching them to interrogate the data itself produces a dangerous false confidence.

What I teach now: trust the data only as much as you trust the instrumentation. Before you act on a number, ask three questions. Where does this number come from? What does it not count? When was the tracking last validated? I wrote about this in the diagnosing metric drops section — but the deeper lesson is that judgment must come before the dashboard, not after.

4. “RICE is a reliable prioritization framework”

I used RICE (Reach, Impact, Confidence, Effort) in training for four years. It felt rigorous. You assign numbers, you calculate a score, you rank the backlog. Science.

Except it is not science. It is structured guessing with a formula on top.

I watched hundreds of teams use RICE in practice. Here is what actually happens: the PM has already decided what they want to build. They assign the Reach and Impact scores to support that decision. The Confidence field — which is supposed to be the honest acknowledgment of uncertainty — gets filled in as “medium” for everything because nobody wants to admit they are guessing. The Effort estimate comes from engineering, who are guessing too, but at least they know they are guessing.

The output looks objective. The process was not. In my observation, RICE produced the right prioritisation answer maybe 30% of the time. The rest of the time, it produced a mathematically justified version of whatever the PM already wanted to do.
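
For reference, the arithmetic behind RICE is score = (Reach × Impact × Confidence) / Effort. A minimal sketch, with invented numbers, of how easily the two most subjective fields flip the ranking:

    # RICE: score = (reach * impact * confidence) / effort
    # All numbers below are invented, for illustration only.
    def rice(reach, impact, confidence, effort):
        return (reach * impact * confidence) / effort

    # With honest-ish estimates, feature B wins.
    print(rice(reach=2000, impact=1, confidence=0.5, effort=4))   # A: 250.0
    print(rice(reach=1500, impact=2, confidence=0.5, effort=4))   # B: 375.0

    # Nudge A's two subjective fields (Impact from 1 to 2,
    # Confidence from 0.5 to 0.8) and A now "objectively" wins.
    print(rice(reach=2000, impact=2, confidence=0.8, effort=4))   # A: 800.0

Nothing in the formula distinguishes an honest Confidence of 0.8 from a motivated one; the score launders the guess.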

What works better: structured judgment with explicit trade-offs. Instead of scoring, I now teach PMs to answer three questions for each candidate: What do we lose if we do not build this? What do we learn if we do? What becomes possible — or impossible — after? This approach is less formulaic and more honest. It became the basis for what I now teach as SLICE for diagnosis in the prioritisation section. The difference is that SLICE forces you to articulate the reasoning, not hide it behind a score.

5. “Ship fast, iterate”

This one cost real money to unlearn.

The lean startup canon says ship fast, measure, iterate. And at the zero-to-one stage, this is correct. When you are finding product-market fit, speed of learning is everything. I taught this as a universal principle.

Then I worked with a company that had 40,000 merchants on its platform.

They shipped a pricing change fast. The change was wrong. Within a week, their support team was drowning. Within two weeks, merchants were publicly complaining on social media. Within six weeks — the minimum time needed to measure the impact and roll it back cleanly — they had lost 3,000 merchants. The “iterate” part of “ship fast, iterate” assumed the damage was reversible. It was not. Those merchants did not come back.

The lesson: match your speed to the reversibility of the decision. A UI change to an onboarding flow? Ship it tomorrow. A pricing change affecting 40,000 paying customers? That needs the rigour of a considered rollout — staged, monitored, with a rollback plan written before you ship.

I now teach what I call the Reversibility Razor: before deciding how fast to move, classify the decision. Is it easily reversible (change a button colour), costly to reverse (change a pricing tier), or irreversible (shut down a product line)? Speed should match category, not ideology. This idea runs through the zero-to-one section and the scaling product section — because the speed that creates a startup and the speed that destroys a scaled product are the same speed applied at different stages.
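
If it helps to see the razor laid out, here is a minimal sketch; the three categories come from above, and the rollout guidance attached to each is illustrative, not a rule engine:

    # The Reversibility Razor as a lookup table (illustrative only).
    ROLLOUT_BY_REVERSIBILITY = {
        "easily_reversible": "ship tomorrow, monitor after",                 # button colour
        "costly_to_reverse": "staged rollout, rollback plan written first",  # pricing tier
        "irreversible":      "full rigour, decide slowly and once",          # product shutdown
    }

    def rollout_plan(decision_class: str) -> str:
        return ROLLOUT_BY_REVERSIBILITY[decision_class]

    print(rollout_plan("costly_to_reverse"))
    # staged rollout, rollback plan written first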

6. “PMs don’t need to code”

I said this for years. “You do not need to be technical. You need to be curious.” It was a comforting message for the career-switchers from MBA programmes and consulting who made up a large part of our cohorts.

I was half right.

PMs do not need to write production code. Nobody is asking you to submit pull requests. But the gap between “technical PM” and “non-technical PM” is not a gentle slope — it is a cliff, and I underestimated how much it matters.

The PMs in our cohorts who understood technical constraints — API rate limits, database query costs, mobile performance budgets, the difference between a client-side and server-side operation — made categorically better product decisions. Not marginally better. Categorically. They could smell a “simple feature” that would require a database migration. They knew when an engineer’s “this will take two sprints” meant “this is genuinely hard” versus “I do not want to build this.” They could have the conversation about trade-offs in a language that engineering respected.

The PMs who could not — even brilliant ones with sharp product instincts — kept getting surprised by technical complexity. They would design features that were elegant from a user perspective and nightmarish from an implementation perspective, then feel betrayed when engineering pushed back.

I still tell PMs they do not need to write code. But I now add: you need to understand enough about how software works that you can evaluate trade-offs honestly with your engineering team. That is a specific, learnable skill. The working with engineering page reflects this updated view. Technical literacy is not optional — it is a force multiplier.

7. “Always align stakeholders before building”

This was my gospel for enterprise and growth-stage PMs. Get alignment. Socialise the idea. Build consensus. Do not start building until everyone is on the same page.

Then I spent time with early-stage startups.

At a 15-person startup in Gurugram, the PM followed my advice perfectly. She wrote a brief, scheduled alignment meetings with the CEO, the head of engineering, the sales lead, the design lead, and two senior engineers. She presented her thinking, collected feedback, revised the brief, presented again. Three weeks later, she had alignment. Three weeks in which nothing shipped.

The CEO pulled me aside and said, “Your training is making my PM slow.”

He was right. In that context, he was completely right.

At a 15-person company, the prototype is more convincing than the deck. Building something rough in a week and showing it to people generates better alignment than three weeks of presentations. The artifact does the persuading. The process does not.

The correction is not “never align.” The correction is: match your alignment process to your company’s stage and speed. At a 500-person company with multiple teams and dependencies, alignment-first prevents expensive rework. At a 15-person startup, alignment-first prevents shipping. Both are real failure modes. The skill is knowing which one you are in.

I now teach this as a spectrum, not a rule. The zero-to-one section covers the build-first-align-after mode. The stakeholder management section covers the align-first mode. Neither is wrong. Both are wrong when applied in the wrong context.


What am I wrong about now?

Probably something. That is the point.

Eight years of training has taught me that the shelf life of a confidently held belief is about three years. After that, either the market changes, the context shifts, or you accumulate enough counter-examples that honesty demands a revision.

The manual changes as I learn. If you are a PM who has seen something I am teaching here fail in practice, I want to hear about it. Not the theoretical objection — the field report. The specific situation where the advice did not work, the specific outcome that resulted, the specific thing that worked instead.

That is how every entry on this page started. Someone in the field showed me I was wrong. The least I can do is keep listening.

// learn the judgment

A PM you are mentoring tells you: “I used to believe in user research before building. But in my last three projects, user research kept showing us things we already knew and delaying decisions. I’ve changed my mind — we should just build and learn from usage data.” How do you respond?

The call: Is this PM right to change their mind, or have they drawn the wrong conclusion from their experience?

