Zero to one
MVP is not a product. It is a process, and only the start of one. If somebody tells you they got the product right the first time, they are lying.
Most startup product failures do not happen at launch. They happen months earlier, in the room where someone said “let’s build it” without asking who wants it and why.
Zero to one is the hardest phase in product management. You have no data, no users, no track record, and a team that wants direction. Every decision feels like a guess. That discomfort is real — but it is not an excuse for building without thinking.
This page is about the discipline of the zero-to-one phase: how to define what to build, who to build it for, how to find the first people who will use it, and which mistakes kill products before they ever reach a real market.
What “minimum viable” actually means
The term MVP has been so abused it has almost lost meaning. Teams use it to justify shipping half-finished work. Founders use it to mean “version 1.” Neither is what Eric Ries intended.
The definition that actually works in practice: an MVP is the smallest thing you can build that tests your riskiest assumption.
Not your smallest product. Not your cheapest product. The thing that answers the question you are most likely to be wrong about.
This means the shape of your MVP depends entirely on what question you are trying to answer:
- If the riskiest assumption is “do people have this problem?” — your MVP might be a landing page with a waitlist, not a working product at all.
- If the riskiest assumption is “will people pay for this?” — your MVP is anything that lets you take money. A spreadsheet, a WhatsApp group, a manual process.
- If the riskiest assumption is “can we build this reliably enough to charge for it?” — your MVP needs to work, but only for a narrow slice of the use case.
The failure mode most Indian startup PMs fall into: they build a feature-complete v1 when they should have built a throwaway prototype to test the core assumption. Six months of engineering to answer a question they could have answered in two weeks with five customer conversations.
An edtech startup in Bangalore. The PM has just presented the v1 roadmap — 14 features, 6 months of work.
CTO: “This is basically a full product. Where's the MVP?”
PM: “This is the MVP. We cut a lot. No mobile app, no analytics dashboard, no integrations.”
CTO: “But what is the one thing we're trying to prove? Which of these 14 features tests our core bet?”
PM: “All of them, together, prove that students will pay for self-paced technical courses.”
CTO: “You don't need 14 features to test that. You need one course, one payment link, and ten students who complete it.”
The PM had confused “minimum” with “fewer features.” The CTO was asking a different question: what is the riskiest assumption, and what is the fastest way to kill it?
Cutting features is not the same as defining an MVP. An MVP answers a question. A cut-down v1 is just a smaller product.
The three-question test before you build anything
Before writing a single spec, every startup PM should be able to answer three questions clearly. If any answer is vague, you are not ready to build.
1. What is the specific problem, and for whom?
Not “small businesses struggle with invoicing” — that is a category, not a problem. The specific version: “GST-registered freelancers in India who bill 5-15 clients per month spend 2-3 hours per invoice because they do not understand GST calculation rules, and they are afraid of getting it wrong.”
The more specific the problem definition, the better your MVP will be. Specificity forces you to make choices. “Small businesses” is everyone. “GST-registered freelancers billing 5-15 clients per month” is a segment you can find, interview, and test with.
2. What is your riskiest assumption? (The Strip-to-Core test)
Every product idea rests on a stack of assumptions. Which one, if wrong, kills the whole thing? I use a method I call Strip-to-Core: systematically remove capability layers from your product idea until you find the single irreducible value. A savings app? Strip away the bank integration, the budgeting tools, the social features. The core value is the habit of saving — not the account. Your MVP tests THAT, not the layers above it.
Usually it is one of three things:
- The problem is real and painful enough that people will change behavior to solve it
- Your proposed solution actually solves the problem (not just addresses it)
- People will pay what you need them to pay for the solution to be a viable business
Write them all down. Rank them by: how likely are we to be wrong, and how much does it cost us if we are? The highest-scoring assumption is what your MVP needs to test.
3. What is the falsification condition?
What would tell you, unambiguously, that your assumption is wrong? This is the question teams almost always skip. They define success metrics. They rarely define failure conditions.
“We will know our assumption is wrong if fewer than 30% of people who try the demo ask for pricing” — that is a falsification condition. “We will measure conversion rate” is not.
Without a falsification condition, confirmation bias takes over. You will find ways to explain away every negative signal. The MVP will “almost work.” The users who churned “were not really our target.” You will rebuild instead of pivoting, because you never decided in advance what failure looks like.
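A falsification condition can be made literal by writing it down as code before the test runs. Here is a minimal sketch using the 30% demo-to-pricing figure from the example above; the function name and the sample numbers are hypothetical:

```python
# Decided BEFORE the test runs: "fewer than 30% of demo users asking
# for pricing" falsifies the assumption (figure from the example above).
FALSIFY_BELOW = 0.30

def verdict(demo_users: int, pricing_requests: int) -> str:
    """Return an unambiguous pass/fail verdict, not just a metric."""
    rate = pricing_requests / demo_users
    if rate < FALSIFY_BELOW:
        return f"assumption falsified ({rate:.1%} asked for pricing)"
    return f"assumption survives ({rate:.1%} asked for pricing)"

print(verdict(40, 9))    # hypothetical results: below threshold
print(verdict(40, 15))   # hypothetical results: above threshold
```

Committing the threshold to writing before the data arrives is what stops the “almost worked” rationalization later.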
Take any product idea you are working on. List every assumption it makes — about user behavior, willingness to pay, technical feasibility, market size, competitive dynamics. Aim for at least eight.
Now score each assumption on two axes (1-5 scale):
- Probability of being wrong: how likely is it, given everything you know today, that this assumption is false?
- Cost if wrong: If this assumption fails, how much work is invalidated?
Multiply the scores. The assumption with the highest number is your MVP’s job. Everything else is secondary until you have answered this question.
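The scoring above is simple enough to sketch in a few lines. The assumptions and their 1-5 scores below are invented for illustration:

```python
# Hypothetical assumptions, each scored 1-5 for probability of being
# wrong (p) and cost if wrong (c). Names and numbers are invented.
assumptions = [
    ("Freelancers find GST filing painful enough to switch tools", 2, 5),
    ("They will pay a monthly fee rather than per-invoice", 4, 4),
    ("We can fetch GST rules reliably from public sources", 3, 3),
]

# Score = probability of being wrong x cost if wrong. The highest
# score marks the assumption the MVP must test first.
ranked = sorted(((p * c, name) for name, p, c in assumptions), reverse=True)
for score, name in ranked:
    print(f"{score:>2}  {name}")
```

Note that the willingness-to-pay assumption outranks the more “painful” problem assumption here: a modest cost multiplied by high uncertainty still beats a high cost you are fairly sure about.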
Validate before you build
The fastest path to a good product is not to build faster. It is to eliminate bad ideas before you build them.
Validation does not mean a survey. Surveys tell you what people say they will do. You need to know what they actually do — or at minimum, what they are already doing to solve the problem without your product.
Three validation methods that work in the zero-to-one phase:
1. The problem interview
Talk to 8-12 people who match your target user. Do not show them your idea. Ask about the problem: how do they currently solve it, how painful is it, what have they tried, what has failed. If they are describing the problem in vivid, specific detail with emotional charge behind it — you have a real problem. If they shrug and say “yeah, it could be better” — the pain is not acute enough for them to change behavior for your product.
In India, most founders skip this step because they assume they already know the problem, often from personal experience. Personal experience is a starting point, not validation. Your problem might be acute for you and irrelevant to your target segment.
2. The manual concierge
Before you build automation, do the job manually for one or two customers. Charge them — even a small amount. The willingness to pay for a manual, imperfect version of your solution tells you more than any survey or prototype test.
Zomato’s earliest incarnation, Foodiebay, was essentially scanned restaurant menus put online. No app, no logistics software, no real-time anything. When restaurants paid to be listed, the founders knew they had something.
3. The landing page test
Build a one-page description of your product with a clear call to action — join the waitlist, pay for early access, request a demo. Drive traffic to it via targeted channels (WhatsApp groups, relevant LinkedIn posts, a specific subreddit, college alumni networks). Measure how many people complete the CTA.
What counts as success depends on the CTA and channel. A 10% signup rate from a cold channel is very different from a 10% signup rate from a warm referral. Define your benchmark before you run the test.
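One way to keep yourself honest is to encode the per-channel benchmarks before any traffic arrives. A sketch with invented channel names and numbers; notice that the warm channel fails its own benchmark despite a higher absolute signup rate:

```python
# Illustrative numbers only; the channels and benchmarks are
# assumptions, fixed before the test runs, not figures from the text.
channels = {
    # channel: (visitors, signups, pre-agreed benchmark signup rate)
    "cold LinkedIn post":   (500, 35, 0.05),
    "warm alumni WhatsApp": (80, 14, 0.25),
}

def evaluate(channels):
    """Judge each channel against its own benchmark, not a global one."""
    return {
        name: ("PASS" if signups / visitors >= benchmark else "FAIL",
               signups / visitors)
        for name, (visitors, signups, benchmark) in channels.items()
    }

for name, (status, rate) in evaluate(channels).items():
    print(f"{name}: {rate:.1%} -> {status}")
```

A 7% rate from a cold channel can be a pass while 17.5% from a warm one is a fail; the benchmark carries the judgment, not the raw number.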
Finding your first users
This is where most startup PMs get it wrong. They build the product, then go looking for users. The sequence should be the opposite: talk to potential users before you build anything, and turn the best of those conversations into your first users.
Start in your immediate network — but be honest about the signal.
Your friends will sign up. Your college batch will leave encouraging comments. Your family will tell you it is great. None of this is signal. Early adopters from your personal network have a social obligation to support you. Strangers do not.
The useful metric from your personal network is not whether they sign up — it is whether they invite someone else. Organic referral from warm networks means the product is solving a real problem. Zero organic referral means it is not, regardless of how many people politely said they liked it.
Find people who are already solving the problem imperfectly.
The best early users are not people who might need your product someday. They are people who are already doing something painful to solve the problem you are addressing. They have proven the pain is real — they just have a worse solution.
If you are building a tool for freelance designers to track client feedback, find freelance designers who are currently using Google Sheets for that. They have already decided the problem is worth solving. Your job is just to convince them your solution is better than the spreadsheet.
In India, look for WhatsApp groups, Telegram channels, LinkedIn communities, and college alumni networks where your target users are already gathering. These are high-trust environments where you can find people describing their problems in real language, not survey language.
Make it easy to say no — it filters for real interest.
When you approach potential early users, give them a genuine out. “This might not be useful to you, but…” or “I am looking for people who specifically struggle with X — if that is not your situation, this is not for you.” This repels people who would be polite signups but bad users. It attracts people who genuinely have the problem.
The person who says “actually, yes, X is a massive headache for me — tell me more” is worth ten people who signed up to be supportive.
The mistakes that kill products before launch
These are not hypothetical. They are patterns from watching hundreds of startup PMs go through zero to one.
Building for imagined users, not real ones
“Our user is a 25-35 year old urban professional.” This is not a user — it is a demographic slice. Real users have a specific context, a specific pain, and a specific alternative they are currently using. The PM who builds for a demographic is guessing. The PM who builds for the 30 people they interviewed in depth is not.
Scope creep dressed as completeness
“We just need to add X before we can show it to users.” No. The instinct to delay first-user contact by adding one more feature is almost always fear, not product judgment. The discomfort of showing an unfinished product to real users is exactly the discomfort you should lean into. Their feedback reshapes the product in ways that internal iteration never will.
Confusing interest with commitment
“Everyone we talked to said they’d use it.” Interest is free. Commitment has friction. The behavioral test is: will they give you something scarce? Time (taking a 45-minute session to use your product), money (paying even ₹99 to join), or reputation (referring a colleague). Until a user has given you something scarce, you do not know if the interest is real.
Optimizing for the wrong metric in the wrong phase
Downloads, signups, and page views are vanity metrics in the zero-to-one phase. The only metrics that matter early are: do users come back, do they tell someone else, and do they pay? If you are optimizing for top-of-funnel numbers before you know whether the product retains users, you are filling a leaky bucket.
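A concrete way to operationalize “do users come back” is to compute retention from raw activity dates rather than counting signups. The user IDs, dates, and the 7-day window below are hypothetical choices for illustration:

```python
from datetime import date

# Hypothetical activity log: user -> dates on which they were active.
activity = {
    "u1": [date(2024, 1, 1), date(2024, 1, 9)],
    "u2": [date(2024, 1, 2)],
    "u3": [date(2024, 1, 3), date(2024, 1, 12)],
}

def week1_retention(activity):
    """Share of users active again 7+ days after their first visit."""
    returned = sum(
        1 for dates in activity.values()
        if (max(dates) - min(dates)).days >= 7
    )
    return returned / len(activity)

print(f"week-1 retention: {week1_retention(activity):.0%}")
```

Two of the three hypothetical users return, so retention is 67%; a signup counter would have reported three users and told you nothing about the leak.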
Anchoring on the first solution
You have your first idea. It feels right. You start building. Twelve weeks later, you are so invested that you cannot see the six contradictory signals you received along the way. Zero to one requires genuine willingness to abandon the first solution if the evidence points elsewhere. This is harder than it sounds, especially when you have announced the idea publicly.
Imagine it is six months from now and your product has failed. Not stalled — genuinely failed, shut down, no users.
Write a one-paragraph postmortem as if you are looking back. What was the reason? Be specific: “We built for the wrong segment,” “we could not get users to return after day 3,” “the unit economics only worked at a scale we never reached.”
Now reread it. Is any part of that failure already visible today? Is there an assumption you are avoiding testing because you are afraid of the answer?
The purpose of a pre-mortem is not pessimism. It is permission to be honest about what you already know.
What a good zero-to-one PM looks like in practice
A startup PM in the zero-to-one phase spends most of their time doing three things:
Talking to users. Not reading about them, not having someone else interview them. Personally sitting across from (or on a video call with) potential users and asking open-ended questions about their behavior. Minimum 5 conversations per week in the early phase. Not asking “would you use this” — asking “walk me through the last time you had to do X.”
Writing clearly. The discipline of writing down what you learned from each conversation, what assumption it validated or challenged, and what you are going to do differently. Writing forces clarity. If you cannot write a clear three-sentence summary of what a user conversation taught you, you did not really learn anything from it.
Making the next bet explicit. What are you building this week, what assumption does it test, and what will you do if the test fails? Every week in the zero-to-one phase should have a clear bet on the table. Not a project plan — a bet. Bets imply the possibility of losing.
The first version of almost every successful product was embarrassingly small. Gmail launched invite-only with 1 GB storage when competitors offered 4 MB — the bet was on storage, and the invite scarcity was a growth mechanic, not a technical limitation. Naukri.com launched with manually entered job listings and a team that called recruiters to fill the database. Razorpay’s first API was a Python library that only worked with certain banks. None of them waited to be complete before testing with real users.
The goal of zero to one is not to build a product. It is to learn whether a product is worth building — and to learn that as cheaply and quickly as possible.
Test yourself
You have been building a B2B SaaS product for 4 months — a contract management tool for small law firms in India. You have 12 beta users from your personal network, mostly friends of friends who signed up as a favor. Retention is low: only 3 of the 12 come back after the first week. The founding team wants to do a public launch next month to generate momentum.
The CEO is excited about the launch timeline. The team has prepared a Product Hunt post and a LinkedIn campaign. You have one week before the campaign assets go out. The call: do you let the launch proceed, or do you push to delay it until you understand why 9 of the 12 users are not coming back?
Your path
You are PM at a Series A startup in Pune building a B2B tool for chartered accountants to manage GST filings for their clients. You've found strong PMF with solo CAs handling 10-20 client books. The product is tight and those users love it. Now your co-founder wants to build a second product line — a payroll module for the same CAs to offer their SME clients. His reasoning: 'We already have the trust. We can cross-sell. The CA will buy both.' Engineering capacity is 6 engineers. The core GST product still has significant reliability issues — occasional data sync failures that your best users complain about regularly.
The call: Do you greenlight the payroll module now, or do you hold the co-founder off until the core product is stable?
Where to go next
- Understand product-market fit as the next milestone: Product-Market Fit
- Learn how to structure user discovery conversations: Customer Interview Methods
- Apply problem definition rigor: Problem Definition
- Understand what happens after zero-to-one: Scaling the Product