Product launch playbook
The launch date is not the finish line. It is the first day your assumptions meet reality — and reality does not read your PRD.
Most product launches fail silently. Not with a server crash or an angry tweet storm — but with a whimper. A feature ships, nobody notices, adoption flatlines, and three months later someone asks in a review: “Wait, did we actually launch that?”
The problem is rarely the product. The problem is that PMs treat launch as a date on the calendar instead of a coordinated operation across engineering, marketing, sales, support, and leadership. You built the thing. Now you have to land it.
I have run launches across B2B SaaS, consumer apps, and platform products — in India and globally. The playbook that follows is what I teach at Pragmatic Leaders and what I use myself. It is not theoretical. Every section exists because I have seen the failure mode it prevents.
The three launch phases
Every launch, regardless of size, has three phases. Skip any one of them and you are gambling.
Pre-launch (T-minus 4 weeks to T-minus 1 day). Alignment, readiness checks, risk mitigation. This is where 80% of launch work happens. If you are scrambling during launch week, you started too late.
Launch day (T-zero). Controlled rollout, monitoring, rapid response. The goal is not perfection — it is controlled exposure with fast feedback loops.
Post-launch (T-plus 1 day to T-plus 2 weeks). Adoption tracking, bug triage, stakeholder communication, and the honest retrospective.
Pre-launch: the alignment sprint
The launch brief
Before anything else, write a one-page launch brief. This is not a PRD — the PRD was for engineering. The launch brief is for everyone else: marketing, sales, support, leadership, legal.
It answers five questions:
- What are we launching? One paragraph. No jargon. Your CEO’s executive assistant should understand it.
- Who is it for? The specific user segment. Not “our users” — which users, in what context, with what problem.
- Why now? The business trigger. A competitive move, a retention problem, a regulatory deadline, a revenue opportunity.
- What does success look like? Two to three metrics with baselines and targets. Same ones from your PRD’s success metrics section.
- What could go wrong? The top three risks and your mitigation plan for each.
Circulate this brief four weeks before launch. If anyone on the cross-functional team cannot explain the launch in their own words after reading it, the brief is not clear enough.
Stakeholder alignment — the RACI you actually need
Launch planning meeting, three weeks before a B2B feature release. Six people in the room.
PM: “So who is handling the customer communication for this launch?”
Marketing Lead: “I assumed product was sending the in-app announcement.”
PM: “We are doing in-app. But what about the email campaign to existing enterprise accounts?”
Marketing Lead: “That was not on our sprint board. When did that get decided?”
Customer Success Lead: “Our CSMs have been telling key accounts it is coming next month. Some of them are expecting a webinar.”
Three teams. Three different assumptions about who owns customer communication. Nobody wrote it down.
If the ownership question surfaces during launch week instead of during planning, you will ship a product that nobody hears about — or worse, that everyone communicates inconsistently.
The fix is not a 47-row RACI spreadsheet. It is a focused ownership table for the six things that matter at launch:
| Activity | Owner | Informed | Deadline |
|---|---|---|---|
| Feature QA and sign-off | Engineering lead | PM | T-minus 5 days |
| Customer-facing comms (email, in-app) | Marketing | PM, CS | T-minus 3 days |
| Sales enablement (deck, FAQ, demo script) | Product Marketing | Sales, PM | T-minus 5 days |
| Support readiness (knowledge base, escalation path) | Support lead | PM, Engineering | T-minus 3 days |
| Internal announcement (all-hands, Slack) | PM | Leadership | T-minus 1 day |
| Rollout decision (go/no-go) | PM | Engineering, Leadership | T-zero |
One owner per row. Not two. Not “shared.” If two people own something, nobody owns it.
The readiness checklist
This is the checklist I use. Adapt it to your context, but do not skip any category.
Engineering readiness
- Feature complete and merged to release branch
- Performance testing done under expected load (not “it works on staging”)
- Rollback plan documented and tested — you can revert within 15 minutes
- Feature flags configured for phased rollout
- Monitoring dashboards live: error rates, latency, key user flows
- On-call rotation confirmed for launch window
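The feature-flag item above deserves a concrete illustration. This is a minimal, hypothetical sketch of percentage-based gating — the `in_rollout` function and the `route_optimizer` flag name are assumptions for illustration, not any particular flag library:

```python
import hashlib

def in_rollout(user_id: str, percent: int, feature: str = "route_optimizer") -> bool:
    """Return True if this user is inside the current rollout percentage."""
    # Hash (rather than random choice) so each user lands in a stable
    # bucket: a user enabled at 5% stays enabled at 25% and 100%.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return bucket < percent
```

Expanding the rollout only ever adds users, never removes them, which is exactly the property a phased rollout needs.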
Customer-facing readiness
- Release notes written in user language, not engineering language
- Help center articles published (or drafted and scheduled)
- In-app announcements configured with correct targeting
- Email campaign scheduled with correct segment filters
- Support team briefed with FAQ document and escalation matrix
Internal readiness
- Sales team has updated pitch deck and competitive positioning
- CSMs have talking points for key accounts
- Leadership has executive summary for board/investor visibility
- Legal has signed off on any compliance-sensitive claims
Risk readiness
- Top three failure scenarios documented with response plans
- War room channel created (Slack/Teams) with all responders added
- Customer communication templates drafted for incident scenarios
- Escalation path clear: who decides to pause, rollback, or push forward
Launch day: controlled exposure
The phased rollout
Never launch to 100% of users on day one. I do not care how well you tested. Production traffic is different from staging traffic, and real users do things your test suite never imagined.
The standard pattern I use for SaaS rollouts, in India and globally:
Phase 1 (T-zero, hours 0-4): Internal dogfood. Your own team uses it in production. Catch the embarrassing bugs before customers do.
Phase 2 (T-zero, hours 4-24): 5% of users. Pick a segment you can monitor closely. For B2B, this might be three to five friendly accounts who agreed to early access. For B2C, a geographic or cohort-based slice.
Phase 3 (T-plus 1-3 days): 25% of users. Watch your error rates, latency, and support ticket volume. If any metric spikes beyond the threshold you set, pause.
Phase 4 (T-plus 3-7 days): 100% of users. Full rollout with all customer comms.
Each phase has a go/no-go gate. The PM makes the call. Not a committee — you. If the error rate is above your threshold, you pause. If support tickets spike, you pause. If a key integration breaks for a major account, you pause and talk to them directly.
When you do pause, do not panic and do not blame testing. Get a diagnosis, set a timeline, communicate it to stakeholders, and resume only after the fix is verified. This is launch management.
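A go/no-go gate is easiest to honor if the thresholds are written down as data before launch, not argued about during it. A hypothetical sketch — the metric names and limits here are illustrative, not prescriptions:

```python
# Set these thresholds during planning, with your engineering lead.
THRESHOLDS = {
    "error_rate": 0.02,         # pause if more than 2% of requests fail
    "p95_latency_ms": 1500,     # pause if p95 latency exceeds 1.5 seconds
    "ticket_spike_ratio": 2.0,  # pause if tickets hit 2x the daily baseline
}

def gate_decision(metrics: dict) -> str:
    """Compare live metrics against pre-agreed limits; any breach means pause."""
    breaches = [name for name, limit in THRESHOLDS.items()
                if metrics.get(name, 0) > limit]
    return f"PAUSE: {', '.join(breaches)}" if breaches else "GO"
```

The point is not automation for its own sake: writing the limits down in advance removes the temptation to rationalize a spike at the moment you most want the launch to continue.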
The launch day rhythm
For the first 24 hours, establish a check-in cadence:
- Every 2 hours: Engineering reports on error rates, latency, and system health
- Every 4 hours: PM shares a one-paragraph update in the stakeholder channel — what is live, what was found, what is the plan
- Every 8 hours: Support reports on ticket volume and top issues
- End of day 1: Go/no-go decision for expanding the rollout
Do not skip the stakeholder updates even if everything is going well. “Launch proceeding as planned, no issues, expanding to 25% tomorrow” takes 30 seconds to type and prevents your VP from pinging you asking “how’s the launch going?”
Post-launch: the first two weeks
Adoption tracking
A feature that is live is not a feature that is used. Track these three metrics daily for the first two weeks:
Activation rate. Of the users who have access, what percentage performed the key action at least once? If you launched a new reporting dashboard, how many users opened it? If the activation rate is below 20% after a week, you have a discovery problem — users do not know the feature exists or do not understand why they should care.
Retention rate. Of the users who activated, what percentage came back and used it a second time within the first week? A feature with high activation but low retention is interesting but not useful. Users tried it, decided it was not worth the effort, and went back to their old workflow.
Support ticket rate. Are users contacting support about this feature? What are they asking? The first week of support tickets is the most honest user research you will ever get. Group tickets by theme and feed them back to the engineering team as potential improvements.
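Activation and retention fall straight out of a raw event log. A hypothetical Python sketch, assuming events of the form `(user_id, event_date)` and simplifying "retained" to "used the feature on at least two distinct days" — in practice this would be a query against your analytics warehouse:

```python
from datetime import date

def adoption_metrics(events, eligible_users):
    """Return (activation_rate, retention_rate) from (user_id, event_date) pairs."""
    by_user = {}
    for user, day in events:
        by_user.setdefault(user, set()).add(day)
    # Activated: eligible users who performed the key action at least once.
    activated = {u for u in by_user if u in eligible_users}
    # Retained (simplified): activated users who came back on a second day.
    retained = {u for u in activated if len(by_user[u]) >= 2}
    activation_rate = len(activated) / len(eligible_users)
    retention_rate = len(retained) / len(activated) if activated else 0.0
    return activation_rate, retention_rate
```

Run it (or its SQL equivalent) daily for the first two weeks so you see the trend, not a single snapshot.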
The honest retrospective
Two weeks after launch, run a retrospective. Not the sanitized one where everyone says “great teamwork.” The honest one where you answer three questions:
- What did we get right? Be specific. “Good testing” is not useful. “The phased rollout caught the webhook timeout before it hit 95% of users” is useful.
- What surprised us? Every launch has surprises. The adoption pattern you did not expect. The edge case that slipped through testing. The stakeholder who was not aligned. Document these — they become your pre-launch checklist items for the next launch.
- What would we do differently? Not “be more careful” — that is not actionable. “Add concurrent request testing to our load test suite” is actionable. “Brief the CSM team five days before launch instead of two” is actionable.
Write the retrospective down. Share it with the team. Add the action items to your next sprint.
When things go wrong on day one
They will. Here is the decision framework.
Severity 1 — Data loss or security issue. Rollback immediately. Do not wait for a fix. Communicate to affected users within one hour. Bring in your security and legal teams. This is not a product decision — it is a crisis response.
Severity 2 — Core flow broken for a segment of users. Pause rollout to new users. Hotfix and deploy. Communicate to affected users with a timeline. Do not expand the rollout until the fix is verified for 24 hours.
Severity 3 — Non-critical bug or UX issue. Log it, prioritize it, and keep rolling out. Communicate to support so they can handle tickets. Fix in the next sprint, not as a hotfix.
Severity 4 — Adoption is lower than expected. This is not a day-one problem. Do not panic-ship a notification campaign on day two. Wait a week, look at the data, diagnose whether the problem is awareness, onboarding, or value — then act.
The most common mistake is treating a Severity 4 problem (low adoption) like a Severity 2 problem (core flow broken). Low adoption requires patience and diagnosis. A broken flow requires speed.
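If you run launches often, this framework is worth encoding in your incident runbook so nobody improvises at 2 AM. A hypothetical sketch as a simple lookup — the wording is a paraphrase of the framework above, and the function name is an assumption:

```python
RESPONSES = {
    1: "Rollback immediately; notify affected users within 1 hour; loop in security and legal.",
    2: "Pause rollout to new users; hotfix; verify for 24 hours before expanding.",
    3: "Log and prioritize; brief support; fix in the next sprint, not as a hotfix.",
    4: "Do not react on day one; after a week, diagnose awareness vs onboarding vs value.",
}

def response_for(severity: int) -> str:
    """Map an incident severity to the pre-agreed response."""
    return RESPONSES.get(severity, "Unknown severity: escalate to the PM.")
```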
Now apply it. Take a feature you are planning to ship in the next month and build a launch plan using this structure:
- Write the five-question launch brief. What, who, why now, success metrics, top risks. Keep it to one page.
- Fill out the ownership table. Six rows. One owner per row. If you cannot name the owner, that activity is not covered.
- Define your phased rollout. What percentage at each phase? What is the go/no-go gate for each phase? What metric triggers a pause?
- Draft two incident communication templates. One for “we found a bug, here is what we are doing” and one for “we are pausing the rollout while we investigate.”
- Set your day-one check-in cadence. Who reports what, how often, in which channel?
Share the plan with your engineering lead and ask: “Is there anything in here that will not work for the team?” Their feedback will reveal gaps you missed.
Test yourself
You are the PM for a logistics SaaS product based in Hyderabad. You have just launched a new route optimization feature to 25% of your fleet-manager users. It is 11 AM on launch day. Your engineering lead pings you: “The optimization engine is returning routes that are 40% longer than the old algorithm for some users. Not all — maybe 15% of requests. We are investigating.” Your head of sales is presenting the feature to a key enterprise prospect at 2 PM today.
The feature is live for 25% of users. Some are getting bad results. A high-stakes sales demo is three hours away. Your engineering team needs time to diagnose.
Razorpay is launching a new feature that lets merchants embed a one-click payment button directly into WhatsApp Business messages. It is Monday. Launch is scheduled for Thursday to coincide with the Diwali campaign commitments made to 200 merchant partners. On Tuesday morning, QA flags a bug: on Redmi devices running MIUI 12, the payment confirmation screen renders blank after a successful transaction — the user sees nothing, even though the payment processed. Roughly 5% of Razorpay's merchant customer base uses Redmi devices. The fix requires a workaround in the rendering layer that engineering estimates will take 3-4 days.
The call: The Diwali campaign is committed and 200 merchants are expecting the launch Thursday. The bug affects 5% of users — but those users will see a blank screen after paying, with no confirmation. Do you launch Thursday or delay?
Keep going
- Writing PRDs That Engineers Read — the document that precedes the launch
- Metrics and KPIs — how to pick the right success metrics for your launch
- Stakeholder Management — aligning the people who can make or break your rollout