Ethics in product
Just because you can do it doesn't mean you should do it. This used to sound like a tired cliché. Today it is truer than ever.
Ethics in product management is not a philosophy elective. It is a series of decisions you make every sprint — about what data you collect, how you get consent, who you design for, and what tricks you refuse to use.
Most PMs do not wake up and decide to be unethical. What happens instead is slow drift. A growth metric under pressure. A “small” UI change that technically isn’t lying. A consent flow that nobody reads anyway. Each decision is defensible in isolation. In aggregate, they erode user trust.
This page is not a lecture about being good. It is a practical framework for the ethical calls you will actually face — with specific tests you can apply before shipping.
The ethics spectrum is not binary
The easy calls are rare. Nobody needs a framework to reject building actual malware. The hard calls live in a grey zone where business incentive and user welfare pull in opposite directions.
Sprint planning. Growth team reviewing a checkout flow experiment.
Growth PM: “We tested adding the insurance upsell as pre-checked. Conversion went up 12%.”
Designer: “But support tickets about unexpected charges also spiked.”
Growth PM: “Those are only 3% of buyers. Net revenue is still way up.”
Engineering Lead: “Isn't this exactly what the insurance regulator flagged as a dark pattern last year?”
The room went quiet. Everyone looked at the PM.
A 12% conversion lift is real. So are the users who didn't notice the checkbox.
This is the kind of call you will face. Not evil versus good. Revenue versus trust. Short-term metric versus long-term brand. The PM is the person in the room who has to make the call — and own the consequences.
Dark patterns: the PM’s responsibility
A dark pattern is a design that tricks users into doing something they did not intend. The term was coined by Harry Brignull in 2010, but the practice is as old as commerce.
Here are the patterns you will encounter most often as a PM — not in theory, but in actual sprint reviews and A/B test results:
Pre-checked boxes. Add-ons, insurance, newsletter signups ticked by default. In India, this became a national conversation when flight booking sites pre-selected travel insurance and seat upgrades. The user who clicks through fast pays more than they intended. Your conversion metric looks great. Your user’s bank statement does not.
Confirm-shaming. “No thanks, I don’t want to save money.” Framing the opt-out as a stupid decision. It works on metrics. It also signals contempt for your user.
Roach motel. Easy to get in, nearly impossible to get out. One-click subscribe, twelve-step unsubscribe. If your cancellation flow is harder than your signup flow, you are running a roach motel.
Misdirection. Making the thing you want the user to click visually dominant, while the thing they actually want (like “skip” or “decline”) is greyed out, tiny, or placed where nobody looks.
Hidden costs. Showing a low price, then stacking fees at the last step. Platform fees, convenience charges, service charges — each individually “small,” collectively significant. Every Indian who has booked a movie ticket on BookMyShow knows this experience.
The test
Before shipping any flow that nudges user behavior, apply this check:
If the user fully understood what was happening, would they still do it?
If the answer is no — if your conversion depends on users not paying attention — you are running a dark pattern. It does not matter what your A/B test says.
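One way to make this test operational is to refuse to read out any conversion experiment without a harm-side guardrail metric next to it: support tickets, refunds, uninstalls. Below is a minimal sketch in TypeScript; the metric names and the 1% threshold are illustrative assumptions, not a standard.

```typescript
// An experiment readout paired with a guardrail, so a lift that depends on
// user inattention cannot ship on the primary metric alone.
interface ExperimentReadout {
  conversionLiftPct: number; // e.g. +12 from the pre-checked insurance box
  guardrailDeltaPct: number; // e.g. rise in "unexpected charge" tickets
}

function shipDecision(r: ExperimentReadout): "ship" | "hold for review" {
  // Any meaningful rise in a harm signal overrides the primary metric.
  // The 1% threshold here is a placeholder; set your own deliberately.
  if (r.guardrailDeltaPct > 1) return "hold for review";
  return r.conversionLiftPct > 0 ? "ship" : "hold for review";
}

console.log(shipDecision({ conversionLiftPct: 12, guardrailDeltaPct: 3 }));
// "hold for review": the 12% lift does not get to outvote the harm signal
```

The design choice matters more than the code: if the guardrail is defined before the experiment runs, the sprint-planning argument above never becomes a judgment call made under revenue pressure.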
Data privacy: what PMs actually need to know
Privacy is not a compliance checkbox that legal handles. Every feature you ship that collects, stores, or shares user data is a privacy decision. And in India, the stakes are specific.
The Indian context
Indians share personal data with a casualness that would alarm a European regulator. “Sir, send me an Aadhaar copy on WhatsApp” is a sentence spoken a thousand times a day. Masked Aadhaar exists, but adoption is low because nobody asks for it.
UPI transactions carry transaction metadata — where you spent, how much, how often. This data is gold for product teams and a privacy minefield. As UPI expands internationally (Singapore, UAE, more coming), the privacy layer underneath will have to harden. The PM who builds on UPI data today needs to think about what happens when the Digital Personal Data Protection Act (DPDPA) is fully enforced tomorrow.
The DPDPA, enacted in 2023, gives Indian users the right to consent, access, correct, and erase their data. It is not as punitive as GDPR yet, but the direction is clear. If you are building a product in India, treat DPDPA compliance as a floor, not a ceiling.
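In product terms, those four rights translate into four user-facing capabilities your architecture has to support from day one. Here is a minimal sketch of that surface area in TypeScript; the interface and method names are hypothetical, taken from neither the Act nor any SDK.

```typescript
// The four DPDPA user rights, expressed as product capabilities.
// Names are illustrative; the Act mandates the rights, not this API.
interface DataPrincipalRights {
  // Consent: recorded per purpose, not as one blanket checkbox.
  recordConsent(userId: string, purpose: string): Promise<void>;
  // Access: the user can see everything you hold about them.
  exportData(userId: string): Promise<Record<string, unknown>>;
  // Correction: the user can fix what is wrong.
  correctData(userId: string, field: string, value: unknown): Promise<void>;
  // Erasure: a real delete, not a soft-hide flag.
  eraseData(userId: string): Promise<void>;
}
```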
Data classification — the PM’s job
Most privacy failures happen because nobody classified the data before collecting it. Tools like Segment now build data inventories, consent management, and classification into the product itself; privacy tooling has become a first-class product category.
As a PM, you need to classify every data point your product collects into three buckets:
Green — anonymous and aggregated. Page views, feature usage counts, funnel metrics. No individual can be identified. Collect freely.
Yellow — pseudonymous or behavioral. Product category preferences, purchase frequency (without specific items), session patterns. An individual could potentially be re-identified. Collect with consent. Anonymize in storage.
Red — personally identifiable. Name, email, Aadhaar, phone number, health data, financial data, location history. Collect only what you need. Encrypt at rest. Delete when the purpose expires.
The PM who says “we’ll figure out privacy later” is building technical debt that compounds faster than feature debt — because the cost of retrofitting consent, reclassifying data, and purging non-compliant records can dwarf the cost of the feature itself.
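To keep the classification from staying aspirational, some teams encode it directly in the tracking layer, so a data point cannot be collected without first being classified. A minimal sketch in TypeScript; the field names, policies, and the `track` wrapper are hypothetical, not any vendor's API.

```typescript
// Sensitivity buckets from the Green/Yellow/Red scheme above.
type Sensitivity = "green" | "yellow" | "red";

interface FieldPolicy {
  sensitivity: Sensitivity;
  requiresConsent: boolean;     // Yellow and Red: no consent, no collection
  encryptAtRest: boolean;       // Red must always be encrypted
  retentionDays: number | null; // null = keep only while the purpose exists
}

// Every field the product collects is declared here before it is collected.
const dataInventory: Record<string, FieldPolicy> = {
  page_view:          { sensitivity: "green",  requiresConsent: false, encryptAtRest: false, retentionDays: null },
  purchase_frequency: { sensitivity: "yellow", requiresConsent: true,  encryptAtRest: false, retentionDays: 365 },
  phone_number:       { sensitivity: "red",    requiresConsent: true,  encryptAtRest: true,  retentionDays: 90 },
};

// Refuse to record a field that was never classified or lacks consent.
function track(field: string, value: unknown, hasConsent: boolean): void {
  const policy = dataInventory[field];
  if (!policy) throw new Error(`Unclassified field: ${field}. Classify before collecting.`);
  if (policy.requiresConsent && !hasConsent) return; // drop silently, never store
  // ...hand off { field, value } to the real analytics pipeline here
}
```

The point of the sketch is the failure mode it prevents: an unclassified field throws at development time, which is exactly when "we'll figure out privacy later" gets caught.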
Pick one feature you own (or use daily). Map every data point it collects:
- What data is collected? List each field.
- Classify each as Green, Yellow, or Red.
- For Yellow and Red: is there explicit consent? Can the user see what was collected? Can they delete it?
- Is any Red data stored unencrypted or retained beyond its stated purpose?
If you found a Red data point without clear consent or a deletion path, you have found a compliance risk. Raise it before a regulator does.
You are PM at a Bangalore-based BNPL (buy-now-pay-later) startup targeting first-time credit users in tier-2 cities. Your growth team has tested a checkout flow where the BNPL option is the only payment method shown on the first screen — UPI and card options appear only if the user taps 'See all payment methods.' A/B test results: BNPL conversion increased 34%, average order value increased 22%. The business case is strong. Legal says the flow is technically compliant with current RBI guidelines. The user is not being lied to. But 40% of users who converted via BNPL did not scroll past the first screen and never saw the alternative options.
The call: Do you ship the flow as tested? The metric and the legal team both say yes. What is the real product question here?
Accessibility: not a luxury, not a Phase 2
The most common excuse PMs give for ignoring accessibility: “We’ll add it later when we have more resources.”
Later never comes. And the longer you wait, the more expensive the retrofit.
Accessibility is usually framed as a big-company concern: something Google and Microsoft worry about, and a young startup can skip. This framing is wrong on every level. Here is why it matters for every product, at every stage:
The numbers. India has over 26 million people with disabilities according to the 2011 census — a number widely considered an undercount. That is a market larger than the population of Australia. If your product cannot be used by someone with a visual impairment, motor limitation, or cognitive disability, you have excluded a market the size of a country.
Situational disability is universal. A parent holding a baby has one hand free. A person in a loud auto-rickshaw cannot hear audio. A user in bright sunlight cannot see low-contrast text. Everyone experiences situational disability. Designing for permanent disability improves the product for everyone. The iPhone’s AssistiveTouch button was built for users with motor disabilities. Millions of able-bodied users activated it because they did not want to wear out their physical home button.
The business case writes itself. Accessibility improvements are UX improvements. Keyboard navigation, clear hierarchy, readable fonts, sufficient contrast, proper alt text — every one of these makes the product better for all users, not just disabled ones.
What PMs should actually do
You do not need to become a WCAG expert. But you need to own these decisions:
- Contrast ratios. Text must be readable. WCAG AA requires 4.5:1 for normal text and 3:1 for large text. Test it; free tools exist, and the underlying math is simple (see the sketch after this list).
- Keyboard navigation. Every action a mouse user can take, a keyboard user should be able to take. If your modal traps keyboard focus, fix it.
- Alt text on images. If an image conveys information, it needs alt text. If it is decorative, mark it as such. This is a content decision, not an engineering decision.
- Touch targets. Minimum 44x44 pixels. India’s most common phones are budget Androids with smaller screens. Small touch targets on small screens are an accessibility and a usability failure.
- Form labels and error messages. Screen readers need labels. Users need clear error messages. “Invalid input” tells nobody anything.
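None of these checks needs a specialist. The WCAG contrast ratio, for instance, is a few lines of arithmetic. A minimal sketch in TypeScript using the published WCAG 2.x relative-luminance formula; the hex-parsing helper is my own convenience.

```typescript
// Linearize one sRGB channel (0-255) per the WCAG 2.x definition.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a hex colour like "#1a73e8".
function luminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const [r, g, b] = [(n >> 16) & 255, (n >> 8) & 255, n & 255];
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1:1 to 21:1.
function contrastRatio(fg: string, bg: string): number {
  const [l1, l2] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// AA check: 4.5:1 for normal text, 3:1 for large text.
console.log(contrastRatio("#767676", "#ffffff").toFixed(2)); // ~4.54, passes AA
```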
The PM’s job is not to write the ARIA attributes. It is to make accessibility a requirement in the acceptance criteria, not an afterthought in a backlog item labelled “nice to have.”
Design review for a new onboarding flow.
Designer: “Here's the onboarding. Clean, minimal. Users pick their interests from these icons.”
PM: “Looks great. What happens with a screen reader?”
Designer: “...we haven't tested that yet.”
PM: “Do the icons have alt text? Can you navigate the picker with a keyboard?”
Designer: “Not yet. We were going to handle accessibility in Phase 2.”
PM: “Phase 2 is where accessibility goes to die. Let's add it to the acceptance criteria now. It's five lines of markup, not a rewrite.”
The PM who asks 'does this work with a screen reader?' during design review is doing their job. The PM who waits for a complaint is not.
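For reference, here is roughly what "five lines of markup" looks like in practice: a hypothetical React/TSX sketch of one interest icon from that onboarding flow. Component and prop names are invented; the point is how little code the acceptance criterion actually demands.

```tsx
// One selectable interest icon that a screen reader can announce and a
// keyboard user can reach. Assumes a standard React/TSX setup.
function InterestIcon(props: {
  label: string; iconUrl: string; selected: boolean; onToggle: () => void;
}) {
  return (
    <button
      type="button"                  // a real button: focusable and keyboard-activatable
      aria-pressed={props.selected}  // announces toggle state to screen readers
      aria-label={props.label}       // names the control, e.g. "Cricket"
      onClick={props.onToggle}
    >
      {/* decorative image: the aria-label above carries the meaning */}
      <img src={props.iconUrl} alt="" aria-hidden="true" />
    </button>
  );
}
```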
A framework for ethical decisions
When you face a grey-zone decision — the kind where the metric says yes but your gut says wait — use this framework. It is not original. It is a synthesis of what I have seen work across teams.
The three-question test
1. The newspaper test. If a journalist wrote about this feature decision, would you be comfortable with the headline? Not “would it be legal” — would you be comfortable? “Company tricks users into buying insurance” is a headline. “Company ships accessible onboarding for 26 million users” is also a headline. Which one do you want?
2. The reversal test. If you were the user, would you feel this is fair? Not “would you notice” — that is the dark pattern defence. Would you feel it is fair if you fully understood what was happening? If you would feel misled, tricked, or trapped, the feature fails this test.
3. The regret test. In two years, will this decision have built trust or eroded it? Dark patterns have a half-life. They boost metrics for a quarter and corrode brand for a decade. The pre-checked insurance box that boosted conversion 12% today creates the Reddit thread titled “Why [Company] is a scam” tomorrow.
You are a PM at a food delivery app. Your engagement metrics are flat. The growth team proposes tripling push notification frequency — breakfast, lunch, dinner, plus 'flash deals.' Testing shows 22% more orders from the high-frequency group. But uninstall rates also rose 8%. Your VP wants to ship it. What do you do?
The VP says: 'The net revenue math works. Ship it.' The data scientist says: 'The uninstall cohort skews toward power users.' What's your move?
The PM as the last line of defence
In most organizations, there is no “ethics team” that reviews every feature. Legal reviews for compliance. Design reviews for usability. Engineering reviews for feasibility. But nobody reviews for should we?
That is the PM’s job.
You sit at the intersection of business, technology, and user experience. You are the person who sees the growth metric and the support ticket. The A/B test result and the uninstall rate. The conversion lift and the trust erosion.
When the growth team says “pre-check the box,” you are the one who asks what happens to the users who didn’t notice. When the data team says “collect everything, we’ll figure out what to use later,” you are the one who asks what happens when the user asks to see their data. When the designer says “we’ll add accessibility in Phase 2,” you are the one who knows Phase 2 never comes.
This is not about being the team’s conscience. It is about being the person who thinks in systems — who sees that today’s shortcut is next quarter’s crisis.
Ethics as a competitive advantage
In a market where every competitor is running the same dark patterns, the company that doesn’t stands out. In a regulatory environment that is tightening (DPDPA in India, GDPR in Europe, state-level laws in the US), the company that built privacy into the foundation has a structural advantage over the company that has to retrofit it.
Ethics is not the opposite of growth. It is a constraint that forces better product thinking. The PM who cannot grow a metric without tricking users is a PM who has run out of ideas.
Pick an app you use daily. Walk through these checks:
- Dark patterns: Sign up for something, then try to cancel or unsubscribe. Is cancellation as easy as signup? Note every friction point that exists only on the exit path.
- Privacy: Go to the privacy settings. Can you see what data is collected? Can you download it? Can you delete it? How many taps does each action take?
- Accessibility: Turn on your phone’s screen reader (VoiceOver on iOS, TalkBack on Android). Try to complete one core task. Where does it break?
- Notifications: Count how many notifications the app sent you in the last 7 days. How many were genuinely useful? How many were engagement bait?
Write down what you found. Then ask: if you were the PM, which one would you fix first? That is your ethical product instinct speaking.
The bottom line
Ethics in product is not about grand statements or company values painted on a wall. It is about the small decisions: what you pre-check, what you collect, who you design for, and how easy you make it to leave.
The test is simple. Build products you would be comfortable using yourself, with full knowledge of how they work. Build products you would be comfortable explaining to a journalist, a regulator, or your own family.
If you can do that, the frameworks will take care of themselves.
Where to go next
- Design for everyone: Accessibility — 2.68 crore Indians live with a disability. Design for them.
- AI-specific ethics: AI Ethics for PMs — bias, transparency, and the decisions AI products force you to make
- The people side: Stakeholder Management — when ethics conflicts with what stakeholders want, how you handle it matters