
AI product strategy

Every company is now an AI company. Most of them shouldn't be.
Talvinder Singh, from a Pragmatic Leaders session on AI strategy

There is a pattern I have seen play out dozens of times across Indian startups and enterprises. A founding team or business head reads about what GPT-4 can do, sees a competitor announce an “AI-powered” feature, and calls an urgent meeting. By Friday, the roadmap has an AI initiative. By next quarter, the team has burned three months and has nothing to show for it.

The problem is not that AI is overhyped. AI is genuinely transformative. The problem is that most teams skip the strategy question entirely. They jump from “AI is important” to “let’s build something with AI” without answering the question that actually matters: what role does AI play in your product’s value proposition?

Get that question wrong, and everything downstream — architecture, hiring, pricing, go-to-market — is built on sand.

The spectrum: AI as product vs AI as feature

This is the first strategic decision, and most teams never make it explicitly. They drift into one position or the other without realizing the implications.

AI-as-feature means your product already has a core value proposition, and AI enhances it. Think Canva adding Magic Resize, or Freshworks adding AI-suggested responses in their support tool. The product works without AI. AI makes it faster, smarter, or cheaper.

AI-as-product means AI capability IS the core value. Without the AI, there is no product. Think Grammarly, Jasper, or an Indian startup like Karya that uses AI for data labeling. Remove the model, and there is nothing left to sell.

The strategic implications are completely different:

|              | AI as feature                                | AI as product                                      |
|--------------|----------------------------------------------|----------------------------------------------------|
| Moat         | Your existing user base, data, and workflows | Model performance and training data                |
| Pricing      | Bundled into existing plans                  | Must justify standalone cost                       |
| Failure mode | Feature feels gimmicky, users ignore it      | Model is not good enough, users churn              |
| Competition  | Other incumbents add similar AI features     | Foundation model providers enter your space        |
| PM focus     | Integration quality, UX, adoption metrics    | Model accuracy, cost per inference, feedback loops |

Most Indian SaaS companies are in the AI-as-feature camp. That is fine. But they make strategic errors when they use AI-as-product thinking — hiring ML research teams, building custom models, chasing benchmarks — when all they needed was a well-integrated API call to an existing model.

// scene:

Strategy offsite at a mid-stage B2B SaaS company in Bangalore. The CEO has just returned from a conference.

CEO: “We need to become an AI-first company. Our competitors are all talking about AI. I want us to hire five ML engineers and build our own models.”

VP Engineering: “We could start with the OpenAI API for the use cases we have and see if we need custom models later.”

CEO: “Using an API is not being AI-first. Anyone can call an API. I want a proprietary model — that's our moat.”

PM Lead: “Can I ask a different question? What customer problem are we solving with AI that we can't solve without it? And would our customers pay more for that solution?”

The room went quiet. Nobody had asked the customer.

// tension:

The CEO was solving a positioning problem. The PM was solving a customer problem. These are not the same thing.

The three strategic traps

Trap 1: AI as a press release

This is the most common trap in the Indian ecosystem. A company adds AI to their marketing page without adding AI to their product. Or they add a chatbot that answers three questions badly and call it “AI-powered.”

The test is simple: remove the AI feature. Does any customer complain? If not, the AI is not part of your product strategy. It is a press release.

Trap 2: Building what the model provider will build

In 2023, dozens of startups built “AI writing assistants” that were thin wrappers around GPT-3.5. By 2024, ChatGPT could do everything those wrappers did, better and cheaper. The startups had no moat.

Before you build an AI feature, ask: is this a feature that the model provider (OpenAI, Google, Anthropic) is likely to ship natively within 18 months? If yes, you are building on a shrinking island. Your strategy must include something the model provider cannot replicate — your proprietary data, your workflow integration, your domain expertise, your distribution.

Trap 3: Optimizing for model performance instead of user outcomes

This one is subtle and it kills technical founders. They spend months improving model accuracy from 89% to 94%, but the user does not care because the UX around the model is so poor that they never see the output. Or the model is great but the latency is three seconds and users have already moved on.

AI product strategy is not ML strategy. The PM’s job is not to maximize F1 scores. It is to maximize the value the user gets from the AI capability. Sometimes that means a worse model with a better UX. Sometimes it means no model at all — a rules-based system that is fast, predictable, and cheap.

// thread: #product-ai — The PM translating model metrics into user impact
ML Lead Model accuracy is at 92% on our test set. Can we ship?
PM What does 92% accuracy mean for the user? How many wrong suggestions will they see per session?
ML Lead About 1 in 12 suggestions will be wrong.
PM And what happens when they see a wrong suggestion? Do they lose trust in the feature?
ML Lead We haven't tested that yet.
PM That's the number that matters. Not the 92%. If one bad suggestion makes them turn off the feature entirely, we need 99% or we need a graceful fallback UX.
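The arithmetic behind the PM's question takes a few lines to make explicit. The 92% accuracy and 12 suggestions per session come from the thread above; treating errors as independent is an assumption for the sketch:

```python
def wrong_suggestions_per_session(accuracy: float, n_suggestions: int) -> float:
    """Expected number of wrong suggestions a user sees in one session."""
    return (1 - accuracy) * n_suggestions

def p_any_wrong(accuracy: float, n_suggestions: int) -> float:
    """Probability of seeing at least one wrong suggestion in a session,
    assuming each suggestion errs independently."""
    return 1 - accuracy ** n_suggestions

print(wrong_suggestions_per_session(0.92, 12))  # roughly 1 wrong suggestion per session
print(p_any_wrong(0.92, 12))                    # about 0.63: most sessions include a miss
```

Framed this way, "92% accurate" becomes "nearly two out of three sessions contain a visible mistake," which is the number that drives the trust question.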
// learn the judgment

You are PM at a mid-stage Indian HRtech company (500 B2B customers, Series B). Your engineering lead proposes building a custom LLM fine-tuned on Indian job descriptions and salary data to power a 'compensation benchmarking' feature. He estimates 4 months, 2 ML engineers. A competitor just launched a similar feature using the OpenAI API.

The call: Do you approve the fine-tuning project? What is your recommendation to the CEO?


Building an AI product strategy document

A real AI product strategy is not a slide deck with “AI-powered” stamped on every feature. It is a document that answers six questions:

1. What user problem does AI solve better than non-AI alternatives? Be specific. “AI makes it faster” is not enough. How much faster? Compared to what? For which users? If you cannot quantify the improvement, you do not have a strategy — you have a hunch.

2. Where does the AI sit in the user workflow? Is AI the primary interaction (like a chatbot) or a background optimization (like a recommendation engine)? This determines your UX approach, your latency requirements, and your error tolerance.

3. What is your data advantage? The model is commodity. Everyone has access to the same foundation models. Your advantage is the data you can feed those models — proprietary customer data, domain-specific training data, feedback loops from your user base. If you have no data advantage, you have no AI moat.

4. What happens when the AI is wrong? Every AI system will produce errors. Your strategy must define the failure mode. Is it a wrong recommendation the user can ignore? A wrong medical diagnosis? A wrong financial calculation? The severity of the failure mode determines how much you invest in accuracy vs speed.

5. What is the cost model? AI inference costs money. Every API call, every model run, every GPU cycle has a price. If your pricing does not account for AI costs, you are subsidizing AI usage out of margin. Many Indian B2B companies discover this painfully — they added an AI feature for free, usage spiked, and their cloud bill tripled.

6. What is your 18-month defensibility story? Foundation models are improving fast. What you build today with a custom pipeline might be a single API call in a year. Your strategy must articulate what remains valuable even as the underlying models get better and cheaper.

// exercise: · 20 min
AI strategy stress test

Take the AI initiative your team is currently working on (or considering). Answer each of the six questions above in one sentence each. Then apply these three stress tests:

  1. The removal test: If you removed the AI from this feature and replaced it with a manual process or simple rules, would customers notice? Would they care?
  2. The API test: Could a competitor replicate this by calling the same model API you are using? What is the 20% of your solution that they cannot copy?
  3. The cost test: At 10x your current usage, do the unit economics still work? What is the AI inference cost per user per month?

If you fail any of these tests, your strategy needs revision before you write a single line of model code.

AI product strategy in the Indian context

There are three things about the Indian market that change the AI strategy calculation:

Cost sensitivity is real. Indian B2B customers will not pay a 3x premium for AI features. Your AI must deliver clear, measurable ROI — and the cost of running it must be low enough to sustain at Indian price points. This often means using smaller, cheaper models (GPT-4o-mini, Claude Haiku, open-source models) rather than the flagship models. It also means aggressive caching, batching, and knowing when to fall back to non-AI paths.
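The caching-and-fallback pattern mentioned above can be sketched in a few lines. `call_model` and `rules_based_fallback` are hypothetical stand-ins for your real model integration and your non-AI path; the point is the order of the checks:

```python
import hashlib

_cache: dict[str, str] = {}

def call_model(prompt: str) -> str:
    # Placeholder for a real API call (OpenAI, Anthropic, etc.)
    return f"model answer for: {prompt}"

def rules_based_fallback(prompt: str) -> str:
    # Fast, predictable, cheap non-AI path
    return "Here is a standard response based on our help articles."

def answer(prompt: str, budget_remaining: bool = True) -> str:
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key in _cache:             # repeated question: answered for free
        return _cache[key]
    if not budget_remaining:      # over budget: degrade gracefully, never error
        return rules_based_fallback(prompt)
    result = call_model(prompt)   # pay for inference only on genuinely new queries
    _cache[key] = result
    return result
```

In production the cache would live in Redis or similar and the budget check would be per-tenant, but the strategic shape is the same: the model is the last resort, not the first.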

Data quality is a challenge. Indian enterprises have messier data than their Western counterparts — multilingual content, inconsistent formats, incomplete records. Your AI strategy must account for data cleaning as a first-class concern, not an afterthought. The team that can make models work on messy Indian enterprise data has a genuine moat.

The talent arbitrage is shrinking. India used to be a place where you could hire ML engineers cheaply. That gap has closed significantly. Top ML talent in Bangalore commands salaries comparable to mid-tier US cities. Your strategy should not depend on hiring a large ML team. It should depend on a small, sharp team that uses foundation models intelligently.

The PM’s role in AI products

As a PM on an AI product, your job is not to understand transformers or write training pipelines. Your job is to be the translator between what the model can do and what the user needs.

This means:

  • Setting the acceptance criteria — not in model metrics (accuracy, precision, recall) but in user metrics (task completion rate, time saved, error rate experienced by users).
  • Designing the feedback loop — how does user behavior flow back into model improvement? If users correct AI suggestions, is that data captured and used for fine-tuning?
  • Managing expectations — with leadership, with customers, with the engineering team. AI is probabilistic. It will be wrong sometimes. Your job is to set expectations about how often and what happens when it is.
  • Owning the cost model — every PM on an AI product must understand the cost per inference and how that scales. This is not the ML team’s problem. This is the product’s unit economics.
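The feedback loop often starts as something this simple: log every (context, AI suggestion, user's final version) triple so it can later feed evaluation or fine-tuning. The JSONL schema here is an assumption for illustration, not a standard:

```python
import json
import time

def record_correction(log_path: str, context: str, ai_suggestion: str, user_final: str) -> dict:
    """Append one feedback event to a JSONL log for later eval/fine-tuning."""
    event = {
        "ts": time.time(),
        "context": context,
        "ai_suggestion": ai_suggestion,
        "user_final": user_final,
        # Did the user accept the suggestion verbatim?
        "accepted": ai_suggestion.strip() == user_final.strip(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")
    return event
```

The acceptance rate this log yields is a user metric the PM can own directly, and the corrected pairs are exactly the data advantage question 3 asks about.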

Test yourself

// interactive:
The AI Strategy Decision

You are the PM at a mid-stage Indian EdTech company. Your platform serves 50,000 monthly active students preparing for competitive exams (JEE, NEET, UPSC). The CEO wants to add an AI tutor that can answer student questions in real time. The CTO says it will take 6 months and a team of four ML engineers. A board meeting is in two weeks.

You need to present an AI strategy recommendation to the board. You have two weeks to prepare.
