
AI tools for PM workflows

If today I have to solve a problem, I don't really need to go through 15, 20, 30 different pages. I just exactly need to write what I am looking for. As a product manager, I love using AI tools because of that.
Aayush Tuteja, Pragmatic Leaders alumnus

I will not pretend AI tools are optional for PMs in 2026. They are not. If you are writing PRDs from scratch, manually summarizing user interviews, or waiting three days for your data team to run a query — you are slower than you need to be.

But here is what the LinkedIn influencers will not tell you: most PMs use AI tools badly. They paste a vague prompt into ChatGPT, get a generic answer, and either accept garbage or conclude that AI is useless. Both outcomes are wrong.

This page is a practitioner’s guide to using AI in your actual PM workflows. Not a list of tools. Not a hype piece. I will tell you where AI saves you hours, where it wastes your time, and exactly how to use it for the five workflows that matter most.

The honest assessment: what works and what does not

Let me save you the experimentation. After two years of using AI tools daily — building an AI product, training thousands of PMs, and running my own workflows through LLMs — here is my honest split:

AI is genuinely useful for:

  • Writing first drafts of specs, briefs, and analyses
  • Summarizing large volumes of text (research transcripts, competitor reports, support tickets)
  • Generating SQL queries from plain English
  • Structuring messy thinking into outlines
  • Competitive research synthesis

AI is actively harmful for:

  • Prioritization decisions (it has zero context on your politics, resources, or strategy)
  • Product strategy (it generates plausible-sounding nonsense)
  • User research itself (it cannot talk to your users)
  • Final-version writing that carries your name (it does not know your voice)
  • Any decision that requires judgment about your specific market

The pattern is clear. AI is a first-draft machine and a summarization engine. It is not a decision-maker. The PM who treats it as a thinking accelerator wins. The PM who treats it as a thinking replacement produces mediocre work.

Workflow 1: Writing specs and PRDs

This is where AI delivers the most value for the least effort. The blank page problem is real — most PMs stare at an empty document for thirty minutes before writing the first sentence of a PRD. An LLM eliminates that.

// scene:

A PM's desk, 9 AM. A PRD is due by end of day.

PM (internal monologue): “I know what we need to build. I just cannot figure out how to start writing it.”

The PM opens Claude. Types: 'I need a PRD for adding UPI autopay support to our subscription billing system. Target users are Indian SaaS companies with monthly billing. The problem is failed recurring payments due to mandate expiry. Write a first draft with problem statement, proposed solution, success metrics, and edge cases.'

Three minutes later, the PM has a 1200-word draft. It is 60% right. But the structure is there, the edge cases include ones they had not considered, and now they are editing — not staring at a blank page.

PM (internal monologue): “The AI missed that RBI's e-mandate limit is 15,000 rupees for auto-debit. And it does not know our payment gateway limitations. But I can fix those in ten minutes.”

// tension:

The AI wrote a draft in three minutes that would have taken the PM an hour. The PM still did the thinking — but skipped the writing friction.

The workflow that works:

  1. Write a detailed prompt that includes: who the user is, what problem they have, what solution you are considering, and what constraints exist. The more context you give, the better the draft.
  2. Generate the first draft.
  3. Delete everything that is wrong or generic. This will be 30-40% of the output.
  4. Add the context only you have: internal constraints, political considerations, specific technical limitations, the thing your engineering lead told you in the hallway.
  5. Rewrite the conclusion and recommendations in your own voice.
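Step 1 is the one most PMs skip. A minimal sketch of what "a detailed prompt" means in practice — the field names and template below are illustrative, not from any specific tool:

```python
# Assemble a context-rich first-draft prompt from the four pieces of
# context step 1 asks for. Field names and wording are illustrative.

def build_prd_prompt(user, problem, solution, constraints):
    """Return a PRD first-draft prompt for an LLM."""
    return (
        "I need a PRD first draft.\n"
        f"Target user: {user}\n"
        f"Problem: {problem}\n"
        f"Proposed solution: {solution}\n"
        f"Constraints: {constraints}\n"
        "Include: problem statement, proposed solution, "
        "success metrics, and edge cases."
    )

prompt = build_prd_prompt(
    user="Indian SaaS companies with monthly billing",
    problem="failed recurring payments due to mandate expiry",
    solution="UPI autopay support in subscription billing",
    constraints="RBI e-mandate auto-debit limit of 15,000 rupees",
)
print(prompt)
```

Notice the constraints line: that is the context only you have, and it is exactly what turns a generic template into a usable draft.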

Common mistakes:

  • Prompting with “Write a PRD for feature X” and nothing else. You get a template, not a draft.
  • Accepting the AI’s success metrics without thinking. It will suggest metrics that sound reasonable but are not the ones your team actually tracks.
  • Using the AI draft as the final version. Your engineering team will notice. AI writing has a distinctive quality — slightly too polished, slightly too generic, missing the specific trade-offs that make a PRD useful.

Workflow 2: Research summarization

You have twenty user interview transcripts. Or fifty competitor pages. Or three months of support tickets. No PM has time to read all of that carefully. This is where AI is genuinely transformational.

// thread: #product-research — A PM shares their AI-assisted research workflow
Priya (PM) Finished the user research synthesis for the checkout redesign. 22 interviews done.
Design Lead That was fast. Usually takes you two weeks.
Priya (PM) Fed all transcripts into Claude with the prompt: 'Identify the top 5 pain points mentioned across these interviews, with direct quotes supporting each. Flag any pain point mentioned by only 1-2 users that seems high-severity.' Took 20 mins instead of 3 days.
Design Lead Do you trust the output?
Priya (PM) I trust the pattern extraction. I still read the 5 most interesting interviews in full. The AI found the common themes. I found the surprising outlier that changes our whole approach.
Design Lead What was the outlier?
Priya (PM) One user in Tier 2 was completing checkout on her husband's phone because our OTP flow does not work with dual-SIM phones properly. AI flagged it as 'low frequency.' I flagged it as 'we are losing every dual-SIM user in India.'
Engineering Lead That's... a lot of users. 😳

The key insight: AI finds patterns. PMs find surprises. The pattern extraction is the time-consuming part that AI handles well. The interpretation — “this low-frequency issue is actually a massive segment problem” — is the judgment call that makes you a PM and not a data processor.

For competitive research: paste competitor landing pages, feature lists, and pricing pages into an LLM. Ask it to build a comparison matrix. Then add the column it cannot fill: “what this means for our positioning.” That strategic interpretation is your job.

For support ticket analysis: export the last 500 tickets. Ask the LLM to categorize them by theme and severity. Then look at the category that is growing fastest, not the one that is largest. The fastest-growing pain point tells you where the product is deteriorating. The largest one tells you where it has always been weak. Different problems, different solutions.
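The fastest-growing versus largest distinction is a two-line calculation once the LLM has tagged each ticket with a theme. A sketch, with invented sample data:

```python
# Compare "largest theme" vs "fastest-growing theme" on LLM-categorized
# tickets. Sample data is invented for illustration.
from collections import Counter

tickets = [
    # (month, theme)
    ("2026-01", "billing"), ("2026-01", "billing"),
    ("2026-01", "billing"), ("2026-01", "billing"),
    ("2026-01", "login"),
    ("2026-02", "billing"), ("2026-02", "billing"),
    ("2026-02", "billing"), ("2026-02", "billing"),
    ("2026-02", "login"), ("2026-02", "login"), ("2026-02", "login"),
]

by_month = {}
for month, theme in tickets:
    by_month.setdefault(month, Counter())[theme] += 1

months = sorted(by_month)
first, last = by_month[months[0]], by_month[months[-1]]

largest = max(last, key=last.__getitem__)                     # always been weak
fastest = max(last, key=lambda t: last[t] - first.get(t, 0))  # deteriorating

print("largest:", largest)
print("fastest:", fastest)
```

In this sample, billing is the largest theme but login is the fastest-growing one — two different problems, two different fixes.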

Workflow 3: Data analysis and SQL

If your analytics setup requires SQL — and in most Indian startups, it does — text-to-SQL tools are the single biggest productivity gain for PMs.

The old workflow: you have a question about user behavior. You write it up in a Jira ticket. You assign it to the data team. They get to it in three days. The answer generates a new question. You file another ticket. A week later, you have the analysis you needed on day one.

The new workflow: you describe the question in English. The AI generates the SQL query. You run it. You have the answer in ten minutes. You ask the follow-up question. Ten more minutes. The entire analysis loop that used to take a week now takes an afternoon.

Tools that do this well in 2026:

  • Claude and GPT-4 with your schema pasted into the context
  • Amplitude’s AI query feature (if your company uses Amplitude)
  • Mode Analytics with AI assist
  • DBeaver with AI plugins

The critical step everyone skips: validate the output. AI-generated SQL often looks correct but has subtle join errors or filter conditions that exclude data you need. Always spot-check the results against a known baseline. Run the query for a period where you already know the answer. If the numbers match, trust it for the unknown period.
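The validation step can be mechanical. A sketch using an in-memory SQLite table — the schema, the query, and the baseline numbers are all invented for illustration:

```python
# Validate an AI-generated query against a period with a known answer
# before trusting it for the unknown period. Schema and data invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (day TEXT, user_id INTEGER)")
conn.executemany(
    "INSERT INTO signups VALUES (?, ?)",
    [("2026-01-05", 1), ("2026-01-20", 2), ("2026-02-03", 3)],
)

# Hypothetical query as produced by the LLM.
ai_query = "SELECT COUNT(*) FROM signups WHERE day >= ? AND day < ?"

# Known baseline: January had exactly 2 signups (say, from a dashboard).
known = conn.execute(ai_query, ("2026-01-01", "2026-02-01")).fetchone()[0]
assert known == 2, "AI query disagrees with the known baseline"

# Only now run it for the period you actually care about.
feb = conn.execute(ai_query, ("2026-02-01", "2026-03-01")).fetchone()[0]
print("February signups:", feb)
```

If the baseline assertion fails, the query has a join or filter problem — fix that before reading any other number it produces.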

Workflow 4: Structuring messy thinking

This is the underrated use case. Every PM has moments where they know the problem, have collected the data, understand the constraints — but cannot organize the thinking into a coherent argument. The ideas are all there. The structure is not.

This is where “think with me, not for me” prompting works:

“I am deciding between three approaches for our onboarding redesign. Here are the trade-offs I see for each. Help me structure this into a decision framework that I can present to my VP. Do not make the decision — organize my thinking so the trade-offs are clear.”

The output will be a structured comparison that you can refine. You are not outsourcing the decision. You are outsourcing the formatting of a decision you have already half-made. This is legitimate and it saves time.

// exercise: 20 min
Build your AI prompt library

Create a document with five prompts you can reuse across your PM work. For each, write:

  1. The trigger — when do you use this prompt? (e.g., “after finishing user interviews”)
  2. The prompt template — with placeholders for context you will fill in each time
  3. The expected output — what does a good result look like?
  4. The validation step — how do you check if the AI output is actually useful?

Start with these five workflows: PRD first draft, research summarization, competitive analysis, metric investigation, stakeholder update email.

The PMs who get the most value from AI are not the ones who prompt best in the moment. They are the ones who have pre-built prompts for recurring tasks. Treat your prompt library like your templates folder — invest once, reuse forever.
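One way to keep the library as structured data rather than a loose document, so the four fields stay attached to every prompt. The entry below is illustrative:

```python
# A prompt library keyed by task, with the four fields from the
# exercise. The single entry here is an invented example.
PROMPT_LIBRARY = {
    "research_summary": {
        "trigger": "after finishing user interviews",
        "template": (
            "Identify the top {n} pain points across these interviews, "
            "with direct quotes. Flag low-frequency but high-severity "
            "issues separately.\n\n{transcripts}"
        ),
        "expected_output": "themed pain points, each backed by quotes",
        "validation": "read the 3-5 most interesting interviews in full",
    },
}

def render(name, **context):
    """Fill a template's placeholders for this run."""
    return PROMPT_LIBRARY[name]["template"].format(**context)

print(render("research_summary", n=5, transcripts="<paste transcripts>"))
```

The validation field is the one people forget to write down — and it is the field that keeps you from shipping unchecked AI output.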

Workflow 5: What to never delegate to AI

This section matters more than the previous four combined.

Do not use AI for prioritization. I have seen PMs paste their backlog into ChatGPT and ask “prioritize this using RICE.” The AI will generate a RICE score for every item. The scores will be wrong. Not slightly wrong — fundamentally wrong. Because the AI does not know that the CEO personally cares about feature X, that the engineering lead quit last week and the team’s capacity dropped 40%, or that your biggest client threatened to churn unless you fix the billing bug. Prioritization is a political and strategic act. It is not a calculation.

Do not use AI to replace user research. “Generate ten user personas for an Indian fintech app” gives you fiction. Plausible fiction, well-formatted fiction — but fiction. Your real users do not match personas generated from an LLM’s training data. They are messier, more specific, and more surprising. Talk to them.

Do not use AI for product strategy. “What should our product strategy be for 2027?” will generate a coherent-sounding two-page strategy that has nothing to do with your specific market position, team capabilities, competitive dynamics, or customer relationships. Strategy requires judgment about your unique situation. An LLM has never sat in your board meeting.

Do not use AI-generated content as your final output without rewriting. This applies to customer-facing copy, PRDs that go to engineering, and stakeholder presentations. If your VP reads a strategy doc that sounds like ChatGPT, you lose credibility — even if the thinking underneath is solid. Your voice matters. Your specificity matters. The AI gets you to the 60% mark faster. The last 40% is your job.

The India-specific reality

A few things I have learned training thousands of PMs across Indian companies:

Data infrastructure is usually worse than you expect. Most Indian startups do not have a clean analytics setup. The text-to-SQL workflow I described only works if your data warehouse exists and is documented. If you are at a company where “data analysis” means downloading a CSV from the admin panel, your first priority is getting proper event tracking set up — not adopting AI tools. Fix the foundation first.

English proficiency varies, and that matters for prompting. PMs who are comfortable writing detailed English prompts get dramatically better AI outputs than those who write short, vague prompts. This is not a judgment — it is a practical observation. If your English writing is strong, use that skill to write better prompts. If it is not, write prompts in Hindi or your native language first, then translate. The quality of your input determines the quality of your output.

Cost sensitivity is real. ChatGPT Plus costs 1,700 rupees per month. Claude Pro costs about the same. For a PM earning 8-15 LPA, this is not trivial. My recommendation: start with the free tiers. ChatGPT free and Claude free are good enough for 80% of PM workflows. Upgrade only when you hit the usage limits consistently — which means you are using it enough to justify the cost.

WhatsApp integration is underexplored. Some of the most effective AI-assisted PM workflows I have seen in India involve forwarding customer WhatsApp messages to an AI tool for sentiment analysis and pattern detection. If your product has a WhatsApp support channel — and in India, most consumer products do — this is low-hanging fruit.

Test yourself

// interactive:
The AI Shortcut

You are a PM at a Series B edtech startup in Bangalore. Your CEO wants a competitive analysis of five rivals by Friday. It is Wednesday. You also have two user interviews scheduled and a sprint planning meeting tomorrow. You have access to Claude Pro.

You have two days and three commitments. The competitive analysis alone usually takes a full week. How do you approach this?

// learn the judgment

You are a PM at a Series B edtech startup in Hyderabad (300K registered learners, B2C). You are researching whether to enter the upskilling market for DevOps engineers — a segment you have not served before. Your deadline is a strategy presentation to the CEO in four days. You have two options: (A) Use Claude to synthesize competitor websites, job postings, and public course reviews into a market analysis — four hours of work. (B) Spend those same four hours doing five 45-minute calls with DevOps engineers from your existing learner base to understand their upskilling needs directly.

The call: Which approach do you choose, and is there a version where you do both without cutting either short?

