How to Become a Better Product Manager - Leveraging AI and LLMs effectively
Learn how to 10x your product management in the age of AI.
In the age of AI, expectations for the Product Manager role are shifting rapidly.
It’s no longer enough to do some market research and toss it over the fence (though, honestly - that has never been enough).
Instead, the demands are higher:
Visual evidence in the form of functional prototypes
LLM-friendly documentation and breakdowns
Further specificity in details - use cases, edge cases
Product Managers should identify how their role is changing and, more importantly, what they can do to accelerate their efforts.
This article is part of my series How to Become a Better Product Manager, which teaches the deep fundamentals of product management.
Basic LLM Usage
LLMs as a Pair of Eyes
As Product Management scope increases, so does the broader awareness required to keep abreast of changes. There was a time when I was in 300 Slack channels across my scope, monitoring the conversations - it was exhausting.
LLMs saved me tons of time here.
If your company allows it, you can use LLMs to raise awareness of things happening in the company. For example, Claude connected to Slack and Confluence means you don’t have to actively be in a Slack channel or watching Confluence activity like a hawk to identify whether something needs your attention.
A few prompts I like to use:
Search Slack and Confluence for every decision made about Project X that I wasn’t involved in.
Search Slack for any confusion, questions, or defect reports for Product Y that arrived today.
The more data sources you attach, the broader the scope of search.
LLMs as Synthesis Engines
There are a lot of documents and a lot of knowledge we have to both absorb and impart as product managers.
LLMs are great for this.
Use LLMs to summarize and synthesize. Meeting notes are a good candidate, as are documents - you can have it add a summary at the top of documents you create.
Tips:
Tailor for your audience - “write a concise summary for engineers” will lead to a very different level of detail than “write a concise summary for executives”.
Proof-read it. LLMs can make mistakes - don't just ask for a summary and slap it onto your document. Actually read it.
Edit it. LLMs can write long, flowery prose. Make the summary concise, even if that means repeatedly telling the LLM “make this more concise”.
Ask it to cite sources. LLMs can tie their claims to specific parts of the document, which makes them easier to verify.
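The tailoring tip above can be captured as a reusable prompt template. Here's a minimal sketch in Python - the audience list and instructions are my own assumptions, so adapt them to your org:

```python
# Audience-specific summarization instructions (hypothetical examples).
AUDIENCE_INSTRUCTIONS = {
    "engineers": "Keep technical detail: APIs, data flows, edge cases. Max 5 bullets.",
    "executives": "Focus on outcomes, risks, and decisions needed. Max 3 bullets.",
}

def build_summary_prompt(audience: str, document: str) -> str:
    """Build a summarization prompt tailored to the reader."""
    instructions = AUDIENCE_INSTRUCTIONS[audience]
    return (
        f"Write a concise summary for {audience}. {instructions}\n"
        "Cite the section each claim comes from so it can be verified.\n\n"
        f"Document:\n{document}"
    )

prompt = build_summary_prompt("executives", "Q3 roadmap draft...")
print(prompt.splitlines()[0])
```

Keeping the instructions in one place means the whole team summarizes the same way - and you still edit and proof-read the output, as above.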
LLMs as Interpreters
Product Managers aren’t typically experts in engineering. That’s OK - they aren’t expected to be (at least, not yet).
However, it does mean there's a communication gap. When an engineer speaks to tradeoffs - say, how replica write latency would prevent real-time querying of the chart data, and how the primary's IOPS capacity wouldn't handle the feature's load - it's easy to just accept it and move on.
Well - LLMs can help you understand and translate what all of that means.
Pop the message into the LLM and say "translate this into terms a non-technical person would understand".
It deepens your understanding and, more importantly, allows you to engage further - perhaps there's clarity or adjustments you can provide to remove the problem, or you might gently push back and find that the engineer is making a mistaken assumption that renders the problem moot!
Intermediate LLM Usage
LLMs as Thought Partners
As Product Managers, we have to think about a lot of different things - use cases, positioning, strategy. It’s easy to forget something or not have the fullest understanding, particularly in a new area.
Fun fact - LLMs can help you think through product use cases.
Suppose you’re developing an Impersonation feature for internal users. Ask some basic questions:
List common use-cases of Impersonation features.
How do competitors implement this?
What are the risks?
What is a small slice of functionality that can be implemented?
What potential edge cases and bugs can occur?
Once you've formed your thoughts, you can ask the LLM to present them in a use-case-friendly way:
Take the above and turn it into an itemized requirements list, organized by Happy Path, Edge Cases, Scoped Phases.
As always - edit, proof-read, and verify. Don't just toss it to engineers - you don't want to be responsible for slop. It's a good starting point, not the end result.
LLMs as Visualizers
There’s nothing quite like seeing something in front of you vs. reading a wall of text to truly understand it.
LLMs have empowered Product Managers to visualize their approach and thoughts in several ways:
Creating diagrams of workflows and userflows
Creating design mocks and wireframes
Creating actual click-through prototypes
You can use tools like Figma Make, Claude Design, and Gemini Stitch to rapidly create mocks and prototypes to show how you think something should function. If that’s not available, your basic LLM can just create a Mermaid diagram.
It doesn’t have to work, it just has to show.
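For instance, here's a sketch of the kind of Mermaid user-flow an LLM might draft for the Impersonation feature discussed earlier (the flow itself is a hypothetical illustration):

```mermaid
flowchart TD
    A[User clicks "Impersonate"] --> B{Has admin role?}
    B -- No --> C[Show permission error]
    B -- Yes --> D[Log audit event]
    D --> E[Start impersonation session]
    E --> F[Banner: "Viewing as customer"]
```

A diagram like this renders directly in many tools (GitHub, Confluence plugins, Notion) and gives engineers something concrete to react to.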
Advanced LLM Usage
LLMs as a Personal Analyst
LLMs can help you analyze data. If you’re fortunate enough to have query access to a subset of data, you can ask the LLM questions about your data and get it to answer.
“How many users signed up yesterday and didn’t log in today?”
“How many sales did we make in Turtle County last year?”
Even if you don’t have a direct connection from the AI to a data source, you can still benefit by asking the LLM how you might query for the information.
For example - suppose you have access to a subset of data to query against, but you don’t know SQL. You can ask the LLM:
Write me a query to find this fact <fact> from these tables <schema>.
How do I query for the active user count?
The LLM will spit out a query that's probably syntactically correct, which you can then run in your BI tool.
Caution - syntactically correct does not mean semantically correct. Data columns carry a lot of nuance - often, as code evolves, the meaning and intent of a field changes. For example, perhaps "Created At" on the "User" record used to mean the timestamp the account was created, but after a mass auto-migration there's a new field called "Signed Up" that holds the actual sign-up timestamp. Situations like this aren't captured in syntax, and LLMs have no way of identifying them. Without a semantic model, you should always double-check important results with someone familiar with the code. Don't just trust the query the LLM returns. Gut-check everything, at minimum - AI is often wrong with data.
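The "Created At" pitfall above can be shown end-to-end with a throwaway SQLite database. The schema and dates here are made up purely for illustration:

```python
import sqlite3

# Toy schema illustrating semantic drift: after a mass auto-migration,
# "created_at" no longer means "when the user signed up".
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        created_at TEXT,  -- set to the migration date for migrated accounts
        signed_up  TEXT   -- the real sign-up timestamp
    )
""")
conn.executemany(
    "INSERT INTO users (created_at, signed_up) VALUES (?, ?)",
    [
        ("2024-06-01", "2021-03-15"),  # migrated account
        ("2024-06-01", "2020-11-02"),  # migrated account
        ("2024-06-02", "2024-06-02"),  # genuinely new user
    ],
)

# The syntactically correct query an LLM might suggest...
wrong = conn.execute(
    "SELECT COUNT(*) FROM users WHERE created_at >= '2024-06-01'"
).fetchone()[0]

# ...versus the semantically correct one.
right = conn.execute(
    "SELECT COUNT(*) FROM users WHERE signed_up >= '2024-06-01'"
).fetchone()[0]

print(wrong, right)  # reports 3 "new" users vs the actual 1
```

Both queries run without error; only someone who knows the migration history can tell you which one answers the question you actually asked.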
LLMs as a Personal Developer
There’s a lot of situations where as a Product Manager you have to go and ask the developer “how does this really work under the hood?”. You file a ticket or pop over a message and a couple hours to days later, the engineer will provide you the answer.
If you have access to the codebase, you can use the LLM to answer these questions for you.
Ask the LLM:
“Explain precisely like I’m non-technical how each of the numbers on the bar chart on the /charts page is calculated.”
“When a user logs in, at what point is the drip campaign for onboarding sent?”
This can compress the wait time dramatically and saves you from having to bother an engineer for the answer.
LLMs as a Builder
One of the most advanced ways to incorporate LLMs is for the Product Manager to build the feature with them directly - i.e. vibe coding.
This actually works great for smaller-scale startups or less constrained environments. The speed of a subject matter expert translating their thoughts into working product is unmatched.
However - it’s far too much risk in areas where compliance, scaling, or security matter. Vibe Coding doesn’t typically address these cases at all, even if the AI swears to you it does.
Caution - in many cases, what you see is only 10% of what you need. The other 90% of the work is things like cross-cutting concerns, authentication, authorization, security, scaling, observability, safety, testing, validation - the so-called 'ilities' that vibe coding won't get you. Don't assume it's 90% done just because you see it working.
If the codebase is set up for safe vibe coding, you might be able to do risk-appropriate development - but 99.99999% of codebases are not. Leave the truly important stuff to the actual engineers.
Caution
A word of caution: if you abuse the tools you will create more work for little value.
AIs are verbose. If you give an engineer a document with 500 words that could’ve clearly been explained in 10, you are wasting that engineer’s time. Always ensure conciseness and precision - every word matters.
AIs make stuff up. If you provide a document with clearly wrong information, people will lose trust in you, and you'll create issues downstream when the wrong things get implemented.
AIs will not tailor completely for your context. You can provide context all you want, but AI will not fully understand every single thing there is to know about your context or company. Evaluate its decisions - is that use-case that was generated actually relevant to the specific goal you’re trying to pursue? Does that feature really need that guard in the environment you’re in?
AIs can be overly detailed. AIs can toss in a lot of irrelevant detail for a document - architecture and implementation in a product use case document, defect remediation steps in a Go To Market alignment document. Remember the audience and purpose of a document - don’t just dump what the AI wrote.
You’re always responsible for the output of AI. Always.
LLMs don’t replace your thinking, but they can accelerate it, broaden it, deepen it, and create space for focus. Use them effectively!


