War Stories - How I turned a paralyzed Product organization into a rapid delivery engine
How telling a product management organization "value doesn't matter" was the best possible thing I could've done for them.
A while ago I was given the responsibility of improving the Product Management organization at a startup. It was a Series B company with about 200 people, of which ~50 people were in the development organization. The group of ~10 Product Managers had their leadership recently depart, and they needed someone to step in to help guide them through that transition.
I knew some of the challenges and opportunity areas from the engineering perspective first-hand and through constant feedback from the engineering team.
There was never enough “ready” work to do. The Product Management organization had a difficult time defining and prioritizing work. As a result, engineers went underutilized for weeks at a time, working on tiny, low-value things they found for themselves here and there.
The requirements that made it to engineering were ill-defined. The work hadn’t been done to define the boundaries of the problem space or even what the end-state was supposed to be, leaving engineers to fill in massive gaps in requirements.
The “Why?” wasn’t defined or answered, let alone asked. The value, the problem it was solving, and how that item mapped to a larger vision or picture wasn’t defined for the development teams.
Cross-feature interaction analysis and product capability management were non-existent. Work items were done in silos without consideration of how they interacted with existing features, which led to duplicative functionality, or different functionality for the same capability. This led to large numbers of gaps and quirks in the product overall that reduced quality and caused difficult defects - not due to implementation errors, but due to a lack of thought put into the intended change itself.
Explicit functional boundaries prevented gap filling. The organization was afraid to step on toes or interfere with another function’s work, so boundaries formed, leading to decreased collaboration. Of course, because the work that did get handed off had issues, it meant an ever-increasing burden on Engineering to fill in all the gaps as the last function in the value stream capable of fixing issues.
These challenges were at best slowing down delivery of everything, and at worst completely stalling efforts to deliver value.
Identifying the issues
When I came on board to the Product organization, I had my own hypothesis and opinions of what the issues were, but I did my due diligence:
I read through chat histories, meeting notes, and other documents to understand how decisions were being made
I had 1:1s with every product manager and adjacent functions like Marketing, Privacy, Customer Success, and Solutions that interacted with Product to better understand the issues and challenges from their perspectives.
I mapped out the relationships between all of the issues and their causes, influencers, and effects to better understand the first-, second-, and third-order effects.
I conducted surveys and other analysis of sentiment.
From that, I saw some clear root issues that were creating a complex, spiraling web - a vicious cycle of negative feedback that reinforced itself the longer it went on.
The Product Managers didn’t have a true understanding of the product or user.
Without understanding how the product truly worked or what the user truly needed, the Product Managers were afraid to make decisions because they couldn’t figure out what the effects would be. When they did make a decision, it often broke some user workflow or had some issue they didn’t consider because of their lack of a comprehensive, true understanding of the user, their journey, and their work. It was like the blind leading the blind.
There weren’t any formalized mechanisms for the organization to get and leverage feedback. Certain members occasionally did interviews, looked at a competitor, or read an article, but very little was baked into any formal feedback loop. Whatever knowledge we did obtain was siloed and lost in random Slack messages instead of becoming concrete learnings that could be used in the future to make decisions.
The Product Managers wanted to be 100% correct, all the time.
Being wrong was viewed as a terrible thing. In fact, most discussions revolved around how not to be wrong, which naturally led to doing further analysis, measurement, and thinking to avoid being wrong. The actual negative impact of being wrong was never quantified - being wrong was avoided entirely, even when the downside was tiny. Some of this was due to individual Product Manager personalities; some was due to fear of punitive measures if something went wrong.
The Product Managers over-indexed on working on the highest value item.
Not just value, but specifically the highest value. This meant a lot of discussion, argument, and analysis to intimately understand the exact value of all items being considered to eventually be able to decide which item specifically had the most value. This, of course, increased decision-delay.
The Product Managers wanted everyone to agree before they decided anything.
The Product Management culture was one of consensus - full alignment and agreement were required before anything moved forward. Because nobody wanted to be the one who was wrong, even the smallest disagreement or concern raised derailed a plan to move forward. Few people made decisions, instead opting to have another meeting or touchpoint to get further alignment and clarity. That meeting, of course, was scheduled for a week or two later, which added wait time to even the smallest discussions.
If disagreement persisted - even from one person, and even over an otherwise minor, addressable element - the item was disregarded. This would have been fine had it been replaced by another valuable item, but it got replaced with nothing, which further exacerbated the lack of delivery of anything.
The Product Managers cared too much about optics.
There was massive pressure to concretely measure the specific impact on a particular metric with exacting precision. The go-to method was the A/B test. Product Managers were pushing for every single change to be A/B tested. Despite this, features lacked basic instrumentation because Product Managers generally handed off those responsibilities to the Data Science team.
This, of course, meant not only that setting up the A/B test added time to development efforts, but also that the test had to be run, monitored, and interpreted. Because the Product Managers as a group weren’t particularly organized or familiar with the data side, A/B test interpretation fell almost entirely onto the Data Science function. And because a proper A/B test requires a stable environment, follow-up iterations were effectively blocked until the A/B test was completed.
In short - analysis paralysis ruled the day. All of these specific issues fed into each other like a vicious cycle. Because of a lack of understanding, product managers made mistakes. This made them more scared to make future decisions, so they tried to analyze more and measure more to prove they did a good job. Because they tried to analyze more before they made any decision, they slowed down their decision making. This increased pressure on the product managers to make sure whatever decision they did make had high value, which further led to more analysis and delays. This decreased confidence so they tried to get all of their stakeholders and peers to agree on every aspect of a change before moving forward, which meant valuable items with minor concerns got disregarded, further leading to slower decisions.
I knew I needed to break this cycle.
The Gordian Knot of Analysis Paralysis
The Gordian Knot is a legend about an impossibly complicated knot.
The belief was that whoever untied the knot would go on to rule all of Asia. Alexander the Great came along to untie it. Instead of painstakingly and slowly untying the knot as expected, he simply cut it apart with his sword.
It’s a metaphor for intractable problems and the solution spaces that are possible if you don’t make yourself beholden to perceived constraints.
I thought hard about the situation and the culture. I spent several weeks diving deep into the problem space that the organization found itself in.
Fear was the root cause of a lot of these issues. Fear of being wrong. Fear of looking bad. Fear of having others disagree. Fear of being uncertain. Fear of making mistakes. Fear of punishment. The only way to remove the fear is to remove the disincentives.
I created a bold and, frankly, insane 120-day plan - one that I kicked off at the next Product Management all-hands.
Setting the tone
That Monday, I said clearly and in no uncertain terms to every Product Manager in the company:
I don’t care about the value. The value doesn’t matter. Stop considering it. Stop thinking about it.
I could see the wheels turning in all the product managers’ heads. Their whole job is oriented around delivering value. If value doesn’t matter, what does that mean for them and their careers?
I can see the wheels turning in your heads, too. I can hear the cries of “blasphemy” from all of you readers who are product managers trained on the belief that the highest value is the only thing that matters.
But the fact is, it’s not.
The whole point of Product Management is to deliver value. The key first word in that phrase is deliver. It’s not enough to talk about value, or compare one value to another, or know what’s valuable in your head. The entire purpose of the role is predicated on that first word - that value is actually being delivered.
The team’s problem was that in their pursuit of the value, they forgot how to deliver. The muscle of intentionally and sufficiently defining a change, crafting a narrative, influencing others, working with a development team, putting it out into the world, and then seeing what the effect was had atrophied. Habits of fear and talking replaced it. Under that fog of fear, they went through the motions of product management without fulfilling its spirit. They got slower because they didn’t know how to act without full knowledge. They got slower because getting full knowledge required delivering, which they didn’t know how to do.
It was like trying to turn a car in the right direction without moving either forward or backward. We couldn’t even get out of the parking lot to know whether we were going in the right direction.
We needed to build up the delivery muscle, and to do that, we needed to go through the reps of delivering something - regardless of the value.
Breaking the desire for full confidence
Some of the Product Managers still weren’t sure. They were still fearful of changes they made. It might be fine to ignore value, but what if it has a negative impact?
My response was simple: who cares? If it has a negative impact, we can see it in the basic instrumentation and roll it back. Changes were reversible. Software is “soft”ware for a reason.
Some weren’t convinced.
“Well, that’s not precise enough.” They wanted to A/B test everything to be absolutely sure it wouldn’t harm a company outcome.
I told them that with change comes risk, and with risk comes the possibility of failure. Trying to measure every single step you take to see if it’s taking you in the right direction isn’t due diligence, it’s fear. There’s no place for fearful product management in a startup.
The fact is, at our company, A/B testing had become a crutch to hide a lack of decision-making.
So, I decided to remove that crutch.
I banned A/B tests.
The right tool for the wrong job
That’s not to say A/B tests aren’t useful. They are, in the right context. The situation our company was in was absolutely the wrong context. The desire of the product managers to test everything was getting in the way of making progress.
There were proposals to A/B test even the smallest of changes - changing the text of a navigation bar link, adding a new filter to a dashboard list, or adding instructional text to a search bar. There was even a proposal to A/B test the error message of a donation!
Little thought was put into what the risk profile of these changes actually looked like. Nobody even considered whether that profile warranted the precision and confidence of an A/B test to make a decision on the outcome. That there were other cheaper, perfectly appropriate mechanisms to obtain outcome confidence didn’t seem to resonate.
The proponents of “A/B test everything” also didn’t put much weight on the fact that we wanted to iterate rapidly on our product, and an A/B test would prevent rapid follow-up iterations in the exact areas we wanted to change as a company, because we would need to wait weeks or months for an appropriately large sample size.
I stuck to my stance.
Drawing decision boundaries
The third thing I did was to ensure that the Product Managers didn’t get interfered with. One of the issues was that they got a lot of feedback from others in the company - folks who had been there longer or who were viewed as “subject matter experts” of a particular area or system, and who argued about the smallest detail or approach. Many arguments ended the same way - the Product Manager seeing all of the possible problems, challenges, and issues and simply concluding that it wasn’t worth it to continue.
Every single thing we do has reasons for not doing it. Some are very good reasons. Some are bad reasons. They are all there, all the time. If we pay too much attention to the reasons we shouldn’t do something, we’ll just get stuck not doing anything at all. Instead, we should look for good reasons to do something, and treat all the reasons not to do something as the addressable, definable, and sometimes ignorable risks they are.
I needed to break the product managers free of the well-intentioned influence from others.
To do that, I took a page out of my autocratic dictator playbook. I told them that the only person whose opinion mattered was my own, and that they should ignore any disagreements from anyone. If I said it was to be done, that was all that mattered. People could argue that we hadn’t validated it, that it had risks, that it wasn’t aligned: none of that mattered - only my opinion that they were to do it.
Listening to the users
If the Product Managers weren’t doing analysis and discovery, then what would they work on? What would get prioritized? How would we determine what we worked on?
Simple: the users.
For years we had multiple mechanisms for collecting user feedback - annual surveys, in-product ratings, support tickets. While some analysis was done and an occasional item or two pulled from them, these sources of user feedback were almost entirely ignored.
It had gotten to the point that users giving us feedback were answering our questions with statements like “Why do we even bother answering this if you never change the things we bring up?” They were tired of making the same complaints, year over year over year, and seeing nothing done.
I didn’t want the Product Organization to waste all their time trying to identify possible new problem spaces. With the surveys, we had problem spaces in front of us we could immediately address. Sure, it might not be the most valuable, but I didn’t care about the value. What use was identifying the most valuable if we had not built up the ability to act on it afterwards? That would be like hearing without listening.
I went through the thousands of answers on the survey and, with the product managers, manually created a list of feedback areas that I called “Pain Points” - the problem areas of our product that people complained about repeatedly.
Many of these had explicitly self-evident solutions, while some contained more ambiguity and nuance:
Alphabetize the dashboard record list
Add a new filter for date
Add <X> information to the record
Save notes properly
Calculating totals
Ensure we get paid for our work
Improve account security
On that list of pain points, I put a “reason for doing” column. I chose 10 pain points - problems I felt had simple solutions with outcomes that would be meaningful to someone and little risk of being harmful to anyone. Next to those 10, I put under “reason for doing” - “Joseph”.
I told the team if an item had my name on it, it was a “do it”, regardless of any objections they may encounter from anyone. If anyone complained, argued, or pushed back, they should redirect that person to me.
My goal was simple: I wanted to see a completely different list of problems the next time the survey was run.
Being talked out of delivering value
There were still extensive discussions happening in Teams and tickets as to why the product managers shouldn’t do the items on my list:
Did we understand the problem space?
What’s the real problem being solved?
Why can’t they just use the information or features we already have?
Are we going to A/B test it?
What if it takes too long?
Why aren’t we going to user-test it?
These were happening for even the smallest changes. Product Managers were being talked out of doing work based on even the most insignificant complaints. For example - adding a new label would decrease the screen real estate by 16 pixels, or that adding a specific filter in the product was duplicative because a user could just go and export their data and filter their spreadsheet in Excel.
Reasons for not moving forward. Reasons for not acting. All reasons without risk-adjusted consideration.
I needed to create a culture where people were OK with delivering something that didn’t solve the problem fully, if at all. I needed to create a culture of build-measure-learn, which meant being OK with getting it wrong the first few times. I needed people to understand that sometimes, you just have to do something to see the effects of it.
Very few of these complaints were one-way doors: that is, items that had negative consequences that were irreversible or massively damaging. Most issues were easily adjusted or minor. Minor issues shouldn’t stop progress.
This was symptomatic of the exact problem we were having before. We would spend weeks talking about a half dozen possibilities to figure out which one to do. In that time, we could’ve successfully delivered all of them and then figured out the right one based on real results. It was all talk, no walk.
I reiterated that they should ignore any objection and proceed with solving the problem at hand. I explicitly told them to ignore every single person complaining or proposing alternatives.
Did that reduce autonomy and collaboration from the organization? Yes, it did. But the team had shown that they couldn’t move forward with the level of autonomy that they had, and that the level of collaboration was no longer productive.
It’s perfectly fine to reduce positive elements to appropriate levels if they start having negative impacts.
Supporting rapid solutions
With these adjustments, the Product Managers were able to bypass the “discovery” and “prioritization” steps that had been bogging them down, and finally move towards “defining” and “scoping” the solution space.
It immediately became clear that solution-definition skillsets had weakened greatly. The initial scope of the proposed solutions to even the simplest pain points often contained weeks or months of development effort that required extensive analysis, discovery, and feedback. The Product Managers were putting the cart before the horse - trying to define massive solutions to complex problems before they had even learned how to define and solve the smallest of problems.
I told the team to think about not just solving the problem in full, but to constrain their answers to different time-boxes.
The 1-hour solution
The 1-day solution
The 1-week solution
The 1-month solution
The 1-quarter solution
These were the solutions they would implement if they had only that much time to implement them.
I prohibited them from pursuing the 1-week solution unless they had first delivered the 1-hour solution to production.
This constraint was hard for them to think about at first. They just couldn’t visualize how to solve a problem partially. I had to guide them to look for low-hanging fruit, and to think about solutions in stages of progressing difficulty, investment, and comprehensiveness in solving the problem:
Informing the user or making it clearer
eg. adding instructional text, clarifying terminology, writing an out-of-product FAQ
Doing it manually internally
eg. taking on a request via email and having an employee do it ad-hoc, creating a meeting to do it once a week for all requestors, etc.
Providing a way for the user to do it manually
eg. adding a basic form to collect information, or giving them an action or button
Doing it automatically
eg. doing something in bulk, on schedule, or removing all the steps
Removing the need for it completely
eg. removing the business process from having to exist
Sure, the best solution might be to remove the need for a user to do something completely, but sometimes just telling the user how to do it correctly solves 80% of the problem.
Did we solve the pain?
The important part about all these changes was that without having a tight feedback loop with our users, we would never be able to tell what effect we were having. It would backfire if we did a bunch of stuff that did not do anything for our users or the company.
We still needed feedback on the decisions we were making and the actions we were taking. We just wanted to not be beholden to it for every step we took.
To do that, I worked to operationalize feedback - to bake in leveraging quantitative and qualitative insights as part of the entire organization’s work.
I worked with the Product Management group to set up:
Insights groups - groups of users we could talk to every week to get feedback, ask questions, and learn about their days
Raised awareness for feedback - I had in-product feedback sent directly to a Slack channel, allowing us to discuss it in real time as comments came in, and I personally amplified feedback we received by sharing it broadly with others and following up on it
Mandatory relationships - I instructed all the Product Managers to directly email specific users and build feedback relationships with them
User interviews - I placed more emphasis on talking to the users, including interviewing them, and sharing notes and learnings in a regular, organized manner
Feedback intent - I created cycles of intentional times where we discussed feedback from users specifically as part of our day-to-day
Instrumentation - I emphasized having product analytics and instrumentation implemented, and knowing what questions we wanted to be able to answer
Relationships with support - I emphasized the need to work closely with support and meet with them regularly, monitor tickets they received, and leverage their knowledge as their front-line experience with our users was valuable
With these, we created firehoses of feedback into the Product organization, and set the expectation that we acted on it in a timely manner.
I especially emphasized qualitative and in some cases anecdotal feedback to ensure we moved past the paralysis caused by concerns of having the maximum quantitative impact.
The changes had massive effects
It was a tremendous 180-degree turnaround.
The Product Management team had gone from taking months and quarters to do a 1-day project because they were so busy thinking about value, to being able to respond to user feedback and deliver something within that same day. We solved dozens of identified user pain points in a short time, some of which led to deeper learnings about their causes and unlocked new valuable areas to explore.
The speed of decisions, the scoping of solutions, the responsiveness to user feedback, and the intentionality of instrumentation and monitoring all worked together to give us more opportunities “at bat” - more chances to get it wrong so we can get feedback and act to get it right.
The Product Managers had successfully done the reps to build up their delivery muscle, and it was showing at the rate we were delivering working, quality solutions to our users.
We had gone from delivering nothing to becoming a fairly efficient feature factory that delivered many things, some of which had value, but with the ability to course-correct based on feedback.
The effect was that we delivered more in a single month than we had in two quarters the prior year. Some of the delivered items were massive wins that saw immediate adoption and usage. Others flopped and saw no usage, but had no negative effects. All led to learnings and better understanding of the mental models and user journeys our product fit into. All were cheap bets that cost little investment, and were delivered incrementally and iteratively.
We would not have delivered on even half the wins had we been operating under the prior rules.
Knowing when to break the rules
Although counterintuitive, ignoring value didn’t harm us given our context. Instead, it was the key to establishing a virtuous cycle and getting out of our death spiral. It broke through the fear of failure and opened up the possibility of learning from mistakes.
Focusing on rapid delivery meant we got better at getting real feedback and insights earlier. Rapid follow-through meant we incorporated that better into our future decisions. An operations process that tightly monitored for and fed new information into decision-making processes made it a virtuous cycle to make mistakes, get things in front of users, and learn from them. These built up capabilities and competencies we were lacking in the organization.
It resulted in faster validation of risks, and richer information than we could’ve gotten from all the interviews and analysis in the world. You can’t predict with 100% precision how users will interact and behave. Some problems you can’t analyze your way out of - not all problems are complicated: some exist in complex adaptive systems where action is required.
The speed meant having more opportunities to have an effect. This ended up meaning we experienced more wins than if we had tried to avoid the failures.
If we made 25 attempts and failed 20 of those times, we would still be better off than if we had tried only 3 times and had all 3 win. More “at bats” meant more chances to win and learn and adjust course.
It also meant we got better at defining, shepherding, and delivering solutions. There’s an oft-told story about a pottery class where the students graded on the sheer quantity of pots they made ended up producing better pots than the students graded on making a single high-quality pot. The exact same thing happened here - doing the reps let the Product Managers get better at the work.
That didn’t mean we ignored outcomes completely. We still built a culture of instrumentation and measuring and making sure we could observe the effects. We still built a culture of listening to users, building relationships and feedback loops, and sharing knowledge. We still built a culture of defining and validating risks.
We just decided that value was not to be a driving factor in our decisions for a certain time frame. It worked.
Re-orienting towards value
Ignoring value wasn’t where we wanted to be in an ideal world, but it was great progress relative to doing nothing. We don’t need a team of 50 to do nothing.
Now that we had the means to actually act on and learn from our actions, we could then shift the organization towards increasing diligence around value without losing our delivery effectiveness.
It was time to take the journey towards Value once again - this time without the baggage of analysis paralysis.
As I let go of the steering wheel, I started orienting Product Managers to think more deeply and strategically, giving back some of the autonomy I had taken away.
How I did that, however, is a War Story for another time.


