Product Management Skills - Articulating development cost risks
As a product manager, you have to be able to articulate, separate, and define your risks - the potential negative effects of whatever you are trying to do. Articulating them helps you frame and compare them appropriately - ignoring low risks and mitigating higher ones.
These are the levers you use and the plates you balance as a product manager to create and deliver value.
Saving money can be wasting money if you aren’t replacing the work you cut with another piece of work. Thinking about the theoretical capacity available helps combat this counter-intuitive issue.
Development cost risk
Development cost risk is the easiest risk to think about for Product Managers, but it’s also the one they misuse the most.
Development cost is the cost to develop the solution. Every solution being built has a quantifiable amount of time it is expected to take; that is the development cost. To identify this amount, Product Managers work with their engineering partners to obtain estimates, look at historical trends, or, if they’re bad, just make something up.
Once that time is obtained, you can multiply it by the average developer salary to quantify the development cost in terms of dollars:
For example, suppose there are 2 developers working on something that’ll take them 2 weeks to build, at a cost of $4,000 / week per developer:
2 developers
x 2 weeks
—————
4 developer-weeks
x $4,000 / week average salary
—————
$16,000 development cost
You now know that it’ll cost $16,000 to develop the solution.
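If it helps to keep this as a reusable back-of-the-envelope calculation, here’s a minimal Python sketch of the same arithmetic. The function name and the figures are just the illustrative values from the example above, not recommended defaults:

def development_cost(developers: int, weeks: float, cost_per_dev_week: float) -> float:
    """Estimated dollar cost to develop a solution."""
    developer_weeks = developers * weeks          # 2 x 2 = 4 developer-weeks
    return developer_weeks * cost_per_dev_week    # 4 x $4,000 = $16,000

# Using the figures from the example above:
print(development_cost(developers=2, weeks=2, cost_per_dev_week=4_000))  # 16000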
The development cost risk here is that the cost of developing the solution will not be worth the impact it has. For example, if the impact ends up being estimated to be $100, it probably wasn’t a good tradeoff.
The second development cost risk is that a cost overrun occurs and the solution is more expensive to develop than previously believed. This results in the same effect - the solution not being worth the impact - and is more of an engineering concern to monitor and avoid, outside the scope of this particular post.
Addressing the risk
As a Product Manager, you can address this risk in a variety of ways.
The first option is to reduce development costs by cutting scope. Can you solve the problem with a simpler solution? If so, you can change the ROI calculation for the solution.
Many Product Managers don’t actually cut scope, though. They haven’t thought about the solution space enough, or maybe they can’t envision what the smallest possible increment could be.
Instead, the first tool they often reach for is to say “no” and to remove that solution from consideration. They don’t do it at all, instead looking for more valuable items to work on.
Ka-ching! You just saved the company $16,000…maybe.
A key pitfall in saying “no”
There is a subtle nuance in addressing development cost risk that most Product Managers forget when working in a product-based organization: you are always paying it, regardless of what you are or aren’t working on.
Developers in long-lived teams are typically salaried - they get paid every two weeks regardless of whether they do 1 thing or 100 things. This cost is a recurring amount. It doesn’t stop if there’s no backlog of items to work on.
Product Managers attempting to prioritize solely through the lens of “saving developer bandwidth” or “avoiding something that is too costly to build” can act counter-productively, incurring an even higher cost than the time they actually saved by not doing a piece of work.
This is because many well-intentioned Product Managers turn down decent, valuable items in order to prepare or work on something even more valuable, without having anything else ready for development. With no backlog and no ready work, the developers sit around with nothing to do, tinker on technical concerns of intellectual interest but no meaningful impact, or work on lower-value items than the one that was discarded.
Remember: in an organization with salaried engineers, you are always incurring development cost if you don’t have something ready for the developers to do. Efforts to address development cost must factor in that this recurring cost is the floor.
Counter-intuitively, this means that declining to do something because of concerns of development cost can actually lead to lower overall value delivery in some cases.
An alternative approach
An alternative model I prefer to use when thinking about development cost is percentage of developer bandwidth. It’s a common one I use as an Engineering Manager when I think about team capacity.
It assumes that you get a fixed amount of bandwidth per time period (e.g. a month). The items you work on, as estimated in your preferred dev-unit (e.g. dev-weeks), take up a percentage of that total time available.
You can get the total time available by multiplying the number of developers you have by the number of units in your time period, applying a fudge factor for holidays, weekends, etc. This gives you your maximum theoretical development capacity.
As time marches on through your time period, your theoretical development capacity decreases to zero.
For example, to look at a month:
2 developers
x 3.2 theoretical dev-weeks in a month (to factor in meetings)
—————
6.4 theoretical developer-week capacity per month
Then, you can look at your estimate in terms of the percentage of development capacity it consumes:
2 developers
x 2 weeks (estimate)
—————
4 developer-weeks (estimate)
÷ 6.4 theoretical developer-week capacity per month
—————
62.5% of available developer-week capacity per month
You then know that you’ll be using up 62.5% of your theoretical development capacity for that month.
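Expressed as a minimal Python sketch, again using only the illustrative figures from the example (2 developers, a 3.2 dev-week fudge-factored month, and a 4 developer-week estimate), with function names that are mine rather than any standard:

def theoretical_capacity(developers: int, dev_weeks_per_period: float) -> float:
    """Maximum theoretical developer-weeks available in the time period."""
    return developers * dev_weeks_per_period

def utilization(estimate_dev_weeks: float, capacity_dev_weeks: float) -> float:
    """Share of theoretical capacity an estimated item would consume."""
    return estimate_dev_weeks / capacity_dev_weeks

capacity = theoretical_capacity(developers=2, dev_weeks_per_period=3.2)          # 6.4 dev-weeks
print(f"{utilization(estimate_dev_weeks=4, capacity_dev_weeks=capacity):.1%}")   # 62.5%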
It’s clearly visible that when you remove the item that was going to be worked on and don’t replace it, you end up using 0% of that theoretical development capacity, which isn’t actually a savings: it’s waste.
This makes it much clearer than if you had merely used dollar values to represent the cost - it feels different because it illustrates not just what you save, but what you waste.
Note: if you pay developers by the hour, then development cost is as straightforward as it sounds, and thinking in terms of utilization matters less.
In practice
I’ve found this model useful when product managers get stuck in analysis paralysis. Some product managers become so fixated on identifying the highest-value item that they do so at the expense of their teams having anything to do, sometimes for months on end. Telling product managers that they should at least achieve a certain minimum amount of utilization can help spur them towards becoming comfortable making decisions and acting, even if that action isn’t on the highest-value item in the moment.
By forcing them to realize that the cost isn’t saved but wasted, it makes the picture clearer and helps them make sub-optimal, but still valuable, decisions. Worrying about min-maxing value makes sense at higher levels of utilization, but isn’t necessary at lower levels.
It then becomes an exercise of achieving a balance in developer utilization, cost, and value. Applying a constraint creates a forcing function that helps narrow down an infinite set of decisions into a path forward.


