Recently, there has been a big shift in the focus of Product teams from outputs to outcomes. In other words, some companies are starting to care a little bit less about the fact that a feature got shipped and a little bit more about whether that feature had a positive impact on user behavior and metrics. This is a great development. Shoving out 10 new features, none of which improve anything, seems like a fairly big waste of everybody’s time and money.
Unfortunately, a lot of teams find it hard to understand whether what they’ve built improved anything important. This happens for a lot of reasons, like not having the right metrics, not being given time to measure things, or not knowing what the goal was in the first place. Even if they do know what improved, a lot of teams can’t tell you if the improvement was worth the effort.
These are all pretty big impediments to being able to measure outcomes and make better choices. By taking a slightly more disciplined approach to planning and review, teams can not only evaluate their work better, they can also improve their future decisions by identifying places where they’ve consistently made mistakes.
Step 1: Write Your Goals Clearly
If everybody just did this first step, their products would improve significantly. This is without question the most important thing you can do to make better decisions. Write down your goals and expectations before you start building.
How you want to do this is really up to you. I know of at least a half dozen different styles for stating the expected outcomes of your feature or product. However you do it, you need to capture a few key pieces of information:
A description of the thing you’re building
How you expect the change you’re making to improve things and by when
How you’ll measure that improvement
Which things you want to monitor to make sure they aren’t badly affected
What sort of investment will be required to make the change
Why you believe what you believe
The first thing on the list should be trivial, especially since you shouldn’t be making these until you’re fairly close to ready to start working on the new feature or project. These aren’t lists you make for every single possible feature you might build. These are well informed estimates for a project that is ready to go. If the project requires significant research and/or design work, you will likely want to do a short version of this before that starts and then update key parts when you have a better idea of what you’re going to be building.
The second and third items are tricky, because this is where you start framing things as outcomes and benefits rather than just restating the feature. For example, you can’t say something like “Adding the ability to pay by mobile phone will let users pay by mobile phone.” You have to explain why that’s a good thing for both the user and the company. Something more like “Adding the ability to pay by mobile phone will allow a significant number of people who cannot currently use our service to start using it.”
The third one is even harder, since that’s where you have to explain what “significant” means and how you’ll measure it. Just measuring how many users pay with mobile phone doesn’t do the trick here. You’ll probably need to see how many new users pay that way and whether current users who switch end up spending more or less.
And don’t forget the fourth item! In this example, you’d also need to monitor how many new users still paid the old way and overall sales in order to make sure you’re not cannibalizing a different payment method. Of course, you also need a method that lets you isolate your changes to make sure that sales didn’t go up for some unrelated reason like a big promotion or a sale on the day you release your new mobile phone payment feature.
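If your team logs transaction counts per payment channel, the guardrail monitoring described above can start as a simple baseline comparison. This is a minimal sketch, not a substitute for a properly isolated experiment; the function name, data shape, and 10% threshold are all assumptions for illustration:

```python
def check_guardrails(baseline, current, max_drop=0.10):
    """Return the channels that dropped more than max_drop vs. baseline.

    baseline and current map channel name -> transaction count for
    comparable periods (e.g. the month before and after release).
    """
    alerts = []
    for channel, before in baseline.items():
        after = current.get(channel, 0)
        if before > 0 and (before - after) / before > max_drop:
            alerts.append(channel)
    return alerts

# Mobile payments are up, but card payments fell 15% -- possible cannibalization.
baseline = {"card": 1000, "bank_transfer": 200}
current = {"card": 850, "bank_transfer": 210, "mobile": 300}
print(check_guardrails(baseline, current))  # -> ['card']
```

A check like this only flags a correlation; you’d still need to rule out unrelated causes, like a promotion running during the same period, before blaming the new feature.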
The second to last item - what sort of investment will be required to make the change - doesn’t have to be stated in money. In fact, it’s pretty hard to do that in most companies. But once you’re at the point where you’re ready to start building something, you should have a decent idea of how long it should take and how many people or teams will be involved.
Make sure that you’re not just talking about the time it takes to ship something. This should be how long it will take until it’s being actively used by people and you’re starting to see value from it. Those two things can be quite different, especially in B2B environments. If sales is telling you that you’ll get a big new client if you build a new feature, make sure that part of the investment includes educating clients about the new feature and training sales how to sell it, etc. Don’t forget to include any time research and design spent working on this before you had enough info to write everything down, and be sure to keep track of further research and design work as you build.
The last item - why you believe what you believe - should be the easiest. What’s driving the decision to build this feature? Was there research that showed there was a huge potential market that couldn’t pay with a credit card? Did a specific person in the company insist that this was high priority? Did a salesperson say you couldn’t win a big account without it? Write it down! Be honest. “The CEO insisted” is an acceptable thing to write here, but I do encourage you to try to understand why the CEO fell in love with the feature in the first place.
If you do this correctly, over time, you’ll start to get a great view of which sorts of evidence are the most trustworthy and which sources provide the best feature or product ideas. I sometimes record an extra piece of information: “Who disagreed with this feature?” Not everybody is always on board with every decision. Keep track. Sometimes you start to see patterns of people who will waste everybody’s time with their “great ideas,” and other times you’ll learn who’s needlessly pessimistic about every new change.
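If you want to keep these write-ups somewhere consistent and queryable, the six items above map naturally onto a simple record. Here’s a minimal sketch in Python - the field names and the example values are my own illustrations, not a standard format:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureHypothesis:
    """One record per project, written down before building starts."""
    description: str          # what we're building
    expected_outcome: str     # how it should improve things, and by when
    success_metrics: list     # how we'll measure the improvement
    guardrail_metrics: list   # things that must not get worse
    investment: str           # rough time/people estimate, not just ship date
    evidence: str             # why we believe what we believe
    skeptics: list = field(default_factory=list)  # optional: who disagreed

mobile_pay = FeatureHypothesis(
    description="Let users pay by mobile phone",
    expected_outcome="Significant new-user growth within 6 months of release",
    success_metrics=["new users paying by mobile", "spend of users who switch"],
    guardrail_metrics=["card payment volume", "overall sales"],
    investment="2 engineers for ~6 weeks, plus sales training",
    evidence="Research showed a large market that can't pay by credit card",
)
```

A shared spreadsheet works just as well; the point is that every project gets the same fields filled in before work begins, so the retrospectives have something concrete to check against.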
Step 2: Post-Release Retrospective
Your post-release review happens as soon as the project is over. Please note that this does not replace regular product or engineering team retrospectives. If you do those, please carry on!
For those of you who loathe all meetings on principle, please remain calm. I’m not adding a huge number of them - just two per project, where projects are defined as a fairly large feature or as a new version of a product or something of similar scope. You don’t need to do these for every button you add or piece of text you change.
During this meeting, you’re going to review parts of your list and ask a few important questions:
Did we end up building more or less what we thought we were going to build?
If not, why not?
What were the reasons for the changes?
How close were we to the original investment estimate?
How were we wrong? (hint: you almost certainly underestimated wildly!)
Which specific costs were higher or lower than predicted?
What took us significantly more or less time than we thought?
Why were we wrong?
You’re not going to be assessing whether your new thing meets expectations yet, because there’s almost never a realistic way to know that this early. All you’re doing is looking at what you expected to build, what you ended up building, how much you thought it would cost (in time/money/opportunity/whatever), and what it ended up costing.
These are extremely important things to evaluate. If you find, as so many teams do, that everything ended up taking twice as long as you expected, that’s going to affect your company’s outcomes. After all, would you have gone after that big new client if you’d known how much it would cost to build the feature they needed? Maybe! But you’ll never know unless you get a fairly accurate view of how long the project took, and this is easiest to do immediately after you think you’re finished.
Step 3: Outcome Retrospective
And now, we wait. There are very few companies that can immediately judge whether a new feature has the impact they expected. All of those companies are big consumer properties with millions of transactions per day (or per second). Even then, there are all sorts of features that might require some time to measure - internal tools, features built for a small subset of the customer base, etc.
That’s why in the original list, you need to specify when you think you’ll see the benefit by. Do you think it will take a few months to land the big new customer even after the feature they wanted is released? Fine, set that date ahead of time. Be generous with yourself, even. But be honest.
If you think you’ll see a benefit in 6 months, check back in 6 months, but don’t keep extending the deadline if that customer still isn’t landed. It’s important for you to understand how long it can take to get the benefits you’re predicting. Hold the meeting, record the truth, and then feel free to set up a future date for an optional later retrospective if you think there’s still a chance you’ll get some benefit.
On the appointed day, hold your next retrospective for the project. In this one, you’re going to go through the whole list, including the part you went through before. The questions you are trying to answer are:
How much has what we built changed since we thought it was “done”?
Why did we change it?
How much more work was it?
How much did it end up adding to the original estimate?
Were there any benefits that we can prove came from the change we made?
Why do we think we can attribute those changes to the new feature or product?
Are there other things we also did that might account for the improvements?
How realistic were we about the outcomes?
If we were wrong, why?
Were there any negative consequences of the thing we built?
What were they?
Why did they happen?
Why didn’t we anticipate and prevent them?
If you were off on anything - investment, benefits, side effects, etc. - then you have to ask the most important question: What can we do differently next time to avoid these same mistakes?
This is the question I don’t hear people asking often enough. They just shrug their shoulders and move to the next thing. Inevitably, they end up underestimating the costs and overestimating the benefits again and again. It’s infuriating.
Some Important Reminders
There is a tendency, when we start asking questions like “What went wrong?”, to turn the conversation into a blamefest. You can’t do that here, or nobody will be honest, and if nobody’s honest, nobody will learn.
These have to be free of blame. It’s not “who made this terrible decision?” The question we’re asking is, “how can we make better decisions in the future?” If you want more info on this, check out the concept of blameless post-mortems in engineering. That’s where I stole it from, anyway.
Not Just Products
Another important thing to note is that, while I’ve been describing this as “building a product or feature,” this technique works great for any sort of big project or objective. Maybe you’re switching over to a new HR system that you think will reduce a specific kind of routine task your team has to do. Or maybe you’re adding a CRM and a new process for your sales team. Great! Write it down and do a couple of retros. Make sure you’re making good decisions.
Remember to Iterate
One of the nice things about this method is that you may find the second retrospective is a great time to ask yourself what you should do next on the project. Did it live up to expectations or do even better than you imagined? Great! Maybe you should double down. Did it go wildly over budget and return nothing? Now’s a good time to figure out a way to fix it or kill it.
It can be tough to convince execs to let you iterate on features that are “done.” It can also be incredibly easy to let non-performing features linger forever as zombies in your product. This is a fantastic breakpoint that encourages everybody to assess the feature objectively and take the right next step.
Include the Whole Team
These are not meetings that you hold in secret or with only executives. They’re not about judging other people or finding ways to punish bad performers. They need to be run by the teams who are doing the work, and ideally, they include any stakeholders or decision makers. If you can’t get everybody actively involved, make sure that they at least see the results, especially if the right next step involves changing some important process.
Anybody who can make decisions on a project should be given the information they need to determine whether their decisions were good. It’s the only way we learn to make better decisions.
Make the Necessary Changes
You will need to make some changes. The hardest part of this process is not adding extra meetings or writing down goals. The hardest part is learning from your mistakes and changing the environment that allowed them to happen.
Every so often, go back over the notes from previous features. Are there patterns? Are there mistakes you’re making repeatedly? Are there “reasons” for building features or products that consistently underperform? Are you always overestimating the return on features and underestimating the cost?
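Once you have a few of these records, spotting the estimate pattern is simple arithmetic: for each past project, divide the actual cost by the estimated cost and look at the average. A quick sketch, assuming you’ve recorded both numbers in comparable units (person-weeks here; the sample data is made up):

```python
def average_overrun(projects):
    """Average ratio of actual to estimated cost across past projects.

    Each project is an (estimated, actual) pair in the same units,
    e.g. person-weeks. A result of 2.0 means projects typically take
    twice as long as predicted.
    """
    ratios = [actual / estimated for estimated, actual in projects if estimated > 0]
    return sum(ratios) / len(ratios)

past = [(4, 9), (6, 11), (3, 7)]  # (estimated, actual) person-weeks
print(round(average_overrun(past), 2))  # -> 2.14
```

If the number sits stubbornly above 1, padding future estimates by that factor is a systemic fix you can actually apply, unlike resolving to estimate better next time.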
This is where you need to come up with systemic changes, and you can’t just write down, “BE SMARTER” because that never works. Trust me. You need to identify where the system went wrong and change it when possible.
This part is hard and probably deserves its own blog post, but there’s lots of good info about this if you look at information about software post-mortems.
Make It Yours
And, as with all advice, feel free to adapt or change this to suit your team’s needs. No advice is one size fits all, and no set of questions will be perfect for all projects. But all teams can benefit from stating their expectations clearly before starting a project and reviewing specific metrics once the project is finished.
Interested in learning more? Check out a version of this in the Hypothesis Tracker section of my book, Build Better Products.