Lean UX Videos

Recently, I've been experimenting with new ways of delivering information about UX for Lean Startups. Yes, this is a very poor excuse for not blogging as much. But it's also a genuine effort to get information about user experience design to new people.

As part of this effort, I'm making a series of short (10 minutes or so) videos for UXD for Developers. This is a show on YouTube produced by the folks at the Android Developer Network at Google.

Two of my videos are already posted, and at least one more is on the way. A list of all the videos (including some that I'm not in) is here: UXD for Developers.

In my episodes, I cover an Intro to Lean UX and Qualitative vs. Quantitative Research for UX.

New episodes are released every Tuesday, so make sure to subscribe to the channel to get all the updates!


You Can't Make Good Decisions with Bad Data

I think a critical lesson of the Lean Startup movement is that you have to learn quickly.

The “quickly” part of that lesson can lead to a culture of “good enough.” Your features should be good enough to attract some early adopters. Your design should be good enough to be usable. Your code should be good enough to make your product functional.

While this might drive a lot of perfectionists nuts, I’m all for it. Good enough means that you can spend your time perfecting and polishing only the parts of your product that people care about, and that means a much better eventual experience for your users. It may also mean that you stay in business long enough to deliver that experience.

I think though that there’s one part of your product where the standard for “good enough” is a whole lot higher: Data. Data are different.

You Can’t Make Good Decisions With Bad Data

The most important reason to do good research is that it can keep you from destroying your startup. I’m not being hyperbolic here. Bad data can ruin your product.

Imagine for a moment an a/b testing system that randomly returned the wrong test winner 30% of the time. It would be tough to make decisions based on that information, wouldn’t it? How would you know if you were choosing the right experiment branch?
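To make that concrete, here's a tiny simulation of what that hypothetical broken system does to your decisions. All the numbers here are made up for illustration; this isn't modeled on any real testing tool:

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def noisy_winner(true_winner, error_rate=0.3):
    """Simulate an a/b testing system that reports the wrong
    winner some fraction of the time (hypothetical numbers)."""
    if random.random() < error_rate:
        return "B" if true_winner == "A" else "A"
    return true_winner

# Run 1,000 experiments where variant A is genuinely better.
reports = [noisy_winner("A") for _ in range(1000)]
accuracy = reports.count("A") / len(reports)
print(f"Picked the real winner {accuracy:.0%} of the time")
```

Nearly a third of your "winners" are losers, and you have no way of telling which ones.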

Qualitative research can be just as bad. I can’t tell you how many founders have spent time and money talking to potential customers and then wondered why nobody used their product. Nine times out of ten, they were talking to the wrong people, asking the wrong questions, or using terrible interview techniques.

I had one person tell me, “bad data are better than no data,” but I strongly disagree here. After all, if I know I don’t have any data, I can go do some research and learn something.

But if I have some bad data, I think I already know the answers. Confirmation bias will make it even harder for me to unlearn that bad information. I’m going to stop looking and start acting on that information, and that may influence all of my product decisions.

If I “know” that all of my users are left-handed, I can spend an awful lot of time building and throwing out features for left-handed people before realizing that what I got wrong was the original premise. And, of course, that problem is made even worse if I’m not getting good information about how the features are actually performing.

You Have To Keep Doing It

Unlike any given feature or piece of code, collecting data is guaranteed to be part of your process for the life of your startup.

One of the best arguments for building minimum viable products and features is that you might just throw them out once you’ve learned something from them (like that nobody wants what you built).

This isn’t true of collecting data. Obviously you may change the way you collect data or the types of data you collect, but you’re going to keep doing it, because there’s simply no other way to make informed decisions.

Because this is something that you know is absolutely vital to your company, it’s worth getting it right early.

Data Collection Is Not a Mystery

Most of your product development is going to be a mystery. That’s the nature of startups.

You’ve got a new product in a new market, possibly with new technology. You have to do a lot of digging in order to figure out what you should be building. There’s no guidebook telling you exactly what features your revolutionary new product should have.

That’s not true of gathering data. There is a ton of useful, pertinent information about the right way to do both qualitative and quantitative research. There are workshops and courses you can take on how to not screw up user interviews. There are coaches you can hire to get you trained in gathering all sorts of data. There are tools you can drop in to help you do a/b testing and funnel tracking. There are blogs you can read written by people who have already made mistakes so that you don’t have to make the same ones. There is a book called Lean Analytics that pretty much lays it out for you.

You don’t have to take advantage of all of these things, but you also don’t have to start from scratch. Taking a little time to learn about the tools and methods already available to you gives you a huge head start.

Good Data Take Less Time Than Bad Data

Here’s the good news: good data actually take less time to collect than bad data. Sure, you may have to do a little bit of upfront research on the right tools and methods, but once you’ve got those down, you’re going to move a hell of a lot faster.

For example, customer development interviews go much more quickly when you’re asking the right questions of the right people. You don’t have to talk to nearly as many users when you know how to not lead them and to interpret their answers well. Observational and usability research becomes much simpler when you know what you’re looking for.

The same is true for quantitative data collection. Your a/b tests won’t seem nearly so random when you’re sure that the information in the system is correct. You won’t have to spend as much time figuring out what’s going on with your experiments if you trust your graphs.

Good Data Do Not Mean Complete Data

I do want to make one thing perfectly clear: the quest for good data should be more about avoiding bad data than it is about making sure you have every scrap of information available.

If you don’t have all the data, and you know you don’t have all the data, that’s fine. You can always go out and do more research and testing later. You just don’t want to put yourself into the situation where you have to unlearn things later.

You don’t have to have all the answers. You just have to make sure you don’t have any wrong answers. And you do that by setting the bar for “good enough” pretty damn high on your data collection skills.


Like the post? Please share it!

Want more information like this? 


My new book, UX for Lean Startups, will help you learn how to do better qualitative and quantitative research. It also includes tons of tips and tricks for better, faster design. 

The Best Best Practice

I get asked for a lot of what I call "generic" advice, which I'm not really very good at giving. People will ask questions like, "Should I make a prototype?" or "Should I build a landing page?" or "Should I do more customer development?"

If you've asked this in email, you've probably gotten an unreadable 5,000-word manifesto that is essentially a brain dump of everything I can think of on the topic. If you've asked me in person, you've almost certainly had to listen to me blather until your eyes glazed over.

Wherever you've asked, I've probably started the response with the words, "Well, it depends..."

And it does depend. What you should do right now with your product depends on a tremendous number of factors.

However, I think I've got some better advice for you.

You see, there aren't really Best Practices in Lean UX that apply in every situation. There are merely things that would be extremely helpful, except in cases where they'd be a huge waste of time. You can learn all the techniques in the world, but you still have to know when to apply them.

Every time you are wondering, "should I do this thing?" you should immediately ask yourself the following three questions:
  • What do I hope to learn by doing this?
  • How likely is it that I will learn what I want to learn by doing this?
  • Is there a faster, cheaper, or more effective way that I could learn what I want to learn?

An Example!

Somebody recently asked me if his company should build an interactive prototype of a proposed new feature. 

I asked him what he hoped to learn by building an interactive prototype. He said he wanted to know if people would use the feature. I explained that, actually, interactive prototypes aren't terribly good for figuring out if people will use your new feature. They're only good for figuring out if people can use your new feature. 

So, by building an interactive prototype, you're very unlikely to learn what you want to learn. A more effective way to learn if people will use a new feature might be a Feature Stub (also called a Fake Door). 

Note: A Feature Stub is where you put some sort of access in your product to the proposed feature. For example, if you were wondering if people would watch an informational video, you might put a link on your site called Watch This Informational Video and then record how many people clicked on the link. If nobody clicked your link, you wouldn't bother to make an informational video. 
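A Feature Stub needs almost no code; it's really just counting. Here's a minimal sketch in Python of the bookkeeping involved. The class and names are made up for illustration, not from any real analytics framework:

```python
from dataclasses import dataclass

@dataclass
class FeatureStub:
    """A 'fake door': the link exists, the feature doesn't."""
    name: str
    impressions: int = 0
    clicks: int = 0

    def shown(self):
        self.impressions += 1

    def clicked(self):
        # In a real product you'd show a "coming soon" message
        # and log the event to your analytics system.
        self.clicks += 1

    @property
    def click_rate(self):
        return self.clicks / self.impressions if self.impressions else 0.0

stub = FeatureStub("watch-informational-video")
for _ in range(200):
    stub.shown()
for _ in range(6):
    stub.clicked()
print(f"{stub.name}: {stub.click_rate:.1%} click-through")  # 3.0%
```

If the click-through rate on the fake door is tiny, you've just saved yourself the cost of making that informational video.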

To be clear, it may be that he should also build an interactive prototype in order to figure out if people can use the feature as designed. However, his first step should be to learn whether the feature is worth building at all. If nobody's going to use the feature, it's best to learn that before you spend a lot of time designing and building it.

It's All About Learning

The reason these questions are so important is that Lean Startup is all about learning quickly. If a particular Best Practice helps you learn what you need to learn, then you should use it. If not, you shouldn't. At least, not just yet. In other words, it depends.

Want to learn more? Buy this book.



10 Reasons Founders Should Learn to Design

I know, I know. Founders and entrepreneurs are already being told that they need to learn how to code, hire, raise money, and get customers.

Screw that. What founders and entrepreneurs should really do is learn how to build a great, usable, useful product. And that means learning the fundamentals of research and design.

Don't believe me? Here are 10 reasons you should learn to be your own UX designer (or at least learn enough about UX design to fake it).
  1. You can't build a great product if you don't know what problem it solves for which people. UX design and research helps you figure that out.
  2. The only thing harder to find than a great designer is a unicorn.
  3. It is almost impossible to judge somebody else's UX design skills unless you have designed things yourself. 
  4. The only thing more expensive than a great designer is a Fabergé egg. Sitting on top of a unicorn. 
  5. It's much easier to manage somebody who is doing a job you truly understand. 
  6. Jason Putorti already has a job.
  7. UX design is a team sport. You don't want to get picked last for the team, do you? 
  8. You have a million fabulous feature ideas. It's easiest to communicate them to your team and customers through design. 
  9. You should already understand your product and users better than anybody else. This just takes it to the next logical step. 
  10. Adding extra people to the Build>Measure>Learn loop does not make it faster. 
Convinced? Great! First, share this list with people!

Now, here's a book to help you learn how to do enough user research and design to get your product into the hands of people who want to buy it. 


It's called UX for Lean Startups. It's by me. It will help you learn how to build great products. I promise. 

Combining Qualitative & Quantitative Research


Designers are infallible. At least, that’s the only conclusion that I can draw, considering how many of them flat out refuse to do any sort of qualitative or quantitative testing on their product. I have spoken with designers, founders, and product owners at companies of all sizes, and it always amazes me how many of them are so convinced that their product vision is perfect that they will come up with the most inventive excuses for not doing any sort of customer research or testing. 

Before I share some of these excuses with you, let’s take a look at the types of research I would expect these folks to be doing on their products and ideas.

Quantitative Research

When I say quantitative research in this context, I’m talking about a/b testing, product analytics, and metrics - things that tell you what is happening when users interact with your product. These are methods of finding out, after you’ve shipped a new product, feature, or change, exactly what your users are doing with it. 

Are people using the new feature once and then abandoning it? Are they not finding the new feature at all? Are they spending more money than users who don’t see the change? Are they more likely to sign up for a subscription or buy a premium offering? These are the types of questions that quantitative research can answer. 

For a simple example, if you were to design a new version of a landing page, you might run an a/b test of the new design against the old design. Half of your users would see each version, and you’d measure to see which design got you more registered users or qualified leads or sales or any other metric you cared about.
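The mechanics of that split are simple. Here's a hedged sketch of one way to deterministically assign users to a variant and tally conversions; it's illustrative only, and any real a/b testing tool handles this (plus significance math) for you:

```python
import hashlib
from collections import Counter

def variant(user_id: str, experiment: str = "landing-page-v2") -> str:
    """Hash the user id so each user always sees the same version.
    The experiment name is a made-up example."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] < 128 else "B"

# Tally conversions per variant from a stream of (user_id, converted) events.
seen, converted = Counter(), Counter()
events = [("u1", True), ("u2", False), ("u3", True), ("u4", False)]
for user_id, did_convert in events:
    v = variant(user_id)
    seen[v] += 1
    if did_convert:
        converted[v] += 1
for v in sorted(seen):
    print(v, converted[v], "/", seen[v])
```

Hashing instead of coin-flipping matters: a returning user who bounces between variants would pollute both groups' numbers.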

Qualitative Research

By qualitative testing, I mean the act of watching people use your product and talking to them about it. I don’t mean asking users what you should build. I just mean observing and listening to your users in order to better understand their behavior. 

You might do qualitative testing before building a new feature or product so that you can learn more about your potential users’ behaviors. What is their current workflow? What is their level of technical expertise? What products are they already using? You might also do it once your product is in the hands of users in order to understand why they’re behaving the way they are. Do they find something confusing? Are they getting lost or stuck at a particular point? Does the product not solve a critical problem for them? 

For example, you might find a few of your regular users and watch them with your product in order to understand why they’re spending less money since you shipped a new feature. You might give them a task in order to see if they could complete it or if they got stuck. You might interview them about their usage of the new feature in order to understand how they feel about it. 


Excuses, Excuses

While it may seem perfectly reasonable to want to know what your users are really doing and why they are doing it, a huge number of designers seem really resistant to performing these simple types of research or even listening to the results. I don’t know why they refuse to pay any attention to their users, but I can share some of the terrible excuses they’ve given me. 


A/B Testing is Only Good for Small Changes

I hear this one a lot. There seems to be a misconception that a/b testing is only useful for things like button color and that by doing a/b testing you’re only ever going to get small changes. The argument goes something like, “Well, we can only test very small things and so we will test our way to a local maximum without ever being able to really make an important change to our user experience.”

This is simply untrue.

You can a/b test anything. You can show two groups of users entirely different experiences and measure how each group behaves. You can hide whole features from users. You can change the entire checkout flow for half the people buying things from you. You can test a brand new registration or onboarding system. And, of course, you can test different button colors, if that is something that you are inclined to do.

The important thing to remember here is that a/b testing is a tool. It’s agnostic about what you’re testing. If you’re just testing small changes, you’ll only get small changes in your product. If, on the other hand, you test big things - major navigation changes, new features, new purchasing flows, completely different products - then you’ll get big changes. And, more importantly, you’ll know how they affected your users. 


Quantitative Testing Leads to a Confused Mess of an Interface

This is one of those arguments that has a grain of truth in it. It goes something like, “If we always just take the thing that converts best, we will end up with a confusing mess of an interface.”

Anybody who has looked at Amazon’s product pages knows the sort of thing that a/b testing can lead to. They have a huge amount of information on each screen, and none of it seems particularly attractive. On the other hand, they rake in money.

It’s true that when you’re doing lots of a/b testing on various features, you can wind up with a weird mishmash of things in your product that don’t necessarily create a harmonious overall design. You can even wind up with features that, while they improve conversion on their own, end up hurting conversion when they’re combined. 

As an example, let’s say you’re testing a product detail page. You decide to run several a/b tests simultaneously for the following new features:
  • customer photos
  • comments
  • ratings
  • extended product details
  • shipping information
  • sale price
  • return info

Now, let’s imagine that each one of those items, in its own a/b test, increases conversion by some small, but statistically significant margin. That means you keep all of them. Now you’ve got a product detail page with a huge number of things on it. You might, rightly, worry that the page is becoming so overwhelming that you’ll start to lose conversions.

Again, this is not the fault of a/b testing – or in this case, a/b/c/d/e testing. This is the fault of a bad test. You see, it’s not enough that you run an a/b test. You have to run a good a/b test. In this case, just because the addition of a particular feature to your product page improved conversions doesn’t mean that adding a dozen new features to your product page will increase your conversion. 

In this instance, you might be better off running several a/b tests serially. In other words, add a feature, test it, and then add another and test. This way you’ll be sure that every additional feature is actually improving your conversion. Alternatively, you could test a few different versions of the page with different combinations of features to see which converts best. 
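Whichever way you run the tests, you still have to judge whether each lift is real rather than noise before keeping a feature. Here's a rough sketch of the standard two-proportion z-score with made-up numbers; for real decisions, lean on a proper stats library or your testing tool's built-in significance reporting:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Rough two-proportion z-score for an a/b test.
    conv_* are conversion counts, n_* are visitor counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 5,000 visitors per branch; the control
# converts at 4%, the page with the new feature at 5%.
# |z| > 1.96 is roughly significant at the 95% level.
z = two_proportion_z(200, 5000, 250, 5000)
print(f"z = {z:.2f}")
```

Running the check after each serial test is what tells you the latest addition is actually earning its place on the page.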


A/B Testing Takes Away the Need For Design

For some reason, people think that a/b testing means that you just randomly test whatever crazy shit pops into your head. They envision a world where engineers algorithmically generate feature ideas, build them all, and then just measure which one does best.

This is just absolute nonsense.

A/B testing only specifies that you need to test new designs against each other or against some sort of a control. It says absolutely zero about how you come up with those design ideas.

The best way to come up with great products is to go out and observe users and find problems that you can solve and then use good design processes to solve them. When you start doing testing, you’re not changing anything at all about that process. You’re just making sure that you get metrics on how those changes affect real user behavior.

Let’s imagine that you’re building an online site to buy pet food. You come up with a fabulous landing page idea that involves some sort of talking sock puppet. You decide to create this puppet character based on your intimate knowledge of your user base and your sincere belief that what they are missing in their lives is a talking sock puppet. It’s a reasonable assumption.

Instead of just launching your wholly re-imagined landing page, complete with talking sock puppet video, you create your landing page and show it to only half of your users, while the rest of your users are stuck with their sad, sock puppet-less version of the site. Then you look to see which group of users bought more pet food. At no point did the testing process have anything to do with the design process. 

It’s really that simple. Nothing about a/b testing determines what you’re going to test. A/B testing has literally nothing to do with the initial design and research process. 

Whatever you’re testing, you still need somebody who is good at creating the experiences you’re planning on testing against one another. A/B testing two crappy experiences does, in fact, lead to a final crappy experience. After all, if you’re looking at two options that both suck, a/b testing is only going to determine which one sucks less.

Design is still incredibly important. It just becomes possible to measure design’s impact with a/b testing.


There’s No Time to Usability Test

When I ask people whether they’ve done usability testing on prototypes of major changes to their products, I frequently get told that there simply wasn’t time. It often sounds something like, “Oh, we had this really tight deadline, and we couldn’t fit in a round of usability testing on a prototype because that would have added at least a week, and then we wouldn’t have been able to ship on time.” 

The fact is you don't have time NOT to usability test. As your development cycle gets farther along, major changes get more and more expensive to implement. If you're in an agile development environment, you can make updates based on user feedback quickly after a release, but in a more traditional environment, it can be a long time before you can correct a big mistake, and that spells slippage, higher costs, and angry development teams. Even in agile environments, it’s still faster to fix things before you write a lot of code than after you have pissed off customers who are wondering why you ruined an important feature that they were using. 

I know you have a deadline. I know it's probably slipped already. It's still a bad excuse for not getting customer feedback during the development process. You're just costing yourself time later. I’ve never known good usability testing to do anything other than save time in the long run on big projects.


Qualitative Research Doesn’t Work Because Users Don’t Know What They Want

This is possibly the most common argument against qualitative research, and it’s particularly frustrating, because part of the statement is quite true. Users aren’t particularly good at coming up with brilliant new ideas for what to build next. Fortunately, that doesn’t matter. 

Let’s make this perfectly clear. Qualitative research is NOT about asking people what they want. At no point do we say, “What should we build next?” and then relinquish control over our interfaces to our users. People who do this are NOT doing qualitative research. 

Qualitative research isn’t about asking people what they want and giving it to them. Qualitative research is about understanding the needs and behaviors of your users. It’s about really knowing what problem you’re solving and for whom.

Once you understand what your users are like and what they want to do with your product, it’s your job to come up with ways to make that happen. That’s the design part. That’s the part that’s your job.


It’s My Vision - Users Will Screw it Up

This can also be called the "But Steve Jobs doesn't listen to users..." excuse. 

The fact is, understanding what your users like and don't like about your product doesn't mean giving up on your vision. You don't need to make every single change suggested by your users. You don't need to sacrifice a coherent design to the whims of a user test. You don’t even need to keep a design just because it converts better in an a/b test. 

What you do need to do is understand exactly what is happening with your product and why. And you can only do that by gathering data. The data can help you make better decisions, but they don’t force you to do anything at all.


Design Isn’t About Metrics

This is the argument that infuriates me the most. I have literally heard people say things like, “Design can’t be measured, because design isn’t about the bottom line. It’s all about the customer experience.”

Nope.

Wouldn’t it be a better experience if everything on Amazon were free? Be honest! It totally would. 

Unfortunately, it would be a somewhat traumatic experience for the Amazon stockholders. You see, we don’t always optimize for the absolute best user experience. We make tradeoffs. We aim for a fabulous user experience that also delivers fabulous profits.

While it’s true that we don’t want to just turn our user experience design over to short term revenue metrics, we can vastly improve user experience by seeing which improvements and features are most beneficial for both users and the company.

Design is not art. If you think that there’s some ideal design that is completely divorced from the effect it’s having on your company’s bottom line, then you’re an artist, not a designer. Design has a purpose and a goal, and those things can be measured.


So, What’s the Right Answer?

If you’re all out of excuses, there is something that you can do to vastly improve your product. You can use quantitative and qualitative data together. 

Use quantitative metrics to understand exactly what your users are doing. What features do they use? How much do they spend? Does changing something big have a big impact on real user behavior?

Use qualitative research to understand why your users do what they do. What problems are they trying to solve? Why are they dropping out of a particular task flow when they do? Why do they leave and never come back?

Let’s look at an example of how you might do this effectively. First, imagine that you have a payment flow in your product. Now, imagine that 80% of your users are not getting through that payment flow once they’ve started. Of course, you wouldn’t know that at all if you weren’t looking at your metrics. You also wouldn’t know that the majority of people are dropping out in one particular place in the flow.
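Finding that one particular place is straightforward if you log which step each user reached. Here's a minimal sketch with invented step names and event data, just to show the shape of the analysis:

```python
from collections import Counter

# Hypothetical payment flow, in order.
FUNNEL = ["cart", "shipping", "payment_details", "confirm", "done"]

# Invented log: the furthest step each user reached.
events = [
    ("u1", "done"), ("u2", "payment_details"), ("u3", "payment_details"),
    ("u4", "shipping"), ("u5", "payment_details"), ("u6", "cart"),
    ("u7", "payment_details"), ("u8", "done"), ("u9", "payment_details"),
    ("u10", "payment_details"),
]

# Count how many users made it at least as far as each step by
# accumulating from the end of the funnel backwards.
furthest = Counter(step for _, step in events)
reached, running = {}, 0
for step in reversed(FUNNEL):
    running += furthest[step]
    reached[step] = running
for step in FUNNEL:
    print(f"{step:16s} {reached[step]:3d} users")
```

In this toy data, eight of ten users reach the payment details step but only two ever confirm, which is exactly the kind of cliff your metrics should be pointing you at.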

Next, imagine that you want to know why so many people are getting stuck at that one place. You could do a very simple observational test where you watch four or five real users going through the payment flow in order to see if they get stuck in the same place. When they do, you could discuss with them what stopped them there. Did they need more information? Was there a bug? Did they get confused?

Once you have a hypothesis about what’s not working for people, you can make a change to your payment flow that you think will fix the problem. Neither qualitative nor quantitative research tells you what this change is. They just alert you that there’s a problem and give you some ideas about why that problem is happening. 

After you’ve made your change, you can run an a/b test of the old version against the new version. This will let you know whether your change was effective or if the problem still exists. This creates a fantastic feedback loop of information so that you can confirm whether your design instincts are functioning correctly and you’re actually solving user problems. 

As you can hopefully see from the example, nobody is saying that you have to be a slave to your data. Nobody is saying that you have to turn your product vision or development process over to an algorithm or a focus group. Nobody is saying that you can only make small changes. All I’m saying is that using quantitative and qualitative research correctly gives you insight into what your users are doing and why they are doing it. And that will be good for your designs, your product, and your business.



Fucking Ship It Already: Just Not to Everyone At Once

There is a pretty common fear that people have. They’re concerned that if they ship something that isn’t ready, they’ll get hammered and lose all their customers. Startups who have spent many painstaking months acquiring a small group of loyal customers are hesitant to lose those customers by shipping something bad.

I get it. It’s scary. Sorry, cupcake. Do it anyway.

First, your early adopters tend to be much more forgiving of a few misfires. They’re used to it. They’re early adopters. Yours is likely not the first product they’ve adopted early. If you’re feeling uncomfortable, go to the Wayback Machine and look at some first versions of products you use every day. When your eyes stop bleeding, come back and finish this post. I’ll wait.

Still nervous? That’s ok. The lucky thing is that you don’t have to ship your ridiculous first draft of a feature to absolutely everybody at once. Let’s look at a few strategies you can use to reduce the risk.

The Interactive Mockup

A prototype is the lowest-risk way you can get your big change, new feature, or possible pivot in front of real users without ruining your existing product. And you’d be surprised at how often it helps you find easy-to-fix problems before you ever write a line of “real code.”

If you don’t want to build an entire interactive prototype, try showing mockups, sketches, or wireframes of what you’re considering. The trick is that you have to show it to your real, current users.

Get on a screen share with some users and let them poke around the prototype. Whatever you do, never tell them why you made the changes or what the feature is supposed to be for or how awesome it is. You want the experience to be as close as possible to what it would be if you just released the feature into the wild and let the users discover it for themselves.

If your product involves any sort of user generated content, taking the time to include some of the tester’s own content can be extremely helpful. For example, if it’s a marketplace where you can buy and sell handmade stuff, having the user’s own items can make a mockup seem more familiar and orient the user more quickly.

Of course, if there’s sensitive financial data or anything private, make sure to get the user’s permission BEFORE you include that info in their interactive prototype. Otherwise, it’s just creepy.

The Opt In

Another method that works well is the Opt In. While early adopters tend to be somewhat forgiving of changes or new features, people who opt in to those changes are even more so.

Allowing people to opt in to new features means that you have a whole group of people who are not only accepting of change but actively seeking it out. That’s great for getting very early feedback while avoiding the occasional freakout from the small percentage of people who just enjoy screaming, “Things were better before!”

Here’s a fun thing you can learn from your opt in group: If people who explicitly ask to see your new feature hate your new feature, your new feature probably sucks.

The Opt Out

Of course, you don’t only want to test your new features or changes with people who are excited about change. You also want to test them with people who hate change, since they’re the ones who are going to scream loudest.

Once you’re pretty sure that your feature doesn’t suck, you can share it with more people. Just make sure to let them go back to the old way, and then measure the number of people who actually do switch back.

Is it a very vocal 1% that is voting with their opt out? You’re probably ok. Is half of your user base switching back in disgust? You may not have nailed that whole “making it not suck” thing.

The n% Rollout

Even with an opt out, if you’ve got a big enough user base, you can still limit the percentage of users who see the change. In fact, you really should be split testing this thing 50/50, but if you want to start with just 10% to make sure that you don’t have any major surprises, that’s a totally reasonable thing to do.
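One property worth aiming for in an n% rollout is that users don't flicker in and out of the feature as you dial the percentage up. A common way to get that is stable hashing. Here's an illustrative sketch, not modeled on any particular feature-flag library:

```python
import zlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Stable, hash-based bucketing for an n% rollout: each user
    lands in a fixed bucket 0-99, and the rollout includes every
    bucket below the current percentage."""
    bucket = zlib.crc32(f"{feature}:{user_id}".encode()) % 100
    return bucket < percent

# Dialing the rollout from 10% up to 50% only ever adds users;
# nobody who already had the feature loses it.
enabled_at_10 = [u for u in (f"user-{i}" for i in range(1000))
                 if in_rollout(u, "new-checkout", 10)]
print(f"{len(enabled_at_10)} of 1000 users in the 10% rollout")
assert all(in_rollout(u, "new-checkout", 50) for u in enabled_at_10)
```

Hashing on the feature name as well as the user id means different rollouts get independent 10% groups, so the same unlucky users aren't your guinea pigs for everything.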

When you roll a new feature out to a small percentage of your users, just make sure that you know what sorts of things you’re looking for. This is a great strategy for seeing if your servers are going to keel over, for example.
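The Opt In, Opt Out, and n% Rollout described above can all hang off a single feature-flag check. Here's a minimal sketch (the function name, hashing scheme, and all values are illustrative assumptions, not anything from a particular product): opt-outs always win, opt-ins always see the feature, and everyone else is hashed into a stable bucket so the same user gets the same answer on every visit, and growing the rollout percentage never reshuffles who already sees what.

```python
import hashlib

def sees_new_feature(user_id: str, rollout_percent: int,
                     opted_in=frozenset(), opted_out=frozenset()) -> bool:
    """Decide whether a user sees the new feature.

    Opt-outs always win, opt-ins always see it, and everyone else is
    bucketed deterministically by hashing their ID into 100 buckets.
    """
    if user_id in opted_out:
        return False
    if user_id in opted_in:
        return True
    # Hash the user ID into one of 100 stable buckets (0-99).
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent
```

If you run more than one experiment at a time, hash a per-feature salt together with the user ID (e.g. `f"{feature_name}:{user_id}"`) so that the buckets for different features are independent.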

It’s also nice for seeing if that small, randomly selected cohort behaves any differently from the group that doesn’t have the new feature. Is that cohort more likely to make a purchase? Are they more likely to set fire to their computers and swear never to use your terrible product ever again? These are both good things to know.

Do remember, however, that people on the internet talk about things. Kind of a lot. If you have any way at all for your users to be in contact with one another, people will find out that their friends are seeing something different. This can work for or against you. Just figure out who’s whining the loudest about being jealous of the other group, and you’ll know whether to continue the rollout. What you want to hear is, “Why don’t I have New New New New New Thing, yet?” and not “Just be thankful that they haven’t forced the hideous abomination on you. Then you will have to set your computer on fire.”

The New User Rollout

Of course, if you’re absolutely terrified of your current user base (and you’d be surprised at how many startups seem to be), you can always release the change only to new users.

This is nice, because you get two completely fresh cohorts where the only difference is whether or not they’ve seen the change. It’s a great way to do A/B testing.

On the other hand, if it’s something that’s supposed to improve things for retained users or users with lots of data, it can take a really long time to get enough information from this. After all, you need those new cohorts to turn into retained users before seeing any actual results, and that can take months.

Also, whether or not new users love your changes doesn’t always predict whether your old users will complain. Your power users may have invested a lot of time and energy into setting up your product just the way they want it, and making major changes that are better for new folks doesn’t always make them very happy.

In the end, you need to make the decision whether you’ll have enough happy new users to offset the possibly angry old ones. But you’ll probably need to make that decision about a million more times in the life of your startup, so get used to it.

So, are you ready to fucking ship it, already? Yes. Yes, you are. Just don't ship it to everybody all at once.

Now, follow me on Twitter.

Fucking Ship It Already: Visual Design

I asked on Twitter whether anybody would buy a UX book called Fucking Ship It Already. Apparently some of you are interested. So, in the interest of following my own advice, I’m shipping the book iteratively in the form of this blog. You’re welcome.

I’ve talked in the past about lots of ways to do user research faster. Now, let’s talk about a way to make your design process faster. This is not a new idea, but it’s worth reiterating for those of you who are trying to make decisions like this on a day to day basis.

Today’s chapter will cover the fastest and most useful sort of visual design for your lean startup.

There is some tension out there in lean startup land. Many people favor eschewing visual design polish altogether, since it’s more important to figure out if a product is useful and usable before spending time “making it pretty.” Other people argue that a good user experience includes things like trust and delight, which can be enhanced by good visual design.

I’ve seen this work both ways. I was speaking with an entrepreneur the other day who told me a relevant story. Apparently, she had spent time on visual polish for a login screen. There were a few things that took a while to implement, but they made the screen look much better. Unfortunately, the next week she had to rip it all out to change the feature, and all that time pushing pixels was wasted.

On the other hand, I’ve had dozens of people talk about Path’s gorgeous and delightful interface recently. Would they have gotten that kind of buzz without spending time on the visual details? Most likely not.

So, what does this mean for you? Should you spend time on pixel perfect screens and delightful visual design? No. And yes.

Here’s what you should do: spend a little time developing clean, flexible, easy to implement visual design standards.

What That Means

It’s probably not worth your time to fret and sweat over every single pixel on every single new page, mostly because you should always plan on iterating. When you’re a startup, any new feature may be killed or transformed in a week’s time.

If you spend days getting everything lined up beautifully on a product detail page, that could all be blown to hell as soon as you add something like Related Products or Comments.

Many people think that the right answer is to have a grand vision of everything that will eventually go on the page, but things just change far too rapidly for this. Imagine that you’ve carefully designed a tabbed interface with just enough room for four tabs. Now imagine that you need to add a fifth tab. I hope you didn’t spend too many hours getting all that spacing exactly right.

What You Should Do Instead

How about spending time on the basics that won’t have to change every time you add a feature?

For example, you could establish standards for things like:

  • An attractive color palette
  • Font sizes and color standards for headers, sub-headers, and body text
  • Column sizes in grid layouts
  • A simple and appealing icon set
  • Standards for things like boxes, gradients, backgrounds, and separators
  • A flexible header and footer design
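One way standards like these stay reusable is to keep them in a single shared place that both designers and engineers pull from. Here's a hypothetical sketch of that idea as a small token module (every name and value below is made up for illustration): screens reference the tokens instead of reinventing colors and sizes, so killing a feature doesn't throw the standards away.

```python
# Hypothetical design tokens: one shared module engineers can pull
# from instead of inventing styles per screen. All values are made up.
TOKENS = {
    "color": {
        "primary": "#2B6CB0",
        "text": "#1A202C",
        "background": "#FFFFFF",
        "separator": "#E2E8F0",
    },
    "font_size_px": {"h1": 32, "h2": 24, "body": 16},
    "grid": {"columns": 12, "gutter_px": 16},
}

def css_rule(selector: str, **props) -> str:
    """Render one CSS rule from token values."""
    body = "; ".join(f"{k.replace('_', '-')}: {v}" for k, v in props.items())
    return f"{selector} {{ {body} }}"

# Every H1 on the site gets its size and color from the same tokens.
rule = css_rule("h1",
                color=TOKENS["color"]["text"],
                font_size=f"{TOKENS['font_size_px']['h1']}px")
```

The point isn't this particular structure; it's that the standards live in exactly one place, so a new screen can be implemented from a sketch without a visual design phase.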

Why You Should Do This

The great thing about having standards like these is that engineers can often combine them with sketches to implement decent looking screens without having to go through a visual design phase at all.

Also, since these things are reusable and flexible, there’s no wasted effort in creating them. Killing a feature doesn’t make knowing that your H1s should be a certain size and color any less valuable.

The best part is that you save time in a few important ways. First, as I mentioned, you don’t necessarily need to involve a visual designer every time you want to create a new screen. Second, this sort of approach tends to encourage a much simpler, cleaner, more flexible design, since items need to work in various combinations. And lastly, it tends to keep things more consistent across your product, which means that you’re less likely to have to go back later and do a complete redesign after things have gotten hideously out of whack.

It won’t solve all of your visual design woes, but it will make developing new features go faster, and you won’t be quite as sad when they fail miserably and you have to kill them.

Like the post? Want more tips on shipping already? Follow me on Twitter.

Stop Validating Your Product

I talk to a lot of very small companies that are trying to do Customer Development, and the conversations are often the same. The entrepreneur explains that the company is working on a fabulous product, and they want to figure out a) if anybody wants to buy the product and b) if they need to change anything about the product so that more people will buy it.

The entrepreneurs always ask questions like, “How will I know if I have talked to enough people?” and “How do I know if the people who like it are just early adopters?” and “How do I know if I should change the product in response to feedback or if I should just keep trying to find the right market?” The ones who have already been out in the field trying to conduct these interviews all have a sort of glazed, terrified look.

These are all really important questions. I’m going to give you a way to avoid having to ask most of them.

Stop trying to validate your product.

Now, I fully expect a bunch of people to stop reading here and totally miss the point of this post, but for those of you who stick it out, I promise this will make sense in a minute.

The trick is, it is far, far easier to conduct customer development before you have settled on a product or even an idea for a problem.

Why is that? Well, think about products as solutions to problems. Sometimes that “problem” is “I’m sort of bored while I’m waiting for the train” and the unexpected solution is flinging virtual birds at virtual pigs. But often, the problem is something more concrete, and it’s frequently shared by a large group of similar people.

So, instead of focusing on validating a solution, try one of the following techniques.

Validate a Problem

Let go of your preconceptions about how you are going to solve a problem for people and concentrate on first figuring out whether lots of people have a particular problem and what they’re currently doing to solve it.

For example, let’s say that you’ve posited that people have a really hard time finding and making appointments with trustworthy auto mechanics. The mistake you will probably make is to jump right into solving that problem and then going out into the world with some half-baked idea for Yelp meets OpenTable meets AAA and trying to find out whether it solves this problem that you’re not technically sure exists yet.

Instead of doing that, first validate the problem. Get on the phone with lots of different types of people and ask them how they found their mechanics. Talk to them about all of their mechanic-based issues. Find out what causes them the most pain.

Also, this is a good time to narrow down your market. Start with the market “people who have cars and will talk to you,” but quickly start noticing patterns. Do all the busy people have similar problems? What about people who live in cities vs. suburbs? How about people who are new to an area? Try people with special kinds of cars. I’ll bet that they all have very different problems, any of which you might want to solve.

Once you’ve spent time talking to people in various markets with various problems, you’ll come up with all sorts of ideas of how to solve those problems. The great thing is that then you can validate your product idea with people who you already know have a solvable problem.

This is a great way to do things if you have a particular problem yourself, and you want to find out if there are other people like you who have that same problem. By talking to lots of people with the same problem, you’re going to come up with a much better solution than you would if you just solved things for yourself.

Solve a Problem for a Particular Market

A slightly different approach is to pick your market first. Let’s say you have a special affinity for auto mechanics or busy moms or accountants at mid-sized companies.

The trick here is that you’re not going to change your market. You’re going to figure out some massive problem that is being experienced by a large portion of the market, and you’re going to solve it for them.

Your first step is going to be some ethnographic research. You need to really get into the heads of your target market and see what makes them similar and what’s driving them nuts. You’re not going into the research with an idea of the problem you want to solve for them. You’re going to let their needs drive your product decisions.

This is a great method if you happen to have some specific connection with a group or industry. Let’s say you collect porcelain owl figurines. You might desperately want to solve a problem for other porcelain owl aficionados, but you should be open to what problem you want to solve for them. For example, it might be how to get large numbers of high quality porcelain owls. Or it might be ways to contact therapists that deal with severe hoarding issues. Let the user research guide you!

The Easiest Kind of Customer Development

Hopefully you’re noticing a pattern here. The easiest kind of customer development is the kind that you do before you have a very solid product idea.

If you figure out your problems and your market before you come up with an Idea or a Solution or a Product, then when you do build something, you’ve already done a huge amount of the work in figuring out if anybody’s going to use it.

This is really about controlling which variables you’re testing. It’s hard to find a problem, a market, and the problems with a real product all at once.

However, once you’ve validated your market and your problem, you can create something that solves that specific problem for that particular market. The beauty of this is that if you build a product for a problem you know exists in a market you know needs it and still nobody uses it, you can be pretty certain that the problem is your product.

Like the post? Follow me on Twitter.

How Metrics Can Make You a Better Designer

I have another new article in Smashing Magazine's UX section: How Metrics Can Make You a Better Designer.

Here's a little sample:

Metrics can be a touchy subject in design. When I say things like, “Designers should embrace A/B testing” or “Metrics can improve design,” I often hear concerns.

Many designers tell me they feel that metrics displace creativity or create a paint-by-numbers scenario. They don’t want their training and intuition to be overruled by what a chart says a link color should be.

These are valid concerns, if your company thinks it can replace design with metrics. But if you use them correctly, metrics can vastly improve design and make you an even better designer.


Read the rest here >

Stop Worrying About the Cupholders

Every startup I’ve ever talked to has too few resources. Programmers, money, marketing...you name it, startups don’t have enough of it.

When you don’t have enough resources, prioritization becomes even more important. You don’t have the luxury to execute every single great idea that you have. You need to pick and choose, and the life of your company depends on choosing wisely.

Why is it that so many startups work so hard on the wrong stuff?

By “the wrong stuff” I mean, of course, stuff that doesn’t move a key metric - projects that don’t convert people into new users or increase revenue or drive retention. And it’s especially problematic for new startups, since they are often missing really important features that would drive all those key metrics.

It’s as if they had a car without any brakes, and they’re worried about building the perfect cupholder.

For some reason, when you’re in the middle of choosing features for your product, it can be really hard to distinguish between brakes and cupholders. How do you do it?

You need to start by asking (and answering) two simple questions:
  • What problem is this solving?
  • How important is this problem in relation to the other problems I have to solve?
To accurately answer these questions, it helps to be able to identify some things that frequently get worked on that just don’t have that big of a return. So, what does a cupholder project look like? It often looks like:

Visual Design

Visual design can be incredibly important, but nine times out of ten, it’s a cupholder. Obviously colors, fonts, and layout can affect things like conversion, but it’s typically an optimization of conversion rather than a conversion driver.

For example, the fact that you allow users to buy things on your website at all has a much bigger impact on revenue than the color of the buy button. Maybe that’s an extreme example, but I’ve seen too many companies spending time quibbling over the visual design of incredibly important features, which just ends up delaying the release of these features.

Go ahead. Make your site pretty. Some of that visual improvement may even contribute to key metrics. But every time you put off releasing a feature in order to make sure that you’ve got exactly the right gradient, ask yourself, “Am I redesigning a cupholder here, or am I turbocharging the engine?”

Retention Features

Retention is a super important metric. You should absolutely think about retaining your users - once you have users.

Far too many people start worrying about having great retention features long before they have any users to retain. Having 100% retention is a wonderful thing, but if your acquisition and activation metrics are too low, you could find yourself retaining one really happy user until you go out of business.

Before you spend a lot of time working on rewards for super users, ask yourself if you’re ready for that yet. Remember, great cupholder design can make people who already own the car incredibly happy, but you’ve got to get them to buy it first, and nobody ever bought a junker for the cupholders.

Animations

I am not anti-animation. In fact, sometimes a great animation or other similar detail in a design can make a feature great. Sometimes a well designed animation can reduce confusion and make a feature easy to use.

The problem is, you have to figure out if the animation you’re adding is going to make your feature significantly more usable or just a little cooler.

As a general rule, if you have to choose between usable and cool, choose usable first. I’m not saying you shouldn’t try to make your product cool. You absolutely should. But animations can take a disproportionate amount of time and resources to get right, and unless they’re adding something really significant to your interface, you may be better served leaving them until later.

“But wait,” a legion of designers is screaming, “we shouldn’t have to choose between usable and cool! Apple doesn’t choose between usable and cool! They just release perfect products!”

That’s nice. When you’ve got more money than most first world governments, you’ve got fewer resource constraints than startups typically do. Startups make the usable/cool trade off every day, and I’ve looked at enough metrics to know that a lot of cool but unusable products get used exactly once and then immediately abandoned because they’re too confusing.

Note: this may seem to contradict my point about attracting users first and then worrying about retention, but I’d like to point out that there’s a significant difference between solving long term retention problems and confusing new users so badly that they never come back.

Before you spend a lot of time making your animation work seamlessly in every browser, ask yourself if the return you’re getting is really worth the effort, or if you’re just building an animated cupholder.

Your Feature Here

I can’t name every single different project that might be a cupholder. These are just a couple of examples that I’ve seen repeatedly.

And, frankly, one product’s cupholder might be another product’s transmission. The only thing that matters is how much of an effect your proposed change might have on key metrics.

As a business, you should be solving the problems that have the biggest chance of ensuring your survival. Cupholder projects are distractions that take up too much of your time, and it’s up to you to make sure that every project you commit to is going to give you a decent return.

If you want to identify the cupholders, make sure you’re always asking yourself what problem a feature is solving and how important that problem is compared to all the other problems you could be solving. Cupholders solve the problem of where to put your drink. Brakes solve the problem of how to keep you from smashing into a wall.

Of course, if I got to choose, I’d rather you built me a car that drives itself. Then I can use both hands to hold my drink.

Like the post? Follow me on Twitter!

5 Fun Ways to Ruin Your Startup

So, you’re interested in ruining your startup. At least, that’s what it seems like based on a lot of decisions I see some companies making.

Let’s talk about some of those terrible decisions that really hurt startups.

Hire Big Thinkers

Here’s the thing about Big Thinkers or people who describe themselves as Big Picture People. They don’t execute. At least, they don’t execute in any way that is helpful to a startup.

Sure, there are a few people who can both lead and get their hands dirty with details. If you find one of those people, hire them immediately.

But more often, I see startups stall out because they’ve got somebody making decisions who doesn’t have to actually implement any of those decisions. They’re delegators. And the problem is, at very early stage startups, there just aren’t enough people to delegate TO.

If you’ve got a team of four or five people (or even ten or fifteen), every person should be spending the majority of his or her time actually building, making, designing, writing, testing, selling, or some other verb that isn’t “setting direction” or “planning” or “establishing policies.”

Want a successful startup? Hire Big Doers, not Big Thinkers.

Talk About Awesome Features All The Time

Yes, yes. You have this fantastic idea for the next big pivot that’s going to make you all rich. But you know what? That idea that you had 2 months ago that you still haven’t finished building was also fantastic. So is the one you’ll have 2 months from now. Also the one you’ll have 2 minutes from now.

Startup people are incredibly rich in ideas. Unfortunately, they tend to be broke in every other conceivable resource.

A great way to ruin your startup is to spend all of your time in meetings discussing in detail all the wonderful features you’re going to add in the future. Instead, capture the broad outlines of the idea quickly, put them in your backlog, and, when you’ve actually built something and need to move on to something new, see what ideas you’ve collected that would solve a real customer need. THEN design and build them.

Want a successful startup? Sure, you need to dedicate a little bit of time to thinking about the future, but spend a hell of a lot more time working on the present.

Wait To Ship Until It’s Perfect

It can be tough to release something into the wild before you think it’s perfect. But the thing is, it’s never going to be perfect, and the faster you get it out there, the faster you’re going to start learning which parts are the least perfect.

The longer you put off getting something in front of users, the more money you’re going to spend on something that might very well fail. Wouldn’t it be better to find that out early enough to turn it around and make it awesome?

Want a successful startup? Release small pieces of your product often, and get over worrying that it’s ugly or doesn’t work exactly the way you want it to. You’re just going to end up changing it all anyway.

Work 40 Hours a Week

This one may not be what you expect. It’s not some diatribe about how startup employees need to work 24/7 and not have outside lives and eat all their meals at their desks. If that works for you, great. Personally, I enjoy going outside.

But you do need to acknowledge that work at a startup doesn’t follow a strict 9-5 routine. Sometimes you need to check on things over the weekend or answer customer complaints late at night. Sometimes you need to make a final push to get something out the door quickly. Sometimes decisions need to be made outside of regular business hours, and there isn’t anybody else to make them.

Want a successful startup? You don’t need to live at the office, but you do need to be aware of what’s happening and be able to react when necessary. If you want to turn your phone off at 5pm on Fridays, you might consider working someplace where you’ve got more people to back you up. 

Make A Lot of PowerPoint Decks

Sure, investors love them, and you’ve always got to show something to your board, but I’ve seen this get really out of hand. If you’re spending an hour or two a week building slides to share information with five other people, you are wasting everyone’s time.

I get that there’s important information that you need to share with the team, but the problem with PowerPoint is that people start doing things like tweaking slide layouts and finding funny pictures to make their points. A whiteboard works just as well for writing a few bullets, and it’ll get you out of meetings faster, not to mention taking far less prep time.


Want a successful startup? Consider creating a simple dashboard of all the metrics that everybody in the company should be monitoring so that they can see the pertinent information at any time. That way, nobody’s waiting on you to build graphs and paste them into a deck once a week.

Like the post? Follow me on Twitter!

User Research You Should Be Doing (but probably aren't)

Startups know they should get out of the building and talk to their customers, but sometimes they’re a little too literal about it. There are tons of ways to get great information from your customers. The trick is knowing which technique answers the questions you have right now.

Sure, you’re doing usability tests and trying to have customer development interviews, but here are a few slightly unusual qualitative user research techniques you should be employing but probably aren’t.

Competitor Usability Testing

Have you ever considered running a user test on a competitor’s site?

This one’s fun because it feels a little sneaky. It also gets you a tremendous amount of great information, since chances are somebody is already making mistakes that you don’t have to make.

For example, when one of my clients, Crave, wanted to build a marketplace for buying and selling collectibles, we spent time watching people using other shopping and selling sites. We learned what people loved and hated about the products they were already using, so we could create a product that incorporated only the good bits.

The result was a buying and selling experience that users preferred to several big name shopping sites that will remain nameless.

Bonus tip: There’s always the temptation to borrow ideas from a big competitor with the excuse, “well, so and so is doing it, and they’re successful, so it must be right!” Guess what? Sometimes other companies are successful for a lot of reasons other than that thing you’re stealing from them. Make sure users like that part of a competitor's product before using it in your own.

Super Targeted Usability Testing

Typically, when conducting usability tests, we’ll run several sessions on an entire product with lots of scenarios and tasks. But often that generates a ton of data that you have to wade through and analyze.

Instead, try doing a few sets of three 10-15 minute tests on a very specific feature. That’s a lot of numbers in a row. How about an example?

When we were building Crave, we wanted to test a particular new feature that we thought users would love. When we actually launched it, we were a little concerned that it would be hard to find, so immediately after launch, we ran three quick, unmoderated user tests with one task.

As we suspected, all three users had some trouble finding the feature. We immediately created a contextual help bubble that guided interested users to the feature. Then we ran three more tests. None of the new users had any problems at all.

The entire process took about three hours, and users regularly tell us how much they like that feature.

Bonus Tip: Unmoderated user testing services like UserTesting.com and TryMyUI.com (and about a dozen others) make testing like this fast and cheap. You can test, build, deploy, and iterate several times in a single day. If you don’t do continuous deployment, you can use them to test high fidelity prototypes rather than your actual product.

Purely Observational Testing

This type of research is the exact opposite of the last one, because you’re not always testing a very specific part of your interface or a brand new feature.

Sometimes you’re trying to generate ideas for what you could do next that would give you the biggest ROI. For example, you might know that there’s a problem somewhere in your metrics, and you’re trying to understand what pain points are causing the drop off.

Whatever the reason, one of the most enlightening things you can do is a purely observational test. This means sitting down, shutting up, and watching people use your software in whatever way they want to do it.

You don’t give them tasks or scenarios. You just schedule them for a time they’d normally be using your product and arrange to observe them, remotely or in person.

Bonus Tip: Make sure to do this with new users, power users, and occasional users, as well as people who fit in all of your various persona groups. This will give you a fabulous overview of what people are really doing with your product.

Micro-Usability Tests

These are quite different, and some don’t fall neatly into the qualitative testing realm, but they can be quite useful.

Navigation Tests
When we were building Crave, we obviously wanted to make sure that things were incredibly easy to purchase, since that’s where we’d make money.

Since we had wireframes and visual mockups of the screens, we simply loaded them into UsabilityHub’s NavFlow and asked users to show us how they’d purchase something.

After a couple of tests, we knew exactly where in the purchase funnel we needed to improve things before we ever even had a real funnel!

Landing Page Tests
Another type of Micro-usability test can help you fine tune your messaging.

Ever run one of those landing page tests where you compare two different messages and see which one results in more conversion? Ever wonder why the winner was the winner?

This is one of those questions that’s not very cost effective to answer with standard usability testing, since what you really want is a couple of minutes of testing with a lot of different users rather than an hour with just a few users.

Luckily, you can post a screen on FiveSecondTest with a few simple questions like, “What does this product do?” and “Who is this product for?” and get extremely cheap feedback about people’s first impression of your landing page.

Now you’ll not only know which version won, but you’ll have a better idea of WHY it won. Different tests that we ran at Crave showed that some messages led people to believe that the site was about “buying and selling” while others led people to believe it was about “sharing” or “meeting people.” And, of course, some messages didn’t mean anything to anybody. We didn’t use those.

Bonus Tip: As with everything, I like running smaller versions of these micro-usability tests iteratively. With the FiveSecondTests, I might run each version of the page with 15 people and then update the messaging until I get a landing page where the vast majority of respondents understand exactly what my product is selling. 


Do These Techniques Replace Usability Testing?

Seriously? Is that even a question? Of course not!

You still need to do regular usability testing and conduct standard customer development interviews. You still need to get out and ask your users questions and have them perform predefined tasks and talk about their problems.

But the next time you want a particular question answered, think a little harder about the best way to answer it and the best tools to use.

Like the post? Follow me on Twitter!

Or check out my presentation on DIY User Research for Startups.

What Makes UX Lean - My Talk from SXSW

If you couldn’t make it to SXSW this year, there was a fantastic, all day lean startup track with talks from lots of lean startup experts. I was lucky enough to be asked to be on the Lean UX panel, along with the always awesome Janice Fraser, Ian McFarland, and Dan Martell.

I gave a short talk on what makes Lean UX Lean. Since I’m a blogger at heart, I wrote down pretty much everything I was going to say first, which means I can now publish a draft of the talk here! If you didn’t get to hear the panel, or if you did and want a quick refresher, please enjoy!


I’ve been a user experience designer for a lot of years, and I’ve worked with a lot of lean startups, which is part of the reason why I got a call last year from Manuel Rosso, the CEO of Food on the Table.

Now, Food on the Table is a very lean startup here in Austin. Because they’re a lean startup, they measure absolutely everything. And because they measure everything, Manuel knew immediately when the product developed an activation problem.

The whole project has been written up in a post for Eric’s Startup Lessons Learned blog, and I strongly recommend that you go read it if you haven’t already. It has a lot of tips about how to incorporate design into your startup that you’ll hopefully find helpful.

But today, I want to go a little deeper into what made that project a good example of Lean UX. Because, during that project, we did a lot of things that you might do in any sort of a UX project for any sort of a company.

For example, we did qualitative user research to understand why users were having a problem. We made sketches and built interactive prototypes, and we tested and iterated on them.

These are wonderful, helpful things to do, but they’re not unique to Lean User Experience Design. They’re part of User Centered Design, which I’m a huge fan of, and I’ve done all of those things in waterfall projects at giant companies that were anything but lean.

So, what are a few things that made this a Lean UX project and not just a regular old redesign?

Integrating Quantitative Research

I think the first hallmark of Lean UX is using quantitative metrics to both drive and validate design changes. What does that mean? Well, it means that the reason we were working on the first time user experience was because a specific metric, activation, wasn’t as high as the team wanted it to be.

Quantitative metrics didn’t tell us exactly why we had a problem - we needed to do our qualitative research to understand that - but it did tell us what our most immediate problem was, which helped us to understand where we should start improving our user experience.

In that way, the metric drove the product decision.

Quantitative metrics also meant that we knew, at the end of the project, we’d be validating our work with an a/b test against the original design. That quantitative validation of design really helps improve the design process over the long run, because we can see what sorts of changes have the biggest positive impact on our end user experience. That lets us improve the ROI on future design projects.
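When you validate a redesign with an a/b test like this, you need some way to tell whether the new version actually won or whether the difference is just noise. A common approach is a two-proportion z-test; here's a minimal sketch (the conversion counts are hypothetical, purely for illustration):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both versions convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: old design activated 200 of 2000 visitors,
# the redesign activated 260 of 2000.
z = two_proportion_z(200, 2000, 260, 2000)
print(round(z, 2))  # z ≈ 2.97; |z| > 1.96 means significant at ~95% confidence
```

If |z| stays below the threshold, keep the test running (or call it a wash) rather than declaring a winner, which is exactly the kind of bad-data decision the intro warns about.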

Overlapping Design and Development

Another thing that makes lean UX different is a serious commitment to designing in parallel with engineering. In waterfall design, of course, all the design and research are done up front and then thrown over the wall to engineering.

But Lean UX is different. In Lean UX, design and development are working at the same time. This can be tricky, of course, since a lot of people don’t understand how you can start to build something until you know exactly what you’re building.

Well, here’s how we did it at Food on the Table. Once we had done our very fast, initial user research to understand the reasons for our activation problem, we came up with a lot of different fixes we knew we wanted to make. We then split those fixes into three different types: Fix Now, Redesign, and Iterate later.

The fix now problems were low hanging fruit. Those were usability bugs (or plain old BUG bugs) that could be addressed immediately by the engineering team without waiting for design. That meant that the users (and the company!) started seeing improvements immediately, rather than having to wait for a single, massive rollout of all the changes. Why make small changes wait on big changes?

Also, when it came to the stuff we were going to redesign completely, we didn’t wait for everything to be perfect before sharing the new design with engineering. While we were working on things like button placement, page layout, copywriting, or visual design, they could get started with the major structural changes and backend improvements that would be needed.

By overlapping design and development in this way, the whole project moved much faster.

Planning for Iteration

Perhaps the biggest difference between Lean UX and User Centered Design is planning for iteration rather than trying to include all the new features at once.

As I mentioned before, we separated our changes into three buckets: Fix Now, Redesign, and Iterate Later. That last one’s important, since it means that you are getting a good version of your design in front of users quickly in order to learn from it and optimize it rather than trying to come up with a perfect version of the design before building anything. This should sound pretty familiar to all of you.

Here’s an example. Food on the Table lets people choose recipes and add them to a meal plan. We found, during testing, that users enjoyed choosing from a selection of recommended recipes.

But that led to questions. How many recipes would users want recommended? Would they like to see recipes that their friends were recommending? Did they want to see recipes they’d recommended to others? Would they like to see them in a carousel or a list or a coverflow?

Luckily, these were all questions that could be answered later with a/b tests, once we’d implemented the major structural change: letting people select recommended recipes at all.

Why was this important? Well, it meant we didn’t have to guess or debate or spend a lot of time prototyping and testing the exact right number and content of the recipes page before launch. We could concentrate on getting the user flow and the main interactions right and improve the rest later, once we’d validated that our larger design assumptions were correct.

So What Is It?

So, what makes Lean UX lean? Essentially, it’s about more than just applying good design principles to a Lean Startup, although that is obviously important. It’s also about applying Lean principles to the design process.

Lean UX incorporates quantitative & qualitative research, it overlaps the design and development phases of the project, and it starts small and plans for iteration. And, I think, that all those things together with a great, user centered design process allow Lean UX to deliver huge value to both users and startups very quickly.

Like the post? Follow me on Twitter!

Wish you'd heard the talk? Don't miss the next one! I'll be speaking and running a workshop at Web 2.0 Expo in San Francisco on March 29th.

When Is a Design Done?

I was talking with a designer about Lean UX. I was explaining that one of the hallmarks of Lean UX is to get a good, but not complete version of a product or feature designed and built and then iterate on it later. She thought this sounded like an interesting approach, but then she asked, “When do you know you’re done?”

Figuring out when you’re “done” is tricky for any design or redesign project, unless you’re a consulting agency, of course, in which case the answer is, “when the client runs out of money.” But I realized that, in Lean UX, figuring out when you’re done is actually incredibly easy.

You’re done when your metrics tell you you’re done.

Let me explain. No product is ever actually “done.” There is always something you could do to improve it. However, projects can certainly be done. The trick is that you have to choose your projects correctly.

What’s the correct way to choose a Lean UX project? Every Lean UX project should be chosen based on a metric.

This may piss off a lot of designers who want to make wonderful, exciting, super cool designs just for the sake of design or user happiness, but when it comes down to it, unless you’re independently wealthy, every design change you make should move a number that is important to your business.

Now, it is a lucky break for those of us who care deeply about our users that improving the overall user experience of the product frequently improves some number that the business people care about. But not every single thing you can do to make a user happy has the same ROI for the business. And not every improvement makes the right people happy at the right time.

That’s why the UX projects you choose should be based on metrics.

Let me give you an example. Whoever it is at your startup who is in charge of running the business should have a pretty good idea of what your various metrics have to be in order for you to all retire and buy yachts. For example, your Activation number may have to be 20% and your Retention may have to be 70%. (Please note, I made these numbers up. Your metrics may vary.)

They pick these numbers because they know that having, for example, a 99% retention rate and a 1% activation rate may lead to retaining 3 incredibly happy users forever, which is suboptimal from a business perspective.

So, if your activation number is at 10%, your business folks may come to you and say, “we need to turn more of our acquired traffic into regular users because we have identified this as the most important problem to solve at this moment.” You respond, “Great! How many more do you need?” They explain that you need to get activation from 10% to 20%.
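That “how many more do you need?” conversation boils down to simple arithmetic. A minimal sketch, using made-up traffic numbers in the same spirit as the percentages above (all figures hypothetical):

```python
acquired = 5000       # hypothetical: visitors acquired this month
activated = 500       # hypothetical: of those, how many became regular users
target_rate = 0.20    # the business goal from the example above

current_rate = activated / acquired                 # 0.10, i.e. 10%
needed = round(acquired * target_rate) - activated  # additional activations required

print(f"activation {current_rate:.0%}, need {needed} more activated users")
# prints "activation 10%, need 500 more activated users"
```

The point isn’t the math; it’s that the gap between the current number and the target number is what defines the project, and closing that gap is what defines “done.”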

You will notice that the metrics are not driving your design decisions. Nor are they driving your feature requirements or any other product changes. They are simply telling you what your biggest business problem currently is.

Now, it’s up to you as a designer or product owner to figure out what is keeping the activation number low and then come up with some ideas of how to fix it. You do this with what I like to call “research and design” or alternately “that thing you are paid to do.”

You may have dozens of wonderful ideas for how to fix the problem, and you may love and believe in all of them. You may not, however, actually execute every single one of them.

This is where the Lean part comes in.

Ideally, you will design and execute as many of the fixes as necessary in order to move the number to where you want it to be. Maybe you’re awesome (or awesomely lucky), and you move that activation number on the first try with a very small bug fix.

Does that mean you never get to implement the super sweet, but somewhat complicated, feature that you know will make users incredibly happy and improve activation even more? No! Unfortunately, you may not get to implement it just yet.

You see, once you got your activation number to where it needed to be, it stopped being the most important problem to solve. Now, maybe you need to work on getting retention higher or improving revenue or referral.

On the flip side, maybe you redesign the first time user flow and improve activation, but not by enough. That means you should continue working on it. Figure out why your changes didn’t have as big of an impact as you thought they would, and then try some new things.

You’re not “finished” until your metrics are where you want them to be.

Why is this important? Startups have a ridiculous number of things to do, and they typically have limited resources. It can be incredibly difficult to prioritize when to keep working on a feature or an area of the product, and when to move on.

By setting the goals ahead of time based on metrics that are critical to the business, it becomes much easier to know when you’re “done,” and when you should keep optimizing or redesigning.

Like the post? Follow me on Twitter!

Like Lean UX but hate reading? I'll be on the UX panel at the Lean Startup track at SXSW. You should come see it and then say hi to me afterward.

Two Stupid Reasons for Complicated Products

I frequently get asked by startups to simplify products. In general, companies are fantastic at coming up with great feature ideas, but they tend to find it harder to either kill underperforming features or properly integrate new ideas that got tacked on as an experiment or pivot.

Because I get called in when a company already has a product that new users find confusing, I see a lot of the same mistakes repeated. I also hear the same excuses for those mistakes.

Often, when I’m looking at a new product, I’ll find very similar features in different parts of the interface.

For example, one social product had three completely different ways of searching for friends the user might know. Now, I don’t mean that there were different criteria you could use - like email address or interests or user name. That would have been fine.

I mean there were three completely different places the user could go in the product to find three completely different features that were meant to help people search for their friends. There was huge functionality overlap among the three features, but they were all slightly different.

There’s a Reason For That


Of course, I mentioned that it seemed odd and confusing to have three different places to go to do essentially the same thing. And the product owner patiently explained to me that there was a reason for that.

The product owner then launched into a detailed description of how the first one had been built by the team as an experiment. A few months later, since the experiment went well, but was a little slow, the team migrated to a different technology for search, and built a second version of the feature alongside the first one to see if it could be faster.

Since the product owner hadn’t given them any requirements for the new version other than “go faster,” and the new technology made some of the old functionality tricky, the second version didn’t have exactly the same capabilities as the first. So, the team decided it wasn’t an adequate replacement for the old version. They released it anyway, since they’d already built it, and it was faster.

The third version of the feature had been built as part of a larger feature, but the rest of that larger feature had been killed, and only the search part remained. It had some neat new functionality that users liked, but the team felt it didn’t really replace either of the other two versions.

Moreover, the product owner explained to me that different types of users liked the various different versions of friend search, so he was hesitant to kill any of them, because whichever one he killed would upset somebody.

So, he was stuck supporting three different variations of the same feature, and new users were overwhelmed by choice for where they were supposed to go to find their friends.

And here’s the thing: users don’t actually want this sort of choice. This sort of choice is confusing. It makes navigating the product cumbersome, and it’s unpleasantly surprising to constantly find new, slightly different ways to do the same thing.

There’s a Big Difference


This isn’t the only way that the problem manifests. Sometimes when I point out similar features in different places, the product owner reassures me that “there is a big difference” between the features.

This happened with a client when I pointed out that there were two different types of quizzes in the product, but they had different names and placements. I asked why they couldn’t be combined into one section, so that the clutter on the page would be reduced and the user would always know where to go to take quizzes.

He assured me that there was a big difference between the two types of quizzes. When I asked him to elaborate, he went into detail about how the types of quizzes were different on the back end, and how one was often used as a business development tool while the other was user generated.

In other words, both of the “big differences” were things that were only different to the company. They were the sorts of differences that a user would never notice. All a user would notice would be that the quizzes were sometimes in one place and sometimes in another, which would be confusing and frustrating if she was looking for that feature.

How to Avoid This


Take a hard look at your product. Are there any similar features? You’ll be surprised at how often the answer is yes. If there are, ask yourself what your reasons were for building the different variations.

If your reasons for building different versions of the same feature are based entirely on technology or business development (in other words, things that only matter to YOU and not your users), you’ve got a problem.

The most customer friendly way of dealing with it is to try to come up with a superset of the best functionality from all the versions and improve one of the versions until it satisfies as many user stories as possible.

But even if you don’t have the time or resources to consolidate them into one great version of the product, just killing the duplicated features will ultimately simplify your product and make it more useful and understandable for all of your customers.

Like the post? Follow me on twitter!

Lean UX - A Case Study

For those very, very few (ok, none) of you who read my blog but don't read Eric Ries's blog, Startup Lessons Learned, I have some exciting news for you. But first, why the hell aren't you reading Eric's blog? You really should. It's great.

I've written a guest post that now appears on the Startup Lessons Learned blog. It's a case study of a UX project I did with the lean startup Food on the Table.

If you're wondering whether design works well with lean startups, I answer that question in the post. Spoiler alert: The answer is 'yes'.

You Need to Redesign Your Product

Iterate, iterate, iterate. If there is something that I hear from lean startups more than “pivot” and “fail” it’s that iterating on your product is the way to improve it. And I absolutely believe that iteration is critical to making great features and products.

The problem is, sometimes just improving what you’ve got isn’t enough. Every so often, from a UX perspective anyway, you just need to throw everything out and start from scratch.

I’m not necessarily talking about reskinning the site with a new visual design, although that sometimes has to be done. Sometimes you also need to completely reorganize and refocus everything about your product’s user experience.

Of course, this can be incredibly expensive and time consuming, so it’s not something that you want to do unless it’s really necessary.

Here are a few signs that you may need to do a complete product redesign:

There’s No Room for Your New Feature

A big part of lean startups is coming up with lots of new feature ideas and throwing them in front of users to see what sticks. Another big part is killing the features that don’t make the cut, but that can be hard to do.

There comes a time in the life of every lean startup where they have a new feature that doesn’t quite fit within the navigation and structure of the rest of the product.

Often this new feature is not quite a pivot, but it may be the first step in that direction. Maybe you’re adding a social component to an ecommerce application or you’re adding games or a marketplace to a social site.

Or maybe you’ve just run out of room on your front page, and you simply can’t add another widget.

Whatever the reason, when you have a new feature that you can’t logically fit anywhere into your product, it’s probably time to do an overhaul, or at least a reorganization. It’s probably also a great time to go through and kill some of those underperforming features in order to make room for the new stuff.

You’ve Added a “Miscellaneous” Section to Your Navigation

Ever been tempted to add a section to your product navigation called “Misc.” or “Other” or “Stuff”? Yeah, we’ve all wanted to do it. DON’T.


Having a catchall in your product or site’s global navigation is a really good hint that something is terribly wrong with your information architecture. It may also mean that you have one or more features that don’t fit in well with the rest of your product.

This may mean that it’s time to rethink what your product offers to people or start killing (or spinning off) features that don’t fit. By focusing your offering, you will find that those random catchall categories magically disappear.

Your Users Aren’t Finding the Features You Already Have

You are measuring the adoption and user engagement for all of your current features, right? So you should know if there are features that are underperforming.

The important thing to remember is that sometimes features don’t do well not because they’re bad features but because nobody can find them in the confusing mess that your product has become over time.

You can figure out which it is by talking and listening to current users. If they’re constantly asking for features that you already have or if they get very excited when you describe existing features, you know that it’s time to do a product redesign so that they can find the things that you’ve already built for them.

Your Visual Design is Attracting the Wrong Audience

Sometimes a visual design change can make an enormous difference in attracting the right audience. If your product is meant to appeal to working moms, but in reality appeals to 15 year old gamers, you may just need to do a complete visual redesign.

The best way to test this is to get your product in front of some people in your target demographic who aren’t currently users. Just see how they react to it. Do they recoil in disgust? Do they appear uninterested? Do they say things like, “Oh, that’s not for me”? Then it’s probably time to reskin your product.

New Users Don’t Understand What Your Product Does

Another common problem with unfocused products is that new users just don’t get the value proposition. It may be tempting to try to address this with clever marketing messages or help pages or video tutorials, but this rarely fixes the real problem.

The real problem is frequently that there are simply too many things going on for new users to immediately grasp what the product can do for them. They start to explore, but they quickly become lost in a sea of unrelated features and inconsistent navigation.

By redesigning the UX, you can focus the navigation and features so that a new user can immediately understand what the product does for her.

Your Product Has Become Wildly Inconsistent

When you design and ship each feature separately, both the visual design and interaction can become really inconsistent. Button placements migrate, “Submit” becomes “Go”, and even navigation conventions can change.

Do this enough times, and it can feel like you’re looking at dozens of different products, which is extremely disorienting for your users.

This doesn’t always require a full redesign, but it does require a sweep of your entire product in order to make visual and interaction design details consistent and coherent.

When It’s Not Time

Because complete product overhauls can take a lot of time, it’s not something you want to undertake lightly. It’s not a panacea for a product that’s just not working.

I’ve frequently seen people who were simply out of ideas for engaging their users say that they needed to “redesign the whole thing” out of frustration or lack of vision.

But if your product has grown organically into a big, sprawling, inconsistent mess without a clear purpose or focus, it’s time to bite the bullet and redesign. Your users will thank you.

Like the post? Please, follow me on Twitter!


When To Get Help With User Research

I don't spend a lot of time on this blog telling you why you should hire me to talk to your customers. In fact, the vast majority of the posts are meant to make it more possible for you to talk to your customers without hiring somebody like me. It's not that I don't like working. It's just that I think that anybody who is responsible for making decisions about products should know how to learn from users on their own. It results in better products for all of us.

Product owners need to be involved in customer research for a lot of reasons. Reasons like:
  • You're more likely to believe the results if you participated in the research.
  • You're more likely to understand the relative importance of customer problems if you observed the problems happening.
  • You will come up with more comprehensive solutions to problems when you understand the context in which they're happening.
  • It's far too easy to ignore a report written up by a usability consultant, and it's incredibly easy to forget to watch the testing videos.
  • If you do it yourself, all of the lessons you've learned will stay within the company, long after a consultant has gone on to other projects.
That said, I'm about to tell you why you may need to hire somebody like me. For a little while at least.

When I talk about customer research or customer development or learning from customers, I really mean quite a lot of different techniques. Sure, there are general best practices around talking to customers, tips for improving your research skills, and important things you should avoid, but there are also things like picking the right testing method or tool that you almost certainly have no experience with. You need to know which of them is the most important thing for you to do right now.

Do you know when it would be helpful to do a card sort? A journal study? A contextual inquiry? Do you know when it's fine to do a remote usability study vs when you should really run one in person? How about when your product will benefit from using an online tool like usertesting.com or fivesecondtest, and when something like that isn't useful? Do you know what sort of testing to do in order to find out why specific metrics are lower than you'd like? Do you know when you should start your visual design and when you need to concentrate on usability? Do you know how many people to talk to in order to answer a specific question? Do you know at what points in the development cycle talking to users is critical and when it's a waste of time? Do you know how to take several hours of free form user conversations and turn them into a small number of features or bug fixes that can be communicated to your engineering team?

If you answered, "of course I know that" to all of those questions, then move along. You almost certainly have no use for somebody like me to come in and help you out. If you answered, "I'm going to learn the answer to all of those questions," then I wish you good luck on your journey of discovery. I'll warn you though. There are more questions just like those.

If, on the other hand, you said, "I don't know the answer to a lot of those questions, but I wish somebody could help me understand the small subset of them that matter to me, as a product owner, so that I could get on with the business of building a great product," then you might want to give me (or somebody like me) a call.

Because it's true that there is a huge amount to know about talking to your users. But it's also true that, at any given stage in your product development, you probably only need to be concerned with only a little bit of it. And, it's also true that figuring out which bit of it you need to know can be really hard to do without help. That's where people like me come in handy. We can help you figure out what to do next, and then we can help you learn how to do what you need to do next.

But be careful. If you're a lean startup, you probably don't want to pay us to actually do what you need to do next. For all the reasons I mentioned above, that's still your job.

Interested in this sort of service? Learn more about Users Know here.

Want to read more posts on how to do this stuff yourself? Follow me on Twitter!

Nobody is Thinking About Your Product

When you're working at a startup it can be all-consuming. You can forget everything else in your life pretty easily when you're neck deep in valuations and minimum viable products and customer acquisition and a million other things that need your attention. You tend to think about your product every waking minute.

That's why it can be such a shock to realize that nobody else is thinking about your product. Well, ok, unless you're Apple, but there's clearly some kind of weird mind control thing going on there. In general, when you have a new product, you're incredibly lucky if you're getting more than a few minutes of attention from anybody but your most passionate early adopters.

Why is it important to realize this? It's important, because it has a really big impact on how you design your product and connect with your users.

Make Everything More Discoverable

You know exactly where in the user interface to go to do every task that can be completed with your product (I hope!). Other people, especially new users, don't even know that most of your features exist. This means that it's just as important to design for discoverability as it is to design for usability. But how are they different?

Let's do a quick thought exercise. Imagine somebody hands you a featureless metal box. You might look at it for a minute or two. If it's particularly attractive, you might admire it, but you're probably not going to spend a lot of time with it. Now imagine that the box has $10,000 inside of it. You will probably spend a lot more time figuring out how to get it open, yes?

Your product is like that box that is hiding money. If people don't discover very quickly that it provides something valuable to them, they're not going to spend much time figuring out how to use it. You need to help people understand immediately that your product has features they really, really want. That's discoverability.

You also need to make it pretty easy to actually learn how to use those features, once they've decided to dig into the product a bit. That's usability. For bonus points, you can make the whole process interesting and engaging so that people actually enjoy discovering features and using your product. That's fun.

Key Take Away: Users are not going to spend any time learning to use your product if they don't immediately understand what's in it for them. Make it easy for them to figure out what features exist and why they're useful.

Remind People You Exist

This is going to sound obvious, but people have lots of things to do every day. What with their own jobs and families and checking Facebook and standing in line at the Apple store for whatever is coming out next, their schedules are pretty full. If you're going to fit into that schedule, you can't just sit around and wait for them to come back to you.

Remember, they're not thinking about you. Ever. That's why you have to contact them regularly via email or Facebook or Twitter or post cards or sky writing or whatever you think will get the attention of the people you're targeting. Think that you can put off writing your welcome emails or your mass notification system? The longer you put it off, the more early users you're going to lose because they didn't think to come back to your product the next day. 

Key Take Away: Assume every single one of your users forgot about the existence of your product five seconds after closing it. If you want them to come back, you need to remind them.

Design for Distraction

You know what you never see in a traditional user test? The seventeen million other things that your users are doing while using your product.

See, even when people are thinking about your product, they're not thinking about only your product. They're thinking about the phone that's ringing and when they have to pick up the kids from soccer and what they're going to make for dinner and the fact that their boss wants a TPS report finished before 5.

Things like shopping carts that time out or registration forms that need to be filled out from scratch if you make a small error or login redirects that don't send you back where you wanted to go are all poisonous to the distracted user. They dramatically increase the number of people who are going to simply give up on your product halfway through.

That's why it's important to make sure that you're actually watching people use your product in their natural environments whenever possible so that you can understand the kinds of interruptions that you need to plan for. It's also critical to make sure that, if somebody gets called away from your product in the middle of a task (which they will), they can easily come back and finish that task without having to start over from scratch.

Key Takeaway: People have other things going on in their lives, so make sure that your product allows for interruption, inattention, and distraction.

Make It Addictive

Just as heroin addicts always think about heroin, WoW addicts always think about WoW, and Apple fans always think about new Apple products, people will think about your product more if you can make it addictive. The problem is, making things addictive is harder than just throwing in a few simple game mechanics or copying whatever WoW or Apple does.

There are a lot of good blog posts on how to make your product stickier, but some of the common themes include:
  • creating social bonds with other users (Joe wants to be your friend!)
  • having time sensitive tasks that require users to return at certain intervals (Log in in the next 15 minutes to get this great deal!)
  • providing incentives and achievements for regular use (You unlocked the Foozle Badge!)
  • providing competition with other users (You are now the mayor of the Scranton, PA Taco Bell!)
  • creating a sense of disaster if they don't return (Your fake crops will die!)
  • offering quality content that the user can't get anywhere else and that updates daily (Learn about the five things in your pantry that could KILL YOU!)
The important thing to do here is to pick styles of addiction that fit with your product. Your tax preparation software probably doesn't need a leaderboard, but a good, weekly blog on ways to save tax-free money for retirement might be a draw. 

Key Takeaway: Provide reasons for your users to want to come back daily by using things like game mechanics, social pressure, or new content, but make sure that the features fit comfortably with your product.

Cultivate the People Who DO Think about Your Product

If you're lucky and you work really hard to get people's attention, everything I've said isn't going to apply to a tiny group of early adopters. These are the people who will think about your product all the time and want to be heavily involved in its growth and improvement. Use those people! Talk to them. Learn from them. Get them to evangelize your product to other people just like them. Give them jobs, or let them monitor your forums and answer questions for other users. They will love that they're contributing to something they care about, and your product will improve as a result.

Also, if you ignore them, they'll most likely stop thinking about your product, and go think about somebody else's product. 

Key Takeaway: Understanding the few people who are deeply attached to your product can help you understand and improve the features that may make it appealing to other, less dedicated users.

But Most Importantly...

You need to internalize the fact that, even once they've visited your site or downloaded your app or become a registered user of your product, the vast majority of people simply aren't thinking about you or your product. At all. This means that a big part of your job is not so much about countering loathing or dislike as it is about countering total indifference.

So, get out there and make them remember you exist.

Like the post? There are more like it. Follow me on Twitter!

5 Big Mistakes People Make When Analyzing User Data

I was trying to write a blog post the other day about gathering various types of user feedback, when I realized that something important was missing. It doesn’t do any good for me to go on and on about all the ways you can gather critical data if people don’t know how to analyze that data once they have it.

I would have thought that a lot of this stuff was obvious, but, judging from my experience working with many different companies, it’s not. All of the examples here are real mistakes I’ve seen made by smart, reasonable, employed people. A few identifying characteristics have been changed to protect the innocent, but in general they were product owners, managers, or director level folks.

This post only covers mistakes made in analyzing quantitative data. At some point in the future, I’ll put together a similar list of mistakes people make when analyzing their qualitative data.

For the purposes of this post, the quantitative data to which I’m referring is typically generated by the following types of activities:
  • Multivariate or A/B testing
  • Site analytics
  • Business metrics reports (sales, revenue, registration, etc.)
  • Large scale surveys

Ignoring Statistical Significance

I see this one all the time. It generally involves somebody saying something like, “We tested two different landing pages against each other. Out of six hundred views, one of them had three conversions and one had six. That means the second one is TWICE AS GOOD! We should switch to it immediately!”

Ok, I may be exaggerating a bit on the actual numbers, but too many people I’ve worked with simply ignored the statistical significance of their data. They didn’t realize that even a seemingly large difference can be statistically insignificant if the sample size is too small.

The problem here is that statistically insignificant metrics can completely reverse themselves, so it’s important not to make changes based on results until you are reasonably certain that those results are predictable and repeatable.

The Fix: I was going to go into a long description of statistical significance and how to calculate it, but then I realized that, if you don’t know what it is, you shouldn’t be trying to make decisions based on quantitative data. There are online calculators that will help you figure out if any particular test result is statistically significant, but make sure that whoever is looking at your data understands basic statistical concepts before accepting their interpretation of data.
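Many of those online calculators run something like a two-proportion z-test under the hood. As a minimal illustration, here's a Python sketch applied to the (deliberately exaggerated) landing-page numbers from above — 3 vs. 6 conversions out of 600 views each:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test: returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value via the normal CDF
    return z, p_value

# The landing-page example: 3 vs. 6 conversions out of 600 views each
z, p = two_proportion_z_test(3, 600, 6, 600)
print(f"z = {z:.2f}, p = {p:.2f}")  # z ≈ 1.00, p ≈ 0.32
```

Despite the "twice as good" headline, a p-value around 0.32 means a difference this size would show up roughly a third of the time by pure chance — nowhere near the conventional 0.05 threshold.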

Also, a word of warning: testing several branches of changes can take a LOT larger sample size than a simple A/B test. If you're running an A/B/C/D/E test, make sure you understand the mathematical implications.

Short Term vs. Long Term Effects

Again, this seems so obvious that I feel weird stating it, but I’ve seen people get so excited over short term changes that they totally ignore the effects of their changes in a week or a month or a year. The best, but not only, example of this is when people try to judge the effect of certain types of sales promotions on revenue.

For example, I've often heard something along these lines, “When we ran the 50% off sale, our revenue SKYROCKETED!” Sure it did. What happened to your revenue after the sale ended? My guess is that it plummeted, since people had already stocked up on your product at 50% off.

The Fix: Does this mean you should never run a short term promotion of any sort? Of course not. What it does mean is that, when you are looking at the results of any sort of experiment or change, you should look at how it affects your metrics over time.
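One cheap way to keep yourself honest is to compute the promotion's net effect over a longer window, not just the sale week. A sketch with invented weekly revenue numbers:

```python
# Made-up weekly revenue (in thousands) around a 50%-off sale in week 3
weekly_revenue = [100, 102, 210, 40, 55, 98]

baseline = sum(weekly_revenue[:2]) / 2    # pre-sale average
sale_week = weekly_revenue[2]
post_sale_weeks = weekly_revenue[3:5]

lift_during = sale_week - baseline        # looks fantastic in isolation
dip_after = sum(baseline - r for r in post_sale_weeks)  # customers stocked up
net_effect = lift_during - dip_after

print(f"lift during sale: {lift_during}, dip after: {dip_after}, net: {net_effect}")
```

With these invented numbers, the sale's net effect is roughly zero once the post-sale dip is counted — exactly the kind of thing a one-week snapshot hides.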

Forgetting the Goal of the Metrics

Sometimes people get so focused on the metrics that they forget the metrics are just shorthand for real world business goals. They can end up trying so hard to move a particular metric that they sacrifice the actual goal.

Here’s another real-life example: One client decided that, since revenue was directly tied to people returning to their site after an initial visit, they were going to “encourage” people to come back for a second look. This was fine as far as it went, but after various tests they found that the most successful way to get people to return was to give them a gift every time they did.

The unsurprising result was that the people who just came back for the gift didn’t end up actually converting to paying customers. The company moved the “returning” metric without actually affecting the “revenue” metric, which had been the real goal in the first place. Additionally, they now had the added cost of supporting more non-paying users on the site, so it ended up costing them money.

The Fix: Don’t forget the actual business goals behind your metrics, and don’t get stuck on what Eric Ries calls Vanity Metrics. Remember to consider the secondary effects of your metrics. Increasing your traffic comes with certain costs, so make sure that you are getting something other than more traffic out of your traffic increase!

Combining Data from Multiple Tests

Sometimes you want to test different changes independently of one another, and that's often a good thing, since it can help you determine which change actually had an effect on a particular metric. However, this can be dangerous if used carelessly.

Consider this somewhat ridiculous thought experiment. Imagine you have a landing page that is gray with a light gray call to action button. Let's say you run two separate experiments. In one, you change the background color of the page to red so that you have a light gray button on a red background. In another test, you change the call to action to red so that you have a red button on a gray background. Let's say that both of these convert better than the original page. Since you've tested both of your elements separately, and they're both better, you decide to implement both changes, leaving you with...a red call to action button on a red page. This will almost certainly not go well.

The Fix: Make sure that, when you're combining the results from multiple tests, you still go back and test the final outcome against some control. In many cases, the whole is not the sum of its parts, and you can end up with an unholy mess if you don't use some common sense in interpreting data from various tests.
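In other words, a combined variant is its own variant and deserves its own test arm. A sketch of the red-button thought experiment with hypothetical conversion counts:

```python
# Hypothetical (conversions, views) per variant. The point: test the
# combination directly instead of assuming two winning changes will add up.
variants = {
    "control: gray page, gray button": (30, 10000),
    "red page, gray button":           (42, 10000),
    "gray page, red button":           (45, 10000),
    "red page, red button":            (18, 10000),  # red-on-red tanks
}

for name, (conversions, views) in variants.items():
    print(f"{name}: {conversions / views:.2%} conversion")
```

In this made-up data, each change wins on its own, yet the combination converts worse than the original control — which you only discover by testing it.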

Not Understanding the Significance of Changes

This one just makes me sad. I’ve been in lots of meetings with product owners who described changes in the data for which they were responsible. Notice I said “described” and not “explained.” Product owners would tell me, “revenue increased” or “retention went from 2 months to 1.5 months” or something along those lines. Obviously, my response was, “That’s interesting. Why did it happen?”

You’d be shocked at how many product owners not only didn’t know why their data was changing, but they didn’t have a plan for figuring it out. The problem is, they were generating tons of charts showing increases and decreases, but they never really understood why the changes were happening, so they couldn’t extrapolate from the experience to affect their metrics in a predictable way.

Even worse, sometimes they would make up hypotheses about why the metrics changed but not actually test them. For example, one product owner did a “Spend more than $10 and get a free gift” promo over a weekend. The weekend’s sales were slightly higher than the previous weekend’s sales, so she attributed that increase to the promotion. Unfortunately, a cursory look at the data showed that the percentage of people spending over $10 was no larger than it had been in previous weeks.

On the other hand, there had been far more people on the site than in previous weeks due to seasonality and an unrelated increase in traffic. Based on the numbers, it was extremely unlikely that it was the promotion that increased revenue, but she didn’t understand how to measure whether her changes actually made any difference.

The Fix: Say it with me: "Correlation does not equal causation!" Whenever possible, test changes against a control so that you can accurately judge what effect they’re having on specific metrics. If that’s not possible, make sure that you understand ahead of time which effects you are LIKELY to see from a particular change, and then judge whether they happened. For example, a successful “spend more than $10” promo should most likely increase the percentage of orders over $10.
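For the promo example above, that check is cheap: compare the share of orders over $10 before and during the promo. A sketch with invented weekend numbers:

```python
# Hypothetical weekend numbers: (orders over $10, total orders)
weekends = {
    "two weekends ago": (120, 400),
    "last weekend":     (130, 420),
    "promo weekend":    (190, 610),  # more total orders, but same mix
}

shares = {label: over_10 / total for label, (over_10, total) in weekends.items()}
for label, share in shares.items():
    print(f"{label}: {share:.1%} of orders were over $10")
```

Here the promo weekend's share (about 31%) is indistinguishable from prior weekends, so the revenue bump came from extra traffic, not from the promotion.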

Also, be aware of other changes within the company so that you can determine whether it was YOUR change that affected your metrics. Anything from a school holiday to an increased ad spend might affect your numbers, so you need to know what to expect.

I want your feedback!

Have you had problems interpreting your quantitative data, or do you have stories about people in your company who have? Please, share them in the comments section!

Also, if your company is currently working on getting feedback from users, I’d love to hear more about what you are doing and what you’d like to be doing better. Please take this short survey!