Whose Job is User Research? An Interview with Amy Santee

Almost every time I give a talk, I get asked how people can convince their companies to adopt more user research or pay attention to the research that’s being done. I’ve always given various answers, ranging from “quit and go to a better company” to “try to make research more inclusive,” but I realized that I was giving advice based on too small of a sample set.

So, earlier this year, I became obsessed with finding out who owns user research in functional teams. I’ve asked dozens of people on all sorts of different teams what makes research work for them, and I’m sharing their responses here. If you’re someone who is very familiar with how user research works on your team and would like to participate, please contact me.

Recently, I asked Amy Santee, an independent UX research consultant, some questions about who should own research. She trained as an anthropologist, and she’s been doing user research for several years for companies ranging from healthcare to insurance to hardware and mobile tech companies, so she’s seen what works and doesn’t work in a lot of different models.

Who Owns It

“For internal teams,” Amy explains, “researchers and designers who do research should ‘own’ the research process in the sense of being the go-to people responsible for driving its fundamental activities: planning and budgets, coordination, research design, recruiting participants, conducting sessions, and disseminating the results. It’s not so much ‘owning’ it but being the point person for getting things done.”

One of the themes I’ve seen so far in these interviews is the strong difference between the responsibility for conducting research and the ownership of the results. Regardless of who is responsible for getting research done, research results and participation should be owned by the whole team. The researcher or designer might be driving the process, but everybody else on the team should be along for the ride.

It’s not just direct members of the team who need to participate, either. “Stakeholders in design, product, engineering, business, marketing and other areas should share in this ownership to the greatest degree possible,” Amy says. “That’s why they’re called stakeholders – they have (or should have) a stake in the game when it comes to incorporating research into their processes and decision-making.”

To be clear, in Amy’s model, the stakeholders aren’t just interested in the outcomes. They should be active participants in the research. They can offer important perspectives from their respective business areas, and they should contribute to the research process itself by observing sessions, brainstorming ideas and solutions, and helping to synthesize the results.

The benefits of this sort of participatory research, Amy says, are clear. “The more this is done, the more value people will see in being involved, and the less the researcher needs to ‘own’ research by him or herself. Stakeholders might even learn how to do research so they don’t always have to rely on a single person or team to do it.”

Researching without Researchers

Of course, not all teams are lucky enough to have dedicated researchers or designers who are trained in user research methods. Amy has some suggestions for those who decide to do research without any experience on the team.

“My preference is for internal researchers because they have an understanding of the company and product from the inside,” she explains. “They are able to really get a sense for how research fits into the design process and business strategy. They can build relationships with other business areas and roles in order to figure out how research can bring the most value, when to do it, who to get involved, how to communicate most effectively, and possibly make more effective recommendations.”

That said, there are reasons to bring in experts from outside. “Training from an expert who has the right background and experience can help a team get started with the fundamentals and avoid the inappropriate execution of a research project (e.g., wrong methodology, misinformed research questions, etc.),” she says.

Sometimes, combining an external expert with internal trainees can even yield certain unexpected benefits. For example, outside consultants might have a fresher look at the questions the team should be asking. They might be able to bring up things that team members wouldn’t feel comfortable saying because they don’t have a bias or agenda. And, of course, they’ll typically have experience working with many different teams, so they’ll be able to spot patterns that less experienced researchers might not see.

Whether you’re working with internal experts or external coaches, the important thing is that the people on your team are engaged in the process. Making research a collaborative effort means more people in your company will learn from users, and that’s good for your product.

Learn More

For more information about Amy, check out her website, find her on Twitter, or connect with her on LinkedIn.


Whose Job is User Research? An Interview with Steve Portigal

This post appears in full on the Rosenfeld Media blog. 

Those of us who conduct user research as part of our jobs have made pretty big gains in recent years. I watched my first usability test in 1995, then spent a good portion of the 2000s trying to convince people that talking to users was an important part of designing and building great products. These days, when I talk to companies, the questions are less about why they should do user research and more about how they can do it better. Believe me, this feels like enormous progress.

Unfortunately, you still don’t see much agreement about who owns user research within companies. Whose job is it to make sure it happens? Who incorporates the findings into designs? Who makes sure that research isn’t just ignored? And what happens when you don’t have a qualified researcher available? These are tough questions, and many companies are still grappling with them.

So, I decided to talk to some people who have been dealing with these questions for a living. For this installment of the Whose Job is User Research blog series, I spoke with Steve Portigal, Principal at Portigal Consulting. He’s the author of Interviewing Users, which is a book you should read if you ever do any research on your own.

Steve has spent many years working with clients at large and small companies to conduct user research of all types. He also spends a lot of his time helping product teams get better at conducting their own research. Because he’s a consultant, he sees how a large number of companies structure their research processes, so I asked him to give me some advice.

Read the rest at Rosenfeld Media >

Whose Job is User Research? An Interview with Dorian Freeman

As part of my ongoing series about how user research is being done in organizations, I asked Dorian Freeman, the User Experience Lead at Harvard Web Publishing, to answer a few questions. She shared her experiences working in UX design since the late ’90s. See the rest of the series here.

Owning Process vs. Owning Results

When I asked Dorian who on a product team should own user research, she explained that there is a difference between owning the process and owning the results. “The people who are accountable or responsible for research are the ones who oversee the researching, the synthesizing of the data, and the reporting on the findings,” she explained. “However, the data from the research is ‘owned’ really by everyone in the company. It should be accessible to everyone.”


This is an important distinction. Regardless of who is collecting the data, the output is important to the whole company and should be accessible to and used by everybody. Research, in itself, is not particularly valuable. It’s only the application of the research to product decisions that makes it valuable. Making the results of the research available to everybody means that, hopefully, more decisions will be made based on real user needs.

External vs. Internal

A few folks in this series have talked about the benefits of having UX research done by members of the team, but Dorian called out one very important point about external researchers. “An external expert can often provide insights that are more credible to the leadership than an internal expert, which is a perception issue, but helpful in some cases.”

And she’s absolutely right. We may not always love that it’s true, but highly paid external consultants will sometimes be listened to where an employee won’t, even when they’re saying the same things.

On the other hand, for day-to-day research that informs product team decisions, an in-house expert is often preferable. Dorian says, “Typically, the in-house expert researcher has more institutional knowledge, which can speed up the process and provide more insight. In the ideal scenario, the product team should always have an internal expert researcher working closely with them.”

For teams that aren’t lucky enough to have an expert, Dorian recommends getting someone on the team to learn how to do it. “Understanding the people who use your product is essential,” she says. “If you’re not interviewing users, you’re not doing UX.”

Whose Job is User Research? An Interview with Susan Wilhite

As part of my ongoing series where I try to find out who is doing user research in organizations and who should be, I spoke with Susan Wilhite. Susan is a lead UX researcher. She was incredibly helpful in explaining how teams work best under different conditions. This is the third post in the series. 

Strategic vs Tactical

When we talk about ownership of the research function, we have to start with the type of research we’re doing and our goals for that research. “When research is mostly tactical,” Susan explains, “it should be owned by either product management or the design team, with the other being a key stakeholder.” 

Research that is intended to answer very specific, well understood questions, should be driven by people on the team who are asking those questions. For example, usability testing and other forms of tactical, evaluative research are going to be owned and driven by the people responsible for making decisions based on the results of the studies. 

Strategic research, on the other hand - like that done when a company is still developing its primary product or service, or is branching into other lines of business - should be led with broad direction and budget from the VP of product or another high-level stakeholder. This puts that leader in the best position to interpret UX research findings for their peers and champion those ideas in wider strategic decisions.

Most importantly, though, generative and formative research is best done in-house rather than by people outside the company. “This research, unlike evaluative, has a very long shelf life. Only a tiny amount of the information from strategic studies is communicated in a final report,” Susan explains. “Findings developed outside the company can be a lost opportunity to grow institutional knowledge within the org over time. Down the road this is important because findings from generative and formative research inform the most tactical research.”

In other words, don’t pay vendors to acquire deep knowledge about your users unless you intend a long-term relationship with those outside researchers. Understanding the product/service and its users is a critical advantage, and the understanding that comes from conducting generative and formative research should be kept close to the vest.

Cross Functional Teams vs Silos

Recently, with the growth of Agile and Lean methodologies, we’ve seen a lot of companies break down functional silos in favor of cross-functional teams. This can improve communication within the product team and help diminish the waste that happens when silos only communicate through deliverables. Susan points out some of the advantages and disadvantages of doing away with the research team.

“I have become a fan of the embedded research function,” Susan says. “Researchers are themselves tools, and as such are vastly more effective when given the chance to compound learnings and develop stakeholder trust in a circumscribed domain.” When a user researcher works within a product team, they become much more effective, since they’ll have a better understanding of the team’s real research needs. They can also build trust with the team, which will hopefully lead to less resistance to suggestions made by the researcher. 

On the other hand, embedded UX research has its own problems. “The hazard here is that product groups have varying budgets and sexiness – a researcher caught in a group not advancing fast from attention given by executives or the market can hobble a career.” Having a separate research team can prevent that by allowing researchers to circulate among teams and find areas of interest and groups where they work best. But still, it takes a very well managed corporate culture for a silo to work. As Susan warns about research teams in silos, “Success is uncommon.” 

Regardless of the company org chart, Susan encourages summing up and offering evolved thinking on strategic frameworks and tactical principles throughout the company. “I’d like to see twice-yearly off-sites where the org reviews what has been learned and workshops ideas from the product team at large,” she says. “Partly to remind the team of what has been learned and how we think we know it, but also to ponder aspirational research - what’s next.”

Whose Job is User Research? An Interview with Tomer Sharon

I'm interviewing researchers, designers, product managers, and other people in tech about their opinions on how user research should be done within companies. This is the second post in the series, and it appeared in full on the Rosenfeld Media blog. 

If you'd like to be featured as part of the series, contact me.

As part of my ongoing series of posts where I try to get to the bottom of who owns user research, I reached out to Tomer Sharon, former Sr. User Experience Researcher for Google Search and now Head of UX at WeWork. He also wrote a book called It’s Our Research, which addresses this exact topic, and his new book Validating Product Ideas is now available. He’ll be speaking at the upcoming Product Management + User Experience conference from Rosenfeld Media about ways teams can work together to learn more about their users.

I asked Tomer a few questions about his recent statement that UX at WeWork won’t have a research department and what suggestions he has for creating a team that conducts research well and uses it wisely.


Read the rest at Rosenfeld Media >


Whose Job is User Research? An Interview with Jeff Gothelf

I'm interviewing researchers, designers, product managers, and other people in tech about their opinions on how user research should be done within companies. This is the first post in the series, and it appeared in full on the Rosenfeld Media blog. 

If you'd like to be featured as part of the series, contact me.

While I was doing research for the upcoming PM+UX Conference, several survey respondents requested guidance on how user research should be managed. In fact, it was the most common write-in answer on our survey, and a question that comes up repeatedly whenever I give talks. There seems to be very little consensus about who on a product team should own research, and that makes it a lot harder to get user insights and make good product decisions.

In a way, this is good news. Five or ten years ago, there would have been more questions like, “How do I get my boss to consider doing user research?” and “What is user research good for?” Those still come up, but far more frequently, I’m hearing things like, “How do we make sure that everybody on the team understands the research?” and “Who is in charge of making sure research happens and deciding what to do about it?” Research, these days, is assumed. It’s just not very well managed.

To answer these questions, I interviewed several very smart people who know a thing or two about research and building products. I’ll share some of their suggestions in a series of blog posts.

First, I spoke with Jeff Gothelf, the author of Lean UX (O’Reilly, 2013) and Sense and Respond (Harvard, 2016).

Read the rest at Rosenfeld Media > 

Intent to Solve

I wrote an article about finding out whether prospective customers will buy your product over at Boxes and Arrows. You should read it!

"When we’re building products for people, designers often do something called “needs finding” which translates roughly into “looking for problems in users’ lives that we can solve.” But there’s a problem with this. It’s a widely held belief that, if a company can find a problem that is bad enough, people will buy a product that solves it.

That’s often true. But sometimes it isn’t. And when it isn’t true, that’s when really well designed, well intentioned products can fail to find a market."


The Most Important User You're Not Talking To

Do you have a product? With users? 

If you answered “yes” to both of those questions, you have an amazing untapped source for product research. And I’m not talking about your users. 

I mean, sure, you should be listening to users and observing them. A lot. But there’s another group of people who can provide you with incredible insights into your product. 

You should be talking to people who used your product once and then abandoned it.

Specifically, you need to ask these people the following questions:
  • What were you expecting when you tried this product?
  • How did it not meet your expectations? 
This research will help you understand three things very clearly:
  • What your messaging and acquisition strategy is telling people to expect.
  • What problem the people you are acquiring are trying to solve.
  • Why your product doesn’t solve this problem for the people you are acquiring. 
You’ll notice that I mentioned “acquisition” in each of the above points. This is intentional. You see, one of the things you are very likely to find out from this sort of research is that you are getting entirely the wrong group of people to try out your product. 

If you’ve been spending a lot of time optimizing your ads and your messaging for sign up conversion rather than for actual product usage and retention, it may turn out that you are acquiring a whole lot of the wrong sort of user for your product, which can be a costly mistake. This kind of research is fabulous for understanding if that’s true. 

The other thing that this research helps with is understanding whether or not you’re adequately solving the problem you think you’re solving in a way that users can understand. If new users can’t figure out what your product does and how to use it in a few seconds, they’ll leave without ever knowing that your product was the solution to their problem.

Of course, this isn’t the easiest group of people to interview. These folks can be tricky to track down and tough to schedule. But finding a way to interview people who thought they wanted to use your product and then changed their minds is something that will pay off hugely in the long run.

Building the Right Thing vs Building the Thing Right

This originally appeared as a guest post on the O'Reilly Programming Blog.

I love it when companies test prototypes. Love love love it. But it makes me incredibly sad when they use prototype testing for the wrong thing.

First, let me give you my definition of “prototype testing” here. I often build interactive, or semi-interactive, prototypes when designing a product. These prototypes are not actual products. They’re simulations of products. People who see the prototype can often click around them and perform some simple tasks, but they’re generally not hooked up to a real back end system.

“Well, what good is that?” you might reasonably ask. Excellent question. Interactive prototypes are incredibly useful for finding problems in usability testing settings. For a checkout flow, you might create a simple interactive prototype and watch four or five people go through the process (with test accounts) in order to find out whether any parts of the flow are confusing or hard to use.

It’s a great technique for any reasonably complicated interaction that you want to test before you spend a lot of time writing code. Interactive prototype testing can save you a ton of time because it helps you make sure that you’re building the product right before you spend a lot of time and money actually writing code.

Read the rest of this post on the O'Reilly blog >

The Best Way(s) to Learn Lean User Research

I've been excited to see more and more people getting interested in user research and customer development over the past few years. It's not a new field by any means, but it's new to a lot of entrepreneurs and founders.

Of course, what that means is that when I talk about research, I hear a lot of the same questions over and over again. I hear questions about recruiting the right users, the right number of people to talk to, and what questions to ask. I also hear a lot of confusion about how to choose the right type of research and when to use qualitative versus quantitative methods.

Now, there is a lot of great information out there about how to do research. There are blogs and books and classes. But often these are more than entrepreneurs really need. They don't want to become user researchers. They want to learn exactly the techniques that they need to do whatever they need to do right now.

The first third of my book, UX for Lean Startups, is aimed at getting people comfortable with the idea of validating hypotheses and figuring out what sort of research to do. But I've found that often people need a little more help. They need specific guides for running each different type of study.

So, that's what I'm working on now, and I hope to have some guides available in the next couple of months. These guides will be fairly detailed how-tos for things like running a usability test, recruiting users, conducting observational testing, and other topics that I get asked about constantly.

If you would like to sign up to be the first to hear when these guides are available for purchase, go here and sign up.

If you would like to tell me what guide you'd most like to see or what question you'd most like answered, send me email at laura@usersknow.com.

If you'd like something to read in the meantime, did I mention I'd written a book?

If you don't like all this reading and would prefer to learn in workshop format, I will be doing some video workshops for LUXr. You should sign up here.

And if you still have questions, you can reach me on Clarity for a quick call or sometimes hire me to consult, depending on my availability.

You know what this means, right? It means that pretty soon you will have absolutely no excuse for not learning from your users.

Maybe You're Just Delusional

I tell people to listen to their customers a lot. It’s kind of my thing. Every so often when I’m explaining how to learn about customer problems and incorporate that feedback into a product, I run into a founder who is truly resistant.

“But...my VISION!” they cry. Then they go on to build exactly the product that they want to build without getting feedback from users. And once in a while this works out, I’m told. But typically I never hear about them, or their products, again.

The sad thing is that vision and customer feedback don’t have to be at odds.

I’m going to give you two different visions that a startup founder might have, and I’d like you to try to spot the differences between the two.

Vision #1

“Pet owners are upset about how much their pets cost. This product is going to make it more affordable to have a pet by getting jobs for the pets so that the pets are bringing in money! It’s called Jobs4Pets, and people will be able to post jobs for dogs, cats, rabbits, whatever. And other people will find jobs for their pets and apply right on the site. We’ll make money by charging a service fee on each of the transactions! Obviously, we’re mobile first, and the jobs will be shown in a Pinterest style layout because that’s the best possible layout for things.”

Vision #2

“Some pet owners are upset about how much their pets cost. This product is going to make it more affordable to have a pet.”

See the difference? I mean, besides the fact that the first one is completely delusional?

In the first one, the deranged...I mean visionary...founder has a vision not just for the goal of the company, but for every detail of the actual product. She’s not just envisioning what the product will help people do. She’s envisioning exactly how the product will help people do that, right down to the layout on the home page.

She hasn’t left room to validate the many assumptions she’s making - that pet owners have a problem with costs, that pets can do jobs, that people will post jobs for pets, that people want their pets to have jobs, etc. If any of those assumptions are invalid, by the way, the entire product will fail, and even her lovely, Pinterest-style layout can’t help her.

But the most important thing to note is that the second vision is entirely compatible with user research.

The founder with the second vision might want to go out and meet lots of pet owners in order to find out how big of a problem cost is for them. She might learn the ways that people are already saving money. She might ask which parts of pet ownership cost the most or are the most burdensome. She might test several different solutions for saving pet owners money and see which one gets the most interest or traction. She might even end up with an entirely different product than she originally imagined, all without sacrificing her vision!

So, how can you balance customer feedback with vision? Try to envision how your product is going to change somebody’s life, not how they’re going to perform specific tasks. Envision the problem that you’re solving, not the specific solution.

Then listen to your users. Observe them. Learn from them exactly how you can solve their problem.

That’s the best way to make sure that your vision becomes a reality.

This was written for Startup Edition. The question was, "How do you balance user feedback with your long term vision?"

Want more information like this? 

My new book, UX for Lean Startups, will help you learn how to do better qualitative and quantitative research. It also includes tons of tips and tricks for better, faster design. 

Combining Qualitative & Quantitative Research

Designers are infallible. At least, that’s the only conclusion that I can draw, considering how many of them flat out refuse to do any sort of qualitative or quantitative testing on their product. I have spoken with designers, founders, and product owners at companies of all sizes, and it always amazes me how many of them are so convinced that their product vision is perfect that they will come up with the most inventive excuses for not doing any sort of customer research or testing. 

Before I share some of these excuses with you, let’s take a look at the types of research I would expect these folks to be doing on their products and ideas.

Quantitative Research

When I say quantitative research in this context, I’m talking about a/b testing, product analytics, and metrics - things that tell you what is happening when users interact with your product. These are methods of finding out, after you’ve shipped a new product, feature, or change, exactly what your users are doing with it. 

Are people using the new feature once and then abandoning it? Are they not finding the new feature at all? Are they spending more money than users who don’t see the change? Are they more likely to sign up for a subscription or buy a premium offering? These are the types of questions that quantitative research can answer. 

For a simple example, if you were to design a new version of a landing page, you might run an a/b test of the new design against the old design. Half of your users would see each version, and you’d measure to see which design got you more registered users or qualified leads or sales or any other metric you cared about.
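Any decent a/b testing tool will do the math for you, but the significance check behind a test like that is small enough to sketch. Here's a minimal example using a two-proportion z-test; the function name and the visitor numbers are made up for illustration:

```python
import math

def conversion_z_score(conversions_a, visitors_a, conversions_b, visitors_b):
    # Two-proportion z-test: how many standard errors apart are the two
    # conversion rates? Roughly, |z| > 1.96 means the difference is
    # statistically significant at the 95% confidence level.
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both versions convert equally.
    p = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Made-up numbers: the old page converts 100 of 1,000 visitors (10%),
# the new design converts 130 of 1,000 (13%).
z = conversion_z_score(100, 1000, 130, 1000)
significant = abs(z) > 1.96
```

The point of the pooled rate is that you judge the gap against the noise you'd expect if both pages were actually identical; with fewer visitors, the same 3-point lift might not clear the bar.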

Qualitative Research

By qualitative testing, I mean the act of watching people use your product and talking to them about it. I don’t mean asking users what you should build. I just mean observing and listening to your users in order to better understand their behavior. 

You might do qualitative testing before building a new feature or product so that you can learn more about your potential users’ behaviors. What is their current workflow? What is their level of technical expertise? What products are they already using? You might also do it once your product is in the hands of users in order to understand why they’re behaving the way they are. Do they find something confusing? Are they getting lost or stuck at a particular point? Does the product not solve a critical problem for them? 

For example, you might find a few of your regular users and watch them with your product in order to understand why they’re spending less money since you shipped a new feature. You might give them a task in order to see if they could complete it or if they got stuck. You might interview them about their usage of the new feature in order to understand how they feel about it. 

Excuses, Excuses

While it may seem perfectly reasonable to want to know what your users are really doing and why they are doing it, a huge number of designers seem really resistant to performing these simple types of research or even listening to the results. I don’t know why they refuse to pay any attention to their users, but I can share some of the terrible excuses they’ve given me. 

A/B Testing is Only Good for Small Changes

I hear this one a lot. There seems to be a misconception that a/b testing is only useful for things like button color and that by doing a/b testing you’re only ever going to get small changes. The argument goes something like, “Well, we can only test very small things and so we will test our way to a local maximum without ever being able to really make an important change to our user experience.”

This is simply untrue.

You can a/b test anything. You can show two groups of users entirely different experiences and measure how each group behaves. You can hide whole features from users. You can change the entire checkout flow for half the people buying things from you. You can test a brand new registration or onboarding system. And, of course, you can test different button colors, if that is something that you are inclined to do.

The important thing to remember here is that a/b testing is a tool. It’s agnostic about what you’re testing. If you’re just testing small changes, you’ll only get small changes in your product. If, on the other hand, you test big things - major navigation changes, new features, new purchasing flows, completely different products - then you’ll get big changes. And, more importantly, you’ll know how they affected your users.
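Mechanically, "showing two groups of users entirely different experiences" usually comes down to deterministic bucketing. Here's one common way to do it; the function and experiment names are my own, not from any particular tool:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    # Hash the user id together with the experiment name so that the same
    # user always sees the same variant on every visit, and so that
    # different experiments split the population independently.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Everything downstream - whole features, checkout flows, onboarding -
# can branch on this one value.
variant = assign_variant("user-42", "new-checkout")
```

Because the assignment is a pure function of the user and the experiment, you can test something as large as a complete checkout redesign without storing any per-user state.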

Quantitative Testing Leads to a Confused Mess of an Interface

This is one of those arguments that has a grain of truth in it. It goes something like, “If we always just take the thing that converts best, we will end up with a confusing mess of an interface.”

Anybody who has looked at Amazon’s product pages knows the sort of thing that a/b testing can lead to. They have a huge amount of information on each screen, and none of it seems particularly attractive. On the other hand, they rake in money.

It’s true that when you’re doing lots of a/b testing on various features, you can wind up with a weird mishmash of things in your product that don’t necessarily create a harmonious overall design. You can even wind up with features that, while they improve conversion on their own, end up hurting conversion when they’re combined.

As an example, letʼs say youʼre testing a product detail page. You decide to run several a/b tests simultaneously for the following new features:

  • customer photos
  • comments
  • ratings
  • extended product details
  • shipping information
  • sale price
  • return info

Now, letʼs imagine that each one of those items, in its own a/b test, increases conversion by some small, but statistically significant margin. That means you keep all of them. Now youʼve got a product detail page with a huge number of things on it. You might, rightly, worry that the page is becoming so overwhelming that youʼll start to lose conversions.

Again, this is not the fault of a/b testing – or in this case, a/b/c/d/e testing. This is the fault of a bad test. You see, itʼs not enough that you run an a/b test. You have to run a good a/b test. In this case, just because the addition of a particular feature to your product page improved conversions doesn’t mean that adding a dozen new features to your product page will increase your conversion. 

In this instance, you might be better off running several a/b tests serially. In other words, add a feature, test it, and then add another and test. This way you’ll be sure that every additional feature is actually improving your conversion. Alternatively, you could test a few different versions of the page with different combinations of features to see which converts best. 
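
The “test a few different combinations” alternative can be sketched like this. All of the feature bundles and conversion numbers below are invented for illustration; the point is that you keep the combination that converts best, not the union of every feature that won its individual test.

```python
# Hypothetical illustration: conversions / visitors observed for a few
# candidate bundles of product-page features (all numbers invented).
observed = {
    frozenset(): (300, 10000),                       # bare page (control)
    frozenset({"photos"}): (330, 10000),
    frozenset({"photos", "ratings"}): (345, 10000),
    frozenset({"photos", "comments", "ratings", "shipping info"}): (310, 10000),
}

def conversion(bundle):
    converted, visitors = observed[bundle]
    return converted / visitors

# Keep the bundle that converts best, not the union of every winner.
best = max(observed, key=conversion)
```

In this made-up data, photos and ratings each help, but the page with everything on it converts worse than a smaller bundle, which is exactly the “overwhelming page” failure mode described above.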

A/B Testing Takes Away the Need For Design

For some reason, people think that a/b testing means that you just randomly test whatever crazy shit pops into your head. They envision a world where engineers algorithmically generate feature ideas, build them all, and then just measure which one does best.

This is just absolute nonsense.

A/B testing only specifies that you need to test new designs against each other or against some sort of a control. It says absolutely zero about how you come up with those design ideas.

The best way to come up with great products is to go out and observe users and find problems that you can solve and then use good design processes to solve them. When you start doing testing, youʼre not changing anything at all about that process. Youʼre just making sure that you get metrics on how those changes affect real user behavior.

Letʼs imagine that youʼre building an online site to buy pet food. You come up with a fabulous landing page idea that involves some sort of talking sock puppet. You decide to create this puppet character based on your intimate knowledge of your user base and your sincere belief that what they are missing in their lives is a talking sock puppet. Itʼs a reasonable assumption.

Instead of just launching your wholly re-imagined landing page, complete with talking sock puppet video, you create your landing page and show it to only half of your users, while the rest of your users are stuck with their sad, sock puppet-less version of the site. Then you look to see which group of users bought more pet food. At no point did the testing process have anything to do with the design process. 

Itʼs really that simple. Nothing about a/b testing determines what youʼre going to test. A/B testing has literally nothing to do with the initial design and research process. 

Whatever youʼre testing, you still need somebody who is good at creating the experiences youʼre planning on testing against one another. A/B testing two crappy experiences does, in fact, lead to a final crappy experience. After all, if youʼre looking at two options that both suck, a/b testing is only going to determine which one sucks less.

Design is still incredibly important. It just becomes possible to measure designʼs impact with a/b testing.

There’s No Time to Usability Test

When I ask people whether they’ve done usability testing on prototypes of major changes to their products, I frequently get told that there simply wasn’t time. It often sounds something like, “Oh, we had this really tight deadline, and we couldn’t fit in a round of usability testing on a prototype because that would have added at least a week, and then we wouldn’t have been able to ship on time.” 

The fact is you don't have time NOT to usability test. As your development cycle gets farther along, major changes get more and more expensive to implement. If you're in an agile development environment, you can make updates based on user feedback quickly after a release, but in a more traditional environment, it can be a long time before you can correct a big mistake, and that spells slippage, higher costs, and angry development teams. Even in agile environments, it’s still faster to fix things before you write a lot of code than after you have pissed off customers who are wondering why you ruined an important feature that they were using. 

I know you have a deadline. I know it's probably slipped already. It's still a bad excuse for not getting customer feedback during the development process. You're just costing yourself time later. I’ve never known good usability testing to do anything other than save time in the long run on big projects.

Qualitative Research Doesn’t Work Because Users Don’t Know What They Want

This is possibly the most common argument against qualitative research, and it’s particularly frustrating, because part of the statement is quite true. Users aren’t particularly good at coming up with brilliant new ideas for what to build next. Fortunately, that doesn’t matter. 

Let’s make this perfectly clear. Qualitative research is NOT about asking people what they want. At no point do we say, “What should we build next?” and then relinquish control over our interfaces to our users. People who do this are NOT doing qualitative research. 

Qualitative research isn’t about asking people what they want and giving it to them. Qualitative research is about understanding the needs and behaviors of your users. It’s about really knowing what problem you’re solving and for whom.

Once you understand what your users are like and what they want to do with your product, it’s your job to come up with ways to make that happen. That’s the design part. That’s the part that’s your job.

It’s My Vision - Users Will Screw it Up

This can also be called the "But Steve Jobs doesn't listen to users..." excuse. 

The fact is, understanding what your users like and don't like about your product doesn't mean giving up on your vision. You don't need to make every single change suggested by your users. You don't need to sacrifice a coherent design to the whims of a user test. You don’t even need to keep a design just because it converts better in an a/b test. 

What you do need to do is understand exactly what is happening with your product and why. And you can only do that by gathering data. The data can help you make better decisions, but they don’t force you to do anything at all.

Design Isn’t About Metrics

This is the argument that infuriates me the most. I have literally heard people say things like, “Design can’t be measured, because design isnʼt about the bottom line. Itʼs all about the customer experience.”

Wouldnʼt it be a better experience if everything on Amazon were free? Be honest! It totally would.

Unfortunately, it would be a somewhat traumatic experience for the Amazon stockholders. You see, we donʼt always optimize for the absolute best user experience. We make tradeoffs. We aim for a fabulous user experience that also delivers fabulous profits.

While itʼs true that we donʼt want to just turn our user experience design over to short term revenue metrics, we can vastly improve user experience by seeing which improvements and features are most beneficial for both users and the company.

Design is not art. If you think that thereʼs some ideal design that is completely divorced from the effect itʼs having on your companyʼs bottom line, then youʼre an artist, not a designer. Design has a purpose and a goal, and those things can be measured.

So, What’s the Right Answer?

If you’re all out of excuses, there is something that you can do to vastly improve your product. You can use quantitative and qualitative data together. 

Use quantitative metrics to understand exactly what your users are doing. What features do they use? How much do they spend? Does changing something big have a big impact on real user behavior?

Use qualitative research to understand why your users do what they do. What problems are they trying to solve? Why are they dropping out of a particular task flow when they do? Why do they leave and never come back?

Let’s look at an example of how you might do this effectively. First, imagine that you have a payment flow in your product. Now, imagine that 80% of your users are not getting through that payment flow once they’ve started. Of course, you wouldn’t know that at all if you weren’t looking at your metrics. You also wouldn’t know that the majority of people are dropping out in one particular place in the flow.
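
That kind of drop-off is exactly what basic funnel metrics surface. Here is a quick sketch, using invented step names and counts rather than data from any real product: compute per-step conversion and flag the step with the biggest drop.

```python
# Hypothetical payment funnel: how many users reached each step.
# Step names and numbers are invented for illustration.
funnel = [
    ("started_checkout", 1000),
    ("entered_shipping", 900),
    ("entered_payment", 350),   # the big drop happens entering this step
    ("confirmed_order", 200),
]

# Fraction of users lost at each transition between steps.
drops = []
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    drops.append((next_step, 1 - next_n / n))

worst_step, worst_drop = max(drops, key=lambda d: d[1])
overall = funnel[-1][1] / funnel[0][1]   # the "80% don't finish" number
```

Metrics like these tell you where people are bailing out; they can’t tell you why, which is where the observational testing in the next step comes in.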

Next, imagine that you want to know why so many people are getting stuck at that one place. You could do a very simple observational test where you watch four or five real users going through the payment flow in order to see if they get stuck in the same place. When they do, you could discuss with them what stopped them there. Did they need more information? Was there a bug? Did they get confused?

Once you have a hypothesis about what’s not working for people, you can make a change to your payment flow that you think will fix the problem. Neither qualitative nor quantitative research tells you what this change is. They just alert you that there’s a problem and give you some ideas about why that problem is happening. 

After you’ve made your change, you can run an a/b test of the old version against the new version. This will let you know whether your change was effective or if the problem still exists. This creates a fantastic feedback loop of information so that you can confirm whether your design instincts are functioning correctly and you’re actually solving user problems. 
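
One simple way to answer the “was my change effective?” question is a two-proportion z-test on the completion rates of the old and new flows. This is a standard-library sketch with invented numbers; the normal approximation it uses assumes reasonably large samples.

```python
import math

# Sketch: two-sided two-proportion z-test on old vs. new flow completion.
def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation.
    return math.erfc(abs(z) / math.sqrt(2))

# Invented numbers: 200/1000 completed the old flow, 260/1000 the new one.
p = two_proportion_p_value(200, 1000, 260, 1000)
significant = p < 0.05
```

If the difference isn’t significant, the honest conclusion is that you haven’t fixed the problem yet, which sends you back to the qualitative research for a better hypothesis.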

As you can hopefully see from the example, nobody is saying that you have to be a slave to your data. Nobody is saying that you have to turn your product vision or development process over to an algorithm or a focus group. Nobody is saying that you can only make small changes. All I’m saying is that using quantitative and qualitative research correctly gives you insight into what your users are doing and why they are doing it. And that will be good for your designs, your product, and your business.

A Perfect Use for Personas

I was reading Dave McClure's post about changes to menus (and its not-always-flattering Hacker News thread), and I found myself violently agreeing and disagreeing with both. I kept thinking something along the lines of, "That would be great! Except when it would be incredibly annoying!"

That's when I realized what was missing for me: personas.

First off, apologies to Dave, who certainly doesn't need me to defend or improve his ideas. This is just meant to be an explanation of the process I went through as a designer and researcher to understand my weird, ambivalent reaction to his product suggestions. Here are the problems that Dave listed in his post that he was solving for:

  • Too many items, not enough pictures, simpler & more obvious recommendations. 
  • Not online, no order history, no reviews, no friends, no loyalty program, no a/b testing. 
  • Have to wait forever for waiter to order, re-order & pay. 
  • Nothing to do while I'm waiting. 

Then he presented reasonable solutions to these problems. All of the suggestions seemed geared toward making restaurants quicker, more efficient, and lower touch. Interestingly, both the Hacker News complaints and my own seemed to be from the point of view of people who do not have these problems. They were saying things like, "this would make restaurants awful!" but what they really meant was, "I, as a potential user, don't identify with that particular problem you're trying to solve, so your solution does not really apply to me."

In other words, Dave's suggested solutions might be great for people who have these problems but might not appeal at all to people who don't have these problems.

So, then I started to think about the types of people who would have those types of problems. I put together a few rudimentary personas of people who likely would benefit from things like recommendations, entertainment while waiting, a more efficient order process, and a faster way to pay.

As a note, these personas are behavioral, not demographic. This means that you might sometimes fit into one of them and at other times you wouldn't. It depends more on what you do than who you are.

The Business Person

Imagine that you're on a business trip to someplace you've never been. You're quite busy, and it's likely that you'll have to eat a few meals on your own, possibly on the way to or from a meeting or the airport. You're not a fan of fast food, so you'd rather be able to find something you like at an interesting local place than at a big national chain.

In this instance you might LOVE having things like recommendations from people you trusted, pictures on the menu of unfamiliar dishes, and a quick, efficient ordering and payment system that guaranteed you wouldn't hang around for twenty minutes waiting for a bill. You might also really enjoy some entertainment so that you'd have something to do that wasn't stare creepily at the other patrons.

The Barfly

Now imagine that you're at an incredibly crowded night spot. You are desperate for a bourbon, but you don't want to queue up five deep at the bar to try to get someone's attention. You manage to get a table, but now you have to decide whether to leave it to flag down one of the few waitresses or just wait it out.

In this instance you would almost certainly be excited to be able to order and pay directly from your table using some sort of tablet. You'd also be able to quickly order your second, third, and (dare I say it) fourth rounds without having to go through the whole process again or count on the waitstaff knowing exactly when to ask if you want a refill.

The Group Luncher

Last one for now, I promise. You're out to lunch with eight of your coworkers. You need to get back to the office in 45 minutes for another stupid meeting. You don't want to spend 10 of those minutes just for a waiter to make it to your table and take your orders. Also, you really don't want to be the one in charge of figuring out how to split the bill, especially since three of your coworkers always get booze, one of them never eats more than a salad, and two of them order the most expensive thing on the menu.

In this instance, you'd be thrilled to be able to just sit down, punch in your order (and your credit card!), get your food delivered to you quickly, and get to spend more time chatting with that cute new person in accounting rather than negotiating who forgot to figure in tax to the amount they owe on the bill.

And the rest...

There are probably a half dozen other hypothetical persona groups, all of which would obviously need to be validated (or invalidated) with various forms of user research and quantitative testing.

The persona groups that aren't on this list are also important. Many of these types of innovations might make things worse for the types of folks who are enjoying the experience of being in a restaurant as an event. For example, a romantic dinner for two at a high end restaurant is not improved by shaving thirty minutes off the wait between courses. Other people might enjoy the personal exchange with the waiter or a consultation from a sommelier more than reading about items on a tablet.

That's ok. These products aren't necessarily going to be for every type of restaurant all at once. There's no need to worry that suddenly Manresa is going to be putting pictures on the menu like Denny's.

The reason I bring this up is that it often helps me to evaluate product ideas through the eyes of the people I expect to use the product. When I find myself saying things like, "Driving sucks! I'm going to fix driving!" I have to step back and realize that driving (like eating in restaurants) is an almost universal activity that has a constellation of problems, many of which are not shared by all types of drivers (or eaters). If you think your startup has a brand new product that's going to solve all the driving problems for stock car drivers, commuters, and truck drivers, I think you're probably wrong.

Instead of arguing back and forth whether or not these problems exist, it's very easy to identify particular types of people for whom these problems MIGHT exist and then do some simple qualitative research to see if you're right. After all, we know at least one person (Dave) has these problems that he wants solved. Presumably Dave (or the companies he invests in) is doing the sort of research necessary to make sure that there are enough people like Dave to make a profitable market. That market might not include you, but there are lots of wildly successful products you don't like.

So. Long story short: personas, yay!


For those of you who notice these things, you're right, I didn't include the personas for the other side of the equation: the restaurant owners. Whenever your customers (the people who give you money) and your users (the people who actually use your product) are different, you're in a much more complicated space from a user experience point of view. I'm assuming that, if we can make a specific type of end user happy enough, it will make the types of restaurant owners who cater to those users interested in purchasing the product.

That's just another hypothesis, and all hypotheses need to be validated, not assumed to be facts.

Startups Shouldn't Hire User Researchers

Everybody seems to think I'm a user researcher, which is not strictly true.

It's true, I write a lot about user research, and I've certainly done my share of it over the course of my career. But I don't really consider myself a user researcher. I do enough user research to be extremely effective at being an interaction designer and product manager.

There are lots of actual user researchers with degrees in psychology and specialized training who are doing much more interesting and complicated research than I do. I respect those people. They are scientists in many cases. But I don't think that most of them should work for startups. In fact, I don't think that small startups should ever hire a user researcher just to do research.

Don't get me wrong. User research is critical to startups. How else are they supposed to understand their potential customers and find product market fit?

No, the reason people shouldn't hire people to do their user research is that learning about your customer is the single most important part of your startup. If you're outsourcing that to a person who isn't directly responsible for making critical product decisions, then you are making a horrible mistake.

I see startups do this over and over. They hire a consultant, or even a regular employee, to come in and get to know their users for them. That person goes out and runs a lot of tests and then prepares a report and hands it over to the people in charge of making product decisions. Half the time the product owners ignore the research or fail to understand the real implications of it.

And why wouldn't they? The product managers weren't there for the process of talking to users, so they almost certainly haven't bought into the results. It's really easy to ignore a bunch of bad things somebody else wrote about your idea in a Powerpoint presentation. It's a lot harder to ignore half a dozen real users saying those things to your face and showing you problems that they're having in real life.

The right way to do research in a startup is to have the people who are responsible for making decisions about your product intimately involved in the research itself. That means that product owners and UX people are designing and running the tests. Even the engineers should watch some of the sessions and hear first hand what their users are going through.

The reason I talk so much about user research is that I want you, the entrepreneurs, to learn enough about it so that you can DO IT YOURSELVES. You're welcome to hire people like me, or even real user researchers, to teach you what you need to do. But having somebody else do the research for you is not an option. At least, it's not one that you should use if you're still trying to find product market fit or learn anything actionable about your customers.

Stop thinking of user researchers as people you hire to get to know your users, and start thinking of yourself as a user researcher. At startups, you should all be user researchers, especially if what you really are is a designer or product manager.

When Talking to Users Saves You Time

I mentor a few young designers, which is great, because not only do I know exactly who I want to hire when I’m building a team, but they also share interesting stories about their current companies.

I was speaking with one of them a couple of weeks ago, and she shared a story that sounded incredibly familiar. I think this happens to all designers who work with a sales force at some point.

The designer, whom we will call Jane, is working on the user experience for an enterprise product for hiring managers. The product has some competitors that have been around for a while.

One day, a few weeks back, the sales team came to the product team and said, “We need Feature X. All of our competitors have Feature X, and we’ve heard from some of our potential customers that they won’t buy us if we don’t have Feature X.”

Jane and her team looked at the competition’s implementation of the feature, which had a lot of bells and whistles. The product team asked sales which parts of Feature X were most important to the potential customers. “All of it,” sales replied.

Jane’s team started pushing back. This was not a simple feature. They estimated that it would take months to get the feature to be comparable with the competition. There was one part of Feature X in particular, the live video part, that Jane knew would be incredibly tough to design and build, simply because of all the implicit requirements that would make it useful. They explained this to the sales department, but the sales department continued to complain that they couldn’t do their jobs without Feature X.

Finally, Jane insisted on speaking directly with a customer. A meeting was lined up with a few representatives of the company. Jane started off by asking how the potential customer would use Feature X. They gave detailed explanations of exactly the places that they needed Feature X, none of which had been conveyed by the sales team.

Interestingly, none of the uses they had for Feature X involved the live video part of the feature that was worrying the product team. Finally, Jane came right out and said, “Tell us about live video. How do you feel about it?” The potential customers shrugged. “I guess it might be useful,” they said. Jane asked, “Would not having live video prevent you from buying our product if we had Feature X?” “Not at all,” the potential customers said.

Jane’s team then had a similar conversation with other customers and potential customers. The product team gladly put the much smaller Feature X, minus the expensive live video feature, onto their product roadmap. They also left out a few other parts of Feature X that didn't solve actual user problems, and created a design for Feature X that was significantly different from the competitors' versions but that addressed all the customers' issues.

Sales was happy because now they could tell potential customers that they were competitive on features. Jane was happy because she was able to quickly identify a real customer problem and solve it, rather than fighting with sales about something that would take too long to build and would include features the customers didn’t actually want.

My prediction is that Jane’s version of Feature X is going to be significantly better than the competitors’ version, simply because it will only have the pieces that customers actually need and use.

The new feature won’t be made needlessly more complicated by bells and whistles that are only put there so that a sales person can check something off on a list. They’ll be put there because they solve a problem.

You Know Too Much

This is not an earth shattering revelation. Think of it more as a friendly reminder that even people who have been doing UX for a very long time can get obvious things wrong.

For the last three months, I've been working on what I think is going to be an amazing product. Thanks to some fantastic engineers and some really hard work, our MVP is already out, and people are using it in closed beta. It's tremendously exciting.

The important thing to note is that I've been thinking about this product really, really hard for the last three months. We all have. We know everything about this product - who it's for, what it does, what it's going to do.

And that's the thing. We're crowdsourcing designs for children's clothing, sizes 2-6. We know that. We know that because that's what we told all of our wonderful, independent designers. We know that because those are the sizes we ordered from the manufacturers. We know that because that's the size of the models that we're using.

Which is why it was so surprising when I did a very quick user feedback session with someone who had used the site for the first time, and she pointed out that she wasn't sure what the age range on the clothes was supposed to be.

But then I looked at the site and tried, really hard, to see it from the point of view of somebody who hadn't been thinking about the product for three months - or even three minutes. And I realized that there was simply no way for users to know something that was so baked into our view of the product that we didn't even think to explain it.

It's a small thing, but it's incredibly important. After all, choosing designs you like for a 4 year old is quite different from choosing designs you like for a 40 year old, and crowdsourcing only works if the crowd knows what it's supposed to be picking.

This is why I'm so glad that we talk to users, even when we think things are simple or obvious. What is obvious to us is often exactly what isn't obvious to our users, because we never spend any time explaining it to them.

We are making one very small, quick fix for this that should happen immediately, and we have a larger feature that I've already added a story for and hope goes into the product in the next couple of weeks.

The important reminder here is that I know too much about my product, and you know too much about your product. We think too much about our own products to be able to truly understand what a new user experiences.

Again, this is not a new concept, but it's a critical one if you're trying to make something that is truly simple and intuitive. You need to understand your user's starting point so that you can take her on a journey through your product without losing her along the way.

The single easiest way to see things through the eyes of your new user is to simply watch your user interacting with your product for the first time and talk to her about the experience. Don't try to do this without help from your users. You know way too much.

Sometimes It’s Not the Change They Hate

There has been a lot of discussion in the design world recently about "change aversion." Most of the articles about it seem to be targeting the new Google redesign, but I've certainly seen this same discussion happen at many companies when big changes aren't universally embraced by users.

Change aversion is a real thing. Often people don't like something different just because they're used to the old way. Once they get used to the new way, they discover all the wonderful new features and are happy with the new change.

But sometimes your users' rage isn't change aversion. Sometimes your new design actually sucks.

So, before you blame your users, you should figure out if it's change aversion, or if you really screwed something up. Ask yourself the following questions.

Did You Do Any Sort of User Testing Before Launch?

This is an important one. Sometimes people complain about a product because it has changed. Other times they complain because the product makes them feel stupid or it prevents them from doing what they want to do.

Most often, products make people feel stupid because the products are hard to use.

It's very possible that the changes you made to your product have made common tasks that the user is used to performing harder to do. Yes, the user may eventually learn to perform the tasks the new way, but that new way may be legitimately more difficult! You may even be reducing the amount of time the user spends performing that task, if you make it hard enough.

To pile onto the Gmail redesign for a moment, I can tell you right now that I am constantly hitting the folder button instead of the label button now that they are icons rather than text. I am still occasionally doing this probably a month after I switched to the new look. It's not a deal breaker for me, but it annoys me every time it happens. The new icons, for me, are honestly harder to use than the old buttons, and they build up a certain amount of unhappiness every time I use them incorrectly.

The interesting thing is that this is exactly the kind of problem that you can surface very easily by simply doing some observational testing of people using prototypes or early versions of the product.

Hell, you could probably even figure that one out with metrics. How often are people hitting the Undo button now compared to previously? If people are undoing their actions more frequently, you can bet that your new design is causing them to make mistakes.
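
That undo-rate check is easy to sketch. Here is a toy version with an invented event log: count "undo" events as a fraction of all actions before and after the redesign, and treat a sharp rise as a signal that the new design is causing mistakes. The event names and the 1.5x threshold are made up for illustration.

```python
# Hypothetical sketch: compare how often users undo their actions
# before vs. after a redesign, using an invented event log.
def undo_rate(events):
    undos = sum(1 for e in events if e == "undo")
    return undos / len(events) if events else 0.0

# Invented event streams: one undo per five actions before the redesign,
# two per five after it.
before = ["label", "archive", "label", "undo", "archive"] * 200
after = ["label", "undo", "archive", "undo", "label"] * 200

# Flag the redesign if the undo rate jumped well past its old level.
regression = undo_rate(after) > undo_rate(before) * 1.5
```

A rising undo rate doesn’t tell you which control people are fumbling, but it tells you that a usability session is worth scheduling.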

Did You Test with Current Users or Just New Ones?

When you have millions of users, it's not cool just to test on people who have never seen your product before.

New user testing gives you really valuable feedback, but it's just one kind of feedback. It doesn't give you any insight into how your current users (often users who are paying you money) are using your product right now.

It may be that your users are doing something surprising with your product. They may be using it in ways you never anticipated. Making major changes without understanding their work styles can destroy something they were relying on.

It's not a matter of their relearning how to do a task in the new interface. You may literally have removed functionality for people who were using your product in innovative ways.

Similarly, if you're only testing with internal users (that is, users internal to your organization), you're not really getting the full idea of how all sorts of different people are interacting with your product. The more types of people you can observe, the clearer your understanding of real use cases and behaviors will be.

Did You Add Something Useful to Users? Really?

Sorry, improving your brand is not particularly useful to users. Even a nice new visual design tends not to be a big enough improvement if you're also changing functionality that users rely on.

Here are some significant improvements that might be enough to counteract a tendency to change aversion:
  • making your product noticeably faster or more responsive
  • adding a great new feature that users can understand and start enjoying immediately
  • fixing several major bugs that people have been complaining about
That's pretty much it.

The problem here is that often companies mistake doing something that is good for the company with something that is good for the user. That can be a tricky thing to spot, but a good way to handle it is to always ask yourself, "What real user problem is this change solving?" If the answer is, "How to give us more money" or "Well, there's more visual breathing space," you might want to brace yourself for the inevitable shitstorm when you launch that change.

Do You Mind Losing a Portion of Your Users?

This is a 100% legitimate question to ask. Sometimes you make changes to your product that you know will piss off a certain subset of your users, and that can be ok.

I've often advocated for prioritizing changes that help your paying customers over changes that help your non-paying users. But there can be other reasons to make changes that annoy certain groups of your users.

The thing is, you have to know that you're making this tradeoff and be ok with it. If you've gone into the change knowing that you might lose a certain percentage of your users, but hoping that you will make up the loss by making other users very happy or attracting new users, that's a fine choice to make. Just be sure that your metrics show this actually happening.

Have You Honestly Listened to Your Users' Complaints?

Let me give you two common examples of user feedback:
  • I hate it! It's terrible! I want the old way back!
  • I'm constantly hitting the folder button when I want to hit the label button, and I find it really hard to tell which emails are more important any longer. Also, every time I try to Reply to All in the middle of an email thread, it's just ridiculously difficult to do.
Can you spot why they are so different? That's right, one of them is completely non-actionable. There is nothing you can really react to with the first one. You can't fix this user's problem. Yet.

The second one is significantly better because you're starting to get at WHY the user hates the change. You know that the user is having trouble performing specific tasks. You can follow up with the user and have her show you the things that you have made harder to do. You can then figure out if those are things that are done frequently enough and by enough users to justify making them easier to do.

Here's the trick. You can turn the first type of feedback into the second type of feedback by following up with users and asking them for specific things that they hate about your change. If they just keep saying, "It's different!" then they may get over it when they get used to it. But a significant portion of them probably have specific complaints, and writing those complaints off as change aversion is really kind of a dick move.

Have You "Fixed" the Problem by Letting Users Change Settings?

Stop it. Seriously. Just stop it.

The vast majority of your users don't know how to change the default settings.

It's not a failing. They're not stupid. They just don't know nearly as much about your product as you do, so they don't have great understanding of the million different ways you've allowed them to customize their experience. They probably don't even know that those settings are there, and even if they do, why are you making them work that hard?

If you are going to include a few settings that they can change, make them obvious and easy to understand, and don't bury them in a thousand other settings that are incredibly confusing to everyone who isn't in tech and half of us who are.

Like the post? Follow me on Twitter!

I Don't Know What's Wrong with Your Product

When I’m talking with startups, they frequently ask me all sorts of questions. I imagine that they’re probably really disappointed when I respond with a shrug.

You see, frequently they’re asking entirely the wrong question. And, more importantly, they’re asking the wrong person.

It is an unfortunate fact that many startups talk to people like me (or their investors or their advisors or “industry experts”) instead of talking to their users.

Now, obviously, if they just asked the users the sorts of questions they ask me, the users couldn’t answer them directly either. This is the wrong question part. But the fact is, if they were to ask the right questions, they’d have a much better chance of getting the answers from their users.

Let’s take a look at a few of the most common sorts of questions I get about UX and how we might get the answers directly from users.

What’s Wrong With My Product?

I often get people who just want “UX advice.” I suppose they’re looking for somebody to come in and say something like, “oh, you need to change your navigation options,” or “if only you made all of your buttons green.”

Regardless of what they’d like to hear, what they typically hear is, “I have no idea.” That’s quickly followed by the question, “Which of your metrics is not where you want it to be?” If they can answer that question, they are light years beyond most startups.

You see, the first step to figuring out what’s wrong with your product is to figure out, from a business perspective, some realistic goals for where you’d like your product to be right now. Obviously, “We’d like every person on the planet to pay us $100/month” is probably not a realistic goal for a three month old venture, but hey, aim high.

Once you know what you want your key metrics to be, you need to look at which of them aren’t meeting your expectations. Are you not acquiring new customers fast enough? Are not enough of them sticking around? Are too few of them paying you money? While “all of the above” is probably true, it’s also not actionable. Pick a favorite.
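
One way to make "pick a favorite" concrete is to compare each metric's actual value against its target and see which falls shortest in relative terms. The metric names, numbers, and targets below are entirely made up; this is just a sketch of the comparison, not anything from a real dashboard.

```python
# Hypothetical funnel metrics: compare each one to its target and
# pick the metric with the largest relative shortfall to work on first.
metrics = {
    "acquisition":  {"actual": 0.02, "target": 0.05},  # visitors who sign up
    "retention":    {"actual": 0.25, "target": 0.40},  # signups active after 30 days
    "monetization": {"actual": 0.03, "target": 0.04},  # active users who pay
}

def weakest_metric(metrics):
    """Return the name of the metric furthest below its target, relatively."""
    def shortfall(item):
        _, m = item
        return (m["target"] - m["actual"]) / m["target"]
    return max(metrics.items(), key=shortfall)[0]

print(weakest_metric(metrics))  # -> acquisition
```

With these invented numbers, acquisition is 60% below target while retention and monetization are closer, so acquisition would be the favorite to dig into first.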

Now that you know which of your key metrics is failing you, you need to conduct the appropriate sort of research to figure out why it’s so low. Note: the appropriate sort of research does not involve sitting around a conference room brainstorming why abstract people you’ve never met might be behaving in a surprising way. The appropriate sort of research is also not asking an expert for generic ways to improve things like acquisition or retention, since these things vary wildly depending on your actual product and user base.

The appropriate sort of research depends largely on the sort of metric you want to move and the type of product/user you have. You will, without question, have to interact with current, potential, or former customers of your product. You may need to observe what people are doing. You may need to ask them why they tried your product and never came back. You may need to run usability tests on parts of your interface to see what is confusing people the most.

Feel free to ask people like me for help figuring out what sort of research you need to be doing. That’s the sort of thing experts can do pretty effectively.

But if any expert tells you exactly what’s wrong with your product without considering your user base, your market, or your key metrics, either they’re lying to you or your problems are so incredibly obvious that you should have been able to figure them out for yourself.

What Feature Should I Build Next?

Let’s imagine for a moment that you have built a Honda Civic. Good for you! That’s a nice, practical car. Now, let’s imagine that you come to me and ask how you should change your Honda Civic to make more people want to buy it.

Well, I drive a Mini Cooper, so it’s very possible that I’ll tell you that you should make your Civic more adorable and have it handle better on curves. If, on the other hand, you ask somebody who drives a Ford F-150, they’ll probably tell you that you should make it tougher and increase the hauling capacity.

Do you see my point? I can’t tell you what feature you need to build next, because I almost certainly don’t use your product! To be fair, even the people who do use your product or might use your product in the future can’t just tell you what to build next.

What they can tell you is more about their lives. Do they frequently find themselves hauling lots of things around? Do they drive a lot of curvy mountain roads? Do they care about gas mileage? What about their other purchasing choices? Do they tend to buy very expensive luxury items? Do they care more about status or value?

You see, there is no single “right way” to design something. There are thousands of different features you could add to your product, and only the preferences and pains of your current and potential users can help you figure out what is right for you.

Should I Build an App or a Website or Something Else?

Another thing that people ask me a lot is whether they should be building an iPhone app, an iPad app, an Android app, a website, an installed desktop app, or some other thing.

That’s an excellent question...to do a little research on. After all, what platform you choose should have nothing to do with what’s popular or stylish or the most fun to design for. It should be entirely based on what works best for your product and market.

And don’t just go with the stereotypes. Just because it’s for teens doesn’t necessarily mean it’s got to be mobile, although that’s certainly something you should be considering. It matters where the product is most likely to be used and what sort of devices your market is most likely to have now and in the near future. It also depends on the complexity of your product. For example, I personally don’t want Photoshop on my phone, and I don’t want a check-in app on my computer.

Talk to your users and find out what sort of products they use and where they use them.

Are You Noticing a Pattern?

Experts are not oracles. You can’t use outside people as a shortcut to learning about your own product or your users. You need to go to the source for those things.

If you find yourself asking somebody for advice, first ask yourself if you’re asking the right question, and then ask yourself if you’re asking the right person.

And if anybody ever tells you definitively what you need to change about your product without first asking what your business goals are, who your users are, and what their needs are, you can bet that they’re probably wrong.


Tiny Tests: User Research You Can Do NOW!

There’s a lot of advice about how to do great user research. I have some pretty strong opinions about it myself.

But, as with exercise, the best kind of research is the kind that you actually DO.

So, in the interests of getting some good feedback from your users right now, I have some suggestions for Tiny Tests. These are types of research that you could do right this second with very little preparation on your part.

What Is a Tiny Test?

Tiny Tests do not take a lot of time. They don’t take a lot of money. All they take is a commitment to learning something from your users today.

Pick a Tiny Test that applies to your product and get out and run one right now. Oh, ok. You can wait until you finish the post.

Unmoderated Tests

Dozens of companies now exist that allow you to run an unmoderated test in a few minutes. I’ve used UserTesting.com many times and gotten some great results really quickly. I’ve also heard good things about Loop11 and several others, so feel free to pick the one that you like best.

What you do is come up with a few tasks that you want to see people perform with your product. When the test is over, you get screen captures of people trying to do those things while they narrate the experience.

Typically, I’ll use remote, unmoderated testing when I want to get some quick insight into whether a new feature is usable and obvious for a brand new user.

For example, if you’ve just added the ability for users to message each other on your site, you can use remote, unmoderated testing to watch people attempt to message somebody. This will help you identify the places where they’re getting lost or confused.

If you’ve done a little recruiting and have a list of users who are willing to participate, you can even ask your own users to be the participants.

And don’t forget, if you don’t have a product, or if you’re looking at other products for inspiration, you can run an unmoderated test on a competitor’s product. This can be a great way to see if a particular implementation of a feature is usable without ever having to write a line of code. It can also be a great way to understand where there might be problems with competing products that you can exploit.

Are you going to get as much in-depth, targeted feedback as you would if you ran a really well-designed, in-person user test? Probably not. But it’ll take you 10 minutes to set up and 15 minutes to watch each video, so you might actually do this.

Remote Observation

There is something to be said for traveling to visit your users and spending time in their homes or offices. It can be extremely educational. It can also be extremely expensive and time consuming.

Here’s a way to get a lot of value with fewer frequent flyer miles.

Look at the people in your Skype contacts. Find someone who doesn’t know much about your product. Ping them. Ask them to do three small tasks on your product while sharing their screen.

Don’t have Skype? Send friends a GoToMeeting or a WebEx link through email.

As with the remote unmoderated testing, this is best for figuring out if something is confusing or hard to do. It’s not very useful for figuring out whether people will like or use new features, because typically the people in your Skype contacts aren’t representative of real users of your product.

The closer the people are to your target market, the better the feedback will be, but almost anybody can tell you if something is hard to use, and that’s information worth having right now.

Coffee Shop Guerrilla Testing

Of course, it’s tough to test a mobile app over Skype. You know where it’s easy to test a mobile app? At a coffee shop.

Go outside. Find a Starbucks (other coffee shops are also acceptable if you refuse to go to Starbucks, you insufferable snob). Buy some $5 gift cards. Offer to buy people coffee if they spend 5 minutes looking at your product. Have a few tasks in mind that you want them to perform.

In about an hour, you can watch a dozen people use your app. And if you don’t manage to get any good feedback, at least you can get coffee. But you’ll almost certainly get some good feedback.

This type of feedback is great for telling you if a particular task is hard or confusing. It’s also great for getting first impressions of what an app does or the type of person who might use it.

Five Second Landing Page Testing

Sometimes, all you want to test is a new landing page. What you frequently want to know about a landing page is, “What message is this conveying, and is it conveying it clearly and quickly?” Even the tiniest of tests can seem like overkill for that.

For landing pages, I use UsabilityHub’s Five Second Test. You take a screenshot or mockup of the landing page you want to show. You upload it to the site. You enter a few questions you want people to answer after looking at it.

If the whole setup process takes you more than 5 minutes, you’re doing it wrong. Within a few hours, you can have dozens of people look at your landing page and tell you what they think your product does.

This sort of Tiny Test is wonderful for testing several different variations of messages or images that you might put on a landing page. You can get people’s real first impressions of what they think you’re trying to tell them.

CTA Testing

The most important thing to get right on any screen is the Call To Action. After all, you can have the most gorgeously designed images with a wonderfully crafted message, but if people can’t find the damn Buy button, you’re screwed.

But, as with the landing page tests, this is something that takes 5 seconds. Basically, you want to show people a screen and see if they can figure out where they should click. Guerrilla testing works pretty well for this, but even that may be overkill here.

For CTA testing, I often use UsabilityHub’s ClickTest product. Again, you just upload a mock and ask people something like, “Where would you click to purchase the product shown on this page?” or “Where would you go to advance to the next slide?” or whatever CTA you’re testing.

A few hours later, you get a map of where people clicked. If there are clicks all over the place, you’ve got some work to do on your CTA.
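
If you want to put a rough number on "clicks all over the place," one simple measure is the share of clicks that actually landed on the CTA. The coordinates and button bounds below are invented for illustration; this isn't the output format of any particular click-testing tool.

```python
# Given raw click coordinates from a click test, compute what fraction
# landed inside the CTA's bounding box. All numbers here are made up.
def cta_hit_rate(clicks, box):
    """clicks: list of (x, y) points; box: (left, top, right, bottom)."""
    left, top, right, bottom = box
    hits = sum(1 for x, y in clicks if left <= x <= right and top <= y <= bottom)
    return hits / len(clicks)

clicks = [(310, 415), (520, 90), (305, 420), (298, 410), (640, 480)]
buy_button = (280, 400, 360, 440)  # hypothetical "Buy" button bounds

print(cta_hit_rate(clicks, buy_button))  # -> 0.6
```

A hit rate like 0.6 on a hypothetical five-click sample would suggest a lot of people are hunting for the button somewhere else.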

The advantage to doing something like this over A/B testing is simply that you can get it set up very quickly with just mockups. You don't have to actually implement anything on your site (or even have a site) in order to test this way. But, if you have enough traffic and a good A/B system already set up, by all means test that way, as well.

What Are You Waiting For?

There you go. Five different options for wildly fast, incredibly cheap feedback on your product. You don’t have to hire a recruiter or write a discussion guide or rent out a usability lab. In a few cases, you don’t even have to interact with a human.

Are they perfect? Do they take the place of more substantial research? Will you be able to get away with avoiding talking to your users forever? No. But they’re easy, and you can do one of them right this second.

So...do one of them right this second!


Give the Users What They Really Want

Recently, I’ve been trying to teach startups how to do their own user research. I’ve noticed that I teach a lot of the same things over and over again, since there are a few things about research that seem to be especially difficult for new folks.

One of the most common problems, and possibly the toughest one to overcome, is the tendency to accept solutions from users without understanding the underlying problem. In other words, a user says, “I want x feature,” and instead of learning why they want that feature, new researchers tend to write down, “users want x feature," and then move on.

This is a huge issue with novices performing research. When you do this, you are letting your users design your product for you, and this is bad because, in general, users are terrible at design.

Ooh! An Example!

I participated in some user research for a company with an expensive set of products and services. Users coming to the company’s website were looking for information so they could properly evaluate which set of products and services was right for them. Typically, users ended up buying a custom package of products and services.

One thing we heard from several users was that they really wanted more case studies. Case studies, they said, were extremely helpful.

Now, if you’re conducting user research, and a customer tells you that he wants case studies, this might sound like a great idea.

Unfortunately, the user has just presented you with a solution, not a problem. The reason that this is important is that, based on what the actual underlying problem is, there might be several better solutions available to you.

When we followed up on users’ requests for case studies with the question, “Why do you want to see case studies?” we got a variety of answers. Interestingly, the users asking for case studies were all trying to solve entirely different problems. But were case studies really the best solution for all three problems?

Here are the responses, along with some analysis.

“I want to know what other companies similar to mine are doing so that I have a good idea of what I should buy.”

The first user’s “problem” was that he didn’t know how to pick the optimal collection of products for his company. This is a choice problem. It’s like when you’re trying to buy a new home theater system, and you have to make a bunch of interrelated decisions about very expensive items that you probably don’t know much about.

While case studies can certainly be helpful in these instances, it’s often more effective to solve choice problems with some sort of recommendation engine or a selection of pre-set packages.

Both of these help the user figure out the right selection more quickly than a long explanation of how somebody else found a good solution that might or might not be applicable to the user.
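
To make the pre-set packages idea concrete, here's a sketch of the simplest possible recommender: match the user's stated needs against each package and pick the best fit. The package names and feature sets are invented for illustration; a real recommendation engine would be more involved.

```python
# A minimal recommendation sketch: suggest the pre-set package that
# covers the most of a user's stated needs. All names here are made up.
PACKAGES = {
    "starter":    {"analytics", "support"},
    "growth":     {"analytics", "support", "integrations"},
    "enterprise": {"analytics", "support", "integrations", "sso", "audit"},
}

def recommend(needs):
    """Pick the package covering the most needs; prefer smaller packages on ties."""
    return max(
        PACKAGES,
        key=lambda name: (len(PACKAGES[name] & needs), -len(PACKAGES[name])),
    )

print(recommend({"analytics", "sso"}))  # -> enterprise
print(recommend({"analytics"}))         # -> starter
```

The point isn't the algorithm; it's that a choice problem calls for something that narrows options for the user, which a pile of case studies doesn't do.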

“I want to know what sorts of benefits other companies got from the purchase so I can tell whether it’s worth buying.”

The second user’s “problem” was that he wanted to make sure that he was getting a good value for his money. This is a metrics problem. It’s like when you’re trying to figure out if it’s worth it to buy the more expensive stereo system. You need to understand exactly what you’re getting for your money with each system and then balance the benefits vs the cost.

This problem might have been solved by a price matrix showing exactly what benefits were offered for different products. Alternatively, it would be faster and more effective to display only the pertinent part of the case studies on the product description page - for example, “Customers saw an average 35% increase in revenue 6 months after installing this product.”

Boiling this down to only the parts of the case study that were actually important to the user gives you more flexibility to show this information - statistics, metrics, etc. - in more prominent and pertinent places on the site. That increases the impact of these numbers and improves the chance that people will see them.

“I want to see what other sorts of companies you work with so that I can decide whether you have a reputable company.”

The third user’s “problem” was that he hadn’t ever heard of the company selling the products. Since they were expensive products, he wanted the reassurance that companies he had heard of were already clients. This is a social proof problem. It’s like when you’re trying to pick somebody to put a new roof on your house, so you ask your friends for recommendations.

His actual problem could have been solved a lot quicker with a carousel of short client testimonials. Why go to all the trouble of writing up several big case studies when all the user cares about is seeing a Google logo in your client list?

Why This Matters

This shouldn’t come as a surprise to any of you, but users ask for things they’re familiar with, not necessarily what would be best for them. If a user has seen something like case studies before, when he thinks about the value he got from case studies, he’s going to ask for more of the same. He’s not necessarily going to just ask for the part of the case study that was most pertinent to him.

The problem with this is that many people who might also find certain parts of case studies compelling won’t bother to read them because case studies can be quite long or because the user doesn’t think that the particular case study applies to him.

Obviously, this is applicable to a lot more than case studies. For example, I recently saw a very similar situation from buyers and sellers in a social marketplace asking for a “reputation system” when what they really wanted was some sort of reassurance that they wouldn’t get ripped off. I could name a dozen other examples.

The takeaway is that, when somebody asks you for a feature, you need to follow up with questions about why they want the feature, even when you think you already know the answer!

Once you know what their problems really are, you can go about solving them in the most efficient, effective way, rather than the way the user just happened to think of in the study.

Instead of just building what the user asks for, build something that solves the user’s real problem. As an added bonus, you might end up building a smaller, easier feature than the one the user asked for.