Do’s and Don’ts of Usability Testing

Insights from Research

Walking in your customers’ shoes

March 8, 2010

Usability testing is one of the least glamorous, but most important aspects of user experience research. Over the years, it has also been one of the forms of user research we have performed most frequently. In doing so, we’ve learned quite a few best practices and encountered some potential pitfalls. We think it’s important that we share what we’ve learned with the many stakeholders, designers, and engineers who might find this information helpful.


DO: Get involved and observe usability test sessions.

Both designers and stakeholders can get a lot out of observing usability test sessions. Witnessing participants’ reactions to a product and its user interface can help you understand product and usability issues that might be extremely difficult for researchers to communicate through reports, meetings, or presentations. If you have the opportunity to observe a few test sessions, you should definitely take advantage of it.

DON’T: Jump to conclusions based on a couple of test sessions.

People are unique. It’s essential to keep this in mind when you are observing usability test sessions. The feedback just one or two participants provide might not reflect the reaction your entire target market would have toward the product and its design. Researchers look for trends rather than focusing on individual comments or isolated incidents. So keep this in mind and avoid jumping to conclusions about particular design directions. Take the time to talk to your research team about the user feedback before making any decisions.

DO: Provide direction to your user research team.

Often, user researchers do not understand a product or its business goals as well as the stakeholders and designers who have helped create it. As researchers, we rely on our clients to give us a clear understanding of a product’s value proposition, the goals for the product in the marketplace, and the goals they hope user research can achieve. The better we understand these goals, the better we can plan our research to achieve them. We try to gather this data during kick-off meetings or stakeholder interviews. By participating in this process and clearly articulating your views, you can help ensure that research findings are actionable.

DON’T: Dictate to your user research team.

Generally speaking, user researchers have more experience and knowledge when it comes to conducting research. It’s important to take their input into consideration when making decisions regarding the planning of a usability study. If researchers provide advice regarding schedule, logistics, recruiting, tasks, or other elements of planning a usability study, you can be sure their advice is based on their experience or deep knowledge of usability testing. Research works best as a collaborative process, so it is extremely important that the development team collaborates with the research team to achieve the best results.

DO: Require a screener that includes user behavior.

Recruiting is often the most critical element of usability testing. If you conduct usability testing with the wrong participants—people who aren’t representative of your target market—you’ll often find yourself making the wrong changes to a product’s user interface.

I remember a conversation I had with a friend of mine who happens to be a software engineer: He was complaining about the user interface of a consumer application. He wanted more options and more information in the user interface. But I thought the extremely technical language he suggested would confuse the vast majority of the application’s intended users and that many of the options he thought would be useful would overwhelm the typical user. By including behavioral questions in your screeners—such as weekly Internet usage, interaction with social networking sites, or use of specific mobile phone features—you can get an excellent idea of a potential participant’s level of technical ability. Matching characteristic user behaviors to your target criteria can help ensure you don’t design an application for the wrong user population.
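To make this concrete, here is a minimal sketch, in Python and purely illustrative, of how behavioral screener criteria might be encoded and used to filter candidate participants. The field names, thresholds, and criteria are hypothetical examples, not recommendations from this article; a recruiter would apply criteria like these to screener responses before scheduling sessions.

```python
# Hypothetical behavioral screener criteria -- purely illustrative.
# Field names and thresholds are made up for this example.
screener_criteria = {
    "weekly_internet_hours": lambda hours: 5 <= hours <= 25,        # mainstream, not power-user, usage
    "uses_social_networking": lambda answer: answer is True,        # active on at least one site
    "smartphone_features_used": lambda features: 1 <= len(features) <= 4,  # some, but not all, features
}

def matches_target_profile(candidate: dict) -> bool:
    """Return True only if a candidate's answers satisfy every behavioral criterion."""
    return all(
        field in candidate and check(candidate[field])
        for field, check in screener_criteria.items()
    )

# One candidate's screener responses (hypothetical).
candidate = {
    "weekly_internet_hours": 12,
    "uses_social_networking": True,
    "smartphone_features_used": ["camera", "maps"],
}

print(matches_target_profile(candidate))  # True -- fits the target behavioral profile
```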

DON’T: Handcuff your recruiter with unnecessary requirements.

As important as it is to get the right participants, it’s also important to ensure that you can get some participants. We’ve seen screeners requiring participants to own a certain model of a particular device, with firmware they’ve updated to a particular version number. Coupling such unnecessarily strict recruiting requirements with other recruiting requirements may make it nearly impossible to get the number of participants you need to perform a proper usability study. Plus, with such strict requirements, you may find that the participants you can get are outliers relative to a more broadly defined market. The idea is not to recruit participants who match the exact narrative of a user persona, but rather participants who represent an approximation of that persona’s usage behavior and technical ability.

DO: Get video clips of the test sessions.

Video clips are invaluable for communicating usability problems and can really drive a point home. It’s important to see the emotional responses on participants’ faces and hear their feedback. Preparing video clips can also help you make a case for change to decision makers—potentially helping you to get the additional budget or time you need to ensure a product’s quality.

DON’T: Expect to watch a complete video of a usability study.

If you expect to review all of the video from a usability study, you may be in for a rude awakening. Even smaller usability studies can result in 10 to 20 hours of raw video. To make truly informed decisions based on that video, you would need to watch most of the video and systematically take notes that let you compare participants’ responses. (Remember the danger of jumping to conclusions.) In all of our years in this profession, we have yet to meet a client who has watched more than a few minutes of video from usability test sessions.

DO: Perform iterative usability testing.

Every now and then, we meet clients who want to get all of their usability testing done in one large study. There are several reasons why this is a bad idea. First, the most prominent usability issues tend to obscure other issues that may also be very important: the more severe the issues a study uncovers, the more likely it is that the issues it misses are severe as well. In fact, it’s often best to start with a heuristic analysis or expert review before bringing in actual users to participate in a usability study. An expert review can help you to identify a product’s most obvious usability problems, so you can make some changes to improve the user interface before engaging in more costly usability testing.

One solid usability testing process involves testing with a limited number of participants, making the necessary fixes to solve identified problems, then retesting. Testing with 10 to 12 participants tends to provide very little additional value over testing with 6 to 8 participants, despite the significantly increased cost of testing with more participants. Iterative usability testing also has the advantage of more easily fitting into agile development cycles.
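For a rough sense of why the returns diminish, the sketch below applies the widely cited problem-discovery model, in which the proportion of problems observed after n participants is 1 − (1 − p)^n. The 31-percent per-participant discovery rate is the average Nielsen and Landauer reported, not a figure from this article, and real rates vary with the product, the tasks, and the participants.

```python
# Back-of-the-envelope illustration of diminishing returns in usability testing,
# using the widely cited problem-discovery model: found = 1 - (1 - p)^n,
# where p is the chance that any single participant exposes a given problem.
# p = 0.31 is the average Nielsen and Landauer reported; real values vary.
p = 0.31

for n in (4, 6, 8, 10, 12):
    found = 1 - (1 - p) ** n
    print(f"{n:2d} participants: ~{found:.0%} of problems observed")

# Typical output:
#  4 participants: ~77% of problems observed
#  6 participants: ~89% of problems observed
#  8 participants: ~95% of problems observed
# 10 participants: ~98% of problems observed
# 12 participants: ~99% of problems observed
```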

DON’T: Leave target user groups out of your studies.

If you are developing a product for more than one target audience, make sure you include representatives of each target user group in your usability testing at some point. There are two ways of accomplishing this goal. First, you can include each of these groups in a single usability test. Second, you can include the different groups in different test iterations. Typically, we prefer the latter approach.

As we mentioned earlier, large usability studies are usually more costly and complicated to conduct, and they provide less value than iterative studies. We like to use an iterative test cycle, beginning with the most technically proficient group first, then progressing to the least proficient group through subsequent test iterations. We’ve found that highly proficient groups tend to work through usability issues, but they also tend to be the most critical of a user interface. Less proficient groups tend to have more difficulty performing tasks, but they are more likely to attribute their difficulties to their own level of ability rather than a product’s design.

DO: Minimize your impact on test sessions.

We could tell you some great stories about well-intentioned designers who have jumped into a usability test session, completely sidetracking or invalidating the session. One of my favorites involved a designer who turned a test session into a guided demo of a product. The problem with these kinds of interactions is that people can always see the logic behind a design if you show it to them. The goal of a usability test is to determine whether a product’s intended users would be able to figure out a user interface on their own once the product is out in the wild. To do so, it’s best if a test session can mimic the context in which participants would actually use a product—that is, without a developer sitting right next to them. While observing a test session, try to sit quietly, observe, and take notes. One way to introduce new questions into a test session is to slip a note to a user researcher or, better yet, ask him or her to incorporate your questions into the next session.

AVOID: Hiding yourself from participants.

In general, participants would much rather deal with the devil they know. The element of the unknown that a monolithic, one-way mirror introduces to a test session can tend to make participants much more nervous and self-conscious than having an extra person in the room with a laptop or a camera pointed at them. Likewise, when you hide your company affiliation, participants may sense that you are hiding something from them. As a result, it’s often beneficial to keep everything out in the open. Of course, there are times when this is not the case—such as when highly specialized equipment is necessary for testing or a product’s brand has a very strong negative or positive perception in the marketplace that would bias the results. But this is something you should discuss with your research team and address on a case-by-case basis.

DO: Get to know your user research team.

As we mentioned previously, user research works best when it’s a collaborative process. Collaboration works best when the members of different teams really get to know one another and have effective communication. Therefore, we like to foster long-term relationships with the development teams we work with. When we know their processes and they understand ours, we can quickly and flexibly engage in user research for those teams, reducing our preparation time and requirements, and we know what methods are optimal for communicating our findings. If you can cultivate this kind of relationship with a research team, you’ll notice significant benefits.

AVOID: Using functional prototypes that are overly buggy or unstable.

Buggy, unstable functional prototypes can significantly complicate a usability study. As researchers, we know that, if participants uncover bugs or glitches while they’re attempting to perform tasks, they tend to respond to the bugs and glitches rather than the actual design of a product. This kind of feedback can compromise the amount and quality of actionable data you can acquire during a test session. Also, because crashes and bugs can extend session time by as much as 100%, they have the potential to negatively impact your study’s schedule. You may even lose participants. Of course, there are times when it’s not possible to test what you need to test using a mockup, so you’ll have to put participants in front of an incomplete product. In such cases, it is important to communicate the state of the prototype to participants and have a member of the development team on hand to provide technical support.

DO: Incorporate experience testing.

It’s a good idea to maximize the value of usability test sessions whenever you bring in participants to get feedback. With that in mind, usability testing provides a great opportunity for gathering data about participants’ reactions to a product’s value proposition. This kind of data can inform product definition and help you assess whether you are on track to create the kind of positive experience that would improve your brand perception and even create brand advocates. You can also learn whether your product is missing a key feature that could dramatically increase its value to users. Getting this kind of feedback involves observing participants’ emotional reactions to the experience of using your product. We’ve often seen participants light up with enthusiasm after discovering an innovative new feature, gushing about all the ways they could put it to use. We’ve also seen participants be completely underwhelmed by the innovations a product includes. Being able to distinguish between these two extremes of experience before a product goes to market has tremendous value.

Conclusion

As user researchers, we see lots of different types of products, and we do a lot of usability testing. The simple guidelines we’ve presented here can help you get the most out of usability testing. In general, the guidelines we’ve outlined tend to hold true. Of course, in some exceptional situations, doing the exact opposite of what we’ve recommended might be optimal. So, it’s important to keep in mind that every situation is unique. It is always up to you to collaborate with your user research team to develop a course of action that best suits your situation and your product goals. Getting user feedback on a product can be a fun and very rewarding process. We hope these guidelines help everyone to enjoy the usability testing process. 

VP, UX & Consumer Insights at 30sec.io

Co-Founder and VP of Research & Product Development at Metric Lab

Redwood City, California, USA

Demetrius MadrigalDemetrius truly believes in the power of user research—when it is done well. With a background in experimental psychology, Demetrius performed research within a university setting, as well as at NASA Ames Research Center before co-founding Metric Lab with long-time collaborator, Bryan McClain. At Metric Lab, Demetrius enjoys innovating powerful user research methods and working on exciting projects—ranging from consumer electronics with companies like Microsoft and Kodak to modernization efforts with the U.S. Army. Demetrius is constantly thinking of new methods and tools to make user research faster, less costly, and more accurate. His training in advanced communication helps him to understand and connect with users, tapping into the experience that lies beneath the surface.  Read More

President & Co-Founder at Metric Lab

Strategic UX Adviser & Head of Business Development at 30sec.io

Redwood City, California, USA

Bryan McClainBryan is passionate about connecting with people and understanding their experiences and perspectives. Bryan co-founded Metric Lab with Demetrius Madrigal after doing research at NASA Ames Research Center for five years. While at NASA, Bryan worked on a variety of research studies, encompassing communication and human factors and interacting with hundreds of participants. As a part of his background in communication research, he received extensive training in communication methods, including certification-level training in police hostage negotiation. Bryan uses his extensive training in advanced communication methods in UX research to help ensure maximum accuracy and detail in user feedback. Bryan enjoys innovating user research methods that integrate communication skills, working with such companies as eBay, Kodak, Microsoft, and BAE Systems.  Read More
