21st Century User Testing: Test It Cheap, Fast and Often

It’s time for a new approach to usability testing. A white paper by Dan Willis, Associate Creative Director, Sapient Government Services.

Four decades of usability testing has done little to make our world any more usable. This is a shame, because usability testing is such a unique and powerful development tool. By watching people work through specific tasks using the interfaces, applications and services we build, we avoid our own biased views of what people want and need. It gives us a much greater chance of meeting users’ requirements and aligning with their expectations.

You would expect the use of such an impressive tool over 40 years to have had a positive influence on the overall usability of all solutions. So why hasn’t this been the case?

For one thing, clients may listen intently when we present our findings, but the data doesn’t seem to stick. In organizations with weak project management, the data doesn’t stick because no one is directly responsible for acting on usability testing results. In organizations with strong project management, the data doesn’t stick because, whether or not it fits the project manager’s job description, everyone assumes that the project manager will integrate research results into the project. This means that no one other than the project manager takes personal responsibility.

Another influence on usability testing’s lack of long-term impact is the way we’ve traditionally reported results. We craft the most comprehensive analysis and communication of our research possible, but despite our efforts people rarely read the report. Many of us have taken this personally, seeing it as a reflection on the quality of our work and believing that we can somehow create reports so effective that stakeholders find them irresistible. But that’s like believing there’s an umbrella so effective that it can stop rain from falling.

We kill ourselves to catalogue every possible usability issue, but we need to step back and ask: how many issues from such an exhaustive list actually get resolved in a typical project? 20 percent? If we got to even half of the issues raised by testing, it would be a miracle.

For years, we had long debates about the appropriate number of usability test participants, until the research determined that 85 percent of problems surfaced with five test participants, and we all nodded our heads and felt secure in our methods. But what practical value have we seen in 85 percent discovery when we have only ever been able to address a handful of issues?
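The 85 percent figure traces back to a simple cumulative-discovery model, commonly attributed to Nielsen and Landauer, in which each participant surfaces roughly the same share of the remaining problems. This paper doesn’t spell that model out, so the short sketch below, including the 31 percent per-participant rate, is an illustration rather than anything taken from the original research or from this white paper:

    # Cumulative problem discovery, P(n) = 1 - (1 - L)^n, for n test participants.
    # The per-participant discovery rate L = 0.31 is the commonly cited figure and
    # an assumption of this sketch, not a number from this paper.
    def proportion_found(n, rate=0.31):
        """Expected share of usability problems surfaced by n participants."""
        return 1 - (1 - rate) ** n

    for n in (1, 3, 5, 10):
        print(f"{n:2d} participants -> {proportion_found(n):.0%} of problems found")

With five participants the model lands at roughly 84 to 85 percent; with three, it still predicts about two-thirds of the problems, which is part of the bet behind the smaller monthly rounds described later in this paper.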

Even with all of their flawed elements, traditional practices have given us glimpses of usability testing’s true potential. User Interface Engineering (UIE), a leading user research firm, tested an e-commerce Web site for a major corporate client where first-time buyers encountered a screen similar to the one in Figure 1.

Figure 1. The Login Screen with a Register Button

Participants resented being forced to register before making a purchase. As one person said, “I’m not here to enter into a relationship, I just want to buy something.” The client’s intent had been to make shopping easier, but their solution actually made it much more difficult. Based on the usability testing results, the client changed the “Register” button to a “Continue” button (as shown in Figure 2). This made it more obvious that registering was optional.

Figure 2. The Login Screen with a Continue Button

Starting the day they changed the button, the number of customers purchasing jumped 45 percent. In the first month, the client made $15 million more than it had in the month before the button was changed. In the first year, it made $300 million more in sales than it had the year before.

Despite hundreds of thousands of usability tests and projects as successful as the $300 million button example, the quality of usability across all products and interfaces hasn’t improved. That certainly wasn’t the result of a lack of effort. We strove to find the right number of participants to surface the greatest number of issues. We carefully crafted comprehensive analyses and presented exhaustive details to justify our clients’ substantial investments. But we ignored or sacrificed the practical in our dedication to a highly principled approach.

There is no magic that will make traditional usability testing data suddenly stick within an organization, no secret formula for creating reports that stakeholders will actually read, and no new process that will allow projects to address a majority of issues raised. The only real answer is to radically change how we do usability testing, with the most radical part being that we actually need to be doing less.

THE WAY WE’VE ALWAYS DONE USABILITY TESTING

This is how it typically works. You pay a recruiter to find people similar to your users. You rent a facility and invite key stakeholders to observe the test. Your test participants work their way through a series of tasks using a fully- or semi-functional version of the Web site or application your team is building.

You review the video of each session again and again until you’ve gleaned every possible data point about every possible usability issue. You combine, compare, and classify all of the data; issues are prioritized and placed into a comprehensive catalog of concerns.

You take screenshots from the interface and snapshots of the testing and combine them with your insightful, exhaustive analysis. Then you deliver it all to the client in one giant, glorious report.

In a meeting you present your data and watch as your client scrawls notes on the report cover. After skimming the executive summary, the client tosses your shiny, full-color, indexed report on a shelf with a stack of others, and she buries the digital version deep in her hard drive. Your keen insights and analysis eventually blend in with the existing white noise of her organization’s developmental mythology.

If you’ve presented well and stayed within your budget (usually between $20,000 and $40,000, depending on the number of user types and your team’s travel costs), you get the chance to do it all over again for a later phase or for a different project.

DOING LESS

When a client needs usability testing, the package above is what we typically suggest. Depending on the client, project and budget, we might get asked for a less ambitious approach, and the cheaper we need to go, the more guerilla we get with our tactics. We find ways to get usability testing done on-site and we find ways to do things on a smaller scale. We assure our client that the reduced scope will still result in credible data, but we’re almost apologetic about our methods.

But to unleash usability testing’s true potential, we must now adopt our guerilla tactics as the norm rather than mumble about them in embarrassment. Steve Krug, author of Don’t Make Me Think and Rocket Surgery Made Easy, provides a blueprint for a more practical approach to usability testing.

Test whatever you have at the same time every month

Testing should start as early in the project as possible, even earlier than you think makes sense. Organizations have historically done usability testing in a single, exhaustive round at the end of the development cycle. While this approach generates a high volume of data, little of it is useful because it arrives too late to act on. By committing to monthly testing, we can change the culture of the project and its relationship to research.

It’s always tempting to wait for the thing you’re building to reach critical mass before testing it. But with testing materials required in advance, the data can already be weeks old before it’s available to the team (assuming that testing takes at least a week and analysis takes at least another week). Also, research results served in one or two bulky servings, as they are with traditional methods, are harder for an organization to digest than data delivered in regular, smaller servings over the course of the project.

Test with three people and recruit loosely

Instead of trying to find all possible issues with no concern for how many we can actually address, our revised, more practical goal is to concentrate on the most serious usability problems. These tend to be about navigation, page layout, and other issues that don’t require participants with domain knowledge, so recruitment can be less precise. Three participants a month will surface the significant problems, and running so few sessions makes it possible to hold both the testing and the debriefing on the same day.

As with traditional methods, testing is one-on-one, with sessions lasting up to an hour. A facilitator in the same room follows a script, asking each participant to work through the same series of tasks and to think aloud while doing so.

Conduct usability testing on-site and make it an all-inclusive spectator sport

Testing at off-site facilities automatically limits the number of people who can attend. By moving it to the client’s offices (or to space extremely convenient to the client) we can involve every member of the team, every stakeholder and everybody in the client’s organization with an interest in the project. Watching users struggle with tasks transforms the people who see it and will generate the support we always expected (but never got) from old-school reports.

Testing takes place in one room while the audience watches from another. The two rooms should be far enough apart that the test participants remain unaware of their audience. There are no special requirements for the rooms (they can be regular offices or conference rooms). The room where the testing takes place should have a computer with Internet access, a monitor, a mouse, and a keyboard. The computer should have screen recording software (e.g., Camtasia) and screen sharing software (e.g., WebEx) so the audience in the second room can watch in real time what’s happening on the test participant’s screen. An audio solution as simple as a telephone set on speakerphone will be needed so the audience can hear the test participant (although the test participant should never hear the audience).
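If a commercial recorder isn’t on hand, the screen-recording half of that setup can be improvised for free. The sketch below is one way to do it, assuming a Linux test machine with ffmpeg installed; the display name, capture size, frame rate, and file name are placeholder assumptions, and Camtasia or any other recorder serves the same purpose:

    # A minimal, no-cost stand-in for the screen recorder on the test machine.
    # Assumes Linux/X11 with ffmpeg on the PATH; the display (":0.0"), capture
    # size, frame rate, and output file name are placeholder assumptions.
    import subprocess

    def record_session(output_file="participant-01.mp4",
                       display=":0.0", size="1280x720"):
        """Start capturing the test participant's screen; stop it after the session."""
        cmd = [
            "ffmpeg", "-y",
            "-f", "x11grab",      # X11 screen-capture input
            "-video_size", size,
            "-framerate", "25",
            "-i", display,
            output_file,
        ]
        return subprocess.Popen(cmd)

    # proc = record_session("participant-01.mp4")
    # ... run the one-hour session ...
    # proc.terminate()

Real-time viewing in the second room still comes from the screen sharing software and the speakerphone described above; the recording is simply the archive you return to if questions come up later.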

Get everybody who watches the sessions to come to the debriefing

After completing the three sessions and sending the test participants away, the team discusses what they saw over a lunch that can be as simple as pizza and sodas. Each team member brings a list of the three most important usability problems they saw in each session. By socializing the research, each member of the team becomes a source of data. Let people talk about whether they thought it mattered that a test participant did or did not click a button. Let them have that argument, along with any others that come up. If you don’t have the facilitation skills to maintain a healthy and productive conversation, bring the talent in. (There is usually someone on your team who can fill in as a facilitator.) By making usability testing a spectator sport, research results can be internalized by the whole organization.

Focus ruthlessly on only the most serious problems

A successful debriefing session will result in a list of the most serious usability problems encountered, along with a list of the problems that will be fixed before the next month’s round of testing. Severity is gauged by considering how many people will experience an issue and whether it will cause a serious problem for those who experience it or just an inconvenience.
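One lightweight way to make that severity call concrete during the debriefing is to score each problem on those two questions, how many people will hit it and how badly it hurts them, and then sort. The 1-to-3 scales and the scoring rule below are illustrative assumptions, not part of the method described here; the first example issue echoes the registration problem from the UIE story, and the others are made up:

    # Illustrative severity triage for the debriefing's fix list. The 1-3 scales
    # and the reach * impact score are assumptions of this sketch, not from the paper.
    from dataclasses import dataclass

    @dataclass
    class Issue:
        description: str
        reach: int   # 1 = few participants hit it, 3 = nearly everyone will
        impact: int  # 1 = minor inconvenience, 3 = blocks the task outright

        @property
        def severity(self) -> int:
            return self.reach * self.impact

    issues = [
        Issue("Registration demanded before checkout", reach=3, impact=3),
        Issue("Search ignores plural forms of product names", reach=2, impact=2),
        Issue("Help link opens in an unlabeled new window", reach=1, impact=1),
    ]

    # The month's fix list: only the few most severe problems, nothing more.
    for issue in sorted(issues, key=lambda i: i.severity, reverse=True)[:3]:
        print(f"severity {issue.severity}: {issue.description}")

Whatever the scoring scheme, the output that matters is the short list of problems the team commits to fixing before the next round.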

Failing to deal with a serious problem guarantees that the problem will show up in future testing. It takes real ruthlessness to identify a serious problem and make it go away.

As a follow-up to the debriefing, we send a short e-mail that briefly covers what was tested, the list of tasks the participants performed, and the list of problems to be fixed before the next test. The e-mail should take less than two minutes to read (making it more likely that most stakeholders will actually read it), and it should be delivered in a format readable from any device (so stakeholders can read it wherever and whenever they prefer). This e-mail replaces the voluminous report delivered and presented at the end of a traditional usability testing engagement.

Improving usability almost always requires taking things away rather than adding things. The solutions we are looking for should be as simple and unambitious as possible, because we have to implement them before the next month’s regularly scheduled testing.

CONCLUSION

Instead of being embarrassed by our guerilla tactics, we should be recommending their use as a regular part of every engagement. If a client is determined to pay big money for traditional usability testing, we should do it and do it well. We provide usability testing as a service, but it’s not what we’re selling when we make our pitch. What we’re selling, and what we deliver, is our client’s success.

Regularly scheduled, small-scale, highly practical usability testing will deliver that success and change our clients’ organizations. The dirty little secret of 21st Century usability testing is that if we can’t talk the client into paying for it, we should do it anyway and Sapient should pick up the tab. It’s smart, it’s cheap, and it greatly increases our ability to create wildly successful interfaces, applications, and services.

It’s time for a new approach to usability testing. By focusing on the practical application of user testing data rather than clinging to principles that impractically aspire to perfection, the practice of usability testing will finally reach its potential in the first part of this new century.

Subject Matter Expert: Dan Willis

Dan Willis is an Associate Creative Director for Sapient Government Services. He has been designing web sites since the mid-1990s. Dan is prominent in the user experience community in the U.S., having presented at the last two South by Southwest Interactive Festivals as well as numerous Usability Professionals’ Association, Information Architecture Institute, and Interaction Design Association conferences. He is also the creator of uxcrank.com, a resource for UX professionals.