World Usability Day 2005 • User Research at Orbitz

This is my presentation from World Usability Day 2005 in Chicago. Thanks to Dayna Bateman for asking me to participate!

Transcript

Page 1

WUD Chicago: It Makes $ense

The User Research Practice, in my experience at Orbitz

Page 2

Why do we test?

Bad design = lost transactions = lost revenue

• To predict adoption of new features: what is worth building?
• To understand audience view of technology
• To predict business performance of a design
• To evaluate our performance against others
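
To make the revenue argument concrete, here is a rough back-of-the-envelope sketch in Python; every number in it (visits, conversion rates, booking value) is a made-up placeholder for illustration, not an Orbitz figure.

    # Hypothetical illustration of "lost transactions = lost revenue".
    # All figures are invented placeholders, not real Orbitz data.
    daily_visits = 100_000          # visitors reaching the booking flow per day
    baseline_conversion = 0.030     # share of visits that complete a purchase
    impaired_conversion = 0.027     # same flow with a usability problem
    avg_booking_value = 250.00      # average revenue per completed transaction

    lost_transactions = daily_visits * (baseline_conversion - impaired_conversion)
    lost_revenue_per_day = lost_transactions * avg_booking_value
    print(f"Lost transactions/day: {lost_transactions:.0f}")
    print(f"Lost revenue/day: ${lost_revenue_per_day:,.0f}")
    print(f"Lost revenue/year: ${lost_revenue_per_day * 365:,.0f}")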

Page 3

Overview of Methods

Method                                    Insights gained                When in lifecycle
Focus groups                              Subjective                     Ideation
Usability Tests                           Subjective and quantitative    Any time
Surveys (automated or traditional)        Quantitative                   Any time
Automated methods (log analysis, etc.)    Quantitative                   Post-launch
Roundtables                               Subjective                     Strategy

Page 4

Performing the usability test

• One or more design variants of a new feature
• Existing features versus redesigns
• Ability for users to either identify or successfully complete a given task
• Bad design = dissatisfied users
• Designers may not moderate tests on their own designs

Page 5

Interpreting the results

• Often, team members will observe a few participants and jump to conclusions about the results (see the sketch after this list for why so few observations can mislead)
• Easy for non-practitioners to assume they are drawing the same conclusions a usability specialist would (we're co-workers, right?)
• It is essential to maintain a balanced opinion in the face of questioning by business members, for whom features = revenue
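
One way to see why a handful of sessions can mislead: a minimal Python sketch (my addition, not from the deck) of the 95% Wilson score interval around a task-success rate observed with only five participants.

    import math

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = p + z**2 / (2 * n)
        margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return (centre - margin) / denom, (centre + margin) / denom

    # 4 of 5 participants completed the task: the 80% point estimate looks
    # decisive, but the interval spans roughly 38% to 96%.
    low, high = wilson_interval(4, 5)
    print(f"Observed success: 80%, 95% CI {low:.0%} to {high:.0%}")

With an interval that wide, a few observed sessions support a conclusion only weakly, which is why the specialist's read of the data matters.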

Page 6

Communicating Findings

• Need to highlight the findings as they pertain to the project which initiated the study
• Must make sure that what users observe about your interface is actually acted upon or considered
• In other words, it's not just that you prove your hypothesis; you must also report, and attempt to spur action on, findings unrelated to it if they impair usability

Page 7

When Practice and Theory Collide

• No matter what choices are made and how results are acted upon:
– Remember, you are actually testing a hypothesis, not trying to have your model (or others') chosen by users
– You can continue to monitor performance via other methods after launch (a minimal log-analysis sketch follows below)
– Don't accept broken windows