Oz-IA/2008 UCD Benchmarking Andrew Boyd

UCD Benchmarking: How do you like them apples? Andrew Boyd, Oz-IA/2008

Description

Presentation by Andrew Boyd on UCD Benchmarking at Oz-IA/2008, Sydney, Australia, on 21 September 2008

Transcript of Oz-IA/2008 UCD Benchmarking Andrew Boyd

Page 1: Oz-IA/2008 UCD Benchmarking Andrew Boyd

UCD Benchmarking: How do you like them apples?

Andrew Boyd, Oz-IA/2008

Pages 2-7: Oz-IA/2008 UCD Benchmarking Andrew Boyd

What we’re going to talk about

The case for ROI

A brief introduction to UCD Benchmarking

How does this work for IAs

The basic process

Comparing apples to apples (and other lessons learned)

Pages 8-11: Oz-IA/2008 UCD Benchmarking Andrew Boyd

And we are not talking about

Definitional stuff around stats or IA vs UCD vs IxD/UxD

Multivariate statistical stuff and anything else with too many big words (even though this is potentially very useful)

Anything else that takes more than 5 minutes out of our half an hour today

Pages 12-15: Oz-IA/2008 UCD Benchmarking Andrew Boyd

The case for ROI

Who cares?

Why should I care as an IA?

What should I do about it?

Pages 16-21: Oz-IA/2008 UCD Benchmarking Andrew Boyd

UCD Benchmarking: One way to prove (or disprove!) ROI

Scott Berkun, 2003, Usability Benchmarking

Measuring, and the Measures (Efficiency, Effectiveness, Satisfaction) from ISO 9241

How it differs from traditional/classical/industrial benchmarking, analytics, automated/tree testing

How it is the same (comparing a system to itself or to a rival/analogue)

Pages 22-26: Oz-IA/2008 UCD Benchmarking Andrew Boyd

A bit on the measures

Efficiency (task completion time, task learning time)

Effectiveness (% successful task completion, % total errors)

Satisfaction (perceived ease of use)

Which one do I use and when?
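The three measures can be computed directly from session logs. A minimal sketch, assuming a hypothetical record format (the field names and data are illustrative, not from the talk):

```python
from statistics import mean

# Hypothetical records from one benchmark run: one dict per user session.
sessions = [
    {"task_seconds": 95,  "completed": True,  "errors": 1, "ease_rating": 4},
    {"task_seconds": 140, "completed": True,  "errors": 3, "ease_rating": 3},
    {"task_seconds": 120, "completed": False, "errors": 5, "ease_rating": 2},
    {"task_seconds": 80,  "completed": True,  "errors": 0, "ease_rating": 5},
]

# Efficiency: mean task completion time, counting completed tasks only.
efficiency = mean(s["task_seconds"] for s in sessions if s["completed"])

# Effectiveness: % successful task completion, plus total error count.
completion_rate = 100 * sum(s["completed"] for s in sessions) / len(sessions)
total_errors = sum(s["errors"] for s in sessions)

# Satisfaction: mean perceived ease-of-use rating on a 1-5 scale.
satisfaction = mean(s["ease_rating"] for s in sessions)

print(f"Efficiency: {efficiency:.0f} s mean completion time")
print(f"Effectiveness: {completion_rate:.0f}% completion, {total_errors} errors")
print(f"Satisfaction: {satisfaction:.2f} / 5")
```

Note that failed sessions are excluded from the efficiency figure but counted against effectiveness, so the two measures answer different questions about the same data.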

Pages 27-31: Oz-IA/2008 UCD Benchmarking Andrew Boyd

How does this work for IAs? Proving that your work has made a difference

Efficiency (is the site faster to use as a result of the improved IA? is information more findable?)

Effectiveness (is the site measurably better to use for end-to-end information-seeking task completion?)

Satisfaction (are the end users definably happier against, say, survey results? do they like the new site more?)
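Proving the difference usually means a before/after comparison of the same benchmark. A minimal sketch with made-up task times, using Welch's t statistic as a rough sanity check on the gap (the numbers are invented for illustration):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical task completion times (seconds), before and after an IA redesign.
before = [130, 145, 160, 120, 150, 138, 142, 155]
after = [98, 110, 105, 92, 120, 101, 115, 99]

# Headline figure: percentage reduction in mean task time.
improvement = 100 * (mean(before) - mean(after)) / mean(before)

# Welch's t statistic: a rough guard against claiming a win on noise.
# Values well above ~2 for samples this size suggest a real difference.
t = (mean(before) - mean(after)) / sqrt(
    stdev(before) ** 2 / len(before) + stdev(after) ** 2 / len(after)
)

print(f"Mean before: {mean(before):.1f} s, after: {mean(after):.1f} s")
print(f"Improvement: {improvement:.1f}% (t = {t:.1f})")
```

The same shape of comparison works for completion rates and satisfaction ratings; only the metric changes.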

Pages 32-37: Oz-IA/2008 UCD Benchmarking Andrew Boyd

The basic process

Establish: Why are we benchmarking?

Plan: How are we going to get away with this?

Evaluate: Get out there and get some data

Analyse: Make some sense of it

Present: Show the results
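The five steps above can be sketched as a simple pipeline. The step bodies here are placeholders to show the shape of the flow, not anything prescribed by the talk:

```python
# Minimal sketch of the five benchmarking steps as a pipeline.
# The tasks and data below are hypothetical placeholders.

def establish(goal):  # Establish: why are we benchmarking?
    return {"goal": goal}

def plan(study):  # Plan: how are we going to get away with this?
    study["tasks"] = ["find the contact page", "complete the order form"]
    return study

def evaluate(study):  # Evaluate: get out there and get some data
    study["data"] = [{"task_seconds": 95, "completed": True},
                     {"task_seconds": 140, "completed": True}]
    return study

def analyse(study):  # Analyse: make some sense of it
    done = [s for s in study["data"] if s["completed"]]
    study["completion_rate"] = len(done) / len(study["data"])
    return study

def present(study):  # Present: show the results
    print(f"{study['goal']}: {study['completion_rate']:.0%} completion")
    return study

present(analyse(evaluate(plan(establish("Did the new IA help?")))))
```

Keeping the steps as separate stages makes it easy to swap the data-gathering method (lab sessions, remote testing, logging sheets) without touching the analysis.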

Pages 38-39: Oz-IA/2008 UCD Benchmarking Andrew Boyd

Comparing apples to apples

The trouble with apples (and people, and browse behaviour) in benchmarking is adequately and meaningfully comparing them.

Page 40: Oz-IA/2008 UCD Benchmarking Andrew Boyd

Which is quantitatively bigger?

Page 41: Oz-IA/2008 UCD Benchmarking Andrew Boyd

What about these?

Pages 42-45: Oz-IA/2008 UCD Benchmarking Andrew Boyd

Are all apples created equal?

Pages 46-47: Oz-IA/2008 UCD Benchmarking Andrew Boyd

Sample size can drive you nuts
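One way to see why sample size bites: the confidence interval around an observed completion rate stays very wide at the participant counts typical of usability work. A minimal sketch using the normal approximation (the 80% rate is an invented example):

```python
from math import sqrt

# 95% confidence interval half-width around an observed task completion
# rate, at several sample sizes. Normal approximation; illustrative only.
p = 0.8  # observed completion rate (hypothetical)
for n in (5, 10, 30, 100):
    half_width = 1.96 * sqrt(p * (1 - p) / n)
    print(f"n={n:3d}: {p:.0%} +/- {half_width:.0%}")
```

With five participants the "80% completion" figure carries an uncertainty of roughly plus or minus 35 percentage points; even at thirty it is still around 14. That margin is the honest caveat to attach to small-sample benchmark numbers.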

Pages 48-51: Oz-IA/2008 UCD Benchmarking Andrew Boyd

Task completion order

User A: Picks up form from in tray, sets file up on PC, enters form block A data, picks up next form from in tray

User B: Picks up form from in tray, sets file up on PC, enters form block A-B data, drops it in out tray

User C (different office): Picks up form from in tray, sets file up on PC, enters all form block A-D data, drops it in the decision maker’s in tray

Page 52: Oz-IA/2008 UCD Benchmarking Andrew Boyd

Other Lessons Learned (1)

The other S word - Stats

The Art and the Science

Who should do benchmarking?

Which projects are good candidates for benchmarking? Which aren’t?

Pages 53-56: Oz-IA/2008 UCD Benchmarking Andrew Boyd

Other Lessons Learned (2)

People are messy - they do things in their own time, and in their own way.

Logistics are important - availability of key people, getting permission, having the gear together

Not getting hung up on the gear - logging sheets work as well

Page 57: Oz-IA/2008 UCD Benchmarking Andrew Boyd

OK, questions?

http://www.scottberkun.com/essays/27-the-art-of-usability-benchmarking/

http://www.youtube.com/watch?v=MWaRulZbIEQ

Pages 58-59: Oz-IA/2008 UCD Benchmarking Andrew Boyd

Me

Andrew Boyd

[email protected]

facibus on Twitter, Plurk, and SlideShare