Oz-IA/2008 UCD Benchmarking Andrew Boyd

Presentation by Andrew Boyd on UCD Benchmarking at Oz-IA/2008, Sydney, Australia, on 21 September 2008

Transcript of Oz-IA/2008 UCD Benchmarking Andrew Boyd

UCD Benchmarking
How do you like them apples?

Andrew Boyd
Oz-IA/2008

What we’re going to talk about

The case for ROI

A brief introduction to UCD Benchmarking

How does this work for IAs

The basic process

Comparing apples to apples (and other lessons learned)

And we are not talking about

Definitional stuff around stats or IA vs UCD vs IxD/UxD

Multivariate statistical stuff and anything else with too many big words (even though this is potentially very useful)

Anything else that takes more than 5 minutes out of our half an hour today

The case for ROI

Who cares?

Why should I care as an IA?

What should I do about it?

UCD Benchmarking
One way to prove (or disprove!) ROI

Scott Berkun, 2003, Usability Benchmarking

Measuring, and the Measures (Efficiency, Effectiveness, Satisfaction) from ISO 9241

How it differs from traditional/classical/industrial benchmarking, Analytics, automated/tree testing

How it is the same (comparing a system to itself or to a rival/analogue)

A bit on the measures

Efficiency (task completion time, task learning time)

Effectiveness (%successful task completion, %total errors)

Satisfaction (perceived ease of use)

Which one do I use and when?
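To make the measures concrete: they are all numbers you can tally from logged test sessions. The Python sketch below is a minimal illustration with invented field names (task_seconds, completed, errors, ease_rating); it shows one plausible way to compute the figures, not a method prescribed by the talk or by ISO 9241.

```python
# Minimal sketch: tallying the three measures from logged benchmark sessions.
# Field names and data are invented for illustration.
from statistics import mean

sessions = [
    {"task_seconds": 94,  "completed": True,  "errors": 1, "ease_rating": 4},
    {"task_seconds": 130, "completed": False, "errors": 3, "ease_rating": 2},
    {"task_seconds": 71,  "completed": True,  "errors": 0, "ease_rating": 5},
]

# Efficiency: mean task completion time (successful attempts only).
efficiency = mean(s["task_seconds"] for s in sessions if s["completed"])

# Effectiveness: % successful task completion, and total errors.
completion_rate = 100 * sum(s["completed"] for s in sessions) / len(sessions)
total_errors = sum(s["errors"] for s in sessions)

# Satisfaction: mean perceived ease-of-use rating (say, a 1-5 survey item).
satisfaction = mean(s["ease_rating"] for s in sessions)

print(f"Efficiency: {efficiency:.0f} s per completed task")
print(f"Effectiveness: {completion_rate:.0f}% completed, {total_errors} errors")
print(f"Satisfaction: {satisfaction:.1f} / 5")
```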

How does this work for IAs?
Proving that your work has made a difference

Efficiency (is the site faster to use as a result of the improved IA? Is information more findable?)

Effectiveness (is the site measurably better to use for end-to-end information-seeking task completion?)

Satisfaction (are the end users measurably happier in, say, survey results? Do they like the new site more?)
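In practice that means running the same benchmark tasks against the old and the new IA and comparing the measures. A hypothetical before/after sketch, with invented numbers and field names:

```python
# Hypothetical before/after comparison of one benchmark task run against the
# old IA (baseline) and the redesigned IA. All data is invented.
from statistics import mean

baseline = {"task_seconds": [140, 122, 165, 131], "completed": [1, 0, 1, 1], "ease": [3, 2, 3, 4]}
redesign = {"task_seconds": [96, 104, 88, 110],   "completed": [1, 1, 1, 1], "ease": [4, 4, 5, 4]}

def summarise(run):
    return {
        "mean task seconds": mean(run["task_seconds"]),     # efficiency
        "completion %":      100 * mean(run["completed"]),  # effectiveness
        "mean ease (1-5)":   mean(run["ease"]),             # satisfaction
    }

before, after = summarise(baseline), summarise(redesign)
for measure in before:
    print(f"{measure}: {before[measure]:.1f} -> {after[measure]:.1f}")
```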

The basic process

Establish: Why are we benchmarking?

Plan: How are we going to get away with this?

Evaluate: Get out there and get some data

Analyse: Make some sense of it

Present: Show the results

Comparing apples to apples

The trouble with apples (and people, and browse behaviour) in benchmarking is adequately and meaningfully comparing them.

Which is quantitatively bigger?

What about these?

Are all apples created equal?

Sample size can drive you nuts
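One reason it can drive you nuts: with the small participant numbers typical of a benchmark round, the margin of error on something like a task completion rate stays wide. A rough sketch (normal-approximation 95% interval, illustrative assumptions only):

```python
# How the margin of error on an observed completion rate shrinks with the
# number of participants, using a simple Wald (normal approximation) interval.
from math import sqrt

p = 0.8  # observed completion rate, e.g. 8 of 10 participants succeeded

for n in (5, 10, 20, 50, 100):
    margin = 1.96 * sqrt(p * (1 - p) / n)
    print(f"n={n:3d}: {p:.0%} +/- {margin:.0%}")
```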

Task completion order
User A: Picks up form from in tray, sets file up on PC, enters form block A data, picks up next form from in tray

User B: Picks up form from in tray, sets file up on PC, enters form block A-B data, drops in out tray

User C (different office): Picks up form from in tray, sets file up on PC, enters all form block A-D data, drops in decision maker's in tray
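The three users complete different spans of the same form, so their raw end-to-end timings are not comparable. One way back to apples vs apples, sketched here with invented data, is to score only the sub-task everyone performed:

```python
# Score only the form blocks that every observed user actually entered,
# so timings are comparable across offices. Data is invented for illustration.
sessions = [
    {"user": "A", "blocks": {"A": 45}},
    {"user": "B", "blocks": {"A": 50, "B": 40}},
    {"user": "C", "blocks": {"A": 42, "B": 38, "C": 55, "D": 30}},  # seconds per block
]

shared = set.intersection(*(set(s["blocks"]) for s in sessions))  # -> {"A"}

for s in sessions:
    comparable = sum(s["blocks"][b] for b in shared)
    print(f"User {s['user']}: {comparable} s on shared blocks {sorted(shared)}")
```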

Other Lessons Learned (1)

The other S word - Stats

The Art and the Science

Who should do benchmarking?

Which projects are good candidates for benchmarking? What aren’t?

Other Lessons Learned (2)

People are messy - they do things in their own time, and in their own way.

Logistics are important - availability of key people, getting permission, having the gear together

Not getting hung up on the gear - logging sheets work just as well

OK, questions?

http://www.scottberkun.com/essays/27-the-art-of-usability-benchmarking/

http://www.youtube.com/watch?v=MWaRulZbIEQ

Me

Andrew Boyd

facibus@gmail.com

facibus@twitter

facibus@Plurk

facibus@slideshare