Benchmarking Using SUS
Benchmarking Using SUS: Kick Ass UX Hour
AUGUST 2, 2016
SUS / System Usability Scale
About SUS
A 10 question survey rating perceived system usability.
1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this
system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.
Items alternate between positively and negatively worded statements. Users rate each statement on a 5-point Likert scale, from 1 (strongly disagree) to 5 (strongly agree).
Launched in 1986 by John Brooke at DEC
Free!
• Correlated with hundreds of
systems and thousands of users
of known usability
• Better reliability than
commercial surveys.
SUS is Reliable
— measuringu.com
measuringu.com maintains a database
of 500+ studies (for a fee). You can
drill down by software type and run
custom analyses.
SUS is Benchmarked
• Average score is 68
• For consumer software average is 72
SUS can be reliably used with low numbers of users (10 - 15)
SUS is not diagnostic and doesn’t correlate with actual task
performance
Let’s Try It:
https://docs.google.com/a/cakeandarrow.com/spreadsheets/d/1aMZ4uhX1Bz-U05oZcxOX2m_JXtxHT_72tAFGn8XBDgc/edit?usp=sharing
Scoring SUS
Scoring normalizes both odd and even questions to a 0–4 contribution
1. I think that I would like to use this system frequently. [4] - 1 = 3
2. I found the system unnecessarily complex. 5 - [2] = 3
3. I thought the system was easy to use. [3] - 1 = 2
4. I think that I would need the support of a technical person to be able to use this
system. 5 - [2] = 3
5. I found the various functions in this system were well integrated. [1] - 1 = 0
6. I thought there was too much inconsistency in this system. 5 - [5] = 0
7. I would imagine that most people would learn to use this system very quickly.
[3] - 1 = 2
8. I found the system very cumbersome to use. 5 - [4] = 1
9. I felt very confident using the system. [1] - 1 = 0
10. I needed to learn a lot of things before I could get going with this system. 5 - [1] = 4
18 * 2.5 = 45 SUS Score
Odd questions: [Score] - 1
Even questions: 5 - [Score]
[Total] * 2.5 = SUS Score
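The scoring rules above can be sketched in a few lines of Python (a minimal illustration of the standard SUS arithmetic; the function name is ours, not part of SUS):

```python
def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses.

    responses[0] is question 1, responses[9] is question 10.
    Odd-numbered questions (positive): contribution = score - 1.
    Even-numbered questions (negative): contribution = 5 - score.
    Summed contributions (0-40) are scaled by 2.5 onto 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected ten responses, each from 1 to 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd question
        for i, r in enumerate(responses)
    )
    return total * 2.5

# The worked example on this slide, [4, 2, 3, 2, 1, 5, 3, 4, 1, 1], yields 45.0
```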
Not a percentage!
80.3 is an 'A' and also correlates
with NPS Promoters
Average score is 68
51 is an F (bottom 15% of results)
Scores are 0 to 100
— measuringu.com
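The reference points above can be turned into a rough bucketing function. This is a hedged sketch using only the three thresholds named on these slides (80.3 = A, 68 = average, 51 = F); measuringu.com's full curved grading scale is finer-grained:

```python
def sus_grade(score):
    """Rough interpretation bucket for a 0-100 SUS score.

    Uses only the three reference points cited on these slides;
    the full measuringu.com grading curve has more grades.
    """
    if score >= 80.3:
        return "A"               # also roughly the NPS Promoter zone
    if score >= 68:
        return "above average"   # 68 is the overall average score
    if score > 51:
        return "below average"
    return "F"                   # bottom ~15% of results
```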
• Helps provide a frame of
reference to stakeholders.
• Scale can also be phrased this
way to survey takers, though
it’s usually better to stick with
the original scale.
Adjectives to map scores
— measuringu.com
Why Perceived Usability?
Perceived Usability != Actual Usability
But usability includes satisfaction, not just success: attitude as well as action.
PERCEIVED USABILITY IS IMPORTANT
High perceived usability impacts first impressions.
User impressions are impacted to varying degrees by
subsequent usage and actual usability.
PERCEIVED USABILITY IS IMPORTANT
High perceived usability increases trust and loyalty, and trust
and loyalty are correlated with perceptions of usability.
PERCEIVED USABILITY IS IMPORTANT
SUS and Customer Loyalty
How likely are you to recommend this product to a friend or
colleague?
MEASURING CUSTOMER LOYALTY - NPS
A SUS score of 70 corresponds to an approximate likely-to-recommend rating of 7, and a SUS score of at
least 88 is needed to be a promoter (9+)
SUS CORRELATES WITH NPS
— measuringu.com
• Promoters have an average SUS
score of 82
• Detractors have an average
score of 67
Scores are 0 to 100
— measuringu.com
Can predict between 30% and 60% of customer loyalty.
Highly correlated with loyalty scores and can be used to
predict NPS.
PERCEIVED USABILITY IS IMPORTANT
How & When to Use It
Give the questionnaire to each user via a Google
Spreadsheet or TryMyUI. Benchmark against other systems
and against subsequent tests.
http://www.trymyui.com/blog/2014/10/03/the-system-usability-scale-a-walk-through-our-newest-user-testing-feature/
AFTER / DURING EVERY USER TEST
Get a pulse on overall usability to make the case for more
testing or to assess what you’re dealing with.
IF YOU CAN'T DO A FULL USER TEST
Use site intercept or email-based surveys to establish a
baseline for an existing software system
IN A SURVEY
Coda: SEQ
• Measures satisfaction with
single-task performance
• Versus SUS, which measures
overall perceived satisfaction
• Ask after every major test task.
• Helps identify which tasks
caused issues.
SEQ
— measuringu.com
• Can we establish a C&A SUS database as a part of our usability offering as a
differentiator?
• SUS scores categorized according to level of fidelity of thing we’re testing, client
industry, user type, etc.
• Over time (6 months?) could have a decent data set to write about, tout to clients,
actually use!
Ideas, Thoughts & Questions
Limitations
BUT...
SUS doesn’t break its measure down granularly into sub-factors like trust, learnability, or different UI factors (e.g., terminology vs. functionality).
There are other survey tools that do this, often commercial.
• SUMI http://sumi.uxp.ie
• QUIS http://lap.umd.edu/quis/
• SUPR-Q http://www.measuringu.com/products/suprq
• and many other custom / proprietary ones . . .
Check out measuringu.com