Evaluation of eLearning


Moving beyond level 1 and level 2 evaluations. Using usability methods to improve elearning.


Evaluation of eLearning

Michael M. Grant, PhD

Michael M. Grant 2010

Kirkpatrick’s Levels

Level 5 (ROI): the investment in the training compared to its relative benefits to the organization and/or its productivity and revenue.

Percentage of organizations evaluating at each level (ASTD, 2005):

Level 1: 91.3%

Level 2: 53.9%

Level 3: 22.9%

Level 4: 7.6%

Level 5 (ROI): 2.1%

Kirkpatrick (& Phillips) Model: 92% vs. 17.9% (ASTD, 2009)

FORMATIVE EVALUATION

What’s the purpose?

A focus on improvement during development.

Level 2 Evaluations

Appeal

Effectiveness

Efficiency

Data Collection Matrix

Evaluation questions:

1. What are the logistical requirements?

2. What are user reactions?

3. What are trainer reactions?

4. What are expert reactions?

5. What corrections must be made?

6. What enhancements can be made?

Methods (number of the six questions each addresses in the matrix):

Anecdotal records (5)

User questionnaires (4)

User interviews (4)

User focus groups (3)

Usability observations (4)

Online data collection (2; see the logging sketch after this matrix)

Expert reviews (3)
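Of these methods, online data collection is the one the courseware can carry out for itself during a formative tryout. A minimal sketch, assuming a plain CSV log and illustrative field names rather than any particular LMS API:

```python
# Minimal interaction logger for online data collection during a formative
# tryout: each call appends one event (learner, screen, action, timestamp)
# to a CSV file that can be reviewed alongside observation notes.
# Field names here are illustrative, not tied to any particular LMS.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("interaction_log.csv")  # hypothetical location

def log_event(learner_id: str, screen: str, action: str, detail: str = "") -> None:
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "learner_id", "screen", "action", "detail"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         learner_id, screen, action, detail])

# Example: a learner retries the practice quiz on screen 4
log_event("learner-07", "module2/screen4", "quiz_retry", "attempt=2")
```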

“Vote early and often.”

The sooner formative evaluation is conducted during development, the more likely that substantive improvements will be made and costly errors avoided. (Reeves & Hedberg, 2003, p. 142)

“Experts are anyone with specialized knowledge that is relevant to the design of your ILE.”

(Reeves & Hedberg, 2003, p. 145)

Expert Review

Interface Review Guidelines

from http://it.coe.uga.edu/~treeves/edit8350/UIRF.html

USER REVIEW

Observations from one-on-ones and small groups

What Is Usability?

“The most common user action on a Web site is to flee.”

— Edward Tufte

“at least 90% of all commercial Web sites are overly difficult to use….the average outcome of Web usability studies is that test users fail when they try to perform a test task on the Web. Thus, when you try something new on the Web, the expected outcome is failure.”

— Jakob Nielsen

Nielsen’s Web Usability Rules

1. Visibility of system status

2. Match between system and real world

3. User control and freedom

4. Consistency and standards

5. Error prevention

6. Recognition rather than recall

7. Flexibility and efficiency of use

8. Help users recognize, diagnose, and recover from errors

9. Help and documentation

10. Aesthetic and minimalist design

Ease of learning - How fast can a user who has never seen the user interface before learn it sufficiently well to accomplish basic tasks?

Efficiency of use - Once an experienced user has learned to use the system, how fast can he or she accomplish tasks?

Memorability - If a user has used the system before, can he or she remember enough to use it effectively the next time or does the user have to start over again learning everything?

Error frequency and severity - How often do users make errors while using the system, how serious are these errors, and how do users recover from these errors?

Subjective satisfaction - How much does the user like using the system?

Two Major Methods to Evaluate Usability

Heuristic Evaluation Process

1. Several experts individually compare a product to a set of usability heuristics.

2. Violations of the heuristics are evaluated for their severity and extent, and solutions are suggested.

3. At a group meeting, violation reports are categorized and assigned.

4. The resulting report gives average severity ratings, extents, heuristics violated, and a description of the opportunity for improvement.
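To make step 4 concrete, here is a minimal sketch of rolling individual violation reports up into average severity ratings per heuristic. The report entries are illustrative; severity uses Nielsen’s 0-4 rating scale.

```python
# Roll individual experts' violation reports up into average severity per
# heuristic (step 4 above). Severity uses Nielsen's 0-4 scale; the entries
# below are illustrative examples, not real findings.
from collections import defaultdict
from statistics import mean

violations = [
    # (heuristic, severity 0-4, description)
    ("Visibility of system status", 3, "No progress bar during module load"),
    ("Visibility of system status", 2, "Quiz submission gives no confirmation"),
    ("Error prevention", 4, "Back button discards unsaved quiz answers"),
    ("Consistency and standards", 1, "Two different icons used for 'next'"),
]

by_heuristic = defaultdict(list)
for heuristic, severity, description in violations:
    by_heuristic[heuristic].append(severity)

for heuristic, severities in sorted(by_heuristic.items()):
    print(f"{heuristic}: {len(severities)} violation(s), "
          f"average severity {mean(severities):.1f}")
```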

Heuristic Evaluation Comparisons

Advantages

– Quick: no need to find or schedule users

– Easy to review problem areas many times

– Inexpensive: no fancy equipment

Disadvantages

– Validity: no users involved

– Finds fewer problems (40-60% fewer?)

– Getting good experts

– Building consensus among experts

Heuristic Evaluation Report


USER TESTING

User Testing

People whose characteristics (or profiles) match those of the Web site’s target audience perform a sequence of typical tasks using the site.

Examines:

– Ease of learning

– Speed of task performance

– Error rates

– User satisfaction

– User retention over time

Image from (nz)dave at http://www.flickr.com/photos/nzdave/491411546/

Elements of User Testing

Define target users

Have users perform representative tasks

Observe users

Report results
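A minimal sketch of the reporting step, assuming hand-entered observation records (participant, task, completion, errors, time on task); the data and structure here are illustrative stand-ins for an observer’s notes or screen recordings:

```python
# Summarize user-testing observations per task: completion rate, total
# errors, and average time on task. The records below are illustrative.
from statistics import mean

observations = [
    # (participant, task, completed, errors, seconds)
    ("P1", "Enroll in course", True, 0, 95),
    ("P2", "Enroll in course", True, 2, 160),
    ("P3", "Enroll in course", False, 3, 240),
    ("P1", "Submit quiz", True, 1, 70),
    ("P2", "Submit quiz", True, 0, 55),
]

tasks = sorted({obs[1] for obs in observations})
for task in tasks:
    rows = [obs for obs in observations if obs[1] == task]
    completion = sum(obs[2] for obs in rows) / len(rows)
    print(f"{task}: {completion:.0%} completed, "
          f"{sum(obs[3] for obs in rows)} errors total, "
          f"avg {mean(obs[4] for obs in rows):.0f}s on task")
```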

Why Multiple Evaluators?

Single evaluator achieves poor results

– Only finds about 35% of usability problems

– 5 evaluators find more than 75%

Why only 5 Users?

(Nielsen, 2000)
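The “five users” figure comes from the problem-discovery model in the cited Alertbox article: found(n) = N * (1 - (1 - L)^n), where L is the share of problems a single test user uncovers (about 31% in the studies Nielsen reports). A quick sketch of the curve:

```python
# Nielsen's (2000) problem-discovery model: found(n) = N * (1 - (1 - L)**n),
# where L is the share of usability problems one test user uncovers
# (roughly 31% in the projects Nielsen reports).
L = 0.31

for n in (1, 3, 5, 10, 15):
    share = 1 - (1 - L) ** n
    print(f"{n:2d} users -> about {share:.0%} of problems found")
```

With L at 31%, one user finds about a third of the problems and five users find roughly 85%, which is why additional users past five yield sharply diminishing returns.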

Reporting User Testing

Overall goals/objectives

Methodology

Target profile

Testing outline with test script

Specific task list to perform

Data analysis & results

Recommendations

RECENT METHODS FOR USER TESTING

10 Second Usability Test

1. Disable stylesheets

2. Check for the following:

1. Semantic markup

2. Logical organization

3. Only images related to content appear
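As a rough, automatable proxy for this manual check, the sketch below parses a page’s HTML and counts semantic elements and images missing alt text. The sample HTML is a stand-in for the course page’s actual source, and the real ten-second test is still done by eye with stylesheets disabled.

```python
# Rough proxy for the "styles off" check: parse a page's HTML and count
# semantic elements and images without alt text. The HTML below is a
# stand-in; in practice you would feed in the course page's actual source.
from html.parser import HTMLParser

SEMANTIC_TAGS = {"h1", "h2", "h3", "main", "nav", "header", "footer",
                 "article", "section", "ul", "ol", "table"}

class SemanticAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tag_counts = {}
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC_TAGS:
            self.tag_counts[tag] = self.tag_counts.get(tag, 0) + 1
        if tag == "img" and not dict(attrs).get("alt"):
            self.images_missing_alt += 1

sample_html = """
<main><h1>Module 2: Evaluating eLearning</h1>
<nav><ul><li>Screen 1</li><li>Screen 2</li></ul></nav>
<section><img src="chart.png"></section></main>
"""

audit = SemanticAudit()
audit.feed(sample_html)
print("Semantic elements found:", audit.tag_counts)
print("Images without alt text:", audit.images_missing_alt)
```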

ALPHA, BETA & FIELD TESTING

Akin to prototyping

References & Acknowledgements

American Society for Training & Development. (2009). The value of evaluation: Making training evaluations more effective. Author.

Follett, A. (2009, October 9). 10 qualitative tools to improve your web site. Instant Shift. Retrieved March 18, 2010 from http://www.instantshift.com/2009/10/08/10-qualitative-tools-to-improve-your-website/

Nielsen, J. (2000, March 19). Why you only need to test with 5 users. Jakob Nielsen’s Alertbox. Retrieved from http://www.useit.com/alertbox/20000319.html

Reeves, T.C. (2004, December 9). Design research for advancing the integration of digital technologies into teaching and learning: Developing and evaluating educational interventions. Paper presented to the Columbia Center for New Media Teaching and Learning, New York, NY. Available at http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt

Reeves, T.C. & Hedberg, J.C. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.