Transcript of Chap006

Page 1: Chap006

Chapter 6 Training Evaluation

Copyright © 2010 by the McGraw-Hill Companies, Inc. All rights reserved. McGraw-Hill/Irwin

Page 2: Chap006

6-2

Training effectiveness - the benefits that the company and the trainees receive from training.

Training outcomes or criteria - measures that the trainer and the company use to evaluate training programs.

Introduction

Page 3: Chap006

6-3

Training evaluation - the process of collecting the outcomes needed to determine if training is effective.

Evaluation design - collection of information, including whom, what, when, and how, for determining the effectiveness of the training program.

Introduction (cont.)

Page 4: Chap006

6-4

Companies make large investments in training and education and view them as a strategy for success; they expect the outcomes of training to be measurable.

Training evaluation provides the data needed to demonstrate that training does provide benefits to the company. It involves formative and summative evaluation.

Reasons for Evaluating Training

Page 5: Chap006

6-5

Formative evaluation - takes place during program design and development. It helps ensure that the training program is well organized and runs smoothly, and that trainees learn and are satisfied with the program.

It provides information about how to make the program better; it involves collecting qualitative data about the program.

Reasons for Evaluating Training (cont.)

Page 6: Chap006

6-6

Formative evaluation:

Pilot testing - the process of previewing the training program with potential trainees and managers or with other customers.

Reasons for Evaluating Training (cont.)

Page 7: Chap006

6-7

Summative evaluation - determines the extent to which trainees have changed as a result of participating in the training program. It may include measuring the monetary benefits that the company receives from the program.

It involves collecting quantitative data.

Reasons for Evaluating Training (cont.)

Page 8: Chap006

6-8

A training program should be evaluated:

To identify the program’s strengths and weaknesses.

To assess whether content, organization, and administration of the program contribute to learning and the use of training content on the job.

To identify which trainees benefited most or least from the program.

Reasons for Evaluating Training (cont.)

Page 9: Chap006

6-9

A training program should be evaluated:

To gather data to assist in marketing training programs.

To determine the financial benefits and costs of the program.

To compare the costs and benefits of:

training versus non-training investments.

different training programs to choose the best program.

Reasons for Evaluating Training (cont.)

Page 10: Chap006

6-10

Figure 6.1 - The Evaluation Process

Page 11: Chap006

6-11

Table 6.1 - Kirkpatrick’s Four-Level Framework of Evaluation Criteria

Page 12: Chap006

6-12

Outcomes Used in the Evaluation of Training Programs

The hierarchical nature of Kirkpatrick’s framework suggests that higher level outcomes should not be measured unless positive changes occur in lower level outcomes.

The framework implies that changes at a higher level are more beneficial than changes at a lower level.

Page 13: Chap006

6-13

Outcomes Used in the Evaluation of Training Programs (cont.)

Kirkpatrick’s framework criticisms:

Research has not found that each level is caused by the level that precedes it in the framework, nor does evidence suggest that the levels differ in importance.

The approach does not take into account the purpose of the evaluation.

It assumes that outcomes can and should be collected in an orderly manner, that is, measures of reaction followed by measures of learning, behavior, and results.

Page 14: Chap006

6-14

Table 6.2 - Evaluation Outcomes

Page 15: Chap006

6-15

Outcomes Used in the Evaluation of Training Programs (cont.)

Reaction outcomes - collected at the program’s conclusion.

Cognitive outcomes - they do not help to determine if the trainee will actually use decision-making skills on the job.

Skill-based outcomes - the extent to which trainees have learned skills can be evaluated by observing their performance in work samples such as simulators.

Page 16: Chap006

6-16

Outcomes Used in the Evaluation of Training Programs (cont.)

Return on investment:

Direct costs - salaries and benefits for all employees involved in training; program materials and supplies; equipment or classroom rentals or purchases; and travel costs.

Indirect costs - costs not related directly to the design, development, or delivery of the training program.

Benefits - the value that the company gains from the training program.

Page 17: Chap006

6-17

Determining Whether Outcomes are Appropriate

Criteria:

Relevance - the extent to which training outcomes are related to the learned capabilities emphasized in the training program.

Criterion contamination - the extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions.

Criterion deficiency - the failure to measure training outcomes that were emphasized in the training objectives.

Reliability - the degree to which outcomes can be measured consistently over time.

Discrimination - the degree to which trainees’ performance on the outcome actually reflects true differences in performance.

Practicality - the ease with which the outcome measures can be collected.
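The reliability criterion above (consistency of a measure over time) is commonly checked by correlating scores from two administrations of the same outcome measure. A minimal sketch, using made-up trainee scores and a plain Pearson correlation:

```python
# Test-retest reliability: correlate scores from two administrations of the
# same outcome measure. All scores below are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [70, 82, 65, 90, 75, 88]  # scores at first administration
time2 = [72, 80, 68, 91, 73, 85]  # same trainees, retested later

print(f"test-retest reliability: {pearson_r(time1, time2):.2f}")
```

A value near 1.0 indicates the outcome is being measured consistently; a low value suggests the measure itself is unstable.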

Page 18: Chap006

6-18

Figure 6.2 - Criterion Deficiency, Relevance, and Contamination

Page 19: Chap006

6-19

Figure 6.4 - Training Program Objectives and Their Implications for Evaluation

Page 20: Chap006

6-20

Evaluation Designs

Threats to validity - factors that will lead an evaluator to question either:

Internal validity - the believability of the study results.

External validity - the extent to which the evaluation results are generalizable to other groups of trainees and situations.

Page 21: Chap006

6-21

Table 6.7 - Threats to Validity

Page 22: Chap006

6-22

Methods to Control for Threats to Validity

Pretests and posttests - a comparison of the posttraining and pretraining measures can indicate the degree to which trainees have changed as a result of training.

Random assignment - assigning employees to the training or comparison group on the basis of chance.

It helps to reduce the effects of employees dropping out of the study, and differences between the training group and comparison group in ability, knowledge, skill, or other personal characteristics.

Evaluation Designs (cont.)

Page 23: Chap006

6-23

Methods to Control for Threats to Validity

Using a comparison group - employees who participate in the evaluation study but do not attend the training program.

It helps to rule out the possibility that changes found in the outcome measures are due to factors other than training.

Evaluation Designs (cont.)
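Combining pretests and posttests with a comparison group lets the evaluator subtract out change that would have occurred without training. A minimal sketch of that arithmetic, using hypothetical scores:

```python
# Pretest/posttest gains for a trained group versus a comparison group.
# The net effect subtracts change that occurred without training,
# isolating the change attributable to the program.
# All scores below are hypothetical.

def mean(scores):
    return sum(scores) / len(scores)

# Pre- and post-training scores on the same outcome measure
trained_pre, trained_post = [60, 65, 58, 70], [78, 80, 75, 86]
comparison_pre, comparison_post = [61, 63, 59, 68], [64, 66, 61, 70]

trained_gain = mean(trained_post) - mean(trained_pre)            # change with training
comparison_gain = mean(comparison_post) - mean(comparison_pre)   # change without training
net_effect = trained_gain - comparison_gain                      # change attributable to training

print(f"trained gain:    {trained_gain:.2f}")
print(f"comparison gain: {comparison_gain:.2f}")
print(f"net effect:      {net_effect:.2f}")
```

If the comparison group improves almost as much as the trained group, the apparent training effect is likely due to other factors (e.g., experience on the job), which is exactly the threat the comparison group is there to rule out.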

Page 24: Chap006

6-24

Table 6.8 - Comparison of Evaluation Designs

Page 25: Chap006

6-25

Time series - training outcomes are collected at periodic intervals both before and after training.

It allows an analysis of the stability of training outcomes over time.

Reversal - time period in which participants no longer receive the training intervention.

Types of Evaluation Designs

Page 26: Chap006

6-26

Table 6.12 - Factors That Influence the Type of Evaluation Design

Page 27: Chap006

6-27

Determining Return on Investment (ROI)

Cost-benefit analysis - process of determining the economic benefits of a training program using accounting methods that look at training costs and benefits.

ROI should be limited to certain training programs, because it can be costly to determine.

Page 28: Chap006

6-28

Determining Return on Investment (ROI) (cont.)

Determining costs - methods for comparing costs of alternative training programs include the resource requirements model and accounting.

Determining benefits - methods include:

technical, academic, and practitioner literature.

pilot training programs and observance of successful job performers.

estimates by trainees and their managers.

Page 29: Chap006

6-29

Determining Return on Investment (ROI) (cont.)

To calculate ROI, divide benefits by costs. The ROI gives an estimate of the dollar return expected from each dollar invested in training.
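The calculation above is a single division. A minimal sketch, with hypothetical cost and benefit figures:

```python
# ROI as defined in the text: benefits divided by costs, giving the
# dollar return expected from each dollar invested in training.
# The figures below are hypothetical.

def roi(benefits, costs):
    """Return on investment: total benefits divided by total costs."""
    return benefits / costs

benefits = 220_000  # value the company gains from the program
costs = 50_000      # direct plus indirect training costs

print(f"ROI: {roi(benefits, costs):.1f}")        # dollars returned per dollar invested
print(f"Net benefit: ${benefits - costs:,}")
```

Here each dollar invested returns $4.40; some practitioners instead report net ROI, (benefits − costs) ÷ costs, so it is worth stating which form a given figure uses.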

Page 30: Chap006

6-30

Table 6.13 - Determining Costs for a Cost Benefit Analysis

Page 31: Chap006

6-31

Determining Return on Investment (ROI) (cont.)

Utility analysis - a cost-benefit analysis method that involves assessing the dollar value of training based on:

estimates of the difference in job performance between trained and untrained employees.

the number of individuals trained.

the length of time a training program is expected to influence performance.

the variability in job performance in the untrained group of employees.
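The four inputs above correspond to the terms of a standard utility formula. The slides do not give the formula itself, so the Brogden-Cronbach-Gleser form used below is an assumption, and all figures are hypothetical:

```python
# Utility analysis sketch. Assumed formula (Brogden-Cronbach-Gleser,
# not stated in the slides):
#   delta_U = T * N * d_t * SD_y - total_cost

def utility(T, N, d_t, SD_y, total_cost):
    """Estimated dollar value of a training program.

    T          -- years the training is expected to influence performance
    N          -- number of individuals trained
    d_t        -- difference in job performance between trained and
                  untrained employees, in standard-deviation units
    SD_y       -- variability (standard deviation) of job performance
                  in the untrained group, in dollars
    total_cost -- total cost of training all N employees
    """
    return T * N * d_t * SD_y - total_cost

# Hypothetical figures: 2 years, 50 trainees, effect size 0.3,
# $10,000 performance SD, $1,500 cost per trainee
value = utility(T=2, N=50, d_t=0.3, SD_y=10_000, total_cost=50 * 1_500)
print(f"estimated utility: ${value:,.0f}")
```

Note how the estimate scales with every input: doubling the number of trainees or the duration of the effect doubles the performance term, while costs grow only with the number trained.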

Page 32: Chap006

6-32

Table 6.16 - Training Metrics