Kirkpatrick Four Levels



Kirkpatrick created the most familiar taxonomy of a four-step approach to evaluation (Kirkpatrick, 1959/60), now referred to as a model of four levels of evaluation (Kirkpatrick, 1994). It is one of the most widely accepted and implemented models used to evaluate training interventions (Russ-Eft and Preskill, 2001). Table 6 below shows Kirkpatrick's four levels of measurement:

Table 6. Kirkpatrick's (1959/60, 1994) four levels of evaluation

1. Reaction to the intervention
2. Learning attributed to the intervention
3. Application of behaviour changes on the job
4. Business results realised by the organization

This simple model is well recognised, and Russ-Eft and Preskill (2001) note that the ubiquity of Kirkpatrick's model stems from its simplicity and understandability: having reviewed 57 journal articles in the training, performance, and psychology literature that discussed training evaluation models, they found that 44 (77%) of these included Kirkpatrick's model. They go on to note that only in recent years, from 1996 onwards, have several alternative models been developed.

Kirkpatrick (1998b), in an article written in 1977, considered how evaluation at his four levels provided evidence or proof of training effectiveness, and declared that such proof of effectiveness requires an experimental design using a control group to eliminate other possible factors affecting the measurement of outcomes from a training programme. "Without such a design, [the model] can only provide evidence of training effectiveness, but not proof" (Kirkpatrick, 1998a).

Kirkpatrick's model is the basis of Phillips' (1997) ROI model, often referred to as the fifth level, and strongly endorsed as a preferred approach by the American Society for Training and Development (ASTD) and its sister organisation, the ROI Network (www.astd.org). Others suggest that it would be possible and desirable to go beyond business impact and ROI and consider societal impact (Watkins et al., 1998).

Whilst widely used, Kirkpatrick's model is not without criticism. Alliger and Janak (1989) and Holton (1996) discuss the flaws of the model in detail; in essence, their critique is that the model has only limited use in education because it lacks explanatory power. The model is useful in addressing broadly what happens, but not why it has happened.
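As an aside on Phillips' fifth level: Phillips expresses return on investment as net programme benefits divided by programme costs, multiplied by 100. The short Python sketch below is only an illustration of that arithmetic; the function name and the benefit and cost figures are hypothetical and are not taken from Phillips or the ASTD.

def phillips_roi(programme_benefits: float, programme_costs: float) -> float:
    """Return-on-investment percentage for a training programme.

    Illustrates the Phillips-style formula:
        ROI (%) = (net benefits / costs) * 100
    where net benefits = monetised programme benefits - programme costs.
    """
    net_benefits = programme_benefits - programme_costs
    return (net_benefits / programme_costs) * 100


# Hypothetical figures, for illustration only: a programme costing 80,000
# that is judged to have produced 120,000 of monetised benefit.
if __name__ == "__main__":
    roi = phillips_roi(programme_benefits=120_000, programme_costs=80_000)
    print(f"ROI = {roi:.0f}%")  # prints: ROI = 50%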

A useful question to ask about using the Kirkpatrick (or Phillips) model is: for whom is the evaluation being conducted? At the first level of Reaction, usually called "happy sheets", it is often the trainers who are most interested to discover how much they were liked, or the client (the persons accountable for the training investment), who frequently use this as the only measure of training effectiveness (Thompson et al., 2002; Sugrue and Kim, 2004). This does not mean that reaction to training is unimportant; however, if its purpose is to demonstrate effectiveness, it has questionable validity. Level 2, learning, is clearly of interest to the trainees and trainers and may be of interest to others as well. Behaviour change on the job is likely to be of interest to the trainees' managers and to trainers as well, particularly if they are interested in understanding the impact of training. Some argue, though, that learning and behaviour change occur only after failure (Schank, 1997), which is rarely an enjoyable experience in itself, and hence the link from reaction to learning to behaviour change may not always be in place. On a personal note, I recall vividly not enjoying Latin class in school at all, yet to this day I can recite any number of verb conjugations that I have never used in real life.

Kirkpatrick's model assumes that the levels represent a causal chain, such that positive reactions lead to greater learning, which in turn produces greater transfer and hence more positive business impact or results. Kirkpatrick is, however, vague about the actual nature of the linkages; his writings do imply that a simple causal relationship exists between the levels of evaluation (Holton, 1996).

Other authors suggest that Kirkpatrick's model is incomplete for other reasons.

    Systems model

The systems model school of evaluation falls into the tradition of the behavioural objectives approach (Russ-Eft and Preskill, 2001) and is similarly scientific to the experimental school, though more pragmatic. There are three main features of the systems model: a starting point in objectives, an emphasis on identifying the outcomes of training, and a stress on providing feedback about these outcomes to those involved in providing training inputs (Easterby-Smith, 1994).

Evaluation here assesses the total value of training in social as well as financial terms. Hamblin (1974) suggests that evaluating in social as well as financial terms is overambitious.

Also widely referenced, Hamblin (1974) devised a five-level model (Figure 5) similar to Kirkpatrick's. Hamblin adds a fifth level that measures "ultimate value" variables of human good.

Figure 5. Hamblin's (1974) five-level model: training event, reactions effects, learning effects, job behaviour effects, organisation effects, ultimate value effects.

An important feature of Hamblin's work is the emphasis on measurement of outcomes from training at different levels. It assumes that any training event will, or can, lead to a chain of consequence. Hamblin suggests that it would be unwise to conclude from an observed change at one of the higher levels of effect that this was due to a particular training intervention, unless one has followed the chain of causality through the intervening levels of effect. Should a change in job behaviour, for example, be observed, the constructivist take on this would be to ask the individual for his own views of why he was now behaving in a different way, and then compare this interpretation with the views of one or two close colleagues.

The stress on feedback to trainers and decision makers in the training process is an important feature of the systems model school. Warr et al. (1970) take a very pragmatic view of evaluation, suggesting that it should be of help to the trainer in making decisions about a particular programme as it is happening - reflecting in action to continuously improve the process as it happens (Schon, 1983). Rackham (1973) builds on this earlier work (Warr et al., 1970), making a further distinction between feedback that assists decisions about current programmes and feedback that can contribute to decisions about future programmes. Rackham notes that the process of feedback from one programme to the next resulted in clear improvements when the programmes were non-participative in nature, but that in programmes involving a lot of participation there was no apparent improvement after feedback to improve future programmes.

Feedback, as an important aspect of evaluation, is further developed by Burgoyne and Singh (1977), who distinguish between evaluation as feedback and feedback adding to the body of knowledge. The former they saw as perishable data of momentary value directly to decision-making, and the latter as generating permanent and enduring knowledge about education and training processes.

Burgoyne and Singh relate evaluative feedback to a range of decisions about training in the broadest sense. Figure 6 shows an adaptation of their model with examples of decisions at each level. This not only highlights the critical importance of the evaluation and feedback process, but also the level of importance of each decision to the training and development process.

The systems model has been widely accepted, especially in the U.K., but there are a number of problems and limitations with the systems model approach to evaluation, concerning feedback, the emphasis on outcomes, and the establishment of objectives. Easterby-Smith (1994) suggests that feedback, as data provided from an evaluation of a past event, can only contribute marginally to decisions about the future because of the legacy of the past training event. Thus, feedback can highlight incremental improvements based on a previous design, but cannot indicate when radical change is needed.

The emphasis on outcomes provides a good and logical basis for evaluation, but it represents a mechanistic view of learning. In the extreme, this suggests that learning consists of facts and knowledge being placed in people's heads, and that this becomes internalised and gradually incorporated into behavioural responses. Indeed, this criticism is often levelled at many forms of e-Learning (Schank, 2002).