Measurement and confidence in OD
How do we know what works?
Ilmo van der Löwe
CHIEF SCIENCE OFFICER
iOpener Institute for People and Performance
Lord Kelvin
PHYSICIST
"To measure is to know."
• OD interventions must be measured
– Did the intervention have an impact?
– Were the effects positive or negative?
– What were the success factors?
A simple example
• Question:
– Does training managers create more productive workers?
• Intervention:
– Train 10 managers to be better leaders
• Measurement plan:
– Measure the productivity of the managers' direct reports before and after the training (a total of 400 people)
Plan #1
Timeline: pre-intervention measurement → training → training put into practice → post-intervention measurement
• If direct reports are more productive at work in the end, does it mean that the training worked?
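Plan #1 boils down to a single before/after subtraction. A minimal sketch, with numbers invented for illustration:

```python
# Hypothetical mean productivity of the 400 direct reports before and
# after their managers' training (numbers invented for illustration).
pre_mean = 62.0
post_mean = 70.0

# Plan #1's naive reading: the whole change is credited to training.
naive_training_effect = post_mean - pre_mean
print(naive_training_effect)  # 8.0
```

As the next slides argue, this 8-point change is not yet evidence that the training worked.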
Not necessarily...
• Increased scores could be caused by:
– Economy getting better, local team winning championship, seasonal weather differences, a friendly new hire...
• Decreased scores could be caused by:
– Fear of layoffs, the coffee machine being broken, serious injuries to team members, recession...
Change over time
• Outside factors other than training can change scores
• Mere change in scores is not evidence of efficacy
– Measurement must take into account outside factors
Control group
• Revised plan
– Include a control group that is similar to the experimental group in all aspects except the training
• Ideally: same location, same work hours, same work, same tenure, same seniority, etc.
• Rationale
– If outside factors influence scores, their effect should be the same for both groups, because both experienced them
– If training influences iPPQ scores, the experimental group's scores should differ from the control group's
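The rationale can be sketched as a difference-in-differences calculation: the control group's change estimates the outside factors, and subtracting it isolates the training effect. All numbers are invented for illustration.

```python
# Hypothetical pre/post mean scores for both groups (invented numbers).
pre_experimental, post_experimental = 62.0, 70.0
pre_control, post_control = 61.0, 64.0

# Both groups experienced the same outside factors, so the control
# group's change estimates their combined effect.
outside_effect = post_control - pre_control  # 3.0

# Removing that from the experimental group's change leaves the part
# attributable to training (a difference-in-differences estimate).
training_effect = (post_experimental - pre_experimental) - outside_effect
print(training_effect)  # 5.0
```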
Plan #2
Timeline:
– EXPERIMENTAL GROUP: pre-intervention measurement → training → training put into practice → post-intervention measurement
– CONTROL GROUP: pre-intervention measurement → business as usual → post-intervention measurement
• If the group scores differ, how can we tell if the difference is significant?
Statistical significance
• Statistical significance is the confidence you have in your results
• Statistics put confidence into precise terms
– "There's only one chance in a thousand this could have happened by coincidence." (p < 0.001)
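One way to make that "chance of coincidence" concrete is a permutation test: if the group labels carried no information, how often would a random relabeling of the scores produce a difference as large as the observed one? A self-contained sketch with invented scores, using only the standard library:

```python
import random
from statistics import mean

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical post-intervention scores (invented for illustration).
experimental = [72, 75, 71, 78, 74, 77, 73, 76, 75, 74]
control = [68, 70, 66, 69, 71, 67, 70, 68, 69, 70]

observed = mean(experimental) - mean(control)  # 5.7

# Shuffle the pooled scores many times; count how often a random split
# yields a difference at least as large as the observed one (one-sided).
pooled = experimental + control
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:10]) - mean(pooled[10:]) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.1f}, p = {p_value:.4f}")
```

With scores this cleanly separated, almost no random shuffle reaches the observed difference, so the estimated p-value comes out at or near zero.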
confidence = (signal / noise) × sample size
– Signal: how big of a difference will training create between groups?
– Noise: what other factors can create differences between groups?
– Sample size: how many people in each group?
• To maximize confidence
– Increase intervention quality (boost signal)
– Minimize other differences between groups (reduce noise)
– Increase sample size
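The slide's formula is schematic; in a standard two-sample t-style statistic the signal is the group difference, the noise is the spread of scores, and sample size enters as a square root. A small numeric sketch of the three levers, with invented values:

```python
import math

def confidence_score(signal, noise, n):
    """Schematic test statistic: (signal / noise) * sqrt(n).
    Grows with intervention quality, shrinks with noise,
    and grows (as a square root) with sample size."""
    return (signal / noise) * math.sqrt(n)

base = confidence_score(signal=5, noise=10, n=25)  # 2.5
boosted_signal = confidence_score(8, 10, 25)       # 4.0
reduced_noise = confidence_score(5, 5, 25)         # 5.0
bigger_sample = confidence_score(5, 10, 100)       # 5.0
print(base, boosted_signal, reduced_noise, bigger_sample)
```

Each lever raises confidence, but sample size pays off only as its square root: quadrupling n merely doubles the statistic.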
confidence = (signal / noise) × sample size
• Is the sample size 10 or 400?
– 10 managers get trained
– 400 employees get surveyed
Although the employees' productivity at work is being measured, it is the efficacy of the training intervention that matters.
Each manager is different and will put the training into practice differently.
Most managers will do an okay job.
Some will be exceptionally good.
Some will be exceptionally bad.
Each manager creates variability in data that cannot be controlled.
Thus, the effective sample size is 10, although 400 people are measured.
Small samples are more likely to be biased.
(In a sample of three, you may have two bad ones and a mediocre one, for example. Or the other way around.)
• Results should not change depending on who happens to respond.
• The sample should be large enough to reduce unintended biases.
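Why clustering by manager shrinks the effective sample size can be shown by simulation: when each manager adds his or her own variability, rerunning the same study gives noticeably different answers, and surveying more reports per manager barely helps; adding managers does. A sketch with invented effect sizes, standard library only:

```python
import random
from statistics import mean, stdev

random.seed(1)  # fixed seed for reproducibility

def study_result(n_managers, reports_per_manager=40):
    """Mean score over all direct reports of n_managers trained managers.
    Each manager applies the training differently (manager effect) on top
    of person-to-person noise; all effect sizes here are invented."""
    scores = []
    for _ in range(n_managers):
        manager_effect = random.gauss(5, 4)  # training helps by 5 on average
        for _ in range(reports_per_manager):
            scores.append(70 + manager_effect + random.gauss(0, 4))
    return mean(scores)

# Rerun the whole study many times and see how much its result wobbles.
runs_10 = [study_result(10) for _ in range(300)]    # 400 people, 10 managers
runs_100 = [study_result(100) for _ in range(300)]  # 4000 people, 100 managers

print(f"10 managers:  results vary with sd {stdev(runs_10):.2f}")
print(f"100 managers: results vary with sd {stdev(runs_100):.2f}")
```

The spread with 10 managers stays large no matter how many reports each one has, because manager-to-manager variability does not average out within a single study: the managers, not the 400 respondents, set the effective sample size.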
Plan #3
Timeline:
– EXPERIMENTAL GROUP: pre-intervention measurement → training → training put into practice → post-intervention measurement
– CONTROL GROUP: pre-intervention measurement → business as usual → post-intervention measurement
• To reduce the impact of manager variability, recruit a larger number of managers to both the experimental and control groups
– With large numbers of managers, extremes cancel each other out
Getting close, but...
• Even statistically significant differences between the experimental and control groups do not automatically speak for the efficacy of training
– Placebo effect: belief in efficacy creates changes
– Hawthorne effect: the special situation and treatment of the measurement creates changes
Plan #4
Timeline:
– EXPERIMENTAL GROUP: pre-intervention measurement → training → training put into practice → post-intervention measurement
– PLACEBO GROUP: pre-intervention measurement → fake training → "training" put into practice → post-intervention measurement
– CONTROL GROUP: pre-intervention measurement → business as usual → post-intervention measurement
Three-way comparisons
– Experimental group
• If significantly different from the control group, outside factors did not account for the effect.
• If significantly different from the placebo group, the effects were unique to training, not just different treatment.
– Control group
• If not different from the experimental group, training had no effect at all.
– Placebo group
• If not different from the experimental group, training had no real effect beyond the special treatment given to the group.
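The three-way decision logic can be written down as a small helper; the function name and message strings are my own, not from the slides, and the significance flags would come from whatever statistical test is used.

```python
def interpret(differs_from_control: bool, differs_from_placebo: bool) -> str:
    """Interpret a three-group design for the experimental group.
    Each flag says whether the experimental group's scores differed
    significantly from the named comparison group."""
    if not differs_from_control:
        # Outside factors alone explain the scores.
        return "training had no effect at all"
    if not differs_from_placebo:
        # Better than control, but so was fake training.
        return "no real effect beyond the special treatment"
    # Better than both the control and the placebo group.
    return "effects unique to training"

print(interpret(True, True))    # effects unique to training
print(interpret(True, False))   # no real effect beyond the special treatment
print(interpret(False, False))  # training had no effect at all
```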
Measurement in OD practice
• Measurement is important
• Measurement must be carefully planned and executed
• The bare minimum is a proper control group and a large enough sample size