
Reliability

IOP 301-T
Mr. Rajesh Gunesh

Reliability

Reliability means repeatability or consistency.

A measure is considered reliable if it would give us the same result over and over again (assuming that what we are measuring isn’t changing!).


Definition of Reliability

Reliability usually “refers to the consistency of scores obtained by the same persons when they are reexamined with the same test on different occasions, or with different sets of equivalent items, or under other variable examining conditions” (Anastasi & Urbina, 1997).

Dependable, consistent, stable, constant

Gives the same result over and over again


Validity vs Reliability


Variability and reliability

What is the acceptable range of error in measurement?
– Bathroom scale: ±1 kg
– Body thermometer: ±0.2 °C
– Baby weight scale: ±20 g
– Clock with hands: ±5 min
– Outside thermometer: ±1 °C


Variability and reliability

We are completely comfortable with a bathroom scale accurate to ±1 kg, since individual weights vary over far greater ranges than this, and typical day-to-day changes are of the same order of magnitude.


Reliability

– True Score Theory
– Measurement Error
– Theory of reliability
– Types of reliability
– Standard error of measurement



True Score Theory

Every measurement is an additive composite of two components:

1. True ability (or the true level) of the respondent on that measure

2. Measurement error


True Score Theory

Individual differences in test scores reflect:

– “True” differences in the characteristic being assessed

– “Chance” or random errors


True Score Theory

What might be considered error variance in one situation may be true variance in another (e.g., anxiety).


Can we observe the true score?

X = T + e_X

We only observe the measurement X; we don’t observe what’s on the right side of the equation (only God knows what those values are).


True Score Theory

var(X) = var(T) + var(e_X)

The variability of the measure is the sum of the variability due to true score and the variability due to random error.
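To see the decomposition concretely, here is a minimal simulation sketch (Python with NumPy; the means, standard deviations, and sample size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# True scores T and independent random errors e_X (illustrative values)
T = rng.normal(loc=100, scale=15, size=100_000)  # var(T) = 225
e = rng.normal(loc=0, scale=5, size=100_000)     # var(e_X) = 25
X = T + e                                        # observed score

print(X.var())            # ~250, i.e. var(T) + var(e_X)
print(T.var() / X.var())  # ~0.9, the reliability ratio used below
```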


What is error variance?

Conditions irrelevant to the purpose of the test:
– Environment (e.g., quiet vs. noisy)
– Instructions (e.g., written vs. verbal)
– Time limits (e.g., limited vs. unlimited)
– Rapport with test taker

All test scores have error variance.


Measurement Error

Measurement error can be:
– Random
– Systematic



Measurement Error

Random error: effects are NOT consistent across the whole sample; they elevate some scores and depress others.
– Only adds noise; does not affect the mean score


Measurement Error

Systematic error: effects are generally consistent across a whole sample.
– Example: environmental conditions for group testing (e.g., temperature of the room)
– Generally either consistently positive (elevating scores) or negative (depressing scores)



Theory of Reliability


Reliability

Reliability = variance of the true score / variance of the measure

Reliability = Var(T) / Var(X)


How big is an estimate of Reliability?

Reliability = Var(T) / Var(X), where Var(X) = Var(T) + Var(e)

Reliability = subject variability / (subject variability + measurement error)


We can’t compute reliability directly because we can’t calculate the variance of the true score; but we can get an estimate of it.


Estimate of Reliability

Observations would be related to each other to the degree that they share true scores. For example, consider the correlation between X1 and X2:

$$r_{X_1 X_2} = \frac{\operatorname{covariance}(X_1, X_2)}{\sqrt{\operatorname{var}(X_1)\,\operatorname{var}(X_2)}}$$
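A short simulation sketch (Python with NumPy; all values are invented) shows why this correlation estimates reliability: two parallel measurements that share the same true scores correlate at roughly Var(T)/Var(X):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(100, 15, 100_000)  # shared true scores

# Two parallel measurements: same T, independent errors
X1 = T + rng.normal(0, 5, 100_000)
X2 = T + rng.normal(0, 5, 100_000)

r = np.corrcoef(X1, X2)[0, 1]
print(r)                   # ~0.9
print(T.var() / X1.var())  # ~0.9, i.e. Var(T) / Var(X)
```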


RELIABILITY

– Stability: Test-Retest
– Equivalence: Alternate-form, Inter-scorer
– Internal consistency: Split-half, Kuder-Richardson, Cronbach Alpha


Types of Reliability

1. Test-Retest Reliability
Used to assess the consistency of a measure from one time to another.

2. Alternate-form Reliability
Used to assess the consistency of the results of two tests constructed the same way from the same content domain.


Types of Reliability

3. Split-half Reliability
Used to assess the consistency of results across items within a test by splitting them into two equivalent halves.

Kuder-Richardson Reliability
Used to assess the extent to which items are homogeneous when items have a dichotomous response, e.g. “yes/no” items.


Types of Reliability

Cronbach’s alpha (α) Reliability
Compares the consistency of response across all items on the scale (Likert scale or linear graphic response format).

4. Inter-Rater or Inter-Scorer Reliability
Used to assess the concordance between two or more observers’ scores of the same event or phenomenon for observational data.


Test-Retest Reliability

Definition: When the same test is administered to the same individual (or sample) on two different occasions


Test-Retest Reliability: Used to assess the consistency of a measure from one time to another.


Test-Retest Reliability

Statistics used
– Pearson r or Spearman rho

Warning
– Correlation decreases over time because error variance INCREASES (and may change in nature)
– The closer in time the two scores were obtained, the more the factors which contribute to error variance are the same


Test-Retest Reliability

Warning
– Circumstances may be different for both the test-taker and the physical environment
– Transfer effects like practice and memory might play a role on the second testing occasion


Alternate-form Reliability

Definition: Two equivalent forms of the same measure are administered to the same group on two different occasions


Alternate-form Reliability: Used to assess the consistency of the results of two tests constructed the same way from the same content domain.


Alternate-form Reliability

Statistic used
– Pearson r or Spearman rho

Warning
– Even when randomly chosen, the two forms may not be truly parallel
– It is difficult to construct equivalent tests


Alternate-form Reliability

Warning
– Even when randomly chosen, the two forms may not be truly parallel
– It is difficult to construct equivalent tests
– The tests should have the same number of items, the same scoring procedure, uniform content, and the same item difficulty level


Split-half Reliability

Definition: Randomly divide the test into two forms; calculate scores for Forms A and B; calculate Pearson r as an index of reliability.



Split-half Reliability

$$r_{tt} = \frac{2\,r_{hh}}{1 + r_{hh}}$$ (Spearman-Brown formula)

where r_hh is the correlation between the two half-tests and r_tt the estimated full-test reliability.
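A minimal sketch of the computation (Python with NumPy; the item matrix and the odd/even split are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: 200 test takers x 20 dichotomous items,
# all driven by a common underlying ability
ability = rng.normal(0, 1, (200, 1))
items = (ability + rng.normal(0, 1, (200, 20)) > 0).astype(int)

# Split into odd- and even-numbered items and total each half
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

r_hh = np.corrcoef(odd_half, even_half)[0, 1]  # half-test correlation
r_tt = 2 * r_hh / (1 + r_hh)                   # Spearman-Brown correction

print(r_hh, r_tt)  # r_tt > r_hh: corrects for halving the test length
```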


Split-half Reliability

Warning
The correlation between the odd and even scores is generally an underestimate of the reliability coefficient because it is based on only half the test.


Cronbach’s alpha & Kuder-Richardson-20

Measure the extent to which items on a test are homogeneous; each is the mean of all possible split-half combinations.
– Kuder-Richardson-20 (KR-20): for dichotomous data
– Cronbach’s alpha: for non-dichotomous data



Cronbach’s alpha (α)

$$\alpha = \frac{n}{n-1}\left(1 - \frac{\sum s_i^2}{s_t^2}\right)$$ (coefficient alpha)

where n is the number of items, s_i² the variance of item i, and s_t² the variance of total test scores.
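A direct translation of the coefficient-alpha formula into code (Python with NumPy; the Likert data are invented for illustration):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for a (respondents x items) score matrix."""
    n = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # s_i^2 for each item
    total_var = scores.sum(axis=1).var(ddof=1)  # s_t^2 of total scores
    return (n / (n - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses: 100 respondents x 10 items
rng = np.random.default_rng(3)
latent = rng.normal(0, 1, (100, 1))
likert = np.clip(np.round(3 + latent + rng.normal(0, 1, (100, 10))), 1, 5)
print(cronbach_alpha(likert))
```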


Kuder-Richardson (KR-20)

$$r_{tt} = \frac{n}{n-1}\left(1 - \frac{\sum pq}{s_t^2}\right)$$

where n is the number of items, p the proportion of test takers passing an item, q = 1 − p, and s_t² the variance of total test scores.
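The same idea for dichotomous items, as a self-contained sketch (Python with NumPy; the yes/no data are invented):

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """KR-20 for a (respondents x items) matrix of 0/1 scores."""
    n = items.shape[1]
    p = items.mean(axis=0)                # proportion passing each item
    q = 1 - p
    s_t2 = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n / (n - 1)) * (1 - (p * q).sum() / s_t2)

# Illustrative dichotomous data: 200 test takers x 20 yes/no items
rng = np.random.default_rng(4)
ability = rng.normal(0, 1, (200, 1))
answers = (ability + rng.normal(0, 1, (200, 20)) > 0).astype(int)
print(kr20(answers))
```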


Inter-Rater or Inter-Observer Reliability:

Used to assess the degree to which different raters or observers give consistent estimates of the same phenomenon


Inter-rater Reliability

Definition
Measures the extent to which multiple raters or judges agree when providing a rating of behavior.


Inter-rater Reliability

Statistics used
– Nominal/categorical data: Kappa statistic
– Ordinal data: Kendall’s tau, to see if the pairs of ranks for each of several individuals are related
  (e.g., two judges rate 20 elementary school children on an index of hyperactivity and rank-order them)


Inter-rater Reliability

Statistics used
– Interval or ratio data: Pearson r, using data obtained from the hyperactivity index
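A brief sketch of the two computations for categorical and ordinal ratings (Python; scikit-learn’s cohen_kappa_score and SciPy’s kendalltau are used here as convenient helpers, and all ratings are invented):

```python
from scipy.stats import kendalltau
from sklearn.metrics import cohen_kappa_score

# Two raters assign categories to the same 10 observations (invented data)
rater_a = ["high", "low", "low", "mid", "high", "mid", "low", "high", "mid", "low"]
rater_b = ["high", "low", "mid", "mid", "high", "mid", "low", "high", "low", "low"]
print(cohen_kappa_score(rater_a, rater_b))  # agreement corrected for chance

# Two judges rank the same 8 children on hyperactivity (invented ranks)
judge_1 = [1, 2, 3, 4, 5, 6, 7, 8]
judge_2 = [2, 1, 3, 5, 4, 6, 8, 7]
tau, p_value = kendalltau(judge_1, judge_2)
print(tau, p_value)
```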


Factors affecting Reliability

Whether a measure is speeded

Variability in individual scores

Ability level


Whether a measure is speeded

For speeded measures, test-retest and equivalent-form reliability are more appropriate. Split-half techniques may be considered if the split occurs according to time rather than number of items.


Variability in individual scores

Correlation is normally affected by the range of individual differences in a group. Sometimes, smaller subgroups display correlation coefficients which are completely different from that of the whole group. This phenomenon is known as range restriction.


Ability level

One must also consider the variability and ability levels of samples. It is advisable to compute separate reliability coefficients for homogeneous and heterogeneous subgroups.


Interpretation of Reliability

One must ask oneself the following questions:
– How high must the coefficient of reliability be?
– How is it interpreted?
– What is the standard error of measurement?


Magnitude of reliability coefficient

– Anastasi & Urbina (1997): 0.8 – 0.9
– Huysamen (1996): at least 0.85 for individuals; at least 0.65 for groups
– Smit (1996): 0.8 – 0.85 for personality & interest; at least 0.9 for aptitude


Standard Error of Measurement

Definition: An estimate of the amount of error usually attached to an individual’s obtained test score.
– As SEM ↑, test reliability ↓
– As SEM ↓, test reliability ↑


Standard Error of Measurement

$$SEM = s_t\sqrt{1 - r_{tt}}$$

where s_t is the standard deviation of test scores and r_tt the reliability coefficient.


Standard Error of Measurement

Confidence Interval: Uses the SEM to calculate a band or range of scores that has a high probability of including the person’s true score.

Example: A 95% confidence interval means that only 5 times in 100 will the person’s TRUE score lie outside this range of scores.


Reliability

Formula: CI = obtained score ± z(SEM)
– z = 1.00 for the 68% level
– z = 1.44 for the 85% level
– z = 1.65 for the 90% level
– z = 1.96 for the 95% level
– z = 2.58 for the 99% level
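A worked example tying the SEM and confidence-interval slides together (Python sketch; the standard deviation, reliability, and obtained score are invented for illustration):

```python
import math

s_t = 15     # standard deviation of test scores (invented)
r_tt = 0.91  # reliability coefficient (invented)
score = 110  # an individual's obtained score (invented)

sem = s_t * math.sqrt(1 - r_tt)  # SEM = s_t * sqrt(1 - r_tt) = 4.5
lower = score - 1.96 * sem       # 95% confidence band
upper = score + 1.96 * sem
print(f"SEM = {sem:.2f}, 95% CI = [{lower:.1f}, {upper:.1f}]")
# SEM = 4.50, 95% CI = [101.2, 118.8]
```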


Reliability of standardized tests

An acceptable standardized test should have reliability coefficients of at least:
– 0.95 for internal consistency
– 0.90 for test-retest (stability)
– 0.85 for alternate-forms (equivalency)


Reliability: Implications

Evaluating a test
– What types of reliability have been calculated, and with what samples?
– What are the strengths of the reliability coefficients?
– What is the SEM for a test score?
– How does this information influence the decision to use and interpret test scores?