Item and Distracter Analysis


Sue Quirante

EDRE146

Report Outline

• Item Analysis
1. Item Difficulty Index
   a) Diagnostic Testing
   b) Out-of-Level Testing
   c) Distracter Analysis
2. Item Discrimination Level/Index
   a) Hogan (2007)
   b) Point-Biserial Correlation
   c) Gronlund & Linn (1990)
3. Criterion-Referenced Test Analysis

Item Analysis

• the effort to improve individual questions after they have been used

• the process of examining answers to questions in order to assess the quality of individual test items and of the test itself


Item Analysis

• items to be analyzed must be valid measures of instructional objectives
• items must be diagnostic
• selecting and rewriting items on the basis of item performance data improves the effectiveness of the items and the validity of the scores


Purpose

• improves items to be used again in later tests
• eliminates ambiguous or misleading items in a single test administration
• increases instructors' skill in test construction
• identifies specific areas of course content that need greater emphasis or clarity


Before Item Analysis

• Editorial Review
– 1st: a few hours after the first draft is written
– 2nd: involves one or more other teachers, especially those with content knowledge of the field


Item Analysis

1. Item Difficulty Index
> the percentage of students who answered a test item correctly
> also called the p-value

Difficulty Index

• Occasionally everyone knows the answer. An unusually high level of success may be due to:

a) the previous teacher
b) knowledge from home; the child's background
c) excellent teaching
d) a poorly constructed, easily guessed item

Difficulty Index

• Low scores: is it the students' fault for "not trying"? Consider:

a) motivation level
b) the teacher's ability to get a point across
c) the construction of the test item

Difficulty Index

p = (number who answered correctly) / (total number taking the test)

*p is the difficulty index
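A minimal sketch of this calculation in Python (function and variable names are illustrative, not from the source):

```python
def difficulty_index(num_correct: int, num_examinees: int) -> float:
    """Difficulty index (p-value): proportion of examinees answering the item correctly."""
    return num_correct / num_examinees

# The worked example below: 22 of 25 students answer correctly.
print(difficulty_index(22, 25))  # 0.88
```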

Difficulty Index

Example: 22 students get the correct answer; 25 students take the test.

p = 22 / 25 = 0.88

Difficulty Index

p = 0.88

*p is the difficulty index

88% of the students got it right: a high difficulty index. Possible reasons:

a) the item was too easy
b) the students were well taught

Difficulty Index

Sample Problem:

In a Math test administered by Mr. Reyes, seven students answered word problem #1 correctly. A total of twenty-five students took the test.

What is the difficulty index for word problem #1?

Difficulty Index

p = 7 / 25 = 0.28

Difficulty Index

p = 0.28: a low difficulty index (an item is considered difficult at p < 0.30). Possible reasons:

a) students didn't understand the concept being tested
b) the item could be badly constructed

Distribution with Negative Skew

picture from http://billkosloskymd.typepad.com

Items with p > 0.70 are useful in identifying students who are experiencing difficulty in learning the material.

Diagnostic Testing
Used to identify learning problems experienced by a child. Made up of easy test items that cover the core skill areas of a subject.

Distribution with Positive Skew

picture from http://www.ken-szulczyk.com/lessons/statistics/asymmetric_distribution_01.png

Out-of-Level Testing
Used to select the very best students for special programs. The optimal level of item difficulty equals the selection ratio.

Out-of-Level Testing

selection ratio = (number that will be selected) / (number of applicants)

Sample Problem:

A university summer program for junior high school students limits admission to 40 slots. 200 students are nominated by their high schools. What should be the average difficulty level of the program's admission test?

selection ratio = 40 / 200 = 0.20

The test should have an average difficulty level of 0.20 (a low difficulty index).
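A quick check of this arithmetic in Python (names are illustrative):

```python
def selection_ratio(num_selected: int, num_applicants: int) -> float:
    """Selection ratio: the target average difficulty for a selection test."""
    return num_selected / num_applicants

print(selection_ratio(40, 200))  # 0.2
```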

Out-of-Level Testing

The average score would be only 20%, plus a bit more for the guessing factor. The distribution of scores would show a positive skew, and the best students would be evident at the upper end of the distribution.

Difficulty Index

Critical Factor: Guessing

The likelihood of guessing the correct answer to a multiple-choice question is a function of the number of answer alternatives.

The chance of guessing correctly out of four alternatives is 1 in 4, or 25%.

An item with a lower difficulty index (p < 0.30) has a higher proportion of students answering correctly by guessing.

Item Analysis with Constructed or Supply-Type Items

e.g. essay

Difficulty Index

p = (mean score of the class) / (maximum possible score)

*p is the difficulty index

Difficulty Index

Sample Problem:

The mean score of a class on an essay is 16.5 out of a total maximum score of 20.

What is the difficulty index of the essay?

Difficulty Index

Sample Problem:

p = 16.5 / 20 = 0.825
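A minimal sketch for supply-type items, assuming the class's scores are available as a list (names and data are illustrative):

```python
def supply_item_difficulty(scores: list[float], max_score: float) -> float:
    """Difficulty index for an essay/supply-type item: class mean over maximum score."""
    return (sum(scores) / len(scores)) / max_score

# Hypothetical class whose mean works out to 16.5 on a 20-point essay, as in the sample problem.
scores = [16, 17, 18, 15, 16.5, 16.5, 17, 16, 17, 16]
print(supply_item_difficulty(scores, 20))  # 0.825
```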

Optimum Difficulty*

Item type               Optimum p
True-False              0.75
MCQ, 3 alternatives     0.67
MCQ, 4 alternatives     0.625
MCQ, 5 alternatives     0.60
Essay test              0.50

*corrected for guessing
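These optima are consistent with the common rule of setting the target p midway between the chance score (1/k for k options, 0 for an essay) and a perfect score of 1.0; a small sketch under that assumption:

```python
def optimum_difficulty(chance_level: float) -> float:
    """Midpoint between the chance score and a perfect score of 1.0."""
    return (1.0 + chance_level) / 2

print(optimum_difficulty(1 / 2))  # True-False: 0.75
print(optimum_difficulty(1 / 3))  # MCQ, 3 alternatives: 0.67 (rounded)
print(optimum_difficulty(1 / 4))  # MCQ, 4 alternatives: 0.625
print(optimum_difficulty(1 / 5))  # MCQ, 5 alternatives: 0.60
print(optimum_difficulty(0.0))    # Essay, no guessing: 0.50
```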

Problem

When using a rigid grading scale, over half of the students will fail or get a D.

picture from http://savingphilippinepupilsandparents.blogspot.com/

Distracter Analysis

1. A multiple-choice item has a low difficulty index (p < 0.30).
2. Examine the item's distracters.


Anatomy of a multiple choice item

item stem: How many inches are in a foot?

options, alternatives, or choices:
A. 12
B. 20
C. 24
D. 100

keyed (correct) option: A
distracters or foils: B, C, and D

Distracter Analysis

Answer choice   Number who selected it (out of 50)
Choice A        2
Choice B        26
Choice C        7
Choice D*       15

p = 15/50 = 0.30 (* Choice D is the keyed option)
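A small sketch of this tally in Python (data taken from the table above; names are illustrative):

```python
from collections import Counter

# One entry per student: the option each of the 50 students selected.
responses = ["A"] * 2 + ["B"] * 26 + ["C"] * 7 + ["D"] * 15
keyed = "D"

counts = Counter(responses)
p = counts[keyed] / len(responses)

print(counts)  # Counter({'B': 26, 'D': 15, 'C': 7, 'A': 2})
print(p)       # 0.3 -- distracter B attracts more students than the keyed option
```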

Item Analysis

2. Item Discrimination Level/Index
> the extent to which success on an item corresponds to success on the whole test
> symbolized D

Assumptions

• We assume that the total score on the test is valid.

• We also assume that each item distinguishes students who know more from students who know less.

• The discrimination index tells us the extent that this is true.

Item Discrimination

Methods:

a) Hogan (2007)
b) point-biserial correlation (r_pb)
c) Gronlund & Linn (1990)

Hogan (2007)

Hogan’s Method

Item 5 (values are the percentage of each group choosing each option; * indicates the correct option):

Group   A    B*   C    D
High     0   90   10    0
Low     10   50   30   10
Total    5   70   20    5

N = 40; Median = 35; the test contained 50 items.
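A sketch of Hogan's comparison in Python, assuming the class is split at the median total score into High and Low groups (the dictionaries mirror the table above; names are illustrative):

```python
# Percentage of each group choosing each option; B is the keyed option.
high = {"A": 0, "B": 90, "C": 10, "D": 0}
low = {"A": 10, "B": 50, "C": 30, "D": 10}
keyed = "B"

# Overall percentage per option and the High-Low gap on the keyed option.
total = {option: (high[option] + low[option]) / 2 for option in high}
gap = (high[keyed] - low[keyed]) / 100

print(total)  # {'A': 5.0, 'B': 70.0, 'C': 20.0, 'D': 5.0}
print(gap)    # 0.4 -- the High group outperforms the Low group: positive discrimination
```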

Sample Problem

Item 5 (values are the percentage of each group choosing each option; * indicates the correct option):

Group   A*   B    C    D
High    40   60    0    0
Low     60   30    0   10
Total   50   45    0    5

N = 40; Median = 35; the test contained 50 items.

Interpretation

• Positive discrimination occurs when more students in the upper group than in the lower group get the item right.

• This indicates that the item is discriminating in the same direction as the total test score.


Point-Biserial Correlation

• Used to correlate item scores with the scores of the whole test

• A special case of the Pearson Product Moment Correlation, where one variable is binary (right vs. wrong), and the other is continuous (total raw test score)


Point-Biserial Correlation

r_pb = ((X̄₁ − X̄₀) / S_X) · √(p(1 − p))

X̄₁ = mean raw score of all students who got the item right
X̄₀ = mean raw score of all students who got the item wrong
S_X = standard deviation of the raw scores
p = proportion of students who got the item right
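A minimal Python sketch of r_pb, using the population standard deviation so the result matches the Pearson formulation above (names and data are illustrative):

```python
import statistics

def point_biserial(item_scores: list[int], total_scores: list[float]) -> float:
    """r_pb = ((X1_bar - X0_bar) / S_X) * sqrt(p * (1 - p))."""
    right = [t for i, t in zip(item_scores, total_scores) if i == 1]
    wrong = [t for i, t in zip(item_scores, total_scores) if i == 0]
    p = len(right) / len(item_scores)
    s_x = statistics.pstdev(total_scores)  # standard deviation of all raw scores
    return ((statistics.mean(right) - statistics.mean(wrong)) / s_x) * (p * (1 - p)) ** 0.5

# Hypothetical data: 1 = item right, 0 = item wrong, paired with total raw scores.
item = [1, 1, 1, 0, 1, 0, 0, 1]
totals = [42, 38, 45, 25, 40, 30, 22, 36]
print(round(point_biserial(item, totals), 2))  # 0.91 -- high scorers tend to get the item right
```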

Point-Biserial Correlation

• A negative point-biserial correlation means that the students who did well on the test missed that item, while those students who did poorly on the test got the item right.

• This item should be rewritten.

Software Support

• Calculates rpb online:http://faculty.vassar.edu/lowry/pbcorr.htmlDate accessed: 1 August 2011

• Software index for free reliability software:http://www.rasch.org/software.htm

Gronlund & Linn (1990)

For norm-referenced tests:

• Item discriminating power: the degree to which a test item discriminates between pupils with high and low scores

D = (R_U − R_L) / (½T)

R_U = number of pupils in the upper group who got the item right
R_L = number of pupils in the lower group who got the item right
T = total number of pupils included in the analysis
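A short Python sketch of D (names and data are illustrative):

```python
def discrimination_index(r_upper: int, r_lower: int, total: int) -> float:
    """D = (R_U - R_L) / (T/2), after Gronlund & Linn (1990)."""
    return (r_upper - r_lower) / (total / 2)

# Hypothetical item: 8 of the upper group and 4 of the lower group
# (20 pupils in the analysis) answer correctly.
print(discrimination_index(8, 4, 20))  # 0.4
```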


Interpreting Values

• D = 0.60 indicates average discriminating power.

• D = 1.00 is maximum positive discriminating power: all pupils in the upper group get the item right while all pupils in the lower group get it wrong.

• D = 0.00 indicates no discriminating power: equal numbers of pupils in the upper and lower groups get the item right.


Sample Problem

Item 5 (10 pupils per group; * indicates the correct option):

Group   A   B*   C   D
Upper   0   10   0   0
Lower   2    4   1   3

Find the p and D.

Answers

p = (10 + 4) / 20 = 0.70
D = (10 − 4) / (½ × 20) = 0.60

Analysis of Criterion-Referenced Mastery Items

Crucial Question:

To what extent did the test items measure the effects of instruction?

Item Response Chart

B = pretest, A = posttest; + means correct, − means incorrect

        Item 1   Item 2   Item 3   Item 4   Item 5
        B  A     B  A     B  A     B  A     B  A
Jim     −  +     +  +     −  −     +  −     −  +
Dora    −  +     +  +     −  −     +  −     +  +
Lois    −  +     +  +     −  −     +  −     −  +
Diego   −  +     +  +     −  −     +  −     −  +

Sensitivity to Instructional Effects (S)

S = (R_A − R_B) / T

R_A = number of pupils who got the item right after instruction
R_B = number of pupils who got the item right before instruction
T = total number of pupils who tried the item both times
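A small sketch computing S for each item in the response chart above (the data structure is illustrative):

```python
# (pretest, posttest) results per pupil (Jim, Dora, Lois, Diego); 1 = correct, 0 = incorrect.
items = {
    1: [(0, 1), (0, 1), (0, 1), (0, 1)],
    2: [(1, 1), (1, 1), (1, 1), (1, 1)],
    3: [(0, 0), (0, 0), (0, 0), (0, 0)],
    4: [(1, 0), (1, 0), (1, 0), (1, 0)],
    5: [(0, 1), (1, 1), (0, 1), (0, 1)],
}

def sensitivity(results: list[tuple[int, int]]) -> float:
    """S = (R_A - R_B) / T over pupils who tried the item both times."""
    r_b = sum(before for before, _ in results)
    r_a = sum(after for _, after in results)
    return (r_a - r_b) / len(results)

for item, results in items.items():
    print(item, sensitivity(results))  # 1: 1.0, 2: 0.0, 3: 0.0, 4: -1.0, 5: 0.75
```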

Sensitivity to Instructional Effects (S)

S = (4 − 0) / 4 = 1 for item 1 above

The ideal item value is 1.00. Effective items fall between 0.00 and 1.00.

The higher the positive value, the more sensitive the item is to instructional effects.

Items with zero and negative values do not reflect the intended effects of instruction.


References

• Gronlund, N.E., & Linn, R.L. (1990). Measurement and Evaluation in Teaching (6th ed.). USA: Macmillan Publishing Company.

• Hogan, T.P. (2007). Educational Assessment: A Practical Introduction. USA: John Wiley & Sons, Inc.

• Michigan State University Board of Trustees. (2009). Introduction to Item Analysis. Retrieved from http://scoring.msu.edu/itanhand.html

• Wright, R.J. (2008). Educational Assessment: Tests and Measurements in the Age of Accountability. California: Sage Publications, Inc.