Test authoring and the reliability coefficient


Description

Helpful guidelines on how to assess the reliability of your tests.

Transcript of Test authoring and the reliability coefficient

Slide 1

Test Authoring and the Reliability Coefficient

Slide 2

Overview

1. Assessing the Reliability of Your Tests and Items

2. Measuring Types of Reliability

3. Between 0 and 1


Slide 3

Assessing the Reliability of Your Tests and Items

In many of the things we do in our daily lives, we expect certain things and processes to work, to be reliable. We expect our watch to keep accurate time today, tomorrow, and the next day. We carry an unspoken expectation that certain things will behave reliably.

For test authors the same is true of tests: it shouldn't matter when a test is taken. If a test taker answered a question correctly on one test, and then took another test containing the same item, or a variation of it, the outcome should be the same.

If an item is "reliable", its results should be highly predictable; that predictability is what we call reliability.


Slide 4

Measuring Types of Reliability

Simply put, test authors and test administrators want to be able to measure a single item/question or a set of items (s1) and compare it against another item or set of items (s2).

Some of the methods for assessing reliability include:

Stability or Test-Retest: Give the same test twice, separated by a time interval (days, weeks, or months). Reliability is defined as the correlation between the first test attempt and the second.

Alternate Form: Create a test t1 and then make a copy, t2. Slightly modify t2. Reliability is then defined as the correlation between t1 and t2.

Internal Consistency (Alpha, α): Compare one half of the test to the other half, or use methods such as the Kuder-Richardson Formula 20 (KR-20) or Cronbach's Alpha. A sketch of two of these computations follows below.
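
To make the test-retest and internal-consistency computations concrete, here is a minimal Python sketch. The function names, the five-person score lists, and the 0/1 item matrix are invented for illustration; note that for dichotomous 0/1 items, Cronbach's Alpha reduces to KR-20.

import numpy as np

def test_retest_reliability(first, second):
    # Stability/test-retest: Pearson correlation between scores
    # from two administrations of the same test.
    return np.corrcoef(first, second)[0, 1]

def cronbachs_alpha(scores):
    # Internal consistency, alpha = k/(k-1) * (1 - sum(item variances) / total variance),
    # where rows are test takers and columns are items.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: five test takers, two attempts at the same test.
attempt_1 = [78, 85, 62, 90, 71]
attempt_2 = [75, 88, 65, 93, 70]
print(f"test-retest r = {test_retest_reliability(attempt_1, attempt_2):.2f}")

# Hypothetical 0/1 (incorrect/correct) scores on a four-item test.
item_scores = [[1, 1, 0, 1],
               [1, 1, 1, 1],
               [0, 1, 0, 0],
               [1, 1, 1, 1],
               [0, 0, 0, 1]]
print(f"Cronbach's alpha = {cronbachs_alpha(item_scores):.2f}")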


Slide 5

Between 0 and 1

The reliability coefficient is a way of measuring reliability numerically, on a scale from 0 to 1. If a test's coefficient falls below .50, the test would not be considered very reliable and should be reviewed by the test author. A coefficient of .80 or higher is classified as good reliability. A value of 1.0 would represent perfect reliability, which is not something that is ever attained in practice.
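
As a rough sketch of how these cut-offs might be applied in code, consider the following (the function name and band labels are illustrative; the slide only names the two outer bands):

def interpret_reliability(r):
    # Map a reliability coefficient onto the rough bands described above.
    if not 0.0 <= r <= 1.0:
        raise ValueError("a reliability coefficient falls between 0 and 1")
    if r < 0.50:
        return "not very reliable - the test author should review it"
    if r >= 0.80:
        return "good reliability"
    # The slide does not name this middle band; "fair" is an assumption.
    return "fair - usable, but worth improving"

print(interpret_reliability(0.42))  # not very reliable - ...
print(interpret_reliability(0.86))  # good reliability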

Reliability and Validity

A test, quiz, or assessment may be deemed reliable, but that does not mean that it is valid.

Source: Test Generator – Test Authoring and the Reliability Coefficient