
PSY 1950: t-tests, one-way ANOVA

October 1, 2008

[Figure: mean sampling statistic as a function of sample N (2–92), comparing four estimators: sample SQRT(SS/N), sample SQRT(SS)/N, population SQRT(SS/N), and population SQRT(SS)/N]

History of the t-test

• William Gosset: statistician and brewer at the Guinness factory
• Which variety of barley is best?
  – Small samples, no known population
  – Student. (1908). The probable error of a mean. Biometrika, 6, 1–25.


From z to t

One-sample z-test
• Null-hypothesized μ, known σ²
• z = (sample mean − population mean) / standard error = (M − μ) / (σ/√N)

One-sample t-test
• Null-hypothesized μ, unknown σ²
• t = (sample mean − population mean) / estimated standard error = (M − μ) / (s/√N)
• Use s² to estimate σ²
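A minimal numeric sketch of the two ratios above, assuming Python with NumPy/SciPy; the sample scores, the null mean of 100, and the "known" σ = 4 are invented for illustration:

    import numpy as np
    from scipy import stats

    sample = np.array([102.0, 98.5, 105.1, 99.3, 101.7, 103.2])  # hypothetical scores
    mu_null = 100.0   # null-hypothesized population mean (assumed)
    sigma = 4.0       # pretend-known population SD, used only for the z-test

    M, n = sample.mean(), len(sample)

    # One-sample z: sigma known, so the standard error is sigma / sqrt(N)
    z = (M - mu_null) / (sigma / np.sqrt(n))

    # One-sample t: sigma unknown, estimate it with s (ddof=1 gives s**2 = SS / (n - 1))
    s = sample.std(ddof=1)
    t = (M - mu_null) / (s / np.sqrt(n))

    # SciPy's built-in one-sample t-test should agree with the hand computation
    t_scipy, p_two_tailed = stats.ttest_1samp(sample, mu_null)
    print(z, t, t_scipy, p_two_tailed)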

The Sampling Distribution of s²

• s² is an unbiased estimator of σ²
  – mean of the sampling distribution of s² = σ²
• But the sampling distribution of s² is positively skewed, especially for small samples
• Because of this, odds are that an individual s² underestimates σ², especially for small samples
• Thus, on average, t > z, especially for small samples
• Can't use the z-distribution to determine p for t
• Must devise a new distribution that takes sample size into account


df = n - 1

http://www.uvm.edu/~dhowell/SeeingStatisticsApplets/TvsZ.html
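A small simulation sketch of the skewness point above, under arbitrary settings (n = 5, σ² = 1, 100,000 replications); none of these numbers come from the lecture:

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma2, reps = 5, 1.0, 100_000     # small sample size, arbitrary choices

    # Draw many samples and compute s**2 = SS / (n - 1) for each
    samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    s2 = samples.var(axis=1, ddof=1)

    print(s2.mean())             # close to 1.0: unbiased on average
    print(np.median(s2))         # below 1.0: a typical s**2 underestimates sigma**2
    print((s2 < sigma2).mean())  # above 0.5: the distribution is positively skewed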

Psychologists are Naughty Brewers

• Pearson to Student/Gosset in 1912:

“only naughty brewers take n so small that the difference is not on the order of the probable error!”

Assumptions
1. Normality (of population, not sample)
2. Independence of observations (within sample)

Tails
• Two-tailed test
  – p < .025 in each tail
  – Conservative, conventional
• One-tailed test
  – p < .05 in predicted tail
  – A priori, justifiable directional hypothesis?
• The one-and-a-half-tailed test
  – p < .05 in predicted tail
  – p < .025 in unpredicted tail
  – Un-ignorable "wrong-tailed" result?
• The lopsided test
  – p < .05 in predicted tail
  – p < .005 in unpredicted tail
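A brief sketch of how these tail conventions map onto p-values for a given t statistic; the observed t of 2.1 and df of 14 are arbitrary illustrations:

    from scipy import stats

    t_obs, df = 2.1, 14   # hypothetical observed t and degrees of freedom

    p_predicted_tail = stats.t.sf(t_obs, df)        # one-tailed: area beyond t in the predicted tail
    p_two_tailed = 2 * stats.t.sf(abs(t_obs), df)   # two-tailed: alpha split across both tails

    # One-and-a-half-tailed logic: reject if the predicted-tail p < .05, or if a
    # result in the unpredicted direction is extreme enough that its tail p < .025
    print(p_predicted_tail, p_two_tailed)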

From 1-sample t to 2-sample t

One-sample t-test
• Null-hypothesized μ, unknown σ²
• t = (sample mean − population mean) / estimated standard error
• Use s² for σ²

Two-sample t-test
• Null-hypothesized μ₁ − μ₂, unknown σ₁² and σ₂²
• σ²(difference) = σ₁² + σ₂²
• t = (sample mean difference − population mean difference) / estimated standard error of the difference
• Use s₁² and s₂² for σ₁² and σ₂²

Standard Error of the Difference Between Means

• Variances add: the variance of (x − y) = the variance of x plus the variance of y
  – Only true if x and y are uncorrelated
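A minimal sketch of the two-sample ratio using the pooled-variance standard error; the group scores below are invented for illustration:

    import numpy as np
    from scipy import stats

    group1 = np.array([4.0, 5.0, 6.0, 7.0, 5.5])   # hypothetical scores
    group2 = np.array([6.0, 7.5, 8.0, 7.0, 9.0])

    n1, n2 = len(group1), len(group2)
    s1_sq, s2_sq = group1.var(ddof=1), group2.var(ddof=1)

    # Pool the two variance estimates, weighting each by its degrees of freedom
    sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

    # Variances add, so the estimated standard error of the mean difference is:
    se_diff = np.sqrt(sp_sq / n1 + sp_sq / n2)

    t = (group1.mean() - group2.mean()) / se_diff   # null-hypothesized difference = 0

    # SciPy's independent-samples t-test (equal variances assumed) should match
    t_scipy, p = stats.ttest_ind(group1, group2, equal_var=True)
    print(t, t_scipy, p)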

Assumptions
1. Normality (of populations, not samples)
2. Independence of observations (within and between samples)
   • Dependence due to groups
     – Sampling
     – Shared history
     – Social interaction
   • Dependence due to time/sequence (e.g., psychophysical variables)
   • Dependence due to space (e.g., city blocks)
3. Homogeneity of variance (of populations, not samples)
   – Okay so long as one variance isn't more than 4 times the other and sample sizes are approximately equal (see the sketch below)
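A quick sketch of that rule-of-thumb check, alongside SciPy's Levene test as a more formal alternative (the Levene test is not from the slides); the two groups are invented for illustration:

    import numpy as np
    from scipy import stats

    group1 = np.array([4.0, 5.0, 6.0, 7.0, 5.5])   # hypothetical scores
    group2 = np.array([2.0, 9.0, 3.5, 8.0, 6.0])

    s1_sq, s2_sq = group1.var(ddof=1), group2.var(ddof=1)

    # Rule of thumb from the slide: worry if one variance exceeds 4 times the other
    ratio = max(s1_sq, s2_sq) / min(s1_sq, s2_sq)
    print(ratio, ratio > 4)

    # Levene's test for equality of variances
    W, p = stats.levene(group1, group2)
    print(W, p)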

ANOVA
• Analysis of variance
  – Comparing the variance between sample means with the variance within samples
    • Variance_within = noise
    • Variance_between = noise + possible signal
• Omnibus test
  – Are there any differences in means between populations?
  – H0: μ1 = μ2 = μ3 = …
  – H1: at least one population mean is different from another
• F-ratio = Variance_between / Variance_within
  – If Variance_between / Variance_within is sufficiently greater than 1 → reject H0
  – If Variance_between / Variance_within ≈ 1 (or less) → retain H0

ANOVA

Example: three groups of scores — 0, 1, 2; 1, 2, 3; 2, 3, 4
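A short sketch of the F-ratio for exactly these three groups, by hand and with SciPy; only the scores from the slide are used:

    import numpy as np
    from scipy import stats

    groups = [np.array([0., 1., 2.]), np.array([1., 2., 3.]), np.array([2., 3., 4.])]
    k, n = len(groups), len(groups[0])        # 3 groups of 3 scores each
    grand_mean = np.mean(np.concatenate(groups))

    # Between-groups estimate: spread of the group means around the grand mean
    ss_between = sum(n * (g.mean() - grand_mean) ** 2 for g in groups)
    ms_between = ss_between / (k - 1)

    # Within-groups estimate: pooled spread of scores around their own group means
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_within = ss_within / (k * (n - 1))

    F = ms_between / ms_within                # 3.0 for these data
    F_scipy, p = stats.f_oneway(*groups)      # should agree with the hand computation
    print(F, F_scipy, p)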

Assumptions
1. Normality (of populations, not samples)
2. Independence of observations (within and between samples)
3. Homogeneity of variance (of populations, not samples)
   – Okay so long as one variance isn't more than 4 times another and sample sizes are approximately equal

Crawford, J. R., & Howell, D. C. (1998). Comparing an individual's test score against norms derived from small samples. The Clinical Neuropsychologist, 12, 482–486.

Why develop new statistics?
• Clinicians often compare an individual's score to a normative sample that is treated like a population
• Sometimes the normative sample is small
  – Instruments with poor normative data
  – Demographic considerations decrease n
  – Local norms are expensive to collect
  – Case studies can have small comparison groups

What’s wrong with the z score?

• z-scores assume that the normative sample is a population

• With small n, sampling distribution of variance can be skewed

• Leads to a greater likelihood of underestimating population SD and overestimating z

• http://www.ruf.rice.edu/~lane/stat_sim/sampling_dist/index.html

Why use the modified t statistic?

• The modified t-statistic lets clinicians use a small normative sample to estimate the population SD

• The formula is almost the same as the z-score formula but allows for wider tails

• t = (X1 − M2) / (s2 · √[(N2 + 1) / N2]), where X1 is the individual's score and M2, s2, and N2 are the normative sample's mean, SD, and size
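A small sketch of that formula with an invented individual score and normative sample; the two-tailed p uses the t-distribution with N2 − 1 degrees of freedom:

    import numpy as np
    from scipy import stats

    x1 = 72.0                                                    # hypothetical individual's score
    norms = np.array([85., 90., 78., 88., 92., 81., 86., 84.])   # hypothetical small normative sample

    m2, s2, n2 = norms.mean(), norms.std(ddof=1), len(norms)

    # Modified t: like a z-score, but the sqrt((N2 + 1) / N2) term and the
    # t-distribution's wider tails reflect that the SD comes from a small sample
    t = (x1 - m2) / (s2 * np.sqrt((n2 + 1) / n2))

    p_two_tailed = 2 * stats.t.sf(abs(t), df=n2 - 1)
    print(t, p_two_tailed)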

When should the modified t statistic be used?

• The difference between the modified t and the z-score is "vanishingly small" when the normative sample size is greater than 250, and it is not necessarily large even with smaller samples

• The modified t-test should be used when the normative sample size is less than 50

• It shouldn't be used when the normative data are skewed