
Post on 04-Jan-2016


The use & abuse of tests

• Statistical significance ≠ practical significance

• Significance ≠ proof of effect (confounds)

• Lack of significance ≠ lack of effect

Factors that affect a hypothesis test

• the actual obtained difference

• the magnitude of the sample variance (s2)

• the sample size (n)

• the significance level (alpha)

• whether the test is one-tail or two-tail


Why might a hypothesis test fail to find a real result?

Two types of error

We either accept or reject the H0. Either way, we could be wrong:

• Reject H0 when it is true: Type I error (the "false positive rate")

• Accept H0 when it is false: Type II error (the "false negative rate")

• Reject H0 when it is false: a correct decision (the test's "sensitivity" or "power")

Error probabilities

When the null hypothesis is true:

P(Type I Error) = alpha

When the alternative hypothesis is true:

P(Type II Error) = beta
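These error probabilities can be checked by simulation. The sketch below is not from the slides: it draws many samples from a hypothetical null population (µ = 50, σ = 5, matching the example used later in the deck) and counts how often a one-tailed z-test at alpha = .05 rejects H0 even though H0 is true. The rejection rate should land near alpha.

```python
# Monte Carlo check that P(Type I error) = alpha when the null is true.
# Assumed example values (mu0 = 50, sigma = 5, n = 10) follow the slides' later example.
import math
import random
from statistics import NormalDist

random.seed(1)
mu0, sigma, n, alpha = 50, 5, 10, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha)   # one-tailed cutoff, ~1.645
se = sigma / math.sqrt(n)                  # standard error of the mean

n_sims = 20_000
rejections = 0
for _ in range(n_sims):
    # sample drawn from the NULL population, so any rejection is a false positive
    xbar = sum(random.gauss(mu0, sigma) for _ in range(n)) / n
    if (xbar - mu0) / se > z_crit:
        rejections += 1

type1_rate = rejections / n_sims           # should be close to alpha = .05
```

By construction the false-positive rate tracks whatever alpha we choose; beta, by contrast, would require simulating from the alternative distribution, which the worked example below does analytically.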


Type I error

The “false positive rate”

• We decide there is an effect when none exists; we reject the null wrongly

• By choosing an alpha as our criterion, we are deciding the amount of Type I error we are willing to live with.

• The p-value is the probability, if the null is true, of obtaining a result at least as extreme as ours; it is the Type I error risk we take in rejecting the null

Type II error

The “false negative rate”

• We decide there is nothing going on, and we miss the boat – the effect was really there and we didn’t catch it.

• Cannot be directly set but fluctuates with sample size, sample variability, effect size, and alpha

• Could be due to high variability, an insensitive measure, or a small effect

Power

The “sensitivity” of the test

• The likelihood of picking up on an effect, given that it is really there.

• Related to Type II error: power = 1 - beta

A visual example

(We are only going to work through a one-tailed example.)

We are going to collect a sample of 10 highly successful leaders & innovators and measure their scores on scale that measures tendencies toward manic states.

We hypothesize that this group has more tendency to mania than does the general population (µ = 50 and σ = 5)

Step 1: Decide on alpha and identify your decision rule (Zcrit)

(Figure: null distribution centered at Z = 0, with the rejection region beyond Zcrit = 1.64.)

Step 2: State your decision rule in units of sample mean (Xcrit)

(Figure: null distribution centered at µ0 = 50 (Z = 0); the acceptance region lies below Xcrit = 52.61, the rejection region beyond it (Zcrit = 1.64).)
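Step 2 can be computed directly. A minimal sketch using the slides' numbers (µ0 = 50, σ = 5, n = 10, one-tailed alpha = .05); the slides round Zcrit to 1.64 and report Xcrit = 52.61, while the exact quantile gives about 52.60:

```python
# Convert the Z-scale decision rule into sample-mean units: Xcrit = mu0 + Zcrit * SE.
import math
from statistics import NormalDist

mu0, sigma, n, alpha = 50, 5, 10, 0.05
se = sigma / math.sqrt(n)                  # 5 / sqrt(10) ~ 1.58
z_crit = NormalDist().inv_cdf(1 - alpha)   # ~1.645 for one-tailed alpha = .05
x_crit = mu0 + z_crit * se                 # ~52.60 (slides: 52.61)
```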

Step 3: Identify µA, the suspected true population mean for your sample

(Figure: alternative distribution centered at µA = 55, drawn beside the null distribution at µ0 = 50; the area of the alternative distribution beyond Xcrit = 52.61 is power, the area below it is beta.)

Step 4: How likely is it that this alternative distribution would produce a mean in the rejection region?

(Figure: alternative distribution with Xcrit at Z = -1.51 relative to µA = 55; the area below Xcrit is beta, the area beyond it is power, and alpha is marked on the null distribution at Xcrit.)
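Step 4 in code: a sketch that re-scores Xcrit against the alternative distribution (µA = 55) to get beta and power. Small rounding differences from the slides (which use Zcrit = 1.64 and get Z = -1.51) are expected:

```python
# Beta = P(sample mean falls below Xcrit | alternative is true); power = 1 - beta.
import math
from statistics import NormalDist

mu0, mu_a, sigma, n, alpha = 50, 55, 5, 10, 0.05
se = sigma / math.sqrt(n)
x_crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se   # ~52.60

z = (x_crit - mu_a) / se          # ~-1.52 (slides round to -1.51)
beta = NormalDist().cdf(z)        # P(miss the real effect) ~ .065
power = 1 - beta                  # P(detect it) ~ .935
```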

Power & Error

Power is a function of

The chosen alpha level (α)

The true difference between µ0 and µA

The size of the sample (n)

The standard deviation (s or σ), which together with n gives the standard error

(Figure: null and alternative distributions with alpha and beta marked at Xcrit.)

Changing alpha

(Figure sequence: the null and alternative distributions redrawn as alpha is varied; as Xcrit shifts, the alpha and beta regions trade off against each other.)

• Raising alpha gives you less Type II error (more power) but more Type I error. A trade-off.
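The trade-off is easy to quantify with the example's numbers. A sketch (a hypothetical helper, not from the slides) comparing power at alpha = .05 versus alpha = .10:

```python
# Raising alpha moves Xcrit toward mu0: beta shrinks (power rises),
# but the Type I error rate rises from .05 to .10.
import math
from statistics import NormalDist

mu0, mu_a, sigma, n = 50, 55, 5, 10
se = sigma / math.sqrt(n)

def power_at(alpha):
    """One-tailed z-test power against mu_a for a given alpha."""
    x_crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se
    return 1 - NormalDist().cdf((x_crit - mu_a) / se)

power_05 = power_at(0.05)   # ~.94
power_10 = power_at(0.10)   # higher power, but double the Type I risk
```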

Changing distance between µ0 and µA

(Figure sequence: the distributions redrawn as µA moves farther from µ0; beta shrinks while alpha stays fixed at Xcrit.)

• Increasing the distance between µ0 and µA lowers Type II error (improves power) without changing Type I error
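To see this numerically, a sketch comparing a nearby alternative (a hypothetical µA = 53, chosen for illustration) against the example's µA = 55, with alpha held at .05:

```python
# Power rises with the distance between mu0 and mu_a; alpha is untouched
# because Xcrit depends only on the null distribution.
import math
from statistics import NormalDist

mu0, sigma, n, alpha = 50, 5, 10, 0.05
se = sigma / math.sqrt(n)
x_crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se

def power_for(mu_a):
    return 1 - NormalDist().cdf((x_crit - mu_a) / se)

power_near = power_for(53)   # small separation: power ~ .60
power_far = power_for(55)    # the slides' example: power ~ .94
```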

Changing standard error

(Figure sequence: both distributions narrowing as the standard error decreases; their overlap around Xcrit shrinks, reducing both alpha and beta.)

• Decreasing standard error simultaneously reduces both kinds of error and improves power.
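Since the standard error is σ/√n, the simplest lever is sample size. A sketch comparing the example's n = 10 with a hypothetical n = 40 (alpha fixed at .05, so here the gain shows up entirely in beta):

```python
# Quadrupling n halves the standard error; the distributions narrow,
# Xcrit moves toward mu0, and beta collapses.
import math
from statistics import NormalDist

mu0, mu_a, sigma, alpha = 50, 55, 5, 0.05

def power_for_n(n):
    se = sigma / math.sqrt(n)
    x_crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se
    return 1 - NormalDist().cdf((x_crit - mu_a) / se)

power_n10 = power_for_n(10)   # ~.94
power_n40 = power_for_n(40)   # near-certain detection
```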

To increase power

Try to make µA really different from the null-hypothesis value (if possible)

Loosen your alpha criterion (from .05 to .10, for example)

Reduce the standard error (increase the size of the sample, or reduce variability)

For a given level of alpha and a given sample size, power is directly related to effect size. See Cohen’s power tables, described in your text
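The levers above can be folded into one power function written in terms of the standardized effect size, Cohen's d = (µA - µ0)/σ. This is a sketch of the one-tailed z-test case, not a reproduction of Cohen's tables:

```python
# One-tailed z-test power as a function of effect size d and sample size n.
# Under the alternative, Z is shifted by d * sqrt(n), so
# power = P(Z > z_crit - d * sqrt(n)).
import math
from statistics import NormalDist

def power(d, n, alpha=0.05):
    """Power for standardized effect size d, sample size n, one-tailed alpha."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return 1 - NormalDist().cdf(z_crit - d * math.sqrt(n))

# The deck's worked example: d = (55 - 50) / 5 = 1.0, n = 10
example_power = power(1.0, 10)   # ~.94, matching the visual example
```

For fixed alpha and n, power increases monotonically in d, which is the relationship Cohen's tables let you look up directly.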