Transcript of Sampling and Inference, September 23, 2011. Web Address: swinters/public_html/lingstats/index.html.

Page 1:

Sampling and Inference

September 23, 2011

Page 2:

Web Address: https://webdisk.ucalgary.ca/~swinters/public_html/lingstats/index.html

Page 3:

Let’s Try an Experiment!

• First: how do we take a sample of a population in R?
• Let’s try it with the VOT data:

VOTdata <- read.table("http://www.basesproduced.com/R/classdata.txt", header=T, row.names=1)

• Here’s how to grab a sample of 5 rows from the VOT data frame:

VOTdata[sample(nrow(VOTdata),5),]

• Note: nrow(VOTdata) is just the number of rows in the data frame.
• Here’s how you can randomize the entire data set:

VOTdata[sample(nrow(VOTdata),nrow(VOTdata)),]

Page 4:

More Sampling

• You can store a sample in a variable, “s”, this way:

s <- VOTdata[sample(nrow(VOTdata),5),]

• Determine the mean of a sample (here, column 7 of the data frame):

mean(VOTdata[sample(nrow(VOTdata),5),7])

• Let’s generate a bunch of samples with a for loop, and then store the mean of each sample in a vector called "means".
• First: declare that the means vector exists:

means = vector()

Page 5:

Looping

• Then set up the for loop:

for (i in 1:20) {

• Store the mean of each sample in the new vector:

  means[i] = mean(VOTdata[sample(nrow(VOTdata),10),7])

• Close off the for loop:

}

• Now check out the new means vector:

means
mean(means)
hist(means)
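The R loop above can be mirrored outside R as well. The sketch below does the same thing in Python with only the standard library; since the class data file requires R, it stands in a hypothetical population of 200 VOT-like values (mean near 49, standard deviation near 12, both assumed for illustration).

```python
import random
import statistics

random.seed(1)

# Hypothetical stand-in for the VOT column: 200 values,
# mean about 49 and standard deviation about 12 (assumed).
population = [random.gauss(49, 12) for _ in range(200)]

# Mirror the R for loop: draw 20 samples of size 10
# and store the mean of each sample in "means".
means = []
for i in range(20):
    sample = random.sample(population, 10)
    means.append(statistics.mean(sample))

print(len(means))                       # 20 sample means collected
print(round(statistics.mean(means), 1))  # close to the population mean
```

As with the R version, a histogram of `means` would show the sample means clustering around the population mean.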

Page 6:

Parameter Playtime

• Tweak two parameters:
  • the number of items you select in the sample (from, say, 10 to 100)
  • the number of samples you take (and store)

• How does this change the resulting distribution of sample means?

Page 7:

What is a sampling distribution?

The sampling distribution of a statistic is the distribution of all possible values taken by the statistic when all possible samples of a fixed size n are taken from the population. It is a theoretical idea; we do not actually build it.

The sampling distribution of a statistic is the probability distribution of that statistic.

Page 8:

Sampling distribution of the sample mean

We take many random samples of a given size n from a population with mean µ and standard deviation σ.

Some sample means will be above the population mean µ and some will be below, making up the sampling distribution.

[Figure: histogram of some sample averages, approximating the sampling distribution of x̄.]

Page 9:

Sampling distribution of x̄

For any population with mean µ and standard deviation σ:

• The mean, or center, of the sampling distribution of x̄ is equal to the population mean: µ_x̄ = µ.

• The standard deviation of the sampling distribution is σ/√n, where n is the sample size: σ_x̄ = σ/√n. This is also known as the Standard Error.
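The σ/√n formula can be checked by brute force. This is a hedged Python sketch (the slides work in R): it draws many samples of a fixed size from a made-up normal population and compares the spread of the sample means against the predicted standard error.

```python
import math
import random
import statistics

random.seed(0)

mu, sigma, n = 100.0, 10.0, 25   # assumed population and sample size

# Draw 2000 samples of size n from N(mu, sigma) and keep each sample mean.
means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(2000)]

observed_se = statistics.stdev(means)   # empirical spread of the sample means
predicted_se = sigma / math.sqrt(n)     # the formula: sigma / sqrt(n) = 2.0
print(round(observed_se, 2), predicted_se)
```

The two numbers agree closely, and they shrink together as n grows.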

Page 10:

• Mean of a sampling distribution of x̄

There is no tendency for a sample mean to fall systematically above or below µ, even if the distribution of the raw data is skewed. Thus, the mean of the sampling distribution is an unbiased estimate of the population mean µ; it will be “correct on average” in many samples.

• Standard deviation of a sampling distribution of x̄

The standard deviation of the sampling distribution measures how much the sample statistic varies from sample to sample. It is smaller than the standard deviation of the population by a factor of √n. Averages are less variable than individual observations.

Page 11:

For normally distributed populations

When a variable in a population is normally distributed, the sampling distribution of x̄ for all possible samples of size n is also normally distributed.

If the population is N(µ, σ), then the sample means distribution is N(µ, σ/√n).

[Figure: the population distribution and the narrower sampling distribution.]

Page 12:

The central limit theorem

Central Limit Theorem: When randomly sampling from any population with mean µ and standard deviation σ, when n is large enough, the sampling distribution of x̄ is approximately normal: x̄ ~ N(µ, σ/√n).

[Figure: a population with a strongly skewed distribution, followed by the sampling distributions of x̄ for n = 2, n = 10, and n = 25 observations, which grow progressively more normal.]
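The progression in the figure can be reproduced numerically. This Python sketch (a stand-in for the R workflow, using an assumed exponential population to get strong right skew) measures the skewness of the distribution of sample means for n = 2, 10, and 25; it shrinks toward 0, i.e. toward symmetry, as the CLT predicts.

```python
import random
import statistics

random.seed(2)

# A strongly right-skewed population, like the one pictured: exponential.
population = [random.expovariate(1.0) for _ in range(10000)]

def skewness_of_means(n, reps=2000):
    """Sample skewness of the distribution of sample means for size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(reps)]
    mu, sd = statistics.mean(means), statistics.stdev(means)
    return statistics.mean(((m - mu) / sd) ** 3 for m in means)

# Skewness shrinks toward 0 (approximate normality) as n grows.
for n in (2, 10, 25):
    print(n, round(skewness_of_means(n), 2))
```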

Page 13:

Income distribution

Let’s consider the very large database of individual incomes from the Bureau of Labor Statistics as our population. It is strongly right skewed.

– We take 1000 SRSs of 100 incomes, calculate the sample mean for each, and make a histogram of these 1000 means.
– We also take 1000 SRSs of 25 incomes, calculate the sample mean for each, and make a histogram of these 1000 means.

Which histogram corresponds to samples of size 100? Of size 25?

Page 14:

How large a sample size?

In many cases, n = 25 isn’t a huge sample. Thus, even for strange population distributions we can assume a normal sampling distribution of the mean and work with it to solve problems.

How large is large enough? It depends on the population distribution: more observations are required if the population distribution is far from normal.

– A sample size of 25 is generally enough to obtain a normal sampling distribution despite strong skewness or even mild outliers.

– A sample size of 40 will typically be good enough to overcome extreme skewness and outliers.

Page 15:

Further properties

Any linear combination of independent normal random variables is also normally distributed.

More generally, the central limit theorem is valid as long as we are sampling many small random events, even if the events have different distributions (as long as no one random event dominates the others).

Why is this cool? It explains why the normal distribution is so common.

Example: Height seems to be determined by a large number of genetic and environmental factors, like nutrition. The “individuals” are genes and environmental factors. Your height is a mean.

Page 16:

Overview of Inference

• Methods for drawing conclusions about a population from sample data are called statistical inference.

• Methods:
  – Confidence intervals: estimating a value of a population parameter.
  – Tests of significance: assessing evidence for a claim about a population.

• Inference is appropriate when data are produced by either a random sample or a randomized experiment.

Page 17:

Statistical confidence

Although the sample mean, x̄, is a unique number for any particular sample, if you pick a different sample you will probably get a different sample mean.

In fact, you could get many different values for the sample mean, and virtually none of them would actually equal the true population mean, µ.

Page 18:

But the sampling distribution is narrower than the population distribution, by a factor of √n.

Thus, the estimates gained from our samples are always relatively close to the population parameter µ.

[Figure: the population distribution N(µ, σ) of individual subjects, and the narrower distribution N(µ, σ/√n) of sample means for n subjects.]

If the population is normally distributed N(µ, σ), so is the sampling distribution N(µ, σ/√n).

Page 19:

[Figure: many sample means plotted around µ; each red dot is the mean value of an individual sample.]

95% of all sample means will be within roughly 2 standard deviations (2*σ/√n) of the population parameter µ.

Distances are symmetrical, which implies that the population parameter µ must be within roughly 2 standard deviations of the sample average x̄ in 95% of all samples.

This reasoning is the essence of statistical inference.

Page 20:

Experiment #2!

• Let’s test out the confidence interval approach with our VOT data.
• Take a sample of 9 elements from the data set:

s9 <- VOTdata[sample(nrow(VOTdata),9),]

• Calculate the mean and standard deviation:

mean(s9[,7])
sd(s9[,7])

• Note that with this sample, we’re estimating the population mean.
• …so the standard error is the standard deviation of the sample divided by the square root of n. In this case:

se <- sd(s9[,7])/sqrt(9)

Page 21:

Determining the Interval

• The theory of inference says:
  • There is a 95% chance that the population mean (µ = 49.4) will be within the sample mean ± 1.96 * standard error.
  • This range = the 95% confidence interval.

• Calculate the confidence interval for your sample:
  • Upper limit = mean + 1.96 * standard error
  • Lower limit = mean - 1.96 * standard error

• Does your confidence interval include the population mean?
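The upper/lower-limit arithmetic can be sketched as follows. This is Python rather than R, and the sample mean and standard deviation are made-up stand-ins for whatever `mean(s9[,7])` and `sd(s9[,7])` return for your own sample.

```python
import math

# Hypothetical s9 summary values; your own sample's numbers will differ.
sample_mean = 52.0   # stand-in for mean(s9[,7])
sample_sd = 15.0     # stand-in for sd(s9[,7])
n = 9

se = sample_sd / math.sqrt(n)      # standard error = 15 / 3 = 5.0
lower = sample_mean - 1.96 * se    # 52 - 9.8 = 42.2
upper = sample_mean + 1.96 * se    # 52 + 9.8 = 61.8
print(round(lower, 1), round(upper, 1))   # 42.2 61.8

# Does the interval cover the population mean of 49.4 quoted on the slide?
print(lower <= 49.4 <= upper)      # True
```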

Page 22:

The Properties of Confidence

• We can increase or decrease the confidence level however we see fit:
  • 80% confidence interval
  • 99% confidence interval

• Q1: Will the estimated range of the mean (= the “margin of error”) be larger or smaller when the confidence % is smaller?
• Q2: How does sample size affect the margin of error?

Page 23:

Confidence intervals

The confidence interval is a range of values with an associated probability or confidence level C. The probability quantifies the chance that the interval contains the true population parameter.

x̄ ± 4.2 is a 95% confidence interval for the population parameter µ. This says that in 95% of the cases, the actual value of µ will be within 4.2 units of the value of x̄.

Page 24:

Implications

We don’t need to take a lot of random samples to “rebuild” the sampling distribution and find µ at its center.

All we need is one SRS of size n; we rely on the properties of the sample means distribution to infer the population mean µ.

[Figure: a single sample of size n drawn from the population, used to infer µ.]

Page 25:

Reworded

With 95% confidence, we can say that µ should be within roughly 2 standard deviations (2*σ/√n) from our sample mean x̄.

– In 95% of all possible samples of this size n, µ will indeed fall in our confidence interval.

– In only 5% of samples would x̄ be farther from µ.

Page 26:

A confidence interval can be expressed as:

• Mean ± m, where m is called the margin of error: µ within x̄ ± m. Example: 120 ± 6.

• Two endpoints of an interval: µ within (x̄ − m) to (x̄ + m). Example: 114 to 126.

A confidence level C (in %) indicates the probability that µ falls within the interval. It represents the area under the normal curve within ± m of the center of the curve.

Page 27:

Review: standardizing the normal curve using z

z = (x̄ − µ) / (σ/√n)

[Figure: the sampling distribution N(µ, σ/√n), e.g. N(64.5, 2.5), mapped onto the standard normal curve N(0,1); standardized height (no units).]

Here, we work with the sampling distribution, and σ/√n is its standard deviation (spread). Remember that σ is the standard deviation of the original population.

Page 28:

Varying confidence levels

Confidence intervals contain the population mean µ in C% of samples. Different areas under the curve give different confidence levels C.

Practical use of z: z*. z* is related to the chosen confidence level C: C is the area under the standard normal curve between −z* and z*.

The confidence interval is thus:

x̄ ± z*·σ/√n

Example: For an 80% confidence level C, 80% of the normal curve’s area is contained in the interval.

Page 29:

Link between confidence level and margin of error

The confidence level C determines the value of z* (in Table C). The margin of error also depends on z*:

m = z*·σ/√n

A higher confidence level C implies a larger margin of error m (thus less precision in our estimates). A lower confidence level C produces a smaller margin of error m (thus better precision in our estimates).
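The trade-off above can be computed directly. This Python sketch (the slides use R and Table C; here `statistics.NormalDist`, Python 3.8+, plays the role of the table) finds z* and the margin of error m for several confidence levels, using the cherry-tomato numbers σ = 5 g and n = 4 from the later slides.

```python
import math
from statistics import NormalDist

sigma, n = 5.0, 4            # the cherry-tomato setup: sigma = 5 g, n = 4
se = sigma / math.sqrt(n)

# z* is the (1 + C) / 2 quantile of N(0, 1); the margin of error is m = z* * se.
for C in (0.80, 0.95, 0.99):
    z_star = NormalDist().inv_cdf((1 + C) / 2)
    print(f"C = {C:.0%}  z* = {z_star:.3f}  m = {z_star * se:.2f} g")
```

Raising C from 80% to 99% roughly doubles the margin of error, exactly as the slide states.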

Page 30:

Sample size and experimental design

You may need a certain margin of error (e.g., drug trial, manufacturing specs). In many cases, the population variability (σ) is fixed, but we can choose the number of measurements (n). So plan ahead what sample size to use to achieve that margin of error.

Solving m = z*·σ/√n for n gives:

n = (z*·σ/m)²

Remember, though, that sample size is not always stretchable at will. There are typically costs and constraints associated with large samples. The best approach is to use the smallest sample size that can give you useful results.
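The n = (z*·σ/m)² formula can be sketched as a small planning helper. This is illustrative Python (not part of the course's R materials); `required_n` is a hypothetical helper name, and the result is rounded up since n must be a whole number of measurements.

```python
import math
from statistics import NormalDist

def required_n(sigma, margin, C=0.95):
    """Smallest n with z* * sigma / sqrt(n) <= margin, i.e. n = (z* sigma / margin)^2."""
    z_star = NormalDist().inv_cdf((1 + C) / 2)
    return math.ceil((z_star * sigma / margin) ** 2)

# To pin down the mean of packs with sigma = 5 g to within 2 g at 95% confidence:
print(required_n(5, 2))    # (1.96 * 5 / 2)^2 = 24.01, so n = 25
```

Halving the target margin roughly quadruples the required sample size, which is why precision is expensive.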

Page 31:

Cautions about using x̄ ± z*·σ/√n

• Data must be an SRS from the population.
• The formula is not correct for other sampling designs.
• Inference cannot rescue badly produced data.
• Confidence intervals are not resistant to outliers.
• If n is small (< 15) and the population is not normal, the true confidence level will be different from C.
• The standard deviation σ of the population must be known.

The margin of error in a confidence interval covers only random sampling errors!

Page 32:

Interpretation of Confidence Intervals

• Conditions under which an inference method is valid are never fully met in practice. Exploratory data analysis and judgment should be used when deciding whether or not to use a statistical procedure.

• Any individual confidence interval either will or will not contain the true population mean. It is wrong to say that the probability is 95% that the true mean falls in the confidence interval.

• The correct interpretation of a 95% confidence interval is that we are 95% confident that the true mean falls within the interval. The confidence interval was calculated by a method that gives correct results in 95% of all possible samples. In other words, if many such confidence intervals were constructed, 95% of these intervals would contain the true mean.

Page 33:

Stating hypotheses

A test of statistical significance tests a specific hypothesis using sample data to decide on the validity of the hypothesis.

In statistics, a hypothesis is an assumption or a theory about the characteristics of one or more variables in one or more populations.

What you want to know: Does the calibrating machine that sorts cherry tomatoes into packs need revision?

The same question reframed statistically: Is the population mean µ for the distribution of weights of cherry tomato packages equal to 227 g (i.e., half a pound)?

Page 34:

The null hypothesis is a very specific statement about a parameter of the population(s). It is labeled H0.

The alternative hypothesis is a more general statement about a parameter of the population(s) that is exclusive of the null hypothesis. It is labeled Ha.

Weight of cherry tomato packs:

H0: µ = 227 g (µ is the average weight of the population of packs)
Ha: µ ≠ 227 g (µ is either larger or smaller)

Page 35:

One-sided and two-sided tests

• A two-tail or two-sided test of the population mean has these null and alternative hypotheses:

H0: µ = [a specific number]    Ha: µ ≠ [a specific number]

• A one-tail or one-sided test of a population mean has these null and alternative hypotheses:

H0: µ = [a specific number]    Ha: µ < [a specific number]    OR
H0: µ = [a specific number]    Ha: µ > [a specific number]

The FDA tests whether a generic drug has an absorption extent similar to the known absorption extent of the brand-name drug it is copying. Higher or lower absorption would both be problematic, thus we test:

H0: µgeneric = µbrand    Ha: µgeneric ≠ µbrand    (two-sided)

Page 36:

How to choose?

What determines the choice of a one-sided versus a two-sided test is what we know about the problem before we perform a test of statistical significance.

A health advocacy group tests whether the mean nicotine content of a brand of cigarettes is greater than the advertised value of 1.4 mg. Here, the health advocacy group suspects that cigarette manufacturers sell cigarettes with a nicotine content higher than what they advertise in order to better addict consumers to their products and maintain revenues.

Thus, this is a one-sided test: H0: µ = 1.4 mg    Ha: µ > 1.4 mg

It is important to make that choice before performing the test, or else you could make a choice of “convenience” or fall into circular logic.

Page 37:

The P-value

The packaging process has a known standard deviation σ = 5 g.

H0: µ = 227 g versus Ha: µ ≠ 227 g

The average weight from your four random boxes is 222 g. What is the probability of drawing a random sample such as yours if H0 is true?

Tests of statistical significance quantify the chance of obtaining a particular random sample result if the null hypothesis were true. This quantity is the P-value.

This is a way of assessing the “believability” of the null hypothesis, given the evidence provided by a random sample.

Page 38:

Interpreting a P-value

Could random variation alone account for the difference between the null hypothesis and observations from a random sample?

– A small P-value implies that random variation due to the sampling process alone is not likely to account for the observed difference.

– With a small P-value we reject H0. The true property of the population is significantly different from what was stated in H0.

Thus, small P-values are strong evidence AGAINST H0. But how small is small…?

Page 39:

Significant P-value?

[Figure: normal curves with shaded tail areas for P = 0.1711, 0.2758, 0.0892, 0.0735, 0.01, and 0.05.]

When the shaded area becomes very small, the probability of drawing such a sample at random gets very slim. Oftentimes, a P-value of 0.05 or less is considered significant: the phenomenon observed is unlikely to be entirely due to a chance event from the random sampling.

Page 40:

Tests for a population mean

To test the hypothesis H0: µ = µ0 based on an SRS of size n from a Normal population with unknown mean µ and known standard deviation σ, we rely on the properties of the sampling distribution N(µ, σ/√n).

The P-value is the area under the sampling distribution for values at least as extreme, in the direction of Ha, as that of our random sample.

Again, we first calculate a z-value

z = (x̄ − µ0) / (σ/√n)

and then use Table A.

[Figure: the sampling distribution with spread σ/√n, centered on the µ defined by H0, with the observed x̄ marked.]

Page 41:

P-value in one-sided and two-sided tests

To calculate the P-value for a two-sided test, use the symmetry of the normal curve: find the P-value for a one-sided test and double it.

[Figure: shaded tail areas for a one-sided (one-tailed) test and for a two-sided (two-tailed) test.]

Page 42:

Does the packaging machine need revision?

– H0: µ = 227 g versus Ha: µ ≠ 227 g
– What is the probability of drawing a random sample such as yours if H0 is true?

x̄ = 222 g, σ = 5 g, n = 4

z = (x̄ − µ) / (σ/√n) = (222 − 227) / (5/√4) = −2

From Table A, the area under the standard normal curve to the left of z = −2 is 0.0228. Thus, P-value = 2 * 0.0228 = 4.56%.

[Figure: the sampling distribution of average package weight (n = 4), centered on µ (H0) = 227 g with σ/√n = 2.5 g; tick marks at 217, 222, 227, 232, and 237 g, with 2.28% shaded in each tail.]

The probability of getting a random sample average so different from µ is so low that we reject H0. The machine does need recalibration.
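The slide's calculation can be checked step by step. This is a Python sketch standing in for Table A (via `statistics.NormalDist`, Python 3.8+); note the exact two-sided P-value is 4.55%, while the slide's 4.56% comes from the table's per-tail rounding to 0.0228.

```python
import math
from statistics import NormalDist

mu0, sigma = 227.0, 5.0     # H0 mean and known population sigma (grams)
xbar, n = 222.0, 4          # observed sample mean and sample size

z = (xbar - mu0) / (sigma / math.sqrt(n))   # (222 - 227) / 2.5 = -2.0
p_two_sided = 2 * NormalDist().cdf(z)       # one-tail area, doubled
print(z, round(p_two_sided, 4))             # -2.0 0.0455
```

Since 0.0455 ≤ 0.05, the result is significant at the 5% level, matching the slide's conclusion.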

Page 43:

Steps for Tests of Significance

1. State the null hypothesis H0 and the alternative hypothesis Ha.
2. Calculate the value of the test statistic.
3. Determine the P-value for the observed data.
4. State a conclusion.

Page 44:

The significance level: α

The significance level, α, is the largest P-value tolerated for rejecting a true null hypothesis (how much evidence against H0 we require). This value is decided arbitrarily before conducting the test.

– If the P-value is equal to or less than α (P ≤ α), then we reject H0.
– If the P-value is greater than α (P > α), then we fail to reject H0.

Does the packaging machine need revision? Two-sided test; the P-value is 4.56%.

* If α had been set to 5%, then the P-value would be significant.
* If α had been set to 1%, then the P-value would not be significant.

Page 45: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

When the z score falls within the

rejection region (shaded area on

the tail-side), the p-value is smaller

than α and you have shown

statistical significance.z = -1.645

Z

One-sided test, α = 5%

Two-sided test, α = 1%

Page 46:

Rejection region for a two-tail test of µ with α = 0.05 (5%)

A two-sided test means that α is spread between both tails of the curve, thus:

– a middle area C of 1 − α = 95%, and
– an upper tail area of α/2 = 0.025.

Table C (excerpt):

upper tail probability p: 0.25, 0.20, 0.15, 0.10, 0.05, 0.025, 0.02, 0.01, 0.005, 0.0025, 0.001, 0.0005
z*: 0.674, 0.841, 1.036, 1.282, 1.645, 1.960, 2.054, 2.326, 2.576, 2.807, 3.091, 3.291
confidence level C: 50%, 60%, 70%, 80%, 90%, 95%, 96%, 98%, 99%, 99.5%, 99.8%, 99.9%

Page 47:

Confidence intervals to test hypotheses

Because a two-sided test is symmetrical, you can also use a confidence interval to test a two-sided hypothesis. In a two-sided test, C = 1 − α, where C is the confidence level and α the significance level.

Packs of cherry tomatoes (σ = 5 g): H0: µ = 227 g versus Ha: µ ≠ 227 g

Sample average = 222 g. The 95% CI for µ = 222 ± 1.96*5/√4 = 222 g ± 4.9 g.

227 g does not belong to the 95% CI (217.1 to 226.9 g). Thus, we reject H0.
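The interval arithmetic on this slide can be verified in a few lines. This Python sketch simply reproduces the slide's numbers; it is not part of the course's R materials.

```python
import math

xbar, sigma, n = 222.0, 5.0, 4

m = 1.96 * sigma / math.sqrt(n)            # margin of error = 1.96 * 2.5 = 4.9 g
lower, upper = xbar - m, xbar + m
print(round(lower, 1), round(upper, 1))    # 217.1 226.9

# H0's value of 227 g falls outside the interval, so H0 is rejected.
print(lower <= 227.0 <= upper)             # False
```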

Page 48:

Logic of confidence interval test

Example: Your sample gives a 99% confidence interval of x̄ ± m = 0.84 ± 0.0101.

With 99% confidence, could samples be from populations with µ = 0.86? With µ = 0.85?

– Cannot reject H0: µ = 0.85 (0.85 lies inside the interval).
– Reject H0: µ = 0.86 (0.86 lies outside the interval).

A confidence interval gives a black-and-white answer: reject or don’t reject H0. But it also estimates a range of likely values for the true population mean µ.

A P-value quantifies how strong the evidence is against H0. But if you reject H0, it doesn’t provide any information about the true population mean µ.

Page 49:

Page 50:

Reasoning of Significance Tests

We have seen that the properties of the sampling distribution of x̄ help us estimate a range of likely values for the population mean µ. We can also rely on the properties of the sampling distribution to test hypotheses.

Example: You are in charge of quality control in your food company. You sample randomly four packs of cherry tomatoes, each labeled 1/2 lb. (227 g). The average weight from your four boxes is 222 g. Obviously, we cannot expect boxes filled with whole tomatoes to all weigh exactly half a pound. Thus:

– Is the somewhat smaller weight simply due to chance variation?
– Is it evidence that the calibrating machine that sorts cherry tomatoes into packs needs revision?

Page 51: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

Stating hypotheses

A test of statistical significance tests a specific hypothesis using

sample data to decide on the validity of the hypothesis.

In statistics, a hypothesis is an assumption or a theory about the

characteristics of one or more variables in one or more populations.

What you want to know: Does the calibrating machine that sorts cherry

tomatoes into packs need revision?

The same question reframed statistically: Is the population mean µ for the

distribution of weights of cherry tomato packages equal to 227 g (i.e., half

a pound)?

Page 52: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

The null hypothesis is a very specific statement about a parameter of the

population(s). It is labeled H0.

The alternative hypothesis is a more general statement about a parameter of

the population(s) that is exclusive of the null hypothesis. It is labeled Ha.

Weight of cherry tomato packs:

H0 : µ = 227 g (µ is the average weight of the population of packs)

Ha : µ ≠ 227 g (µ is either larger or smaller)

Page 53: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

One-sided and two-sided tests• A two-tail or two-sided test of the population mean has these null and alternative hypotheses:

H0 : µ = [a specific number] Ha : µ [a specific number]

• A one-tail or one-sided test of a population mean has these null and alternative hypotheses:

H0 : µ = [a specific number] Ha : µ < [a specific number] OR

H0 : µ = [a specific number] Ha : µ > [a specific number]

The FDA tests whether a generic drug has an absorption extent similar to the known absorption extent of the brand-name drug it is copying. Higher or lower absorption would both be problematic, thus we test:

H0 : µgeneric = µbrand Ha : µgeneric µbrand two-sided

Page 54: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

How to choose?What determines the choice of a one-sided versus a two-sided test is what we know about the problem before we perform a test of statistical significance.

A health advocacy group tests whether the mean nicotine content of a brand of cigarettes is greater than the advertised value of 1.4 mg.Here, the health advocacy group suspects that cigarette manufacturers sell cigarettes with a nicotine content higher than what they advertise in order to better addict consumers to their products and maintain revenues.

Thus, this is a one-sided test: H0 : µ = 1.4 mg Ha : µ > 1.4 mg

It is important to make that choice before performing the test or else you could make a choice of “convenience” or fall into circular logic.

Page 55: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

The P-valueThe packaging process has a known standard deviation s = 5 g.

H0 : µ = 227 g versus Ha : µ ≠ 227 g

The average weight from your four random boxes is 222 g.

What is the probability of drawing a random sample such as yours if H0 is true?

Tests of statistical significance quantify the chance of obtaining a particular

random sample result if the null hypothesis were true. This quantity is the P-

value.

This is a way of assessing the “believability” of the null hypothesis, given the

evidence provided by a random sample.

Page 56: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

Interpreting a P-value

Could random variation alone account for the difference between the null hypothesis and observations from a random sample?

– A small P-value implies that random variation due to the sampling process alone is not likely to account for the observed difference.

– With a small p-value we reject H0. The true property of the population is significantly different from

what was stated in H0.

Thus, small P-values are strong evidence AGAINST H0.

But how small is small…?

Page 57: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

P = 0.1711

P = 0.2758

P = 0.0892

P = 0.0735

P = 0.01

P = 0.05

When the shaded area becomes very small, the probability of drawing such a sample at random gets very slim. Oftentimes, a P-value of 0.05 or less is considered significant: The phenomenon observed is unlikely to be entirely due to chance event from the random sampling.

Significant P-value???

Page 58: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

Tests for a population mean

µ defined by H0

x

Sampling distribution

z x ms n

σ/√n

To test the hypothesis H0 : µ = µ0 based on an SRS of size n from a Normal

population with unknown mean µ and known standard deviation σ, we

rely on the properties of the sampling distribution N(µ, σ/√n).

The P-value is the area under the sampling distribution for values at least

as extreme, in the direction of Ha, as that of our random sample.

Again, we first calculate a z-value

and then use Table A.

Page 59: Sampling and Inference September 23, 2011. Web Address  swinters/public_html/lingstats/index.html.

P-value in one-sided and two-sided tests

To calculate the P-value for a two-sided test, use the symmetry of the normal curve: find the P-value for a one-sided test and double it.

One-sided (one-tailed) test

Two-sided (two-tailed) test
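The doubling rule can be checked numerically. This sketch is not from the slides (which use R and Table A); it is a minimal Python illustration using the standard library's NormalDist as a stand-in for a table lookup, with an arbitrary illustrative z value:

```python
from statistics import NormalDist

def p_value(z, two_sided=False):
    """P-value for a test statistic z against the standard normal N(0, 1).

    One-sided: area in the tail beyond z (in the direction of Ha).
    Two-sided: double the one-sided area, by symmetry of the normal curve.
    """
    phi = NormalDist()  # standard normal N(0, 1)
    one_tail = phi.cdf(z) if z < 0 else 1 - phi.cdf(z)
    return 2 * one_tail if two_sided else one_tail

# Illustrative value (not from the slides): z = -1.2
print(round(p_value(-1.2), 4))                  # one-sided tail area
print(round(p_value(-1.2, two_sided=True), 4))  # doubled for two-sided
```

The two-sided result is exactly twice the one-sided result, which is the symmetry argument on this slide.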

Page 60

Does the packaging machine need revision?

– H0 : µ = 227 g versus Ha : µ ≠ 227 g

– What is the probability of drawing a random sample such as yours if H0 is true?

x̄ = 222 g, σ = 5 g, n = 4

z = (x̄ − µ)/(σ/√n) = (222 − 227)/(5/√4) = −2

From Table A, the area under the standard normal curve to the left of z = −2 is 0.0228.

Thus, P-value = 2 × 0.0228 = 0.0456 = 4.56%.

[Figure: sampling distribution of the average package weight (n = 4), N(227 g, σ/√n = 2.5 g), with 2.28% shaded in each tail; axis values 217, 222, 227, 232, 237]

The probability of getting a random sample average so different from µ is so low that we reject H0. The machine does need recalibration.
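The packaging-machine numbers can be reproduced in a few lines. This sketch is not from the slides (which use Table A); it uses Python's NormalDist for the normal-curve area:

```python
import math
from statistics import NormalDist

# Slide values: H0: mu = 227 g, sample mean 222 g, sigma = 5 g, n = 4
mu0, xbar, sigma, n = 227, 222, 5, 4

z = (xbar - mu0) / (sigma / math.sqrt(n))  # test statistic, -2.0
p_two_sided = 2 * NormalDist().cdf(z)      # area in both tails
print(z, round(p_two_sided, 4))            # close to the 4.56% from Table A
```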

Page 61

Steps for Tests of Significance

1. State the null hypothesis H0 and the alternative hypothesis Ha.

2. Calculate the value of the test statistic.

3. Determine the P-value for the observed data.

4. State a conclusion.

Page 62

The significance level: α

The significance level, α, is the largest P-value tolerated for rejecting a true null hypothesis (how much evidence against H0 we require).

This value is decided arbitrarily before conducting the test.

– If the P-value is equal to or less than α (P ≤ α), then we reject H0.

– If the P-value is greater than α (P > α), then we fail to reject H0.

Does the packaging machine need revision?

Two-sided test. The P-value is 4.56%.

* If α had been set to 5%, then the P-value would be significant.

* If α had been set to 1%, then the P-value would not be significant.
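The decision rule (reject H0 when P ≤ α) is mechanical. A small sketch, not from the slides, applying it to the two-sided P-value of 4.56% from the packaging example:

```python
def decide(p_value, alpha):
    """Reject H0 when the P-value is at or below the significance level."""
    return "reject H0" if p_value <= alpha else "fail to reject H0"

p = 0.0456                # two-sided P-value from the packaging example
print(decide(p, 0.05))    # significant at the 5% level
print(decide(p, 0.01))    # not significant at the 1% level
```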

Page 63

When the z score falls within the rejection region (the shaded area on the tail side), the P-value is smaller than α and you have shown statistical significance.

[Figure: rejection regions on the standard normal (Z) curve. One-sided test, α = 5%: reject when z < −1.645. Two-sided test, α = 1%.]

Page 64

Rejection region for a two-tailed test of µ with α = 0.05 (5%)

Table C (excerpt):

upper tail probability p: 0.25, 0.20, 0.15, 0.10, 0.05, 0.025, 0.02, 0.01, 0.005, 0.0025, 0.001, 0.0005
(…)
z*: 0.674, 0.841, 1.036, 1.282, 1.645, 1.960, 2.054, 2.326, 2.576, 2.807, 3.091, 3.291
confidence level C: 50%, 60%, 70%, 80%, 90%, 95%, 96%, 98%, 99%, 99.5%, 99.8%, 99.9%

A two-sided test means that α is spread between both tails of the curve, thus:

– a middle area C of 1 − α = 95%, and

– an upper tail area of α/2 = 0.025 (shaded in each tail).
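The z* row of Table C can be regenerated from the inverse normal CDF: for confidence level C, z* cuts off an upper tail of (1 − C)/2. This sketch is not from the slides; it uses Python's statistics.NormalDist in place of the printed table:

```python
from statistics import NormalDist

phi_inv = NormalDist().inv_cdf  # inverse of the standard normal CDF

def z_star(confidence):
    """Critical value z* leaving (1 - C)/2 in each tail."""
    upper_tail = (1 - confidence) / 2
    return phi_inv(1 - upper_tail)

# Reproduce a few Table C entries
for C in (0.90, 0.95, 0.99):
    print(f"C = {C:.0%}: z* = {z_star(C):.3f}")
```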

Page 65

Confidence intervals to test hypotheses

Because a two-sided test is symmetrical, you can also use a confidence interval to test a two-sided hypothesis.

In a two-sided test, C = 1 − α, where C is the confidence level and α is the significance level.

[Figure: normal curve with an area of α/2 shaded in each tail]

Packs of cherry tomatoes (σ = 5 g): H0 : µ = 227 g versus Ha : µ ≠ 227 g

Sample average 222 g. 95% CI for µ = 222 ± 1.96*5/√4 = 222 g ± 4.9 g

227 g does not belong to the 95% CI (217.1 to 226.9 g). Thus, we reject H0.
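The cherry-tomato interval can be reproduced directly. A sketch, not from the slides, using Python's NormalDist for the 1.96 critical value:

```python
import math
from statistics import NormalDist

# Slide values: sample mean 222 g, sigma = 5 g, n = 4, 95% confidence
xbar, sigma, n, mu0 = 222, 5, 4, 227

z_star = NormalDist().inv_cdf(0.975)    # ~1.96 for 95% confidence
margin = z_star * sigma / math.sqrt(n)  # ~4.9 g
lo, hi = xbar - margin, xbar + margin   # ~(217.1, 226.9)

# H0: mu = 227 is rejected because 227 lies outside the 95% CI
print(f"95% CI: ({lo:.1f}, {hi:.1f}); reject H0: {not (lo <= mu0 <= hi)}")
```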

Page 66

Logic of confidence interval test

Ex: Your sample gives a 99% confidence interval of x̄ ± m = 0.84 ± 0.0101.

With 99% confidence, could samples be from populations with µ = 0.86? µ = 0.85?

[Figure: 99% C.I. around x̄, with µ = 0.85 inside the interval and µ = 0.86 outside]

Cannot reject H0: µ = 0.85.

Reject H0: µ = 0.86.

A confidence interval gives a black-and-white answer: reject or don't reject H0. But it also estimates a range of likely values for the true population mean µ.

A P-value quantifies how strong the evidence is against H0. But if you reject H0, it doesn't provide any information about the true population mean µ.

Page 67

Page 68

The National Collegiate Athletic Association (NCAA) requires Division I athletes to score at least 820 on the combined math and verbal SAT exam to compete in their first college year. The SAT scores of 2003 were approximately normal with mean 1026 and standard deviation 209.

What proportion of all students would be NCAA qualifiers (SAT ≥ 820)?

x = 820, µ = 1026, σ = 209

z = (x − µ)/σ = (820 − 1026)/209 = −206/209 ≈ −0.99

Table A : area under N(0,1) to the left of z = -0.99 is 0.1611 or approx. 16%.

Note: The actual data may contain students who scored exactly 820 on the SAT. However, the proportion of scores exactly equal to 820 is 0 for a normal distribution; this is a consequence of the idealized smoothing of density curves.

area right of 820 = total area − area left of 820 = 1 − 0.1611 = 0.8389 ≈ 84%
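The NCAA calculation can be done without a z table by evaluating the normal CDF directly. A sketch, not from the slides; note that the exact CDF (with z unrounded) differs slightly in the fourth decimal from the Table A value, which rounds z to −0.99:

```python
from statistics import NormalDist

# Slide values: SAT scores ~ N(1026, 209); qualifiers score at least 820
scores = NormalDist(mu=1026, sigma=209)

p_below = scores.cdf(820)  # area left of 820, ~0.16
p_qualify = 1 - p_below    # area right of 820, ~0.84
print(round(p_below, 4), round(p_qualify, 4))
```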

Page 69

The NCAA defines a “partial qualifier” eligible to practice and receive an athletic scholarship, but not to compete, with a combined SAT score of at least 720.

What proportion of all students who take the SAT would be partial qualifiers? That is, what proportion have scores between 720 and 820?

x = 720, µ = 1026, σ = 209

z = (x − µ)/σ = (720 − 1026)/209 = −306/209 ≈ −1.46

Table A : area under N(0,1) to the left of z = -1.46 is 0.0721 or approx. 7%.

area between 720 and 820 = area left of 820 − area left of 720 = 0.1611 − 0.0721 = 0.0890 ≈ 9%

About 9% of all students who take the SAT have scores between 720 and 820.
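The "area between two values" step is a difference of two CDF evaluations. A sketch, not from the slides, with the same N(1026, 209) model:

```python
from statistics import NormalDist

scores = NormalDist(mu=1026, sigma=209)  # SAT score model from the slide

# Partial qualifiers: scores between 720 and 820
p_partial = scores.cdf(820) - scores.cdf(720)
print(round(p_partial, 3))  # roughly 9%, matching the Table A calculation
```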

Page 70

IQ scores: population vs. sample

In a large population of adults, the mean IQ is 112 with standard deviation 20. Suppose 200 adults are randomly selected for a market research campaign.

•The distribution of the sample mean IQ is:

A) Exactly normal, mean 112, standard deviation 20

B) Approximately normal, mean 112, standard deviation 20

C) Approximately normal, mean 112, standard deviation 1.414

D) Approximately normal, mean 112, standard deviation 0.1

Answer: C) Approximately normal, mean 112, standard deviation 1.414

Population distribution: N(µ = 112, σ = 20)

Sampling distribution for n = 200: N(µ = 112, σ/√n = 1.414)
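The only computation behind answer C is σ/√n. A one-step sketch, not from the slides:

```python
import math

# Slide values: population N(112, 20), sample size n = 200
sigma, n = 20, 200

sd_sampling = sigma / math.sqrt(n)  # standard deviation of the sample mean
print(round(sd_sampling, 3))        # ~1.414, far smaller than sigma = 20
```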

Page 71

Application

Hypokalemia is diagnosed when blood potassium levels are below 3.5 mEq/dl. Let's assume that we know a patient whose measured potassium levels vary daily according to a normal distribution N(µ = 3.8, σ = 0.2).

If only one measurement is made, what is the probability that this patient will be misdiagnosed with hypokalemia?

z = (x − µ)/σ = (3.5 − 3.8)/0.2 = −1.5, and P(z < −1.5) = 0.0668 ≈ 7%

Instead, if measurements are taken on 4 separate days, what is the probability of a misdiagnosis?

z = (x̄ − µ)/(σ/√n) = (3.5 − 3.8)/(0.2/√4) = −3, and P(z < −3) = 0.0013 ≈ 0.1%

Note: Make sure to standardize (z) using the standard deviation of the sampling distribution.
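Both misdiagnosis probabilities fall out of the same CDF call once the right standard deviation is used; averaging 4 measurements shrinks the spread by √4. A sketch, not from the slides:

```python
import math
from statistics import NormalDist

mu, sigma, cutoff = 3.8, 0.2, 3.5  # slide values (mEq/dl)

# One measurement: daily values ~ N(3.8, 0.2)
p_one = NormalDist(mu, sigma).cdf(cutoff)

# Mean of 4 measurements: sampling distribution N(3.8, 0.2 / sqrt(4))
p_four = NormalDist(mu, sigma / math.sqrt(4)).cdf(cutoff)

print(round(p_one, 4), round(p_four, 4))  # ~7% vs ~0.1%
```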

Page 72

Practical note

• Large samples are not always attainable.

– Sometimes the cost, difficulty, or preciousness of what is studied drastically limits any possible sample size.

– Blood samples/biopsies: No more than a handful of repetitions are acceptable. Oftentimes, we even make do with just one.

– Opinion polls have a limited sample size due to time and cost of operation. During election times, though, sample sizes are increased for better accuracy.

• Not all variables are normally distributed.

– Income, for example, is typically strongly skewed.

– Is x̄ still a good estimator of µ then?
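The closing question can be explored by simulation. A sketch, not from the slides: draw repeated samples from a strongly right-skewed distribution (an exponential with mean 1, a stand-in for income-like data; the sample sizes and seed are arbitrary) and look at the sample means:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Strongly right-skewed "population": exponential with mean 1.0
def sample_mean(n):
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

means = [sample_mean(50) for _ in range(2000)]

# x-bar remains an unbiased estimator of mu even for skewed data, and
# its sampling distribution is much tighter than the population spread
print(round(statistics.fmean(means), 3))
```

The average of the sample means lands very close to the population mean, which is one half of the answer; the shape of their distribution (approximately normal for moderate n) is the other.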