Basic Concepts


Chapter 1

College of Education and Health Professions

Statistical Inference
• A population is a group with a common characteristic.
• A population is usually large, so it is difficult to measure all members.
• To make inferences about a population we take a representative (RANDOM) sample.
• In a random sample each member of the population is equally likely to be selected.
• A sample cannot accurately represent the population unless it is drawn without BIAS.
• In a bias-free sample the selection of one member does not affect the selection of future subjects.

Types of Variables

• Continuous variable – can assume any value (ht, wt)

• Discrete variable – limited to certain values: integers or whole numbers (2.5 children?)

Level of Measurement

• Nominal Scale: mutually exclusive categories (male, female).
• Ordinal Scale: gives quantitative order to the variable, but DOES NOT indicate how much better one score is than another (a pain rating of 2 is not twice a rating of 1).
• Interval Scale: has equal units, but zero is not an absence of the variable (temperature).
• Ratio Scale: based on order, has equal distances between scale points, and zero is an absence of the variable.

Independent & Dependent Variables

• Independent Variable: the variable being controlled or manipulated (gender, group, class).
• Dependent Variable: the variable being measured; its value depends on the independent variable (math score, height, weight).
• The INDEPENDENT VARIABLE is controlled by the researcher:
– Effects of exercise on body fat.
– Effects of type of instruction on learning.
• The DEPENDENT VARIABLE is the variable being studied:
– Body fat.
– Learning.

Parameters and Statistics

• A parameter describes the population (e.g., the population mean μ).
• A statistic describes the sample (e.g., the sample mean X̄).
• The difference between a statistic and a parameter is the result of sampling error.

Describing and Exploring Data

Chapter 2

College of Education and Health Professions

A frequency distribution organizes the data in a logical order.

Most of the scores are between 46 and 67.

Histogram with the normal curve superimposed.

Measures of Central Tendency

• Mean, Median and Mode describe the middle or central characteristics of the data.
• The mode is the most frequent score.
• The median is the middle score.
• In a normal distribution the mean, median and mode are nearly the same score.

Measures of Variability

• Range (highest minus lowest): suffers from reliance on extreme scores.
• Interquartile Range: the middle 50% of the scores.
• Variance: the sum of squared deviations from the mean divided by (N − 1) [data in squared units].
• Standard Deviation: the square root of the variance [data in original units].

Formulas for Mean, Variance and SD

Mean: X̄ = ΣX / N

Variance: s² = Σ(Xᵢ − X̄)² / (N − 1)

Standard Deviation: s = √[ Σ(Xᵢ − X̄)² / (N − 1) ]
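These formulas can be checked with a short Python sketch; the score values below are invented for illustration.

```python
# Mean, variance, and SD matching the slide's formulas:
# X-bar = sum(X)/N,  s^2 = sum((Xi - X-bar)^2)/(N - 1),  s = sqrt(s^2).
import math

def sample_stats(x):
    """Return (mean, variance, sd) with the N - 1 sample denominator."""
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / (n - 1)
    return mean, var, math.sqrt(var)

scores = [2, 4, 4, 4, 5, 5, 7, 9]
mean, var, sd = sample_stats(scores)
print(mean, round(var, 3), round(sd, 3))  # 5.0 4.571 2.138
```

Note the (N − 1) denominator: dividing by the degrees of freedom rather than N gives the sample (unbiased) variance.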

Degrees of Freedom

• Df is the number of things that are free to vary.
• For example: the sum of three numbers is 10, so the df is two. You can pick any two numbers; once you pick the first two, the third number is fixed and not free to vary.

5 + 4 + ? = 10

SPSS Box Plots

• The top of the box is the upper fourth or 75th percentile.
• The bottom of the box is the lower fourth or 25th percentile.
• 50% of the scores fall within the box, the interquartile range.
• The horizontal line is the median.
• The ends of the whiskers represent the largest and smallest values that are not outliers.
• An outlier, O, is a value more than 1.5 box-lengths below or above the box.
• An extreme value, E, is a value more than 3 box-lengths below or above the box.
• Normally distributed scores typically have whiskers of about the same length, with the box smaller than the whiskers.
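The O and E rules above can be sketched in Python, assuming one "box-length" equals the interquartile range; the data values are invented.

```python
# Tukey-style boxplot labels: values beyond 1.5 box-lengths from the box
# are outliers ("O"); beyond 3 box-lengths they are extreme ("E").
import statistics

def classify(values):
    """Label values beyond 1.5 box-lengths as 'O' and beyond 3 as 'E'."""
    q1, _median, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1                      # the box length
    labels = {}
    for v in values:
        if v < q1 - 3 * iqr or v > q3 + 3 * iqr:
            labels[v] = "E"            # extreme value
        elif v < q1 - 1.5 * iqr or v > q3 + 1.5 * iqr:
            labels[v] = "O"            # outlier
    return labels

data = [9, 12, 13, 14, 15, 15, 16, 18, 40]
print(classify(data))  # the 40 is flagged as extreme
```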

A normal distribution is symmetrical, unimodal and mesokurtic.

A platykurtic distribution is flat.

A leptokurtic distribution is peaked.

Tests for Normality


Tests of Normality

set1:
                        Statistic    df    Sig.
Kolmogorov-Smirnov(a)     .175       14    .200*
Shapiro-Wilk              .890       14    .081

*. This is a lower bound of the true significance.
a. Lilliefors Significance Correction.

– Sig. is not less than 0.05, so the data are normal.
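Outside SPSS, the Shapiro-Wilk test is available in scipy; this sketch uses randomly generated data rather than the slide's set1 values (scipy's plain Kolmogorov-Smirnov test lacks the Lilliefors correction, so Shapiro-Wilk is the closer match).

```python
# Shapiro-Wilk normality check with scipy instead of SPSS.
# The data are randomly generated (n = 14, matching the slide).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
set1 = rng.normal(loc=20, scale=5, size=14)

w_stat, w_p = stats.shapiro(set1)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {w_p:.3f}")

# Decision rule from the slide: if Sig. (p) is not less than 0.05,
# treat the data as normal.
print("normal" if w_p >= 0.05 else "not normal")
```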

Tests for Normality: Normal Probability Plot or Q-Q Plot

[Normal Q-Q Plot of set1: expected normal value plotted against observed value]

– If the data are normal the points cluster around a straight line.

Tests for Normality: Boxplots

[Boxplot of set1]

– The bar is the median, the box extends from the 25th to the 75th percentile, and the whiskers extend to the largest and smallest values within 1.5 box-lengths.

[Boxplot of set1 with cases 15 and 16 flagged]

– Outliers are labeled with O; extreme values are labeled with a star.


Outliers and Extreme Scores

Chapter 2

College of Education and Health Professions

SPSS – Explore BoxPlot
• The top of the box is the upper fourth or 75th percentile.
• The bottom of the box is the lower fourth or 25th percentile.
• 50% of the scores fall within the box, the interquartile range.
• The horizontal line is the median.
• The ends of the whiskers represent the largest and smallest values that are not outliers.
• An outlier, O, is a value more than 1.5 box-lengths below or above the box.
• An extreme value, E, is a value more than 3 box-lengths below or above the box.
• Normally distributed scores typically have whiskers of about the same length, with the box smaller than the whiskers.

Choosing a Z Score to Define Outliers

Z Score   Proportion above +Z   Proportion beyond ±Z
3.0       0.0013                0.0026
3.1       0.0010                0.0020
3.2       0.0007                0.0014
3.3       0.0005                0.0010
3.4       0.0003                0.0006

Decisions for Extremes and Outliers
1. Check your data to verify all numbers are entered correctly.
2. Verify your devices (data testing machines) are working within manufacturer specifications.
3. Use non-parametric statistics; they don't require a normal distribution.
4. Develop criteria to label outliers and remove them from the data set. You must report these in your methods section.
   – If you remove outliers, consider including a statistical analysis of the results with and without the outlier(s). In other words, report both; see Stevens (1990), Detecting outliers.
5. Do a log transformation.
   – If your data have negative numbers you must shift the numbers to the positive scale (e.g., add 20 to each).
   – Try a natural log transformation first; in SPSS use LN().
   – Then try a log base 10 transformation; in SPSS use LG10().
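The shift-then-log recipe in step 5 can be sketched in Python with numpy; the data values and the shift of 10 are invented for illustration.

```python
# Shift-then-log transformation, mirroring the SPSS steps: add a constant
# so all values are positive, then try a natural log and a base-10 log.
import numpy as np

data = np.array([-5.3, -2.1, 0.0, 1.4, 3.4, 10.0])

shift = 10.0                    # must make every value strictly positive
shifted = data + shift
ln_data = np.log(shifted)       # natural log, like SPSS LN()
log10_data = np.log10(shifted)  # base-10 log, like SPSS LG10()
print(ln_data.round(3))
```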

Transform - Compute

Add 10 to the data. Then log transform.

Add 10 to each data point.

Try Natural Log.

Last option, use Log10.

Add 10 to each data point, since you cannot take the log of a negative number.

First try a Natural Log Transformation

If Natural Log doesn’t work try Log10 Transformation.

Outlier Criteria: 1.5 * Interquartile Range from the Median

Milner CE, Ferber R, Pollard CD, Hamill J, Davis IS. Biomechanical Factors Associated with Tibial Stress Fracture in Female Runners. Med Sci Sports Exerc. 38(2):323-328, 2006.

Statistical analysis. Boxplots were used to identify outliers, defined as values >1.5 times the interquartile range away from the median. Identified outliers were removed from the data before statistical analysis of the differences between groups. A total of six data points fell outside this defined range and were removed as follows: two from the RTSF group for BALR, one from the CTRL group for ASTIF, one from each group for KSTIF, and one from the CTRL group for TIBAMI.

Outlier Criteria: 1.5 * Interquartile Range from the Median (using SPSS)


Descriptives

Descriptives for Jump:

                                 Statistic    Std. Error
Mean                             -.0785       .53292
95% CI for Mean, Lower Bound     -1.1684
95% CI for Mean, Upper Bound     1.0115
5% Trimmed Mean                  -.2540
Median                           -.3549
Variance                         8.520
Std. Deviation                   2.91893
Minimum                          -5.27
Maximum                          10.00
Range                            15.27
Interquartile Range              3.39
Skewness                         1.176        .427
Kurtosis                         3.783        .833

Outliers = values more than 1.5 × 3.39 = 5.085 above or below the median of −.3549 (i.e., outside −5.44 to 4.73).
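A sketch of this rule in Python; note that it measures 1.5 × IQR from the median, not from the quartiles as standard Tukey fences do. The jump values below are invented, not the SPSS output above.

```python
# The Milner et al. criterion in code: flag values more than 1.5 * IQR
# away from the MEDIAN.
import numpy as np

def median_iqr_outliers(values, k=1.5):
    """Return values outside median +/- k * interquartile range."""
    x = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    med = np.median(x)
    half_width = k * (q3 - q1)
    return x[(x < med - half_width) | (x > med + half_width)]

jump = [-5.27, -0.8, -0.35, -0.2, 0.1, 0.6, 1.2, 10.0]
print(median_iqr_outliers(jump))  # flags -5.27 and 10.0
```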

Outlier Criteria: ± 3 standard deviations from the mean

Tremblay MS, Barnes JD, Copeland JL, Esliger DW. Conquering Childhood Inactivity: Is the Answer in the Past? Med Sci Sports Exerc. 37(7):1187-1194, 2005.

Data analyses. The normality of the data was assessed by calculating skewness and kurtosis statistics. The data were considered within the limits of a normal distribution if the dividend of the skewness and kurtosis statistics and their respective standard errors did not exceed ± 2.0. If the data for a given variable were not normally distributed, one of two steps was taken: either a log transformation (base 10) was performed or the outliers were identified (± 3 standard deviations from the mean) and removed. Log transformations were performed for push-ups and minutes of vigorous physical activity per day. Outliers were removed from the data for the following variables: sitting height, body mass index (BMI), handgrip strength, and activity counts per minute.

Computing and Saving Z Scores

Check this box and SPSS creates and saves the z scores for all selected variables. The z scores in this case will be named zLight1…

Computing and Saving Z Scores

Now you can identify and remove raw scores more than 3 SDs above or below the mean, if you want to remove outliers.
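The save-z-scores-then-remove workflow can be sketched in Python, run iteratively as the comparison slides describe, on a subset of the Newcomb values from Table 1.1 that includes the −44.

```python
# Z scores by hand (what SPSS saves as, e.g., zLight1), applied
# iteratively: drop |z| > 3, recompute z on the remaining data, repeat.
import numpy as np

def remove_beyond_z(values, cutoff=3.0):
    """Repeatedly drop values with |z| > cutoff, recomputing each pass."""
    x = np.asarray(values, dtype=float)
    while True:
        z = (x - x.mean()) / x.std(ddof=1)  # sample SD, as SPSS computes
        keep = np.abs(z) <= cutoff
        if keep.all():
            return x
        x = x[keep]

subset = [-44, 28, 22, 36, 26, 28, 28, 26, 24, 32, 30, 27]
cleaned = remove_beyond_z(subset)
print(cleaned)  # the -44 (z of about -3.1) is removed
```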

Comparison of Outlier Methods: Median ± 1.5 * Interquartile Range

27 ± 1.5 * 7 gives a range of 16.5 – 37.5

TABLE 1.1 Newcomb's measurements of the passage time of light

 28  22  36  26  28  28
 26  24  32  30  27  24
 33  21  36  32  31  25
 24  25  28  36  27  32
 34  30  25  26  26  25
-44  23  21  30  33  29
 27  29  28  22  26  27
 16  31  29  36  32  28
 40  19  37  23  32  29
 -2  24  25  27  24  16
 29  20  28  27  39  23

Comparison of Outlier Methods: ± 2 SDs (notice the 16s are not removed)



Comparison of Outlier Methods ± 3 SDs (Step 1, then run Z scores again)


Comparison of Outlier Methods: ± 3 SDs (step 2: after removing the -44, the -2 has a z score of -4.68, so it is also removed)


Conclusions?

• The median ± 1.5 * interquartile range appears to be too liberal.

• ± 2 SDs may also be too liberal and statisticians may not approve.

• An iterative process where you remove points above and below 3 SDs and then re-check the distribution may be the most conservative and acceptable method.

• Choosing 3.1, 3.2, or 3.3 as the z-score cutoff increases the protection against removing a score that is potentially valid and should be retained.



Data Transformations and Their Uses

Data Transformation                      Can Correct For
Log Transformation (log(X))              Positive skew, unequal variances
Square Root Transformation (sqrt(X))     Positive skew, unequal variances
Reciprocal Transformation (1/X)          Positive skew, unequal variances
Reverse Score Transformation             Negative skew

– All of the above can correct for negative skew, but you must first reverse the scores: just subtract each score from the highest score in the data set + 1.
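The reverse-score step can be sketched in Python; the scores are invented, and square root stands in for any of the transformations above.

```python
# Reverse-score transformation for negative skew: subtract each score
# from (highest score + 1), flipping the skew to positive, then apply
# an ordinary positive-skew transformation.
import numpy as np

scores = np.array([3.0, 8.0, 9.0, 9.5, 10.0])   # negatively skewed
reversed_scores = scores.max() + 1 - scores     # here, 11 - X
transformed = np.sqrt(reversed_scores)          # then e.g. sqrt(X)
print(reversed_scores)  # -> 8, 3, 2, 1.5, 1
```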