Post on 28-Mar-2015
Week 2 – PART III
POST-HOC TESTS
POST HOC TESTS
• When we get a significant F test result in an ANOVA for a main effect of a factor with more than two levels, this tells us we can reject H0
• i.e. the samples are not all drawn from populations with the same mean.
• We can use post hoc tests to tell us which groups differ from which others.
POST HOC TESTS
• There are a number of tests which can be used. SPSS has them in the ONEWAY and General Linear Model procedures
• SPSS does post hoc tests on repeated measures factors, within the Options menu
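Outside SPSS, the same overall F test can be sketched in a few lines of Python. This is a hedged illustration on made-up data (not the course dataset), showing the kind of result the ONEWAY procedure reports:

```python
# Illustrative one-way ANOVA on hypothetical data (not the course
# dataset): scipy's f_oneway returns the overall F and p that an
# SPSS ONEWAY run would report for the same groups.
from scipy import stats

group1 = [12, 14, 15, 13, 12]
group2 = [25, 22, 19, 18, 23]
group3 = [13, 14, 17, 14, 16]

f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```

A significant p here only licenses the conclusion that the means are not all equal; the post hoc tests below are what locate the differences.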
Sample data
Group:   1    2    3    4
        12   25   13   24
        14   22   14   25
        15   19   17   23
        13   18   14   16
        12   23   34   22
Post Hoc test button
Select desired test
Tests of Between-Subjects Effects
Dependent Variable: SCORE

Source            Type III Sum of Squares   df   Mean Square      F      Sig.
Corrected Model            372.150a          3      124.050      7.254   .003
Intercept                 6777.992           1     6777.992    396.374   .000
GROUP                      372.150           3      124.050      7.254   .003
Error                      273.600          16       17.100
Total                     7677.000          20
Corrected Total            645.750          19

a. R Squared = .576 (Adjusted R Squared = .497)

ANOVA Table
Multiple Comparisons
Dependent Variable: SCORE
LSD

(I) GROUP  (J) GROUP  Mean Difference (I-J)  Std. Error   Sig.   95% CI Lower   95% CI Upper
1          2                  -7.80*            2.77      .013      -13.68          -1.92
1          3                  -3.00             2.62      .268       -8.54           2.54
1          4                 -10.80*            2.50      .001      -16.11          -5.49
2          1                   7.80*            2.77      .013        1.92          13.68
2          3                   4.80             2.77      .103       -1.08          10.68
2          4                  -3.00             2.67      .278       -8.66           2.66
3          1                   3.00             2.62      .268       -2.54           8.54
3          2                  -4.80             2.77      .103      -10.68           1.08
3          4                  -7.80*            2.50      .007      -13.11          -2.49
4          1                  10.80*            2.50      .001        5.49          16.11
4          2                   3.00             2.67      .278       -2.66           8.66
4          3                   7.80*            2.50      .007        2.49          13.11

Based on observed means.
*. The mean difference is significant at the .05 level.
Post Hoc Tests
Choice of post-hoc test
• There are many different post hoc tests, making different assumptions about equality of variances, group sizes, etc.
• The simplest is the Bonferroni procedure
Bonferroni Test
• first decide which pairwise comparisons you will wish to test (with reasonable justification)
• get SPSS to calculate t-tests for each comparison
• set your significance criterion alpha to be .05 divided by the total number of tests made
Bonferroni test
• repeated measures factors are best handled this way
• ask SPSS to do related t-tests between all possible pairs of means
• only accept results that are significant below .05/k as being reliable (where k is the number of comparisons made)
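The Bonferroni procedure above can be sketched in plain Python. This is an illustration on hypothetical data (group names and values are made up), not SPSS output; for repeated measures factors you would swap `ttest_ind` for `ttest_rel`:

```python
# Sketch of the Bonferroni procedure: run all pairwise t-tests,
# then only accept comparisons with p < .05 / k, where k is the
# number of comparisons made. Data here are hypothetical.
from itertools import combinations
from scipy import stats

groups = {
    "g1": [12, 14, 15, 13, 12],
    "g2": [25, 22, 19, 18, 23],
    "g3": [13, 14, 17, 14, 16],
}

pairs = list(combinations(groups, 2))
k = len(pairs)                 # number of comparisons (3 here)
alpha = 0.05 / k               # Bonferroni-corrected criterion

for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} ({verdict})")
```

Note that dividing alpha by k makes each individual test more conservative, which is the price paid for keeping the family-wise error rate at .05.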
PLANNED COMPARISONS/ CONTRASTS
• It may happen that there are specific hypotheses which you plan to test in advance, beyond the general rejection of the overall null hypothesis
PLANNED COMPARISONS
• For example:
– a) you may wish to compare each of three patient groups with a control group
– b) you may have a specific hypothesis about some subgroup of your design
– c) you may predict that the means of the four groups of your design will be in a particular order
PLANNED COMPARISONS
• Each of these can be tested by specifying them beforehand - hence planned comparisons.
• The hypotheses should be orthogonal - that is independent of each other
PLANNED COMPARISONS
• To compute the comparisons, calculate a t-test, taking the difference in means and dividing by the standard error as estimated from MSwithin from the ANOVA table
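As a worked sketch of this calculation, the following uses values taken directly from the output above (MSwithin = 17.100 with 16 error df, and the group 1 vs group 2 comparison, n = 5 and n = 4, mean difference -7.80), so it reproduces the SE of 2.77 and Sig. of .013 shown in the LSD table:

```python
# Planned comparison t-test: the difference in group means divided
# by a standard error built from MSwithin (the ANOVA error term),
# evaluated on the error degrees of freedom.
import math
from scipy import stats

ms_within, df_error = 17.100, 16   # Error row of the ANOVA table
mean_diff, n1, n2 = -7.80, 5, 4    # group 1 vs group 2

se = math.sqrt(ms_within * (1 / n1 + 1 / n2))
t = mean_diff / se
p = 2 * stats.t.sf(abs(t), df_error)   # two-tailed p-value
print(f"SE = {se:.2f}, t = {t:.2f}, p = {p:.3f}")
```

Using MSwithin rather than just the two groups' own variances gives a better-estimated error term (and more df), because it pools variance information across all groups.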
TEST OF LINEAR TREND – planned contrast
• for more than 2 levels, we might predict a constantly increasing change across levels of a factor
• In this case we can try fitting a model to the data with the constraint that the means of each condition are in a particular rank order, and that they are equidistant apart.
TEST OF LINEAR TREND
• The Between Groups Sum of Squares is then partitioned into two components:
– the best fitting straight line model through the group means
– the deviation of the observed group means from this model
TEST OF LINEAR TREND
• The linear trend component will have one degree of freedom corresponding to the slope of the line.
• Deviation from linearity will have (k-2) df.
• Each of these components can be tested, using the Within SS, to see whether it is significant.
TEST OF LINEAR TREND
• If there is a significant linear trend, and non-significant deviation from linearity, then the linear model is a good one.
• For k>3, The same process can be done for a quadratic trend - a parabola is fit to the means. For example, you may be testing a hypothesis that as dosage level increases, the measure initially rises and then falls (or vice versa).
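The linear trend component can be computed by hand from contrast coefficients. The sketch below uses the three group means and group size reported in the table that follows (13.7352, 15.6401, 19.9698, n = 8 per group) together with the Within Groups mean square, and reproduces the "Linearity" row of the trend ANOVA table (SS = 155.480, F = 29.962):

```python
# Linear trend as a planned contrast: for 3 equally spaced levels
# the linear coefficients are (-1, 0, 1). The contrast sum of
# squares is (sum c_i * mean_i)^2 / sum(c_i^2 / n_i), with 1 df.
means = [13.7352, 15.6401, 19.9698]    # group means from the Report
n = 8                                   # observations per group
ms_within = 5.189                       # Within Groups mean square

coeffs = [-1, 0, 1]                     # linear contrast weights
contrast = sum(c * m for c, m in zip(coeffs, means))
ss_linear = contrast**2 / sum(c**2 / n for c in coeffs)
f_linear = ss_linear / ms_within        # F on (1, within) df
print(f"SS(linear) = {ss_linear:.3f}, F = {f_linear:.3f}")
```

A quadratic trend works the same way with coefficients (1, -2, 1); the deviation-from-linearity SS is simply the Between Groups SS minus the linear component.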
TEST OF LINEAR TREND
Report
SCORE

GROUP     Mean      N    Std. Deviation
1.00     13.7352    8       2.3244
2.00     15.6401    8       1.8961
3.00     19.9698    8       2.5631
Total    16.4484   24       3.4408
TEST OF LINEAR TREND
[Figure: Mean SCORE (0 to 22) plotted against GROUP, for Groups 1 to 3]
TEST OF LINEAR TREND
ANOVA Table

SCORE * GROUP
Source                                      Sum of Squares   df   Mean Square      F      Sig.
Between Groups   (Combined)                     163.319       2      81.660      15.736   .000
                 Linearity                      155.480       1     155.480      29.962   .000
                 Deviation from Linearity         7.840       1       7.840       1.511   .233
Within Groups                                   108.974      21       5.189
Total                                           272.293      23