Within Subject ANOVAs: Assumptions & Post Hoc Tests



Outline of Today's Discussion
1. Within Subject ANOVAs in SPSS
2. Within Subject ANOVAs: Assumptions & Post Hoc Tests
3. In-Class Exercise: Applying our knowledge to 200-level Research Courses

The Research Cycle (diagram): Real World > Research Representation > Research Results > Research Conclusions, linked by Abstraction, Methodology, Data Analysis, and Generalization.

Part 1: Within Subject ANOVAs in SPSS

Within Subject ANOVAs in SPSS
1. Fun Fact: It can be shown that there is a formal mathematical relationship between ANOVA and linear correlation!
2. Any ANOVA is considered, by mathematicians, a special case of a linear model. (We won't bother with the details here.)
3. Here are the SPSS steps for the within-subjects ANOVA: Analyze > General Linear Model > Repeated Measures.

Within Subject ANOVAs in SPSS
1. You will then be prompted by a box: Repeated Measures Define Factor(s).
2. For each variable in your ANOVA, you will be prompted for a Factor Name (of your choosing) and the number of levels.
3. You can click ADD after each variable is entered, then click DEFINE.

Within Subject ANOVAs in SPSS
1. Finally, you should slide the variables in the left box over to the Within-Subjects Variables box on the right.
2. Note: SPSS does NOT conduct post hoc tests on within-subjects variables. (Say it with me.)

Part 2: Within Subject ANOVAs: Assumptions & Post Hocs

Between-Subjects ANOVA: Equal Variance Assumption
The Sig. value here is > 0.05, so we retain the equal variance assumption. (The ANOVA is a fair test of this data set.)

Assumptions & Post Hocs
The repeated measures ANOVA is based on the Sphericity Assumption (say it with me).

Assumptions & Post Hocs
Sphericity Assumption: the correlations among scores in the various conditions are equal (or close enough!). The correlation between A & B is equal to the correlation between A & C, which is equal to the correlation between B & C, and so on. The sphericity assumption is a bit more complicated than that, but that will do!

Assumptions & Post Hocs
Great news! SPSS automatically conducts a test (Mauchly's Test of Sphericity) to indicate whether the sphericity assumption should be retained or rejected. Remember: SPSS did the same for us in the between-subjects case with Levene's statistic.

Assumptions & Post Hocs (Within-Subjects ANOVA)
Because this Sig. value is < 0.05, we reject something! Namely, the sphericity assumption.

Assumptions & Post Hocs (Within-Subjects ANOVA)
If this Sig. value had been > 0.05, we could use the F value listed in the row labeled "Sphericity Assumed."

Assumptions & Post Hocs (Within-Subjects ANOVA)
If we retain the sphericity assumption, use the df and F values in the top row(s).

Assumptions & Post Hocs (Within-Subjects ANOVA)
If we reject the sphericity assumption, use the Greenhouse-Geisser row(s).

Assumptions & Post Hocs (Within-Subjects ANOVA)
When sphericity is not assumed, the degrees of freedom are adjusted according to these epsilon values (coefficients).

Assumptions & Post Hocs (Within-Subjects ANOVA)
Could someone walk us through the relationship between the df & epsilon values here? (A sketch of this adjustment follows these slides.)

Assumptions & Post Hocs
Review question: What were the two reasons for using post hoc tests? Unfortunately, SPSS does not perform post hoc tests for within-subjects ANOVAs. :(

Assumptions & Post Hocs
To isolate which means differ from which in a within-subjects ANOVA, we can use lots of little repeated measures t-tests. Of course, this raises the problem of cumulative Type 1 error. What was cumulative Type 1 error, again?
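To make the earlier question concrete (how epsilon relates to the adjusted degrees of freedom), here is a minimal Python sketch. It is not part of the original slides; the values of k, n, and epsilon are hypothetical stand-ins for the numbers SPSS reports in the Greenhouse-Geisser row.

```python
# Minimal sketch: how an epsilon coefficient adjusts the repeated measures
# ANOVA degrees of freedom when the sphericity assumption is rejected.
# k, n, and epsilon below are hypothetical example values.

k = 3           # number of within-subject conditions (levels)
n = 20          # number of participants
epsilon = 0.75  # Greenhouse-Geisser epsilon as reported by SPSS (example value)

# Unadjusted (sphericity assumed) degrees of freedom
df_effect = k - 1             # 2
df_error = (k - 1) * (n - 1)  # 38

# Greenhouse-Geisser correction: multiply both df by epsilon
df_effect_adj = epsilon * df_effect  # 1.5
df_error_adj = epsilon * df_error    # 28.5

print(f"Sphericity assumed:  df = ({df_effect}, {df_error})")
print(f"Greenhouse-Geisser:  df = ({df_effect_adj:.2f}, {df_error_adj:.2f})")

# The F value itself is unchanged; only the df (and therefore the p value)
# are adjusted, making the test more conservative when sphericity fails.
```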
Assumptions & Post Hocs
The Bonferroni post hoc adjustment controls cumulative Type 1 error among the repeated measures t-tests by multiplying each observed alpha level (sig value) by the number of t-tests we've run. Example: If we run 2 t-tests (post hoc), we multiply each observed alpha level (sig value) by 2 and compare it to 0.05 (as always). The new Bonferroni-adjusted sig value for a particular t-test in SPSS would have to be lower than 0.05 for us to claim statistical significance. (A code sketch of this procedure appears at the end of the transcript.)

Assumptions & Post Hocs
Let's get some practice with this idea. Let's say we ran 5 t-tests (post hoc). If a particular t-test had a sig value of 0.015, would we retain or reject?

Assumptions & Post Hocs
Let's get some practice with this idea. Let's say we ran 4 t-tests (post hoc). If a particular t-test had a sig value of 0.015, would we retain or reject?

Assumptions & Post Hocs
Let's get some practice with this idea. Let's say we ran 3 t-tests (post hoc). If a particular t-test had a sig value of 0.015, would we retain or reject?

Assumptions & Post Hocs
Let's get some practice with this idea. Let's say we ran 2 t-tests (post hoc). If a particular t-test had a sig value of 0.04, would we retain or reject?

Assumptions & Post Hocs
Let's get some practice with this idea. Let's say we ran 2 t-tests (post hoc). If a particular t-test had a sig value of 0.015, would we retain or reject?

Part 3: In-Class Exercise: Applying Our Methods to 200-Level Research Courses
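As a companion to the in-class exercise, here is a minimal Python sketch of the post hoc procedure from Part 2: paired (repeated measures) t-tests between each pair of conditions, with each observed sig value multiplied by the number of tests (the Bonferroni adjustment). This is not part of the original slides; the data and condition names are invented, and scipy stands in for SPSS purely for illustration.

```python
# Minimal sketch: Bonferroni-adjusted repeated measures (paired) t-tests
# as a post hoc follow-up to a within-subjects ANOVA.
# The scores below are hypothetical example data (one list per condition,
# same participants in the same order in each list).
from itertools import combinations
from scipy import stats

conditions = {
    "A": [5, 7, 6, 8, 7, 9, 6, 7],
    "B": [6, 8, 7, 9, 8, 10, 7, 8],
    "C": [8, 10, 9, 11, 10, 12, 9, 10],
}

pairs = list(combinations(conditions, 2))  # A-B, A-C, B-C
n_tests = len(pairs)                       # 3 post hoc t-tests

for name1, name2 in pairs:
    t, p = stats.ttest_rel(conditions[name1], conditions[name2])
    # Bonferroni: multiply the observed sig value by the number of tests
    p_bonferroni = min(p * n_tests, 1.0)
    verdict = "reject H0 (significant)" if p_bonferroni < 0.05 else "retain H0"
    print(f"{name1} vs {name2}: t = {t:.2f}, p = {p:.4f}, "
          f"Bonferroni-adjusted p = {p_bonferroni:.4f} -> {verdict}")
```

The same arithmetic answers the practice questions above: multiply the observed sig value by the number of t-tests and check whether the result is still below 0.05.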