
Gauging the Impact of Student Characteristics on Faculty Course Evaluations

Laura Benson Marotta
University at Albany – SUNY

Student Characteristics

UAlbany course evaluations ask students about:

1. Level of study
2. Whether the course is required or an elective
3. Major/minor/other
4. GPA
5. Expected grade
6. Gender

Student Characteristics

40% of the University at Albany course evaluation instrument measures student characteristics

Other institutions go further:

- Average hours per week studying
- Average hours per week seeking outside help
- Level of interest in the subject before taking the course

Student Characteristics

“Surveys are not ends unto themselves. They are tools that are used to help make decisions. You wouldn’t buy tools for a workbench unless you knew what you would use them for and unless you knew for sure that you were going to use them.”

Linda A. Suskie (1992)

Problems

- Students may view questions about them as intrusive or irrelevant
- Gathering more data than we can analyze wastes good will and instructional time
- Contributes to survey fatigue

Companion Survey

Departments do not assess student satisfaction by bubble sheets alone

Open-ended departmental course surveys ask students to describe themselves all over again

Student Characteristics

The data warehouse has canned queries to report student demographics by course

Opportunities

It is easier to establish a clear link between student characteristics and faculty evaluation outcomes if the characteristics are measured on the survey, rather than estimated from registration records after the fact

Opportunities

Sampling bias is a fact of life

Compare demographic information between the survey respondents and the class population

Are any subpopulations underrepresented?
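As one illustration of that comparison, here is a minimal Python sketch: a chi-square goodness-of-fit test of the respondents' class-level mix against the enrolled population. The counts and the use of scipy are assumptions made for the example, not part of the UAlbany workflow described in this presentation.

```python
# Illustrative sketch: do survey respondents mirror the class population?
# All counts below are hypothetical; scipy is an assumed tool, not part of
# the SPSS-based workflow described in the slides.
from scipy.stats import chisquare

levels = ["Freshman", "Sophomore", "Junior", "Senior"]
population = [120, 90, 60, 30]    # enrolled students by level (hypothetical)
respondents = [55, 40, 20, 5]     # survey respondents by level (hypothetical)

# Expected respondent counts if every level responded at the same rate
total_resp, total_pop = sum(respondents), sum(population)
expected = [total_resp * p / total_pop for p in population]

chi2, p_value = chisquare(respondents, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

# Flag levels whose respondent count trails what their enrollment share predicts
for level, obs, exp in zip(levels, respondents, expected):
    if obs < exp:
        print(f"{level}: {obs} respondents vs. {exp:.1f} expected -> underrepresented")
```

In this made-up example, juniors and seniors respond at lower rates than their enrollment share predicts, which is exactly the kind of underrepresentation the slide is asking about.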

Student Characteristics

- Grade Inflation
- Course Assessment
- Non-response bias

Grade Inflation

“Evaluations depend solely on students, and grade inflation reflects faculty worried about the impact students may have on their careers.”

Virginia Myers Kelly (2005)

Grade Inflation on your campus

Does “Expected Grade” predict the response to “Instructor, Overall”?

Grade Inflation

Practice Data Set:

- Undergraduate courses
- Students getting a grade (not pass/fail)
- Limited to students who are passing

[Chart: "Undergraduate Survey Responses for Instructor, Overall" - counts of Poor, Fair, Average, Good, and Excellent ratings]

[Chart: "Undergraduate Expected Grade" - counts of students expecting a D, C, B, or A]

Undergraduate Students Not Failing Their Course

[Chart: "Instructor, Overall" ratings (Poor through Excellent) broken out by expected grade (D, C, B, A)]

Grade Inflation

Karl Pearson published a model in 1900 that described experiments with mutually exclusive, categorical outcomes

Row by Column test of independence

SPSS output using this model is still labeled “Pearson Chi-Square”

Nonparametric Test Assumptions for Chi-Square: “Nonparametric tests do not require assumptions about the shape of the underlying distribution…The expected frequencies for each category should be at least 1. No more than 20% of the categories should have expected frequencies of less than 5.”

SPSS Base User’s Guide 12.0, page 466; follows guidelines set by W. G. Cochran (1954).

Row by Column Test of Independence

Expected Grade x Instructor, Overall (counts)

                 1 Poor   2 Fair   3 Average   4 Good   5 Excellent    Total
2 D                  32       33          59       88            79      291
3 C                 196      343         603      998           849     2989
4 B                 371      622        1436     3872          4765    11066
5 A                 197      305         781     2773          5774     9830
Total               796     1303        2879     7731         11467    24176

Grade Inflation

Null hypothesis: “Instructor, Overall” is independent of “Expected Grade”

Alternative hypothesis: “Instructor, Overall” and “Expected Grade” are dependent

Grade Inflation- RESULTS

Chi-Square Tests

                       Value         df   Asymp. Sig. (2-sided)
Pearson Chi-Square     1488.890(a)   12   .000
N of Valid Cases       24176

(a) 0 cells (.0%) have expected count less than 5. The minimum expected count is 9.58.
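For readers who want to reproduce the SPSS output outside SPSS, here is a minimal Python sketch of the same row-by-column test run on the contingency table above. The use of scipy is an assumption of this example, not part of the original workflow; the counts are the ones reported in the slides.

```python
# Minimal sketch: Pearson (row-by-column) test of independence on the
# Expected Grade x "Instructor, Overall" table from the slides.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Expected Grade D, C, B, A; columns: Poor, Fair, Average, Good, Excellent
observed = np.array([
    [ 32,  33,   59,   88,   79],   # D
    [196, 343,  603,  998,  849],   # C
    [371, 622, 1436, 3872, 4765],   # B
    [197, 305,  781, 2773, 5774],   # A
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"Pearson chi-square = {chi2:.3f}, df = {dof}, p = {p:.3g}")
# The slides report chi-square = 1488.890 with df = 12 and p < .001

# Check the assumptions quoted from the SPSS manual: all expected counts >= 1,
# and no more than 20% of cells with expected counts below 5
print("minimum expected count:", round(expected.min(), 2))   # slides report 9.58
print("cells with expected count < 5:", int((expected < 5).sum()))
```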

Grade Inflation

INTERPRETATION

Faculty ratings on the Likert scale vary depending on the students’ expected grade

Instructors have a reason to expect lower student satisfaction if they assign lower grades.

Grade Inflation

Clear progression in students rating instructors as “Poor”:

- Expecting a D: 32/291 = 11%
- Expecting a C: 196/2989 = 6%
- Expecting a B: 371/11066 = 3%
- Expecting an A: 197/9830 = 2%

Grade Inflation

Instructors Rated as “Excellent”:

- Expecting a B: 4765/11066 = 43%
- Expecting an A: 5774/9830 = 59%
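The percentages above are just row shares of the contingency table; a short sketch (again using numpy as an assumed tool, not part of the original SPSS workflow) recomputes them for every rating at once.

```python
# Row-normalize the Expected Grade x "Instructor, Overall" counts so each
# expected-grade group sums to 100%; values come from the table in the slides.
import numpy as np

grades = ["D", "C", "B", "A"]
ratings = ["Poor", "Fair", "Average", "Good", "Excellent"]
observed = np.array([
    [ 32,  33,   59,   88,   79],
    [196, 343,  603,  998,  849],
    [371, 622, 1436, 3872, 4765],
    [197, 305,  781, 2773, 5774],
])

row_pct = observed / observed.sum(axis=1, keepdims=True) * 100
for grade, row in zip(grades, row_pct):
    pieces = ", ".join(f"{r} {p:.0f}%" for r, p in zip(ratings, row))
    print(f"Expecting {grade}: {pieces}")
# "Poor" falls from roughly 11% (D) to 2% (A); "Excellent" rises to roughly 59% (A)
```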

Policy Implications

Faculty evaluations should be considered in conjunction with grade distributions

If your institution wants to follow Harvard and fight grade inflation by setting a cap on “A” grades in undergraduate courses, expect lower student satisfaction ratings

“Expected Grade” should be included during a survey redesign

Course Assessment

A 1-credit, lower-division general education course in Information Science

Gap between satisfaction with instructors and satisfaction with the course

Course Assessment

The first step to solving the problem is to confirm that student satisfaction with the general education course in Information Science is different from the other lower-level undergraduate courses

[Chart: "Course, Overall" for lower-level undergrad courses - percent of responses by category (No Answer, Poor, Fair, Average, Good, Excellent); mode response of "Good"]

[Chart: "Course, Overall" for a lower-level Information Science course - percent of responses by category (No Answer, Poor, Fair, Average, Good, Excellent); too many "Poor" responses]

Student Characteristics

Course Assessment

Student Level   Students rating course "Poor"   Total students   % rating course "Poor"
Freshmen                    2                         25                  8.0%
Sophomore                   9                         85                 10.6%
Junior                      3                         62                  4.8%
Senior                     10                         68                 14.8%
Graduate                    1                          8                 12.5%

Course Assessment

Exploring these data did not solve the curriculum coordinator’s original problem, but it did help focus our questions in designing a follow-up study.

Course Assessment

The following semester the instructors handed out a two-question survey on the first day of class:

1. Why did you take this class?
2. Are you a freshman, sophomore, junior, senior, or other?

Course Assessment

3 Readers Scored Open-Ended Responses (Reliability)

Scored categories:
1. General Education
2. Need 1 Credit
3. Subject Matter
4. Other
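The slides do not say how agreement among the three readers was quantified. Purely as an illustration, here is a small Python sketch that computes Fleiss' kappa over made-up codings for the four categories above; both the choice of statistic and the data are assumptions, not the presenter's method.

```python
# Hypothetical illustration of inter-rater agreement for the 4 scored categories.
# Fleiss' kappa is one common choice for 3+ raters; the presentation does not
# specify which reliability statistic was actually used.
import numpy as np

N_CATEGORIES = 4  # General Education, Need 1 Credit, Subject Matter, Other

# codings[i][r] = category (0-3) reader r assigned to open-ended response i (made up)
codings = np.array([
    [0, 0, 0],
    [1, 1, 1],
    [1, 1, 3],
    [2, 2, 2],
    [3, 2, 3],
    [0, 0, 1],
])

def fleiss_kappa(codes, n_categories):
    """Fleiss' kappa for a subjects x raters matrix of category codes."""
    n_subjects, n_raters = codes.shape
    # counts[i, j] = number of raters who put response i into category j
    counts = np.zeros((n_subjects, n_categories))
    for j in range(n_categories):
        counts[:, j] = (codes == j).sum(axis=1)
    # Observed agreement per response, then averaged
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the overall category proportions
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

print(f"Fleiss' kappa = {fleiss_kappa(codings, N_CATEGORIES):.2f}")
```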

Course Evaluation

Results
1. ~1/3 of seniors were interested in the subject
2. ~4/5 of seniors needed 1 credit
3. Graduate students were self-selecting for remediation

Course Evaluation

Policy Implications

- Examine other opportunities for upperclassmen to earn 1 credit
- Make seniors jump through hoops to get into this course

Student Characteristics

Conclusion

Institutional researchers use student characteristics on faculty evaluations to:
- Track trends like grade inflation
- Conduct ad hoc analyses
- Estimate sample bias

Student Characteristics

Conclusion:

It is only wise to gather as much data as we will use.

Questions?