Australasian Journal of Economics Education Volume 15, Number 1, 2018, pp.8-28
TUTOR SELECTION PROCESSES AND THE
STUDENT LEARNING EXPERIENCE*
Carl Sherwood, Kam Ki Tang and Zi Yin
School of Economics, University of Queensland
Le Hoa Phan
Institute for Teaching and Learning Innovation, University of Queensland
ABSTRACT
The selection process for tutors in higher education is somewhat opaque and is
largely unexplored. We compare two methods of tutor selection. The first,
traditionally used in many universities, is based on the academic performance of
applicants. The second is based on both academic performance and a group
interview process that focuses on the applicants’ communication and interpersonal
skills. Data from teaching evaluations suggests that while most tutors selected
under the traditional process perform well, some must be regarded as
‘underperforming’. We provide teaching evaluation data showing that tutors
chosen under the new selection process improved by nearly 20 percent on average,
and that the proportion of ‘underperforming’ tutors fell from 19 to less than 7
percent. These findings suggest that the group interview process can complement
and improve on the traditional process of tutor selection based on academic
performance alone.
Keywords: Tutor selection, communication, collaboration, learning experience.
JEL classifications: A22.
* Correspondence: Kam Ki Tang, School of Economics, University of Queensland, QLD
4072, Australia. E-mail: [email protected]. Thanks to two anonymous referees for
comments and suggestions.
1. INTRODUCTION
The hiring process typically requires candidates to submit resumes,
research papers and reference letters, to present seminars, to meet with
incumbent staff and to undergo one or more interviews. These multiple
hurdles aim to ensure that the best person is hired. Yet as far as teaching
is concerned, full-time faculty members are not the only ones employed
for this task. Casual teaching staff, sessional academics and tutors (also
referred to as sessional teachers in Australia, adjunct faculty in North
America, or part-time teachers in the UK) are also hired for teaching
duties. Many tutors are hired by universities from their undergraduate
or postgraduate cohorts. Yet the processes used to hire these tutors are
not always clear, sometimes even to those academics working with
them (Gilbert 2017).
In U.S. and European institutions, teaching assistant appointments
are reportedly based on academic merit alone with no assessment of
teaching ability (Fumasoli, Goastellec & Kehm 2015; and Sutherland
2009). In Australia, the recommended minimum standards for the
recruitment of sessional staff within the Benchmarking Leadership and
Advancement of Standards for Sessional Teaching framework (BLASST
2013) state that universities should at least specify minimum
qualification requirements. But no other advice on the selection
process is provided within the BLASST guidelines.
Anecdotal evidence suggests that tutor selection in many Australian
universities is mostly based on the academic achievement of applicants
as indicated by such measures as their Grade Point Average (GPA), or
whether they are enrolled in a postgraduate program, especially a Ph.D.
program. The effectiveness of such practices depends on the
assumption that students with higher academic achievement are more
knowledgeable and will, therefore, make better tutors. Such practices
also have administrative benefits such as being inexpensive and easy to
implement, and being arguably objective and equitable. They do,
however, have limitations. Students, universities and society-at-large
expect casual tutors to contribute to quality learning environments
(Sutherland 2009) and the narrow focus on academic performance in
tutor selection may not necessarily meet this expectation if being smart
is not enough to be effective in the classroom. The selection process
may, therefore, need to identify applicants that, when provided with
minimal tutor training, will be able to teach, manage and facilitate
effective learning environments (Knott, Crane, Heslop & Glass 2015).
A good hiring process does not eliminate the need for training, but the
training will be more efficient if the trainees have the attributes for the
job in the first place.
This paper describes and evaluates an alternative tutor selection
process developed at the University of Queensland (denoted as UQ
hereafter) in Australia, to identify tutors with the right attributes for
effective teaching as well as the intellectual ability to work in a
university department. More specifically, we ask and answer the
following series of questions: Is academic performance a useful
criterion for selecting tutors? Is academic performance a sufficient
criterion for selecting tutors? What importance should be placed on
applicants’ communication and collaboration skills compared to their
academic performance in the selection process? Does the new selection
process developed at UQ result in higher tutor evaluations from
students, suggesting an enhanced learning experience? If the new
selection process does improve the student learning experience, is this
improvement significant enough to justify the additional costs that it
entails? These questions will be answered by comparing data from
student feedback surveys for tutors selected under the old and new
regimes at UQ. Before considering the approach taken to this analysis,
however, we review the relevant literature.
2. LITERATURE REVIEW
In Australia since around 1990, the growth of casual teaching staff in
universities has outpaced the growth of full-time academics as a result
of the “massification” of higher education (Bryson 2013; Kreber 2007;
and Matthews, Duck & Bartle 2017). From 1989 to 2007, the number of
Australian tertiary students increased from 441,000 to over one million. Casual
teaching staff have been deemed a solution by university administrators
to the problem of meeting significantly increased demand for teaching
services while simultaneously addressing issues relating to staff costs,
redundancy and ensuring that full-time teaching academics have
sufficient time also to conduct research (Bryson 2013; Coates, Dobson,
Goedegebuure & Meek 2009; Lama & Joullié 2015; Percy & Beaumont
2008; and Sutherland 2009). In his introduction to the RED Report
(Percy et al. 2008, p.6), the University of Wollongong Vice-Chancellor
suggested that “between 40 to 50 percent of teaching in Australian
higher education is currently done by sessional staff”. Likewise,
Harvey, Fraser & Bowes (2005, p.2) observed that up to 85 percent of
teaching staff in one department in their Australian university were
sessional, while Sutherland & Gilbert (2013, p.1) reported that 40
percent of teaching staff in New Zealand universities were casual.
Clearly, as has been the case for the last 30 years, casual teaching staff
continue to make a significant contribution to teaching in higher
education, and tutors are one of the largest segments of casual teaching
staff. This is particularly evident in large undergraduate courses.
For these courses, lecturers may take only a few or no tutorials,
leaving this task largely to tutors. Assuming a teaching model where a
one-hour tutorial accompanies a two-hour lecture, tutors occupy about
one third of the academic contact hours with university students.
Furthermore, tutorials are important in contributing toward enhancing
student learning outcomes (Baderin 2005). Milliken & Barnes (2002,
p.17) argue that tutorials can provide “an effective arena for teaching
and learning through immediate, interpersonal dynamic exchange”.
Tutors are, therefore, a major contributor to the overall quality of a
university’s education programs. However, how much thought have
universities given toward the selection of tutors?
With quality of teaching and learning being at the heart of the student
experience, and with tutors increasingly and actively contributing
towards this experience, surprisingly little has been written on exactly
how tutors are being recruited. There is a large body of literature on
graduate teaching assistants (see, for example, Park (2004) and Muzaka
(2009)), yet this literature rarely touches on the practice of their
recruitment. Park & Ramos (2002) suggested that in North America,
teaching assistant jobs are commonly offered to graduate students
because of their financial needs, while in the UK, graduate teaching
assistants are recruited largely based on their potential to undertake
research. Since casual appointments are typically on a short-term basis,
there is a general belief that the stringent recruitment processes
designed for continuing appointments are unwarranted (Lama & Joullié
2015). This very much reflects the nature of tutor appointments. For
instance, Kwiek & Antonowicz (2015, p. 44) reported that students
were informally invited to apply for tutor positions by “the coordinating
professor”. Such an informal approach often means selecting someone
locally available, such as PhD students (Bryson 2013). Walstad &
Becker (2010, p. 209) warned that this approach can result in appointing
international students having “limited English-language skills for
teaching”. This problem presents real risks for both teaching quality and
students’ learning outcomes (Lama & Joullié 2015). It is, therefore, not
surprising that the Australian Learning and Teaching Council (ALTC)
reported “quality assurance of sessional teaching in many institutions is
inadequate” (Percy & Beaumont 2008).
Having a less rigorous selection process could also put pressure on
the subsequent training programs for recruits. Regarding the training of
casual tutors, the overall picture is mixed. Percy & Beaumont (2008, p.
11) found the “support of sessional teachers is still largely ad hoc”, with
quality assurance measures for sessional teachers being inadequate and
potentially compromising institutional risk management strategies.
Harvey (2017) also stressed that, despite a continued increase in, and
an expected future reliance on, casual staff, there is no
systematic approach to their academic training. On the other hand,
Knott et al. (2015) noted that training programs for sessional staff
are increasing worldwide. One example is the institution-wide tutor
training program at the University of Queensland (Matthews, Duck &
Bartle 2017).
With the student experience being an imperative for many Australian
universities in the current educational environment (Australian
Government 2016), there has been a shift towards an active rather than
passive approach to teaching and learning. The literature provides
insights into what might be needed in active learning tutorials. For
example, good teaching requires abilities such as being able to stand in
the shoes of students (Ramsden 2003), showing enthusiasm, providing
timely, consistent and relevant feedback, and actively promoting
collaborative learning and the sharing of experiences with peers
(Duarte 2013). Accordingly, the role of tutors
has recently changed from being a transmitter of knowledge to one that
offers guidance and facilitates learning, particularly for problem-based
learning approaches (Azer 2005; and Dolmans, De Grave, Wolfhagen
& Van Der Vleuten 2005). Kane, Taylor, Tyler & Wooten (2011) found
that classroom behaviours are useful parameters to identify the practices
of effective teachers. This suggests that appointing tutors exhibiting
these behaviours would support the student learning experience through
harnessing tutors’ motivation and enthusiasm (Sutherland 2009). A
selection process based on the principles of effective teaching and
learning practices would therefore be an improvement on the informal
practices.
This paper aims to fill this void around tutor recruitment by
expanding on the preliminary analysis conducted by Sherwood &
Littleboy (2016).
3. THE TUTOR SELECTION AND EVALUATION PROCESS
In this section we describe the old tutor selection process at UQ, the
new process, and provide a preliminary qualitative evaluation of the
new approach.
(a) The Old Process
Prior to 2013, tutor applications at UQ were initially filtered using
applicants’ overall GPA scores for their undergraduate studies. The
grading scale for courses at UQ ranges from 1 to 7, with 4 being a
passing grade, a grade of 6 designated as a “Distinction” grade, and a
grade of 7 designated as a “High Distinction” grade. Students with a
GPA of between 6 and 7 would automatically be shortlisted for an
interview. Doctoral and fourth year Honours students were also
automatically shortlisted because of the demanding academic entry
requirements for these programs. This process typically identified about
half the number of tutors required. A second stage was then used to look
for tutors for specific economics courses, especially large, first year
courses. Applicants who had a grade of 6 or 7 in these courses were
then also shortlisted for interview. Using this two-stage approach,
typically 50 to 60 applicants were selected for interview each year.
During a 15-minute interview, applicants were then asked standard
questions such as “Why do you want to be a tutor?”. Almost every
applicant passed this benign interview process and was offered a
position. In effect, the selection had been determined largely by
applicants’ overall GPAs or grades in specific courses.
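
To make the two-stage filter concrete, here is a minimal, hypothetical sketch in Python. The thresholds mirror the description above, but the data, class and course codes are illustrative assumptions, not the School's actual records.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    overall_gpa: float        # UQ 7-point scale
    course_grades: dict       # e.g. {"ECON1010": 6}
    is_phd_or_honours: bool   # doctoral or fourth year Honours enrolment

def shortlist(applicants, target_courses):
    """Pre-2013 rule: stage 1 shortlists on an overall GPA of 6+ or
    PhD/Honours enrolment; stage 2 adds applicants with a grade of
    6 or 7 in a targeted (typically large, first year) course."""
    stage1 = [a for a in applicants
              if a.overall_gpa >= 6.0 or a.is_phd_or_honours]
    stage2 = [a for a in applicants
              if a not in stage1
              and any(a.course_grades.get(c, 0) >= 6 for c in target_courses)]
    return stage1 + stage2

pool = [
    Applicant("A", 6.4, {"ECON1010": 7}, False),  # stage 1: GPA >= 6
    Applicant("B", 5.6, {"ECON1010": 6}, False),  # stage 2: course grade
    Applicant("C", 5.2, {"ECON1010": 5}, False),  # not shortlisted
]
print([a.name for a in shortlist(pool, ["ECON1010"])])  # ['A', 'B']
```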
(b) The New Regime
Since 2013, a new tutor selection approach has been implemented. The
new screening process reduced the overall GPA and course-specific GPA
thresholds from 6 to 5.5, increasing the pool of potential applicants.
Since 2013, around 120 students have applied each year and up to
90 of these have been shortlisted for interview. To test the ability of applicants
to deliver teaching which the UQ Economics Department regards as
effective and which is regularly evaluated using surveys completed by
students, a 20-minute group interview was designed.
Each group interview is conducted in the following way. Three
applicants are randomly assigned to a group, but they do not know who
will be in their group until the interview. At the interview, applicants
are required to collaboratively work together to create a tutorial
question in 10 minutes. Their tutorial question must be based on a
newspaper article assigned to them on the spot. As many as nine articles
are used to reduce the chances of applicants in later interviews knowing
what articles they will be given. Examples of article titles include:
“Hospital parking fees enough to make you sick” (Mickelburough
2012); and “Easter holidays to deliver fuel price hikes for motorists”
(Kelly 2012). In essence, the applicants need to apply economic
theories to dissect daily life events, to communicate their ideas to each
other, and to work together to complete the task (writing a tutorial
question). As such, this process aims to identify natural collaborators
and facilitators who can work individually and as part of a team. During
the interview, a chief academic interviewer and several observers
(typically two other academic staff and one administrative staff member) are
present. In the last 10 minutes of the interview, applicants individually
answer questions from the chief academic interviewer.
Each staff member scores each applicant against five criteria: (i)
appropriateness of their questions and answers; (ii) communication
skills; (iii) interpersonal skills; (iv) provides evidence of encouraging
student participation; and (v) potential for being a tutor. These criteria
are closely related to the aspects on which tutors are evaluated by their
prospective students. Each criterion is scored out of 5, so each observer
independently arrives at a total score out of 25 for each applicant.
The observers and chief interviewer then discuss all three applicants for
about five minutes, with the chief interviewer determining a final score.
Typically, a minimum score of 21 is required for an applicant to be
offered a position. The group setting in itself is not a competition
because it is possible for all applicants in the same group to obtain a
high score. Using this approach, 40 to 50 applicants (out of about 90)
are appointed as new tutors.
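
As a concrete illustration of the scoring rule just described, the sketch below totals one observer's marks. The criterion labels, marks and cutoff handling are hypothetical; in practice the chief interviewer determines the final score after discussion with the observers.

```python
CRITERIA = ["questions_and_answers", "communication", "interpersonal",
            "encourages_participation", "tutor_potential"]
OFFER_CUTOFF = 21  # typical minimum total (out of 25) for an offer

def total_score(marks: dict) -> int:
    """Sum one observer's five criterion marks (each out of 5)."""
    assert set(marks) == set(CRITERIA)
    assert all(0 <= m <= 5 for m in marks.values())
    return sum(marks.values())

# One observer's marks for one applicant (made-up numbers).
marks = {"questions_and_answers": 4, "communication": 5, "interpersonal": 4,
         "encourages_participation": 4, "tutor_potential": 5}
score = total_score(marks)
print(score, "offer" if score >= OFFER_CUTOFF else "no offer")  # 22 offer
```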
This new selection process has resulted in some applicants with high
(to very high) GPA scores failing to be appointed. This contrasts with
high probability that such applicants would have been appointed prior
to 2013. Further, some applicants with slightly lower GPA scores who
might not have been selected before 2013, are now being recruited. As
a case in point, fewer Ph.D. students have been appointed under the new
selection process compared to the old process.
(c) Preliminary Feedback on the New Regime
Each year, observers during the interviews have been invited to provide
written feedback on the process. Examples of this feedback include the
following:
The way it is structured provides for an equitable playing field for all
involved which is so important in selecting tutors.
Teaching and Learning Awards and Grants Officer, 2014
I thought by running these interviews three students at a time was a great
way to assess the strengths of the students in a way that could not have been
done by their grades alone or even a one-on-one interview.
Senior Lecturer, Business School, 2015
I found the format provided valuable insight into the candidates which is
difficult to achieve through the typical interview process of questions and
answers. I was able to assess the candidate’s skills across a number of areas
including time management, leadership, teamwork, and communication.
Human Resource Staff, 2015
Feedback from the tutors appointed under the new process indicates
that 70 percent view the process favourably. These tutors have indicated
that the process allowed them to adequately demonstrate their
knowledge, personality and abilities. The materials relating to the
interview and associated training processes have been shared with
colleagues at two other Australian universities. Their feedback notes
that:
We are following most of your interview strategy, with only a few changes.
University of New South Wales, 2016
I have put forward the proposal for our school to start with the initial tutor
training workshop and it has been very positively received. This could put
us on track to build towards a more comprehensive program such as what
you have established at UQ.
Royal Melbourne Institute of Technology, 2016
This qualitative feedback complements the quantitative analysis
presented below in that it reflects an assessment of the recruitment
process by tutors and staff while the quantitative analysis reflects
evaluation of the process by students. Furthermore, to the extent that
both the qualitative and quantitative evidence point to the same
conclusions, they reinforce and validate each other.
4. METHODOLOGY AND DATA
Performance of tutors selected under both the old and the new regimes
was measured using data from student feedback surveys each semester.
In each semester, we collect tutor evaluation results on a tutor-course
basis. Tutorial class sizes are typically capped at 25 students. When
evaluating tutors at the end of each semester, the average number of
student responses for a tutor is approximately 12 under both regimes.
From 2010 onwards, the evaluation form asked students to
rate tutors by expressing a level of agreement with the propositions
outlined in Table 1.
For each proposition, students were able to select a level of
agreement on a 5-point scale, where 1 indicated strong disagreement
and 5 indicated strong agreement. Thus, the higher the score, the better
the evaluation. The scores on these questions are designed to measure
students’ perceptions of a tutor’s performance. Besides these eight
questions, students also have the opportunity to provide written
comments.
Table 1: Tutor Evaluation Survey Propositions.
Question Number   Question Wording
Q1   The tutor was well prepared.
Q2   The tutor communicated clearly.
Q3   The tutor was approachable.
Q4   The tutor inspired me to learn.
Q5   The tutor encouraged student input.
Q6   The tutor treated students with respect.
Q7   The tutor gave helpful advice.
Q8   Overall, how would you rate this tutor?
We do not engage with the debate about the efficacy of student
evaluations, which is a subject in its own right (see, for example, Biggs
(2011) and Nulty (2008)). We simply make the assumption that student
evaluations of tutors are a valid and reliable source of data to evidence
teaching quality. The evaluation process has been tested on multiple
occasions at UQ to ensure that responses to evaluation propositions are
not too highly correlated with each other and are reliable in that they
have reproducible outcomes across class size, course levels, locations
and modes of teaching. The tutor survey from 2010 to 2014 was
administered via paper surveys. From 2015 onwards, this was switched
to an online format via single-use Quick Response (QR) codes.
However, the response propositions remained the same and tutors were
still required to hand out QR codes to students for the surveys to be
completed in class. Kordts-Freudinger & Geithner (2013) argue that the
evaluation situation (in-class versus after-class) has more of an impact
on survey results than evaluation mode (paper versus online). We find
no significant shifts in the data from 2014 to 2015 and, therefore, we
assume that the change in survey administration method did not impact
the results of the present study.
For evaluation purposes, our dataset includes only tutors who had
completed UQ courses as students in previous years. These included
current undergraduate students, coursework masters students, and
Ph.D. students. The majority were undergraduate and coursework
masters students, with only two out of 231 tutors evaluated being Ph.D.
students.
We only used evaluation data for each tutor’s first semester of
teaching. The School of Economics monitored tutors’ performances
during each semester and only reappointed those with good evaluations
in subsequent semesters. So, selection bias would likely be introduced
if incumbent tutors were included in the analysis. Data were
de-identified and ethics clearance was granted for this research.
It should also be stressed that we focused only on the impact of the
new tutor selection process on the student learning experience and not
on student academic performance because data to control for factors
affecting academic performance other than the selection process, such
as, for example, lecture and class attendance (Stanca 2006), were not
available.
Using data from these student surveys we regressed tutor performance
(as measured by student satisfaction on these surveys) against a number
of explanatory variables. Specifically, the dependent variable was the
logarithmic value of student responses to Questions 1 to 8 on the
questionnaire and the explanatory variables included: a dummy variable
(NEW) set equal to one if the tutor was selected under the new regime,
and zero if the tutor was selected under the old regime; the log value of
a tutor’s GPA score at the time of interview (ln(GPA)); and the log
value of a tutor’s interview score under the new selection regime
(ln(Score)); there were no interview scores under the old regime. In
addition, a set of control variables were included to differentiate tutors
teaching in first, second, or third year undergraduate courses, or in
master’s courses. These variables accounted for the possibility that tutor
evaluation results may vary across courses of different levels. The
analysis was conducted using cross-sectional ordinary least squares
(OLS) regression.
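
Putting this specification into symbols may help. A stylised version of the estimating equation (our notation; the original does not display it) for evaluation item q of tutor i is:

```latex
\ln(Q_{q,i}) = \beta_0 + \beta_1 \mathit{NEW}_i + \beta_2 \ln(\mathit{GPA}_i)
             + \beta_3 \ln(\mathit{Score}_i) + \boldsymbol{\gamma}' \mathbf{D}_i + \varepsilon_{q,i}
```

where D_i collects the course-level dummies (first year courses are the baseline) and subsets of the regressors enter depending on the specification, with ln(Score) defined only for new-regime tutors.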
We did not include tutor characteristics such as gender or program of
study. This was because if the selection process successfully identified
applicants with ‘good’ teaching characteristics (for example, females
because women are hypothetically better communicators), then
controlling for things such as gender would lead to an under-estimation
of the effects associated with the new selection process.
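
For concreteness, a minimal sketch of how such a regression can be estimated in Python follows; the file name and column names are assumptions for exposition, as the underlying dataset is not public.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tutor-level file: one row per tutor-course observation.
df = pd.read_csv("tutor_evaluations.csv")
df["ln_q8"] = np.log(df["q8"])    # log of the tutor's mean Q8 score
df["ln_gpa"] = np.log(df["gpa"])  # log of GPA at the time of interview

# NEW = 1 under the new regime; year2/year3/masters are course-level
# dummies, with first year courses as the omitted baseline group.
model = smf.ols("ln_q8 ~ NEW + ln_gpa + year2 + year3 + masters", data=df)
result = model.fit(cov_type="HC1")  # heteroskedasticity-robust SEs
print(result.summary())
```

The robust covariance option matches the robust standard errors reported in the tables below.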
Our dataset had 231 observations, spanning two years of the
old regime (2011 and 2012) and four years of the new regime (2013-
2016). With 97 percent of the tutors tutoring in only one course, the 231
observations can be considered to represent 231 tutors. Of these tutors,
64 percent were selected under the new regime and the remaining 36
percent under the old regime. These tutors were deployed amongst six
first year, six second year, and eight third year undergraduate courses,
and 13 master’s courses.
Table 2 provides summary statistics of the key variables used in the
regression. GPA scores for the 231 tutors were very high, indicated by
a mean value of 6.14 and a median value of 6.20 on a 7-point scale. Yet
not all tutors were academic ‘superstars’ as there were instances where
tutors had a GPA score between about 4.5 and 5 under the new regime.
Tutors selected under the two regimes had average GPA scores that
were almost identical at 6.16 for the old and 6.12 for the new. A t-test
of this difference indicated a lack of statistical significance with p =
0.57. Furthermore, tutors under the two regimes had almost the same
GPA score spread (as measured by the standard deviation to mean
ratio). Interview scores under the new regime had a rather narrow
distribution with high average scores (22.5 out of a maximum possible
25). This was, of course, to be expected since this score was used as a
selection criterion.
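
A sketch of the two-sample t-test behind the reported p-value of 0.57 is shown below; since the tutor-level records are not reproduced here, the GPA arrays are simulated from the summary statistics in Table 2 rather than drawn from the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gpa_old = rng.normal(6.16, 0.51, size=84)   # old regime: mean 6.16, sd 0.51
gpa_new = rng.normal(6.12, 0.56, size=147)  # new regime: mean 6.12, sd 0.56

# Welch's t-test (no equal-variance assumption) on mean GPAs.
t_stat, p_value = stats.ttest_ind(gpa_old, gpa_new, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```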
A key point of comparison between the old and new selection
regimes was indicated by the median Q8 score (a measure of overall
tutor effectiveness) which increased from 4.39 under the old regime to
4.54 under the new regime. In addition, the proportion of tutors with
scores below 4 on Q8, considered a sign of underperformance, fell
noticeably from 19.1 percent under the old regime to only 6.6 percent
under the new regime. Similarly, the median score on Q2 (a measure of
tutor communication skills) increased from 4.31 under the old regime
to 4.51 under the new regime. The proportion of underperforming tutors
based on Q2 also fell from 23.6 percent under the old regime to 11.2
percent under the new regime.

Table 2: Summary Statistics of Key Variables.
                              Mean   Median  Std. Dev.    Min     Max   Obs
GPA (Old Regime)              6.16     6.23      0.51    4.86    7.00    84
GPA (New Regime)              6.12     6.15      0.56    4.45    7.00   147
Interview Score (New Regime) 22.46    22.50      1.73   17.00   25.00   152
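
These underperformance shares are simple threshold proportions; the toy sketch below (with made-up numbers) shows the computation for Q8.

```python
import pandas as pd

# Hypothetical stand-in for the tutor-level evaluation data.
df = pd.DataFrame({
    "regime": ["old", "old", "new", "new", "new"],
    "q8":     [3.8,    4.4,   4.6,   4.5,   3.9],
})
underperforming = (
    df.assign(below=df["q8"] < 4)  # flag mean Q8 scores below the threshold
      .groupby("regime")["below"]
      .mean() * 100                # percent of tutors per regime
)
print(underperforming)  # new 33.3, old 50.0 for this toy sample
```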
5. RESULTS
Under the old selection process, tutors were chosen on the basis of their
GPA scores. We, therefore, first examined whether GPA was a good
predictor of tutor performance as measured by student feedback. To that
end, we regressed each of the eight evaluation outcomes on the student
survey against ln(GPA) for tutors appointed under the old regime. The
results are reported in Table 3. In all regressions reported in this and
other tables, course level dummies are included as controls. The dummy
for first year undergraduate courses is not included because tutors of
first year undergraduate courses constitute a baseline group. For
example, for Q1, after controlling for ln(GPA), the score for tutors of
second year undergraduate courses is 0.057 lower than that for the
baseline group on average, though the difference is not statistically
significant.
Table 3 indicates that ln(GPA) is only statistically significant for Q1
(“was well prepared”) at p < 0.05. Because both the evaluation score and
GPA enter in logs, the coefficient is an elasticity: a one percent
increase in the GPA of a tutor is on average associated with
approximately a 0.74 percent increase in the evaluation score on Q1 of
the student feedback survey. This effect is large in magnitude and the largest
amongst all eight evaluation items. The magnitudes of the coefficients
for all other regressions are in general quite sizable as well, with the
exception of that for Q2. But none of these coefficients are statistically
significant at the standard levels. This may be due to an insufficient
number of observations and associated large standard errors.
Regardless of this significance issue, the result for Q2 (“communicated
clearly”) is revealing in that its coefficient is the only one that has a
negative sign. These results suggest that while tutors with stronger
academic background are likely to be more competent with course
material (Q1), they are not necessarily better communicators (Q2).
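
To make the elasticity reading used above explicit (our restatement of the standard log-log interpretation):

```latex
\frac{\partial \ln(Q1)}{\partial \ln(\mathit{GPA})} = 0.743
\quad\Longrightarrow\quad
\%\Delta Q1 \approx 0.743 \times \%\Delta \mathit{GPA},
```

so a one percent rise in GPA maps into roughly a 0.74 percent rise in the Q1 score.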
Table 3: Regression Results for Old Tutor Selection Regime.
              Q1        Q2        Q3        Q4        Q5        Q6        Q7        Q8
ln(GPA)     0.743**   -0.058     0.560     0.438     0.490     0.339     0.301     0.648
           (0.367)   (0.685)   (0.445)   (0.621)   (0.513)   (0.245)   (0.632)   (0.594)
2nd year course
3rd year course
Masters course
Constant   3.188***  4.435***  3.446***  3.150***  3.471***  4.063***  3.766***  3.178***
R2          0.070     0.028     0.041     0.041     0.076     0.065     0.020     0.019
No. obs = 84
Note: Figures in parentheses are robust standard errors. All regressions include course level
dummies as controls. ***, **, * denote significance at the 1%, 5% and 10% levels respectively.
Although the coefficients of seven out of the eight evaluation
outcomes were not significant at the standard levels and one of them
was negative, it may be unwarranted to conclude that GPA has little
value in the tutor selection process. This is because our sample does not
include those applicants that failed the selection. Therefore, a more
cautious interpretation of the results is that, among those recruited as
tutors under the old regime, there is insufficient evidence that those with
higher GPAs perform better on most evaluation criteria.
Next, we turned to the key question of whether the new selection
process successfully identifies ‘better’ tutors in the sense that students
reported higher levels of satisfaction and more positive learning
experiences under these teachers compared to the old regime. To
answer this question, we considered observations from both the old and
Table 4: Regression Results for Old and New Regimes Combined.
              Q1        Q2        Q3        Q4        Q5        Q6        Q7        Q8
Upper panel:
New        0.132***  0.204***  0.158***  0.255***  0.186***  0.134***  0.215***  0.185***
(0.039) (0.061) (0.042) (0.061) (0.048) (0.026) (0.053) (0.054)
2nd year 0.070* 0.035 0.096** 0.105 -0.004 0.017 0.040 0.077
(0.041) (0.069) (0.042) (0.064) (0.052) (0.027) (0.056) (0.059)
3rd year -0.084 -0.100 -0.032 -0.016 -0.216** -0.079 -0.077 -0.008
(0.115) (0.136) (0.093) (0.117) (0.102) (0.070) (0.119) (0.125)
Masters 0.030 0.120* 0.101** 0.341*** 0.065 -0.005 0.152*** 0.118**
(0.048) (0.064) (0.048) (0.066) (0.055) (0.035) (0.056) (0.057)
Constant 4.484*** 4.237*** 4.439*** 3.878*** 4.304*** 4.644*** 4.275*** 4.305***
(0.038) (0.061) (0.044) (0.064) (0.048) (0.025) (0.054) (0.056)
R2          0.064     0.065     0.099     0.158     0.100     0.128     0.110     0.074
Lower panel:
New        0.136***  0.204***  0.159***  0.257***  0.187***  0.135***  0.217***  0.188***
(0.038) (0.061) (0.041) (0.061) (0.048) (0.026) (0.052) (0.053)
Ln(GPA) 0.499** 0.078 0.303 0.236 0.270 0.252** 0.228 0.493*
(0.207) (0.293) (0.195) (0.289) (0.223) (0.118) (0.264) (0.257)
2nd year 0.060 0.034 0.090** 0.101 -0.009 0.012 0.035 0.067
(0.041) (0.070) (0.043) (0.065) (0.052) (0.027) (0.056) (0.059)
3rd year -0.103 -0.103 -0.043 -0.025 -0.226** -0.088 -0.086 -0.026
(0.114) (0.137) (0.092) (0.116) (0.100) (0.069) (0.119) (0.124)
Masters 0.021 0.118* 0.096** 0.337*** 0.060 -0.009 0.148*** 0.110*
(0.047) (0.065) (0.048) (0.066) (0.055) (0.034) (0.057) (0.057)
Constant 3.585*** 4.095*** 3.893*** 3.453*** 3.818*** 4.190*** 3.862*** 3.416***
(0.374) (0.512) (0.352) (0.522) (0.407) (0.212) (0.469) (0.461)
R2          0.090     0.066     0.109     0.160     0.106     0.143     0.114     0.089
No. obs = 231
Note: Figures in parentheses are robust standard errors. All regressions include course
level dummies as controls. ***, **, * denote significance at the 1%, 5% and 10% levels
respectively.
new regimes, and the results are reported in Table 4. The upper panel
of Table 4 shows the results obtained from regressing each of the
evaluation outcomes against the dummy variable NEW, which
identifies whether a tutor was selected under the new regime. The
coefficient for the variable is positive and significant at p < 0.01 for all
eight questions. This implies that tutors selected under the new regime
performed better across all eight questions in the evaluation survey than
their counterparts under the old regime. The coefficient for Q2 suggests
that the communication skills of tutors selected under the new regime
were about 20.4 percent better. These results are in line with the design
of the new regime to better identify communicators and facilitators. An
unexpected result, however, was that the coefficient for Q4 (“inspired
me to learn”) was the largest amongst all eight questions at p < 0.01.
This is interesting because it is not an easy task to identify an inspiring
teacher within a brief 20-minute interview.

Table 5: Results for New Regime Conditioned on Interview Score.
Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8
Ln(Score) 0.815*** 1.650*** 0.606*** 0.921*** 0.460* 0.270 0.986*** 1.177***
(0.295) (0.357) (0.222) (0.334) (0.260) (0.164) (0.275) (0.291)
2nd year 0.143*** 0.189*** 0.099** 0.201*** 0.002 0.045 0.096* 0.141**
(0.049) (0.066) (0.045) (0.064) (0.058) (0.029) (0.050) (0.056)
3rd year 0.007 -0.002 0.016 0.092 -0.087 -0.044 0.027 0.091
(0.134) (0.146) (0.131) (0.157) (0.101) (0.072) (0.152) (0.137)
Masters 0.090 0.268*** 0.164*** 0.429*** 0.206*** 0.060 0.215*** 0.215***
(0.061) (0.079) (0.052) (0.071) (0.057) (0.041) (0.062) (0.068)
Constant 2.051** -0.755 2.697*** 1.232 3.031*** 3.921*** 1.395 0.795
(0.928) (1.123) (0.700) (1.043) (0.813) (0.516) (0.867) (0.913)
R2s 0.094 0.172 0.102 0.200 0.092 0.045 0.132 0.141
No. obs 152

Ln(Score)
(0.301) (0.362) (0.228) (0.340) (0.270) (0.170) (0.282) (0.294)
Ln(GPA) 0.434* 0.248 0.194 0.179 0.099 0.21 0.242 0.486*
(0.252) (0.294) (0.199) (0.312) (0.227) (0.128) (0.256) (0.266)
2nd year 0.131*** 0.167** 0.089* 0.184*** 0.003 0.040 0.079 0.120**
(0.050) (0.068) (0.047) (0.066) (0.061) (0.030) (0.051) (0.057)
3rd year -0.022 -0.015 0.004 0.085 -0.091 -0.058 0.012 0.059
(0.135) (0.150) (0.131) (0.158) (0.101) (0.070) (0.153) (0.139)
Masters 0.109* 0.297*** 0.192*** 0.459*** 0.217*** 0.065 0.236 0.232***
(0.061) (0.082) (0.052) (0.074) (0.060) (0.043) (0.064) (0.070)
Constant 0.986 -1.539 2.189** 0.602 2.781*** 3.407*** 0.671*** -0.461
(1.143) (1.299) (0.844) (1.251) (0.987) (0.618) (1.097) (1.122)
R2s 0.125 0.191 0.132 0.222 0.099 0.067 0.159 0.181
No.obs 147
Note: Figures in parentheses are robust standard errors. All regressions include course
level dummies as controls. ***, **, * denote significance at the 1%, 5% and 10% level
respectively.
Yet the results show that
under the new regime, which places emphasis on an applicant’s ability
to collaborate with others, more inspiring teachers appear to have been
identified. Quantitatively, the results for Q8 suggest that tutors selected
under the new regime were on average 18.5 percent ‘better’ than their
old regime counterparts on this characteristic.
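
A standard refinement is worth recording here: for a dummy regressor in a log-linear model, the coefficient only approximates the percentage effect. The exact effect implied for Q8 is

```latex
100\left(e^{\beta}-1\right) = 100\left(e^{0.185}-1\right) \approx 20.3\%,
```

slightly above the quoted 18.5 percent, so the figures reported here are, if anything, conservative.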
The lower panel of Table 4 reports the results of tests that included
ln(GPA) as an explanatory variable. Both the qualitative and
quantitative results regarding the NEW variable remained the same in
this specification of the regressions, suggesting that the effectiveness of
the new regime was robust. More importantly, the results indicate that
the new process is able to identify attributes of good tutors about which
GPA data is not informative.
Under the new regime, prospective tutors are given a score based on
the communication and collaboration skills they demonstrate during the
interview. This information allowed us to further test the effectiveness
of the new selection process using a subsample of tutors from the new
regime only. In particular, if the new process is working effectively,
tutors with a higher interview score should have higher evaluation
scores. The results from testing this hypothesis are reported in Table 5.
The upper panel of Table 5 reports results from regressing each of
the eight evaluation outcomes against ln(Score). Coefficients of this
variable for all but Q6 were significant at the standard levels, with six
out of eight significant at p < 0.01. The effect of ln(Score)
on performance against Q2 was particularly strong and large. In the
lower panel, we report results when we further controlled for ln(GPA).
These results are largely the same, with the effect on Q2 remaining
stronger than that on all other questions. Overall, the
results in Table 5 confirm that the design of the new selection process
has been very effective in identifying good tutors who receive very
positive evaluations from their students.
Lastly, we re-ran the estimations using GPA scores for economics
courses only, instead of overall GPA scores. All our previous
conclusions remained the same, so we do not report these results.
6. DISCUSSION AND CONCLUSION
The selection of tutors in a higher education setting has attracted little
research attention. Unlike the very formal, in depth approach taken to
appointing faculty members, the selection of tutors has tended to be
more informal, generally identifying tutors who are immediately and
locally available. This paper has investigated how the selection of tutors
might be done more rigorously. It has presented student evaluation data
of tutor performance under two tutor recruitment methods in the School
of Economics at UQ. The first method largely relied on filtering
applicants using only GPA scores, with the second method relaxing the
GPA requirement slightly and supplementing selection with a new
group interview process. The new group interview approach sought to
identify additional tutor attributes that could further enhance students’
learning experiences. By collecting and analysing data from the two
regimes, we aimed to answer several research questions listed in the
introduction.
Is academic performance a useful criterion for selecting tutors?
The answer to this question is a qualified ‘yes’. Results from Table 3
do not provide a definite answer to the question. However, the results
presented in Tables 4 and 5 indicate that GPA was still informative even
under the new group interview selection process. This suggests that the
new process could be further strengthened by placing more emphasis
on both the applicants’ GPA and communication skills. To some extent,
this is already happening since applicants with a low GPA are unlikely
to be shortlisted for interview.
Is academic performance a sufficient criterion for selecting tutors?
The answer to this question is a definite ‘no’ based on the results from
Tables 4 and 5.
What importance should be placed on a tutor’s communication and
collaboration skills compared to their academic performance in the
selection process?
The results from Tables 4 and 5 clearly demonstrate that
communication and collaboration skills are as important, if not more so,
than academic performance for selecting tutors.
Did the new group interview selection process result in higher tutor
evaluations from students, suggesting an enhanced learning
experience?
The answer to this question is a definite ‘yes’. The results from Tables
4 and 5 provide encouraging evidence that the new selection process
has been very successful. The aim of the new process was to identify
‘better’ tutors, in the sense that they improve the student learning
experience, as well as see a reduction in the number of underperforming
tutors, defined as those having a Q8 score below a threshold of 4 out of
5. Under the old scheme most tutors in the sample could have been
classified as “good” with a median score for Q8 of 4.39, almost 10
percent above the threshold. In comparison, under the new scheme,
tutors could be considered to have been “excellent” with a median score
for Q8 equal to 4.54.
If the new selection process does improve students’ learning
experience, is this improvement significant enough to justify the cost of
implementing it?
On the cost side, the new system requires about the same amount of
interview time as the old system, but more staff time, since several
staff attend each group interview. On the benefit side, an immediate
demonstrable reward
is an improvement in the student experience evidenced by increases in
the mean and median student evaluation scores. But there are also
implicit down-stream benefits, including: (1) reducing the need for
students to consult directly with the course instructor; (2) lessening the
chance of students failing their courses; (3) an extension of benefit (2)
in the form of a reduced need to organise supplementary exams or for
students to repeat courses; (4) decreasing the incidence of student
complaints, and thus, resources needed to handle these complaints; and
(5) lowering tutor training costs by selecting applicants that have
greater potential, and thus, require less training and support. Although
our analysis does not yield a dollar value for the benefits from the new
process (and the costs for that matter), our view from the experience of
operating it is that its combined benefits easily outweigh its additional
costs.
A limitation of the current study is that student evaluations only
indicate students’ learning experiences, not their learning outcomes.
Therefore, a natural extension of the study would be to examine
whether and how the new tutor selection process might impact students’
learning outcomes such as their overall scores or grades for particular
courses. This extension is feasible given that students’ academic
outcomes are readily available. Another possible extension is to invite
a panel of educational experts to observe the selection process and
gather their feedback through a systematic evaluation survey. A
qualitative analysis of this feedback would be an improvement on the
anecdotal evidence presented in Section 3.
To close the discussion, it is worth noting that the new process is not
discipline specific. Although our analysis is based on the data from an
economics department, the emphasis on communication and
collaboration skills is universal. This suggests that in selecting tutors
for any discipline, applicants should be evaluated not only on their
academic performance but also on their ability to work collaboratively
in a realistic student learning setting. A combination of strong
academic ability with communication and collaborative working skills
improves the chances of identifying high-performing tutor applicants,
thereby helping to further enhance students’ learning experiences and
learning outcomes. Likewise,
although the majority of tutors in our analysis were undergraduate-
student tutors, the lesson is not confined to their recruitment.
Communication and interpersonal skills in conducting tutorials, or in
teaching generally, are equally important for graduate-student tutors
and full-time tutors and, therefore, for their recruitment as well.
REFERENCES
Australian Government (2016) Driving Innovation, Fairness and Excellence
in Australian Higher Education (TRIM Ref No. ED16/000060), View at
https://docs.education.gov.au/documents/driving-innovation-fairness-
and-excellence-australian-education.
Azer, S.A. (2005) “Challenges Facing PBL Tutors: 12 Tips for Successful
Group Facilitation”, Medical Teacher, 27 (8), pp.676-681.
Baderin, M.A. (2005) “Towards Improving Students’ Attendance and
Quality of Undergraduate Tutorials: A Case Study on Law”, Teaching in
Higher Education, 10 (1), pp.99-116.
Benchmarking Leadership and Advancement of Standards for Sessional
Teaching (2013) The Sessional Staff Standards Framework, View at
http://www.blasst.edu.au/docs/BLASST_ framework_WEB.pdf.
Biggs, J.B. (2011) Teaching for Quality Learning at University: What the
Student Does, Maidenhead, UK: McGraw-Hill Education.
Bryson, C. (2013) “Supporting Sessional Teaching Staff in the UK – To
What Extent is There Real Progress?” Journal of University Teaching &
Learning Practice, 10 (3), pp.1-17.
Coates, H., Dobson, I. R., Goedegebuure, L., and Meek, L. (2009)
“Australia's Casual Approach to its Academic Teaching Workforce”,
People and Place, 17 (4), pp.47-54.
Dolmans, D.H., De Grave, W., Wolfhagen, I. H. and Van Der Vleuten, C. P.
(2005) “Problem-based Learning: Future Challenges for Educational
Practice and Research”, Medical Education, 39 (7), pp.732-741.
Duarte, F.P. (2013) “Conceptions of Good Teaching by Good Teachers:
Case Studies from an Australian University”, Journal of University
Teaching & Learning Practice, 10 (1), pp.1-15.
Fumasoli, T., Goastellec, G., and Kehm, B. M. (eds.) (2015) Academic Work
and Careers in Europe: Trends, Challenges, Perspectives, London, UK:
Springer International Publishing.
Gilbert, A. (2017) “Using Activity Theory to Inform Sessional Teacher
Development: What Lessons Can Be learned from Tutor Training
Models?”, International Journal for Academic Development, 22 (1),
pp.56-69.
Harvey, M. (2017) “Quality Learning and Teaching with Sessional Staff:
Systematising Good Practice for Academic Development”, International
Journal for Academic Development, 22 (1), pp.1-6.
Harvey, M., Fraser, S. and Bowes, J. (2005) “Quality Teaching and Sessional
Staff”, Paper presented at the 28th Annual Higher Education Research
and Development Society of Australasia Conference, 3-6 July, Sydney,
Australia.
Kane, T.J., Taylor, E.S., Tyler, J.H. and Wooten, A.J. (2011) “Identifying
Effective Classroom Practices Using Student Achievement Data”, Journal
of Human Resources, 46 (3), pp.587-613.
Kelly, A. (2012) “Easter Holidays to Deliver Fuel Price Hikes for Motorists”,
The Sunday Mail, April 1.
Kordts-Freudinger, R. and Geithner, E. (2013) “When Mode Does Not
Matter: Evaluation in Class versus out of Class”, Educational Research
and Evaluation, 19 (7), pp.605-614.
Knott, G., Crane, L., Heslop, I. and Glass, B.D. (2015) “Training and
Support of Sessional Staff to Improve Quality of Teaching and Learning
at Universities”, American Journal of Pharmaceutical Education, 79 (5),
pp.1-8.
Kreber, C. (2007) “What’s It Really All About? The Scholarship of Teaching
and Learning as an Authentic Practice”, International Journal for the
Scholarship of Teaching and Learning, 1 (1), p.3.
Kwiek, M. and Antonowicz, D. (2015) “The Changing Paths in Academic
Careers in European Universities: Minor Steps and Major Milestones”, in
Fumasoli, T., Goastellec, G. and Kehm, B.M. (eds.), Academic Work
and Careers in Europe: Trends, Challenges, Perspectives, London, UK:
Springer International Publishing, pp. 41-68.
Lama, T. and Joullié, J.E. (2015) “Casualization of Academics in the
Australian Higher Education: Is Teaching Quality at Risk?”, Research in
Higher Education, 28, pp.1-11.
Matthews, K. E., Duck, J. M. and Bartle, E. (2017) “Sustaining Institution-
wide Induction for Sessional Staff in a Research-intensive University: The
Strength of Shared Ownership”, International Journal for Academic
Development, 22 (1), pp.43-55.
Mickelburough, P. (2012) “Hospital Parking Fees Enough to Make You
Sick”, Herald Sun, March 17.
Milliken, J. and Barnes, L.P. (2002) “Teaching and Technology in Higher
Education: Student Perceptions and Personal Reflections”, Computers
and Education, 39 (3), pp.223-235.
Muzaka, V. (2009) “The Niche of Graduate Teaching Assistants (GTAs):
Perceptions and Reflections”, Teaching in Higher Education, 14 (1), pp.1-
12.
Nulty, D.D. (2008) “The Adequacy of Response Rates to Online and Paper
Surveys: What Can be Done?”, Assessment & Evaluation in Higher
Education, 33 (3), pp.301-314.
Park, C. (2004) “The Graduate Teaching Assistant (GTA): Lessons from
North American Experience”, Teaching in Higher Education, 9 (3),
pp.349-361.
Park, C. and Ramos, M. (2002) “The Donkey in the Department? Insights
into the Graduate Teaching Assistant (GTA) Experience in the UK”,
Journal of Graduate Education, 3 (2), pp.47-53.
Percy, A. and Beaumont, R. (2008) “The Casualisation of Teaching and the
Subject at Risk”, Studies in Continuing Education, 30 (2), pp.145-157.
Percy, A., Scoufis, M., Parry, S., Goody, A. and Hicks, M. (2008) The RED
Report, Recognition - Enhancement - Development: The Contribution of
Sessional Teachers to Higher Education, Sydney: Australian Learning
and Teaching Council.
Ramsden, P. (2003) Learning to Teach in Higher Education, 2nd edition,
New York, NY: Routledge Falmer.
Sherwood, C.W. and Littleboy, B. (2016) “Selecting Sessional Tutors: A
New and Effective Process”, Australasian Journal of Economics
Education, 13 (1), pp.30-48.
Stanca, L. (2006) “The Effects of Attendance on Academic Performance:
Panel Data Evidence for Introductory Microeconomics”, Journal of
Economic Education, 37 (3), pp.251-266.
Sutherland, K.A. (2009) “Nurturing Undergraduate Tutors’ Role in the
University Teaching Community”, Mentoring & Tutoring: Partnership in
Learning, 17 (2), pp.147-164.
Sutherland, K.A. and Gilbert, A. (2013) “Academic Aspirations amongst
Sessional Tutors in a New Zealand University”, Journal of University
Teaching & Learning Practice, 10 (3), pp.1-11.
Walstad, W.B. and Becker, W.E. (2010) “Preparing Graduate Students in
Economics for Teaching: Survey Findings and Recommendations”,
Journal of Economic Education, 41 (2), pp.202-210.