NUCCAT & SACWG project report
Page 2 of 54
To what extent do re-assessment,
compensation and trailing support student success?
The first report of the NUCCAT-SACWG Project on
the honours degree outcomes of students
progressing after initial failure at Level 4
The Northern Universities Consortium (NUCCAT) ‘provides a forum for higher education practitioners with an interest in the design, implementation and
regulation of credit-based curriculum and its implications for the student experience and progression, reflecting the changing dynamics of the sector’
(http://www.nuc.ac.uk/). The Student and Assessment Classification Working Group (SACWG), formed in
1994, is composed of academics and administrators who have a professional and personal interest in assessment.
Contents
Introduction
The NUCCAT & SACWG project
Quantitative findings
Regulatory variety:
o Re-assessment
o Compensation / condonement
o Trailing modules
Conclusions and questions
o Next steps?
References
Appendices:
o The call for contributions
o The datasets
Introduction
Recovery from failure:
‘A relatively invisible aspect of academic practice’1
1. For the past two decades SACWG2 has explored the impact of institutions’
academic regulations on the outcomes for students of different assessment practices. One aspect of those practices - the ways in which students who fail modules are managed - has been largely overlooked in
the published literature on assessment3. To help fill this lacuna, SACWG recently investigated the variety of regulations for progression from Level
4 to Level 5 across the UK Higher Education (HE) sector and the theoretical significance of modifying key elements of these regulations4.
2. While this research threw up impressionistic evidence that students who had to recover from module failures fared less well than students who passed all their Level 4 modules, there were no substantive data to support the perception. The current NUCCAT-SACWG project set out to provide such data.
3. The ‘invisibility’ of academic recovery is perhaps unsurprising: many such
practices take place outside the mainstream timetable of academic activity. Re-assessment of failed assessments, for example, often occurs
after the end of one academic year or before the start of the succeeding year. Confirmation of assessment results can be delegated to a sub-group of an Examination Board. External examiners may be only tangentially involved in the re-assessment process. Student records might hide recovery if original fail marks are overwritten with a passing score.
1 Stowell, Marie. 2016. “Assessing Re-assessment.” Leeds Beckett University, 12 July. Recovery comprises both retrieval of failure and progression facilitated through institutional academic regulations without re-assessment.
2 For example, Yorke, Mantz, Harvey Woolf, Marie Stowell et al. 2008. “Enigmatic Variations: Honours Degree Assessment Regulations in the UK.” Higher Education Quarterly 62 (3): 157-80; Stowell, Marie, Marie Falahee and Harvey Woolf. 2016. “Academic Standards and Regulatory Frameworks: Necessary Compromises?” Assessment & Evaluation in Higher Education 41 (4): 515-31.
3 Exceptions include Pell, Godfrey, Katherine Boursicot and Trudie Roberts. 2009. “The Trouble with Resits …” Assessment & Evaluation in Higher Education 34 (2): 243-51; Ricketts, Chris. 2010. “A New Look at Resits: Are They Simply a Second Chance?” Assessment & Evaluation in Higher Education 35 (4): 351-56; Proud, Steven. 2015. “Resits in Higher Education: Merely a Bar to Jump over, or Do They Give a Pedagogical ‘Leg Up’?” Assessment & Evaluation in Higher Education 40 (5): 681-97; Arnold, Ivo. 2016. “Resitting or Compensating a Failed Examination: Does It Affect Subsequent Results?” Assessment & Evaluation in Higher Education: 1-15. doi: 10.1080/02602938.2016.1233520.
4 Falahee, Marie, Marie Stowell, and Harvey Woolf. 2013. Crafting Assessment Regulations for First Year Students: Stringency, Academic Alignment and Equity (unpublished project report).
4. However, the significance of retrieval should not be underestimated.
Substantial numbers of students are subject to recovery mechanisms in order to progress from Level 4 to Level 5. Twenty-five per cent of the students in our total sample, which is not, in our view, untypical of the sector, failed at least one Level 4 module. Sixty-four per cent of the complaints made to the Office of the Independent Adjudicator for Higher
Education ‘are about issues that affect a student’s academic status’5. The direct, indirect and opportunity costs to institutions of implementing recovery processes are rarely if ever calculated but rough reckonings
indicate that considerable sums are expended annually.
5. The inclusion of student non-continuation metrics in the Teaching
Excellence Framework (TEF)6 has focussed attention on how institutions respond to the challenges of supporting students to succeed, especially in their crucial first year of higher education.
6. Both NUCCAT and SACWG comprise members whose professional interests and responsibilities include student assessment, progression and completion. All universities have institutional regulations that provide the
basis upon which students are assessed and against which their eligibility for progression or completion is evaluated to determine outcomes. It is
of equal importance to both researchers and practitioners to be able to appraise the effectiveness of the strategies employed within their own institutional academic infrastructures by means of comparison with both other institutions and sector benchmarks.
7. Both NUCCAT7 and SACWG have highlighted the regulatory variations
that exist in the sector. This raises the issue of the extent to which
students are treated equally and fairly across UK HE. This project is therefore not an abstract research topic, but a means of initiating a
debate about the extent to which universities have an obligation to create conditions in which the opportunities for the achievement of student
potential are maximised. Stowell8 argues that it should be a central tenet of any (re-)assessment policy that all students are treated equitably and fairly.
5 Office of the Independent Adjudicator for Higher Education (OIA), Annual Report 2015, p.14.
6 Department for Education. Teaching Excellence Framework: year two specification, September 2016, p.27; Teaching Excellence Framework: year two and beyond: Government technical consultation response, September 2016.
7 Atlay, Mark et al. 2012. A Survey of Higher Education Credit Practice in the United Kingdom 2012. United Kingdom Credit Forum: University of Derby.
8. This Report describes the project methodology, outlines the quantitative
findings and the variety of regulatory practices in the contributing
universities and offers some provisional conclusions. Anonymised data for each contributing institution are provided in the appendices.
8 Stowell, Marie. 2004. "Equity, Justice and Standards: Assessment Decision Making in
Higher Education." Assessment & Evaluation in Higher Education 29 (4): 495-510.
doi: 10.1080/02602930310001689055. For a complementary approach to Stowell,
see McArthur, Jan. 2016. "Assessment for Social Justice: The Role of Assessment in
Achieving Social Justice." Assessment & Evaluation in Higher Education 41 (7): 967-
81. doi: 10.1080/02602938.2015.1053429.
The NUCCAT & SACWG Project
9. The project compares the award outcomes of students on three-year honours courses who were initially unsuccessful but who recovered from failure at Level 4 with those of students who passed first time. We have developed four progression categories (categories b-d encompass recovery from failure):
a) First timers: Students who passed all Level 4 modules at the first
attempt
b) Re-assessed: Students who passed all Level 4 modules at a subsequent attempt following initial failure at Level 4
c) Compensated: Students whose progression from Level 4 to Level 5 was not contingent on re-assessment following initial failure at
Level 49
d) Trailing: Students whose progression from Level 4 to Level 5
comprises a further attempt at assessment during study at Level 5 or 6 following initial failure at Level 4
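The four categories above amount to a simple decision rule. As a minimal sketch, the Python below assumes hypothetical record fields (`failed_any_l4`, `carries_failed_credit`, `progressed_without_reassessment`) that an institutional data extract might supply; the field names, and the order in which the tests are applied, are our own illustration rather than any contributing university's schema.

```python
def progression_category(student: dict) -> str:
    """Classify a student record into one of the project's four
    progression categories (field names are illustrative only)."""
    if not student["failed_any_l4"]:
        return "first_timer"    # (a) passed all Level 4 modules at the first attempt
    if student["carries_failed_credit"]:
        return "trailing"       # (d) re-attempts assessment during Level 5/6 study
    if student["progressed_without_reassessment"]:
        return "compensated"    # (c) progression not contingent on re-assessment
    return "re_assessed"        # (b) passed at a subsequent attempt before progressing

# Illustrative record: a student who failed a module but progressed
# without being required to redeem it
example = {"failed_any_l4": True,
           "carries_failed_credit": False,
           "progressed_without_reassessment": True}
print(progression_category(example))  # compensated
```

Note that, as footnote 9 observes, some compensated students may also have been re-assessed, so a real classification would need an explicit tie-breaking convention.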
10.These categories were developed mindful of the range of terms used by
universities to refer to the assessment and progression of students. For every ‘re-assessment’ there may be a ‘re-sit’ or a ‘re-take’ or a ‘referral’.
These terms may be used interchangeably, or to mean different things (e.g., exams may be ‘re-sat’ whilst courseworks are ‘re-submitted’, and a ‘re-take’ may imply re-attendance rather than just re-assessment). In order to avoid this mire, we included in the ‘re-assessed’ category any and all students who had to demonstrate success in a module (or modules) following initial failure in that module in order to progress from Level 4 to Level 5. That is, effort was required on the part of the student before their progression to Level 5.
11.Having navigated that peril, we were keen to escape falling into the ‘compensation versus condonement’ void. This is territory well-scoped by
both NUCCAT (via participation in the UK Credit Forum) and by SACWG. Lessons learned from previous research have proved valuable in this activity and have led to the following terminology being adopted in
relation to respective methods of facilitating the progression of students:
‘condonement’ is forgiveness of failure, requiring latitude in interpreting credit requirements for awards and
‘compensation’ is a deliberative balancing process, determining whether elements of performance elsewhere can offset elements of
failure within fixed award credit requirements.
9 Some students who progressed by compensation may also have been re-assessed.
12.However, for the purpose of this research project, the terms ‘compensation’ and ‘condonement’ are interchangeable, for want of
consistent definition. If ‘condoned’ students have been ‘forgiven’ their failure and are not required to redeem those credits, then these students are counted as ‘compensated’, alongside students for whom credit was
awarded by compensation. ‘Re-assessment’ covers students who undertook additional assessment prior to progression, unlike ‘trailing’
students, whose failure has not been forgiven and who must subsequently redeem their credit shortfall. ‘Compensated’ students have been given a ‘helping-hand’, in accordance with institutional academic
regulations, in order to facilitate their progression.
13.In setting up the project it was recognised that timely completion was
only one of several student success criteria. However, given the cost of
additional study time for students, it is an important criterion.
14.The call for data (see appendix 1) went to Directors of NUCCAT, members
of SACWG with an institutional affiliation and a further university that, whilst affiliated to neither NUCCAT nor SACWG, had expressed an interest in participating. A key principle of the project is anonymity. No student
or institution can be identified from the published data. Only one of the two members of the project team is aware of which data set is associated
with each institution.
15.In seeking to make the project manageable for institutional respondents, all that was asked for were the honours outcomes (First, Upper Second,
Lower Second, Third, No Award) for students who progressed from Level 4 in the four progression categories – Census Point A - and completed their honours degree 18 months later – Census Point B.
16.In addition to the quantitative information, institutions were requested to submit the relevant sections of their academic regulations to help illuminate the progression and degree outcomes (see the ‘Regulatory
Variety’ section below).
17.We were very aware that there were many items, both quantitative and
qualitative, that we could have sought and which could nuance the
project’s findings. These include:
the extent (scale/volume/nature) of the initial failure
outcomes beyond Census Point B
a breakdown by subject
demographic information such as gender, age, ethnicity
whether students were university-based or studying with a collaborative partner
the institution’s assessment, learning and teaching strategy
institutional student support systems
However, we decided that requesting this type of material would be likely to render the project unviable within the constraints of time and resource.
The quantitative findings
18.Nine universities submitted results for a total of nearly 20,000 students (N=19,828). Over three-quarters (13,332) of the population are students who passed all Level 4 modules at the first attempt. Almost half of the remainder of the population (2,048) passed all of their modules following re-assessment. Students progressing to Level 5 without passing all Level 4 modules were almost three times more likely to be exempted from re-assessment (1,534) through mechanisms such as compensation than to be expected to undertake re-assessment subsequently (577 ‘trailing’). Figure a refers.
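The proportions quoted above can be reproduced from the four category counts alone. A small check, assuming (as the phrasing suggests) that the relevant denominator is the set of students falling into the four progression categories (whose counts sum to 17,491) rather than the full N = 19,828:

```python
counts = {"first_timers": 13332, "re_assessed": 2048,
          "compensated": 1534, "trailing": 577}

population = sum(counts.values())                     # 17,491 categorised students
first_share = counts["first_timers"] / population     # "over three-quarters"
remainder = population - counts["first_timers"]       # 4,159 initially unsuccessful
reassessed_share = counts["re_assessed"] / remainder  # "almost half of the remainder"
comp_to_trail = counts["compensated"] / counts["trailing"]  # "almost three times"

print(f"{first_share:.1%}  {reassessed_share:.1%}  {comp_to_trail:.2f}x")
# 76.2%  49.2%  2.66x
```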
19.The dataset reported the award outcomes of the students, categorised
into class bands, at Census Point B. Of the five reporting categories at Census Point B, four are concerned with degree class (qualitative) and one (‘no honours’) provides a (quantitative) measure of timeliness of
completion. As there will be a range of students under this latter heading (some exiting previously with or without a lower award and some
continuing, with deferred assessments who may or may not subsequently
achieve an honours degree), no (value) judgements are made about why a student may not have attained an honours award at Census Point
B. Figure b refers.
20.The headlines from the data analysis are:
First timers do better than any other category (higher ‘good honours’, lower ‘no honours’) and are significantly more likely to complete in time.
There is little difference in outcome between re-assessed and
compensated students. Re-assessed students are slightly more likely to graduate with a ‘good degree’ than compensated students, but are also slightly more likely not to graduate ‘in time’ with honours.
Over half of the students trailing credit into Level 5 fail to complete
with honours ‘in time’. Only 1 in 5 ‘trailing’ students complete ‘in time’ with ‘good honours’.
21.Beyond the headlines lies a complex range of findings and issues that
demand further consideration. It may be of little surprise that the first timers achieve a better rate of completions of ‘good honours in time’ than the other categories of student. Such students have already
demonstrated consistent performance that meets the required pass
standard for modules at Level 4 at the first attempt, so have a secure basis for further academic achievement at Levels 5 and 6. The Higher Education Statistics Agency reported that, of the students who obtained a classified first degree in 2014/15, 71.5% were awarded either a first- or upper second-class degree10 (Figure c refers). 68% of students within the project dataset completing with honours ‘in time’ were awarded either a first- or upper second-class degree, whereas 74% of the first timers within the project dataset who completed with honours ‘in time’ did so with a first- or upper second-class degree.
22.Spare though it may be, the published literature suggests that there may
be advantages to a student in engaging with re-assessment. SACWG’s earlier work has highlighted the basic principles of natural justice that
afford students the opportunity to make good an initial failure11 and explored further issues around standards and student learning arising from re-assessment12. If it is true that re-assessment reinforces learning13, allows students to achieve mastery of subject matter and underpins pedagogic development at higher levels, then it follows that our ‘re-assessed’ students should do better than our ‘compensated’ students, who are not required to redeem initial failure as a condition of progression.
10 HESA. 2016. Introduction - Students 2014/15.
11 Stowell, Marie, “Equity”.
12 Rust, C. 2010. Purposes and consequences of re-assessment. http://web.anglia.ac.uk/anet/faculties/alss/public/purposes_and_consequences_of_reassessment.pdf
23.The data suggest otherwise. There is very little difference in outcome (at Census Point B) between ‘re-assessed’ and ‘compensated’ students. Both of these categories contain students progressing to Level 5 whose initial performance at Level 4 was deficient in some regard: the ‘re-assessed’ students completed additional work which resulted in the release of credit (and so ‘proved their mastery of subject’) whilst the ‘compensated’ students did not. In terms of attainment of ‘good honours’, re-assessed students are slightly more likely (31% to 27%) to achieve a first- or upper second-class honours degree than compensated students. However, re-assessed students are also (marginally) less likely to complete with honours ‘in time’: 30.9% of re-assessed students did not complete ‘in time’, against 30.6% of students permitted to progress without redeeming initial failure.
13 It should be noted that recent research has begun to challenge the notion that re-assessment is inherently ‘good for students’: see Proud, “Resits in Higher Education”, and a recent study from the University of Munich: Ostermaier, Andreas, Philipp Beltz and Susanne Link. 2013. Do University Policies Matter? Effects of Course Policies on Performance. Dusseldorf: Beiträge zur Jahrestagung des Vereins für Socialpolitik.
Regulatory variety
25.In addition to submitting data, each participating university was invited to
provide extracts from their institutional academic regulations pertaining to:
Passing a module
Re-assessment
Compensation/condonement
Trailing modules
Honours classification
26.The range of regulations presented in this project reflects the regulatory variety NUCCAT and SACWG have encountered in their previous work. In its 2013 report14 SACWG noted that:
Our starting point is that academic standards, as framed by institutional assessment regulations, are socially constructed …in that
they are the outcome of complex and political processes of negotiation, reflecting institutional histories and priorities,
assumptions about the nature of learning and assessment, as well as adherence to the principles of validity, reliability and equity.
Institutional assessment regulations are crafted and subject to continuous review and amendment. Changes will often have a direct impact on the number and proportion of students who are deemed
successful and able to progress to the next year of study. Crafting assessment regulations involves taking account of many different
interests, whilst ensuring that the outcomes are consistently fair. For those involved in developing regulatory frameworks the goal is to attain an optimal balance between often competing principles and
interests. An important dimension of this is the extent to which individual Higher Education Institutions (HEIs) permit regulatory
matters to be devolved – that is determined at the departmental or course level, and the scope for an examination board to exercise discretion with regard to outcomes for individual students. This means
that pass and progression rates are not a simple reflection of student academic ability, and that a student with exactly the same set of
assessment outcomes or grades may well have quite different opportunities for progression in different HEIs, or in some cases between different departments within the same HEI.
14 Falahee, Marie, Marie Stowell, and Harvey Woolf, Crafting Assessment Regulations,
p.3.
27.In the following sections we review and discuss the regulations governing
Re-assessment, Compensation / condonement and Trailing modules.
Re-assessment
28.Careful consideration should be given to the observation of Ricketts15 about the purposes of re-assessment. Re-assessments are a response to the consequences of failure and it follows that the requirement for re-
assessment is determined by:
the performance of the student at the initial assessment attempt and the judgement of markers and
the defined consequences of failure as enshrined within academic
regulation.
29.All of the universities contributing to the research project permit re-assessment in a module after failure, but no two sets of academic regulations are the same. The number of re-assessment attempts differs (e.g., one in University C, two in University A, three in University B), as does the number of modules that may be re-assessed (e.g., unlimited in University E, maximum 60 credits in University D) and a student’s eligibility for re-assessment (e.g., certain grades must be achieved at the first attempt in University F, whilst there is no ‘qualifying standard’ in University G). Differences in the terminology employed by these universities are also apparent (‘re-assessment’, ‘re-sit’, ‘re-take’, ‘referral’). So we can conclude from this project dataset that whilst re-assessment may come in many different flavours, it is always on the menu.
30.Whilst there is little tangible difference in the rates of completion in time between re-assessed and compensated students (we would need a 1,000-point scale to show how much more likely compensated students are to complete with an award in time than re-assessed students), there is a discernible difference in the distribution of classifications between re-assessed students and compensated students (Figure d refers). Concentrating solely upon the proportion of the population that achieved an honours outcome ‘in time’, we note a greater proportion of re-assessed and trailing students attaining ‘good honours’ than compensated students.
31.These findings prompt a number of questions about re-assessment and
the claims that it is ‘good for students’. In part this arises from the many varieties of re-assessment that feature in different universities. It is
difficult to test a hypothesis if the variables are so varied. Yet for all that academic regulations are specific to their context and situated in a given institutional culture, there are some ‘universal criticisms’ of re-
assessment that may apply. If a re-assessment is not subject to the
15 Ricketts, “A New Look at Resits”.
same rigour as the initial assessment (e.g., different standards of moderation), then might it be easier for a student to pass at the second
attempt? Are markers more ‘merciful’ at re-assessment, knowing that their decision may have a significant impact upon a student’s status
(unlike at the initial assessment, where failure can be redeemed)? Is there tacit pressure to ‘pass’ students who have attempted re-assessment, even if the quality of their re-attempt was questionable? At current prices, every 111 students lost to the university cost £1,000,000 in fee income in the following year.
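The £1,000,000 figure appears to follow from simple fee arithmetic. A quick check, assuming the £9,000 standard annual undergraduate tuition fee of the period (the report does not state the fee it used):

```python
annual_fee_gbp = 9_000   # assumed standard undergraduate tuition fee at the time
students_lost = 111      # students not retained into the following year

forgone_income = students_lost * annual_fee_gbp
print(f"£{forgone_income:,}")  # £999,000, i.e. roughly £1,000,000
```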
32.As ‘re-assessed’ students have already failed to demonstrate the consistent performance that meets the required pass standard for modules at Level 4 at the first attempt, they may face further difficulties at Levels 5 and 6. Moreover, the practice of most universities (though not in every case in University E) is for re-assessed modules or re-assessed components to be capped at the threshold pass, which may present a ‘drag factor’ on classification outcomes. This may explain the difference at Census Point B between ‘first timers’ and ‘re-assessed’ students, but not the similarity between ‘re-assessed’ and ‘compensated’ students, as re-assessed students should be doing better as a consequence of their extra effort and subsequent attainment. However, if re-assessment is neither ‘good for students’ nor robust, then might this explain why the outcomes for ‘re-assessed’ students are broadly comparable with those for ‘compensated’ students?
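The ‘drag factor’ of capping is easy to illustrate numerically. As a hypothetical example (the 40% cap and the six-module profile below are our own illustration; capping rules and thresholds vary by institution), a single capped module depresses a credit-weighted Level average even when all other performance is identical:

```python
def credit_weighted_mean(profile):
    """profile: list of (mark, credits) pairs for a student's modules."""
    total_credits = sum(credits for _, credits in profile)
    return sum(mark * credits for mark, credits in profile) / total_credits

# Six 20-credit modules: one passed at re-assessment and capped at the
# 40% threshold pass, the other five marked at 62
capped   = [(40, 20)] + [(62, 20)] * 5
uncapped = [(62, 20)] * 6

print(credit_weighted_mean(capped), credit_weighted_mean(uncapped))
# ~58.3 vs 62.0: the one capped module pulls the average down by nearly four marks
```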
Compensation/condonement
33.One university (B) in the sample does not permit compensation or condonement. The others practise compensation and/or condonement and all publish explicit criteria which must be satisfied to permit the
award of credit or ‘forgiveness’ of failure. In some (e.g., A & F), granting compensation is a discretionary power, albeit defined by regulatory parameters, afforded to an assessment board, placing it somewhere between a right (mandatory) and a privilege (discretionary). Each set of eligibility criteria differs but most (other than D) publish a consistent
threshold standard: this may be a level average (e.g., 40% in A), a minimum mark for the module (e.g., 25-39% in G, 30-39% in E, 35-39%
in F) or a combination of these (e.g., C).
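As an illustration of how such criteria operate in combination, the hypothetical check below joins a compensatable module-mark band to a minimum level average. Both thresholds are invented for the sketch and do not reproduce any contributing university's actual rule:

```python
def eligible_for_compensation(module_mark: float, level_average: float,
                              band=(30, 40), min_average=40) -> bool:
    """A 'narrow fail' qualifies for compensation only when the module mark
    falls inside the compensatable band AND overall performance is sound.
    Thresholds are illustrative, not any contributing university's rule."""
    narrow_fail = band[0] <= module_mark < band[1]
    sound_overall = level_average >= min_average
    return narrow_fail and sound_overall

print(eligible_for_compensation(35, 52))  # True: narrow fail, good overall profile
print(eligible_for_compensation(22, 52))  # False: failure too broad to compensate
```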
34.Unlike re-assessment, which is generally available to most students irrespective of the nature of their initial failure, compensated students
‘qualify’ through achieving a ‘narrow fail’ in a given module in light of good overall performance. As all but one of the universities contributing data to this project permit the award of credit by compensation, it follows
that students in the ‘re-assessment’ category not only did not do well enough to pass, but also failed to do well enough to be awarded credit or
otherwise be exempted from further assessment following initial failure. The difference between a student in the ‘first-timer’ category and a
‘compensated’ student may be as little as 1 mark; the difference between a ‘first-timer’ and a ‘re-assessed’ student is likely to be greater.
35.By definition, a compensated student has failed to demonstrate consistent performance that meets the required pass standard for all modules at Level 4 at the first attempt, so does this suggest that they may not succeed at later levels as strongly as ‘first timers’? If underpinning Level 4 concepts have not been successfully demonstrated, then this could explain the lower attainment of ‘good honours’ among ‘compensated’ students relative to ‘first timers’ at Census Point B. However, the similarity of outcome between ‘re-assessed’ and ‘compensated’ students requires careful disambiguation, as it may owe as much to the greater effort required of students recovering from ‘broad failure’, compared with those whose ‘narrow failure’ is not an impediment to progression, as it does to any concerns over the robustness and rigour of re-assessment.
36.It should be noted that the one university in the dataset that does not
award credit by compensation appears to be ‘compensating’ for this in a different way (Figure e refers). University B does not award credit by compensation, but has the second highest proportion of trailing students
of any of the universities contributing data. So whilst university B may appear to have ‘harsh’ compensation regulations, the approach to trailing
appears somewhat more ‘lenient’.
[Figure e: respective percentage of students progressing by compensation and trailing. Bar chart; x-axis: University (A-I); y-axis: percentage of students progressing; series: % Compensation, % Trailing.]
Trailing modules
37.All of the universities allow students to ‘trail’ credit between levels, but not in large numbers (ranging from 9% of the students in University G to 0.2% of the students in University E). Trailing, therefore, may be regarded as exceptional (in University D, which requires that each student permitted to trail signs up to an action plan), as a qualified ‘right’ (in University G, where trailing is permitted on the basis of academic discretion) or as contingent on other non-academic processes (in University E, where trailing is only permitted if supported by evidence of extenuating circumstances). Four universities contributing data to this sample regard trailing as an ‘automatic outcome’ provided that features of a student’s academic profile meet specific criteria. A combination of student performance and the academic regulations of each university will determine a student’s ability to progress, and thus two students with the same academic profile in two different universities may face differing outcomes, especially with regard to the variation between academic regulations governing ‘trailing’.
38.This category of students is enigmatic in that the circumstances that led to their demarcation as ‘trailing’ vary widely. Within this category are
students who suffered illness and/or circumstances that impacted on their initial assessments to such an extent that they may have been awarded
‘another go’. There will also be students who perform strongly overall, save for a narrow failure in a single module which they may be permitted to discard and replace with another. Then there are the students whose
performance was weak, but who are deemed ‘deserving’ of further opportunities at re-assessment alongside attempting new material at a
higher level.
39.These data suggest, however, that where trailing is permitted solely on the basis of ‘academic discretion’ (see University G, where 9% of students are trailing at Census Point A), it is much more common than in universities (such as University C, where only 1% of students are trailing at Census Point A) that apply a consistent and robust threshold to determine ‘eligibility by right’. Does this imply that, where unfettered discretion determines eligibility to trail, ‘the benefit of the doubt’ is routinely given to students in this position? If so, is this always in the best interest of the student?
40.At Census Point B the outcomes for trailing students do not compare
favourably with other categories of student. Over half of the students trailing Level 4 credit into Level 5 at Census Point A did not complete with honours ‘in time’ at Census Point B. Of those who did, fewer ‘good honours’ awards were conferred than ‘other honours’ outcomes of lower second- or third-class degrees (Figure d refers). This could be on account of the
inherent difficulty of adding remedial material to a new, full workload at a
higher level for students whose performance at Level 4 was deficient in some regard. Are such students helped or hindered by their
progression, trailing credit into Level 5? There may also be logistical difficulties for such students arising from the scheduling of module
delivery, especially if they are repeating tuition as well as assessment. Many may feel it appropriate to switch to part-time study, to ‘step-off’ their programme until their circumstances improve or to accept credit/a
lesser award which can be transferred to another context.
41.Whatever the myriad of circumstances or individual personal issues at play, the data suggest that ‘trailing’ may be setting many students up to struggle - and perhaps fail. This inevitably raises the question of why (all
of the contributing) universities facilitate student progression in this way.
Some conclusions and questions for
consideration
42.In many ways the data confirm some of the intuitive assumptions about recovery from failure. As noted above, in particular:
First timers are most likely to achieve a Good degree and least likely not to be completers
Almost 1 in 3 of both Re-assessed and Compensated are likely not to be timely completers but Re-assessed are more likely to achieve a Good degree than Compensated
Trailing are most likely not to be timely completers, though more likely to achieve a Good degree if completing in time than Compensated.
43. The data may offer limited support to the notion that redoing failed assessments, whether as re-assessment in the same academic year as the failure(s) occurred or in a subsequent year, is more beneficial to students than compensating them for failure, in terms of achieving a good honours degree. A recent study, conducted in a different setting, drew similar conclusions (‘We conclude that the evidence for a positive effect of resits on learning is weak at best’16).
44. Despite the range of variables and complicating factors (such as curriculum mix, mandatory and optional modules, intra-module compensation, student support systems and the student populations themselves), patterns of performance across the institutions are broadly similar. This may suggest that the grading of student performance is informed only in part by institutional factors and that it is also moderated by sector norms and subject standards. It would, however, be mistaken to conclude that commonality in patterns of performance is a measure of consistency or fairness. The effect is illusory in that it masks the range of differences in universities’ regulatory practices to which students are subject(ed), and which raise issues of fairness both to individuals and across the sector as a whole.
45. Not only do institutions differ in their regulatory practices, they also differ in their use of terminology: there are subtle but fundamental differences in the use of language. We believe that the choice of language may reflect the philosophies underpinning institutional regulation and that
these differ between settings. For example, one university in our sample repeatedly uses the phrase ‘permitted to progress’, where another uses
16 Arnold, Ivo, “Resitting or Compensating”, p.1.
the term ‘level completion is achieved’ in the same context; the former suggests authorisation, the latter suggests entitlement. Do students earn
the right to progress, or do universities offer opportunities? Academic regulation is replete with references to failure, penalty, mitigation,
expulsion, recovery and redemption. Apart from the moralistic overtones of such language, there is also the implication of universities operating a deficit model of assessment. Both the underlying model of assessment
and the language in which that model is couched are, we think, topics worthy of further research.
46.Stowell, Falahee and Woolf identified the inherent tension in academic
infrastructures:
From the perspective of the UK Quality Code for Higher Education17, assessment regulations are the articulation, or at least the reflection, of an institution’s assessment principles and values, serving to support
academic standards and providing the basis for equitable and consistent treatment of its students. This might suggest that
assessment regulations are underpinned by established educational principles and rationales, ‘tried and tested’ over time so that they
continue to meet their key purposes of assuring academic standards, equity and fairness. It is our contention, however, that regulations are ‘sites of compromise’ in that they are constantly negotiated through
the normal decision-making processes of the academy, where competing perspectives and interests are brought to bear18.
47. For students, these conclusions raise the fundamental question of
whether achievement of a good degree is of greater importance than timely completion.
48. For universities, the findings test the principles on which (re-)assessment policies are developed. The conclusions may also call into question the extent to which it is possible (or even desirable) to ensure regulations are explicit, so that they can be applied consistently without local interpretation or discretion.
49. In the light of the project’s findings, institutions might want to ask themselves:
• Why do students who have recovered from Level 4 failure fare less well, in both quantitative and qualitative measures, than those students who passed first time?
17 Quality Assurance Agency for Higher Education. 2013. UK Quality Code for Higher Education – Part A: Setting and Maintaining Academic Standards. Gloucester: QAA.
18 Stowell, Marie, Marie Falahee and Harvey Woolf, “Academic Standards”, p.517.
• Given the similar timely completion rates for Re-assessed and Compensated students, does the increased likelihood of Re-assessed students obtaining a Good degree in time (32% versus 27% for Compensated) warrant the costs of re-assessment?
• Does the educational rationale for re-assessment versus compensation/condonement stand up to scrutiny?
• What data about retrieval can and should be provided to examination boards and course teams to enhance support for students who fail?
• To what extent do the academic regulations support student progression and success?
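The 32% versus 27% comparison above can be checked by pooling the Re-assessed (K) and Compensated (L) rows of the university tables in Appendix 2. A minimal Python sketch (the variable and function names are ours; each tuple holds the population, first-class and upper-second counts from one submitted dataset):

```python
# Per-university (population, first, upper second) counts for the
# Re-assessed (K) and Compensated (L) categories, taken from the
# submitted datasets of Universities A-I in Appendix 2.
reassessed = [(262, 4, 51), (157, 2, 47), (276, 5, 81), (170, 10, 53),
              (365, 12, 126), (66, 2, 29), (489, 21, 127), (263, 5, 60),
              (600, 45, 175)]
compensated = [(63, 0, 8), (0, 0, 0), (241, 3, 79), (233, 11, 63),
               (343, 6, 98), (25, 1, 7), (495, 12, 106), (134, 2, 20),
               (190, 5, 40)]

def good_degree_rate(rows):
    """Share of the category obtaining a 'Good degree' (first or upper
    second class) in time, pooled across all contributing universities."""
    population = sum(pop for pop, first, upper in rows)
    good = sum(first + upper for pop, first, upper in rows)
    return 100 * good / population

print(f"Re-assessed: {good_degree_rate(reassessed):.0f}%")   # 32%
print(f"Compensated: {good_degree_rate(compensated):.0f}%")  # 27%
```

Pooling across institutions in this way weights each university by its cohort size; an unweighted mean of the nine institutional rates would give slightly different figures.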
50. We hope that our work offers an insight into the diversity of regulatory frameworks within the UK university sector. However, the missing variable in any such analysis is the extent to which the policy definition or institutional framework permits local discretion (in the name of academic judgement) to depart from the stated institutional policy norm. Without such an understanding, it is unlikely that the extent of variation can be fully understood. Expanding upon this point, McNay19 reveals that:
…in many of the universities I have studied, there is a gap between the
leaders and the led so that the practices of professionals making judgements informed locally are at variance with the corporate policy
statements, which imply a standard model universally implemented.
51. Herein is encapsulated the missing variable: the white noise in the system which potentially obscures the direct relationship between a student’s performance and the eventual degree classification outcome. For analysts of UK university outputs this represents a major impediment to understanding how decisions are reached and upon what basis such decisions are made. Turnbull, Burton and Mullins20 have highlighted the tensions that arise when the greater transparency demanded by an increasingly modularised higher education system is mapped onto a culture grounded in flexibility and academic autonomy, and how this requires a repositioning of institutional regulatory frameworks.
52. The importance of institutional framework regulations, systems and processes has been highlighted elsewhere, yet it represents an under-investigated area of study. In the absence of such frameworks there cannot be any expectation of consistent practice within an
19 McNay, I. Beyond Mass Higher Education: Building on Experience. Society for Research into Higher Education & Open University Press, 2006, p.42.
20 Turnbull, W., D.M. Burton and P. Mullins. 2008. “Strategic Re-positioning of Institutional Frameworks: balancing competing demands within the modular UK higher education environment.” Quality in Higher Education 14 (1): 15-28.
Institution and, as Stowell21 argues, in relation to module marking and honours degree classification within the context of defined and published
Institutional framework regulations:
The logic of ‘technicist’ approaches would suggest that equity, justice and comparability of academic standards lies as much in the construction and operation of consistent regulatory conventions for
defining thresholds as in the specification of broad outcome statements or descriptors.
53. However, as Stowell also recognises, the actual decisions by assessment boards to approve or reject individually specific issues impact significantly upon assessment outcomes such as progression and award classifications. There is legitimate concern that allegations of ‘unconscious compensation tactics’ and/or ‘personal biases’ influence assessment board decisions and outcomes. This goes to the heart of the
(often hidden) issues surrounding the exercise of academic judgement within UK Universities. As Stowell comments, ‘[t]here would appear to
be no studies that seek to demystify the process by which boards seek to ensure fair treatment and justice for individual students’22.
54.Data about students exist in various silos throughout the UK university
sector. ‘Data mining’ takes data from different areas and perspectives and analyses the ‘big picture’ in order to provide useful information
and/or summary measures. The researcher will examine different datasets to discern patterns and may also aggregate data to facilitate ‘meta-analyses’. Yorke et al23 comment that:
Institutional record systems contain much data that can be usefully
mined in order to inform planning and decision-making. The analysis of a wide range of institutional data is commonplace in higher education in the USA, but work of this type is less developed
elsewhere.
55. UK universities accumulate and report externally (as they are obliged to do, by statute) upon a vast range of data which are rarely, if ever, utilised to inform decision-making within the universities from which these data are sourced24. This study utilises institutional datasets and we
believe that attention should be paid to the potential value of institutional datasets for both researchers and practitioners in the UK university
21 Stowell, “Equity”, p.501.
22 All from Stowell, “Equity”, p.502.
23 Yorke, Mantz et al. 2005. “Mining institutional datasets to support policy making and implementation.” Journal of Higher Education Policy and Management 27 (2): p.285.
24 For an Australian perspective, see Selwyn, Neil, Michael Henderson, and Shu-Hua Chao. 2016. “‘You Need a System’: Exploring the Role of Data in the Administration of University Students and Courses.” Journal of Further and Higher Education: 1-11. doi: 10.1080/0309877x.2016.1206852.
sector. In some ways, the universities’ datasets are akin to parish records, about which professional historians and amateur genealogists alike agree that there are few, if any, more authentic datasets available for providing a fine-grained understanding of local practices and behaviours.
56.In an earlier study, Yorke25 discussed both the benefits and the costs inherent within institutional research. A reflection from the perspective of
managers of university data systems is provided by Yanosky26 who highlights generic issues with data accuracy, currency and relevance to the processes that institutional data is intended to support. He also
highlights that data is perceived to be legitimate only if it meets the requirements of the defined model for that data and that such ‘structural
schema’ may limit the effectiveness of subsequent analysis. Yanosky’s research concluded that universities in the United States of America have the necessary infrastructure to capture and maintain large datasets, but
that (perhaps in contradiction of Yorke’s belief) only a ‘modest infrastructure’ existed to support data analytics.
57.The researcher must be mindful of the risks inherent in mining institutional data from differing sources. Each university will have bespoke institutional regulations, employ specific terminology and hold
data on different systems. Data may, therefore, take many different forms and this provides challenges to facilitating ‘like comparison’. In
submitting evidence to the Parliamentary Innovation, Universities, Science & Skills Committee in 200927, SACWG noted that:
• Assessment regulations and practices (‘practices’ is taken to include not only the rules and conventions that complement the published
regulations, but also assessment methods) across the higher education sector are quite varied.
• The profiles of honours degree classifications in different subject areas are varied.
• The type of assessment task set for students influences the grades
that they receive for their work.
• Assessment criteria are, in practice, fuzzier than is often
acknowledged.
25 Yorke, Mantz. 2004. “Institutional research and its relevance to the performance of higher education institutions.” Journal of Higher Education Policy and Management 26 (2): 141–52.
26 Yanosky, R. 2009. Institutional Data Management in Higher Education.
27 Innovation, Universities, Science and Skills Committee. 2009. Students and Universities: Eleventh Report of Session 2008–09, Volume 1, pp.115-16.
58. Indeed, over time SACWG has demonstrated differences in marking practices28, variations between degree classification algorithms29 and a range of variations between institutional academic frameworks30. It would be a naïve researcher who took the view that, just because two things in two different contexts had the same name, they must be the same thing - and the SACWG canon demonstrates clearly that this is certainly not the case in the UK university sector. All submissions of data utilised in this research project are provided to the same specification, with local differences in terminology, coding and structure homogenised.
28 Bridges, Paul et al. 1999. “Discipline-related Marking Behaviour Using Percentages: a potential cause of inequity in assessment.” Assessment & Evaluation in Higher Education 24 (3): 285-300; Yorke, Mantz, Paul Bridges and Harvey Woolf. 2002. “Mark Distributions and Marking Practices in UK Higher Education: Some Challenging Issues.” Active Learning in Higher Education 1 (1): 7-27.
29 Woolf, Harvey, and David Turner. 1997. “Honours Classifications: The Need for Transparency.” The New Academic 6 (3): 10-12; Yorke, Mantz et al. 2004. “Some Effects of the Award Algorithm on Honours Degree Classifications in UK Higher Education.” Assessment & Evaluation in Higher Education 29 (4): 401-13. doi: 10.1080/02602930310001689000.
30 Yorke, Mantz, Harvey Woolf, Marie Stowell et al. 2008. “Enigmatic Variations: Honours Degree Assessment Regulations in the UK.” Higher Education Quarterly 62 (3): 157-80.
Next steps?
59. Ricketts31 notes that ‘[f]urther work is needed by higher education institutions and certifying bodies to re-examine the role of resit examinations in their assessment strategies.’ The outcomes of this project confirm Ricketts’ assertion that more work is needed. This might include extending the project to other HE providers, a finer-grained analysis of the additional data to identify any differences by, for example, gender, age, ethnicity or subject, and/or further analysis of the relationship between regulatory arrangements and degree outcomes.
60. The Higher Education Funding Council for England’s most recent incursion into the quality assurance of English Higher Education includes ‘a review of classification algorithms’. To this end, ‘Universities UK are setting up a working group to consider the approaches used to calculate overall classifications, taking into account a range of different and diverse pedagogic aims, with the aim of producing advice and guidance on a sensible range of algorithms’. One of the reasons for the ‘decision to shin[e] a light on…algorithms’ is that ‘students have told us [HEFCE] that they would welcome greater transparency and consistency in this area’32. NUCCAT and SACWG’s work over the years has demonstrated that such a review is long overdue. It is to be hoped that a similar review of progression regulations will not be so long in gestation.
31 Ricketts, “A New Look at Resits”, p.356.
32 Bourke, Tish. 2016. Degree standards: doing what it says on the tin, HEFCE 13 April.
References
Arnold, Ivo. 2016. “Resitting or Compensating a Failed Examination: Does It Affect Subsequent Results?” Assessment & Evaluation in Higher Education: 1-15. doi: 10.1080/02602938.2016.1233520.
Atlay, Mark et al. 2012. A Survey of Higher Education Credit Practice in the United Kingdom 2012. United Kingdom Credit Forum: University of Derby.
Bourke, Tish. 2016. Degree standards: doing what it says on the tin, HEFCE 13 April, available at http://tinyurl.com/gmrp7ag.
Bridges, Paul et al. 1999. “Discipline-related Marking Behaviour Using Percentages: a potential cause of inequity in assessment.” Assessment & Evaluation in Higher Education 24 (3): 285-300.
Department for Education. Teaching Excellence Framework: year two specification, September 2016, available at http://tinyurl.com/hruwzjc.
Department for Education. Teaching Excellence Framework: year two and beyond: Government technical consultation response, September 2016, available at http://tinyurl.com/zc5r6dz.
Falahee, Marie, Marie Stowell, and Harvey Woolf. 2013. Crafting Assessment Regulations for First Year Students: Stringency, Academic Alignment and Equity (unpublished project report, available at https://eprints.worc.ac.uk/2990/).
HESA. 2016. Introduction – Students 2014/15, available at https://www.hesa.ac.uk/data-and-analysis/publications/students-2014-15/introduction.
Innovation, Universities, Science and Skills Committee. 2009. Students and Universities: Eleventh Report of Session 2008–09, Volume 1. http://www.publications.parliament.uk/pa/cm200809/cmselect/cmdius/170/170i.pdf.
McArthur, Jan. 2016. “Assessment for Social Justice: The Role of Assessment in Achieving Social Justice.” Assessment & Evaluation in Higher Education 41 (7): 967-81. doi: 10.1080/02602938.2015.1053429.
McNay, I. 2006. Beyond Mass Higher Education: Building on Experience. Society for Research into Higher Education & Open University Press.
Office of the Independent Adjudicator for Higher Education (OIA). 2016. Annual Report 2015, available at http://www.oiahe.org.uk/news-and-publications/annual-reports.aspx.
Ostermaier, Andreas, Philipp Beltz and Susanne Link. 2013. Do University Policies Matter? Effects of Course Policies on Performance. Dusseldorf: Beiträge zur Jahrestagung des Vereins für Socialpolitik.
Pell, Godfrey, Katherine Boursicot and Trudie Roberts. 2009. “The Trouble with Resits…” Assessment & Evaluation in Higher Education 34 (2): 243-51.
Proud, Steven. 2015. “Resits in Higher Education: Merely a Bar to Jump over, or Do They Give a Pedagogical ‘Leg Up’?” Assessment & Evaluation in Higher Education 40 (5): 681-97.
Quality Assurance Agency for Higher Education. 2013. UK Quality Code for Higher Education – Part A: Setting and Maintaining Academic Standards. Gloucester: QAA.
Ricketts, Chris. 2010. “A New Look at Resits: Are They Simply a Second Chance?” Assessment & Evaluation in Higher Education 35 (4): 351-56.
Rust, C. 2010. “Purposes and consequences of re-assessment.” Available at http://web.anglia.ac.uk/anet/faculties/alss/public/purposes_and_consequences_of_reassessment.pdf.
Selwyn, Neil, Michael Henderson, and Shu-Hua Chao. 2016. “‘You Need a System’: Exploring the Role of Data in the Administration of University Students and Courses.” Journal of Further and Higher Education: 1-11. doi: 10.1080/0309877x.2016.1206852.
Stowell, Marie. 2004. “Equity, Justice and Standards: Assessment Decision Making in Higher Education.” Assessment & Evaluation in Higher Education 29 (4): 495-510. doi: 10.1080/02602930310001689055.
Stowell, Marie. 2016. “Assessing Re-assessment.” Presentation at Leeds Beckett University, 12 July.
Stowell, Marie, Marie Falahee and Harvey Woolf. 2016. “Academic Standards and Regulatory Frameworks: Necessary Compromises?” Assessment & Evaluation in Higher Education 41 (4): 515-31.
Turnbull, W., D.M. Burton and P. Mullins. 2008. “Strategic Re-positioning of Institutional Frameworks: balancing competing demands within the modular UK higher education environment.” Quality in Higher Education 14 (1): 15-28.
Woolf, Harvey, and David Turner. 1997. “Honours Classifications: The Need for Transparency.” The New Academic 6 (3): 10-12.
Yanosky, R. 2009. Institutional Data Management in Higher Education, available at https://net.educause.edu/ir/library/pdf/EKF/EKF0908.pdf.
Yorke, Mantz, Paul Bridges and Harvey Woolf. 2002. “Mark Distributions and Marking Practices in UK Higher Education: Some Challenging Issues.” Active Learning in Higher Education 1 (1): 7-27.
Yorke, Mantz. 2004. “Institutional research and its relevance to the performance of higher education institutions.” Journal of Higher Education Policy and Management 26 (2): 141–52.
Yorke, Mantz et al. 2004. “Some Effects of the Award Algorithm on Honours Degree Classifications in UK Higher Education.” Assessment & Evaluation in Higher Education 29 (4): 401-13. doi: 10.1080/02602930310001689000.
Yorke, Mantz et al. 2005. “Mining institutional datasets to support policy making and implementation.” Journal of Higher Education Policy and Management 27 (2): 285-298.
Yorke, Mantz, Harvey Woolf, Marie Stowell et al. 2008. “Enigmatic Variations: Honours Degree Assessment Regulations in the UK.” Higher Education Quarterly 62 (3): 157-80.
Appendices
Appendix 1: The call for contributions
Appendix 2: The data
Appendix 1: Call for contributions to a research project
‘To what extent do re-assessment, compensation and trailing support student success?’
The Northern Universities Consortium (NUCCAT) and the Student and Assessment Classification Working Group (SACWG) invite member institutions to participate in a research project. The project seeks to measure the extent to which academic regulations that facilitate student progression (following initial failure in modules) support student success, measured in terms of the students’ final award. This research is undertaken in light of the Higher Education Green Paper “Fulfilling our Potential: Teaching Excellence, Social Mobility and Student Choice”, which specifies that one of the three ‘common metrics’ for best informing Teaching Excellence Framework judgements is “Retention/continuation – from the UK Performance Indicators which are published by the Higher Education Statistics Agency (HESA)”33. The first phase of the project is based upon an analysis of institutional datasets.
Participants are invited to submit a dataset in relation to full-time students at Census Point A34 enrolling upon Level 5 of undergraduate (3-year) honours degrees following progression from Level 4, divided into the following categories:
J. Students who passed all Level 4 modules at the first attempt
K. Students who passed all L4 modules after a re-assessment attempt (re-assessment)
L. Students for whom credit was awarded to facilitate progression35 (compensation)
M. Students who were permitted to progress without attaining 120c (trailing)
The dataset (which may be from any recent academic year) must subsequently report the award outcomes of these students, categorised into class bands, at Census Point B, as follows:
Example dataset (population = 5000)

Census Point A     Census Point B
                   First class   Upper second   Lower second   Third class   No award
J (4000)           750           2000           750            300           200
K (500)            50            300            50             50            50
L (350)            40            200            40             20            20
M (150)            10            120            10             5             5
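From a submission in this shape, the headline measures used in the report follow directly. A minimal Python sketch using the illustrative figures above (the `summarise` helper and the category labels are ours, not part of the submission specification):

```python
# Each row gives, for one progression category, the Census Point B
# outcome counts in classification order:
# (first, upper second, lower second, third, no award).
example = {
    "J (first timers)": (750, 2000, 750, 300, 200),
    "K (re-assessed)":  (50, 300, 50, 50, 50),
    "L (compensated)":  (40, 200, 40, 20, 20),
    "M (trailing)":     (10, 120, 10, 5, 5),
}

def summarise(counts):
    """Headline measures for one category: timely completion (any honours
    award at Census Point B) and 'Good degree' (first or upper second)."""
    first, upper, lower, third, no_award = counts
    population = sum(counts)
    return {
        "population": population,
        "timely_pct": 100 * (population - no_award) / population,
        "good_pct": 100 * (first + upper) / population,
    }

for category, counts in example.items():
    s = summarise(counts)
    print(f"{category}: timely {s['timely_pct']:.1f}%, good degree {s['good_pct']:.1f}%")
```

Note that both measures are taken over the whole Census Point A population of the category, so ‘good degree’ here means a good degree achieved in time.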
Data from participating institutions will be analysed and the initial findings shared anonymously amongst participants. Initial analysis of the data will inform subsequent phases of the research, which are likely to include follow-up research activity (structured interviews and/or discussion of institutional academic regulations with participant volunteers). Please note that this call for contributions in no way
33 Part A: Teaching Excellence, Quality and Social Mobility, Chapter 3: Criteria and metrics: Key principles for metrics and institutional evidence: Common metrics.
34 Census points should be two years apart (e.g., cohort progresses from L4 to L5 in summer 2013 and completes with award in summer 2015).
35 It is noted that a student may, for example, be both re-assessed and compensated, but as defined above it is possible to associate a student with one category only (e.g., if a student is both re-assessed and compensated, then they are classed as ‘compensated’ because ‘credit was awarded to facilitate progression’ rather than ‘passed all modules after a re-assessment attempt’. Ditto trailing.).
seeks to identify any individual from within the dataset. The researchers guarantee not to disclose the names of the participating universities, and to ensure that, if there are insufficient data with which to proceed, any submitted data will be deleted. Please submit data / direct questions about the research project to
Leopold Green (Chair of NUCCAT) & Marie Stowell (Chair of SACWG)
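The one-category-only rule in footnote 35 amounts to a simple precedence: trailing over compensation over re-assessment. A minimal Python sketch of that rule (the function and flag names are ours, purely illustrative):

```python
def progression_category(passed_all_first_time: bool,
                         was_reassessed: bool,
                         was_compensated: bool,
                         is_trailing: bool) -> str:
    """Assign a progressing student to exactly one category, following the
    precedence implied by footnote 35: trailing takes priority over
    compensation, which takes priority over re-assessment."""
    if is_trailing:          # progressed without attaining 120 credits
        return "M (trailing)"
    if was_compensated:      # credit awarded to facilitate progression
        return "L (compensated)"
    if was_reassessed:       # passed all modules after re-assessment
        return "K (re-assessed)"
    if passed_all_first_time:
        return "J (first timer)"
    raise ValueError("student did not progress from Level 4")

# A student who was both re-assessed and compensated is classed as
# compensated, per the footnote's worked example.
print(progression_category(False, True, True, False))  # L (compensated)
```

This ordering is why the four categories partition the Census Point A population without double counting.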
Appendix 2
The data
The data submitted by each of the participating universities are presented herein. Each submission is anonymised, in accordance with the guarantee provided in the Call for Contributions.
For each university please find enclosed:
• the submitted dataset,
• the distribution of students at Census Point A,
• the destination of students at Census Point B,
• the distribution of classifications for students achieving an award, and
• the distribution of classifications for students achieving an award in each progression category.
University A
Submitted dataset
Distribution at Census Point A
[key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 1337   First   Upper Sec   Lower Sec   Third   No award
J          979    146     457         193         16      167
K          262    4       51          84          17      106
L          63     0       8           25          5       25
M          33     0       0           0           0       33
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each
progression category
University B
Submitted dataset
Distribution at Census Point A
[key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 1185   First   Upper Sec   Lower Sec   Third   No award
J          945    121     504         162         5       153
K          157    2       47          49          3       56
L          0      0       0           0           0       0
M          83     0       10          12          4       57
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each
progression category
University C
Submitted dataset
Distribution at Census Point A
[key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 3380   First   Upper Sec   Lower Sec   Third   No award
J          2819   559     1368        496         38      358
K          276    5       81          69          21      100
L          241    3       79          71          13      75
M          44     2       6           2           2       32
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each progression category
University D
Submitted dataset
Distribution at Census Point A [key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 2531   First   Upper Sec   Lower Sec   Third   No award
J          2073   424     1070        437         51      91
K          170    10      53          57          14      36
L          233    11      63          91          31      37
M          55     4       18          14          3       16
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each progression category
University E
Submitted dataset
Distribution at Census Point A [key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 3029   First   Upper Sec   Lower Sec   Third   No award
J          2314   352     1190        553         52      167
K          365    12      126         148         18      61
L          343    6       98          138         33      68
M          7      1       4           1           1       0
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each progression category
University F
Submitted dataset
Distribution at Census Point A [key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 380    First   Upper Sec   Lower Sec   Third   No award
J          271    50      160         48          0       13
K          66     2       29          31          0       4
L          25     1       7           13          1       3
M          18     0       4           8           0       6
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each progression category
University G
Submitted dataset
Distribution at Census Point A
[key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 3423   First   Upper Sec   Lower Sec   Third   No award
J          2120   343     871         472         36      398
K          489    21      127         160         25      156
L          495    12      106         169         34      174
M          319    5       63          77          16      158
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each progression category
University H
Submitted dataset
Distribution at Census Point A [key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 2226   First   Upper Sec   Lower Sec   Third   No award
J          1811   344     660         422         44      341
K          263    5       60          75          10      113
L          134    2       20          39          11      62
M          18     0       0           1           0       17
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each
progression category
University I
Submitted dataset
Distribution at Census Point A [key: first timers, re-assessed, compensated and trailing]
Distribution at Census Point B
Population 2337   First   Upper Sec   Lower Sec   Third   No award
J          1435   405     556         268         33      173
K          600    45      175         161         33      186
L          190    5       40          54          7       84
M          112    4       16          26          8       58
Distribution of classifications for students achieving an award
Distribution of classifications for students achieving an award in each progression category