Can knowledge tests and situational judgement tests predict selection centre performance?

Haroon Ahmed, Melody Rhydderch & Phil Matthews

OBJECTIVES Written tests are an integral part of selection into general practice specialty training in the UK. Evidence supporting their validity and reliability as shortlisting tools has prompted their introduction into the selection processes of other medical specialties. This study explores whether candidate performance on two written tests predicts performance on subsequent workplace-based simulation exercises.

METHODS A prospective analysis of candidate performance (n = 135) during the general practice selection process was undertaken. Candidates were shortlisted using their scores on two written tests, a clinical problem-solving test (CPST) and a situational judgement test (SJT). Successful candidates then undertook workplace-based simulation exercises at a selection centre (SC). Scores on the CPST and SJT were correlated with SC scores. Regression analysis was undertaken to explore the predictive validity of the CPST and SJT for SC performance.

RESULTS The data show that the CPST and SJT are predictive of performance in workplace-based simulations (r = 0.598 for the CPST, r = 0.717 for the SJT). The SJT is a better predictor of SC performance than the CPST (R2 = 0.51 versus R2 = 0.35). However, the two tests together provide the greatest degree of predictive ability, accounting for 57% of the variance seen in mean scores across SC exercises.

CONCLUSIONS The CPST and SJT play valuable roles in shortlisting and are predictive of performance in workplace-based SC exercises. This study provides evidence for their continued use in selection for general practice training and their expansion to other medical specialties.
assessment

Medical Education 2012: 46: 777–784
doi:10.1111/j.1365-2923.2012.04303.x

Discuss ideas arising from this article at www.mededuc.com 'discuss'

Department of Postgraduate General Practice Education, Cardiff University School of Medicine, Cardiff, UK

Correspondence: Dr Haroon Ahmed, Postgraduate General Practice Education, Cardiff University, 8th Floor, Neuadd Meirionydd, Heath Park, Cardiff CF14 4YS, UK. Tel: 00 44 7855 140666; E-mail: [email protected]

© Blackwell Publishing Ltd 2012. MEDICAL EDUCATION 2012; 46: 777–784


INTRODUCTION

Written psychometric tests designed to assess clinical and non-clinical acumen are an integral part of selection into general practice specialty training in the UK. Such tests are used as shortlisting tools and determine which applicants should be invited to attend a selection centre (SC). Their role is now being expanded to selection into other postgraduate specialty training programmes. This paper reports on a study that explores whether candidate performance on these tests predicts performance on subsequent high-fidelity, workplace-based simulation exercises at an SC. It also debates the implications of these findings for future research and practice within general practice and across other specialties.

Selection into medical specialty training should be undertaken using selection methods that are valid and reliable.1 The selection method used should be driven by a clear analysis of what is required for the job and its purpose should be to select candidates who will be most likely to perform well in the job.2

Beyond this, selection methods should be sensitive to issues of diversity to ensure that selection is fair and unbiased.1 It has been argued that selection methods should be subject to the same rigour as assessment methods and should endeavour to meet issues relevant to social inclusion, workforce planning and widening of access.1

In their meta-analysis of 85 years of research into personnel selection, Schmidt and Hunter describe predictive validity as the most important property of a personnel selection method.3 They show that the greatest degree of predictive validity is achieved by combining a test of general mental ability with a work sample test, an interview or an integrity test. In postgraduate medicine, appropriate outcomes to measure the success of selection remain a matter of debate.1 Although some studies have used supervisor ratings4,5 or results from end-of-training examinations,6,7 there remains a lack of consensus on how best to objectively measure job performance in doctors. Prideaux and colleagues emphasise the difficulties faced by those responsible for the design and delivery of selection processes, highlighting the relative lack of studies relating to selection into postgraduate training.1 At the 2010 Ottawa Conference of the Association for Medical Education in Europe, only three studies were mentioned by an expert consensus statement outlining selection.8–10 All three studies were undertaken in North America and describe the use of multiple mini-interviews8 or structured single interviews9,10 for selection into residency programmes. When comparing this with the wealth of studies evaluating achievement tests, aptitude tests, multiple mini-interviews, SCs and other methodologies in the context of medical school selection, it is reasonable to conclude that selection at the postgraduate level has not been subject to the same scrutiny as selection into initial medical education.1

Selection into postgraduate general practice training in the UK somewhat follows the model described by Schmidt and Hunter3 by using tests of general mental ability followed by work sample tests.11 This nationally coordinated, competency-based selection process begins by identifying eligible candidates according to evidence of the required foundation competencies demonstrated on their electronic application forms.12 These candidates then undergo a shortlisting process using two invigilated, machine-marked, psychometric tests, which aim to identify those who should be invited to a regionally organised SC.12,13 The first of these tests is a 100-item clinical problem-solving test (CPST). The CPST measures crystallised intelligence and tests declarative knowledge that has been gained during previous medical training. The second of these tests is a 50-item situational judgement test (SJT). The SJT confronts applicants with written descriptions of job-related scenarios and asks candidates to indicate their actions from a list of predetermined responses.14 The SJT has been widely used in high-stakes industry personnel selection15 and in medical student selection16 and is designed to test non-cognitive professional attributes. Successful candidates then attend the SC, which uses high-fidelity simulations to assess behavioural evidence of workplace-based competencies in a consistent, standardised manner, based on a nationally agreed blueprint.2,4,17

Studies evaluating this process have shown that it is reliable and can predict future job performance.4,11,14 In a longitudinal matched-case comparison study in 2005, Patterson et al. compared trainees selected using the current competency-based process against those selected using traditional methods.4 They showed that trainees selected using the SC approach attained significantly higher supervisor ratings 3 months into their training, compared with trainees selected using traditional methods. There was also significant correlation between SC scores and supervisor ratings, supporting the evidence for the predictive validity of the SC process. Furthermore, Irish and Patterson have reported that evaluation of the UK selection process by candidates shows that satisfaction with and confidence in the three-stage process is generally high.12

Previous methods of shortlisting for selection into specialty training involved hand-marking and scoring of structured application forms.14 The first major limitation of this method refers to the development of websites that provide instruction and advice on model answers. The completion of application forms under invigilated conditions partially resolves this issue, but leads to increased costs.11 A second major limitation refers to reliability. Research comparing the outcomes of applications to general practice specialty training for the same candidate in different regions of the UK has shown that agreement among shortlisters for this method is low, with kappa scores of 0.3, suggesting possible problems in the reliability of this method.11

This paper sets out to explore the associations between CPST and SJT scores and SC scores for candidates who applied to general practice training in 2011 and specifically asks:

1 Do CPST and SJT scores correlate well with scores on the high-fidelity, workplace-based simulations that comprise the SC exercises? Good correlations would be expected to allow the CPST and SJT to continue to fulfil their roles as shortlisting tools.

2 Does the SJT offer any advantage over the CPST? Based on previous research findings,14,18 SJT scores are expected to show a stronger correlation with scores on SC exercises and thus the SJT is expected to offer incremental validity over the use of the CPST alone.

3 Have correlations among scores on the CPST, SJT and SC exercises improved over the years? Selection is an evolutionary process that undergoes regular improvement and development. The magnitude of the association between scores on psychometric written tests and SC exercises is expected to have increased over time.

The results of this paper will inform policy and practice in the context of optimal methods of selection into postgraduate medical training. This extends beyond selection into general practice as it will also be of value in those specialties in which the use of written tests is being trialled, such as internal medicine and emergency medicine,18,19 and will inform process in an area in which the number of available studies is small.1

METHODS

Design and sampling

This study was set within Wales, in the UK. A prospective analysis of CPST, SJT and SC scores was undertaken for candidates in 2011. Data were available for all 135 candidates. Correlations between scores on the CPST and the SJT and scores on each SC exercise and mean scores across exercises were computed. Scores on the CPST and SJT were combined to give each candidate a total shortlisting score. Correlations between total shortlisting scores and mean SC scores for candidates in 2011 were compared with those from a retrospective analysis of candidates in 2009 and 2010 to explore whether the magnitude of association between CPST, SJT and SC exercise scores had increased over the years.

The SC in 2009 and 2010 consisted of three work-related simulations: a written exercise; a group discussion exercise, and a patient simulation. These exercises give candidates opportunities to provide behavioural evidence for the required competencies. As part of ongoing evaluation of the selection process, the format in 2011 was adjusted to include four exercises: a written exercise, and three simulations that involved acting out scenarios with, respectively, a patient, a colleague, and a carer or relative of a patient. The group discussion exercise was removed because it demonstrated a lower incremental contribution to the SC outcome.12

Figure 1 outlines sample questions from the CPST and SJT.

Statistical methods

Correlations were computed using Pearson correlation coefficients. In order to ascertain normality, the skew, kurtosis and distribution of the selection scores were assessed. Shapiro–Wilk significance testing gave p-values > 0.05 for the written test scores and the SC exercises, indicating a normal distribution.
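The normality check and correlation step described above can be sketched as follows. The candidate data are not public, so the scores below are synthetic stand-ins, and the analysis was actually run in SPSS; this Python/SciPy version is only an assumed equivalent of the same two tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for 135 candidates' scores (the real data are not public)
sjt = rng.normal(220, 25, 135)             # situational judgement test scores
sc = 0.04 * sjt + rng.normal(0, 1.5, 135)  # mean SC score, loosely driven by SJT

# Shapiro-Wilk test: p > 0.05 is consistent with a normal distribution
w_stat, p_normality = stats.shapiro(sc)

# Pearson correlation between written test scores and SC scores
r, p_corr = stats.pearsonr(sjt, sc)
print(f"Shapiro-Wilk p = {p_normality:.3f}, Pearson r = {r:.3f}")
```

With real scores, the two p-values would be read exactly as in the paper: the Shapiro–Wilk p-value to justify using Pearson's r, and the correlation p-value to judge significance.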

The ability of the CPST and SJT to predict SC scores was explored using regression analysis. Firstly, linear regression was undertaken to identify the degree of variance predicted by each written test independently, using the mean score across SC exercises as the dependent variable. Multiple regression was then undertaken to explore the predictive ability of both tests combined. Analysis was carried out using SPSS Version 16 (SPSS, Inc., Chicago, IL, USA).
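A minimal sketch of the two regression steps (each test alone, then both combined), with the incremental R2 computed by hand. It uses plain NumPy least squares rather than SPSS, and the score columns are invented placeholders, not the study data.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])     # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # OLS coefficients
    ss_res = np.sum((y - X1 @ beta) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)           # total sum of squares
    return 1.0 - ss_res / ss_tot

# Invented placeholder scores for 135 candidates
rng = np.random.default_rng(42)
cpst = rng.normal(250, 30, 135)
sjt = 0.5 * cpst + rng.normal(100, 20, 135)            # SJT correlated with CPST
sc = 0.02 * cpst + 0.03 * sjt + rng.normal(0, 1.5, 135)

r2_cpst = r_squared(cpst[:, None], sc)                 # linear regression: CPST alone
r2_sjt = r_squared(sjt[:, None], sc)                   # linear regression: SJT alone
r2_both = r_squared(np.column_stack([cpst, sjt]), sc)  # multiple regression: both tests
delta_r2 = r2_both - r2_cpst                           # incremental validity of the SJT
print(f"R2: CPST={r2_cpst:.2f}, SJT={r2_sjt:.2f}, both={r2_both:.2f}, delta={delta_r2:.2f}")
```

Because the single-predictor models are nested inside the two-predictor model, r2_both can never fall below r2_cpst; the gap between them is the "R2 change" reported in Table 2.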


RESULTS

Candidates in the study sample (n = 135) had a mean age of 28 years and were mostly female (66%). Almost two-thirds (64%) of candidates were UK graduates. Compared with candidates to general practice specialty training in the UK as a whole,13 the study sample contained a higher proportion of women (66% versus 53%) and a lower proportion of international medical graduates (36% versus 54%).

Do CPST and SJT scores from 2011 correlate with scores on SC exercises?

Table 1 shows the Pearson coefficients for correlations between CPST and SJT scores and scores on SC exercises in 2011. Correlations between all SC exercises and the CPST and SJT are good (r = 0.373–0.482 for the CPST; r = 0.439–0.681 for the SJT). The strongest correlations referred to the two new simulations that involve enacting scenarios with a carer and a colleague (carer simulation: r = 0.469 for the CPST and r = 0.681 for the SJT; colleague simulation: r = 0.482 for the CPST and r = 0.558 for the SJT).

Does the SJT offer any advantage over the CPST?

Correlating scores on the SJT and the CPST separately with scores on each SC exercise revealed the following results. Scores on both tests achieved similar correlations with scores on the patient simulation exercise (SJT: r = 0.466; CPST: r = 0.463). For all other exercises, the SJT achieved stronger correlations than the CPST.

Figure 2 shows the scatterplots and regression lines for correlations between scores on the CPST and SJT, and SC scores. Scores on both written tests demonstrate a positive relationship with SC scores. Linear regression analysis (Table 2) shows that the SJT was a better independent predictor of SC performance than the CPST (R2 = 0.51 for the SJT; R2 = 0.35 for the CPST). Multiple regression showed that the SJT and CPST together accounted for 57% of the variance seen in SC scores (R2 = 0.57), which suggests that the SJT explains an additional 22% of the variance above the CPST and demonstrates a degree of incremental validity.
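For a single predictor, R2 is simply the squared Pearson correlation, so the regression results can be cross-checked against the correlations reported for the mean SC score (r = 0.598 for the CPST, r = 0.717 for the SJT):

```python
# For simple linear regression, R^2 equals the squared Pearson correlation.
r_cpst, r_sjt = 0.598, 0.717       # reported correlations with mean SC score
print(round(r_cpst ** 2, 2))       # -> 0.36, consistent with the reported R2 = 0.35
print(round(r_sjt ** 2, 2))        # -> 0.51, matching the reported R2 = 0.51
```

The small discrepancy for the CPST (0.36 versus 0.35) is within rounding of the published figures.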

Have correlations between CPST and SJT scores and scores on SC exercises improved over the years?

Table 3 shows the correlations between total shortlisting scores and scores on SC exercises from 2009 to 2011. The results show a stronger association between the combined scores of the CPST and SJT and SC performance, with Pearson correlation coefficients of 0.749 in 2011, and 0.612 and 0.624 in 2010 and 2009, respectively.

DISCUSSION

This study aimed to explore the association between candidate performance on written knowledge and judgement tests, and performance in workplace-based simulation exercises in an SC. It set out to describe the ability of the SJT and the CPST to predict SC performance, and to determine whether the SJT offered incremental validity over the CPST.

The findings show that performance on the CPST and SJT is strongly correlated with performance in workplace-based simulations in the SC. The data also suggest that the CPST and SJT have predictive validity for SC performance, explaining 57% of the variability in SC scores. The SJT is a better predictor of SC performance than the CPST. However, the combined score from the two tests shows the strongest correlation and predictive ability.

Clinical problem-solving test

A 25-year-old woman has a muco-purulent discharge, pelvic pain, cervicitis and urethritis. Which is the SINGLE most likely cause of her symptoms? Choose one option only.

(a) Bacterial vaginosis
(b) Candida albicans
(c) Chlamydia trachomatis
(d) Herpes simplex
(e) Trichomoniasis

Situational judgement test

You are a second-year foundation doctor working in general practice. At the baby clinic, the nurse gives you a syringe with fluid already drawn up containing an immunisation (MMR) to give to a baby. After the parent and child have gone, you realise that the syringe contained only diluent; the ampoule of active powder is intact. Rank in order the following actions in response to this situation.

(a) Contact the parent immediately and explain what has happened
(b) Inform the practice manager of the nurse's mistake
(c) Fill in a critical incident form
(d) Send a further appointment without delay
(e) Take no action

Figure 1 Sample questions on the clinical problem-solving test and the situational judgement test. Further examples can be found on the website of the National Recruitment Office (http://www.gprecruitment.org.uk)


The findings provide positive evidence for the continued use of the CPST and SJT in selection for general practice training. There have been considerable improvements in the strength of association between these tests and SC exercises. Reported correlations between SC scores and scores on the CPST and SJT (0.30 and 0.46, respectively) in the 2006 selection year14 increased to 0.598 and 0.717, respectively, in the 2011 selection data used in this study. These improvements may reflect the addition of new simulation exercises, differences in the characteristics of participating candidates, or ongoing quality assurance of the selection process, which has led to improved expertise amongst question writers and SC assessors.

This study is limited in its sampling and uses a smaller sample size than that used in previous studies. The candidates all came from within one geographical region within the UK and their characteristics may differ from those of UK candidates as a whole, limiting the generalisability of these results. This study also provides little insight into the characteristics of candidates who were not shortlisted (i.e. those who performed poorly on the CPST and SJT). We have little idea of how these candidates might perform in the SC exercises and how this would affect the overall correlations. However, this study collected prospective data with no attrition for a complete cohort of candidates to the current general practice selection process and provides evidence for the validity of the CPST and SJT in selection to specialty training. This paper adds to the small body of research available on postgraduate selection practices by providing evidence for the correlation between written knowledge and judgement tests and SC

4

6

8

10

12

14M

ean

acro

ss s

elec

tion

cent

re e

xerc

ises

150 200 250 300 350CPST

95% confidence interval Regression line

Relationship between scores on the CPST and selection centre exercises

Intercept 1.90

Slope (B) 0.034 (95% confidence interval 0.026–0.042; p < 0.001)

4

6

8

10

12

14

Mea

n ac

ross

sel

ectio

n ce

ntre

exe

rcis

es

150 200 250 300SJT

95% confidence interval Regression line

Relationship between scores on the SJT and selection centre exercises

Intercept 0.50

Slope (B) 0.040 (95% confidence interval 0.033–0.046; p < 0.001)

Figure 2 Scatterplots and regression lines showing the relationship between scores on the clinical problem-solving test (CPST)and situational judgement test (SJT) and mean scores across selection centre exercises

Table 1 Correlations between scores on the clinical problem-solving test (CPST) and the situational judgement test (SJT) and selection centre exercise scores for the 2011 sample

                         CPST, r (95% CI)       SJT, r (95% CI)        CPST and SJT, r (95% CI)
Written exercise         0.373 (0.218–0.509)    0.439 (0.292–0.565)    0.461 (0.317–0.584)
Patient simulation       0.466 (0.323–0.588)    0.463 (0.319–0.586)    0.524 (0.390–0.636)
Carer simulation         0.469 (0.326–0.591)    0.681 (0.430–0.664)    0.661 (0.473–0.693)
Colleague simulation     0.482 (0.341–0.601)    0.558 (0.430–0.664)    0.594 (0.473–0.693)
Mean across exercises    0.598 (0.478–0.696)    0.717 (0.624–0.790)    0.749 (0.665–0.814)

All correlations are statistically significant at p < 0.01. 95% CI = 95% confidence interval


exercises in a high-stakes, competency-based selection process. This study also highlights the evolutionary nature of these tests and provides evidence that shows how the psychometric properties of these tests improve through a method of continual evaluation.

Patterson and colleagues correlated scores from 463 candidates for general practice training and showed correlations of 0.30 between CPST and SC scores, and 0.46 between SJT and SC scores.14 The study also correlated scores from candidates' structured application forms with SC scores (r = 0.26). The results of the present study show how these correlations have increased in strength and are considerably stronger than the correlation for the original shortlisting method of using structured application forms. In a later study that evaluated the use of the CPST and SJT for selection into core medical training,18 Patterson et al. again showed good correlation between scores on these tests and core medical training interview scores, indicating that their strength extends beyond the area of general practice. In a Department of Health funded study, Anderson et al. trialled the use of the CPST and SJT for selection into anaesthesia and acute hospital specialties.19,20 Their data, comprising 131 candidates, showed correlations with candidates' overall selection scores of 0.51 for the SJT and 0.47 for the CPST, after only 1 year of question development. As our study shows, it is likely that these correlations will improve with continued evaluation and development.

The incremental validity of the SJT over the CPST has also been shown previously by Patterson et al. in their study of data for candidates for selection in 2006 (ΔR2 = 0.143).14 The data from the present study demonstrate that the incremental validity offered by the SJT has increased, and that the SJT accounts for up to 22% of the variance in SC scores (R2 = 0.35 for the CPST; R2 = 0.57 for the CPST and SJT combined). This highlights the importance of the SJT in the selection process, particularly in the context of

Table 2 Regression analysis of scores on the clinical problem-solving test (CPST) and situational judgement test (SJT) for predicting selection centre performance

Linear regression

Predictor variable    B (95% CI)             p-value    R2
SJT                   0.040 (0.033–0.046)    < 0.01     0.51
CPST                  0.034 (0.026–0.042)    < 0.01     0.35

Multiple regression

Predictor variable    R2      R2 change    p-value
CPST                  0.35                 < 0.01
CPST and SJT          0.57    0.22         < 0.01

B = slope of the regression line; 95% CI = 95% confidence interval; dependent variable = mean score across selection centre exercises

Table 3 Correlations between total shortlisting scores and mean selection centre scores from 2009 to 2011

Selection year    Sample size, n    R-value    p-value
2011              135               0.749      0.01
2010              135               0.612      0.01
2009              168               0.624      0.01
Overall           438               0.618      0.01


its ability to predict job performance. Lievens and Patterson demonstrated this in a paper published in 2011,5 in which, based on a sample of candidates in the 2007 round of selection (n = 195), hierarchical multiple regression showed that the SJT explained 5.9% of additional variance over the CPST in the context of job performance, scored on a 24-point inventory by a trainee's educational supervisor. These studies demonstrate the potential of the SJT as a predictor of both SC and job performance.

Further investigation into the use of written knowledge and judgement tests is required to exploit their full potential. Evidence for the ability of the CPST and SJT to predict SC performance suggests that these tests could play a greater role in selection decisions (rather than just in shortlisting), thereby streamlining entry to the SC and potentially resulting in significant cost-savings. The role of SJTs could be further developed to enable testing in non-clinical domains. As the shape of general practice in the UK changes, the SJT could be moulded to assess competencies such as management skills and leadership potential. Its role in assessing these competencies could be developed beyond general practice to almost any medical specialty and could tie in with initiatives led by the Academy of Medical Royal Colleges that aim to engage clinicians in management and leadership.21 Further investigation into the predictive validity of these tests is also required. There are limitations to using supervisor ratings and examination performance as outcome measures and therefore debate on the appropriate objective measures of job performance is required.

CONCLUSIONS

The CPST and SJT play valuable roles in shortlisting during selection to specialty training and predict performance in SCs. The predictive validity and cost-savings they offer over other methods should ensure their continued use in general practice, and their expansion to other medical specialties. Further work is needed to expand the role of the SJT in selection and to explore its potential use in training.

Contributors: HA conceived the original idea for the study, collected and analysed the data, and wrote the initial and final drafts of the paper. MR contributed significantly to the conception and design of the study, and to the critical revision of the first draft and formulation of the final draft. PM contributed significantly to the design of the study and to the critical revision of each draft. All authors approved the final manuscript for submission.

Acknowledgements: we would like to thank the administrative staff at the Department of Postgraduate General Practice Education, Cardiff University, for their help with data collection.

Funding: none.

Conflicts of interest: none.

Ethical approval: ethical approval was not sought for this study. No contact was made with candidates or trainees at any time. The data used were held by the Wales Deanery following the 2011 selection round. Data were anonymised and stripped of all personal details before use.

REFERENCES

1 Prideaux D, Roberts C, Eva K, Centeno A, McCrorie P, McManus C, Patterson F, Powis D, Tekian A, Wilkinson D. Assessment for selection for the health care professions and specialty training: consensus statement and recommendations from the Ottawa 2010 conference. Med Teach 2011;33:215–23.

2 Patterson F, Ferguson E, Lane P, Farrell K, Martlew J, Wells A. A competency model for general practice: implications for selection, training, and development. Br J Gen Pract 2000;50:188–93.

3 Schmidt FL, Hunter JE. The validity and utility of selection methods in personnel psychology: practical and theoretical implications of 85 years of research findings. Psychol Bull 1998;124:262–74.

4 Patterson F, Ferguson E, Norfolk T, Lane P. A new selection system to recruit general practice registrars: preliminary findings from a validation study. BMJ 2005;330:711–4.

5 Lievens F, Patterson F. The validity and incremental validity of knowledge tests, low-fidelity simulations, and high-fidelity simulations for predicting job performance in advanced-level high-stakes selection. J Appl Psychol 2011;96(5):927–40.

6 Ahmed H, Rhydderch M, Matthews P. Do general practice selection scores predict success at MRCGP? An exploratory study. Educ Prim Care 2012;23:95–100.

7 Davison I, Burke S, Bedward J, Kelly S. Do selection scores for general practice registrars correlate with end of training assessments? Educ Prim Care 2006;17:473–8.

8 Hofmeister H, Lockyer J, Crutcher R. The multiple mini-interview for selection of international medical graduates into family medicine residency education. Med Educ 2009;43:573–9.

9 Quintero A, Segal S, King T, Black K. The personal interview, assessing the potential for personality similarity to bias the selection of orthopaedic residents. Acad Med 2009;84:1364–72.

10 Thordarson D, Ebramzadeh E, Sangiorgio S, Schall S, Patzakis M. Resident selection: how are we doing and why? Clin Orthop Relat Res 2007;459:255–9.


11 Plint S, Patterson F. Identifying critical success factors for designing selection processes into postgraduate specialty training: the case of UK general practice. Postgrad Med J 2010;86:323–7.

12 Irish B, Patterson F. Selecting general practice specialty trainees: where next? Br J Gen Pract 2010;60:849–52.

13 National Recruitment Office for General Practice Training. http://www.gprecruitment.org.uk. [Accessed 7 February 2012.]

14 Patterson F, Baron H, Carr V, Plint S, Lane P. Evaluation of three shortlisting methodologies for selection into postgraduate training in general practice. Med Educ 2009;43:50–7.

15 Lievens F. Situational judgement tests: introduction and research review. Rev Psicol Trab Organ 2007;23:93–110.

16 Lievens F, Coetsier P. Situational tests in student selection: an examination of predictive validity, adverse impact, and construct validity. Int J Select Assess 2002;10:245–57.

17 Patterson F, Ferguson E, Thomas S. Using job analysis to identify core and specific competencies: implications for selection and recruitment. Med Educ 2008;42:1195–204.

18 Patterson F, Carr V, Zibarras L, Burr B, Berkin L, Plint S, Irish B, Gregory S. New machine-marked tests for selection into core medical training: evidence from two validation studies. Clin Med 2009;9:417–20.

19 Anderson I, Gale T, Roberts M et al. A machine-marked test for recruitment to anaesthesia and acute care common stem core training programmes in the southwest of England. Br J Anaesth 2010;104(4):528.

20 Crossingham G, Gale T, Roberts M, Carr A, Langton J, Anderson I. Content validity of a clinical problem solving test for use in recruitment to the acute specialties. Clin Med 2011;11:23–35.

21 Academy of Medical Royal Colleges and NHS Institute for Innovation and Improvement. Medical leadership competency framework: enhancing engagement in medical leadership. 2009. www.aomrc.org.uk/.../132-medical-leadership-competency-framework.html. [Accessed 7 February 2012.]

Received 14 September 2011; editorial comments to authors 2 December 2011, 30 March 2012; accepted for publication 12 April 2012
