
Page 1

Clinical Skills Verification Rater Training MODULE 2

Training Faculty Evaluators of Clinical Skills: Drivers of Change in Assessment

Joan Anzia, M.D.
Tony Rostain, M.D.

Page 2

Outline

• Mini pre-test!
• Brief history of assessment in medical education
• Drivers of change in assessment in medical education
• Miller’s pyramid
• Why is faculty training necessary?
• Methods to train faculty to evaluate clinical skills
• Post-test

Page 3

Module 2 Pre-Test

1. A clinical skills exam of a trainee assesses whether he or she “knows how” according to Miller’s Pyramid.

a. True

b. False

Page 4

Module 2 Pre-Test

2. Faculty evaluators in a group are preparing their individual evaluation scores for a videotaped trainee clinical skills exam, and comparing their scores with the scores of “expert” raters. This activity is called:

a. Behavioral Observation Training
b. Frame of Reference Training
c. Direct Observation of Competence Training
d. Performance Dimension Training

Page 5

Brief history of assessment in medical education

• Through the 1950s: knowledge evaluated through essays and open-ended questions graded by faculty. Clinical skill and judgment tested with live oral examinations, sometimes after bedside data-gathering by the examinee.

• 1960s: multiple-choice exams to test knowledge base

Page 6

Clinical Skills Exams vs. Multiple Choice Question Exams

Page 7

New technologies come on the scene

• Introduction of computers in the 1980s enabled large-scale testing using MCQs that are machine-scanned and scored.

• Computers also allow the assessment of clinical decision-making through use of interactive item formats.

• Advances in psychometrics allow shorter tests, reduction of bias, and identification of error sources.

Page 8

Since the 1980s

• OSCEs (Objective Structured Clinical Exams) have been fine-tuned with improved psychometric qualities.

• Assessment of clinical skills and performance has lagged behind – faculty are inexperienced, don’t share common standards, and have not been trained to apply them consistently.

Page 9

Drivers of change in medical education

• Outcomes-based education: a focus on the “end product” rather than the process. What should a psychiatrist “look like” at the end of training?

• National initiatives in accountability, patient safety and quality assurance: maintaining the public trust in the medical profession and improving the quality of healthcare.

Page 10

Levels of assessment: Miller’s Pyramid

Page 11

Miller’s Pyramid

• Knows: what a trainee “knows” in an area of competence. MCQ-based exam.

• Knows how: does the trainee know how to use the knowledge (acquire data, analyze and interpret findings)? An interactive reasoning exam.

• Shows how: can the trainee deliver a competent performance of the skill with a patient? Clinical skills exams.

• Does: does the clinician routinely perform at a competent level outside of a controlled testing environment? Performance-in-practice assessment, critical incident systems.

Page 12

Why is faculty training necessary?

• Assessment methods based on observation are only as good as the individuals using them.

Holmboe and Hawkins, 2008

• Faculty sometimes don’t possess sufficient knowledge, skills and attitudes in particular competencies.

• Competencies evolve over time, and faculty may not have been trained in specific competencies.

Page 13

How do we train evaluators? Empirically studied training methods:

• Behavioral Observation Training (BOT)
• Performance Dimension Training (PDT)
• Frame of Reference Training (FoRT)
• Direct Observation of Competence Training

Page 14

Behavioral Observation Training

• Encourage faculty to increase the number of observations of their trainees.

• Provide an observational aid that raters can use to record observations (a “behavioral diary”; see the sketch below).

• Help faculty members learn how to prepare for an observation (determining goals, evaluator position, etc.).
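To make the “behavioral diary” concrete, here is a minimal sketch of what one recorded observation might look like as a data structure. The field names and the example entry are hypothetical illustrations, not a standard instrument.

```python
# Minimal sketch of a "behavioral diary" entry an evaluator might keep
# during direct observation. Fields are hypothetical examples.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DiaryEntry:
    trainee: str
    timestamp: datetime
    behavior_observed: str   # one concrete, specific behavior
    competency_domain: str   # e.g., "data gathering", "rapport"
    notes: str = ""

diary: list[DiaryEntry] = []
diary.append(DiaryEntry(
    trainee="PGY-2 resident",
    timestamp=datetime(2024, 3, 5, 10, 30),
    behavior_observed="Opened the interview with open-ended questions",
    competency_domain="data gathering",
))
```

Recording specific, time-stamped behaviors (rather than global impressions) is the point of the diary: it anchors later ratings in what was actually observed.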

Page 15

Performance Dimension Training

• Designed to familiarize faculty with the appropriate performance dimensions used in the evaluation system.

• It is a critical element of all rater training programs: the goal is to define the criteria for each dimension of performance.

• Faculty interact to further define the criteria (what constitutes “superior performance,” etc.) and work towards consensus on the framework and specific criteria; a sketch of such a rubric follows below.
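As an illustration of “defining criteria for each dimension,” here is a minimal sketch of a rubric represented as a data structure. The dimensions and anchor wordings are invented examples, not official CSV criteria; a real program would settle them by faculty consensus.

```python
# Minimal sketch of a performance-dimension rubric.
# Dimensions and anchors are hypothetical examples only.
rubric = {
    "Data gathering": {
        "superior": "Elicits a complete history, including sensitive "
                    "topics, with natural follow-up questions.",
        "satisfactory": "Covers the essential history with occasional "
                        "missed cues.",
        "marginal": "Omits major areas of the history or pursues few "
                    "follow-up questions.",
    },
    "Doctor-patient relationship": {
        "superior": "Consistently responds to patient affect and builds "
                    "rapport.",
        "satisfactory": "Generally respectful; responds to some emotional "
                        "cues.",
        "marginal": "Misses or dismisses emotional cues; interview feels "
                    "interrogative.",
    },
}

# Print the rubric so the group can review and revise each anchor.
for dimension, anchors in rubric.items():
    print(dimension)
    for level, description in anchors.items():
        print(f"  {level}: {description}")
```

Writing the anchors out explicitly makes it easy for the group to see, debate, and revise each criterion during the consensus discussion.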

Page 16

Frame of Reference Training

• Performance Dimension Training must be completed first.

• FoRT targets accuracy in rating: the goal is to achieve consistency across raters.

• Minimal criteria for satisfactory performance are defined first, then marginal criteria.

• Faculty are given clinical vignettes describing performance in different ranges.

Page 17

Frame of Reference Training (cont.)

• Faculty use vignettes to provide ratings.

• Trainer provides feedback on what the “true” ratings should be, with an explanation for each rating.

• Discussion of discrepancies between faculty ratings and the trainer’s “true” ratings.

• Repeated practice: “calibration” (sketched below).
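The calibration step can be made concrete with a small computation: compare each faculty member’s vignette ratings to the trainer’s “true” ratings and summarize the discrepancy. The sketch below assumes a hypothetical 1-5 rating scale; the vignettes and rater names are invented for illustration.

```python
# Minimal sketch of the FoRT "calibration" step: compare each faculty
# member's vignette ratings against the trainer's "true" ratings and
# report the mean absolute discrepancy per rater.
true_ratings = {"vignette_1": 4, "vignette_2": 2, "vignette_3": 5}

faculty_ratings = {
    "Rater A": {"vignette_1": 4, "vignette_2": 3, "vignette_3": 5},
    "Rater B": {"vignette_1": 2, "vignette_2": 2, "vignette_3": 3},
}

def mean_absolute_discrepancy(ratings: dict[str, int],
                              reference: dict[str, int]) -> float:
    """Average distance between a rater's scores and the 'true' scores."""
    diffs = [abs(ratings[v] - reference[v]) for v in reference]
    return sum(diffs) / len(diffs)

for rater, ratings in faculty_ratings.items():
    mad = mean_absolute_discrepancy(ratings, true_ratings)
    # A large discrepancy flags a rater who needs more calibration practice.
    print(f"{rater}: mean discrepancy = {mad:.2f}")
```

A rater whose mean discrepancy stays high after feedback and discussion would repeat the exercise with new vignettes; that iterative tightening is the “calibration.”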

Page 18

Module 2 Post-Test

1. A clinical skills exam of a trainee assesses whether he or she “knows how” according to Miller’s Pyramid.

a. True

b. False

Page 19

Module 2 Post-Test

1. A clinical skills exam of a trainee assesses whether he or she “knows how” according to Miller’s Pyramid.

b. False

A CSV exam assesses whether a resident can “show how.”

Page 20

Module 2Post-Test

2. Faculty evaluators in a group are preparing their individual evaluation scores for a videotaped trainee clinical skills exam, and comparing their scores with the scores of “expert” raters. This activity is called:

a. Behavioral Observation Training
b. Frame of Reference Training
c. Direct Observation of Competence Training
d. Performance Dimension Training

Page 21

Module 2 Post-Test

2. Faculty evaluators in a group are preparing their individual evaluation scores for a videotaped trainee clinical skills exam, and comparing their scores with the scores of “expert” raters. This activity is called:

b. Frame of Reference Training