Innovations

Audience Assessment of Bias in Continuing Medical Education Programs

DAVID PRICE, MD; CAROL HAVENS, MD; PHILIP BELLMAN, MPH

Context and Setting

The Accreditation Council for Continuing Medical Education requires continuing medical education (CME) programs to be free of commercial bias, and patient-care recommendations to be based on the highest-quality available evidence. Concern has been expressed in both the lay press and the medical literature about commercial influence and imbalance in CME programming.

Why We Undertook This Initiative

Several investigators are developing tools to help mitigate bias in CME programs. Many CME providers ask audiences about perceived bias as part of their program-monitoring processes; it is uncertain, however, whether audiences can detect bias reliably or consistently. The lack of a common definition of bias makes it difficult to compare bias assessments between programs or CME providers. Additionally, personal bias can be present in CME programs even in the absence of industry program support or industry-speaker relationships. We are developing a tool that asks CME participants to rate specific aspects of CME programs, with the aim of creating a more objective and consistent assessment of bias and balance within and across CME programs.

What We Did

We pilot tested a 9-item questionnaire assessing the potential for influence or imbalance in CME presentations (see the sketch following the list):

1. Did the speaker use only generic names of products on slides and verbally?

2. If brand names were used, did generic names appear on the same slide?

3. Did the speaker make any patient-care recommendations without citing evidence?

4. Did the speaker make any patient-care recommendations that you know to be inconsistent with the best evidence available?

5. If the speaker discussed studies that support a specific product, did he/she also discuss studies presenting different conclusions about the product?

6. Did the speaker include harms as well as benefits when discussing studies or specific products?

7. Did the speaker’s presentation include benefits without harms of some products while including only harms (or benefits and harms) of other products?

8. If the speaker discussed multiple studies, did he/she include both weaknesses and strengths of each study?

9. Did any corporate names or logos appear on the speaker’s slides or handouts?
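As a minimal sketch only, the nine items above could be encoded as a simple data structure carrying, for each item, the answer that would suggest possible bias. The item wording is taken from the questionnaire, but the structure, field names, and scoring convention below are illustrative assumptions, not part of the published instrument or the authors' analysis.

    # Illustrative sketch only: the item wording comes from the questionnaire
    # above, but the data structure, field names, and the choice of which
    # answer flags possible bias are assumptions, not part of the published
    # instrument or the authors' analysis.
    from dataclasses import dataclass

    @dataclass
    class Item:
        number: int
        text: str
        bias_answer: str  # assumed response ("yes" or "no") that would flag possible bias

    ITEMS = [
        Item(1, "Did the speaker use only generic names of products on slides and verbally?", "no"),
        Item(2, "If brand names were used, did generic names appear on the same slide?", "no"),
        Item(3, "Did the speaker make any patient-care recommendations without citing evidence?", "yes"),
        Item(4, "Did the speaker make any patient-care recommendations that you know to be "
                "inconsistent with the best evidence available?", "yes"),
        Item(5, "If the speaker discussed studies that support a specific product, did he/she also "
                "discuss studies presenting different conclusions about the product?", "no"),
        Item(6, "Did the speaker include harms as well as benefits when discussing studies or "
                "specific products?", "no"),
        Item(7, "Did the speaker's presentation include benefits without harms of some products "
                "while including only harms (or benefits and harms) of other products?", "yes"),
        Item(8, "If the speaker discussed multiple studies, did he/she include both weaknesses "
                "and strengths of each study?", "no"),
        Item(9, "Did any corporate names or logos appear on the speaker's slides or handouts?", "yes"),
    ]

    def flag_possible_bias(responses: dict) -> list:
        """Return the item numbers whose response matches the assumed bias-indicating answer.

        `responses` maps item number to "yes" or "no".
        """
        return [item.number for item in ITEMS
                if responses.get(item.number) == item.bias_answer]

Under this assumed convention, a single attendee's form could be summarized as the count of flagged items.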

The questionnaire was tested at 7 sessions during 4 different 2007 Kaiser Permanente national CME conferences, covering primary care, medical, and surgical specialty topics. Three of the 7 faculty disclosed commercial relationships. Roughly half of the attendees received this questionnaire; the other half received an evaluation with our traditional question: “This presentation was free of commercial bias” (5-point Likert scale, strongly agree to strongly disagree).

What We Learned

Of those returning evaluations with the traditional question, 163 of 183 agreed or strongly agreed that the presentation was free of commercial bias; 10 were neutral, 0 disagreed, and 10 did not answer the question. Of the test questionnaires handed out, 221 were returned. Every session had at least some questions answered in a manner that would indicate possible bias; percentages for individual questions ranged from 0% to 36%. The averages for the 9 questions over the 7 sessions ranged from 3.2% to 14.0%. Our findings support our suspicion that a single question may not be sufficient to detect bias; more detailed questions may help identify some aspects of bias or incomplete presentation of evidence. Our further work will focus on narrowing the number of questions, assessing inter- and intrarater reliability, and comparing audience answers to the responses of trained expert observers.
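As a rough illustration of the arithmetic behind these figures, under one plausible reading of the results (a per-question percentage of bias-indicating answers at each session, with each question then averaged over the 7 sessions), the tabulation might look like the following sketch. The function names and example numbers are hypothetical and are not the authors' analysis code or data.

    # Illustrative arithmetic only; the counts and numbers below are
    # hypothetical, not data from the study.
    def percent_indicating_bias(flagged: int, returned: int) -> float:
        """Percentage of returned questionnaires whose answer to one question
        suggests possible bias at a single session."""
        return 100.0 * flagged / returned if returned else 0.0

    def per_question_averages(per_session: dict) -> dict:
        """Average each question's per-session percentages over all sessions,
        e.g. 9 questions x 7 sessions -> 9 per-question averages."""
        return {q: sum(vals) / len(vals) for q, vals in per_session.items()}

    # Hypothetical example: one question flagged on 5 of 30 returned forms
    # at one session -> about 16.7%.
    print(round(percent_indicating_bias(5, 30), 1))  # 16.7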

Disclosures: The authors report none.

Dr. Price: Director of Education and Clinician Researcher, Colorado Permanente Medical Group, Denver, CO, and Professor of Family Medicine, University of Colorado at Denver Health Sciences Center, Denver, CO; Dr. Havens: Director of Clinical Education, The Permanente Medical Group, Oakland, CA; Mr. Bellman: Practice Leader, The Permanente Medical Group, Oakland, CA.

Correspondence: David W. Price, Colorado Permanente Medical Group & The University of Colorado Denver Health Sciences Center, 10065 E. Harvard Avenue, Suite 300, Denver, CO 80231; e-mail: David.Price@ucdenver.edu.

© 2009 The Alliance for Continuing Medical Education, the Society for Academic Continuing Medical Education, and the Council on CME, Association for Hospital Medical Education. • Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/chp.20011

JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, 29(1):76, 2009