CSR Quick Feedback Pilot
Mary Ann Guadagno, PhD
Senior Scientific Review Officer, CSR Office of the Director
Pilot Objective
• To collect feedback on CSR peer review via a survey.
• Evaluate the utility of asking reviewers in chartered study sections about their assessments of the meeting experience:
  – Quality of Prioritization
  – Collective Expertise
  – Assignment of Applications to Reviewers
  – Quality of Discussion
Pilot Scope
• Two CSR Integrated Review Groups (IRGs)
  – Genes, Genomes, and Genetics (GGG) – Dr. Richard Panniers
  – Brain Disorders and Clinical Neuroscience (BDCN) – Dr. Samuel Edwards
• 18 CSR Study Sections (January – March 2014)
• Very short questionnaire – 4 agreement statements and 1 open-ended question, answerable in about 5 minutes
• Delivered via email
• Completed near the end of the study section meeting
Agreement Statements and Comments – Online
• S1 – The panel was able to prioritize applications according to their impact/scientific merit.
• S2 – The roster of reviewers was an appropriate assembly of scientific expertise for the set of applications in the meeting.
• S3 – Assignment of applications to reviewers made appropriate use of their broad expertise.
• S4 – The nature of the scientific discussions supported the ability of the panel to evaluate the applications being reviewed.
• General Comments – In addition to the answers you provided in this questionnaire, please add any other comments in the text box below.
Overall Feedback was Favorable
Percent Strongly Agree/Agree by statement:
• Quality of Prioritization – 87.6% (n=249)
• Collective Expertise – 89.1% (n=248)
• Assignments – 87.1% (n=248)
• Quality of Discussion – 87.5% (n=248)
Verbatim Comments from Reviewers
– CSR panels are generally high quality.
– Clear commitment of all reviewers to fairly review applications.
– Video review once a year is a great idea.
– Assignments are balanced and appropriate.
– Differing score calibration by reviewers is a problem.
– Scoring is uneven among reviewers. Still have score inflation.
– Should separate overall scientific impact rating from technical merit.
– IAM was difficult to move back and forth between so many discussions.
What Did We Learn?
• Identification of reviewer likes and concerns
  – Some SRGs and some practices received constructive feedback.
• Strengths and limitations of the methodology
  – Technical issues – email, survey software, compliance, ease of analysis.
• Input for future surveys – next steps
  – Platform evaluation
  – Input from program observers
  – Change over time
Acknowledgements
• Charles Dumais
• George Chacko
• Mei-Ching Chen
• Paul Kennedy
• Amanda Manning
• Adrian Vancea
• Richard Panniers and GGG SROs
• Samuel Edwards and BDCN SROs
• Michael Micklin