CAEP GEMeS | Does a good learner bias your impression of others?


by Alexandra Stefan

Dr. Singh looked up from his charting and saw two learners holding their daily evaluation cards expectantly. One was a senior medical student visiting on elective. Dr. Singh thought that she had performed very well and worked independently. The other was a junior clerk who had just started his rotations. Dr. Singh found working with this learner quite difficult, since it seemed he had to stop and explain everything, from writing notes to developing a list of differential diagnoses. Frankly, compared with the other learner, his performance was poor. Dr. Singh wondered: did a good learner bias his impression of the other?

If you work in a setting where multiple learners are assessed side by side, this situation probably resonates with you. It is important to bear in mind the effect that one trainee’s performance can have on your assessment of the learner in front of you. This “Great Evidence in Medical education Summary” (GEMeS – pronounced “gems”) was originally posted by the CAEP EWG GEMeS Team on October 17, 2014 and answers the question: Does prior exposure to a ‘good’ versus ‘poor’ trainee performance change an attending’s assessment of subsequent trainee performance? A PDF version is available here: 2014-10 GEMeS.

Educational Question or Problem:

Does prior exposure to ‘good’ versus ‘poor’ trainee performance bias attending physicians in their assessment of subsequent trainee performance?

Bottom Line

Yes. Attending physicians who were primed by viewing a ‘good’ performance gave lower scores and more failing grades to a subsequent borderline performance by a different intern on a clinical evaluation exercise than those who were primed with a ‘poor’ performance. The ability to judge performance was significantly affected by contrast bias, even in experienced raters.

Details: Bias of ‘Good’ vs. ‘Poor’ Trainee Performance on Assessment
Reference
Yeates P, O’Neill P, Mann K, Eva KW. Effect of exposure to good vs poor medical trainee performance on attending physician ratings of subsequent performances. JAMA. 2012 Dec 5;308(21):2226-32. PMID: 23212500
Study Design
Randomized experimental design with participants blinded to the study hypothesis
Funding sources
Association for the Study of Medical Education
Setting
Attending physicians in Internal Medicine (and subspecialties) and Emergency Medicine were recruited from teaching hospitals in the United Kingdom.
Level of Learning
Postgraduate year 1 (PGY-1) medical trainees’ performances were assessed.

Why is it relevant to Emergency Medicine Education?

As clinical teachers in the emergency department, we are exposed to many learners from different specialties and with varying skill levels. This article draws attention to an additional, hidden source of assessor bias in the evaluation of trainees’ clinical performance. Interestingly, the assessors’ extensive experience did not protect against contrast bias. This bias is particularly relevant in the current context of competency-based medical education, a model that relies on the assessor’s ability to compare performance against a set standard of competence.

Synopsis of Study

To investigate the presence of contrast (relativity) bias, the investigators randomized 41 attending physicians, with a mean of ten years of consulting experience, to view scripted, videotaped clinical evaluation exercises by PGY-1 trainees showing either good or poor performance. After exposure to the randomly assigned priming condition, assessors were asked to rate scripted borderline-quality performances on a Likert scale, using the Mini Clinical Evaluation Exercise (Mini-CEX) assessment tool. All cases and scripts had been previously validated.

The primary outcome was the score participants assigned to the scripted borderline performances after exposure to the study intervention. The borderline performances received significantly lower scores from physicians who had viewed good prior performances (mean score 2.7 vs. 3.4 on a 6-point Likert scale). The borderline performances were also more likely to be characterized as failing by those who had been exposed to good rather than poor performances (55% vs. 24%, p<0.001) and less likely to be characterized as passing (8.3% vs. 39.5%, p<0.001).

Multiple linear regression showed that the priming scenario and the stringency index (a measure of each participant’s relative stringency or leniency compared with the peer group) were independently associated with the primary outcome.

The findings support the view that assessors’ ratings are significantly influenced by prior experience, and thus draw attention to a potential threat to the validity of assessments. The educational implications of this phenomenon require further study. The findings also highlight one of the cognitive processes that influence raters’ assessments, a field that merits further exploration.

Have you had similar experiences? Feel free to share in the comments section below!


More About the CAEP GEMeS

This post was originally authored for the Canadian Association of Emergency Physicians (CAEP) Great Evidence in Medical Education Summaries (GEMeS) project sponsored by the CAEP Academic Section’s Education Working Group and edited by Drs. Teresa Chan and Julien Poitras. CAEP members receive GEMeS each month in the CAEP Communiqué. CanadiEM will be reposting some of these summaries, along with a case or contextualizing concept, to highlight some recent medical education literature that is relevant to our nation’s teachers.

Teresa Chan

Senior Editor at CanadiEM
Emergency Physician. Medical Educator. #FOAMed Supporter, Producer and Researcher. Chief Strategy Officer of CanadiEM. Associate Professor, Division of Emergency Medicine, Department of Medicine, McMaster University.

Alexandra Stefan

Alexandra is a clinician-teacher and an assistant professor in the Division of Emergency Medicine at the University of Toronto. Her academic projects include the development of procedure videos for the New England Journal of Medicine “Videos in Clinical Medicine” series.