Failure to Fail Part 2: Types of Evaluation Bias

In Education & Quality Improvement by Nadim Lalani1 Comment

We know that the best way to evaluate learners is by directly observing what they do in the workplace. Unfortunately, for a variety of reasons, we do not do enough of this. In my last post, I described some of the reasons why we sometimes fail to make appropriate judgments about failing learners.

When it comes to providing feedback, there is much room for improvement. We know that feedback can be influenced by the source, the recipient, and the message. What most people don't know is that, when you evaluate a learner, you could be unwittingly introducing bias, just as we do when making diagnoses.

Types of Evaluation Bias:

The Halo Effect: If a learner really excels in one area, this may positively influence their evaluation in other areas. For example, a resident quickly and successfully intubates a patient in respiratory arrest. The evaluator is so impressed that she minimizes the resident's deficiencies in knowledge, punctuality, and ED efficiency.

Central Tendency Bias: It is a human tendency NOT to give extreme answers. Recall from part 1 [link] that faculty tend to overestimate learners' skills and, furthermore, tend to pass them on rather than fail them. This may explain why learners are all ranked "above average".

The Hawk-Dove Effect: Some faculty are inherently stricter (aka "hawks") while others are more lenient ("doves"). Research on the demographics that predispose evaluators to either tendency is inconclusive; one study suggested that ethnicity and experience may be correlated with hawkishness.

Recency Bias: Recency bias is most relevant to end-of-year evaluations. Although we all have highs and lows, recent performance tends to overshadow remote performance.

The Contrast Effect: We tend to contrast learners with one another. After recently working a shift with an exemplary learner, an evaluator may be unfairly harsh toward a less capable learner on the next shift.

Personal Bias: Evaluators can be biased by their own mental "filters", which develop from their experiences and background. They may therefore form subjective opinions based on first impressions (the primacy effect), bad impressions (the horn effect, the opposite of the halo effect: a brilliant but shabbily dressed trainee may be underappreciated), and the "similar-to-me" effect (faculty are more favourable toward trainees they perceive to be similar to themselves).

Opportunity for EM Faculty

In part 1, you saw how we as faculty are partly to blame for evaluation failure. Above, I have illustrated how we also introduce bias when we evaluate learners. Now for the good news …

Faculty need to ascertain how well a trainee provides high-quality patient care. We can do this by directly observing trainees (thankfully, the ED provides a ripe environment for doing this) and synthesizing those observations into an evaluation (stay tuned for part 3 of the Failure to Fail series for how I go about doing this).

During a shift in the ED, multiple domains of competency can be assessed through observation, including physical examination, procedural skills, and written communication. In fact, ALL of the CanMEDS competencies can be assessed in real time. Additionally, we at the U of S already use some best practices: providing daily feedback using a validated evaluation tool, having learners collect multiple assessments from several faculty members during their rotation, and providing attendings with 360-degree evaluations (with feedback from ancillary staff, coworkers, patients, and learners) to help them improve.

Summary:

Hopefully you now know a bit more about why we as medical faculty fail at failing learners and how evaluation bias plays into this. I have tried to show that the ED provides a perfect learning lab for direct observation and feedback. In the next post, I will prescribe my personal formula for successful trainee assessment. Comments, please!

Note: This post was originally published on the ERMentor blog in February of 2014. It was reposted on the CanadiEM blog after copyediting by Stephanie Zhou and Rob Carey on July 14, 2016.

References:

2. Team M. Performance Appraisal Biases. MSG: Management Study Guide. http://managementstudyguide.com/performance-appraisal-bias.htm. Published 2008.

Nadim is an emergency physician at the South Health Campus in Calgary, Alberta. He is passionate about online learning and recently made a transition into human performance coaching. He is currently working on introducing the coaching model into medical education.