How Can EM Faculty Be Better Evaluators?

In Education & Quality Improvement by Nadim Lalani

[Image: failing grade, via http://www.fitsnews.com]

One of my colleagues – Dr Van De Kamp – gave us a talk on how we can improve our evaluations of learners. [I have taken her talk and added some of my own reflections and literature.] As Duff et al illustrated in 2003:

“Giving the benefit of the doubt has consequences for future mentors and students, and may ultimately have professional consequences”

This talk was quite topical, as a recent publication in the New York Times [read here] highlighted how we as a medical community seem to continually pass problem learners. [Nursing also seems to be afflicted with the same blight.] As one colleague recently remarked:

“The only thing harder than getting into medical school is getting out!”

Schaana, who collates all the learner evaluations, lamented what she observed as a “leniency bias” in them …

The vast majority of evaluation forms rate learners as “exceeding expectations” when, in reality, it’s IMPOSSIBLE for ALL these learners to be excellent!

Why do we do this? What is so hard about grading students?

  • Our fellow high school, undergraduate and postgraduate educators don’t seem to have a problem with failing students!
  • As many as 30% of PhD candidates fail.
  • Interestingly, we don’t seem to have the same problem with evaluating International Medical Grads [article].

Why our feedback fails:

There is no single reason why faculty aren’t very good at evaluation, and most of the factors I have listed below aren’t mutually exclusive.

Why Leniency Bias?

Woodward et al [Pubmed Link] suggest that there exists a leniency bias whereby evaluators inflate the ratings of students. Bass [in an ancient article – link] suggests eight reasons why we’re so lenient [I have highlighted the ones that seem valid to me]:

  1. Rating a learner under our jurisdiction poorly may reflect on our own unworthiness.
  2. Assuming that the real under-performers should have failed already.
  3. Fearing interpersonal discord from giving a poor evaluation.
  4. Trying to pass a learner on in order to influence them in the future.
  5. Projecting.
  6. Feeling the need to approve of others as a way of feeling self-approval.
  7. Operating on the basis that “he who associates with me is meritorious, therefore I too am meritorious.”
  8. Existing in a culture of approval.

There’s little doubt that leniency bias exists, and its roots may be multifactorial and difficult to get at. But one of the tenets of curing a disease is first identifying it.

The Feedback Form May be Flawed

  • Thompson et al in 1990 [PubMed Link] suggested that the problem might be with the actual evaluation forms. In the last three years, we’ve modified ours twice.
  • However, Bandiera and Lendrum show that, even when we create a better daily evaluation card, leniency bias still creeps in [Link].

Despite these drawbacks, one should never be afraid of modifying and re-modifying the evaluation tool – because, in truth, the data on the evaluation form needs to reflect the outcome that you are trying to assess.

The Quality and Timing of the Evaluation

  • We seldom take the time to actually observe a history being taken, a physical exam being performed or discharge instructions being given [in fact, we may inadvertently hijack the latter].
  • In the ER, evaluations usually occur at the end of a busy shift when one is rushing to go pick up the kids – this also sets us up to fail. One has to set aside time for a proper evaluation.
  • Furthermore, we know that instructor presence positively influences student evaluations of the instructor – so if the learner is sitting in front of you, are you more likely to be lenient? I think so.

Having a learner on shift comes with responsibility. You have an apprentice who needs observation, guidance and feedback, and you have to change the way you approach the shift. [Refer to my previous blogs about teaching in a busy ED and assessing the learner.] I cannot stress enough the importance of direct observation.

The “Halo Effect”

  • Thompson et al also refer to a “halo effect” – allowing the general perception of the learner to bias the evaluation of specific competencies, e.g. “I like Bob, so I am more likely to overlook his below-average suturing skills.”

I would argue that if you really like Bob, then for his own benefit you need to highlight his inadequacies.

Lack of Self-Efficacy:

  • Most EM docs are just that – EM docs! That is, many perceive that they are clinicians and not educators. This lack of self-belief [in one’s ability to effectively evaluate] leads to leniency.

Here in Saskatoon we have tried to address this through Faculty Development sessions [where this topic was discussed].

The “Isolated Event” Hypothesis

  • Many of us only get one shift with a specific learner, so we may discount our ability to grade that learner objectively – after all, what if the student is just having a bad day?

Enter the Daily Encounter Card. We need to stress to our faculty that they are providing formative feedback for that shift only. Faculty need to feel empowered to “be the bad guy” and fail the student on a specific role … or even fail that particular shift.

Additionally, scheduling a faculty member and a learner together for a series of shifts may help identify weaknesses.

Lack of support, engagement and coordination.

  • Most EM clinicians are “community faculty”: they don’t have an office, they don’t know the who’s who in the Undergrad office, and most of them have never met the Dean. They work in isolation without much engagement from the college.
  • There may also be a perception that they are not ultimately responsible for the student.

These are clear disincentives to taking the time and effort to evaluate learners properly rather than giving it a cursory shot. There is a dire need for a coordinated, multi-disciplinary approach to all learners that includes 360° feedback, more observation, more emphasis on the “soft skills” and perhaps prescribing more failure.

The “Big Deal” Hypothesis

  • Okay, there’s no such label. But in my experience, evaluators tend to be more lenient when they perceive that their negative evaluation may have negative consequences. We know from the literature that feedback for the purposes of academic promotion tends to be more lenient.
  • Related to this is the huge investment that has already been made in the learner – and the further investment needed if the learner were held back. I think this is at the heart of what happens when we pass on problem learners. I have heard that it takes an inordinate amount of effort to remediate [and potentially fail] a learner rather than minimise some inadequacies – especially if they involve “soft skills”.

Collectively, as faculty, we need to take ownership and almost seek out opportunities to critique [or even fail] a learner. It’s like screening for sepsis … you won’t find it unless you look for it.

We shouldn’t feel like it’s a huge challenge, because it’s not. The conscientious learner will actually thank you for it, and the reward of turning a learner around is well worth it :)

Humans are flawed

  • We’re not perfect. Far from it: we’re in fact set up to make biased decisions, and thus predisposed to make flawed evaluations.

The key is to recognise when you’re making judgements about learners and when you may not be fit to evaluate objectively [e.g. when you’re stressed or angry]. Critique only directly observed characteristics, objectively and specifically [more on this to follow].

HOMEWORK

I am interested in learning more from your comments. In the meantime, my short-term goals are to:

Give Specific Feedback about characteristics observed during that shift:

  • Download a picture of the CanMEDS Roles. Use them as a guide!
  • Alternatively, use Pangaro’s RIME criteria [Link].

Give More Tough Love


REFERENCES:

Bass BM. Reducing Leniency in Merit Ratings. Personnel Psychology. 1956;9(3):359–369.

Wachtel HK. Student Evaluation of College Teaching Effectiveness: A Brief Review. Assessment & Evaluation in Higher Education. 1998;23(2):191–212.

This is a great article for would-be edumacators:

Williams RG, Klamen DA, McGaghie WC. Cognitive, Social and Environmental Sources of Bias in Clinical Performance Ratings. Teaching and Learning in Medicine. 2003;15(4):270–292.


Nadim is an emergency physician at the South Health Campus in Calgary, Alberta. He is passionate about online learning and recently made a transition into human performance coaching. He is currently working on introducing the coaching model into medical education.