A Review of Systematic Reviews

In Knowledge Translation by Brent Thoma

Dr. Wikipedia said: “An understanding of systematic reviews and how to implement them in practice is becoming mandatory for all professionals involved in the delivery of health care.”

And to me, the word of Wikipedia is the next best thing to the word of Weingart.

As usual, I think Dr. Wikipedia is correct. Systematic reviews are where a lot of the evidence-based medicine that we aspire to practice is consolidated, and we need literacy in their methodology to understand the evidence behind many of the things that we do or don’t do. Their importance in modern medicine is evident from this statement and this review, both of which note that systematic reviews help us stay up to date, ground clinical practice guidelines, and plan research agendas.

An expert could mislead us, a case report could dupe us, an RCT could fool us, but systematic reviews are the Pharaoh that lives in the penthouse of the Evidence-Based Pyramid (picture credit here). Surely they wouldn’t mislead us. Or would they?

Evidence-based pyramid

Definition

Check out some long, formal definitions here (section 1.2.2), here, here and here. My definition in a sentence:

A systematic review is the result of smart people analyzing every piece of literature they can find related to a well-defined question, assessing its methodology and appropriateness, and synthesizing all of it to provide the best answer possible with the available evidence.

I also really like this picture as an analogy (picture credit here):

networking circle with puzzle pieces

Each puzzle piece represents a study and a systematic review is the picture that results when smart people put the pieces together.

History

After reading of the glories of systematic reviews, I finally understand why the library ladies who taught us how to do literature searches incessantly referred us to that Cochrane website. While the concept of a systematic review seems obvious to those of us who were trained with resources like the Cochrane Collaboration at our fingertips, it is a relatively new one.

The history of systematic reviews is summarized here and here. These sources note that it was Archie Cochrane who initially agitated for developing medicine based on randomized controlled trials in his seminal book, Effectiveness and Efficiency: Random Reflections on Health Services (1972, available freely for download here or for purchase for a ridiculous amount of money here). Later, his call for a critical summary of all RCTs (1979) led to the establishment of a collaborative database of perinatal trials. In the 1980s, systematic reviews of RCTs began to be published, and in 1987 he encouraged others to adopt the methodologies used in these reviews. This led to the formation of the Cochrane Collaboration shortly after his death.

What are the characteristics of a systematic review?

After reading many articles on systematic reviews, I was pretty convinced that their required characteristics could not be conveyed in anything shorter than a point-form list half the length of a computer screen. Fortunately, my hero Sherlock Holmes managed to pull it off in a single sentence in a statement published >90 years posthumously. As he explained to his dear assistant Dr. Watson using a preponderance of commas, semi-colons and colons:

“there are four main indicators of a sound review: firstly, a comprehensive literature search; secondly, explicit, detailed, inclusion and exclusion criteria; thirdly, a detailed assessment of the quality of the included studies; and, fourthly, appropriate methods of pooling the data. The ‘Sign of Four,’ if you like, gentlemen!” He turned to me. “Is that succinct enough for your memoirs, Watson?” I nodded. “In fact it’s… er… elementary!”

Thanks for breaking it down for us, Sherlock! You came and went before your time.

For a much, much, much more detailed outline of what makes the ideal systematic review, check out The PRISMA Statement (PRISMA = Preferred Reporting Items for Systematic Reviews and Meta-Analyses). This 2009 open-access statement (written in Canada, eh!) consists of a 27-item checklist of things to include when reporting a systematic review.

What’s a meta-analysis?

Prior to writing this post I often confused systematic reviews and meta-analyses. The terms are pretty much interchangeable, aren’t they?

Apparently not. Unlike a systematic review, a meta-analysis is a statistical process used to summarize and combine data from multiple studies. Its results are typically displayed graphically in a blobbogram (aka a “Forest Plot” – see the distinction that Cochrane makes between the two here (section 1.2.2)). So, while systematic reviews often include meta-analyses, they do not necessarily require one. A systematic review that does not include a meta-analysis can also be called a narrative review.
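To make that “statistical process” a little more concrete, here is a minimal sketch (in Python, with numbers invented purely for illustration) of the simplest form of pooling used in meta-analyses: fixed-effect, inverse-variance weighting. Real meta-analyses also assess heterogeneity and often use random-effects models, but the core arithmetic looks like this:

```python
import math

# Hypothetical studies: (effect estimate, standard error) -- e.g. log odds
# ratios from three small trials. These numbers are invented for illustration.
studies = [(-0.35, 0.20), (-0.10, 0.15), (-0.25, 0.30)]

# Fixed-effect pooling: weight each study by the inverse of its variance.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled estimate.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled estimate: {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Each study is weighted by the inverse of its variance, so big, precise trials pull the pooled estimate toward themselves – which is exactly what the squares and the diamond on a forest plot are showing you.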

What’s a narrative review?

This is where the definitions get a bit mucky. The definition of a systematic review that I provided above refers to a systematic review rooted in the evidence of a meta-analysis. A narrative review is a different thing. This article defines them:

Narrative reviews are the traditional approach and usually do not include a section describing the methods used in the review. They are mainly based on the experience and subjectivity of the author, who is often an expert in the area. The absence of a clear and objective method section leads to a number of methodological flaws, which can bias the author’s conclusions.

Put more bluntly, while search strategies may be included to provide the guise of a systematic approach, a narrative review is expert opinion in a systematic review’s clothing and is likely to contain all of the reviewer’s inherent biases. These reviews can be useful for examining questions where there is not enough data for a meta-analysis, or for reviewing broader topics, but they fulfill a different function than a systematic review of a particular clinical question supported by a meta-analysis.

Throughout this post the term “systematic review” refers to a review supported by a meta-analysis. Interestingly, while they were not what Archie Cochrane was asking for when he advocated for summaries of RCTs, narrative reviews are published far more often than systematic reviews.

Critique

Systematic reviews sound awesome. As outlined in this spectacular article, a well-done systematic review can increase the precision of a conclusion, assimilate a large amount of data, decrease the delay in knowledge translation, allow formal comparison of studies, and identify/reflect on heterogeneous results. One would think that their accessibility and brevity (at least relative to the studies they summarize) would give them an important role in knowledge translation.

While systematic reviews can be a great resource for all of these reasons, they also have less desirable characteristics. To summarize: too many are published and too few are updated, reporting and quality standards are variable, and bias is often not well controlled. These problems may continue to contribute to the delay in translating knowledge into practice.

Systematic review overload

The rate of publication of systematic reviews was pegged at 11/day in 2007 (>4000 per year!!), and the trend suggests this will continue to increase. How can we possibly keep up? Beyond the sheer difficulty of reading them all, there is a substantial opportunity cost when multiple reviews are published on the same topic.

Opportunity cost explained (cartoon credit here):

opportunity cost cartoon

Efforts have been made to address the problem of redundant reviews. The PROSPERO project, a database of prospectively registered systematic reviews that already contains >1000 records, may eventually allow researchers to be notified of systematic reviews in progress and prevent redundant effort.

They are out of date

This study noted that the rate of publication of trials had increased from 14/day in the 1970s to 75/day in 2007. A 2007 Cochrane Colloquium presentation outlined in the same study concluded that more than half of the Cochrane Collaboration’s systematic reviews were out of date! A survival analysis of systematic reviews found a median survival time of only 5.5 years, and 7% were already out of date at the time of publication! Systematic reviews “expired” when new evidence produced a quantitative signal (a change of >50% in the primary outcome or in mortality) or a qualitative signal (a changed statement of effectiveness, new evidence of harm, or new caveats affecting practical application). With this ongoing proliferation of trials and the resources required to complete a systematic review, how will the medical community possibly keep up?

I am unaware of any projects specifically aimed at addressing this problem. However, one intriguing idea that might make a small dent if it gained widespread adoption (unfortunately, I lost the reference – help! If anyone sees something on this please let me know) was to direct residents to write and/or update systematic reviews instead of conducting their own basic research projects. This missing publication argued that a systematic review would do more for residents’ development of critical appraisal and methodological skills, and more for the curation of the medical literature, than the (often) small resident studies of questionable significance.

They vary in their structure and reporting

Examinations of systematic reviews have reported substantial variability in the methodologies used and the characteristics reported. This heterogeneity makes it difficult or impossible to determine quality, compare methodology between studies, or perform critical appraisals. It also opens the door to contradictory reviews of the same question, differences that are likely rooted in methodology but that cannot be compared effectively because of incongruous reporting.

This problem is not universal. The Cochrane Collaboration has strict guidelines for how its systematic reviews must be reported, and the PRISMA guidelines are readily available to guide the reporting of systematic reviews more broadly. Hopefully, the next time someone examines a sample of systematic reviews they will find substantially more compliance and homogeneity in reporting standards.

They do not account for bias

Publication bias and reporting bias are well-documented phenomena that result from the selective submission and publication of trials with desirable and/or positive findings. Their potential effect on systematic reviews is a double whammy: a systematic review must contend with the biased publication of the trials that feed its meta-analysis, and systematic reviews without positive/desirable findings may themselves be less likely to be published. As this is one of the biggest criticisms of systematic reviews, substantial effort has been made to combat it.

The biggest effort to minimize publication bias in trials has been to deny publication to those that were not prospectively registered. In 2004 the International Committee of Medical Journal Editors (ICMJE) announced in the NEJM that their journals would no longer publish trials that had not been prospectively registered. The hope was that prospective registration would prevent the data from small trials with null results from “disappearing” through publication bias or selective reporting bias. Unfortunately, for reasons beyond the scope of this post, the prospective registration of trials has not been completely successful. As outlined in this article based on a sample of trials registered with the WHO’s International Clinical Trial Registry Platform, registrations often contained non-specific, poor-quality, or missing information. This article showed that, while the ICMJE journals are publishing registered trials, they don’t seem to mind if that registration is inadequate. Hopefully, efforts to improve compliance are ongoing.

Systematic reviews fare no better. This 2007 review found that only 23.1% of the 300 systematic reviews from 2004 that it examined assessed for publication bias. While difficult, there are analytical techniques that can be used to quantify publication bias in meta-analyses. Additionally, efforts must be made to track down every piece of relevant data, as the major databases miss a significant number of relevant studies, unpublished clinical trials and other grey literature. The same 2007 review also noted that not a single one of the systematic reviews it examined was registered. Although this was not a common practice at the time, PROSPERO (a database of prospectively registered systematic reviews) has since been developed for this purpose. It may have a role in bringing unpublished systematic reviews to light and, if used effectively, could reduce reporting and publication bias in systematic reviews.
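As an aside, one of those analytical techniques (not necessarily the one used in the articles above) is to look for funnel-plot asymmetry, which can be formalized with Egger’s regression test. Here is a minimal sketch in Python with invented effect sizes, just to show the idea:

```python
# A rough sketch of Egger's regression test for funnel-plot asymmetry.
# The effect sizes and standard errors below are made up for illustration.
from scipy.stats import linregress

effects = [-0.40, -0.32, -0.28, -0.15, -0.05, 0.02]  # e.g. log odds ratios
ses     = [ 0.35,  0.30,  0.22,  0.15,  0.10, 0.08]  # their standard errors

# Regress the standardized effect (effect / SE) on precision (1 / SE).
snd = [e / s for e, s in zip(effects, ses)]
precision = [1 / s for s in ses]
result = linregress(precision, snd)

# The intercept (not the slope) is the Egger statistic: an intercept far from
# zero relative to its standard error suggests small-study effects, which may
# reflect publication bias.
print(f"Egger intercept: {result.intercept:.2f} (SE {result.intercept_stderr:.2f})")
```

The intuition: in a symmetric funnel plot the small, imprecise studies scatter evenly around the pooled effect, but when the small negative or null studies never make it into print, the plot becomes lopsided and the intercept drifts away from zero.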

They have not improved knowledge translation

This is a debatable statement. I certainly think that systematic reviews improve knowledge translation. When was the last time that you needed a quick answer to a clinical question and passed over a systematic review for an RCT? One benefit of the proliferation of systematic reviews is that there seems to be one for everything. Searching “Systematic Review” on Google Scholar returns 2.58 million results in 0.04 seconds.

On the other hand, despite the proliferation of systematic reviews, challenges remain in translating the massive amount of available information into evidence-based clinical practice. Simply disseminating the best evidence does not seem to translate it into practice effectively. This may be partly because of the many problems that still exist with systematic reviews, or it may simply be because change is hard.

Perhaps they have not improved knowledge translation enough.

The Next Frontier

While systematic reviews have demonstrated their utility in medical science, they are not perfect.  If even these are insufficient, what is the next frontier?

Could it be FOAM? If you are reading this blog, you are likely engaged in the online community dedicated to providing Free Open-Access Medical Education (they’re pretty much the only people who read this stuff). The content produced by this group is made freely available, open to discussion, and free of industry bias. As discussed in my previous post FOAM: A Market of Ideas, the dissemination of the best content is supported when other members of the community publicize it.

It could be argued that FOAM is a regression to the bottom of the evidence-based pyramid where bias-soaked expert opinion rules the day. However, the expert opinion at the bottom of the pyramid is supposed to prevail only in the absence of evidence. In a world with an overwhelming number of systematic reviews, I would like to think that we could flip the pyramid on its head (picture credit here):

pyramid

to represent a movement that allows the masses to take control of the medical literature through an ongoing, crowdsourced, instantaneous review of the best evidence. The author of The Skeptic’s Guide to Emergency Medicine has been explicit in stating that his goal is to decrease the knowledge translation gap to a single year… and I think he’s on to something.

Conclusion

I think this may have been my longest post ever, and I still didn’t include everything that I intended. Stay tuned for more on systematic reviews, including an (over)simplification of chi2, funnel plots, blobbograms, and an approach to appraising a systematic review. This stuff is certainly boring, but I hope that explaining it gives you (and me) a better understanding of evidence-based medicine. I’ll try to keep it tolerable by looking at contemporary and/or historically significant studies.

As always, thanks for reading. I always appreciate the feedback left in my comments, so please leave some! If you thought this was a helpful review, I would also appreciate it if you referred your friends, followed me by e-mail (right column), signed up for my RSS feed (top right corner), or tweeted about it and followed me on Twitter.

Thanks!

Brent Thoma @boringem

Dr. Brent Thoma is a medical educator, blogging geek, and trauma/emergency physician who works at the University of Saskatchewan College of Medicine. He founded BoringEM and is the CEO of CanadiEM.