
Book review: systematic reviews to support evidence-based medicine. Second edition: from expert to novice and back again
Gerben ter Riet
Department of General Practice, Academic Medical Center, University of Amsterdam, Room J2-117, Meibergdreef 15, 1105 AZ Amsterdam, The Netherlands
Correspondence to Gerben ter Riet, Department of General Practice, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; g.terriet{at}amc.nl


I ran into systematic reviews (SRs) in 1989. Dodging the draft, I had just spent 4 months on the middle ear reflex of Wistar rats, hoping some day to prevent human hearing damage. Instead of unraveling nature, I discovered that I was more attracted to applied science. At the Maastricht University Epidemiology department, I started reading randomised trials on acupuncture. My boss had convinced the Department of Health that reading the existing evidence was a better investment than spending at least twice the amount on the next trial that carefully ignored its predecessors' lessons. We were busy carrying out what we called ‘criteria-based meta-analyses’ (CBMA), which we then (mis)judged superior to taking weighted averages (statistical pooling). These CBMAs boiled down to vote-counts for subgroups that differed in quality. Our basic (or biased?) expectation was that as quality goes up, the proportion of ‘positive’ studies should go down. But in the early nineties, issues emerged on the methodological horizon that still haunt SRs today. Schulz et al showed that inadequate concealment of treatment assignment, lack of double blinding and patient exclusions were associated with exaggerated results in randomised trials.1 None of the more refined quality items that we were fond of seemed to matter much. A paper by Peter Jüni et al showed that the choice of a particular quality measurement ‘instrument’ (they tested 25) largely determines a trial's quality ranking in an SR.2 Bad news for CBMA. Sander Greenland pointed out that capturing the different quality items in a single ‘quality score’ blurs their (potentially opposing) effects.3 He argued that we should look at the effects that individual quality items separately exert on the study results, using ‘meta-regression’. Too bad, because it had seemed so attractive to statistically pool the results of the two best trials, then add the third best, and so on down to the worst study, to see whether the pooled estimate remained stable.

Where does this leave systematic reviewers? Well, methodology items matter, but they are poorly reported. They should not be mixed into an overall quality score, but most reviews are too small (around eight trials) to follow Greenland's meta-regression path. For other reasons too, conducting a systematic review is an audacious job. One (or the team) has to be expert at three levels at least: the subject matter (what is formula acupuncture?), the study designs in the review (when should matching be used in case-control studies?), and review methodology itself (which databases to search? does the trim-and-fill trick really repair publication bias?). Omissions at any of these levels risk turning a systematic review from an asset to evidence-based medicine into a big-headed act of (mis)judgment.
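For readers unfamiliar with these two approaches, the minimal sketch below contrasts cumulative inverse-variance pooling from the best trial downwards with a weighted meta-regression of effect size on a single quality item, in the spirit of Greenland's suggestion. The data, variable names and quality item are hypothetical and purely illustrative, not taken from any review discussed here.

```python
import numpy as np

# Hypothetical log odds ratios, standard errors and one quality item
# (1 = adequate allocation concealment), ordered from best to worst trial.
# These numbers are invented for illustration only.
effect = np.array([-0.10, -0.15, -0.30, -0.45, -0.60])
se = np.array([0.12, 0.15, 0.20, 0.25, 0.30])
concealed = np.array([1.0, 1.0, 1.0, 0.0, 0.0])

w = 1.0 / se**2  # inverse-variance weights

# Cumulative pooling: the two best trials, then add the third best, and so on,
# checking whether the pooled estimate stays stable as quality drops.
for k in range(2, len(effect) + 1):
    pooled = np.sum(w[:k] * effect[:k]) / np.sum(w[:k])
    print(f"pooled log odds ratio over the {k} best trials: {pooled:.3f}")

# Weighted least-squares meta-regression of effect size on a single quality
# item, instead of collapsing several items into one summary quality score.
X = np.column_stack([np.ones_like(effect), concealed])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * effect))
print(f"estimate in unconcealed trials: {beta[0]:.3f}; "
      f"shift when concealment is adequate: {beta[1]:.3f}")
```

With only a handful of trials per review, the meta-regression coefficient for any single quality item is, of course, estimated very imprecisely, which is exactly the practical obstacle noted above.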

There have been alarming reports about pharmaceutical companies that suppress or manipulate evidence they deem harmful to their profits. Should we abandon systematic reviews of drug effects until the problem of reporting and non-publication bias has been solved, or risk becoming inadvertent advocates of their products?

The second edition of Systematic reviews to support evidence-based medicine by Khan et al is a gentle and carefully written non-technical introduction to SRs for novices to the field. It touches on most of the above issues and should not be blamed for not solving them.4 Half of the book covers judiciously chosen case studies illustrating different review topics, such as (adverse) effects of treatment and accuracy of diagnostic tests. I was particularly intrigued by case study five, on reviewing qualitative evidence, because there I did not have to feign the role of the novice. Although I have previously been positively inspired by qualitative workers such as Erving Goffman5 and Bruno Latour,6 I was slightly disappointed. First, many terms are not defined (understanding of a phenomenon, emergent themes, aberrant findings and sources of knowledge); second, qualitative reviews are contrasted with caricatures of quantitative reviews; and finally, there are inconsistencies (‘primary qualitative research deals with very individual responses.’ (p. 131) versus ‘by conducting this SR […], you have gained a better understanding of your own patient’s experience.' (p. 138)). Why review at all if a thorough patient interview may bring what is needed?

On my wish list for the third edition are gentle introductions to such topics as individual patient data meta-analysis, network meta-analysis, meta-analysis of animal experiments drawing on the CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies) collaboration's experiences, and reviewing drug effects across industries' wider agendas, as recently suggested by Ioannidis and Karassa.7

References


Footnotes

  • Competing interests: None.