

Letter: Meta-analysis can be statistically misleading
  1. Jacob M Puliyel, MRCP, MPhil, MD¹,
  2. Vishnubhatla Sreenivas, PhD²
  1. ¹West Middlesex University Hospital, London, UK
  2. ²All India Institute of Medical Sciences, Delhi, India
    1. The Editors



      The double-blind randomised controlled trial (RCT) is the basis of good evidence-based medicine because it eliminates problems of bias and confounding. However, systematic reviews show different RCTs arriving at diametrically opposite conclusions. The reason is that the samples for the RCTs are drawn from different populations, and each result reflects the truth in its own population. This is often overlooked when a meta-analysis is done. When RCTs are aggregated in a meta-analysis, we have to weight them by the populations they represent, not by their sample sizes. Otherwise, large samples drawn from small populations are given undue weight. Meta-analysis as presently done can be misleading and unreliable.
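
      As a purely illustrative sketch of the weighting question raised above, the Python snippet below pools two hypothetical risk differences first with the conventional inverse-variance (effectively sample-size) weights of a fixed-effect meta-analysis, and then with weights proportional to the size of each trial's source population, as the letter proposes. All numbers are invented for illustration and are not taken from any study.

```python
# Hypothetical numbers only: contrast the usual inverse-variance pooling
# of two RCT risk differences with the population-proportional weighting
# suggested in the letter.
# Study A: large trial drawn from a small population.
# Study B: smaller trial drawn from a much larger population.

effects = [0.10, -0.02]          # observed risk differences in each RCT
variances = [0.0004, 0.0025]     # within-study variances (smaller = larger trial)
populations = [1e5, 5e7]         # sizes of the source populations (hypothetical)

# Conventional fixed-effect meta-analysis: weight = 1 / variance,
# so the large trial from the small population dominates the pooled estimate.
w_iv = [1 / v for v in variances]
pooled_iv = sum(w * e for w, e in zip(w_iv, effects)) / sum(w_iv)

# Weighting by source-population size instead, as the letter argues,
# shifts the pooled estimate towards the trial representing more people.
w_pop = populations
pooled_pop = sum(w * e for w, e in zip(w_pop, effects)) / sum(w_pop)

print(f"inverse-variance pooled effect:   {pooled_iv:.4f}")
print(f"population-weighted pooled effect: {pooled_pop:.4f}")
```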

      In response: ... but they also present an opportunity to learn more.

      When RCTs are consistent across a variety of populations and settings, we should feel more secure about the applicability of the intervention. If it works in low-risk and high-risk, young and old, east and west, it will probably work in my patient. However, as Puliyel and Sreenivas point out, RCTs don’t always agree, and sometimes diverge widely. When that happens, we would like to know why. It could be any of the PICO elements: the populations studied, the way the intervention is delivered (ie, dose, vehicle, route, timing, etc), the comparator and background treatments, or when or how the outcomes were measured.1 Or it could be that the PICOs are the same but some of the trials are flawed (poor randomisation, poor follow-up, non-blinding, etc) and some are not, leading to confounding by trial quality. Systematic (and unsystematic!) reviews should look for such differences and, if they occur, use them as an opportunity to learn more about when and why a treatment works or does not. However, considerable care is needed to separate the possible true and artefactual causes of apparent disagreement between studies.
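
      As one hedged illustration of quantifying such disagreement before looking for PICO or trial-quality explanations, the sketch below computes the standard Cochran's Q statistic and I² for a set of hypothetical trial effects and variances. It is not drawn from the letter or the response; the figures are invented.

```python
# Illustrative only: Cochran's Q and I-squared quantify how much trial
# results disagree beyond what sampling error alone would produce,
# before exploring PICO or quality explanations. Numbers are hypothetical.

effects = [0.10, -0.02, 0.05, 0.08]
variances = [0.0004, 0.0025, 0.0010, 0.0008]

# Fixed-effect (inverse-variance) pooled estimate.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted sum of squared deviations from the pooled effect.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I-squared: share of total variation attributable to between-trial
# heterogeneity rather than chance (floored at 0).
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

print(f"Q = {q:.2f} on {df} df, I^2 = {100 * i_squared:.1f}%")
```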

      References