
What evidence affects clinical practice? An analysis of Evidence-Based Medicine commentaries
Charles Coombs1, Igho Onakpoya2, Kamal Mahtani2, Jeffrey Aronson2, Jack O’Sullivan2, Annette Pluddemann2, Carl Heneghan2

1 Winchester College, Winchester, UK
2 Nuffield Department of Primary Care Health Sciences, Centre for Evidence-Based Medicine, University of Oxford, Oxford, UK

Correspondence to Professor Carl Heneghan, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX1 2JD, UK; carlheneghan{at}


Evidence-Based Medicine (EBM), published by BMJ, aims to alert clinicians to significant advances in healthcare by selecting, from 100 candidate journals, original and systematic review articles whose results are likely to be both reliable and useful.1 We select articles if they concern topics relevant to internal medicine, general and family practice, surgery, emergency and critical care, psychiatry, paediatrics or obstetrics and gynaecology. Articles are summarised in value-added abstracts and commented on by clinical experts in the field.

To better understand the impact our choices may have on clinical practice, we audited 1 year’s worth of the journal’s commentaries, asking what journals we select from, what types of studies we choose and whether we identify articles likely to change practice.

To do this, we surveyed EBM commentaries published between December 2016 and September 2017 and extracted the following information: study type, original journal, setting and type of intervention. We used the commentaries from clinical experts to assess the potential benefits and harms of each finding and whether the study results were potentially practice changing. Two clinical editors checked each commentary and its implications independently and a third resolved discrepancies.

In 1 year, we published 87 commentaries, of which 22 (25%) were thought to be practice changing (see the online supplementary file 1 for the references). Three journals, namely, the BMJ, JAMA and the New England Journal of Medicine (NEJM), provided just over half (50/87, 57%) and 18 journals provided the remainder (range 1–5). The BMJ and the NEJM had the most practice-changing articles (4/22, 18% each). JAMA and Pediatrics came a close second (3/22, 14% each). Most of the commentaries were either randomised controlled trials (RCTs; 38/87, 44%) or cohort studies (36/87, 41%). Systematic reviews and guidelines made up the rest (11/87, 13% and 2/87, 2%, respectively). Cohort studies provided just over half of the practice-changing articles (13/22, 59%), RCTs just under a third (7/22, 32%) and systematic reviews about one-tenth (2/22, 9%). Regarding interventions, drugs accounted for the largest proportion of commentaries (36/87, 41%) and non-drug interventions accounted for about a quarter (21/87, 24%). We published similar numbers of commentaries on surgical interventions (14/87, 16%) and diagnostic and screening tests (13/87, 15%). Devices and a combined drug/device provided the rest (3/87, 3.4%).
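The proportions above can be reproduced from the raw counts. The following is a minimal illustrative sketch (the tallies are taken from the audit figures reported here; the variable names and `pct` helper are our own, not part of the audit method):

```python
from collections import Counter

# Tallies reported in the audit (87 commentaries, Dec 2016 - Sep 2017)
study_types = Counter({"RCT": 38, "cohort": 36, "systematic review": 11, "guideline": 2})
practice_changing = Counter({"cohort": 13, "RCT": 7, "systematic review": 2})

total = sum(study_types.values())          # 87 commentaries in all
changed = sum(practice_changing.values())  # 22 judged practice changing

def pct(n, d):
    """Percentage rounded to the nearest whole number, as reported in the text."""
    return round(100 * n / d)

print(f"{changed}/{total} ({pct(changed, total)}%) judged practice changing")
for kind, n in practice_changing.most_common():
    print(f"  {kind}: {n}/{changed} ({pct(n, changed)}%)")
```

Running this reproduces the reported figures: 22/87 (25%) practice changing overall, with cohort studies at 13/22 (59%), RCTs at 7/22 (32%) and systematic reviews at 2/22 (9%).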


Our results show that while we source from a large number of journals, relatively few provide research for our commentaries; currently, three journals provide over half of our content in this area. This figure could be affected by a BMJ bias or a generalist bias on the part of the editors.

Disappointingly, most of the studies that we chose for commentaries were either of unclear value or were thought not to be practice changing. How does one ever know that practice will be changed, or even potentially changed? And how does one ever know that it won't? It sometimes takes years before the value of a study's results is appreciated, leading to changes in practice.2 Only a few published research studies are likely to change practice.3 As editors, we probably miss important studies in our weekly search of 100 top medical journals; our expert commentators may also not report whether, in their view, a study's results are likely to change practice. We chose very few practice-changing RCTs, selected a large number of cohort studies, and included very few systematic reviews.



  • Contributors CH conceived the idea. CC did the initial data extraction with IO and wrote the draft of the methods and results. KM and JOS independently checked whether the commentaries indicated were practice changing, and CH resolved disagreements. All authors contributed to the writing of the manuscript.

  • Competing interests IO, KM, JA, JOS, AP and CH are all editors for EBM and so have an active interest in the results and improving the quality of articles selected.

  • Provenance and peer review Commissioned; internally peer reviewed.

  • Author note The likelihood that a study will change practice for the better is a good measure of its impact. We believe there is value in assessing a study’s potential to change practice. Overall, we think we can and should do better at selecting articles whose results are most likely to affect practice. We will therefore work towards developing criteria that identify research most likely to have an impact on practice. We will ask our commentators to outline more carefully why they think the results described in an article are likely to be practice-changing. Our methods are not perfect. You may disagree with our approach. But as editors, we will commit to auditing our performance annually, and work towards better identifying research evidence that matters to patient care.