Can a replication revolution resolve the duplication crisis in systematic reviews?
  1. Sathya Karunananthan 1,2
  2. Jeremy M Grimshaw 3,4
  3. Lara Maxwell 5
  4. Phi-Yen Nguyen 6
  5. Matthew J Page 6
  6. Jordi Pardo Pardo 7
  7. Jennifer Petkovic 2
  8. Brigitte Vachon 8
  9. Vivian Andrea Welch 2,9
  10. Peter Tugwell 3,4

Affiliations

  1. Interdisciplinary School of Health Sciences, University of Ottawa, Ottawa, Ontario, Canada
  2. Bruyere Research Institute, Ottawa, Ontario, Canada
  3. Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  4. Department of Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
  5. Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
  6. School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
  7. Cochrane Musculoskeletal Group, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  8. School of Rehabilitation, Universite de Montreal, Montreal, Quebec, Canada
  9. School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada

Correspondence to Dr Sathya Karunananthan; skarunan@uottawa.ca

Introduction

The development of knowledge is, for the most part, a gradual and iterative process in which every new study confirms, extends or refutes what has been reported in previous ones. In evidence-based medicine, a finding should ideally be replicated across multiple studies before it informs policy or practice decisions. While there is ongoing debate about the exact definition of replication, replication clearly plays an indisputable role in establishing the credibility of scientific findings.1

Nearly two decades ago, scientists declared a ‘replication crisis’ after demonstrating that the results of a disproportionately high number of published studies in psychology and medicine could not be replicated.2 3 Under the pressure of academic policies and norms that incentivise novelty and positive findings, researchers have steered away from replication studies, favouring publishability over credibility.1 4 Several responses to the crisis have since emerged: grassroots communities and scientific networks are actively promoting new norms and building tools to support replication, while publishers, funders and institutions are establishing policies that require greater rigour, transparency and sharing of methods and data to enable and encourage replication.1

Systematic reviews are an important tool for evaluating the replicability of primary studies. Consistency of effect sizes or findings across the studies included in a systematic review indicates that an effect is replicable, and when results are inconsistent, established approaches can be used to explore which factors contribute to the variability across studies.5 For these reasons, decision-makers recognise systematic reviews as the highest level of evidence and increasingly rely on them to inform clinical guidelines and policy.6
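Although the article presents no computations, the consistency assessment described above is commonly operationalised with heterogeneity statistics such as Cochran's Q and I². The following is a minimal sketch, using hypothetical effect estimates and standard errors (the numbers are illustrative and not drawn from any study cited here), of how these quantities are computed under a simple fixed-effect inverse-variance model.

```python
import math

# Hypothetical log odds ratios and standard errors from five primary studies
# (illustrative values only, not from the article or any real review).
effects = [0.32, 0.28, 0.45, 0.10, 0.36]
std_errors = [0.12, 0.15, 0.20, 0.11, 0.18]

# Inverse-variance weights and fixed-effect pooled estimate
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q: weighted squared deviations of study effects from the pooled effect
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
df = len(effects) - 1

# I^2: proportion of total variability attributable to between-study heterogeneity
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled effect (log OR): {pooled:.3f} (SE {pooled_se:.3f})")
print(f"Cochran's Q = {q:.2f} on {df} df; I^2 = {i_squared:.1f}%")
```

A low I² suggests that study results are broadly consistent; higher values typically prompt exploration of clinical or methodological sources of variability, for example through subgroup or sensitivity analyses.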

However, as a study in and of itself, a systematic review is also susceptible to the same types of errors—random, systematic and computational—as the primary studies it includes. Despite the potentially …

Footnotes

  • X @GrimshawJeremy

  • Contributors SK: conceptualisation, writing—original draft, writing—review and editing; JMG: writing—review and editing; LM: writing—review and editing; P-YN: writing—review and editing; MJP: writing—review and editing; JPP: writing—review and editing; JP: writing—review and editing; BV: writing—review and editing; VAW: writing—review and editing; PT: writing—review and editing.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests All coauthors have a direct or indirect interest in systematic reviews and replication as part of their job or academic career.

  • Provenance and peer review Commissioned; externally peer reviewed.