When should systematic reviews be replicated and when is it wasteful: a checklist and framework
Sathya Karunananthan1, Vivian Welch1, Jeremy Grimshaw1, Lara Maxwell1, Maureen Smith2, Peter Tugwell1, on behalf of the Replication SR Research Group2

1University of Ottawa, Ottawa, Canada
2Ottawa, Canada


In my postdoctoral research, I have used an evidence-driven, transparent process and consensus-based approaches to develop value-added guidance on when and how to replicate systematic reviews. As outlined below, the aims, methodological approach, and dissemination strategies of this project align closely with the EBM manifesto of making evidence relevant, replicable, and accessible to end-users.

Background Replication is a cornerstone of the scientific method, yet replication of systematic reviews is too often overlooked, done unnecessarily, or done poorly. Systematic review replication is conducted with the objective of testing whether the results of an index review can be repeated or extended. Failure to replicate may lead to continued uncertainty about the implications of a body of evidence. The compelling case for replicating systematic reviews is complicated by concerns about research waste: overly frequent replication of systematic reviews can represent an inefficient use of scarce research resources. There is a lack of guidance on when to, and when not to, replicate systematic reviews.

Objective To develop evidence-driven, consensus-based recommendations on when and how to replicate systematic reviews, taking into account the needs and preferences of the various stakeholder groups.

Methods We used an integrated knowledge translation approach, involving an international multidisciplinary team of methodologists and knowledge users (authors, commissioners, funders, and consumers of systematic reviews, including patients, clinicians, and representatives of organizations involved in policy-making) at every stage of this research. The project was conducted in four phases: 1) semi-structured interviews with key informants to seek their opinions on definitions of and criteria for systematic review replication; 2) a systematic review of evidence on when and how to replicate systematic reviews, together with an analysis of discordant reviews; 3) an online survey of knowledge users to assess their level of agreement with draft criteria for systematic review replication; and 4) a consensus meeting of 36 participants representing key stakeholder groups (patients, clinicians, journal editors, researchers, systematic review organizations, and guideline developers) to discuss the findings of the first three phases and seek agreement on a checklist and framework for systematic review replication.

Results Based on the opinion-gathering, literature review, and consensus meeting discussions, we developed: 1) a 4-item checklist applying the value of information (VOI) concept to determine whether the benefits of replicating an existing systematic review outweigh alternative uses of resources; and 2) a framework to determine whether issues in the conduct of an index review represent threats to validity sufficient to justify formal replication, and which review methods are appropriate to address the specific threat to validity within the replicated systematic review.

Conclusions Given the role of systematic reviews in policy-making and guideline development, the validity and reliability of their findings should be tested. The checklist and framework serve as explicit prompts to carefully consider the value of systematic review replication. Next steps will include assessing usability and acceptability of the checklist and framework, and adapting them to different users.
