The development of knowledge is for the most part a gradual and iterative process by which every new study confirms, extends or refutes what has been reported in previous ones. In evidence-based medicine, a finding should ideally be replicated by multiple studies before it can inform policy or practice decisions. While there is ongoing debate about the exact definition of replication, its role in establishing the credibility of scientific findings is irrefutable.1
Nearly two decades ago, scientists declared a ‘replication crisis’ after demonstrating that the results of a disproportionately high number of published studies in psychology and medicine could not be replicated.2 3 Under the pressure of academic policies and norms that incentivise novelty and positive findings, researchers have steered away from replication studies, favouring publishability over credibility.1 4 Several interventions and solutions have since emerged in response to the crisis: grassroots communities and scientific networks are actively promoting new norms and building tools to support replication; publishers, funders and institutions are establishing policies to require more rigour, transparency and sharing of methods and data that enable and encourage replication.1
Systematic reviews serve as an important tool in evaluating the replicability of primary studies. The consistency of effect sizes or findings across studies included in a systematic review is indicative of the replicability of an effect. If results are inconsistent, there are approaches to explore what factors contribute to variability in results across studies.5 Thus, decision-makers have recognised systematic reviews as the highest level of evidence, and increasingly rely on them to inform clinical guidelines and policy.6
However, as a study in and of itself, a systematic review is also susceptible to the same types of errors—random, systematic and computational—as the primary studies it includes. Despite the potentially …
Contributors SK: conceptualisation, writing—original draft, writing—review and editing; JMG: writing—review and editing; LM: writing—review and editing; P-YN: writing—review and editing; MJP: writing—review and editing; JPP: writing—review and editing; JP: writing—review and editing; BV: writing—review and editing; VAW: writing—review and editing; PT: writing—review and editing.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests All coauthors have a direct or indirect interest in systematic reviews and replication as part of their job or academic career.
Provenance and peer review Commissioned; externally peer reviewed.