Article Text
Abstract
Objectives Previous studies of the replicability of clinical research based on the published literature have suggested that highly cited articles can be contradicted or found to have inflated effects, and that these findings can vary depending on the area under study. An estimate by John Ioannidis covering 1993 to 2004 was the first of its kind, using highly cited articles published in general medical journals (especially NEJM and The Lancet) or specialty medical journals with high impact factors. However, there have been no recent updates of Ioannidis's work, and the replicability of this research field may have changed over time, given the many changes in clinical research such as mandatory pre-registration and revised reporting guidelines. The goal of this study is therefore to estimate the replicability of highly cited clinical studies published between 2004 and 2018.
Method We searched the Web of Science database for articles studying medical interventions with more than 2000 citations, published between 2004 and 2018 in high-impact medical journals. We then searched for replications of these studies in PubMed using the PICO framework. We evaluated whether replications were successful by the presence of a statistically significant effect in the same direction and by overlap of their effect sizes' 95% confidence intervals (CIs) with those of the highly cited studies. Potential predictors of replicability and evidence of effect size inflation were also analyzed. The effect sizes of highly cited studies and their replications were used to estimate the latter. For unfavorable outcomes, in which effectiveness increases as the outcome decreases, the inflation ratio was defined as the point estimate of the replication divided by that of the original study. For favorable outcomes, in which effectiveness increases along with the outcome measure, the calculation was inverted.
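The inflation-ratio definition above can be sketched as follows (the function name and the numbers in the usage example are illustrative assumptions, not values from the study):

```python
def inflation_ratio(original, replication, outcome_favorable):
    """Ratio estimating effect size inflation of a highly cited study.

    For unfavorable outcomes (effectiveness increases as the outcome
    decreases, e.g. a relative risk of death below 1), the ratio is
    replication / original; for favorable outcomes the calculation is
    inverted. Values above 1 suggest the original effect was inflated
    relative to its replication.
    """
    if outcome_favorable:
        return original / replication
    return replication / original

# Hypothetical example: an unfavorable outcome where the original
# trial reported RR = 0.50 and the replication reported RR = 0.80.
ratio = inflation_ratio(0.50, 0.80, outcome_favorable=False)
print(ratio)  # 1.6, i.e. the original effect appears inflated
```

Under this convention a ratio near 1 indicates agreement between the original and the replication, regardless of the outcome's direction.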
Results We found a total of 89 eligible studies, of which 24 had valid replications. Of these, 21 (88%) had effect sizes with overlapping 95% CIs. Of 14 highly cited studies with a statistically significant comparison, 12 (86%) had a significant effect in the replication as well. When both criteria were considered together, replicability was 83%. Our sample of highly cited studies was composed of randomized clinical trials and seven phase 1 trials. Effect size inflation was marginal, but a larger sample size would be needed for more conclusive findings. Due to the small number of contradicted studies, our analysis was underpowered to detect predictors of replicability.
Conclusions Although most highly cited studies did not have eligible replications, the replicability rate of highly cited clinical studies in our sample remains as high as in the previous estimate in the same research field, with little evidence of systematic effect size inflation. These results run counter to the assertion that there is a widespread reproducibility crisis in science, and suggest that this may not be the case for every scientific field. Further research is warranted to examine whether the higher replicability of highly cited clinical research is related to particular research practices that are not as widely used in other areas of biomedical science, such as randomization, blinding, or prospective protocol registration. If such links can be reliably established, they can be used to inform attempts to improve replicability in other research fields.