Animal models are primary tools for understanding the development of human disease and for testing treatment interventions before they are escalated to clinical trials. There is a rapidly growing literature utilising models of neurological disease, with many novel drug candidates demonstrating efficacy in vivo. However, the translational success of interventions emerging from this pipeline remains extremely low, with an estimated failure rate of 99.6% for Alzheimer's disease clinical trials. Systematic reviews and meta-analyses of preclinical data provide an overview of the available evidence, allow us to assess how generalisable the findings may be to other species and environments (external validity), and to gauge the likelihood that the evidence is unbiased and trustworthy (internal validity). Findings from systematic reviews can inform guidance to improve the reporting quality and rigour of future research, generate hypotheses, and, most importantly, form part of a framework to deliver evidence-based medicine. For this reason, systematic reviews are used regularly to appraise the evidence from clinical studies; however, they are not yet common practice for the larger body of preclinical literature that underpins clinical trial design. Preclinical systematic reviews are also resource-intensive and often out of date by the time they are complete.

To enable preclinical systematic reviews to form part of a translationally relevant evidence framework, we have developed and integrated a series of automation tools and methodologies for the continual synthesis and quality assessment of in vivo experiments. To obtain relevant records as they are published, we use the PubMed API to fetch new records matching predetermined search strategies. For study selection, we have trained machine learning algorithms (based at the EPPI-Centre, UCL) to identify studies which meet our inclusion criteria.
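The record-retrieval step described above can be sketched against the NCBI E-utilities ESearch endpoint, which backs the PubMed API. The search string and date window below are illustrative placeholders, not the authors' actual search strategy.

```python
"""Minimal sketch: fetching newly published PubMed records for a
predefined search strategy via the NCBI E-utilities ESearch endpoint.
The query string and dates are hypothetical examples."""
import json
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"


def build_esearch_url(query: str, mindate: str, maxdate: str,
                      retmax: int = 100) -> str:
    """Construct an ESearch URL restricted to a publication-date window,
    so each pipeline run only pulls records added since the last run."""
    params = {
        "db": "pubmed",
        "term": query,
        "datetype": "pdat",      # filter on publication date
        "mindate": mindate,      # e.g. "2021/01/01"
        "maxdate": maxdate,
        "retmax": retmax,
        "retmode": "json",
    }
    return f"{EUTILS}/esearch.fcgi?{urlencode(params)}"


def parse_pmids(esearch_json: str) -> list[str]:
    """Extract the list of PMIDs from an ESearch JSON response."""
    return json.loads(esearch_json)["esearchresult"]["idlist"]


# Hypothetical search strategy for rodent models of Alzheimer's disease:
url = build_esearch_url("(alzheimer*[tiab]) AND (mouse[tiab] OR rat[tiab])",
                        mindate="2021/01/01", maxdate="2021/01/31")
# In a live pipeline the URL would then be fetched, e.g.:
#   pmids = parse_pmids(urllib.request.urlopen(url).read().decode())
```

New PMIDs returned by ESearch can then be passed to EFetch to retrieve full citation records for screening.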
To assess reporting quality, we use text-mining techniques on the full-text publications. Regular expressions have been developed and validated within our group for the reporting of randomisation to experimental groups, blinded assessment of outcome, sample size calculations and conflict of interest statements. We have also built regular expression dictionaries to categorise studies by the disease model(s), treatment(s), and outcome measure(s) reported. Using these categorised datasets, we have built interactive web applications, or 'living' evidence summaries, in the R programming language. These applications visualise the literature and allow users to interrogate the dataset and download the relevant citations for the model(s), intervention(s) and outcome measure(s) of interest.

So far, we have applied this approach to evidence from preclinical models of depression (https://camarades.shinyapps.io/Preclinical-Models-of-Depression/) and Alzheimer's disease (https://camarades.shinyapps.io/LivingEvidence_AD/). We aim to curate and improve upon these applications and to expand this methodology to other disease areas. This work forms the basis of a 'living' framework to synthesise preclinical evidence as it emerges and to track research trends and reporting quality over time. We anticipate that this framework can enhance the speed at which systematic reviews of preclinical research are performed and provide an important resource for all stakeholders in AD research.
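The regex-based screening of full texts for reporting-quality items can be sketched as below. These patterns are deliberately simplified illustrations, not the validated dictionaries developed by the authors (who work in R; Python is used here for the sketch).

```python
"""Sketch: flagging reporting-quality items in full-text publications
with regular expressions. Patterns are simplified, hypothetical
approximations of a validated dictionary."""
import re

# One simplified pattern per reporting-quality item of interest.
QUALITY_PATTERNS = {
    "randomisation": re.compile(
        r"\brandomi[sz]ed\b|\brandomly (allocated|assigned|divided)\b", re.I),
    "blinding": re.compile(
        r"\bblind(ed)?\b.{0,60}\b(assessment|outcome|observer)\b", re.I),
    "sample_size": re.compile(
        r"\b(sample size|power) (calculation|analysis)\b", re.I),
    "conflict_of_interest": re.compile(
        r"\bconflicts? of interest\b|\bcompeting interests?\b", re.I),
}


def screen_text(full_text: str) -> dict:
    """Return, for each quality item, whether it is reported anywhere
    in the text."""
    return {item: bool(p.search(full_text))
            for item, p in QUALITY_PATTERNS.items()}


# Toy fragment of a methods/declarations section:
text = ("Mice were randomly allocated to treatment groups and scored by "
        "blinded assessment of outcome. The authors declare no conflict "
        "of interest.")
flags = screen_text(text)
# Here randomisation, blinding and conflict_of_interest are flagged,
# while sample_size is not (no power calculation is mentioned).
```

The same dictionary approach extends naturally to categorising studies by disease model, treatment and outcome measure: each category gets its own pattern, and a study is tagged with every category whose pattern matches.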