Editorials

The automation of systematic reviews

BMJ 2013; 346 doi: https://doi.org/10.1136/bmj.f139 (Published 10 January 2013) Cite this as: BMJ 2013;346:f139
Guy Tsafnat, senior research fellow,1 Adam Dunn, research fellow,1 Paul Glasziou, professor,2 Enrico Coiera, professor1

1 Centre for Health Informatics, Australian Institute of Health Innovation, University of New South Wales, Sydney, NSW 2052, Australia
2 Centre for Research on Evidence Based Practice, Bond University, Gold Coast, Australia

Correspondence to: guyt@unsw.edu.au

Would lead to best currently available evidence at the push of a button

The Cochrane handbook stipulates that systematic reviews should be examined every two years and updated if needed,1 but time and resource constraints mean that this occurs for only a third of reviews.2 Indeed, it may take as much time to update a review as it did to produce the original review. If this effort were redirected at developing methods to automate reviews, then updating might one day become almost effortless, immediate, and universal.

In his novel Player Piano, Kurt Vonnegut Jr described machines that record the hand motions of artisans and replay them to reproduce a perfect copy of the artefact, more quickly and more economically. Such automation is needed in the update and even creation of systematic reviews, because the capability of the human machinery for review increasingly lags behind our capacity to produce primary evidence.3 The current reality is that many reviews are missing or outdated,4 and it is hard to imagine a solution that does not involve some automation.5

Technology has advanced such that software can be used at least to semi-automate evidence discovery and synthesis. The idea of automating aspects of systematic review is not new, and computer systems that can reason from the literature to support clinical decision making have long been imagined.6

Four basic tasks underpin systematic review—retrieving the relevant evidence in the literature, evaluating risk of bias in selected trials, synthesising the evidence, and publishing the systematic review—and technology can help in each.
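
To make this division of labour concrete, a minimal sketch in Python follows; every function here is a hypothetical stub standing in for the machinery discussed below, not an existing system.

```python
def retrieve_evidence(question):
    """Stage 1: search bibliographic databases for candidate citations."""
    return []

def appraise(citations):
    """Stage 2: assess each trial's risk of bias; keep those that qualify."""
    return list(citations)

def synthesise(trials):
    """Stage 3: pool the extracted results, for example by meta-analysis."""
    return {"pooled_effect": None, "trials": trials}

def publish(synthesis):
    """Stage 4: render a human readable report from the structured synthesis."""
    return f"Systematic review of {len(synthesis['trials'])} trials."

def run_review(question):
    return publish(synthesise(appraise(retrieve_evidence(question))))

print(run_review("Do statins reduce mortality in adults with diabetes?"))
```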

Evidence retrieval is now well understood and easily performed, making it the natural first target for automation. Meta-search engines can retrieve published trials from multiple databases, automatically translating between their different query languages.7 This is aided by specialised databases of clinical trials, which hold well structured trial information.8 Whether curated manually by experts or automatically by computer, such structured trial banks are well suited to further automation. Machine learning systems are also being developed to help with the subsequent step of citation screening.9
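
As a toy illustration of machine-learned citation screening, the sketch below trains a classifier on a handful of hand-labelled abstracts and ranks new citations by their probability of relevance. It assumes the scikit-learn library; a real screener would be trained on thousands of labelled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Abstracts a reviewer has already screened (1 = include, 0 = exclude).
train_abstracts = [
    "Randomised controlled trial of statins in type 2 diabetes...",
    "A narrative commentary on the history of lipid research...",
    "Double blind placebo controlled trial of statin therapy...",
    "Case report: rare adverse reaction in a single patient...",
]
train_labels = [1, 0, 1, 0]

screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
screener.fit(train_abstracts, train_labels)

# Rank unscreened citations so reviewers read the likeliest hits first.
new_abstracts = ["Pragmatic trial of statin dose escalation..."]
print(screener.predict_proba(new_abstracts)[:, 1])
```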

The effort devoted to evaluating risk of bias and synthesising evidence can be reduced by text extraction algorithms that identify specific information elements in a document.10 ExaCT, for example, is designed to help systematic reviewers by highlighting sentences and phrases that contain information about population, intervention, control, and outcome (“PICO”) and about randomisation. This algorithm has a reported precision and recall of greater than 90%.11
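
The sketch below is a deliberately crude, rule-based stand-in for this task; ExaCT itself relies on trained statistical classifiers, and these cue words are invented purely for illustration.

```python
import re

# Hypothetical cue words for each PICO element; a trained system would
# learn such signals from annotated trial reports rather than a lookup.
PICO_CUES = {
    "population":    r"\b(patients?|participants?|adults?|children)\b",
    "intervention":  r"\b(received|treated with|intervention|therapy)\b",
    "control":       r"\b(placebo|control group|usual care)\b",
    "outcome":       r"\b(primary outcome|mortality|survival|endpoint)\b",
    "randomisation": r"\brandomi[sz](ed|ation)\b",
}

def highlight(sentence):
    """Return the PICO elements whose cue words appear in the sentence."""
    return [tag for tag, pattern in PICO_CUES.items()
            if re.search(pattern, sentence, re.IGNORECASE)]

print(highlight("240 adults were randomised to statin therapy or placebo."))
# -> ['population', 'intervention', 'control', 'randomisation']
```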

Moving from information extraction to its synthesis is far more challenging and will depend on computational reasoning across multiple documents.12 An early example is a system that monitors the literature and alerts reviewers when new evidence appears that is likely to change the conclusions of a systematic review.13 Whereas text extraction algorithms typically use statistical methods to identify specified elements within a single document, multi-document synthesis will probably require mixed methods that harness specific knowledge about the structure and process of clinical trials to guide interpretation.14 Multi-document methods are needed both for multi-trial meta-analyses and for single trials described in several places—for example, when randomisation is detailed in a protocol paper but not in the results paper.
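
One hedged sketch of such a monitor, assuming a fixed effect inverse variance meta-analysis of effects measured on a scale where 0 means no effect (such as log odds ratios), is to re-pool the evidence whenever a new trial arrives and flag the review if the pooled 95% confidence interval changes its verdict about the null:

```python
import math

def pooled_estimate(effects, std_errors):
    """Fixed effect inverse variance pooling of trial effect sizes."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

def significant(effects, std_errors):
    """True if the pooled 95% confidence interval excludes the null."""
    pooled, se = pooled_estimate(effects, std_errors)
    return abs(pooled) > 1.96 * se

def needs_update(old_effects, old_ses, new_effect, new_se):
    """Alert if adding the new trial flips the review's conclusion."""
    before = significant(old_effects, old_ses)
    after = significant(old_effects + [new_effect], old_ses + [new_se])
    return before != after

# Three earlier, inconclusive trials; then a large new trial reports.
print(needs_update([0.10, -0.05, 0.20], [0.15, 0.20, 0.25], 0.30, 0.08))
# -> True: the pooled estimate now excludes the null, so the review
#    conclusions are likely to change.
```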

Natural language generation algorithms can help publish systematic reviews by generating human readable text from trial reports or trial banks. Together with visualisation tools (for example, for creating CONSORT diagrams), introducing automation here may lead to more uniform and systematic accounts of the evidence.
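
At its simplest the generation step can be template driven, as in this sketch; the record fields and wording are hypothetical:

```python
# A structured trial-bank record; field names are illustrative only.
trial = {
    "intervention": "statin therapy",
    "comparator": "placebo",
    "n": 240,
    "outcome": "all cause mortality",
    "effect": "a relative risk of 0.85 (95% CI 0.74 to 0.98)",
}

TEMPLATE = ("One trial (n={n}) compared {intervention} with {comparator} "
            "and reported {effect} for {outcome}.")

print(TEMPLATE.format(**trial))
```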

Given the systems already available, intelligent tools could probably be developed to help across these four main tasks of systematic review, to learn from reviewers, and then to replicate their approaches. As reliability improves, these tools will move from aiding humans to becoming autonomous systems that can reliably update systematic reviews with the latest available evidence.

Currently, many systematic reviews, and all Cochrane reviews, require well structured peer reviewed protocols before any review of the evidence starts,1 to ensure objectivity and repeatability of the review. These protocols are a formal representation of the actions that a reviewer is about to execute and can become the recipe for automation. Developing these protocols is distinct from conducting the reviews. We envisage development environments that allow protocols to be edited, tested, and then executed at the push of a button, freeing the reviewer to focus on developing and validating the review question and protocol. Validated protocols could be disseminated to open repositories that archive and index them. These repositories could then conduct reviews on demand.
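
A machine-readable protocol might look something like the sketch below, in which both the structure and the execute stub are illustrative assumptions rather than any existing repository format:

```python
# A review protocol expressed as data rather than prose: a recipe that a
# hypothetical engine could validate, archive, and re-execute on demand.
protocol = {
    "question": "Do statins reduce mortality in adults with diabetes?",
    "search": {
        "databases": ["MEDLINE", "Embase", "CENTRAL"],
        "query": "statin AND diabet* AND random*",
    },
    "eligibility": {
        "design": "randomised controlled trial",
        "min_participants": 50,
    },
    "synthesis": {"model": "random effects", "measure": "risk ratio"},
}

def execute(protocol):
    """Placeholder runner: in a real system each key would dispatch to the
    retrieval, appraisal, and synthesis machinery sketched above."""
    print(f"Running review: {protocol['question']}")

execute(protocol)
```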

For this vision to become reality, computer scientists, informaticians, and clinicians must join forces. Throwing our limited resources at the diminishing returns of hand crafting systematic reviews is no longer sustainable. Instead, some of that energy and creativity needs to be diverted into building the machinery for the next stage of evidence based medicine. The size of the task need not be daunting. Automating even small steps in the process of systematic review will shorten the time before reviews are published and increase the number of questions for which reviews are created. With time and trust, more of the process will be delegated to automation.

Eventually, the notion of a review having a fixed publication date and becoming almost immediately out of date will disappear as autonomous agents sift the evidence continuously and use their protocols to provide updated reviews on demand.15 Furthermore, providing systematic review “machines” at the point of care will mean that clinicians will know that they always have access to the best evidence.

Footnotes

  • Competing interests: We have read and understood the BMJ Group policy on declaration of interests (http://bit.ly/S9aNY7) and have no relevant interests to declare.

  • Provenance and peer review: Not commissioned; externally peer reviewed.
