Development of literature search strategies for evidence syntheses: pros and cons of incorporating text mining tools and objective approaches
Gaelen P Adam,1 Robin Paynter2

1 Center for Evidence Synthesis in Health, Brown University School of Public Health, Providence, Rhode Island, USA
2 Scientific Resource Center, AHRQ Effective Health Care Program, Portland, Oregon, USA

Correspondence to Gaelen P Adam, Brown University School of Public Health, Providence, Rhode Island, USA; gaelen_adam@brown.edu

The problem

Systematic reviews and related evidence synthesis products (eg, rapid reviews, evidence maps and scoping reviews) are foundational to evidence-based medicine, informing all levels of healthcare, from patient–provider decisions to national policy-making.1 However, an ongoing challenge to systematic reviews is the exponential growth in the number of journal articles and other related reports of medical research (eg, clinical trial records, conference abstracts and preprint articles). Since 2017, there have been over a million new records added annually to PubMed alone.2 Thus, there is a compelling need to streamline the information retrieval process for systematic reviews.

The current process and its limitations

The current approach to information retrieval for systematic reviews involves two discrete and successive steps: (1) formulating a search strategy, which normally consists of a set of Boolean queries combining indexed terms (eg, PubMed MeSH terms) and free-text terms, and (2) manual screening of the relevant citations returned by the search, preferably by at least two independent reviewers.3 These two steps influence each other, inasmuch as a more comprehensive search strategy will return a larger number of citations that must be manually screened. To limit the time and resources required for screening, search strategies are often artificially narrowed to return only the number of citations the team considers feasible to screen manually. However, this can inadvertently exclude relevant studies, resulting in less robust and possibly even biased conclusions.
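For readers less familiar with how such strategies are executed in practice, the sketch below runs a minimal Boolean query, combining a MeSH term with free-text synonyms, against PubMed via the NCBI E-utilities esearch endpoint. The query terms and field tags are illustrative assumptions only, not a validated search strategy.

```python
# Minimal sketch: running a Boolean PubMed query (MeSH + free-text terms)
# against the NCBI E-utilities esearch endpoint. The query below is an
# illustrative placeholder, not a validated search strategy.
import json
import urllib.parse
import urllib.request

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Boolean strategy combining an indexed (MeSH) term with free-text synonyms.
query = ('("Hypertension"[MeSH Terms] OR "high blood pressure"[Title/Abstract]) '
         'AND "self-management"[Title/Abstract]')

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": query,
    "retmax": 20,        # number of PMIDs to return
    "retmode": "json",
})

with urllib.request.urlopen(f"{ESEARCH_URL}?{params}") as response:
    result = json.load(response)["esearchresult"]

print("Total records matching the strategy:", result["count"])
print("First PMIDs:", result["idlist"])
```

The record count returned by such a query is precisely what drives the trade-off described above: a broader strategy raises the count, and with it the manual screening burden.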

Enter technology

Tools that leverage text mining are increasingly available for use in systematic review search strategy design to improve the identification of relevant articles,4 which is referred to as sensitivity, and to reduce screening burden by semiautomating aspects of the search design process. Text mining software typically analyses a ‘seed’ set of bibliographic citations or documents to assess the frequency of terms or phrases. The goal is …
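As a concrete illustration of the term-frequency step described above, the sketch below counts the most frequent words and two-word phrases in a small seed set of abstracts. The seed abstracts and stop-word list are placeholder assumptions; real text mining tools apply far more sophisticated tokenisation and weighting.

```python
# Minimal sketch of term-frequency analysis on a 'seed' set of relevant
# abstracts: count frequent single words and two-word phrases that might be
# candidate free-text search terms. The abstracts and stop-word list are
# placeholders for illustration only.
import re
from collections import Counter

seed_abstracts = [
    "Self-management programmes for hypertension improve blood pressure control.",
    "A randomised trial of telemonitoring and self-management in hypertension.",
    "Blood pressure self-monitoring with nurse-led support: a systematic review.",
]

STOP_WORDS = {"a", "and", "for", "in", "of", "the", "with"}

def tokenize(text: str) -> list[str]:
    """Lowercase, split into words (keeping hyphenated terms), drop stop words."""
    words = re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())
    return [w for w in words if w not in STOP_WORDS]

unigrams = Counter()
bigrams = Counter()
for abstract in seed_abstracts:
    tokens = tokenize(abstract)
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

print("Most frequent terms:", unigrams.most_common(5))
print("Most frequent phrases:", bigrams.most_common(5))
```

In practice, candidate terms surfaced this way would be reviewed by an information specialist before being folded into the Boolean strategy.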

Footnotes

  • Contributors GPA wrote the content of this piece. RP edited it and made suggestions for improvement.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; externally peer reviewed.