Journal of Clinical Epidemiology

Volume 106, February 2019, Pages 121-135

Review
Few studies exist examining methods for selecting studies, abstracting data, and appraising quality in a systematic review

https://doi.org/10.1016/j.jclinepi.2018.10.003

Abstract

Objectives

The aim of the article was to identify and summarize studies assessing methodologies for study selection, data abstraction, or quality appraisal in systematic reviews.

Study Design and Setting

A systematic review was conducted, searching MEDLINE, EMBASE, and the Cochrane Library from inception to September 1, 2016. Quality appraisal of included studies was undertaken using a modified Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool, and key results on the accuracy, reliability, or efficiency of a methodology, or on its impact on results and conclusions, were extracted.

Results

After screening 5,602 titles and abstracts and 245 full-text articles, 37 studies were included. For screening, studies supported the involvement of two independent, experienced reviewers and the use of Google Translate when screening non-English articles. For data abstraction, studies supported the involvement of experienced reviewers (especially for continuous outcomes) and two independent reviewers, the use of dual monitors and graphical data extraction software, and contacting authors. For quality appraisal, studies supported intensive training, piloting of quality assessment tools, providing decision rules for poorly reported studies, contacting authors, and using structured tools when different study designs are included.

Conclusion

Few studies exist documenting common systematic review practices, but the included studies support several of them. These results provide an updated evidence base for current knowledge synthesis guidelines and identify methods requiring further research.

Introduction

Systematic reviews (SRs)—the gathering of all evidence relevant to a research question in a transparent and unbiased way—are considered the gold standard for synthesizing health care evidence because of their methodological rigor [1]. Guidance for their conduct and reporting is readily available and produced by several well-known organizations [2], [3], [4], [5], [6], [7], [8]. The conduct of an SR comprises six main steps: defining a clear research question and literature search strategy, selecting relevant studies, assessing their methodological quality or risk of bias (RoB), abstracting relevant data, synthesizing results, and reporting findings [6].

Much research has been conducted on optimal literature search strategies, on developing tools for assessing RoB, and on establishing components to assess the quality of reporting [9], [10], [11], [12], [13], [14], [15]. However, there is far less evidence to support current standards on how to select studies for inclusion in an SR, abstract their data, and appraise their quality (or RoB) [16]. Such evidence should also be valuable to those conducting rapid SRs, because rapid reviews must streamline the SR process while attempting to maintain its integrity [17].

As the knowledge synthesis community advocates for evidence-based practice, it is imperative that our knowledge synthesis methods are informed by research evidence. We thus aimed to conduct an SR to determine the accuracy, reliability, impact, and efficiency of different methods for study selection, data abstraction, and quality appraisal in SRs.

Study protocol

We registered the protocol for our SR with PROSPERO (CRD42016047877) [18].

Eligibility criteria

Studies examining methodological approaches for the selection of studies according to defined eligibility criteria, the abstraction of their data, or their quality appraisal were included [19]. Specifically, studies were included if they compared or evaluated the accuracy or reliability of a method or described factors that affect a method's accuracy or reliability.

We defined accuracy studies as those that compared the
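As a concrete illustration of these two notions, the minimal sketch below (ours, not the authors'; treating the dual-reviewer consensus as the gold standard, and all names, are illustrative assumptions) computes the accuracy of a single reviewer's screening decisions as sensitivity and specificity against a consensus reference, and the reliability of two reviewers as Cohen's kappa.

    # Illustrative sketch only (not from the article). Screening decisions
    # are booleans: True = include the record, False = exclude it.

    def accuracy(single, consensus):
        """Sensitivity and specificity of one reviewer's decisions against
        a gold standard (here, assumed to be the dual-reviewer consensus)."""
        tp = sum(s and c for s, c in zip(single, consensus))
        fn = sum(not s and c for s, c in zip(single, consensus))
        tn = sum(not s and not c for s, c in zip(single, consensus))
        fp = sum(s and not c for s, c in zip(single, consensus))
        return tp / (tp + fn), tn / (tn + fp)

    def cohens_kappa(r1, r2):
        """Chance-corrected agreement (Cohen's kappa) between two reviewers."""
        n = len(r1)
        observed = sum(a == b for a, b in zip(r1, r2)) / n
        p1, p2 = sum(r1) / n, sum(r2) / n
        expected = p1 * p2 + (1 - p1) * (1 - p2)  # agreement expected by chance
        return (observed - expected) / (1 - expected)

    # Ten hypothetical records: reviewer 1 misses one relevant study (record 2).
    rev1 = [True, False, False, False, True, False, False, True, False, False]
    rev2 = [True, False, False, False, True, False, True, True, False, False]
    consensus = [True, True, False, False, True, False, False, True, False, False]

    sens, spec = accuracy(rev1, consensus)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, "
          f"kappa={cohens_kappa(rev1, rev2):.2f}")
    # -> sensitivity=0.75, specificity=1.00, kappa=0.78

Under these assumptions, a single-reviewer screen trades sensitivity (missed relevant studies) against effort, which is the kind of trade-off the accuracy and efficiency studies summarized in this review quantify.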

Literature search

After screening 5,602 titles and abstracts and 245 potentially relevant full-text articles, 37 studies (Fig. 1) describing 12 methods (Table 1) for the selection (11 studies), abstraction (13 studies), or appraisal (15 studies) of studies were eligible for inclusion; because some studies evaluated more than one step, these counts sum to more than 37. A list of key excluded studies can be found in Appendix B.

Study characteristics

Table 1 summarizes the characteristics of the 37 included studies. A high proportion (17/37, 45.9%) were published between 2010 and 2014. The most common study designs

Discussion

To our knowledge, this is the first SR of methods for the conduct of several essential steps in the SR process. Our study focused on methods relevant to study selection, data abstraction, and quality appraisal; our findings confirm several current practices, provide evidence for some new or alternative practices, and discourage a few. Our results can be used to update guidance on the conduct of SRs [2], [3], [4] and rapid reviews. In addition, SR teams can use our results to

Conclusion

Few studies exist documenting common SR practices; however, limited evidence was identified supporting several of them. Our review of methodologies for the selection, abstraction, and appraisal of studies in SRs provides an updated evidence base for current SR guidelines, considerations for rapid reviews, and methods that warrant further research.

Acknowledgments

The authors are grateful for the assistance from Dr. Jessie McGowan for developing our search strategy, Elise Cogo for peer-reviewing the search strategy, Alissa Epworth for running the search and obtaining full-text articles, and Dr. Sharon Straus for reviewing and providing helpful feedback on the article. The authors also thank Krystle Amog and Shazia Siddiqui for formatting the appendices and article.

References (61)

  • A.R. Jadad et al.

    Assessing the quality of reports of randomized clinical trials: is blinding necessary?

    Control Clin Trials

    (1996)
  • A. Berard et al.

    Reliability of Chalmers' scale to assess quality in meta-analyses on pharmacological treatments for osteoporosis

    Ann Epidemiol

    (2000)
  • H.D. Clark et al.

    Assessing the quality of randomized trials: reliability of the Jadad scale

    Control Clin Trials

    (1999)
  • A.P. Verhagen et al.

    Balneotherapy and quality assessment: interobserver reliability of the Maastricht criteria list and the need for blinded quality assessment

    J Clin Epidemiol

    (1998)
  • L. Hartling et al.

    Testing the risk of bias tool showed low reliability between individual reviewers and across consensus assessments of reviewer pairs

    J Clin Epidemiol

    (2013)
  • D. Moher et al.

    Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement

    Syst Rev

    (2015)
  • J. Higgins et al.

    Cochrane handbook for systematic reviews of interventions version 5.1.0

    (2011)
  • Systematic reviews: CRD's guidance for undertaking reviews in health care

    (2009)
  • D. Owens et al.

    Methods guide for effectiveness and comparative effectiveness reviews

    (2014)
  • Institute of Medicine Committee on Standards for Systematic Reviews of Comparative Effectiveness Research

  • D. Moher et al.

    Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement

    Ann Intern Med

    (2009)
  • Joanna Briggs Institute Reviewers’ Manual: 2015 edition/supplement

    (2015)
  • Methodological expectations of Campbell Collaboration intervention reviews: conduct standards

    (2017)
  • J.P. Higgins et al.

    The Cochrane Collaboration's tool for assessing risk of bias in randomised trials

    BMJ

    (2011)
  • Wells G, Shea B, O'Connell D, Peterson J, Welch V, Losos M, et al. The Newcastle-Ottawa Scale (NOS) for assessing the...
  • P. Whiting et al.

    The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews

    BMC Med Res Methodol

    (2003)
  • J.P. Vandenbroucke et al.

    Strengthening the reporting of observational studies in epidemiology (STROBE): explanation and elaboration

    Ann Intern Med

    (2007)
  • T. Mathes et al.

    Frequency of data extraction errors and methods to increase data extraction quality: a methodological review

    BMC Med Res Methodol

    (2017)
  • A.C. Tricco et al.

    A scoping review of rapid review methods

    BMC Med

    (2015)
  • A.C. Tricco et al.

    Accuracy, reliability, impact, and efficiency of different methods for selecting studies, abstracting data, and appraising quality in a systematic review: a systematic review protocol

    (2016)

Competing interest: A.C.T. is an associate editor of the journal but was not involved with the publication process.

Ethics approval and consent to participate: not applicable.

Consent for publication: not applicable.

Informed consent and patient details: not applicable.

Submission declaration and verification: This article has not been published previously.

Availability of data and materials: The datasets used during this study are available from the corresponding author on reasonable request.

Funding: This review was funded by an Ontario Ministry of Research, Innovation, and Science Early Researcher Award (2015 to 2020) awarded to A.C.T. A.C.T. is also funded by a Tier 2 Canada Research Chair in Knowledge Synthesis (2016 to 2021). J.H. is supported by a Canadian Institutes of Health Research Doctoral Award. M.J.P. is supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535).
