Review
Few studies exist examining methods for selecting studies, abstracting data, and appraising quality in a systematic review
Introduction
Systematic reviews (SRs)—the gathering of all evidence relevant to a research question in a transparent and unbiased way—are considered the gold standard for synthesizing health care evidence because of their methodological rigor [1]. Guidance for their conduct and reporting is readily available and has been produced by several well-known organizations [2], [3], [4], [5], [6], [7], [8]. The conduct of an SR comprises six main steps: defining a clear research question and literature search strategy, selecting relevant studies, assessing their methodological quality or risk of bias (RoB), abstracting relevant data, synthesizing results, and reporting findings [6].
Much research has been conducted on optimal literature search strategies, developing tools for assessing RoB, and establishing components to assess the quality of reporting [9], [10], [11], [12], [13], [14], [15]. However, there is much less information to support current standards on how to select studies for inclusion in an SR, abstract their data, and appraise their quality (or RoB) [16]. This information should also be valuable to those conducting rapid SRs, because rapid reviews must streamline the SR process while attempting to maintain its integrity [17].
As the knowledge synthesis community advocates for evidence-based practice, it is imperative that our knowledge synthesis methods are informed by research evidence. We thus aimed to conduct an SR to determine the accuracy, reliability, impact, and efficiency of different methods for study selection, data abstraction, and quality appraisal in SRs.
Study protocol
We registered the protocol for our SR with PROSPERO (CRD42016047877) [18].
Eligibility criteria
Studies examining methodological approaches for the selection of studies according to defined eligibility criteria, abstraction of their data, or their quality appraisal were included [19]. Specifically, studies were included if they compared or evaluated the accuracy or reliability of a method or described factors that affect the method's accuracy or reliability.
We defined accuracy studies as those that compared the
Literature search
After screening 5,602 titles and abstracts, and 245 potentially relevant full-text articles, 37 studies (Fig. 1) describing 12 methods (Table 1) for the selection (11 studies), abstraction (13 studies), or appraisal (15 studies) of studies were eligible for inclusion. A list of key excluded studies can be found in Appendix B.
Study characteristics
Table 1 summarizes the characteristics of the 37 included studies. A high proportion of studies were published between 2010 and 2014 (45.9%). The most common study designs
Discussion
To our knowledge, this is the first SR of methods for the conduct of several essential steps in the SR process. Our study focused on methods relevant to study selection decisions, data abstraction, and quality appraisal, and findings confirm several current practices and provide evidence for some new or alternative practices while discouraging a few. Our results can be used to update guidance on the conduct of SRs [2], [3], [4] and rapid reviews. In addition, SR teams can use our results to
Conclusion
Few studies documenting common SR practices exist; however, we identified limited evidence supporting several practices. Our review of methods for the selection, abstraction, and appraisal of studies in SRs provides an updated evidence base for current SR guidelines, offers considerations for rapid reviews, and identifies methods that warrant further research.
Acknowledgments
The authors are grateful for the assistance from Dr. Jessie McGowan for developing our search strategy, Elise Cogo for peer-reviewing the search strategy, Alissa Epworth for running the search and obtaining full-text articles, and Dr. Sharon Straus for reviewing and providing helpful feedback on the article. The authors also thank Krystle Amog and Shazia Siddiqui for formatting the appendices and article.
References (61)
- PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol (2016)
- The development of a quality appraisal tool for studies of diagnostic reliability (QAREL). J Clin Epidemiol (2010)
- CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg (2012)
- Identifying studies for systematic reviews of diagnostic tests was difficult due to the poor sensitivity and precision of methodologic filters and the lack of information in the abstract. J Clin Epidemiol (2005)
- An efficient strategy allowed English-speaking reviewers to identify foreign-language articles eligible for a systematic review. J Clin Epidemiol (2014)
- Does blinding of readers affect the results of meta-analyses? Lancet (1997)
- Dual computer monitors to increase efficiency of conducting systematic reviews. J Clin Epidemiol (2014)
- Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol (2006)
- Systematic review data extraction: cross-sectional study showed that experience did not increase accuracy. J Clin Epidemiol (2010)
- Use of kappa statistic in determining validity of quality filtering for meta-analysis: a case study of the health effects of electromagnetic radiation. J Clin Epidemiol (1996)
- Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials
- Reliability of Chalmers' scale to assess quality in meta-analyses on pharmacological treatments for osteoporosis. Ann Epidemiol
- Assessing the quality of randomized trials: reliability of the Jadad scale. Control Clin Trials
- Balneotherapy and quality assessment: interobserver reliability of the Maastricht criteria list and the need for blinded quality assessment. J Clin Epidemiol
- Testing the risk of bias tool showed low reliability between individual reviewers and across consensus assessments of reviewer pairs. J Clin Epidemiol
- Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev
- Cochrane handbook for systematic reviews of interventions version 5.1.0
- Systematic reviews: CRD's guidance for undertaking reviews in health care
- Methods guide for effectiveness and comparative effectiveness reviews
- Institute of Medicine Committee on Standards for Systematic Reviews of Comparative Effectiveness Research
- Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med
- Joanna Briggs Institute Reviewers' Manual: 2015 edition/supplement
- Methodological expectations of Campbell Collaboration intervention reviews: conduct standards
- The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ
- The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol
- Strengthening the reporting of observational studies in epidemiology (STROBE): explanation and elaboration. Ann Intern Med
- Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol
- A scoping review of rapid review methods. BMC Med
- Accuracy, reliability, impact, and efficiency of different methods for selecting studies, abstracting data, and appraising quality in a systematic review: a systematic review protocol
Competing interests: A.C.T. is an associate editor of the journal but was not involved with the publication process.
Ethics approval and consent to participate: not applicable.
Consent for publication: not applicable.
Informed consent and patient details: not applicable.
Submission declaration and verification: This article has not been published previously.
Availability of data and materials: The datasets used during this study are available from the corresponding author on reasonable request.
Funding: This review was funded by an Ontario Ministry of Research, Innovation, and Science Early Researcher Award (2015 to 2020) awarded to A.C.T. A.C.T. is also funded by a Tier 2 Canada Research Chair in Knowledge Synthesis (2016 to 2021). J.H. is supported by the Canadian Institutes of Health Research Doctoral Award. M.J.P. is supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535).