
The Pandora’s Box of Evidence Synthesis and the case for a living Evidence Synthesis Taxonomy
  1. Zachary Munn1,
  2. Danielle Pollock1,
  3. Timothy Hugh Barker1,
  4. Jennifer Stone1,
  5. Cindy Stern1,
  6. Edoardo Aromataris1,
  7. Holger J Schünemann2,3,
  8. Barbara Clyne4,
  9. Hanan Khalil5,
  10. Reem A Mustafa6,
  11. Christina Godfrey7,
  12. Andrew Booth8,
  13. Andrea C Tricco9,10,
  14. Alan Pearson1
  1. JBI, The University of Adelaide Faculty of Health and Medical Sciences, Adelaide, South Australia, Australia
  2. Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  3. Department of Biomedical Sciences, Humanitas University, Milan, Italy
  4. Department of General Practice, RCSI University of Medicine and Health Sciences, Dublin, Ireland
  5. School of Psychology and Public Health, La Trobe University, Melbourne, Victoria, Australia
  6. Internal Medicine, Division of Nephrology and Hypertension, University of Kansas School of Medicine, Kansas City, Missouri, USA
  7. Queen’s Collaboration for Health Care Quality: A JBI Centre of Excellence, School of Nursing, Queen's University Kingston, Kingston, Ontario, Canada
  8. School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
  9. Epidemiology Division and Institute of Health Policy, Management, and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
  10. Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, Ontario, Canada

  Correspondence to Dr Zachary Munn, JBI, The University of Adelaide Faculty of Health and Medical Sciences, Adelaide, South Australia, Australia; zachary.munn@adelaide.edu.au


Introduction

Have we, as an evidence-based health community, opened the Pandora’s box of evidence synthesis? There now exists a plethora of overlapping evidence synthesis approaches and duplicate, redundant and poor-quality reviews.1–4 After years of advocating for systematic reviews of the evidence, there is a risk that this message has been disseminated too widely and misinterpreted in the process. We have reached a point where, in some fields, more reviews exist than clinical trials, where reviews on the same topic are conducted in parallel, and where evidence syntheses possess limited utility for decision-making because of their poor quality or poor reporting. To paraphrase the late Douglas Altman,5 it is possible we are now at a stage where we need fewer reviews, better reviews and reviews done for the right reasons—as opposed to the current state of mass production (approximately 80 reviews per day).6

How have we arrived at this point, and is it a point of no return? One obvious reason is that, over time, systematic review methods (and evidence synthesis more broadly) have become increasingly demanding, complex and multifaceted. This evolution in review methods has caused, and continues to cause, great confusion for both novice and experienced synthesisers of evidence.7–10 For example, confusion persists between scoping reviews and systematic reviews and the correct application of each approach to evidence synthesis. This, in turn, results in scoping reviews assessing the effects of interventions when they are neither intended nor equipped to do so.11 Similar confusion is evident with other evidence synthesis methodologies, such as scoping reviews and mapping reviews, where further guidance may be needed to support the appropriate choice of methodology.12 Indeed, it has already been argued that the current proliferation of review types is creating challenges for the terminology used to describe such reviews, creating fundamental issues for evidence synthesisers.7 Although an encompassing toolkit for evidence synthesis that can answer an array of complex and multifaceted questions is to be welcomed, the confusion (and complexity) associated with this expansion cannot be overlooked. With supervisors encouraging increasing numbers of novice researchers to undertake systematic reviews (fuelled by the misplaced idea of a quick or easy publication that requires little [if any] infrastructure support [or ethics approval], or as a requirement of doctoral studies or grant applications), and with funders and frameworks rightly promoting that interventional research be developed based on existing evidence, we are likely to encounter further proliferation of misplaced, misconducted and redundant evidence synthesis projects.13

We are concerned about this current state of affairs within the field of evidence synthesis, but believe we have not yet reached the point of no return. As such, this article discusses some of the pitfalls associated with an ever-expanding toolkit for evidence synthesis (likened to the opening of Pandora’s box) and potential solutions for improving the cohesiveness of evidence synthesis.

Confusion regarding the many evidence synthesis approaches

Within the family of systematic reviews there are different approaches, including reviews addressing interventions, prognosis, test accuracy, values and other overarching types of systematic reviews.14 Despite the usefulness of systematic reviews for synthesising research, circumstances persist where they are unwieldy or not the tool of choice to meet the requirements of knowledge users.14–16 This recognition has led to many alternative approaches to evidence synthesis, including realist reviews, scoping reviews, umbrella reviews, concept analyses and others.7–10 17 In addition, numerous methods have been proposed for the synthesis of qualitative research,18 19 including thematic synthesis, realist synthesis, content analysis, meta-ethnography and meta-aggregation.18–20 These developments have expanded the array of methods available for conducting evidence syntheses, as described in the literature.

Although the differentiation of evidence synthesis types has made it possible to tailor methods to particular questions, one pitfall has been the emergence of other, perhaps less useful and/or rigorous, approaches. These include (among others) the numerous products collectively referred to as ‘rapid’ reviews, as well as integrative reviews. Rapid reviews can be critiqued for their lack of uniformity or agreement on an ideal approach, and often appear haphazard in their conduct. However, guidance on the conduct of rapid reviews is being produced,21 and they have played a key role in providing evidence in emergency situations such as the COVID-19 pandemic.22 When done appropriately (and transparently), and when requested by decision-makers, rapid reviews do have a place in the evidence synthesis ecosystem. Integrative reviews, which emerged as a reaction to the early practice of only including randomised controlled trials in systematic reviews, represent a diffuse methodology that now appears redundant given the appearance of more rigorous approaches to incorporating different study designs (such as non-randomised and randomised studies in intervention reviews23 or quantitative and qualitative studies in a single review24) and to synthesising different types of evidence (such as qualitative evidence synthesis), in addition to statistical methods for meta-analysing non-randomised studies.

Other evidence synthesis types possess clearly defined methods but are yet to realise their full potential within the context of evidence-based practice. For example, concept analyses,25 26 ‘a process whereby concepts are logically and systematically investigated to form clear and rigorously constructed conceptual definitions,’26 share at least one similarity with scoping reviews (which are more robust in their approaches): the intention to clarify concepts in the literature.27 However, concept analyses lack the immediately apparent utility of scoping reviews. Given that concept analyses have been critiqued for having no impact on practice,25 a case can be made for revisiting and reconsidering whether they serve a role separate from that of the scoping review.

It is now possible that almost any question can be answered by some application of an appropriate evidence synthesis process. However, it can be daunting and difficult for researchers to determine what type of synthesis approach is required, how to conceptualise and phrase the synthesis question, how to define the inclusion criteria, and how to select the appropriate methods for analysis and synthesis. Typologies, taxonomies and other evidence synthesis classification systems may play a role here.

Typologies and taxonomies of evidence synthesis

Typologies of evidence synthesis types offer a starting reference point for researchers, policy makers and funders when deciding on a review approach.8–10 These individual typologies provide useful guidance on different review types but remain static, largely existing as traditional journal publications that are not kept up to date and offer minimal links to supporting materials. Furthermore, the development processes for many of these taxonomies are unclear and their methods non-transparent. In a discussion paper, Gough et al7 distinguished between various review designs and methods but stopped short of providing a taxonomy of review types, on the rationale that in the field of evidence synthesis ‘the rate of development of new approaches to reviewing is too fast and the overlap of approaches too great for that to be helpful.’7 Instead, the authors provided a useful description of how reviews may differ and, more importantly, an explanation of why this may be the case. We concur that evidence synthesis methodology is a rapidly developing field, and that within different systematic review types there may be many different subsets and complexities that need to be addressed. These classifications can be considered to lie at the roots of a much larger family tree.

The evidence synthesis taxonomy initiative

Previous taxonomies and typologies are useful for categorising and directing researchers to the ideal synthesis approach; however, limitations exist regarding their comprehensiveness, development methods and currency. As the research and policy communities become increasingly aware of the need for evidence-informed choices, the breadth of evidence synthesis approaches also continues to expand. As this enthusiasm for evidence synthesis continues to propagate, we need measures in place to ensure that: (1) we are making informed choices in the design and conduct of evidence syntheses; (2) we are producing these reviews as efficiently and accurately as possible, using reproducible and transparent methods; and (3) all results and implications from these syntheses are trustworthy. As such, a resource that guides and drives appropriate synthesis approaches in response to relevant clinical and policy needs, such as a research-informed evidence synthesis taxonomy, becomes essential. The authors of this paper are now collaborating with like-minded researchers on such an initiative, with a shared commitment to keep it a living taxonomy, continuously updated (such as via a living wiki platform) and refined alongside advances in the field. This work complements an existing decision tool, aptly named ‘Right Review’, that steers researchers and decision-makers towards the right choice when conducting a review.28 In addition, this work will align neatly with the position statement from Evidence Synthesis International, which recognises the need to develop and share standards, terminology and methodology consistently across the field while acknowledging the need for ‘fit for purpose’ approaches for diverse evidence requirements.29
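To make the idea of a living, machine-readable taxonomy concrete, the sketch below models a single hypothetical taxonomy entry as a small data record that could be versioned and updated on a wiki-style platform. It is a minimal illustration only: every field name and value (SynthesisType, suited_questions, the example links between review types and so on) is our assumption for this example, not part of the initiative or the ‘Right Review’ tool described above.

```python
from dataclasses import dataclass, field

@dataclass
class SynthesisType:
    """One hypothetical entry in a living evidence synthesis taxonomy."""
    name: str                       # eg 'Scoping review'
    suited_questions: list[str]     # question types the approach is suited to
    reporting_guideline: str        # eg 'PRISMA-ScR'
    related_types: list[str] = field(default_factory=list)   # cross-links in the taxonomy
    guidance_links: list[str] = field(default_factory=list)  # pointers to methods guidance
    last_reviewed: str = ""         # supports the 'living' update cycle

# Example entry; the values are illustrative, not authoritative.
scoping_review = SynthesisType(
    name="Scoping review",
    suited_questions=["mapping available evidence", "clarifying key concepts"],
    reporting_guideline="PRISMA-ScR",
    related_types=["Mapping review", "Systematic review"],
    last_reviewed="2023-01",
)
```

A structured record of this kind would let a decision tool filter entries by question type, flag entries whose last review date has lapsed, and expose the cross-links between overlapping review types, rather than leaving these distinctions buried in static journal articles.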

We envisage a number of challenges in attempting this work. First, we will need to commit to a truly living approach, given the rate of development of new methods and approaches. Second, given our collaborative approach, there are likely to be strong views among the various contributors and organisations, which will require the use of formal consensus development methods. Third, we want to ensure all stakeholders are included; this will create challenges in organising a diverse group of people and opinions, but we see it as critical to the utility of the final taxonomy. The inclusion of these diverse stakeholders will also be critical to ensuring the taxonomy addresses research waste and the overproduction of reviews. For example, by including editors, we hope that journals and publishers may be better able to identify redundant reviews, or reviews applying inappropriate methodology, prior to publication, in addition to updating author guidelines and requirements as a means of preventing these problems.

Conclusion

We suggest that the current proliferation of evidence synthesis types is creating challenges for the terminology used to describe such reviews, contributing to confusion in the field, and fuelling the mass production of a redundant, misleading and conflicted evidence base. Currently, no well-developed, continuously updated framework exists to name and categorise the different approaches to evidence synthesis. As such, a rigorous, empirically derived taxonomy is required that can comprehensively identify extant methods, clarify distinctive nomenclature and provide a classification system of methods and approaches. We hope that these efforts will reduce confusion within evidence synthesis, leading reviewers to select the best approach for their question and purpose among all current evidence synthesis options. Despite the challenges expected in developing this work (including achieving consensus across such diverse fields), we remember that at the bottom of Pandora’s box there was, after all, hope. We, too, remain hopeful regarding the continued utility of appropriate evidence synthesis to inform high-quality decisions.

Ethics statements

Patient consent for publication

References

Footnotes

  • Twitter @ZacMunn

  • Contributors ZM conceived the article, wrote the first draft, incorporated feedback and finalised the manuscript. DP, THB, JS, CS, EA, HJS, BC, HK, RAM, CG, AB, ACT and AP all contributed to discussions and initial ideas in the article, reviewed drafts, provided feedback and approved the final manuscript.

  • Competing interests ZM is employed by JBI, an evidence-based healthcare research and development organisation situated within the University of Adelaide and is supported by an NHMRC Investigator Grant 1195676. ACT is funded by the Tier 2 Canada Research Chair in Knowledge Synthesis.

  • Provenance and peer review Not commissioned; externally peer reviewed.