
Journal of Clinical Epidemiology

Volume 77, September 2016, Pages 125-133

Original Article
Most systematic reviews of adverse effects did not include unpublished data

https://doi.org/10.1016/j.jclinepi.2016.05.003

Abstract

Objectives

We sought to identify the proportion of systematic reviews of adverse effects that search for unpublished data and the rate at which such searches succeed in identifying unpublished data for inclusion in a systematic review.

Study Design and Setting

Two reviewers independently screened all records published in 2014 in the Database of Abstracts of Reviews of Effects (DARE) for systematic reviews where the primary aim was to evaluate an adverse effect or effects. Data were extracted on the types of adverse effects and interventions evaluated, sources searched, how many unpublished studies were included, and source or type of unpublished data included.

Results

From 9,129 DARE abstracts, 348 met our inclusion criteria. Most of these reviews evaluated a drug intervention (237/348, 68%) with specified adverse effects (250/348, 72%). Over a third (136/348, 39%) of all the reviews searched a specific source for unpublished data, such as conference abstracts or trial registries, and nearly half of these reviews (65/136, 48%) included unpublished data. An additional 13 reviews included unpublished data despite not searching specific sources for unpublished studies. Overall, 22% (78/348) of reviews included unpublished data/studies.

Conclusion

Most reviews of adverse effects do not search specifically for unpublished data but, of those that do, nearly half are successful.

Introduction

Adverse effects are harmful or undesirable outcomes that occur during or after the use of a drug or intervention, for which there is at least a reasonable possibility of a causal relation [1]. Information on the adverse effects of health care interventions is important for decision making by regulators, policy makers, health care professionals, and patients. Serious or important adverse effects may occur rarely, and as such, systematic reviews and meta-analyses that synthesize harms data from numerous sources (potentially involving both published and unpublished data sets) can provide useful insights. However, because adverse effects data are poorly reported in published clinical trials [2], [3], [4], [5], [6], [7], [8], [9], systematic reviews of adverse effects may be incomplete if they rely on peer-reviewed journal publications alone, or if the reviewers conduct only a relatively limited search for unpublished sources.

A consensus on a clear definition of “published” and “unpublished” data is difficult to reach. For practical reasons and to maintain consistency with our previous research work [10], “published” will refer to peer-reviewed journal articles and “unpublished” data will refer to all other material. It is acknowledged, however, that unpublished data can be publicly available (e.g., through Web registries or regulatory agencies), but these do not undergo the processes of peer review, editing, formatting, and document identification that are part and parcel of established journal publications.

Serious concerns have emerged regarding publication bias or selective omission of outcomes data, whereby negative results are less likely to be published than positive results and adverse effects are underreported [11]. One way to attempt to overcome these biases is to include unpublished studies or data. Current guidance for all types of systematic reviews (irrespective of outcome) recommends searching unpublished sources [12], [13], [14], such as contacting authors or manufacturers, seeking conference abstracts, and searching trial registries (including industry trial registries). For reviews of adverse effects, the Cochrane Handbook also recommends searching regulatory authorities' web sites, such as those of the US Food and Drug Administration (FDA), the Medicines & Healthcare products Regulatory Agency, and the European Medicines Agency (EMA) [12]. Such guidance may have led to more systematic reviewers searching for unpublished data.

Nevertheless, previous research of systematic reviews of adverse effects from 1994 to 2011 has indicated that few attempts are made to search for unpublished data or industry-funded data [10], [15]. This may be due to an expected low return or the difficulties of searching for unpublished data or in obtaining and incorporating unpublished data into systematic reviews [16] or a concern that unpublished data are not peer reviewed. In addition, it is unknown whether this situation is improving.

In contrast, research has indicated that much of the data on adverse effects is unpublished, accounting for between 43% and 100% of the number of adverse effects, and that a wider range of types of adverse effects is reported in the unpublished literature [9], [17], [18], [19], [20], [21], [22], [23], [24], [25]. A considerable amount of otherwise “missing” adverse effects data may therefore potentially be retrieved from a diverse range of other sources, such as trial registries, regulatory agencies, or authors. This has particularly important implications for evaluations of adverse effects because conclusions based only on published studies may not present a true picture of the adverse effects.

A lack of searching for and identification of unpublished data may pose serious threats to the validity of systematic reviews of adverse effects. Yet little is known about (1) whether systematic reviewers fail to search for unpublished data, (2) whether they fail to identify unpublished data when they do search, and (3) which data sources are most fruitful for unpublished data. Hence, we aimed to estimate the extent to which unpublished data are sought and identified within systematic reviews of adverse effects by carrying out a retrospective analysis of systematic reviews published in 2014.


Search strategy

Systematic reviews of adverse effects were identified by screening all records published in 2014 in the Database of Abstracts of Reviews of Effects (DARE) (via the Centre for Reviews and Dissemination web site, April 2015). No search strategy was implemented, as previous research has indicated that even very broad search strings would miss relevant records [26]. The DARE database was chosen because it was the most accessible major collection of systematic reviews of health care interventions.

Results

From 9,129 DARE abstracts screened, 451 full reports were retrieved and 348 reviews met the inclusion criteria. Overall 4% (348/9,129) of reviews in DARE with a publication date of 2014 focused on adverse effects.

Discussion

Thirty-nine percent (136/348) of systematic reviews of adverse effects published in 2014 searched at least one source of unpublished studies (such as conference abstract databases or trial registries). Encouragingly, nearly half of these reviews (65/136, 48%) were successful in identifying and including an unpublished study or unpublished data.
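The proportions reported above can be checked with a few lines of arithmetic. The sketch below (in Python, using only the counts stated in the abstract and results) reproduces each rounded percentage:

```python
# Counts reported in the abstract and results sections above
screened = 9129              # DARE abstracts screened
included = 348               # reviews meeting the inclusion criteria
searched_unpublished = 136   # reviews that searched a specific unpublished source
found_unpublished = 65       # of those, reviews that included unpublished data
any_unpublished = 78         # all reviews that included unpublished data/studies

print(round(100 * included / screened))                       # 4% of DARE records
print(round(100 * searched_unpublished / included))           # 39% searched
print(round(100 * found_unpublished / searched_unpublished))  # 48% success rate
print(round(100 * any_unpublished / included))                # 22% overall
```

Each printed value matches the corresponding percentage quoted in the text (4%, 39%, 48%, and 22%).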

The overall proportion of all systematic reviews of adverse effects including unpublished data or studies, however, remains low at just over a fifth (78/348, 22%).

Conclusions

Most reviews of adverse effects do not search specifically for unpublished data but, of those that do, nearly half are successful. Given the potential for publication and outcome reporting bias, easier access and greater transparency in reporting of adverse effects data is urgently required, and more reviews should make efforts to identify such unpublished data.

We also need detailed guidance on the most useful sources to search for unpublished adverse effects data. Further research, therefore, is needed.

References (41)

  • R.C. Pezo et al. Quality of safety reporting in oncology-randomized controlled trials (RCTs). J Clin Oncol (2011)
  • I. Pitrou et al. Reporting of safety results in published reports of randomized controlled trials. Arch Intern Med (2009)
  • O. Scharf et al. Adverse event reporting in publications compared with sponsor database for cancer clinical trials. J Clin Oncol (2006)
  • B. Hart et al. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ (2012)
  • Undertaking systematic reviews of effectiveness: CRD guidance for those carrying out or commissioning reviews (2008)
  • Finding What Works in Health Care: Standards for Systematic Reviews (2011)
  • S. Golder et al. Is there evidence for biased reporting of published adverse effects data in pharmaceutical industry-funded studies? Br J Clin Pharmacol (2008)
  • M.L. van Driel et al. Searching for unpublished trials in Cochrane reviews may not be worth the effort. J Clin Epidemiol (2009)
  • D.M. Hartung et al. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med (2014)

Funding: S.G. is supported by the National Institute for Health Research (PDF-2014-07-041).

Conflicts of interest: This report is independent research arising from a Postdoctoral Research Fellowship.

The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research, or the Department of Health.
