Value and usability of unpublished data sources for systematic reviews and network meta-analyses
  1. Nicholas James Anthony Halfpenny,
  2. Joan Mary Quigley,
  3. Juliette Catherine Thompson,
  4. David Alexander Scott
  1. Department of Health Economics & Epidemiology, ICON plc, Abingdon, Oxon, UK
  1. Correspondence to Nicholas James Anthony Halfpenny; Nicholas.halfpenny@iconplc.com

Background

Systematic reviews and evidence-based medicine form the foundation of decision-making for healthcare agencies, making the robustness of the evidence base paramount.1–4 Where direct randomised evidence between all relevant comparators is limited or unavailable, network meta-analysis (NMA) is increasingly being used to inform healthcare decision-making.5 Feasibility of an NMA is determined by the presence of a connected network and the comparability of the data.6
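
To make the connectivity requirement concrete, a network can be treated as a graph in which treatments are nodes and head-to-head trial comparisons are edges; an NMA spanning a set of treatments is feasible only if that graph is connected. The following is a minimal, illustrative Python sketch with invented treatment names, not code from any of the analyses discussed here.

```python
from collections import defaultdict


def is_connected(treatments, comparisons):
    """Return True if trial comparisons link every treatment of interest.

    treatments: set of treatment names (graph nodes)
    comparisons: iterable of (treatment_a, treatment_b) pairs, one per
        direct head-to-head comparison in the evidence base (graph edges)
    """
    adjacency = defaultdict(set)
    for a, b in comparisons:
        adjacency[a].add(b)
        adjacency[b].add(a)

    treatments = set(treatments)
    if not treatments:
        return True
    # Simple depth-first traversal from an arbitrary starting treatment
    start = next(iter(treatments))
    seen, stack = {start}, [start]
    while stack:
        for neighbour in adjacency[stack.pop()]:
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return seen >= treatments


# Hypothetical network: placebo-controlled trials connect drugs A and B,
# but no trial links drug C, so an NMA including drug C is not feasible.
trials = [("placebo", "drug A"), ("placebo", "drug B")]
print(is_connected({"placebo", "drug A", "drug B", "drug C"}, trials))  # False
```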

It is rare that all data for all end points of interest are available or reported in peer-reviewed publications, and gaps in the available data can preclude evidence synthesis due to the absence of parameters to inform or connect evidence networks.7,8

In conducting a systematic review, peer-reviewed publications and (non-peer-reviewed) conference proceedings are searched to identify studies relevant to the research question.1,2 When such evidence is limited, inclusion of data from other sources can be important.9,10

Owing to the increased transparency in clinical trial reporting mandated by a number of authorities, additional sources of data are becoming more accessible, for example, through the publicly accessible websites of clinical trial registries. The US Food and Drug Administration (FDA) requires all interventional studies of drugs, biologics and devices falling within its jurisdiction to be registered on ClinicalTrials.gov, and the European Medicines Agency (EMA) stipulates that all trials carried out in the European Union are registered in the European Clinical Trials database (EudraCT). On 21 July 2014, it became mandatory for sponsors to post summary information, including trial design, objective(s), baseline characteristics and end point data, to EudraCT, where it is made available to the public.11,12
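
Registry records can also be retrieved programmatically. The sketch below uses what is, at the time of writing, the public ClinicalTrials.gov v2 REST API; the endpoint and JSON field names are assumptions that may change over time, so treat this as illustrative rather than a stable recipe.

```python
import requests


def fetch_registry_record(nct_id: str) -> dict:
    """Download the full registry record for one trial by NCT identifier."""
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


# NCT01245062 is the METRIC trial identifier used in the case studies below.
record = fetch_registry_record("NCT01245062")

# Sponsor-posted results, where available, sit in the results section;
# the module and key names below reflect the v2 schema as we understand it.
outcome_module = record.get("resultsSection", {}).get("outcomeMeasuresModule", {})
for outcome in outcome_module.get("outcomeMeasures", []):
    print(outcome.get("title"))
```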

As part of regulatory submissions, the EMA and the FDA publish publicly available documents that provide selected trial results used to inform these submissions. For example, the EMA's European Public Assessment Report (EPAR) provides information on the medicine as well as the EMA assessment, and the FDA's preapproval and postmarketing documents report limited clinical trial information. Both agencies may also request additional data on end points that are not otherwise reported or published should they feel the study sponsor has not provided enough data to inform the regulatory decision.11,13 Redacted versions of clinical study reports, the most detailed source of data for clinical trials, are sometimes made available by trial sponsors through their websites. However, these websites are often not user-friendly and locating the clinical study report of interest can be difficult. Even where a report is available, these documents are long and challenging to navigate.

Thus, systematic reviewers have an increasing number of sources available to populate data extraction forms appropriately and completely, in order to have all available information pertinent to the assessment and conduct of an NMA. We illustrate, through the use of case studies, the importance of considering additional data sources for clinical trial data beyond peer-reviewed publications, such as clinical trials registries and clinical study reports. Failure to do so may result in incomplete data and preclude or potentially bias analyses.

Methods

Case reports 1 and 2: reporting in peer-reviewed literature versus clinical trials registries and clinical study reports

We selected two examples of clinical trials, based on our experience in systematic reviews, where we had previously observed differences in the reporting of efficacy results between peer-reviewed publications and clinical trials registries. Case 1 was the METRIC trial (NCT01245062), a phase 3 randomised controlled trial (RCT) sponsored by GlaxoSmithKline (GSK) that investigated the efficacy and safety of trametinib versus chemotherapy in advanced or metastatic BRAF V600E/K melanoma. For Case 2, we selected CHERISH (NCT00988221), a Hoffmann-La Roche-sponsored RCT investigating the efficacy and safety of tocilizumab versus placebo in patients with juvenile idiopathic arthritis (JIA). For both case studies, we compared outcomes reported in peer-reviewed publications with those available from ClinicalTrials.gov, EudraCT and the corresponding clinical study reports.

Case reports 3 and 4: augmenting the evidence base with data from regulatory authorities

We evaluated how data from regulatory authorities added to the evidence base for systematic reviews and NMAs. To illustrate this, we used two examples: the impact of data from the EMA on an NMA in hepatitis C virus (HCV) infection previously conducted by our group,14 and an NMA in chronic obstructive pulmonary disease (COPD) published by Dong et al.15

In each case study, differences between the sources were reported and their potential impact on conducting systematic reviews and NMAs was assessed.

All data were extracted by a single reviewer and the extractions were independently verified by a second reviewer.
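
Once both reviewers' extractions are captured in a structured form, verification of this kind can be partly mechanised by diffing the two records field by field. This is a hypothetical sketch; the field names and values are invented for illustration and do not describe our actual tooling.

```python
def find_discrepancies(primary: dict, verification: dict) -> list:
    """List (field, primary value, verification value) wherever the
    independent verification disagrees with the primary extraction."""
    fields = sorted(set(primary) | set(verification))
    return [
        (field, primary.get(field), verification.get(field))
        for field in fields
        if primary.get(field) != verification.get(field)
    ]


# Invented extraction records for a single trial end point
reviewer_1 = {"trial": "NCT01245062", "end point": "PFS", "median (months)": 4.8}
reviewer_2 = {"trial": "NCT01245062", "end point": "PFS", "median (months)": 4.9}

for field, v1, v2 in find_discrepancies(reviewer_1, reviewer_2):
    # Any mismatch is reconciled against the source document.
    print(f"{field}: reviewer 1 recorded {v1!r}, reviewer 2 recorded {v2!r}")
```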

Results and discussion

Case report 1: reporting in peer-reviewed literature versus clinical trials registries and clinical study reports—melanoma

The primary peer-reviewed publication for METRIC is Flaherty et al.16 A secondary publication reports a quality-of-life analysis only and is not discussed further here.17 We identified the associated ClinicalTrials.gov webpage using the trial identifier NCT01245062 and extracted the key outcomes investigated (this trial was not registered on EudraCT). We obtained a redacted version of the clinical study report from the GSK clinical study register (http://www.gsk-clinicalstudyregister.com).18 Aggregate data were not redacted and all end points from the trial were available.

Table 1 summarises the efficacy outcomes reported in each data source. An important difference was in the reporting of end points by treatment line: whereas Flaherty et al16 reported on the intention-to-treat (ITT) population (including patients who were chemotherapy naïve (first-line) and those who were chemotherapy experienced (second-line)), ClinicalTrials.gov reported progression-free survival (PFS) separately for first-line and second-line patients, data that are particularly useful in systematic reviews and NMAs investigating melanoma treatments for a specific treatment line. Furthermore, Flaherty et al16 reported results only for the ITT population, whereas ClinicalTrials.gov reported outcomes for several subpopulations, including patients with brain metastases (table 1).

Table 1

Comparison of reporting between sources in METRIC

We found further discrepancies: Flaherty et al16 reported hazard ratios (HRs) for PFS for the subpopulations, but did not report median PFS. In contrast, ClinicalTrials.gov reported median PFS for these subpopulations but did not report HRs. Different presentations of data may consequently inhibit or complicate evidence syntheses.19
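
One common, and strong, assumption sometimes used to bridge such presentation gaps is that event times are exponentially distributed, under which a hazard ratio can be approximated from the two medians alone. The sketch below illustrates this with invented numbers; it is not a recommendation, and any such imputation should be tested in sensitivity analyses.

```python
import math


def exponential_hazard(median: float) -> float:
    """Constant hazard implied by a median event time if survival is
    exponential: S(t) = exp(-rate * t), so rate = ln(2) / median."""
    return math.log(2) / median


def hr_from_medians(median_treatment: float, median_control: float) -> float:
    """Approximate HR (treatment vs control) under the exponential
    assumption; this reduces to median_control / median_treatment."""
    return exponential_hazard(median_treatment) / exponential_hazard(median_control)


# Invented medians, in months, for illustration only
print(round(hr_from_medians(median_treatment=4.8, median_control=1.5), 2))  # 0.31
```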

All study outcomes considered in this example were available from the redacted clinical study report; however, as the report ran to 2236 pages, considerable time was needed to source these data. Additional end points were also available from the clinical study report, with several results reported for multiple subgroups. However, given the limited benefit to this case study and the extensive nature of the document, these additional end points were not extracted.

Case report 2: reporting in peer-reviewed literature versus clinical trials registries and clinical study reports—JIA

Key outcomes for the CHERISH trial were identified from the peer-reviewed publication20 and from ClinicalTrials.gov, again using the unique study identifier NCT00988221 (table 2). No results were available from the EudraCT entry, and the clinical study report was not publicly available from the Roche website.

Table 2

Comparison of reporting between sources in CHERISH

All results reported in Brunner et al20 were available through ClinicalTrials.gov, but several outcomes available on ClinicalTrials.gov were not reported in the peer-reviewed publication.

A major difference was that ClinicalTrials.gov reported 104-week outcome data, whereas the latest time point reported by Brunner et al20 was week 40. Longer follow-up data are important as they permit assessment of long-term efficacy and safety; this is particularly true for trials in which survival is a measure of efficacy.20 Reporting multiple time points can also be informative when conducting longitudinal NMA.21

Efficacy was assessed using the JIA American College of Rheumatology (ACR) response criteria, which comprise the JIA core response variables (CRVs) and the JIA-ACR 30/50/70/90 responses (achieved if a patient has a 30/50/70/90% or greater improvement in three or more of the CRVs). Brunner et al20 reported only the proportion of patients achieving the CRVs at week 16, whereas ClinicalTrials.gov additionally reported the change in each CRV from baseline to week 16. Furthermore, ClinicalTrials.gov reported all efficacy outcomes at week 104, none of which were reported in the peer-reviewed publication.
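
As a rough illustration of the response definition given above, the sketch below counts CRVs improving by at least the stated percentage. The full JIA-ACR criteria also constrain worsening in the remaining CRVs, which is omitted here, and the input values are invented.

```python
def jia_acr_response(baseline, follow_up, threshold=30):
    """True if three or more CRVs improve by >= threshold per cent.

    For each CRV, lower values are assumed to indicate better status, so
    improvement is the percentage decrease from baseline. The full
    criteria also limit worsening in the other CRVs (not modelled here).
    """
    improvements = [
        100.0 * (before - after) / before
        for before, after in zip(baseline, follow_up)
        if before > 0
    ]
    return sum(change >= threshold for change in improvements) >= 3


# Invented CRV values (e.g. active joint count, physician global score)
print(jia_acr_response(baseline=[10, 50, 40, 2.0],
                       follow_up=[6, 30, 25, 1.9]))  # True
```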

Case report 3: augmenting the evidence base with data from regulatory authorities—HCV

To address the lack of head-to-head studies comparing simeprevir with boceprevir or telaprevir in treatment-naïve HCV patients, our group indirectly compared the efficacy and safety of these regimens in an NMA.14 Results were presented at the American Association for the Study of Liver Disease 2013 meeting,14 including a subgroup analysis to determine the effect of baseline METAVIR score on sustained virological response to treatment. Although METAVIR subgroup data for telaprevir were available in the primary publication, those for boceprevir were not;22 the latter were reported in the EMA EPAR (table 3). Without the addition of these EPAR data, an indirect comparison of simeprevir versus telaprevir and boceprevir in accordance with their licensing, in patients with specific stages of fibrosis, would not have been possible.

Table 3

Availability of METAVIR score subgroup data from Poordad et al22 and EPAR for boceprevir
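
The indirect comparison underlying this analysis can be illustrated with the Bucher method: where treatments A and B have each been compared with a common comparator C, relative effects subtract on the log scale and their variances add. The numbers below are invented and bear no relation to the actual HCV results.

```python
import math


def bucher_indirect(log_rr_ac, se_ac, log_rr_bc, se_bc):
    """Adjusted indirect comparison of A vs B via common comparator C.

    Log relative effects subtract; standard errors combine as the square
    root of the summed variances. Returns the A vs B ratio and 95% CI.
    """
    log_rr_ab = log_rr_ac - log_rr_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    lower = math.exp(log_rr_ab - 1.96 * se_ab)
    upper = math.exp(log_rr_ab + 1.96 * se_ab)
    return math.exp(log_rr_ab), (lower, upper)


# Invented inputs: A vs C and B vs C effects on the log scale
estimate, (lower, upper) = bucher_indirect(
    log_rr_ac=math.log(2.0), se_ac=0.20,
    log_rr_bc=math.log(1.5), se_bc=0.25,
)
print(f"Indirect A vs B: {estimate:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```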

Case report 4: augmenting the evidence base with data from regulatory authorities—COPD

Although mortality is an increasingly important consideration in COPD, publications of trial data do not always report on-trial mortality. During a review by our research group,23 we identified Dong et al,15 who conducted a systematic review and NMA of RCTs in COPD reporting overall death or cardiovascular death. Overall and cardiovascular mortality data were unavailable from four peer-reviewed publications, but these data were sourced from the FDA website, allowing a more comprehensive review than would have been possible using peer-reviewed publications alone (table 4).

Table 4

Studies for which Dong et al15 sourced additional data from the FDA

Limited availability of end point data is a constraint of many systematic reviews and meta-analyses. In the context of healthcare decision-making, it is vital to have all available data to make the best informed decisions. Where evidence is lacking from conventional peer-reviewed sources, efforts should be made to retrieve data from alternative sources, prioritising those that are most usable.

Health technology assessment (HTA) agencies expect evidence submissions to incorporate all the available, relevant evidence and responsibility lies with the systematic reviewer and meta-analyst to ensure this evidence is captured.

Through our examples, we show that differences exist in the level of reporting between peer-reviewed publications and the corresponding clinical trial registry webpages, the websites of regulatory agencies and, where available, the trial clinical study reports. Arguably the most important difference identified was the availability of end points by previous therapy in the METRIC trial. The peer-reviewed publication reported PFS results for the ITT population regardless of previous exposure to chemotherapy, whereas ClinicalTrials.gov reported results by chemotherapy status: naïve or experienced. In the current landscape of oncology drug approval, where products are often licensed for a specific set of patients based on their previous treatment exposure, these results could affect any systematic reviews or NMAs in melanoma.28 As the EMA and FDA now require all trials within their jurisdictions to be registered on EudraCT or ClinicalTrials.gov, results will increasingly be made available through these registries.

Schroll et al29 recently investigated the accessibility and usefulness of data from EMA and FDA reports and concluded that, while FDA reports provide more substantial outcome data, they are more difficult to navigate than EMA reports, are often considerably larger and lack standardisation in reporting.29 Locating these documents on the EMA and FDA websites can be challenging; there is a need for further improvement in ease of website navigation and in clarity of reporting.30,31

Although several authors have investigated the influence of unpublished data from FDA reports on systematic reviews and NMAs, disparity exists in the conclusions reached. MacLean et al32 searched for FDA reviews of new drug applications for selected non-steroidal anti-inflammatory drugs (NSAIDs) and compared these with published trial data, focusing on dyspepsia as a toxicity outcome. They found no significant difference between the published and unpublished data with respect to pooled estimates of the risk of dyspepsia. Hart et al33 recalculated selected meta-analyses of trials for which unpublished FDA data were available; reanalyses incorporating the unpublished data showed increased drug efficacy about as often as reduced efficacy. McDonagh et al34 investigated Drug Effectiveness Review Project (DERP) reports that included data from FDA documents. They reported that 30 of 175 FDA reviews contained unpublished data that were used in 34 DERP reports, and that conclusions were affected by the inclusion of these unpublished data in 11 of the DERP reviews (9.6% of the total DERP reports investigated). Including unpublished data may not always influence the results of evidence syntheses, but the potential for it to do so clearly exists.
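
The kind of sensitivity Hart et al33 describe is easy to demonstrate with a fixed-effect inverse-variance pool, where each trial's weight is the inverse of its variance. The trial results below are invented solely to show how an unpublished, null-leaning trial can move the pooled estimate.

```python
import math


def pool_fixed_effect(estimates):
    """Fixed-effect inverse-variance pooling of (log effect, SE) pairs:
    pooled = sum(w * y) / sum(w), with weights w = 1 / SE^2."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * y for (y, _), w in zip(estimates, weights)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))


# Invented log odds ratios and standard errors
published = [(-0.40, 0.15), (-0.25, 0.20)]
unpublished = [(0.05, 0.18)]  # hypothetical null-leaning unpublished trial

for label, data in (("published only", published),
                    ("published + unpublished", published + unpublished)):
    pooled, se = pool_fixed_effect(data)
    print(f"{label}: pooled OR {math.exp(pooled):.2f} (SE {se:.2f})")
```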

Our case studies were selected on the basis of our previous experience of systematic reviews in these disease areas. Although our selection was not systematic, these isolated examples illustrate the need to consider the additional sources of unpublished data discussed in this paper.

Conclusion

Peer-reviewed publications remain the most robust source of data for a systematic review; peer review has long been regarded as an essential tool for assessing and improving the quality of publications, in their methodology and presentation of data.35,36 However, as limitations in the availability of data can lead to bias in systematic reviews and NMAs, potentially overestimating or underestimating treatment effects,37–39 and as grey literature sources become increasingly available, these should be considered for inclusion. Nevertheless, we should bear in mind that grey literature entails additional, sometimes substantial, retrieval costs and is not subjected to peer review before being made publicly available; thus, responsibility for the accuracy and presentation of the data lies with the sponsor.

We have discussed examples where unpublished data have informed reviews, analyses and potentially subsequent healthcare decisions. In addition, we have considered multiple sources of unpublished data and the benefit of each medium with respect to accessibility of the data and the potential to obtain additional evidence. We found that additional data could be obtained by searching these sources; however, searching for and navigating all of these documents for every study in a systematic review would be unfeasible. The extensive nature of clinical study reports and the difficulties we and others have encountered in navigating regulatory websites make these sources less favourable than clinical trials registries. We believe researchers will gain the greatest benefit by considering first clinical trials registries, then FDA and EMA documents, and finally clinical study reports. We recommend that researchers consider the evidence base for each research question and determine whether the benefits of searching the sources explored here outweigh the increased burden of searching and the negative implications of incomplete data. Further research should investigate the consistency between peer-reviewed publications and supplementary sources of data.

Acknowledgments

The authors thank Jo Whelan and Moira Hudson for reviewing and editing this manuscript. No funding was received for the development of this manuscript.

Footnotes

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.