Objectives The purpose of investigator’s brochures (IBs) is to compile the relevant evidence in order to enable an informed risk-benefit assessment by different reviewers, including the principal investigator, research ethics committees (RECs), regulatory authorities, and data safety monitoring boards. Although a vast literature exists on the methodology of evidence synthesis for systematic reviews and meta-analyses, there is almost no literature examining the role of evidence synthesis in IBs. The primary objective of this contribution is to examine the adherence of IBs to fundamental principles of knowledge synthesis. These principles include a systematic search strategy, an evaluation of the risk of bias of the included studies, and a comprehensive data collection and transparent synthesis procedure that accounts for variation in information quality.
Method We systematically examined a random sample of 30 IBs drawn from a larger sample (N = 109) of application materials for industry-sponsored pharmaceutical clinical trials conducted between 2010 and 2016. IBs were obtained from three RECs of German university medical centers under data confidentiality agreements. Multiple independent examiners assessed the IBs to identify the clinical trials reported in them and extracted relevant data about those trials. For this assessment, a coding book was operationalized and specifically tailored to the context and role of IBs. The coding book included items on the reporting of a) a search strategy to find relevant evidence, b) a method of evaluating the risk of bias of included studies, and c) a data collection and synthesis procedure. The coding book included a second set of items to evaluate the reporting quality of all clinical trials listed in the IB.
Results A total of 321 clinical trials were identified in the 30 IBs. Despite the relatively similar aims of evidence reporting in IBs and systematic reviews, no IB reported a search strategy that could have demonstrated how comprehensive the reported clinical evidence is. Furthermore, no IB presented a risk of bias assessment for the reported studies. All of the included IBs reported the results of relevant clinical trials exclusively in narrative form and did not use any method to synthesize the evidence into a concise overview of the results. The majority of the 321 trials reported the sample size (96%, n = 308), blinding (83%, n = 266), and randomization (66%, n = 212). Specific details about sample size calculation, randomization methods, or blinding procedures, however, were reported for less than 1% of the 321 studies in the IBs. Baseline characteristics and participant flow were reported for 10% (n = 31) of all 321 trials.
Conclusions IBs serve the important function of compiling the evidence that is meant to justify the approval and conduct of a planned clinical trial. Our results in a random subsample of 30 IBs reporting on 321 clinical trials show that IBs do not adhere to any established principles of evidence synthesis known from systematic reviews. In sum, our findings cast doubt on the robustness and transparency of decision-making based on the evidence provided by IBs. In particular, they raise the question of whether IBs enable the above-mentioned reviewers to conduct the meaningful risk-benefit assessment that is ethically and legally required to justify clinical trials. In the presentation, we will outline recommendations on how to improve evidence reporting in IBs.