
Consistency between trials presented at conferences, their subsequent publications and press releases
  1. Anisa Rowhani-Farid1,
  2. Kyungwan Hong1,
  3. Mikas Grewal2,
  4. Jesse Reynolds3,
  5. Audrey D Zhang4,
  6. Joshua D Wallach5,6,
  7. Joseph S Ross2,7,8,9
  1. Department of Practice, Sciences, and Health Outcomes Research, University of Maryland Baltimore, Baltimore, Maryland, USA
  2. Section of General Internal Medicine, Yale School of Medicine, New Haven, Connecticut, USA
  3. Department of Biostatistics, Yale University School of Public Health, New Haven, Connecticut, USA
  4. Department of Internal Medicine, Duke University School of Medicine, Durham, North Carolina, USA
  5. Department of Environmental Health Sciences, Yale University School of Public Health, New Haven, Connecticut, USA
  6. Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA
  7. National Clinician Scholars Program, Yale School of Medicine, New Haven, Connecticut, USA
  8. Center for Outcomes Research and Evaluation (CORE), Yale-New Haven Hospital, New Haven, Connecticut, USA
  9. Department of Health Policy and Management, Yale School of Public Health, New Haven, Connecticut, USA
  Correspondence to Dr Anisa Rowhani-Farid, Department of Practice, Sciences, and Health Outcomes Research, University of Maryland Baltimore, Baltimore, MD 21201, USA; anisarowhani@gmail.com

Abstract

Objective This study examined the extent to which trials presented at major international medical conferences in 2016 consistently reported their study design, end points and results across conference abstracts, published article abstracts and press releases.

Design Cross-sectional analysis of clinical trials presented at 12 major medical conferences in the USA in 2016. Conferences were identified from a list of the largest clinical research meetings aggregated by the Healthcare Convention and Exhibitors Association and were included if their abstracts were publicly available. From these conferences, all late-breaker clinical trials were included, as well as a random selection of all other clinical trials, such that the total sample included up to 25 trial abstracts per conference.

Main outcome measures First, it was determined if trials were registered and reported results in an International Committee of Medical Journal Editors-approved clinical trial registry. Second, it was determined if trial results were published in a peer-reviewed journal. Finally, information on trial media coverage and press releases was collected using LexisNexis. For all published trials, the consistency of reporting of the following characteristics was examined through comparison of the trials’ conference and publication abstracts: primary efficacy endpoint definition, safety endpoint identification, sample size, follow-up period, primary endpoint effect size and characterisation of trial results. For all published abstracts with press releases, the characterisation of trial results across conference abstracts, press releases and publications was compared. Authors determined consistency of reporting when identical information was presented across abstracts and press releases. Primary analyses were descriptive; secondary analyses included χ2 tests and multiple logistic regression.

Results Among 240 clinical trials presented at 12 major medical conferences, 208 (86.7%) were registered, 95 (39.6%) reported summary results in a registry and 177 (73.8%) were published; 82 (34.2%) were covered by the media and 68 (28.3%) had press releases. Among the 177 published trials, 171 (96.6%) reported the definition of primary efficacy endpoints consistently across conference and publication abstracts, whereas 96/128 (75.0%) consistently identified safety endpoints. There were 107/172 (62.2%) trials with consistent sample sizes across conference and publication abstracts, 101/137 (73.7%) that reported their follow-up periods consistently, 92/175 (52.6%) that described their effect sizes consistently and 157/175 (89.7%) that characterised their results consistently. Among the trials that were published and had press releases, 32/32 (100%) characterised their results consistently across conference abstracts, press releases and publication abstracts. No trial characteristics were associated with reporting primary efficacy endpoints consistently.

Conclusions For clinical trials presented at major medical conferences, primary efficacy endpoint definitions were consistently reported and results were consistently characterised across conference abstracts, registry entries and publication abstracts; consistency rates were lower for sample sizes, follow-up periods and effect size estimates.

Registration This study was registered at the Open Science Framework (https://doi.org/10.17605/OSF.IO/VGXZY).

  • policy
  • evidence-based practice
  • health services research

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


WHAT IS ALREADY KNOWN ON THIS TOPIC

  • The integrity of clinical trials is compromised when researchers selectively report or misreport their findings.

  • Previous studies have been limited in scope as they have typically focused on a single conference in a particular therapeutic area.

WHAT THIS STUDY ADDS

  • In this cross-sectional analysis of clinical trials presented at 12 major medical conferences in the USA in 2016, we analysed the extent to which these trials transparently and consistently reported their outcomes across conference abstracts, publication abstracts and press releases.

  • Primary efficacy endpoint definitions were consistently reported and results were consistently characterised across conference abstracts and publication abstracts.

  • Trials also consistently characterised their results across conference abstracts, publication abstracts and press releases.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • Although the majority of trials reported their primary efficacy endpoint definitions and characterised their results consistently, there is still room for improvement in conference reporting practices.

  • To ensure the integrity, consistency and transparency of all health and medical conference submissions globally, we recommend the adoption of standardised conference reporting guidelines across all health and medical conferences.

Introduction

Scientific and medical conferences have typically been regarded as a critical component of the research enterprise, where recent findings are shared, innovative research ideas are generated and new collaborations are created.1 These conferences attract researchers and clinicians from around the world, as well as sponsors, pharmaceutical companies and media representatives, with attendance ranging from 11 000 to well over 17 000 participants.1 2

Clinical trials are often considered the gold standard of evidence on medical interventions and techniques. Therefore, it is critical that their findings, at conferences and in subsequent publications, are reported transparently, consistently and with integrity.3 If clinical trial reporting does not present an accurate and unbiased reflection of trial results, patients might suffer serious consequences.4

Publication bias occurs when trialists do not publish their findings, or publish only positive findings. Relatedly, reporting bias occurs when investigators selectively report outcomes, switch outcomes or under-report adverse events.5 A related term that captures some of the nuances of reporting bias is ‘spin’: investigators may inappropriately frame their results positively, through inaccurate claims of benefit, extrapolation from incomplete data, or selective reporting or misreporting.5–8 Publication bias and reporting bias may be particularly concerning when clinical trials are presented at global scientific and medical conferences, and in their subsequent publications (or lack thereof), as investigators might experience pressure to report exciting findings at these meetings. This pressure is connected with the ‘publish or perish’ phenomenon, in which researchers are incentivised to publish, typically novel positive findings, in order to advance their careers.9 Such pressure creates an environment that is highly susceptible to misreporting and spin, the consequences of which might include clinicians adopting incorrect evidence into their practice and patients requesting flawed medical ‘advances’ after the press highlights and promotes these inaccurate findings in news reports.10

Previous studies have shown that inconsistent reporting between conference presentations and subsequent publications is prevalent. For instance, Toma et al 11 studied the publication fate of late-breaking randomised controlled trials (RCTs) presented at the American College of Cardiology scientific meetings from 1999 to 2002 and the degree of consistency between meeting abstract results and subsequent publications.11 The authors identified that 41% of the RCTs that were subsequently published exhibited a discrepancy between the efficacy estimate for the primary outcome reported in the meeting abstract and that reported in the subsequent publication.11 The discrepancy rate was the same for late-breaker RCTs as for RCTs presented in other sessions.11 Similarly, Pocock and Collier12 examined seven of the late-breaker clinical trials presented at the American College of Cardiology Scientific Sessions in 2018.12 They found that six of these seven trials (86%) spun their results to emphasise the positive aspects of their findings for the purposes of their conference presentations.12 While previous evaluations have focused on a single conference in a particular therapeutic area, less is known about inconsistency of reporting at medical conferences across specialties.

Accordingly, our study was conducted to address this gap in the literature. The primary objective of our study was to examine the extent to which trials presented at major international medical conferences in 2016 consistently reported their study design, endpoints and results across conference abstracts, published article abstracts and press releases. As a secondary objective, we aimed to measure the consistency of the reporting of the primary efficacy endpoint definition across conference abstracts, registration entries and publication abstracts for all trials that were both registered and subsequently published.

Methods

We conducted a cross-sectional analysis of clinical trials presented at major medical conferences in the USA in 2016. Although held in the USA, these medical meetings are typically attended by a global audience. The year 2016 was chosen to allow sufficient time for the presented trials to be published, enabling comparison between conference presentation abstracts and corresponding publication abstracts.13 14

Patient and public involvement

Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Study sample—inclusion/exclusion

Conference selection

Two authors (AR-F and JSR) identified conferences from a list of the largest clinical research meetings held in 2015 in the USA, aggregated by the Healthcare Convention and Exhibitors Association (HCEA).15 The HCEA is a trade association whose mission is ‘to improve the effectiveness and promote the value of all conventions, meetings and exhibitions for the healthcare industry’.15 We used its report of the top 50 healthcare meetings in the USA in 2015 to identify the clinical research meetings to be included in our study sample. This report was used because it was the most comprehensive report of all clinical research meetings in the USA with details about attendance. Conferences were included if the abstract text was publicly reported and accessible online. We excluded dental conferences from this list because no one on our team was familiar with clinical trials in the field of dentistry. Of the 22 major medical conferences identified, 12 were included to compile the clinical trial study sample.

Clinical trial selection

Most major conferences have incorporated specialised plenary sessions dedicated to ‘late-breaking’ clinical research, during which researchers leading large clinical trials with novel findings present their results for the first time.11 16 We therefore comprehensively searched each of the 12 conference programmes and websites, and contacted organising committees, librarians and other staff affiliated with the associations/societies, to ascertain whether they had dedicated late-breaker sessions in 2016. For each conference, we included all late-breaking clinical trials, where available, as well as a random selection of all other clinical trials presented orally or through a poster, such that the total sample included up to 25 trial abstracts per conference. This sample size was not calculated a priori; it was determined based on the practical considerations of reading conference abstracts and identifying their registration details, publications, press releases and media coverage. Conference presentations were eligible for inclusion if they were of clinical studies of any type (eg, interventional or observational) and of any design (eg, superiority, non-inferiority, equivalence or non-comparative). All non-clinical studies, including systematic reviews and meta-analyses, were excluded from the sample. Since the unit of analysis in this cross-sectional analysis is the clinical study, it falls under the broader category of a meta-epidemiological study.17 We used the Strengthening the Reporting of Observational Studies in Epidemiology checklist for cross-sectional analyses to ensure that this study reflects the highest standards of reporting.18
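
To make the selection procedure concrete, the sketch below shows one way the per-conference sampling could be implemented in R (the language of the study’s analyses). It is an illustration rather than the authors’ code: the data frame abstracts and its columns conference and late_breaker are hypothetical, as is the seed.

    # A minimal sketch (not the authors' code) of the per-conference selection:
    # keep all late-breaking trials, then top up with a random draw of other
    # trials so that each conference contributes up to 25 abstracts in total.
    set.seed(2016)  # arbitrary seed, for reproducibility of the illustration

    sample_conference <- function(df, cap = 25) {
      late <- df[df$late_breaker, ]   # all late-breaking trials are retained
      rest <- df[!df$late_breaker, ]
      n_extra <- max(0, min(cap - nrow(late), nrow(rest)))
      picked <- rest[sample(nrow(rest), n_extra), ]
      rbind(late, picked)             # up to `cap` abstracts for this conference
    }

    study_sample <- do.call(
      rbind,
      lapply(split(abstracts, abstracts$conference), sample_conference)
    )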

Data extraction

Trial characteristics

We extracted data on the characteristics of each clinical trial. The characteristics included: late-breaker status (yes vs no), type of study (interventional vs observational vs not applicable), primary purpose (therapeutic vs diagnostic vs other vs not applicable), study design (superiority vs non-inferiority vs equivalence vs non-comparative), funder (industry vs academia vs government vs combination vs other vs unclear) and medical conference (American Society of Clinical Oncology (ASCO) vs American Society of Hematology (ASH) vs American Heart Association (AHA) vs American College of Rheumatology vs American Diabetes Association (ADA) vs Digestive Disease Week (DDW) vs American Thoracic Society (ATS) vs Radiological Society of North America (RSNA) vs American Association for Cancer Research vs Association for Research in Vision & Ophthalmology (ARVO) vs American Urological Association (AUA) vs American Psychiatric Association (APA)). The majority of trial characteristics were extracted from conference or publication abstracts; information on funding was extracted from registration entries.

Main outcome measures

Trial registration and reporting of summary results

We determined if trials were registered and reported results in an International Committee of Medical Journal Editors (ICMJE)-approved clinical trial registry.19 20 If the trial registration information was not clear from the trial abstract, we manually searched Google, ClinicalTrials.gov or other ICMJE-approved registries using search terms associated with the intervention name, first author and sponsor.

Trial publication

We searched PubMed and Google using the trials’ registration number or abstract title and lead author to determine if trial results were published in a peer-reviewed journal.

Media coverage and press releases

Finally, we searched LexisNexis and Google using the trials’ abstract title and lead author to collect information on the trials’ media coverage and press releases. LexisNexis is a database that aggregates information from thousands of news agencies, including press releases and other media-related documents.21

Consistency measures

For all published trials, we examined the consistency of reporting of the following characteristics, through comparison of the trials’ abstracts from conference materials and corresponding manuscript publications (referred to as ‘publication abstracts’ throughout this study): primary efficacy endpoint definition, identification of safety endpoint, sample size, follow-up period, primary endpoint effect size and characterisation of trial results. We determined consistency of reporting when identical information was presented across the various reporting platforms being compared. As such, consistency was measured as a dichotomous variable (yes vs no).

The primary objective of our study was to measure the consistency of reporting of the primary efficacy endpoint definition between conference and publication abstracts. For our study’s secondary objective, if the trial was registered, we examined the consistency of the reporting of the primary efficacy endpoint definition between the conference abstract and the registration entry; in addition, if the trial was both published and registered, we examined the consistency between publication abstracts and registration entries, and across all three: conference abstract, registration entry and publication abstract.

We also compared the characterisation of trial results, both between conference abstracts and publication abstracts and across conference abstracts, press releases and publications. Trial results were categorised as ‘positive’ if the primary endpoint was characterised positively, as ‘neutral’ if the primary endpoint was characterised negatively and the secondary endpoint was characterised positively, and as ‘negative’ if the primary and secondary endpoints were both characterised negatively. These categorisations were made by reading the results and conclusion sections of the abstracts and assessing the overall favourability of the trial findings, which is typically summarised in the conclusion.
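
As a concrete illustration of this rule, the sketch below encodes the three-way categorisation in R; the function and its two hand-coded logical inputs (whether each endpoint was characterised positively) are hypothetical, not part of the study’s materials.

    # Sketch of the categorisation rule described above.
    characterise_results <- function(primary_positive, secondary_positive) {
      if (primary_positive) {
        "positive"
      } else if (secondary_positive) {
        "neutral"    # negative primary endpoint, positive secondary endpoint
      } else {
        "negative"   # both endpoints characterised negatively
      }
    }

    characterise_results(primary_positive = FALSE, secondary_positive = TRUE)
    # [1] "neutral"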

Data verification

AR-F independently extracted all the data for the study. An initial random sample of 10% of the abstracted data was selected for validation by ADZ. Next, the entire final sample (excluding media-related variables) was validated by KH, who also assisted with the extraction of registry-related variables. MG validated all media-related variables and assisted with their extraction for a proportion of trials. All discrepancies were resolved through discussion among all data extractors/verifiers. Data extraction and verification were completed in two stages: between 16 November 2018 and 14 June 2019, and between 10 September 2021 and 18 January 2022.
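
For illustration, drawing such a 10% validation subset could look like the following in R; the data frame extracted (one row per trial) and the seed are hypothetical.

    # Select a random 10% of the extracted records for independent validation.
    set.seed(1)  # arbitrary seed for the illustration
    validation_idx <- sample(nrow(extracted), size = ceiling(0.1 * nrow(extracted)))
    validation_set <- extracted[validation_idx, ]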

Statistical analysis

Descriptive statistics were used to characterise the selected studies. Associations between consistent reporting of the primary efficacy endpoint and trial characteristics (eg, late-breaker status (yes vs no), primary purpose (categorised as therapeutic, diagnostic or other), funder (categorised as industry, academia, government, combination, other or unclear) and medical conference meeting) were assessed using the χ2 test or Fisher’s exact test. Multiple logistic regression was used to determine whether the odds of consistent primary efficacy endpoint reporting were influenced by any combination of relevant trial characteristics; the model included late-breaker status, primary purpose and funder. Analyses were exploratory and multiple comparisons were not accounted for in determining statistical significance (p<0.05). Analyses were conducted in RStudio (V.2022.02.0+443).
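
The sketch below illustrates how analyses of this kind are typically run in R. It is not the authors’ script: the data frame published and its columns (consistent_primary as a yes/no factor for the consistency outcome, plus late_breaker, primary_purpose and funder) are hypothetical.

    # Unadjusted association between one characteristic and consistent
    # reporting: chi-squared test, or Fisher's exact test when expected
    # cell counts are small.
    tab <- table(published$late_breaker, published$consistent_primary)
    if (any(suppressWarnings(chisq.test(tab))$expected < 5)) {
      fisher.test(tab)
    } else {
      chisq.test(tab)
    }

    # Adjusted analysis: multiple logistic regression including the three
    # trial characteristics named in the text.
    fit <- glm(consistent_primary ~ late_breaker + primary_purpose + funder,
               data = published, family = binomial)
    summary(fit)                              # coefficient-level Wald tests
    exp(cbind(OR = coef(fit), confint(fit))) # odds ratios with 95% CIs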

Results

Trial identification, screening and randomisation

Of the 612 studies presented at the 12 medical conferences in 2016, 586 were eligible for inclusion in the study sample, while 36 were excluded as they were later characterised as non-clinical studies. Among the 586 clinical studies, 49 were late-breaking clinical trials and were included in our study sample. Of the remaining 537 trials, 191 were randomly selected for inclusion. Figure 1 outlines this trial identification, screening and random selection process.22

Figure 1

PRISMA 2020 flow diagram of selection of trials included in the cross-sectional analysis.22

Trial characteristics

Table 1 summarises the characteristics of the trials included in our study sample. The 12 conferences selected to make up the clinical trial sample of our study were: ASCO, ASH, AHA, the American College of Rheumatology, ADA, DDW, ATS, RSNA, the American Association for Cancer Research, ARVO, AUA and APA. Concerning the number of trials per conference, 7 conferences had 25 trials each; the remaining 5 had between 2 and 24 trials each.

Table 1

Characteristics of clinical trials included in the study sample

The majority of the trials in our sample, 191/240 (79.6%), were not late-breaking clinical trials. With regard to type of study, the vast majority were interventional trials (232/240, 96.7%). Concerning primary purpose, most were therapeutic trials (178/240, 74.2%), 14/240 (5.8%) were diagnostic trials, 47/240 (19.6%) were classified as ‘other’ (which included trials testing surgical techniques and exercise/lifestyle interventions, among others) and 1/240 (0.4%) trial did not fall into any category and was classified as ‘not applicable’ (this study used trial data to identify prevalence and risk factors). As to trial design, just over half of the sample were superiority trials (131/240, 54.6%). Regarding sources of funding, the largest proportion of trials were industry-funded (89/240, 37.0%), and a sizeable proportion (30/240, 12.5%) did not specify their funding sources.

Transparency and media-related practices

Table 2 summarises the transparency and media-related practices of the clinical trials included in our sample. On transparency practices, 208/240 (86.7%) were registered at an ICMJE-approved registry, 95/240 (39.6%) reported summary results at an ICMJE-approved registry and 177/240 (73.8%) were published. As to media-related practices, 82/240 (34.2%) were covered by the media, and 68/240 (28.3%) had press releases issued.

Table 2

Transparency and media-related practices of clinical trials included in the study sample

Consistency of reporting

Primary efficacy endpoint definition

Primary objective

Among the 177 published trials (table 3), 171 (96.6%) reported consistent definitions of primary efficacy endpoints between conference abstracts and publication abstracts. There were no statistically significant associations between trial characteristics and the consistency of the primary efficacy endpoint definition between conference and publication abstracts (online supplemental file 1). The logistic regression model likewise identified no trial characteristics that altered the odds of consistent reporting of the primary efficacy endpoint definition between conference and publication abstracts.


Table 3

Consistency of reporting between conference, press release and publication abstracts for the clinical trials included in the study sample

Secondary objective

Of the 208 clinical trials that were registered at an ICMJE-approved registry, 187 (89.9%) reported consistent primary efficacy endpoint definitions between conference abstracts and registration entries. Of the trials that were both registered and published, 152/171 (88.9%) reported consistent primary efficacy endpoint definitions across conference abstracts, registration entries and publication abstracts.

Safety endpoint identification, sample size, follow-up period and effect size

Comparing conference abstracts with publication abstracts, 96/128 (75.0%) trials reported consistent safety endpoints, 107/172 (62.2%) reported consistent sample sizes, 101/137 (73.7%) reported consistent follow-up periods and 92/175 (52.6%) reported consistent effect sizes.

Results characterisation

Among the 238/240 (99.2%) trials in our sample that characterised their results in their conference abstracts (table 4), 197 (82.8%) were characterised positively, 33 (13.9%) were characterised negatively and 8 (3.4%) had neutral results characterisation. Among 177/240 (73.8%) published trials in our sample, 143 (80.8%) characterised results positively, 26 (14.7%) characterised results negatively and 8 (4.5%) had neutral results characterisation. For the published trials (table 3), 157/175 (89.7%) characterised their results consistently between conference and publication abstracts. For the trials that had press releases and were published, 32/32 (100%) characterised their results consistently across conference abstracts, press releases and publication abstracts.

Table 4

Conference and publication abstract results characterisation for the clinical trials included in the study sample

All data and code generated from this study are publicly available.23

Discussion

In this cross-sectional analysis of clinical trial abstracts presented at major medical conferences in 2016, nearly all reported consistent primary efficacy endpoint definitions between conference and publication abstracts. Similarly, for nearly 90% of these trials, results were characterised consistently between conference abstracts and publication abstracts. Previous studies, however, show varied findings about the prevalence of consistent reporting in the medical literature; methodological differences in measuring consistent reporting may have led to such variation between studies. For instance, a retrospective review of discrepancies between conference abstracts and published manuscripts in plastic surgery studies, published in 2021, found that 81% of the published studies were consistently reported.24 In a scoping review of comparisons between abstracts and full reports in primary biomedical research, Li et al 25 found that the level of consistency ranged from 55% to 95%, which included consistency in designating a primary outcome measure.25 Relatedly, some studies have examined the prevalence of spin in the medical literature: a systematic review found that 75% of non-inferiority oncology trials had evidence of misleading reporting (spin),26 and another study identified spin in 34.5% of abstracts of systematic reviews and meta-analyses in emergency medicine.27 Similarly, Roberts et al 28 evaluated spin in abstracts of cardiology trials and found evidence of spin in 27.3% of trial abstracts.28

There were lower rates of consistent reporting for other trial characteristics between conference and publication abstracts, including sample sizes, follow-up periods and effect size estimates, which does not rule out the possibility of misreporting at conferences and/or in subsequent publications. For instance, consistency rates for sample size, safety endpoint and effect size reporting ranged between 50% and 75%. Differences in some of these characteristics are to be expected, as conference presentations often report preliminary data with shorter follow-up periods and smaller sample sizes (trials could still be recruiting participants) and, consequently, varying effect sizes. We agree with Dagi et al,24 who recommended that authors indicate in their conference abstracts whether the sample size is final at the time of presentation.24 That the consistency of safety endpoint identification between conference and publication abstracts was lower than that of the primary efficacy endpoint definition suggests that trials typically prioritise the reporting of efficacy endpoints over safety endpoints.5

As for trial transparency and media-related practices, it was noteworthy that close to 90% of trials were registered at an ICMJE-approved registry. A much smaller proportion, around 40%, reported summary results; this was expected, as not all trials are required to report their summary results in ClinicalTrials.gov under the Food and Drug Administration Amendments Act of 2007,29–31 and not all ICMJE-approved registries have a section dedicated to the reporting of summary results. Although still low at 73.8%, the publication rate of the trials in our sample was higher than in a previous study, which found that 39.1% of abstracts from the American Academy of Ophthalmology meeting in 2008 reached formal publication.32 Our study’s publication rate was, however, similar to that of Schwartz et al,10 who found that 3 years after the meeting, 25% of studies presented at major scientific meetings held in 1998 still remained unpublished.10 In 2009, Chalmers and Glasziou estimated that 85% of research is wasted annually, equating to around US$170 billion; one major contributor to this figure is that 50% of studies are never published in full.33 Ross et al 34 also discussed the implications of the lack of dissemination of study results: the disruption of the scientific process, the lack of evidence forming the basis of decision-making and the ethical responsibility to honour trial participants who risked their health to participate in scientific research.34 The Declaration of Helsinki also outlines the ethical obligation to publish all results from clinical trials, positive and negative.34 35 One recommendation to safeguard the formal publication of all clinical trials is that funders play an active role in ensuring that the clinical studies they fund reach formal publication. For instance, funders could support a dedicated team of staff responsible for following up with the principal investigators of clinical trials to track their progress and to ensure that their results are published, be they positive or negative.

Moving forward, it would be ideal for all health and medical conferences to adopt standardised reporting practices for abstract submissions, such as those recommended by Good Practice for Conference Abstracts and Presentations (GPCAP).36 37 These standardised reporting practices ensure that conference submissions accurately report their methods and results according to the Consolidated Standards of Reporting Trials abstract guidelines, along with sharing their trial registration numbers, funding sources and other study characteristics.36 37 By following standardised reporting practices such as GPCAP, conferences can help ensure the integrity, consistency and transparency of all health and medical conference submissions globally.36 In addition, with the growing adoption of preprints for clinical and health sciences research,38 39 health and medical conferences should encourage authors of abstracts accepted for presentation to post full articles describing their research on a preprint platform. This would allow all interested members of the research community, including those unable to attend the conferences, to have free and complete access to the new information being disseminated at the meeting, enhancing scientific communication.

Our study has several limitations. First, by evaluating the consistency of reporting between conference and publication abstracts only for the primary efficacy endpoint, we did not consider the consistency of reporting for secondary endpoints; however, secondary endpoints are often not included in conference/publication abstracts because of word count restrictions. Second, we noticed on many occasions that trials registered multiple primary efficacy endpoints, so our results could have overestimated the consistency of reporting of the primary efficacy endpoint definition, as trials could also have consistently misreported their primary efficacy endpoint definition. Third, we did not identify the various types or nuances of spin that could have taken place, for instance, overinterpretation or distortion of results, and discordance between results and their interpretation.7 Fourth, we did not investigate which trials were required to share their summary results according to the Food and Drug Administration Amendments Act of 2007, which mandated registration and results reporting for certain clinical trials at ClinicalTrials.gov.30 Fifth, while measuring effect size consistency between conference and publication abstracts, we looked for identical effect sizes and did not measure differences in effect size or overlapping 95% CIs between conference and publication abstracts. Sixth, our methods for retrieving subsequent publications might have missed publications in less visible sources, such as poorly indexed journals or publications in a different language. Lastly, we originally set out to analyse 550 clinical trial conference abstracts; however, not all conferences had their abstracts available online, and it is unclear whether our findings would remain consistent in a larger sample. A strength of this study is that 12 conferences were assessed, providing insight into the overall consistency of results and outcome reporting between conference abstracts, registration entries and publication abstracts across a wide cross-section of health and medical conferences.

Clinical trial integrity is enhanced and ensured through the consistent and transparent reporting of conduct and results. It is therefore encouraging that the majority of clinical trials presented at major health and medical conferences in 2016 consistently reported their primary efficacy endpoint definition and characterisation of results across conference abstracts and publication abstracts. However, there were lower rates of consistent reporting for other trial characteristics, such as sample sizes, safety endpoints and effect sizes. Investigators should make efforts to ensure that all reports of a clinical trial are consistent and provide clear explanations when changes occur. Adoption of preprints and standardised reporting practices for conference abstracts may foster more consistent and transparent clinical trial reporting.

Data availability statement

Data are available in a public, open access repository. All data and code generated from this study are publicly available at the Open Science Framework at the following link: https://osf.io/q853p/.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.


Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Twitter @AnisaFarid, @jsross119

  • Contributors AR-F designed and led the study, extracted and verified data, conducted the statistical analyses, wrote the first draft of the manuscript, edited the manuscript and is the guarantor of the study; KH extracted and verified data and edited the manuscript; MG extracted and verified data and edited the manuscript; JR provided statistical consulting for the study and edited the manuscript; ADZ verified data and edited the manuscript; JDW provided statistical consulting and edited the manuscript and JSR designed the study, provided statistical consulting, edited the manuscript and provided mentorship for AR-F’s postdoctoral fellowship throughout the study.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests When this work was conducted, the salaries of AR-F and KH were supported by the RIAT Support Center at the University of Maryland. The RIAT Support Center was supported by the Laura and John Arnold Foundation. KH was supported by the Food and Drug Administration (FDA) of the US Department of Health and Human Services (HHS) as part of a financial assistance award U01FD005946, unrelated to this manuscript, totalling US$5000 with 100% funded by FDA/HHS. The statistical support for this publication, provided by JR, was made possible by CTSA Grant Number UL1 TR001863 from the National Center for Advancing Translational Science (NCATS), a component of the National Institutes of Health (NIH). ADZ currently receives research support from the National Institutes of Aging through the Duke Creating ADRD Researchers for the Next Generation—Stimulating Access to Research in Residency (CARiNG-StARR) programme (R38AG065762). JDW reported receiving grant support by the US Food and Drug Administration, Arnold Ventures, Johnson & Johnson through Yale University, and the National Institute on Alcohol Abuse and Alcoholism of the National Institutes of Health under award No. 1K01AA028258; he reported serving as a consultant for Hagens Berman Sobol Shapiro LLP and Dugan Law Firm APLC. JSR currently receives research support through Yale University from Johnson and Johnson to develop methods of clinical trial data sharing, from the Medical Device Innovation Consortium as part of the National Evaluation System for Health Technology (NEST), from the Food and Drug Administration for the Yale-Mayo Clinic Center for Excellence in Regulatory Science and Innovation (CERSI) programme (U01FD005938), from the Agency for Healthcare Research and Quality (R01HS022882), from the National Heart, Lung and Blood Institute of the National Institutes of Health (NIH) (R01HS025164, R01HL144644) and from the Laura and John Arnold Foundation to establish the Good Pharma Scorecard at Bioethics International; in addition, JSR is an expert witness at the request of Relator’s attorneys, the Greene Law Firm, in a qui tam suit alleging violations of the False Claims Act and Anti-Kickback Statute against Biogen. MG has no conflicts of interest to disclose.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.