Catalogue of bias: novelty bias
Yan Luo,1 Carl Heneghan,2 Nav Persaud3,4

  1. Department of Health Promotion and Human Behavior, Kyoto University Graduate School of Medicine Faculty of Medicine, Kyoto, Japan
  2. Centre for Evidence-Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
  3. Department of Family and Community Medicine, St Michael's Hospital, Toronto, Ontario, Canada
  4. Department of Family and Community Medicine, University of Toronto, Toronto, Ontario, Canada

Correspondence to Dr Nav Persaud, Department of Family and Community Medicine, St Michael's Hospital, Toronto M5B 1X2, Ontario, Canada; nav.persaud{at}utoronto.ca


Background

Novelty bias is the tendency for an intervention to appear better when it is new. It is also known as ‘novel agent effects’ or ‘fading of reported effectiveness’.1 2 The mechanisms by which interventions appear better when new, or newly applied to a specific purpose, are unknown and may involve other forms of bias exerting a stronger effect while an intervention is new. Novelty bias can arise when either internal or external validity is compromised. Regarding internal validity, performance bias3 and detection bias4 may cause novelty bias because unblinded researchers may be particularly enthusiastic about new treatments, leading to differences between the intervention and control groups in the care received apart from the intended treatment, or to differences in outcome assessment. Selective outcome reporting bias can also be a critical source of novelty bias.5 6 Positive result bias7 (eg, positive results for a treatment are selectively reported when it is new and less selectively reported later), confirmation bias8 (eg, only evidence supporting the new treatment is gathered while the rest is disregarded) and hot stuff bias9 (eg, researchers may be keen to confirm positive findings on a new and hot topic rather than to falsify them) are examples of selective reporting bias. They can lead to overinterpretation of the point estimates …


Footnotes

  • Twitter @yluo06, @carlheneghan, @NavPersaud

  • Contributors All authors (YL, CH and NP) contributed to manuscript drafting and revision. CH and NP conceived of the article. NP wrote the first draft. All authors (YL, CH and NP) revised the article for important scientific content and gave final approval of the version to be published.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests YL reports funding from the Japan Society for the Promotion of Science (JSPS Grant-in-Aid for Scientific Research, KAKENHI Grant Number JP22K21112) outside the submitted work. NP reports funding from the Canadian Institutes of Health Research and the Public Health Agency of Canada outside the submitted work.

  • Provenance and peer review Not commissioned; externally peer reviewed.