Ethical implications of AI-driven bias assessment in medicine
Yanyi Wu,1,2 Chenghua Lin1,2

1 School of Public Affairs, Zhejiang University, Hangzhou, China
2 Institute of China's Science, Technology and Policy, Zhejiang University, Hangzhou, China

Correspondence to Dr Yanyi Wu, School of Public Affairs, Zhejiang University, Hangzhou, China; yanyi.wu@hotmail.com


Barsby et al present a thought-provoking pilot study on the application of large language models (LLMs) to automate risk-of-bias (RoB) assessments in systematic reviews.1 Although LLMs show potential for streamlining evidence synthesis, the significant ethical concerns raised by their integration into medical decision-making require careful consideration.

Patient safety is paramount. RoB assessments directly impact the quality of evidence used to guide clinical decisions. As highlighted by Barsby et al, current LLM performance in RoB assessment remains suboptimal, with both ChatGPT 3.5 and ChatGPT 4 demonstrating only moderate agreement with human assessors.1 Prematurely relying on these models could lead to misinformed judgements, …
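For context on what "moderate agreement" conventionally denotes, the sketch below computes Cohen's kappa, the agreement statistic that typically underlies such descriptors, between human and LLM RoB judgements. The three-level labels (as in the Cochrane RoB 2 tool) and the data are invented for illustration; they are not figures from Barsby et al.

```python
# Minimal, hypothetical sketch: quantifying LLM-human agreement on
# risk-of-bias judgements with Cohen's kappa. Labels and data are
# illustrative assumptions, not results reported by Barsby et al.
from sklearn.metrics import cohen_kappa_score

# Invented RoB 2-style judgements for ten trials.
human = ["low", "high", "some concerns", "low", "high",
         "low", "some concerns", "high", "low", "low"]
llm = ["low", "high", "low", "low", "some concerns",
       "low", "some concerns", "high", "high", "low"]

kappa = cohen_kappa_score(human, llm)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.52 on this toy data
```

On this toy data the script prints a kappa of roughly 0.52, which falls in the band (0.41 to 0.60) conventionally labelled "moderate" under the Landis and Koch scale; raw percentage agreement would overstate concordance because it ignores agreement expected by chance.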


Footnotes

  • Contributors YW conceptualised the study and drafted the manuscript. CL contributed to reviewing and editing the manuscript. Both authors reviewed and approved the final manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
