
Evidence-based medicine has already adapted and is very much alive
  1. Mohammad Hassan Murad (1,2)
  2. Samer Saadi (1,2)
  1. Public Health, Infectious Diseases and Occupational Medicine, Mayo Clinic, Rochester, Minnesota, USA
  2. Evidence-Based Practice Center, Kern Center for the Science of Healthcare Delivery, Mayo Clinic, Rochester, Minnesota, USA
  Correspondence to Dr Mohammad Hassan Murad, Internal Medicine, Mayo Clinic, Rochester, Minnesota, USA; murad.mohammad{at}


Greenhalgh et al1 argue that the COVID-19 pandemic has uncovered a need for a shift in the evidence-based medicine (EBM) paradigm in which mechanistic evidence is also used as a complementary source for decision-making. Their justification rests on the limitations of evidence hierarchies, which prioritise meta-analyses and randomised controlled trials (RCTs),1 and on the urgency, threat and complexity of a pandemic. However, in defence of EBM, we present a counterargument.

First, mechanistic evidence may be misleading. Textbooks of clinical epidemiology and EBM manuals include numerous examples of mechanistic evidence supporting treatments that subsequently turned out to be ineffective or harmful.2 A recent example is hydroxychloroquine, which has antiviral activity in vitro, inhibiting viral entry, uncoating, assembly and budding through different molecular mechanisms.3 Yet trials in thousands of patients have shown that hydroxychloroquine is not effective and only exposes patients to adverse events.3 By following this mechanistic evidence early in the pandemic, we exposed people worldwide to adverse events, drug costs and opportunity costs. Conversely, most drugs used in clinical practice do not have a known mechanism or a known target; a drug needs only to be safe and effective from the perspective of patients, as well as from the perspective of regulatory agencies and guideline developers. For instance, we do not know how gabapentin works for all its indications; yet it is effective for many conditions, including focal seizures, diabetic neuropathy and postherpetic neuralgia, among others.

Second, RCTs and meta-analyses proved valuable and feasible even early in the pandemic, adapting to its fast pace. For example, a trial of lopinavir–ritonavir in adults hospitalised with severe COVID-19 started enrolment on 18 January 2020, 7 weeks before WHO declared a global pandemic.4 A meta-analysis was electronically indexed in PubMed also before such declaration.5 Several frameworks were quickly established to synthesise and appraise evidence about the pandemic.6 Thus, the scientific community was able to respond swiftly and generate randomised evidence in the early days of a pandemic. Adaptive, pragmatic and platform RCTs were used to accommodate the pandemic, with the RECOVERY trial as a well-known example.

Third, none of the definitions of EBM includes randomisation. Instead, EBM is based on the use of the best available evidence.2 The best available evidence is often non-randomised. Consider aortic transection, a catastrophic event that is difficult to study in RCTs. A systematic review of small surgical case series7 summarised the evidence for a subsequent evidence-based clinical practice guideline.8 Frameworks for the appraisal and synthesis of case series9 and for rating certainty of evidence when a meta-analysis is not feasible10 have already been published. In cases where only mechanistic evidence is available, we may have to depend on it. This would be consistent with EBM because mechanistic evidence would be the best available evidence. However, we should confirm the mechanistic evidence with empirical experiments, preferably RCTs when feasible.

Fourth, EBM has already adapted its tools for complex interventions. It has long been recognised that some interventions are complex: they are affected by several effect modifiers, can be delivered in multiple ways or by different health professionals, or have multiple components. Examples include public health interventions and pandemic-related policies. These complex interventions cannot be evaluated by asking ‘Does it work?’ Instead, they should be addressed by asking ‘When, how and in whom does it work?’ Analytic approaches for complex interventions are established11 12 and are already well recognised in EBM.13

In summary, EBM is a paradigm in which healthcare decisions are made based on the best available evidence, systematically identified and synthesised, and supplemented by key contextual evidence-to-decision criteria.2 These principles continue to apply during a pandemic. We agree with the authors that mechanistic evidence may be considered in decision-making, but we also believe that it needs to be confirmed in empirical studies, which can adapt to provide a rapid response.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.



  • Contributors MHM is a professor of medicine and director of the Mayo Clinic Evidence-Based Practice Center. SS is a postdoctoral research fellow at the Mayo Clinic. Both conceived the idea and critically revised and approved the manuscript. MHM is the guarantor of this work.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests MHM is a member of the GRADE Working Group and cofounder of the US GRADE Network.

  • Provenance and peer review Not commissioned; internally peer reviewed.
