Understanding of research results, evidence summaries and their applicability—not critical appraisal—are core skills of the medical curriculum
Kari A O Tikkinen,1,2 Gordon H Guyatt3

1 Departments of Urology and Public Health, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
2 Department of Surgery, South Karelian Central Hospital, Lappeenranta, Finland
3 Departments of Health Research Methods, Evidence, and Impact (HEI), and Medicine, McMaster University, Hamilton, Ontario, Canada

Correspondence to Professor Kari A O Tikkinen, Department of Urology, Helsinki University Hospital, Helsinki 00290, Finland; kari.tikkinen@helsinki.fi


To practise high-quality healthcare, clinicians must be able to diagnose correctly, provide preventative and treatment interventions based on the best available evidence, and ensure decisions are consistent with patients' values and preferences. The educational approaches to teaching evidence-based medicine (EBM), intended to ensure that clinical decisions reflect both the best evidence and patients' values, are, however, open to question.

EBM experts devoted to optimising EBM education often suggest that practising high-value, evidence-based care requires clinicians to be able to critically appraise original research studies as well as systematic reviews. Critical appraisal includes assessing risk of bias, which involves a careful reading of methods and results.

If indeed optimal practice requires such critical appraisal, it naturally follows that in introducing EBM one should educate clinicians so that they can competently make risk of bias assessments of randomised trials and observational studies, and similarly assess the rigour of systematic reviews. Much—perhaps almost all—of the EBM educational community has adopted this position and, therefore, EBM lectures and workshops often have their primary focus on critical appraisal. These sessions usually involve detailed assessment of risk of bias by careful, critical reading of methods and results of research studies.

The Centre for Evidence-Based Medicine website1 presents critical appraisal as the systematic evaluation of clinical research papers, aiming to answer the following questions: (1) does this study address a clearly focused question? (2) did the study use valid methods to address this question? (3) are the valid results of this study important? and (4) are these valid, important results applicable to my patient or population? The website also states that if the answer to any of these questions is 'no', 'you can save yourself the trouble of reading the rest of it'. The second criterion represents the risk of bias assessment, which often occupies much, if not most, of the available EBM education time.

Thirty years ago, one of us (GG) began a 7-year tenure as director of the McMaster Internal Medicine residency training programme. The resulting programme followed the EBM educational philosophy that evidence-based practice requires the regular practice of risk of bias assessment of original articles. After 7 years of running the programme with a priority on producing clinicians skilled in EBM,2 the perspective had changed: experience revealed that intense exposure (usually in the form of graduate courses) was required to achieve competence in risk of bias assessment. Despite efforts to recruit the residents most interested in EBM, few graduates proved interested in achieving, or ever achieved, this level of proficiency. Moreover, even the most interested and competent faced daunting time limitations in appraising enough articles both to justify current practices and to stay current.

It also became clear that internal medicine residents could achieve evidence-based practice through secondary sources of evidence, such as trustworthy clinical practice guidelines, and through feedback from their clinical mentors. That is, the ability to critically appraise independently was not a necessity for evidence-based practice.

Reflecting on this experience, a group of EBM educators published an article acknowledging the limitations of EBM educational goals for the vast majority of clinicians, and suggesting alternative realistic approaches to ensuring evidence-based practice—that is, practice based on the best evidence and patients’ values and preferences.3

Formal evaluations of these conclusions from the McMaster Internal Medicine residency programme are limited, but those available support our inferences. Our trainees' responses mirror those of British general practitioners, who often use evidence-based summaries generated by others (72%) and evidence-based practice guidelines (84%) but who overwhelmingly (95%) believe that 'learning the skills of evidence-based medicine' is not the most appropriate method for 'moving … to evidence based medicine'.4

A French study of doctors, nurses and pharmacists found that 19% of physicians reported regularly using EBM in their professional practice; estimates were even lower for pharmacists (8%) and nurses (5%).5 When clinicians did use evidence-based resources, those resources were overwhelmingly clinical practice guidelines. A Dutch study of otolaryngologists reported time constraints as a serious barrier to the evaluation of original articles.6

Flying in the face of this evidence, a recent consensus statement based on a systematic review and Delphi survey identified core competencies in evidence-based practice for health professionals.7 The statement suggested that there are as many as 68 evidence-based practice core competencies, many of which involve risk of bias assessment. The article has considerable merit (one of us is a co-author), but the overemphasis on critical appraisal remains.

In summary, the notion that most clinicians emerging from professional training will regularly evaluate the risk of bias in the methods and results of primary studies is a delusion. Most will be uninterested in acquiring the sophisticated skills that such appraisal requires; most of those who are interested will never make obtaining the necessary training a sufficient priority; and those who do obtain the training and skills will often not have the time to apply them. Moreover, the most skilled will also be best at what all clinicians aiming at evidence-based practice will require: identifying secondary sources that include evaluation of the quality of the evidence underlying research results.

What then are the appropriate goals of health professional education that will eventually result in optimal evidence-based practice? First, clinicians must understand that evidence comes with a gradient of trustworthiness, and that well-developed methods of differentiating the more from the less trustworthy are available. This is the reason we should not drop risk of bias assessment from the curriculum: trainees must understand, if only transiently, randomised controlled trial issues such as allocation concealment, blinding, loss to follow-up and intention to treat, and observational study issues such as adjusted analysis. Having understood them at the time, trainees will no longer see the process of risk of bias assessment as a black box. In later years, they will be able to accept the risk of bias judgments of those trained to make them, knowing that a well-developed rationale underlies their application.

Second, we must teach trainees how to identify secondary sources of trustworthy information. Systematic reviews constitute one source, but clinical practice guidelines will provide the most efficient and useful guidance. Increasingly, specialty societies provide trustworthy evidence-based guidelines; electronic resources such as UpToDate and DynaMed provide whole repositories of largely trustworthy evidence summaries and guidelines; and new resources, such as BMJ Rapid Recommendations, are becoming increasingly available.8

Third, when EBM educators convey the notion of trustworthy evidence, risk of bias assessment of individual studies should not be the primary focus. Because clinicians will appropriately look to summaries of bodies of evidence in systematic reviews and guidelines, they must develop a basic understanding of the issues that bear on their quality/certainty/trustworthiness. These include study design (randomised trials vs observational studies) and issues not only of risk of bias but also of precision, consistency, directness and magnitude of effect.9

Fourth, and perhaps most important, trainees need to understand that evidence is never sufficient to guide clinical practice: patients’ values and preferences are always crucial. Many, likely the majority, of important clinical decisions are value and preference dependent: the right choice for one individual will be the wrong choice for another.

Thus, shared decision-making is crucial for evidence-based practice. To engage in shared decision-making, clinicians must understand the magnitude of the benefits, harms and burdens associated with alternative management options, along with the quality of the evidence, and be able to discuss these with patients. To do so, they need a deep understanding of certain EBM basics: what a relative effect is, what an absolute effect is and how the two are related, as sketched below. They need to recognise when (absolute) effects are trivial, small, moderate or large, and understand that absolute effects are far more important to patients than relative effects.
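The relationship is simple enough to state explicitly. In the sketch below (our notation, not drawn from any of the cited sources), $p_0$ is the risk of an adverse outcome without treatment and $p_1$ the risk with treatment:

$$\mathrm{RR} = \frac{p_1}{p_0}, \qquad \mathrm{RRR} = 1 - \mathrm{RR}, \qquad \mathrm{ARR} = p_0 - p_1, \qquad \mathrm{NNT} = \frac{1}{\mathrm{ARR}}$$

The relative measures (RR, RRR) are ratios and carry no information about absolute benefit until anchored to a baseline risk; the absolute measures (ARR and its inverse, the number needed to treat, NNT) are what patients most need to grasp.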

Do clinicians in training currently have a sufficient understanding of systematic review results? Unfortunately not. A multinational survey of more than 600 staff and trainees (response rate 87%) in internal medicine and family medicine programmes in Canada, Spain, the USA, Finland, Chile, Norway, Lebanon and Switzerland explored clinicians' understanding, and perceptions of the usefulness, of six statistical formats for presenting continuous outcomes from meta-analyses (standardised mean difference, minimal important difference units, mean difference in natural units, ratio of means, relative risk and risk difference).10

Although clinicians best understood the dichotomous presentations of continuous outcomes (relative and absolute effects) and perceived them to be the most useful, none of the presentation formats was well understood or perceived as extremely useful. One remedy may be to improve presentation by using plain language11–13 and effective visual presentation.8 13 14 To conduct optimal shared decision-making, however, our numerically challenged young healthcare practitioners must still understand what a relative risk reduction of 25% means, and whether in context it implies a reduction from 4% to 3% or from 40% to 30%. Hence, our educational time should be spent much less on risk of bias in individual studies, somewhat less (depending on how much time we currently spend on it) on the quality of a body of evidence, and much more on understanding results.
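Working the example above through the formulas sketched earlier makes the point concrete (the numbers needed to treat are our own illustration): a relative risk reduction of 25% applied to a baseline risk of 4% gives $\mathrm{ARR} = 4\% - 3\% = 1\%$ and $\mathrm{NNT} = 1/0.01 = 100$, whereas the same relative risk reduction applied to a baseline risk of 40% gives $\mathrm{ARR} = 40\% - 30\% = 10\%$ and $\mathrm{NNT} = 1/0.10 = 10$. An identical relative effect thus implies a tenfold difference in absolute benefit, which is exactly what a shared decision-making conversation must convey.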

In summary, if by a core skill we mean one that clinicians must ultimately apply in their clinical practice, risk of bias assessment is not a core skill. Relative to risk of bias in primary studies, EBM educators should place more time and emphasis on the quality/certainty of bodies of evidence, and much more on understanding the magnitude of effects and the applicability of results.


Footnotes

  • Twitter @KariTikkinen, @EBCPMcMaster

  • Contributors KAOT and GG conceived, drafted, revised and approved this article.

  • Funding Tikkinen is supported by the Academy of Finland (309387), Competitive Research Funding of the Helsinki and Uusimaa Hospital District (TYH2019321; TYH2020248) and Sigrid Jusélius Foundation. The sponsors had no role in the analysis and interpretation of the data or the manuscript preparation, review or approval.

  • Competing interests GG is a consultant for UpToDate.

  • Provenance and peer review Not commissioned; externally peer reviewed.