Preparing for my decennial recertification exam in internal medicine opened my eyes to a startling disconnect between how evidence-based medicine (EBM) is communicated to practising clinicians and how it is implemented by expert panels and EBM authorities. Indeed, sources of continuing clinical education often omit basic tenets of EBM, in particular how to integrate the best available evidence with clinical expertise.1–3 This disconnect is mystifying: why should education experts well versed in EBM use communication templates disconnected from EBM when communicating evidence for practising medicine? For example, a common source of preparation for the American Board of Internal Medicine recertification exam is the Medical Knowledge Self-Assessment Program (MKSAP) published by the American College of Physicians.4 Even though the American College of Physicians prioritises the mission of teaching and implementing EBM,5 two advisements from the 2017 MKSAP illustrate how the curriculum it sponsors deviates from this mission.
Advisement 1: ‘Routine screening for skin cancer using a total body skin examination is not recommended. The United States Preventive Services Task Force (USPSTF) found insufficient evidence that routine skin examination was effective at reducing the morbidity and mortality from cutaneous melanoma, basal cell carcinoma, or squamous cell carcinoma. Patients with high cumulative levels of sun exposure should be encouraged to wear sunscreen and protective clothing, although the benefit of such counselling is unknown.’
Here, clinicians are not told whether the advisement rests on an absence of direct evidence or on direct evidence that is unfavourable. Yet this distinction is of key importance for integrating evidence with expertise, implementing EBM and guiding decisions. No direct evidence could just as easily lead to screening as to no screening, whereas direct unfavourable evidence should lead to no screening in the absence of a compelling reason to do otherwise.
Advisement 2: ‘Routine screening for CAD is not recommended. Resting electrocardiography, exercise treadmill testing, and electron-beam CT may identify some patients with asymptomatic disease, but these strategies lack supportive evidence.’
Again, clinicians are not told whether the advisement rests on an absence of direct evidence or on direct evidence that is unfavourable, thereby muddying inferences for decision-making. For example, should a clinician perform an ECG on an asymptomatic patient with an expected 10-year incidence of cardiac disease greater than 10%? One could argue that such a patient would not be ‘routine’ and therefore should be excluded from the prescribed inference by the ‘routine screening’ clause in the recommendation, but an improved method of disseminating EBM would remove this ambiguity.
Clinical resources that deviate from EBM principles are not unique to MKSAP, but rather are endemic. A well-regarded reference, UpToDate, routinely issues advisements with oblique relationships to EBM; for example, advising melanoma screening for high-risk persons after describing the uncertainty that screening lowers mortality and the biases in supporting studies.6
Indeed, failing to discriminate between no evidence and unfavourable evidence is similar to indiscriminate use of the phrase ‘there is no evidence to suggest’,7 which is also endemic and bears the imprint of contexts removed from medicine where there is an implicit desire for a contrary bias (eg, legal proceedings, where innocence is presumed in the absence of evidence of guilt).
How might clinician resources become more aligned with EBM? It is not an easy question—media like those cited above are valued precisely because they transform difficult, often grey-shaded questions into digestible and actionable maxims. Nonetheless, I believe that substantial improvement is possible.
One approach to improve the fidelity of disseminating EBM
It is possible to design a taxonomy focused on EBM implementation that could be adopted by sources of continuing clinical education and other clinician resources. The key is to make it simple but not stupid. I will suggest one possible taxonomy, which differs from other EBM-related taxonomies8–12 in that it is designed around distinct inferences for clinical decisions rather than around distinct tiers of evidence quality.
First, define a ‘benefit harm expectancy’ (BHE) as an informal qualitative assessment of the balance of benefits to harms that may be based on clinical expertise or other subjective factors in the absence of scientific evidence. Categorical evaluations of BHE should be rank ordered and qualitative, such as ‘favourable’, ‘unfavourable’ or ‘unclear’. The notion of BHE is then a keystone that makes possible the next step: setting out an easily understood taxonomy of EBM-based inferences that is mutually exclusive and collectively exhaustive (table 1). Note that the quantitative analogue of a BHE is a Bayesian ‘prior probability distribution’, but because this quantitative formalisation is difficult to perform in practice and is fraught with estimation error, I will not consider it further. Table 1 illustrates how the evidence summaries in advisements 1 and 2 are compatible with at least three very different EBM-based inferences. Rather than being ambiguous, advisements should clarify which inference(s) are sound, for example, by employing phrases from the second column of table 1 such as ‘probably indicated until better evidence emerges’.
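To make the logic of the taxonomy concrete, the decision structure can be sketched as a small lookup from evidence status and BHE to an inference. This is my own illustrative sketch, not the published table 1: the enumerations, function name and inference phrasings (beyond those quoted in the text) are assumptions consistent with the argument that direct evidence should dominate, and that in its absence the BHE decides.

```python
from enum import Enum

class Evidence(Enum):
    """Status of direct scientific evidence for an intervention."""
    NONE = "no direct evidence"
    FAVOURABLE = "direct favourable evidence"
    UNFAVOURABLE = "direct unfavourable evidence"

class BHE(Enum):
    """Benefit harm expectancy: informal, qualitative, rank ordered."""
    FAVOURABLE = "favourable"
    UNCLEAR = "unclear"
    UNFAVOURABLE = "unfavourable"

def inference(evidence: Evidence, bhe: BHE) -> str:
    """Map (evidence status, BHE) to an EBM-based inference.

    Direct evidence dominates; with no direct evidence, the
    clinician's benefit harm expectancy carries the decision.
    """
    if evidence is Evidence.UNFAVOURABLE:
        return "not indicated absent a compelling reason to do otherwise"
    if evidence is Evidence.FAVOURABLE:
        return "indicated"
    # No direct evidence: fall back on the benefit harm expectancy.
    if bhe is BHE.FAVOURABLE:
        return "probably indicated until better evidence emerges"
    if bhe is BHE.UNFAVOURABLE:
        return "probably not indicated until better evidence emerges"
    return "uncertain; shared decision-making advised"
```

The point of the sketch is that the same evidence summary (‘no direct evidence’) yields different inferences depending on the BHE, whereas direct unfavourable evidence yields a single inference regardless of it; an advisement that does not distinguish the two evidence states leaves the clinician unable to select the correct row.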
While not every clinical situation will be ‘map-able’ to a specified set of inferences, higher fidelity dissemination of EBM would improve the quality of inferences in a large variety of scenarios. Prescribe an inference to a clinician and you help her make a decision for that day; teach a clinician how to make inferences and you teach her to make EBM-based decisions for her whole career.
It may be observed that the strategy advocated here simply embodies a ‘poor man’s’ use of Bayesian logic in the absence of estimable probability distributions. It may also be observed that similar roads have been travelled by EBM investigators who seek to use formal methods to synthesise a body of evidence. But it is long past due for the conceptual underpinnings of EBM to be disseminated more broadly and with greater fidelity, and to be implemented by a wider cross-section of practising physicians.
Some may contend that because much clinical decision-making is pattern recognition and/or subject to many well-described biases,13 there is no point in trying to impose on it, or embed within it, an explicitly rational structure. But all too often, these biases are invoked in a specious effort to counter any endeavour to improve the transparency or clarity of clinical decision-making. Just because clinicians are not explicitly rational thinkers all the time (nor should they be), it does not mean they should not be rational thinkers sometimes, particularly when scientific evidence is robust. After all, what patient would choose a clinician who never thinks rationally? Additionally, it is important to note that my perspective is not universally held. Some would consider BHEs no easier to formulate than quantitative specifications of prior probability distributions, especially if any new evidence is viewed sceptically.14 Others would omit experiential data from EBM on the grounds that it is not really evidence.
EBM should be more than just a slogan. The taxonomy described here could facilitate its dissemination and anchor its role in routine clinical practice.
Contributors RSB conceived the project and drafted the manuscript.
Competing interests None declared.
Patient consent Not required.
Provenance and peer review Not commissioned; externally peer reviewed.