Evidence-based medicine: clinicians are taught to say it but not taught to think it
===================================================================================

* R Scott Braithwaite

Preparing for my decennial recertification exam in internal medicine opened my eyes to a startling disconnect between how evidence-based medicine (EBM) is communicated to practising clinicians and how it is implemented by expert panels and EBM authorities. Indeed, sources of continuing clinical education often omit basic tenets of EBM, in particular how to integrate the best available evidence with clinical expertise.1–3 This disconnect is mystifying: why should education experts well versed in EBM use communication templates disjointed from EBM to communicate evidence for practising medicine?

For example, a common source of preparation for the American Board of Internal Medicine recertification exam is the Medical Knowledge Self-Assessment Program (MKSAP) published by the American College of Physicians.4 Even though the American College of Physicians prioritises the mission of teaching and implementing EBM,5 two advisements from the 2017 MKSAP illustrate how the curriculum it sponsors deviates from this mission. First,

> Advisement 1: ‘Routine screening for skin cancer using a total body skin examination is not recommended. The United States Preventive Services Task Force (USPSTF) found insufficient evidence that routine skin examination was effective at reducing the morbidity and mortality from cutaneous melanoma, basal cell carcinoma, or squamous cell carcinoma. Patients with high cumulative levels of sun exposure should be encouraged to wear sunscreen and protective clothing, although the benefit of such counselling is unknown.’

Here, clinicians are not told whether the advisement’s basis in EBM is that there is no direct evidence or that there is direct evidence but it is unfavourable.
Yet, this distinction is of key importance for integrating evidence with expertise to implement EBM and guide decisions. No direct evidence could just as easily lead to screening or no screening, whereas direct unfavourable evidence should lead to no screening in the absence of a compelling reason to do otherwise. Second,

> Advisement 2: ‘Routine screening for CAD is not recommended. Resting electrocardiography, exercise treadmill testing, and electron-beam CT may identify some patients with asymptomatic disease, but these strategies lack supportive evidence.’

Again, clinicians are not told whether the advisement’s basis is that there is no direct evidence or that there is direct evidence but it is unfavourable, thereby muddying inferences for decision-making. For example, should a clinician perform an ECG on an asymptomatic patient with an expected 10-year incidence of cardiac disease greater than 10%? One could argue that such a patient would not be ‘routine’ and therefore should be excluded from the prescribed inference by the ‘routine screening’ clause in the recommendation, but an improved method of disseminating EBM would remove this ambiguity.

Clinical resources that deviate from EBM principles are not unique to MKSAP, but rather are endemic. A well-regarded reference, UpToDate, routinely issues advisements with oblique relationships to EBM; for example, advising melanoma screening for high-risk persons after describing the uncertainty that screening lowers mortality and the biases in supporting studies.6 Indeed, failing to discriminate between no evidence and unfavourable evidence is similar to indiscriminate use of the phrase ‘there is no evidence to suggest’,7 which is also endemic and bears the imprint of contexts removed from medicine where there is an implicit desire for a contrary bias (eg, legal proceedings, where innocence is presumed in the absence of evidence of guilt).

How might clinician resources become more aligned with EBM?
It is not an easy question: media like those cited above are valued precisely because they transform difficult, often grey-shaded questions into digestible and actionable maxims. Nonetheless, I believe that substantial improvement is possible.

## One approach to improve the fidelity of disseminating EBM

It is possible to design a taxonomy focused on EBM implementation that could be adopted by sources of continuing clinical education and other clinician resources. The key is to make it simple but not stupid. I will suggest one possible taxonomy which differs from other EBM-related taxonomies8–12 because it is designed around distinct inferences for clinical decisions rather than around distinct tiers of evidence quality.

First, define a ‘benefit harm expectancy’ (BHE) as an informal qualitative assessment of the balance of benefits to harms that may be based on clinical expertise or other subjective factors in the absence of scientific evidence. Categorical evaluations of BHE should be rank ordered and qualitative, such as ‘favourable’, ‘unfavourable’ or ‘unclear’. The notion of BHE is then a keystone that makes possible the next step: iterating an easily understood taxonomy of EBM-based inferences that is mutually exclusive and collectively exhaustive (table 1). Note that the quantitative analogue of a BHE is a Bayesian ‘prior probability distribution’, but because this quantitative formalisation is difficult to perform in practice and is fraught with estimation error, I will not consider it further.

Table 1 illustrates how the evidence summaries in advisements 1 and 2 are compatible with at least three very different EBM-based inferences. Rather than being ambiguous, advisements should clarify which inference(s) are sound, for example by employing phrases in the second column of table 1 such as ‘probably indicated until better evidence emerges’.
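The decision logic of such a taxonomy can be sketched in code. This is a minimal illustrative sketch, not the article’s actual table 1: the specific inference labels and the mapping below are assumptions, chosen only to show how direct evidence and a qualitative BHE could combine into a mutually exclusive, collectively exhaustive set of inferences.

```python
# Hypothetical sketch of a taxonomy of EBM-based inferences.
# The labels and mapping are illustrative assumptions, not the published table 1.
from enum import Enum


class Evidence(Enum):
    FAVOURABLE = "direct evidence, favourable"
    UNFAVOURABLE = "direct evidence, unfavourable"
    ABSENT = "no direct evidence"


class BHE(Enum):
    """Benefit harm expectancy: a qualitative, rank-ordered judgement."""
    FAVOURABLE = "favourable"
    UNFAVOURABLE = "unfavourable"
    UNCLEAR = "unclear"


def infer(evidence: Evidence, bhe: BHE) -> str:
    """Map an (evidence, BHE) pair to a clinical inference.

    Direct evidence dominates; the BHE only matters when direct
    evidence is absent, which is the gap the BHE is meant to fill.
    """
    if evidence is Evidence.FAVOURABLE:
        return "indicated"
    if evidence is Evidence.UNFAVOURABLE:
        return "not indicated absent a compelling reason to do otherwise"
    # No direct evidence: fall back on the qualitative BHE.
    if bhe is BHE.FAVOURABLE:
        return "probably indicated until better evidence emerges"
    if bhe is BHE.UNFAVOURABLE:
        return "probably not indicated until better evidence emerges"
    return "no sound basis for inference; individualise the decision"
```

Because every (evidence, BHE) pair falls into exactly one branch, the sketch is mutually exclusive and collectively exhaustive in the sense described above; under this hypothetical mapping, advisements 1 and 2 would resolve to different inferences depending on whether their evidence is absent or unfavourable.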
View this table: [Table 1](http://ebm.bmj.com/content/24/5/165/T1)

Table 1 Taxonomy of EBM-based inferences

While not every clinical situation will be ‘map-able’ to a specified set of inferences, higher fidelity dissemination of EBM would improve the quality of inferences in a large variety of scenarios. Prescribe an inference to a clinician, and you help her make a decision for that day; teach a clinician how to make inferences, and you teach her to make EBM-based decisions for her whole career.

It may be observed that the strategy advocated here simply embodies a ‘poor man’s’ use of Bayesian logic in the absence of estimable probability distributions. It also may be observed that similar roads have been travelled by EBM investigators who seek to use formal methods to synthesise a body of evidence. But it is long past due for the conceptual underpinnings of EBM to be disseminated more broadly and with greater fidelity, and to be implemented by a wider cross-section of practising physicians.

## Limitations

Some may contend that because much clinical decision-making is pattern recognition and/or subject to many well-described biases,13 there is no point in trying to impose on it, or embed within it, an explicitly rational structure. But all too often, these biases are invoked in a specious effort to counter any endeavour to improve the transparency or clarity of clinical decision-making. Just because clinicians are not explicitly rational thinkers all the time (nor should they be), it does not mean they should not be rational thinkers sometimes, particularly when scientific evidence is robust. After all, what patient would choose a clinician who never thinks rationally? Additionally, it is important to note that my perspective is not universally held.
Some would consider BHEs no easier to formulate than quantitative specifications of prior probability distributions, especially if any new evidence is viewed sceptically.14 Others would omit experiential data from EBM on the grounds that it is not really evidence.

## Summary

EBM should be more than just a slogan. The taxonomy described here could facilitate its dissemination and anchor its role in routine clinical practice.

## Footnotes

* Contributors RSB conceived the project and drafted the manuscript.
* Competing interests None declared.
* Patient consent Not required.
* Provenance and peer review Not commissioned; externally peer reviewed.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: [http://creativecommons.org/licenses/by-nc/4.0/](http://creativecommons.org/licenses/by-nc/4.0/).

## References

1. American College of Physicians. MKSAP 18 Medical Knowledge Self-Assessment Program. American College of Physicians, 2017.
2. New England Journal of Medicine Journal Watch, 2018. [https://www.jwatch.org/](https://www.jwatch.org/) (accessed 8 Mar 2018).
3. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn’t. BMJ 1996;312:71–2. doi:10.1136/bmj.312.7023.71
4. American College of Physicians. MKSAP 16 Medical Knowledge Self-Assessment Program. American College of Physicians, 2012.
5. American College of Physicians, 2004. Mission, vision and goals. [https://www.acponline.org/about-acp/who-we-are/mission-vision-and-goals](https://www.acponline.org/about-acp/who-we-are/mission-vision-and-goals) (accessed 17 Aug 2018).
6. Geller AC, Swetter S, Tsao H. Screening and early detection of melanoma in adults and adolescents. UpToDate 2018;4845.
7. Braithwaite RS. A piece of my mind. EBM’s six dangerous words. JAMA 2013;310:2149–50. doi:10.1001/jama.2013.281996
8. Han PK, Klein WM, Arora NK. Varieties of uncertainty in health care: a conceptual taxonomy. Med Decis Making 2011;31:828–38. doi:10.1177/0272989X10393976
9. Ebell MH, Siwek J, Weiss BD, et al. Strength of recommendation taxonomy (SORT): a patient-centered approach to grading evidence in the medical literature. J Am Board Fam Pract 2004;17:59–67. doi:10.3122/jabfm.17.1.59
10. Guyatt G, Oxman AD, Akl EA, et al. GRADE guidelines: 1. Introduction – GRADE evidence profiles and summary of findings tables. J Clin Epidemiol 2011;64:383–94. doi:10.1016/j.jclinepi.2010.04.026
11. Raftery J, Hanney S, Greenhalgh T, et al. Models and applications for measuring the impact of health research: update of a systematic review for the health technology assessment programme. Health Technol Assess 2016;20:1–254. doi:10.3310/hta20760
12. Colquhoun H, Leeman J, Michie S, et al. Towards a common terminology: a simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. Implement Sci 2014;9:51. doi:10.1186/1748-5908-9-51
13. Avorn J. The psychology of clinical decision making - implications for medication use. N Engl J Med 2018;378:689–91. doi:10.1056/NEJMp1714987
14. Stegenga J. Is meta-analysis the platinum standard of evidence? Stud Hist Philos Biol Biomed Sci 2011;42:497–507. doi:10.1016/j.shpsc.2011.07.003