Does clinical experience make up for failure to keep up to date?
Geoffrey R Norman, PhD, Kevin W Eva, PhD
McMaster University, Hamilton, Ontario, Canada

    You have just moved to a new town to open up your internal medicine practice, and now must find a doctor for your own young family. Two doctors are accepting new patients:

Jane completed residency 2 years ago, and scored in the top 2% of her class on the certification exams.

Susan completed residency 10 years ago, and scored in the top 25% of her class on the certification exams.

    Who do you choose?

When this question has been posed to large clinical audiences over the past few years, only half a dozen people have ever chosen Jane over Susan. Why? Most people would accept not only that Susan started off with weaker formal knowledge, but also that her knowledge and application of currently recommended care have very likely fallen off further since, as Choudhry et al have shown in a recent systematic review.1 Why, then, do audiences so unanimously choose the more experienced practitioner?

    Perhaps for just that reason—experience. The practice of medicine, like many other areas of human endeavour, requires considerable “hands on” experience to achieve mastery. Most physicians, when asked, indicate that they did not really feel competent for several years after they entered practice, which tallies with estimates from other domains that suggest 10 years or 10 000 hours are required to become a virtuoso.2 But what is gained from experience?

Unfortunately, if you believe the conclusions of Choudhry et al, the answer is that nothing is gained and much is lost. They claim to have identified several studies in which increasing years in practice are associated with increased mortality. But on closer scrutiny, the differences, when present, are small. The paper states that, in the best study of outcome,3 every year since graduation resulted in a 0.5% increase in mortality in the management of post-myocardial infarction (MI) patients. However, this was an increase in relative risk, so each year of practice equated to an absolute increase in mortality of only about 0.05%, on a baseline post-MI mortality rate of about 10%. After 20 years of practice, the mortality would be projected to rise to about 11%. A second study by the same authors4 showed a non-significant increase corresponding to about 0.02% mortality per year, or an absolute increase in risk of 0.4% after 20 years. Since treatment of MI is an area of medicine where swift advances in treatment are the norm, this figure might well be regarded as an upper limit, and indeed closer examination of the other studies of mortality cited by Choudhry et al shows small or absent effects.
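To make the arithmetic concrete, the conversion from relative to absolute risk implied above can be sketched as a back-of-envelope calculation (this assumes a constant 0.5% relative increase per year applied to a 10% baseline; the risk model in the underlying study may differ):

\[
\underbrace{0.005}_{\text{relative increase per year}} \times \underbrace{0.10}_{\text{baseline post-MI mortality}} \approx 0.0005, \text{ ie, about } 0.05\% \text{ absolute increase per year}
\]
\[
0.10 \times (1.005)^{20} \approx 0.10 \times 1.105 \approx 0.11, \text{ ie, roughly } 11\% \text{ projected mortality after 20 years of practice}
\]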

These findings reveal an intriguing paradox. Physicians in practice tend not to keep up on either knowledge tests or adherence to practice guidelines, but this seems to have minimal impact, if any, on patient outcomes. It could, of course, be the case that practice guidelines do not make that much difference to patient outcomes, and indeed there are few studies showing a benefit of guidelines.5 However, it is not the case that outcomes are simply insensitive to provider differences, because, in the studies by Norcini et al,3,4 subspecialisation resulted in absolute mortality differences of about 2.5%, and success on a certification examination resulted in an absolute mortality difference of about 2%. So we are left with the conclusion that patient outcomes do not decline with physician age to a degree commensurate with the drop in knowledge or guideline compliance. Does that mean that there is something acquired from experience that is able to compensate for a fairly large decline in knowledge and therapeutic approach?

    Some indication of this compensatory mechanism can be gleaned from studies of expertise in other domains. The most studied area of expertise is chess, which has been systematically explored for over 50 years. It has been shown that chess expertise is due, in large part, to thousands of hours of deliberate practice, taking 10 to 20 years.2 The consequence is that chess performance shows a curvilinear relation to age, peaking at about age 35 while expertise is slowly acquired, and declining slowly thereafter.

    Expert chess play consists, in large part, of matching the current play to some learned moves in memory—what one might call pattern recognition. Not surprisingly, since this is the essential nature of the skill, chess expertise is best observed in speeded play6 where the player must rapidly select the best move, and where the ability to recognise patterns of play is a decided asset. Studies in other areas of expertise have also shown that experts actually do better under speeded conditions; expert golfers have more accurate putts when told to be rapid than when told to be accurate.7

    Some similar observations exist in medicine. Expert dermatologists, when they are right, are quick—an average of 8 seconds. When they are wrong, it takes them about 12 seconds, and when they are unsure, they will ponder for 28 seconds on average.8 Hobus et al9 have shown that when clinicians are provided with minimal information, the correlation between diagnostic accuracy and experience is +0.68.

    Although these findings suggest that experience enables practitioners to make decisions rapidly, it remains unclear how this skill relates to experience. We have pursued a line of inquiry based on the assertion that expert clinicians frequently arrive at a diagnosis by mentally comparing the presenting situation to a specific previous case.10–12 The process occurs without conscious reflection, analogous to the way we recognise a friend on the street.12 Thus, it is reasonable to presume that a major component of medical expertise that is learned during the course of many years of practice is the accumulation of a vast mental storehouse of clinical cases, on which experts draw repeatedly in arriving at a diagnosis.

Cast in this light, the real perplexity in the review by Choudhry et al is the lack of a positive relation between experience and clinical outcomes. Although clinicians become increasingly accomplished at pattern recognition based diagnosis as experience accumulates, the benefits of that strategy may come at the cost of reduced flexibility. Hashem et al13 have presented data showing that specialists have a tendency to cognitively “pull” cases toward the domains with which they have most experience.

    Confirmation of this possibility comes from studies of the Physician Review and Enhancement Program (PREP) in Ontario, Canada, using a battery of tests of physician performance. Overall, they too found a negative association between age and expertise.14 Systematic consideration of the causes of poor performance in the older physicians, however, suggests that premature closure (ie, excessive reliance on one’s early impressions of a case) is a major source of difficulty.15 In other words, more experienced physicians seem more likely to accurately diagnose using pattern recognition, but as a result of increased reliance on this strategy, they are also less likely to give due consideration to competing diagnoses.16

The discussion above relates primarily to diagnostic expertise. By and large, however, the performance measures in the review by Choudhry et al reflect either surgical skill or management strategies in circumstances where the diagnosis is a given. A recent study, though, sheds some light on the relation between experience and management. Schuwirth et al17 assessed rheumatologists with 2 kinds of clinical problems: a computer-based test consisting of 55 written management cases focusing on 1–4 essential decisions, and a series of 8 incognito standardised patients who visited their offices and completed a performance checklist after the encounter. The standardised patient test had a strong negative correlation (r = −0.50) with the total number of patients seen during the rheumatologist’s professional life, again presumably because experts do not require as much information as novices and so are penalised by checklist-based scoring systems.18 In contrast, the computer test was positively correlated (r = +0.58) with lifetime experience. Why the discrepancy with Choudhry et al’s findings? Perhaps because responses were subjectively scored by other experts on a “case by case” basis, rather than compared with a detailed set of items like a practice guideline. Perversely, it may just be that expertise is evidenced as much in knowing when to depart from guidelines as in knowing what the guidelines say. Indeed, a recent study19 of hospital clinicians indicated that consultants’ approaches to drug treatment were more idiosyncratic than those of house officers, mainly because the consultants were more holistic and adapted their prescribing to the individual patient, whereas junior doctors used a more formulaic approach.

    In summary, we have suggested 2 mechanisms to explain the paradox that apparently large declines in measures of knowledge and process of care do not translate into commensurate large differences in patient outcome. Firstly, adherence to prescribed practices of care may be, in some sense, optimal at a population level, but at an individual level, experienced physicians may deliberately and systematically depart from these guidelines to recognise individual patient needs. As a consequence, they may be penalised on measures based on adherence to prescribed regimens. Secondly, there is an accumulation of evidence that with experience, physicians rely more on pattern recognition strategies that can, to some degree, compensate for failure to keep up in formal knowledge, but can themselves lead to negative consequences.20

    We are not suggesting that the findings of Choudhry et al should be lightly dismissed. They do indicate that physicians are not keeping up with current approaches to patient care. Current approaches to maintenance of competence that are dependent on self assessment of one’s own knowledge/abilities should be critically re-examined. It seems unlikely that admonitions to be more reflective or to identify weaknesses can overcome the negative trends identified.

Viewing experience as a double-edged sword, as we have, creates the opportunity for more effective continuing education. Lectures and distribution of printed materials are not effective.21 Learning around specific cases in which individuals are challenged to apply the latest research evidence might be.22 Doing so in a context in which participants are required to respond to feedback is likely to provide further incremental benefit, especially if that feedback is derived from individuals with heterogeneous backgrounds and varied levels of expertise. In general, the position presented in this editorial leads us to advocate recognising the unique strengths that experience provides while simultaneously developing and investigating continuing education strategies that re-ignite the analytic tendencies of individuals for whom medical practice has become excessively automated and routinised.
