
Editorials

“Is my practice evidence-based?”

BMJ 1996; 313 doi: https://doi.org/10.1136/bmj.313.7063.957 (Published 19 October 1996) Cite this as: BMJ 1996;313:957
Trisha Greenhalgh, Senior lecturer
Joint Department of Primary Care and Population Sciences, University College London Medical School/Royal Free Hospital School of Medicine, Whittington Hospital, London N19 5NF

Should be answered in qualitative as well as quantitative terms

    The growing interest in evidence based medicine among practising clinicians1 has prompted doctors in every specialty to ask themselves, “to what extent is the care of my patients evidence based?” The search is on for a means of answering this question that is achievable, affordable, valid, reliable, and responsive to change.

    Evaluating one's own performance is the final step in the five stage process of traditional evidence based practice. The first four steps are: to formulate for each chosen clinical problem an answerable question, to search the medical literature and other sources for information pertaining to that question, to assess the validity (closeness to the truth) and usefulness (relevance to the problem) of the evidence identified, and to manage the patient accordingly.2

    Several papers have been published3 4 5 and many more are being written whose stated objective is “to assess whether my/our clinical practice is evidence based.” Most describe prospective surveys of a consecutive series of doctor-patient encounters in a particular specialty, in which the primary intervention for each patient was classified by the doctors (and in some cases verified by an independent observer) according to whether it was based on evidence from randomised controlled trials, convincing non-experimental evidence, or inadequate evidence.

    Such surveys have generated the widely quoted figures that 82% of interventions in general medicine,3 81% of interventions in general practice,4 and 62% of interventions in psychiatry5 are evidence based. Questionnaire surveys of what doctors say they do in particular circumstances are starting to add to this literature.6 The public may soon be offered a “league table” of specialties ranked according to how evidence based they have shown themselves to be.

    Figures produced in the early 1980s suggested that only about 15% of medical practice was based on sound scientific evidence.7 Is the spate of new studies, therefore, grounds for reassurance that medical practice has become dramatically more evidence based in the past 15 years? Probably not. The earlier estimates were derived by assessing all diagnostic and therapeutic procedures currently in use, so that each procedure, however obscure, carried equal weight in the final figure. A more recent evaluation using this method classified 21% of health technologies as evidence based.8 The latest surveys, which looked at interventions chosen for real patients, were designed with the laudable objective of assessing the technologies which were actually used rather than simply those that are on the market.
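The arithmetic behind this distinction is worth making explicit. The sketch below, using entirely hypothetical figures, shows how the same body of practice yields a small “evidence based” percentage when every procedure on the market counts equally, and a much larger one when procedures are weighted by how often they are actually used for real patients.

```python
# Hypothetical illustration of why per-procedure and per-encounter
# estimates of "evidence based" practice differ so sharply.
# All names and numbers below are invented for the sake of the example.

procedures = [
    # (name, supported by sound evidence?, times used in the survey period)
    ("procedure A", True,  400),   # common and well evidenced
    ("procedure B", True,  250),
    ("procedure C", False,  40),   # obscure, rarely used, poorly evidenced
    ("procedure D", False,  30),
    ("procedure E", False,  20),
    ("procedure F", False,  10),
]

# Early-1980s style estimate: every procedure in use carries equal weight.
per_procedure = sum(evidenced for _, evidenced, _ in procedures) / len(procedures)

# 1990s survey style: weight each procedure by actual patient encounters.
total_uses = sum(uses for _, _, uses in procedures)
per_encounter = sum(uses for _, evidenced, uses in procedures if evidenced) / total_uses

print(f"Per-procedure estimate:  {per_procedure:.0%}")   # 33% of procedures
print(f"Per-encounter estimate: {per_encounter:.0%}")    # 87% of actual care
```

The two estimates describe exactly the same practice; only the denominator has changed.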

    But the impressive percentages obtained in these series should be interpreted cautiously. As the protagonists of evidence based medicine themselves have taught us, a survey of any aspect of medical care should, in order to be generalisable beyond the particular sample studied, meet criteria for representativeness (are the health professionals and patients described typical?), data collection (is the sample unbiased?), data analysis (were all potential subjects included in the denominator or otherwise accounted for, and was assessment “blind”?), validity (were appropriate criteria used to classify subjects, and were these criteria applied rigorously?), comprehensiveness (was the study large enough and complete enough to make the results credible?), and repeatability (would the same results be obtained if the sample were studied on another occasion?).2

    Is my practice evidence based? A context specific checklist for individual clinical encounters

    Have you

    1. Identified and prioritised the clinical, psychological, social, and other problems, taking into account the patient's perspective?

    2. Performed a sufficiently competent and complete examination to establish the likelihood of competing diagnoses?

    3. Considered additional problems and risk factors?

    4. Where necessary, sought relevant evidence—from systematic reviews, guidelines, clinical trials, and other sources?

    5. Assessed and taken into account the completeness, quality, and strength of the evidence, and its relevance to this patient?

    6. Presented the pros and cons of the different options to the patient in a way they can understand, and incorporated the patient's utilities into the final recommendations?

    A survey which addressed the question “Is my practice evidence-based?” and which fulfilled all these criteria would be a major and highly expensive undertaking. But even if it were practically possible, several theoretical limitations would remain. The most important of these is that patients rarely enter the consulting room (or the operating theatre) with a discrete, one dimensional problem. A study which, for good practical reasons, looks at one clinical decision per case necessarily reduces the complexities of each patient's wants and needs to a single decision node. Such an approach might occasionally come close to being valid, but many aspects of management in primary care,9 care of older people,10 and chronic medical conditions11 do not lend themselves to the formulation of single answerable questions or the application of discrete, definitive interventions. In general practice, for example, the usual diagnostic and therapeutic sequence of diagnosis by epidemiological classification—symptoms and signs leading to identification of the disease, leading to treatment—may be less appropriate than diagnosis by prognosis—symptoms and signs leading to a provisional hypothesis, leading to watchful waiting, leading to identification of the disease—or diagnosis by therapeutic response—symptoms and signs leading to a provisional hypothesis, leading to empirical treatment, leading to identification of the disease.2

Failure to recognise the legitimacy of these variations in approach has created a somewhat spurious divide between those who seek to establish general practice on an equal “scientific” footing with the secondary care sector4 12 and those who emphasise the value of the intuitive, narrative, and interpretive aspects of the consultation.13 Others have argued that both “science” and “art” are essential elements of evidence based care, which strives to integrate the best external evidence with all round clinical expertise.1 14 Nevertheless, debate continues as to whether all round clinical expertise can be dissected down to a set of objective and measurable components that are amenable to formal performance review15 or whether it is ultimately subjective and one of the unsolvable mysteries of the art of medicine.16

    Perhaps the most difficult aspect of evidence based practice to evaluate is the extent to which the evidence, insofar as it exists, has been applied with due regard to the personal priorities of the patient being treated.17 It is said that this step can be made objective by incorporating the weighted preferences of patients (utilities) into a decision tree.18 But researchers have found that defining and measuring the degree of patient centredness of a medical decision is a methodological minefield.19
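To illustrate what incorporating utilities into a decision tree means in practice, the sketch below computes the expected utility of two hypothetical management options. Every probability and utility value here is invented; in a real analysis the utilities would be elicited from the patient, and doing that reliably is precisely the minefield referred to above.

```python
# Hypothetical sketch of a two-option decision tree in which the
# patient's own valuations (utilities, scaled 0 to 1) weight each outcome.
# All probabilities and utilities are invented for illustration only.

options = {
    "drug treatment": [
        # (outcome, probability, patient's utility for that outcome)
        ("cure",                   0.70, 1.00),
        ("cure with side effects", 0.20, 0.60),
        ("no benefit",             0.10, 0.30),
    ],
    "watchful waiting": [
        ("spontaneous recovery",   0.50, 1.00),
        ("persistent symptoms",    0.50, 0.50),
    ],
}

for option, branches in options.items():
    # The branch probabilities for each option must sum to one.
    assert abs(sum(p for _, p, _ in branches) - 1.0) < 1e-9
    expected_utility = sum(p * u for _, p, u in branches)
    print(f"{option}: expected utility {expected_utility:.2f}")

best = max(options, key=lambda o: sum(p * u for _, p, u in options[o]))
print(f"Option favoured by this patient's utilities: {best}")
```

A patient who weighted side effects more heavily would assign different utilities, and the same tree could then favour watchful waiting: the “right” answer depends on whose preferences enter the calculation.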

    Here lies the real challenge of evidence based practice. Randomised controlled trials may constitute the ideal of experimental design, but they alone cannot prove that the right intervention has been provided to the right patient at the right time and place. To show that a decision on drug treatment was evidence based, for example, it is not sufficient to cite a single randomised controlled trial (or meta-analysis of several similar trials) in which the drug was shown to be more effective than placebo. It must also be shown that the prescriber defined the ultimate objective of treatment (such as cure, prevention of later complications, palliation, or reassurance) and selected the most appropriate treatment using all available evidence. This decision requires consideration of whether a different drug, or no drug, would suit the patient better, and whether the so called “treatment of choice” is viewed as such by the patient.2

To seek, through scientific inquiry, an honest and objective assessment of how far we are practising evidence based medicine is an exercise on which few of us would dare to embark. But research studies designed to address this question via the methodology of traditional “process of care” audit3 4 5 6 inform the doctor of only a limited aspect of his or her efforts. In measuring what is most readily measurable, they reduce the multidimensional doctor-patient encounter to a bald dichotomy (“the management of this case was/was not evidence based”) and may thereby distort rather than summarise the doctor's overall performance.

    Measuring every dimension of care in a large consecutive series of cases would be impossible. It is surely time that we eschewed the inherent reductionism of audit by numbers and tried to capture more context in our reviews of clinical performance. Issues that are complex, multidimensional, and grounded in individual experience lend themselves to study by descriptive and qualitative methods.20 At the very least, future attempts to answer the question “how evidence based is my practice?” should include some measure of how competing clinical questions were prioritised for each case and how the evidence obtained was particularised to reflect the needs and choices of the individual patient.

    I thank Professor Ann-Louise Kinmonth and the members of the evidence-based-health mailbase on the Internet for valuable comments on earlier drafts of this manuscript.

    References
