A large health plan has asked you to help them develop a clinical practice guideline for colon cancer screening. The plan currently covers annual fecal occult blood testing (FOBT) and flexible sigmoidoscopy every 5 years as screening methods. Member rates for both types of screening are very low (hence the impetus for the guideline). At the behest of several local gastroenterologists, the plan is also considering whether to cover colonoscopy every 10 years for average-risk people ≥50 years of age. Part of the process involves reviewing the cost-effectiveness literature, because the chief executive officer of the health plan is skeptical that colonoscopy represents a wise use of their ever-tightening budget. You are familiar with US Preventive Services Task Force Guidelines for colon cancer screening. After reviewing several recent cost-effectiveness studies, however, you find them daunting in terms of methodological complexity, terminology, and representation of outputs (a confusing array of large numerical tables and graphs that strongly resemble something you studied in first-year college economics). Is it possible to translate these studies into something you understand, believe reflects sound clinical practice, and believe would be useful to the health plan for their decision making?
The situation just described, an apparent disconnect between costs and clinical practice guidelines and the confusion it causes for readers, is not unusual. Economic analyses are rarely included in guidelines, mostly because each discipline has different and often conflicting views of what constitutes “best practice”.1,2 This exclusion is unfortunate, because both guidelines and cost-effectiveness studies offer important information to help us practise more effective and efficient health care. Decision makers today need to synthesise and interpret these studies rapidly and efficiently. This editorial offers suggestions to help clinicians understand cost-effectiveness studies and use them in developing guidelines. Its approach is similar to the methods for defining and answering clinical queries used by those who practise evidence-based medicine: defining the clinical question, searching for evidence, and evaluating the quality of the evidence.
(1) What is the clinical problem that is the subject of the guideline?
In this case we focus on screening options for adults ≥50 years of age who have no symptoms of colorectal cancer and who are at average risk for the disease.
(2) What interventions are being considered? What is the default intervention, if any?
Guidelines are designed to reduce variation in care, with the goal of using treatments that have been shown to improve desirable outcomes or minimise undesirable ones. Likewise, cost-effectiveness studies most often focus on comparing new with existing care. In the case of colon cancer screening, the health plan already covers flexible sigmoidoscopy and FOBT, treatments with good evidence of efficacy. It is thus reasonable to consider the cost-effectiveness of colonoscopy compared with the following alternatives: no screening, flexible sigmoidoscopy alone, FOBT alone, and sigmoidoscopy plus FOBT.
(3) Search the literature for cost-effectiveness studies
Searching for high-quality economic articles can be laborious, and the outcomes are not always satisfactory,3 primarily because of overuse of the term cost-effectiveness in the literature. Searching for this term will retrieve many articles, but most are not formal economic analyses. To address this problem, I suggest starting with the medical subject heading (MeSH) for cost-effectiveness analysis, that is, “cost-benefit analysis” (when searching in Medline), combined with other terms that are specific to the clinical situation and technologies being compared (as outlined in 1 and 2 above). For colon cancer screening in PubMed (www.ncbi.nlm.nih.gov/entrez/query.fcgi), start with “cost-benefit analysis” as well as the content terms “screening, colonoscopy, sigmoidoscopy, and fecal occult blood.” Although this approach is not perfect, it improves the specificity of the search and substantially reduces the volume of articles compared with using the last 4 terms alone. In this case, we retrieve 24 articles. A review of the abstracts suggests that several are formal cost-effectiveness analyses of all 3 screening methods. For this exercise, we select 3 recent analyses.4–6 These will be a good start for the next phase of the evaluation.
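The search strategy above amounts to combining one MeSH heading with several free-text content terms. A minimal sketch of how such a query string could be assembled, using standard PubMed field tags ([MeSH Terms], [All Fields]); the helper function name is hypothetical, and the exact query typed into PubMed can of course be simpler:

```python
# Illustrative sketch: assembling a PubMed query that pairs the MeSH
# heading "cost-benefit analysis" with clinical content terms, as the
# search strategy in the text describes. [MeSH Terms] and [All Fields]
# are standard PubMed field tags.

def build_pubmed_query(mesh_term, content_terms):
    """Combine one MeSH heading (AND) with free-text terms (OR)."""
    mesh = f'"{mesh_term}"[MeSH Terms]'
    content = " OR ".join(f'"{t}"[All Fields]' for t in content_terms)
    return f"{mesh} AND ({content})"

query = build_pubmed_query(
    "cost-benefit analysis",
    ["screening", "colonoscopy", "sigmoidoscopy", "fecal occult blood"],
)
print(query)
```

The AND narrows the retrieval to formal economic analyses; the OR block keeps any of the clinical topics in scope.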
(4) What is the bottom line of each study for the interventions being considered?
The bottom line of most economic analyses is an incremental cost-effectiveness ratio—that is, the ratio of the difference in costs over the difference in outcomes for the interventions being compared. While starting with the conclusion does not help us determine the quality of the methods, it allows us to address the question: “If this were true, could it influence how I practise (or write or implement the guidelines)?” The economic analysis could be influential, for example, if it states that a little-used treatment is highly cost-effective, or conversely, that a widely advocated treatment has poor cost-effectiveness. If your answer to the question is “no,” then stop reading. If the answer is “yes,” we need to go on and review the methods to answer another question: “Is it likely to be true?”
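The ratio itself is simple arithmetic: the difference in costs divided by the difference in outcomes. A minimal sketch with purely invented per-person figures (none drawn from the studies discussed):

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER).
# All figures below are invented for illustration; none come from the
# studies discussed in the text.

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per unit of incremental effect (e.g. per life-year gained)."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect <= 0:
        # If the new strategy is no more effective, the ratio is not
        # meaningful; real analyses treat these cases (dominance) separately.
        raise ValueError("new strategy adds no effectiveness; ICER undefined")
    return delta_cost / delta_effect

# Hypothetical per-person costs ($) and effects (life-years gained):
ratio = icer(cost_new=1800.0, effect_new=0.12, cost_old=900.0, effect_old=0.09)
print(f"ICER = ${ratio:,.0f} per life-year gained")  # ICER = $30,000 per life-year gained
```

The resulting figure is then compared against a willingness-to-pay benchmark such as the $50 000/life-year threshold mentioned below.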
In this case, the 3 studies consider the relative values of colonoscopy, flexible sigmoidoscopy, and FOBT as screening tools for people ≥50 years of age at average risk for colorectal cancer.4–6 Comparisons worth considering include the value of each compared with no screening and the incremental value of colonoscopy compared with either flexible sigmoidoscopy or FOBT. Because colonoscopy is more costly than the other approaches (at least initially), it is important to ask what one gets for this additional expenditure. Table 1 gives the results.
These results tell us 2 things. Firstly, all 3 studies conclude that compared with no screening, all strategies are reasonably cost-effective when using a (rather arbitrary, but widely cited) threshold of $50 000/life-year gained. Secondly, the articles reach very different conclusions regarding the incremental value of colonoscopy compared with FOBT and flexible sigmoidoscopy. Sonnenberg et al4 and Vijan et al6 state that compared with FOBT and flexible sigmoidoscopy alone, colonoscopy provides good value; the remaining study5 finds that colonoscopy is only marginally cost-effective.
(5) What factors “drive” the outcome?
Cost-effectiveness analyses are by nature synthetic, usually integrating data from multiple sources to derive the result. Reviewing each source (there can be dozens) for quality and accuracy is impractical for most decision makers. To help with this issue, ask: “What factors drive the outcome, that is, would have the greatest influence on the bottom line of the study if they were changed from their baseline value?” This issue, formally termed sensitivity analysis, is a logical next step for evaluating the quality of a cost-effectiveness study. As a rule of thumb, 2–4 factors usually have the greatest effect on the outcome, especially the cost and efficacy of the intervention. If the cost-effectiveness ratio changes from favourable to unfavourable (or vice versa) at plausible values for influential factors, this reduces confidence in the conclusions.a
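The one-way approach described above can be sketched concretely: vary a single influential input across a plausible range while holding everything else at baseline, and watch for the ICER crossing the decision threshold. All numbers here are invented for illustration:

```python
# One-way sensitivity analysis sketch: vary one influential input (here,
# the cost of the new procedure) across a plausible range, hold all other
# inputs at their baseline values, and check whether the cost-effectiveness
# ratio crosses the decision threshold. All numbers are illustrative.

def icer(cost_new, effect_new, cost_old, effect_old):
    return (cost_new - cost_old) / (effect_new - effect_old)

THRESHOLD = 50_000  # $/life-year gained, the widely cited benchmark
baseline = dict(effect_new=0.12, cost_old=900.0, effect_old=0.09)

for procedure_cost in (1200.0, 1800.0, 2700.0, 3300.0):
    ratio = icer(cost_new=procedure_cost, **baseline)
    verdict = "favourable" if ratio < THRESHOLD else "unfavourable"
    print(f"cost ${procedure_cost:>7,.0f} -> ICER ${ratio:>7,.0f}/life-year ({verdict})")
# In this toy example the verdict flips between a procedure cost of
# $1,800 and $2,700; that crossing point is what a reader should look
# for in a published sensitivity analysis.
```

If the crossing point lies well outside the plausible range of the input, the conclusion is robust to that input; if it lies inside, confidence in the bottom line should drop.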
Most cost-effectiveness studies have a sensitivity analysis section that addresses this issue. Unfortunately, it is difficult to identify the most influential factors for the 3 screening methods in the sensitivity analysis sections of the 3 studies. The 2 mentioned consistently are procedure cost and compliance. These are useful because they are seldom specifically evaluated in clinical trials and are likely to vary from setting to setting. Table 2 summarises the sensitivity analysis for these variables.
It seems that varying procedure cost or compliance across a probable range is unlikely to have an extremely adverse effect on the cost-effectiveness of the 3 strategies.
(6) How valid are the most influential variables at their default values?
If the bottom line of the study is highly dependent on specific factors, then these should be scrutinised closely. It is particularly important that influential data are based on the highest quality trials available, preferably randomised controlled trials (RCTs). Of note, the ability of colonoscopy to prevent colorectal cancer or mortality has not been measured in a randomised screening trial. Two of the articles base their results on 1 case-control study7 suggesting that screening colonoscopy reduces the risk of dying of colon cancer. None of these articles would pass the screening criteria for cost-effectiveness studies used by Evidence-Based Medicine, which require ≥1 of the studies included in the analysis of effectiveness to be an RCT. One could easily justify ignoring these articles at this point if the purpose was simply to keep up to date. However, our purpose is to address a policy problem (about screening), and for this the lack of strong evidence should be considered in reaching a decision about colonoscopy. The effectiveness of FOBT, on the other hand, is supported by 3 large RCTs showing benefit in reducing colorectal cancer-related mortality.8–10 Flexible sigmoidoscopy is intermediate in effectiveness, with 1 small RCT and 2 case-control studies showing benefit.11–13
(7) Interpretation: how should the economic studies influence the guidelines?
If we are reasonably confident that the economic studies are valid and cover the major clinical choices facing the organisation, we next need to consider what part (if any) they should play in shaping the guideline. Some considerations extend beyond the scope of the studies, such as implementation costs and the availability of clinicians trained in colonoscopy. Other issues, such as the first-year budget effect of adopting a particular policy, may or may not be available from the economic studies (in this case they are not). All should be considered (and weighted) in conjunction with the economic and clinical data.
The economic studies suggest that given the assumptions about efficacy, compliance, and cost, all methods—including colonoscopy—are relatively cost-effective for colorectal cancer screening compared with no screening. From an evidence-based medicine perspective, the weak supporting clinical evidence of the efficacy of colonoscopy is problematic. The clinical and economic outcomes of FOBT are more certain because efficacy rates are drawn from RCTs, and sensitivity analyses show that the outcomes are probably maintained across a reasonable range of costs and compliance. The same applies to a lesser extent for flexible sigmoidoscopy.
How much “worse” would colonoscopy be from a cost-effectiveness standpoint if the efficacy was not as assumed? Unfortunately, this is not addressed in the economic studies. Together, the studies suggest that the incremental gain of switching from FOBT or flexible sigmoidoscopy to colonoscopy is highly uncertain, ranging from “dominant” to very cost-ineffective. As with clinical studies, such a degree of uncertainty should make one pause before adopting a new standard of care for an organisation. Thus, the review of the cost-effectiveness and clinical evidence does not support adopting colonoscopy as a covered screening procedure at this time. At this point it is reasonable to note that “absence of proof is not proof of absence.” It could well be that colonoscopy is more effective than FOBT or flexible sigmoidoscopy, even taking into account increased morbidity. However, the onus of proof rests with those who believe this is the case.
Meanwhile, the evidence and cost-effectiveness analyses support screening with FOBT (or flexible sigmoidoscopy when affordable) and justify efforts to increase utilisation of this programme. As it happens, success with such a programme would also increase the use of colonoscopy to evaluate “positive” findings by FOBT.
Economic analyses have much to offer in the creation of clinical practice guidelines. They often “put the pieces together” between treatment and outcome and give a sense of health value for money, the latter being an unavoidable fact of life in today’s environment. Economic analyses should not dictate clinical practice guideline policies, but they can inform them: by identifying the available evidence, by flagging cost-inefficient strategies, and by making the trade-offs between benefit and expenditure explicit for decision makers.
a One-way sensitivity analyses generally understate the uncertainty in the cost-effectiveness ratio. One should check to see if the article presents confidence intervals around the cost-effectiveness ratio. New techniques that can estimate such confidence intervals are being used more frequently today.
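One widely used resampling technique of this kind is the nonparametric bootstrap: resample patient-level (cost, effect) pairs with replacement many times, recompute the ratio each time, and read off percentiles. A minimal sketch on simulated data; the trial arms, distributions, and all numbers are invented purely for illustration:

```python
# Sketch of a percentile bootstrap confidence interval for a
# cost-effectiveness ratio. Patient-level (cost, effect) data are
# simulated here; a real analysis would use trial data.
import random

random.seed(1)

def simulate_arm(n, mean_cost, sd_cost, mean_eff, sd_eff):
    """Invent n patients as (cost, effect) pairs from normal distributions."""
    return [(random.gauss(mean_cost, sd_cost), random.gauss(mean_eff, sd_eff))
            for _ in range(n)]

new = simulate_arm(500, 1800.0, 400.0, 0.12, 0.05)  # hypothetical new strategy
old = simulate_arm(500, 900.0, 300.0, 0.09, 0.05)   # hypothetical comparator

def icer(arm_a, arm_b):
    dc = sum(c for c, _ in arm_a) / len(arm_a) - sum(c for c, _ in arm_b) / len(arm_b)
    de = sum(e for _, e in arm_a) / len(arm_a) - sum(e for _, e in arm_b) / len(arm_b)
    return dc / de

# Resample each arm with replacement and recompute the ratio 2000 times.
boot = []
for _ in range(2000):
    a = [random.choice(new) for _ in range(len(new))]
    b = [random.choice(old) for _ in range(len(old))]
    boot.append(icer(a, b))
boot.sort()
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"95% bootstrap CI for the ICER: ${lo:,.0f} to ${hi:,.0f} per life-year")
```

A wide interval, or one straddling the decision threshold, conveys the same caution as an unfavourable one-way sensitivity analysis; note that the simple percentile approach misbehaves when the effect difference can plausibly cross zero, which real analyses handle with methods such as net-benefit regression.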