Challenges of Summarizing Better Information for Better Health: The Evidence-based Practice Center Experience

Challenges in Systematic Reviews of Economic Analyses

Michael Pignone, MD, MPH; Somnath Saha, MD, MPH; Tom Hoerger, PhD; Kathleen N. Lohr, PhD; Steven Teutsch, MD, MPH; and Jeanne Mandelblatt, MD, MPH

From University of North Carolina, Chapel Hill, North Carolina; Portland Veterans Affairs Medical Center and Oregon Health & Science University, Portland, Oregon; RTI International and the RTI–University of North Carolina Evidence-based Practice Center, Research Triangle Park, North Carolina; Merck & Co., Inc., West Point, Pennsylvania; and Georgetown University Medical Center and Lombardi Comprehensive Cancer Center, Washington, DC.


Disclaimer: The authors are responsible for the contents of this article, including any clinical or treatment recommendations. No statement in this article should be construed as an official position from the U.S. Agency for Healthcare Research and Quality, the U.S. Department of Health and Human Services, or Merck & Co., Inc.

Acknowledgments: The authors thank Loraine Monroe of RTI International for dedicated document preparation efforts.

Grant Support: By contract 290-02-0016 from the Agency for Healthcare Research and Quality to RTI International (Drs. Pignone, Hoerger, and Lohr); by contract 290-97-0018 from the Agency for Healthcare Research and Quality to Oregon Health Sciences University (Dr. Saha); and by grant numbers U01CA88283A and K05CA96940 from the National Cancer Institute (Dr. Mandelblatt). Dr. Saha was also supported by a Research Career Development award from the Health Services Research & Development Service of the Department of Veterans Affairs and a Generalist Physician Faculty Scholar award from the Robert Wood Johnson Foundation.

Potential Financial Conflicts of Interest: Authors of this paper have received funding for Evidence-based Practice Center reports.

Requests for Single Reprints: Michael P. Pignone, MD, MPH, Division of General Internal Medicine, Department of Medicine, University of North Carolina, Chapel Hill, 5039 Old Clinic Building, UNC Hospital, Chapel Hill, NC 27599-7110; e-mail, Pignone@med.unc.edu.

Current Author Addresses: Dr. Pignone: Cecil Sheps Center for Health Services Research, Department of Medicine, University of North Carolina, 5039 Old Clinic Building UNC Hospital, Chapel Hill, NC 27599-7110.

Dr. Saha: Portland Veterans Affairs Medical Center (P3MED), 3710 SW U.S. Veterans Hospital Road, Portland, OR 97239.

Drs. Hoerger and Lohr: RTI International and the RTI—University of North Carolina Evidence-based Practice Center, RTI International, 3040 Cornwallis Road, Research Triangle Park, North Carolina 27709.

Dr. Teutsch: Outcomes Research and Management, PO Box 4, WP39-130, Merck & Co., Inc., West Point, PA 19486-0004.

Dr. Mandelblatt: Georgetown University Medical Center and Cancer Control Program, Lombardi Comprehensive Cancer Center, Cancer Control Program, 2233 Wisconsin Avenue, Suite 317, Washington, DC 20007.


Ann Intern Med. 2005;142(12_Part_2):1073-1079. doi:10.7326/0003-4819-142-12_Part_2-200506211-00007

Economic analyses can provide valuable information for health care decision makers. Systematic reviews of economic analyses can integrate information from multiple studies and provide important insights by systematically examining how differences between models lead to different results. We use our experience in developing and implementing systematic reviews of economic analyses for the U.S. Preventive Services Task Force, particularly our systematic review of the cost-effectiveness of colorectal cancer screening, to illustrate key methodologic challenges and suggest a framework for other researchers in this area.

Economic analyses, including cost-effectiveness analyses, cost–utility analyses, cost–benefit analyses, and cost-minimization studies, can provide valuable information for health care decision makers (see Glossary). Economic analyses can be performed within the context of a clinical trial or other study, or they can be based on a model, or a combination of the two. Economic analyses that incorporate models allow investigators to explore multiple questions (Table 1). Most important, models integrate information about effectiveness and costs and thus can provide a measure of health care value. As more economic analyses are produced, researchers and policymakers need to have methods to synthesize and interpret the results of multiple analyses that address a single issue; systematic review offers a framework for doing this.

Table 1. Potential Aims of Systematic Reviews of Economic Analyses

Systematic reviews of economic analyses pose special challenges for those who perform reviews and those who use them. Traditional techniques of meta-analysis are not appropriate for economic analyses, which are themselves syntheses, and hence should not be combined as one might combine the results of different randomized trials. Instead, systematic reviews of economic modeling studies are most useful for comparing and contrasting how different investigators have chosen to structure their models and estimate key variables. They can also clarify how results differ between studies based on these different assumptions. Identifying sources of variation across studies can help individual decision makers determine which studies best apply to their particular settings and can also guide future research by identifying important areas of uncertainty. Systematic assessment of study quality can help reviewers interpret individual study results (1, 2).

We examine key methodologic challenges in performing systematic reviews of economic analyses, based on our experiences and those of other experts in the field (1, 3). To illustrate our general principles, we also describe our experience in performing a systematic review of colorectal cancer screening for the Agency for Healthcare Research and Quality (AHRQ) and the U.S. Preventive Services Task Force (USPSTF) (4, 5).

Literature searches to identify economic analyses should generally begin with the British National Health Service Economic Evaluation Database (NHS EED) (available at http://www.york.ac.uk/inst/crd/nhsdhp.htm), which systematically identifies and catalogs health-related economic analyses. The NHS EED uses a rigorous set of search terms in 4 main databases: Current Contents–Clinical Medicine (1994 onward), MEDLINE (1995 onward), CINAHL (1995 onward), and EMBASE (2002 onward). In addition, they hand-search key journals and incorporate technology assessments and working papers from a range of research institutes to assure coverage. Their search strategies are available at http://www.york.ac.uk/inst/crd/nfaq2.htm. When using the NHS EED, reviewers should combine topic search terms with the “economic evaluations” tab to identify relevant articles. Assistance from a research librarian will help assure good search-term coverage.

To identify other potentially eligible studies, including older analyses, reviewers can apply the full search terms recommended by the NHS EED to studies published before 1995, or they can use an abbreviated search strategy. We have found that using the Medical Subject Headings terms costs and cost analysis and cost-benefit analysis works well for identifying relevant studies in prevention and screening. Another useful resource is the CEA registry at the Harvard School of Public Health (available at http://www.hsph.harvard.edu/cearegistry). In addition, one may wish to query experts in the field and hand-search relevant article bibliographies.
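To make the search step concrete, the following is a minimal sketch, in Python, of assembling the kind of query described above. The topic terms, field tags, and helper function are illustrative assumptions rather than a validated search strategy; a research librarian should still verify term coverage.

```python
# A minimal sketch of assembling a MEDLINE/PubMed query that combines the
# MeSH terms named above with topic-specific terms. The topic terms and the
# final query syntax here are illustrative assumptions, not a validated
# search strategy.

ECONOMIC_MESH = ['"costs and cost analysis"[MeSH]', '"cost-benefit analysis"[MeSH]']
TOPIC_TERMS = ['"colorectal neoplasms"[MeSH]', 'colorectal cancer screening[tiab]']

def build_query(topic_terms, economic_terms=ECONOMIC_MESH):
    """Join terms with OR within each concept and AND between concepts."""
    topic = " OR ".join(topic_terms)
    economic = " OR ".join(economic_terms)
    return f"({topic}) AND ({economic})"

print(build_query(TOPIC_TERMS))
# ("colorectal neoplasms"[MeSH] OR colorectal cancer screening[tiab]) AND
# ("costs and cost analysis"[MeSH] OR "cost-benefit analysis"[MeSH])
```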

Effective selection of economic analyses for review begins with careful definition of key questions; a methodical approach to the research topic helps frame questions in ways that a systematic review can truly address. For the USPSTF and its partner Evidence-based Practice Centers (EPCs), the main step in this process is development of an analytic framework and related key questions (6). In this process, the USPSTF and the EPC specify the health or intermediate outcomes of interest, the types of comparisons to be made, and the populations to be included.

Systematic reviews of economic analyses can inform a variety of clinical or policy questions (Table 1). Once reviewers or decision makers specify the research questions, reviewers should develop inclusion (eligibility) criteria a priori so that they can identify studies most likely to produce unbiased, generalizable, and useful information for the issues at hand. In practical terms, determining how stringent to be in setting eligibility criteria is often difficult until the number of potential studies has been determined. Regardless of the number of studies available, analysts must apply some minimum quality criteria (1, 2). In addition, reviewers may set criteria relating to the analytic perspective taken (for example, societal, payer, health care system, or patient).

Outcomes are a significant element of eligibility criteria. Cost-effectiveness analyses and cost–utility analyses that measure generic outcomes such as cost per life-year or quality-adjusted life-year (QALY) gained, respectively, are generally more valuable than studies that measure cost per event prevented because the former 2 types yield metrics such as QALYs that can be compared across different clinical conditions. Studies examining disease-specific outcomes may be appropriate if decision makers are interested in comparisons relevant to a single intervention or condition. This might include determining the optimal frequency for delivering a particular screening intervention (for example, mammography or cervical cancer screening) or examining the best way to address a certain variable, such as adherence to clinical advice about screening (for example, for colorectal cancer).

For any given topic, usually only a few economic analyses are relevant, and the number of high-quality studies is usually even smaller. Thus, reviewers must set eligibility criteria to exclude low-quality analyses while still retaining enough studies, if possible, to evaluate important differences in model structure and in inputs for key variables. This best-available-evidence approach typically requires an iterative process, in which analysts may set final selection criteria only after an initial assessment of the quantity and quality of the extant literature (7).
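As a concrete illustration of applying eligibility criteria, the sketch below filters hypothetical study records on outcome metric, analytic perspective, and a minimum quality score. The records, field names, and thresholds are invented for illustration; in practice they would come from the key questions and a formal quality checklist.

```python
# A hedged sketch of applying a priori eligibility criteria to candidate
# studies. All study records and thresholds are hypothetical.

candidate_studies = [
    {"id": "A", "outcome_metric": "cost per QALY", "perspective": "societal", "quality_score": 0.8},
    {"id": "B", "outcome_metric": "cost per cancer detected", "perspective": "payer", "quality_score": 0.9},
    {"id": "C", "outcome_metric": "cost per life-year", "perspective": "societal", "quality_score": 0.4},
]

GENERIC_METRICS = {"cost per QALY", "cost per life-year"}

def eligible(study, min_quality=0.5, perspectives=("societal",)):
    """Keep studies reporting generic outcomes, from an accepted perspective,
    at or above a minimum quality threshold."""
    return (study["outcome_metric"] in GENERIC_METRICS
            and study["perspective"] in perspectives
            and study["quality_score"] >= min_quality)

included = [s["id"] for s in candidate_studies if eligible(s)]
print(included)  # ['A']
```

If too few studies survive such criteria, the thresholds can be relaxed in the iterative, best-available-evidence fashion described above.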

Critical appraisal begins with systematically identifying, cataloging, and analyzing key features of the included studies (Table 2). Several checklists for appraising study reporting and quality have been developed (1, 2, 8); however, a recent systematic review found that economic analyses published in the past 10 years frequently have important methodologic flaws (9). In the following paragraphs, we describe key areas for assessment and appraisal in greater depth, including model type and structure, analytic perspective, time horizon, and ways to address uncertainty.

Table 2. Common Data Elements To Be Identified from Individual Studies
Model Type and Structure

Investigators examining the same clinical question may structure their models differently. For example, one investigator may use a Markov analysis, whereas another may use microsimulation or discrete-events simulation. Even within a particular model type, differences in how the model is structured and how information on natural history is used can have important effects on results. How much such differences affect a model's results, apart from differences in input variables, is not clear.
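To make the distinction concrete, the following is a minimal sketch of a hypothetical three-state Markov cohort model; the states and transition probabilities are invented for illustration and do not represent any of the published models.

```python
# A minimal, hypothetical three-state Markov cohort model illustrating how a
# cohort is moved deterministically between states each annual cycle.
# All transition probabilities are invented for illustration only.

STATES = ["well", "cancer", "dead"]
# Annual transition probabilities: outer key is the "from" state.
TRANSITIONS = {
    "well":   {"well": 0.98, "cancer": 0.01, "dead": 0.01},
    "cancer": {"well": 0.00, "cancer": 0.85, "dead": 0.15},
    "dead":   {"well": 0.00, "cancer": 0.00, "dead": 1.00},
}

def run_cohort(cycles, start=None):
    """Propagate the cohort distribution through the model one cycle at a time."""
    dist = dict(start or {"well": 1.0, "cancer": 0.0, "dead": 0.0})
    for _ in range(cycles):
        nxt = {s: 0.0 for s in STATES}
        for src, p in dist.items():
            for dst, tp in TRANSITIONS[src].items():
                nxt[dst] += p * tp
        dist = nxt
    return dist

print(run_cohort(30))  # cohort distribution after 30 annual cycles
```

A microsimulation or discrete-events simulation would instead draw each individual's transitions (and times between events) at random, so two structurally different models can produce different results even when fed identical inputs.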

Analytic Perspective

Studies may differ in the perspective they use. Those using the societal perspective may produce results different from those done from the payer or the patient perspective because they include different costs and effects. Many experts prefer the societal perspective because it is the broadest and includes the greatest range of costs and effects. Even studies done from the societal perspective, however, commonly fail to address certain costs, such as those associated with patient time requirements or changes in economic productivity.
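The sketch below illustrates, with invented cost components and dollar values, how the choice of perspective changes which costs are counted; the assignment of components to each perspective is itself an assumption of the sketch.

```python
# A hypothetical illustration of how analytic perspective changes which costs
# are counted. The cost components and dollar values are invented.

cost_components = {
    "test_and_treatment": 900.0,   # paid by the health care payer
    "patient_time": 120.0,         # patient time and travel
    "productivity_loss": 200.0,    # lost economic productivity
}

def total_cost(perspective):
    """Societal perspective counts all components; the payer perspective here
    counts only test and treatment costs (an assumption for this sketch)."""
    if perspective == "societal":
        return sum(cost_components.values())
    if perspective == "payer":
        return cost_components["test_and_treatment"]
    raise ValueError(perspective)

print(total_cost("payer"), total_cost("societal"))  # 900.0 1220.0
```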

Time Horizon

Different models may use different time periods over which benefits, adverse effects, and costs of a given intervention accrue. Differences in time horizon can lead to important differences in outcomes, particularly when the benefits and adverse effects occur at different points in time. For example, carotid endarterectomy for stroke prevention leads to a short-term increase in adverse events from perioperative complications, but it reduces later events (10).
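The following sketch illustrates this point with an invented yearly profile of incremental QALYs (an early loss followed by small later gains, loosely patterned on the endarterectomy example) and a 3% discount rate; none of the numbers come from the cited study.

```python
# A hypothetical sketch of how the time horizon can change conclusions when
# harms occur early and benefits late. Yearly values and the 3% discount
# rate are invented for illustration.

def discounted_net_benefit(yearly_incremental_qalys, horizon, rate=0.03):
    """Sum discounted incremental QALYs over the chosen time horizon."""
    return sum(q / (1 + rate) ** t
               for t, q in enumerate(yearly_incremental_qalys[:horizon]))

# Year 0: perioperative harm; later years: small annual benefit from events averted.
profile = [-0.10] + [0.02] * 19

print(round(discounted_net_benefit(profile, horizon=2), 3))   # negative: harm dominates
print(round(discounted_net_benefit(profile, horizon=20), 3))  # positive: benefits accrue
```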

Ways To Address Uncertainty

Another important shortcoming of published economic analyses is the limited degree to which many have examined and represented uncertainty. Uncertainty should be considered at a minimum of 2 levels: uncertainty with respect to the values used for input variables, and uncertainty with respect to the effect of combining multiple estimated variables. Many studies address the first type of uncertainty with sensitivity analysis of a limited number of variables over an often limited range of values; in some cases, the model structure might permit multiway sensitivity analyses. Few studies, however, consider the higher-level uncertainty that comes from the combined effects of the different variables, including best- and worst-case scenarios. For example, few economic analyses report confidence or credible intervals around their main results; omitting them may engender a false sense of precision. One useful example of how to consider uncertainty effectively comes from Briggs and colleagues, who used Bayesian probabilistic sensitivity analysis to examine different treatments to heal and prevent recurrence of reflux esophagitis (11). Given the state of the literature, reviewers must be careful to point out that comparisons among the results of different analyses should be interpreted with caution, as differences in the point estimates of cost-effectiveness do not account for the uncertainty around those estimates.
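As a deliberately simplified illustration of this higher-level uncertainty, the Monte Carlo sketch below draws uncertain incremental costs and QALYs from invented distributions and summarizes the resulting spread of cost-effectiveness ratios; it is not a reproduction of the Briggs and colleagues analysis.

```python
# A minimal probabilistic sensitivity analysis sketch. Input distributions
# and values are invented for illustration only.
import random
import statistics

random.seed(0)

def draw_icer():
    """Draw uncertain inputs and return one simulated incremental
    cost-effectiveness ratio (cost per QALY gained)."""
    incremental_cost = random.gauss(mu=1500.0, sigma=300.0)   # dollars
    incremental_qalys = random.gauss(mu=0.10, sigma=0.02)     # QALYs gained
    return incremental_cost / incremental_qalys

samples = sorted(draw_icer() for _ in range(10_000))
lower, upper = samples[249], samples[9749]   # approximate 95% interval
print(f"median ICER ~ ${statistics.median(samples):,.0f} per QALY")
print(f"95% interval ~ ${lower:,.0f} to ${upper:,.0f} per QALY")
```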

Determining the best way to report the results of a systematic review of economic analyses can be challenging, as the amount of data synthesized and the number of results generated are often large. When reporting results, one should present, for each analysis, the incremental cost-effectiveness ratios (for example, the cost per life-year or QALY gained for each nondominated intervention under consideration compared with the next most effective intervention) for all key strategies being tested, updated to current-year dollars (5, 12, 13). When such information is not available, analysts may be able to calculate such values if the original authors have tabulated costs and outcomes for each key option. In addition to these data, reviewers should present a table of input variables from each study so that readers can see how differences in each variable affect the cost-effectiveness ratio.
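The sketch below shows one way to compute such incremental ratios for a set of hypothetical strategies: order by effectiveness, drop strongly dominated options, and compare each remaining option with the next most effective one. The names, costs, and life-years are invented, the inflation adjustment to current-year dollars is omitted, and extended (weak) dominance would require an additional pass comparing successive ratios.

```python
# A hedged sketch of computing incremental cost-effectiveness ratios.
# Strategy names, costs, and life-years are invented for illustration.

strategies = [
    {"name": "no screening", "cost": 0.0,    "life_years": 0.000},
    {"name": "strategy A",   "cost": 1200.0, "life_years": 0.050},
    {"name": "strategy B",   "cost": 1100.0, "life_years": 0.060},  # dominates A
    {"name": "strategy C",   "cost": 2500.0, "life_years": 0.075},
]

def incremental_ratios(options):
    """Return (name, ICER vs. next most effective nondominated option)."""
    ordered = sorted(options, key=lambda s: s["life_years"])
    frontier = []
    for s in ordered:
        # Drop previously kept options that cost more but are less effective.
        while frontier and frontier[-1]["cost"] >= s["cost"]:
            frontier.pop()
        frontier.append(s)
    results = []
    for prev, cur in zip(frontier, frontier[1:]):
        icer = (cur["cost"] - prev["cost"]) / (cur["life_years"] - prev["life_years"])
        results.append((cur["name"], round(icer)))
    return results

print(incremental_ratios(strategies))
# [('strategy B', 18333), ('strategy C', 93333)]
```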

Our experience in developing a systematic review of cost-effectiveness analyses on colorectal cancer (CRC) screening illustrates how a systematic review of economic analyses can help inform preventive care decisions (5). Such work can also stimulate further cooperative research among model builders to better identify and explain sources of variation between models.

Colorectal cancer screening is a good example for several reasons. It is an important preventive health issue for U.S. adults. Several potential screening tests are available, each with different advantages and disadvantages. These tests have never been compared head-to-head in clinical trials, but the evidence is sufficient to allow reasonable estimation of their efficacy. In addition, several research groups have developed cost-effectiveness models for CRC screening, allowing for fruitful comparisons (14-20). We sought to determine: What is the cost-effectiveness of CRC screening compared with no screening, and is one method of screening preferred over the others with respect to effectiveness or cost-effectiveness?

Our systematic review of economic analyses on CRC screening found that all commonly recommended screening tests were effective and cost-effective compared with no screening (Table 3). Five studies compared multiple methods of screening. The results were less clear for the question of whether one method of screening should be preferred because it is either more effective or more cost-effective: the different models were not consistent with respect to the most effective or most cost-effective method of screening.

Table 3. Illustrative Method of Displaying Study Results: Incremental Cost-Effectiveness Ratios in 2000 U.S. Dollars

Differences in results appeared to arise from differences in input variables among models and differences in model structure and assumptions. For example, the studies by Frazier and colleagues (15) and Vijan and colleagues (20) used estimates of the cost of colonoscopy ($1012 and $550, respectively) that differed by almost 100%. This difference was associated with a similar magnitude of difference in the cost-effectiveness ratio for colonoscopy screening (5).

In some cases, the differences in the variables used reflected true uncertainty about some inputs. In others, the differences represented somewhat arbitrary or context-specific choices. Specifically, important factors included information about the natural history of CRC (for example, what proportion of polyps is assumed to progress to cancer and over what period), differences in cost inputs (for example, which value to use for the cost of colonoscopy), differences in how complications were modeled, and variation in how adherence was addressed, if at all. Most of the models did not estimate the degree of uncertainty around their estimates of cost-effectiveness.

We attempted to use differences in the published input variables to understand differences in the results of these various analyses. Unfortunately, as noted for critical appraisal, the multiple differences between studies and the limited data available in the published reports on the analyses made it difficult to determine with confidence the sources of variation between studies. To overcome these problems, we called for collaborative work among modelers and researchers to standardize inputs, perform additional research, and re-run existing models to answer questions about variation in results (5).

In response to our call for collaboration, the National Cancer Institute (NCI), in partnership with the National Academy of Sciences, convened a workshop of modelers (21). Each modeling group was asked to perform a series of standardized analyses to assess indirectly the magnitude of the effect of differences in input variables among models.

All modelers were asked to use a standard set of basic assumptions, including a 3% discount rate, 2003 U.S. dollars, and the assumption that screening begins at age 50 years and continues until age 80 years. In addition to these basic assumptions, modelers were given assumptions to be used in various combinations for the following factors: test and treatment costs, test accuracy, complication (colon perforation) rates, surveillance patterns, and adherence (set at 100% for standardized analyses).

In the interest of feasibility, the workshop organizers chose 10 scenarios for consideration. The first scenario used only the basic assumptions; the remaining variables were those of the original models. A second scenario used all the standardized inputs. The other eight scenarios varied which variables were or were not standardized.
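A hedged sketch of how such scenarios might be specified is shown below; the input groups follow those named above, but the particular combinations are illustrative and do not reproduce the workshop's actual 10 scenarios.

```python
# A hedged sketch of specifying standardization scenarios: each scenario
# names which input groups every model should replace with common values.
# The combinations below are illustrative, not the workshop's actual set.

INPUT_GROUPS = ["costs", "test_accuracy", "complications", "surveillance", "adherence"]

scenarios = [
    {"name": "basic assumptions only", "standardized": []},
    {"name": "fully standardized", "standardized": list(INPUT_GROUPS)},
    # Partially standardized scenarios isolate one input group at a time.
] + [
    {"name": f"standardize {g} only", "standardized": [g]} for g in INPUT_GROUPS
]

for sc in scenarios:
    kept_original = [g for g in INPUT_GROUPS if g not in sc["standardized"]]
    print(f"{sc['name']}: standardize {sc['standardized'] or 'none'}; "
          f"models keep own values for {kept_original or 'none'}")
```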

The modeling exercise yielded several useful results. In summary, the models, in their nonstandardized forms, differed with respect to both the costs and the effectiveness of the specific screening strategies considered. Full standardization removed many, but not all, of the differences in costs. In terms of effectiveness (life-years saved), substantial differences remained after full standardization, suggesting that factors for which we did not adjust, such as model structure or natural history assumptions, may account for much of the variation observed. Additional analyses suggested that test costs were responsible for much of the variation in total costs between models. The preferred test at generally accepted thresholds of cost per life-year saved varied widely before standardization but much less so afterward.
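As an illustration of the threshold comparison described here, the short sketch below picks the most effective nondominated strategy whose incremental ratio falls at or below a given cost-per-life-year threshold, reusing the invented ratios from the earlier reporting sketch; the baseline comparator and threshold values are assumptions.

```python
# A minimal sketch of choosing a "preferred" strategy at a given
# willingness-to-pay threshold. Ratios are the invented values from the
# earlier incremental-ratio sketch.

def preferred_strategy(ratios, threshold):
    """ratios: list of (name, ICER) ordered by increasing effectiveness."""
    choice = "no screening"  # baseline comparator (an assumption of this sketch)
    for name, icer in ratios:
        if icer <= threshold:
            choice = name
    return choice

ratios = [("strategy B", 18333), ("strategy C", 93333)]
print(preferred_strategy(ratios, threshold=50_000))   # strategy B
print(preferred_strategy(ratios, threshold=100_000))  # strategy C
```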

We concluded from this exercise that attempts to standardize cost inputs could allow greater focus on differences that arise from variables with true uncertainty (and hence a need for more research), such as the effectiveness of a screening test or assumptions about natural history, and from underlying differences in the model structure. In addition, we suggested that modelers develop means by which to test more sophisticated ways of representing the components of adherence to CRC screening, which are only crudely represented in several models. Overall, the workshop offered a framework for how to approach other health economic issues for which multiple models exist: perform an initial systematic review to identify potential sources of variation and follow this with collaborative work among modelers to understand these sources of variation and help set priorities for future primary research. The NCI's Cancer Intervention and Surveillance Modeling Network (CISNET)—a consortium of NCI-sponsored investigators who use modeling to improve understanding of the impact of cancer control interventions—illustrates how this can be done (available at http://cisnet.cancer.gov).

Economic analyses, particularly those that are based on models, can provide important information for health care decision makers. They have particular appeal for decision making about preventive care, for which direct studies often require very large sample sizes and take many years to complete.

Economic modeling studies are complex, however, and their interpretation and evaluation can be difficult. These difficulties are greater when analysts have used several different approaches to address the same question, often generating different results. A systematic review of multiple well-performed analyses can improve their interpretation and evaluation because it offers a rigorous approach for examining the reasons for different results among studies.

We have described our approach to identification, appraisal, and presentation of systematic reviews of economic analyses. We illustrated this approach using a case example of CRC screening. Our example illustrates the promise, difficulties, and current limitations of the use of economic analyses by health care decision makers. We, and others, have demonstrated that, at least for some well-studied topics, investigators can identify relevant articles, extract key information, report results, and consider reasons for observed differences between studies (3, 5, 9). In short, systematic reviews of economic analyses can provide decision makers with important information for policy decisions, but systematic reviews published to date have not used consistent methods for identifying or evaluating economic analyses (9).

To improve the quality of future systematic reviews of economic analyses, we offer several recommendations (Table 4). First, modelers should follow consensus recommendations, such as those from the Panel on Cost-Effectiveness in Health and Medicine, for performing and reporting economic analyses when developing models and reporting results (22). In particular, they should include appropriate expressions of uncertainty. Second, model developers and policymakers should engage in greater ongoing dialogue (through means such as CISNET). Such discussions could facilitate greater use and better understanding of model results and improve existing models. Third, reviewers should read and use, when possible, guidelines for systematic reviews of economic analyses (5, 9, 23-26). Fourth, journal editors and research funding agencies should develop better mechanisms for presenting modeling results, including greater use of electronic publishing. Ideally, results of different analyses would be presented together to allow comparisons (as in the CISNET effort). Reviewers or users could then access a version of each model that would allow them to explore the effects of different variable values without altering the underlying model structure.

Table 4. Recommendations for Improving Systematic Reviews of Economic Analyses

In conclusion, we underscore the importance of health care decision making and of taking value and economic outcomes into account when considering clinical or policy questions. Systematic reviews of economic analyses integrate information from multiple studies and offer the opportunity to provide additional insight by examining how differences in individual models influence costs, outcomes, or cost-effectiveness. Information from systematic reviews can then be used to further refine individual models, as described in the colorectal cancer screening example, thus offering another means of advancing our understanding of clinical and policy questions. To be most effective, however, systematic reviews of economic analyses must adhere to high methodologic standards for identifying, appraising, and integrating information.

Glossary

Cost–benefit analysis: A technique for measuring net gain or loss to society of a new program or project. It considers allocative efficiency (see below). Values of benefits are usually given in monetary terms.

Cost-effectiveness analysis: A technique for comparing alternative approaches to care, using metrics such as cost per life-year gained. Originally derived to assess technical efficiency (see below).

Cost-effectiveness ratio: This calculation estimates the value of additional resources (costs) required to achieve an additional unit of a health outcome.

Cost–utility analysis: A technique for comparing the costs and the utility of health gained for different alternatives, such as cost per quality-adjusted life-year gained.

Discounting: A technique for estimating the present value of costs and benefits occurring in different time periods.

Discrete-events simulation: A modeling technique in which individual patients pass from one health state to other health states and in which the time spent in each state is varied randomly on the basis of an underlying distribution defined by the modeler.

Efficiency: A term used to indicate optimal use of resources. Technical efficiency assesses which is the best program to meet a specific objective. Allocative efficiency measures the extent to which programs improve overall social welfare.

Markov model: A modeling technique commonly used to simulate health conditions that occur over time. In a Markov model, modelers assign cohorts (groups) of patients to outcome states deterministically for each time cycle of the model. Modelers determine both the distributional proportions and the transition probabilities.

Microsimulation: A modeling technique in which a large number of people pass through the model one at a time. Each person's path through the model is randomly generated on the basis of the underlying probability distributions defined by the modeler. Analysts then aggregate the results for each person to give the overall results and a probability distribution of outcomes.

Perspective: The point of view from which the analysis is conducted. An economic evaluation from one perspective (for example, the patient's) may consider the impact of different sets of costs and outcomes than one conducted from another perspective (for example, the insurance company's). Most bodies recommend that analyses be conducted from the societal perspective because it considers the broadest range of costs and benefits.

Sensitivity analysis: A method of exploring uncertainty about assumptions or data included in an economic evaluation. In one-way sensitivity analysis, only one variable is changed at a time; in multiway analysis, many variables are adjusted at the same time. The method can be used to consider thresholds of patient risk, effectiveness, or cost at which a health intervention might be judged a “good buy.”

Strong dominance (also known as simple dominance): An option is strongly dominated if another alternative has lower costs and is more effective; thus, the strongly dominated option can be ruled out of contention.

Utility: A term used by economists to sum up the satisfaction gained from a good or service. In health care evaluations, utility is often used in measures such as the quality-adjusted life-year or healthy-year equivalent, which take into account effect on quality of life as well as life-years gained.

Weak dominance: An option is weakly dominated when a more costly, more effective alternative exists and that alternative has a lower cost-effectiveness ratio.

References

1. Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ. 1996;313:275-83.
2. Siegel JE, Weinstein MC, Russell LB, Gold MR. Recommendations for reporting cost-effectiveness analyses. Panel on Cost-Effectiveness in Health and Medicine. JAMA. 1996;276:1339-41.
3. Mugford M. Using systematic reviews for economic evaluations. In: Systematic Reviews in Health Care. London: BMJ Publishing; 2001.
4. Pignone M, Rich M, Teutsch SM, Berg AO, Lohr KN. Screening for colorectal cancer in adults at average risk: a summary of the evidence for the U.S. Preventive Services Task Force. Ann Intern Med. 2002;137:132-41.
5. Pignone M, Saha S, Hoerger T, Mandelblatt J. Cost-effectiveness analyses of colorectal cancer screening: a systematic review for the U.S. Preventive Services Task Force. Ann Intern Med. 2002;137:96-104.
6. Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, et al. Current methods of the U.S. Preventive Services Task Force: a review of the process. Am J Prev Med. 2001;20:21-35.
7. Slavin RE. Best evidence synthesis: an intelligent alternative to meta-analysis. J Clin Epidemiol. 1995;48:9-18.
8. Gerard K, Seymour J, Smoker I. A tool to improve quality of reporting published economic analyses. Int J Technol Assess Health Care. 2000;16:100-10.
9. Jefferson T, Demicheli V, Vale L. Quality of systematic reviews of economic evaluations in health care. JAMA. 2002;287:2809-12.
10. Benade MM, Warlow CP. Costs and benefits of carotid endarterectomy and associated preoperative arterial imaging: a systematic review of health economic literature. Stroke. 2002;33:629-38.
11. Briggs AH, Goeree R, Blackhouse G, O'Brien BJ. Probabilistic analysis of cost-effectiveness models: choosing between treatment strategies for gastroesophageal reflux disease. Med Decis Making. 2002;22:290-308.
12. Brown ML, Fintor L. Cost-effectiveness of breast cancer screening: preliminary results of a systematic review of the literature. Breast Cancer Res Treat. 1993;25:113-8.
13. Siegel JE, Torrance GW, Russell LB, Luce BR, Weinstein MC, Gold MR. Guidelines for pharmacoeconomic studies. Recommendations from the Panel on Cost-Effectiveness in Health and Medicine. Pharmacoeconomics. 1997;11:159-68.
14. Wagner JL. Cost-effectiveness of screening for common cancers. Cancer Metastasis Rev. 1997;16:281-94.
15. Frazier AL, Colditz GA, Fuchs CS, Kuntz KM. Cost-effectiveness of screening for colorectal cancer in the general population. JAMA. 2000;284:1954-61.
16. Khandker RK, Dulski JD, Kilpatrick JB, Ellis RP, Mitchell JB, Baine WB. A decision model and cost-effectiveness analysis of colorectal cancer screening and surveillance guidelines for average-risk adults. Int J Technol Assess Health Care. 2000;16:799-810.
17. Loeve F, Brown ML, Boer R, van Ballegooijen M, van Oortmarssen GJ, Habbema JD. Endoscopic colorectal cancer screening: a cost-saving analysis. J Natl Cancer Inst. 2000;92:557-63.
18. Ness RM, Holmes AM, Klein R, Dittus R. Cost-utility of one-time colonoscopic screening for colorectal cancer at various ages. Am J Gastroenterol. 2000;95:1800-11.
19. Sonnenberg A, Delco F, Inadomi JM. Cost-effectiveness of colonoscopy in screening for colorectal cancer. Ann Intern Med. 2000;133:573-84.
20. Vijan S, Hwang EW, Hofer TP, Hayward RA. Which colon cancer screening test? A comparison of costs, effectiveness, and compliance. Am J Med. 2001;111:593-601.
21. Institute of Medicine. Cost-Effectiveness Modeling: Outcomes of an Invitational Workshop. Washington, DC: National Academy Pr; 2005.
22. Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-Effectiveness in Health and Medicine. New York: Oxford Univ Pr; 1996.
23. Neumann PJ, Stone PW, Chapman RH, Sandberg EA, Bell CM. The quality of reporting in published cost-utility analyses, 1976-1997. Ann Intern Med. 2000;132:964-72.
24. Carande-Kulis VG, Maciosek MV, Briss PA, Teutsch SM, Zaza S, Truman BI, et al. Methods for systematic reviews of economic evaluations for the Guide to Community Preventive Services. Task Force on Community Preventive Services. Am J Prev Med. 2000;18:75-91.
25. Chiou CF, Hay JW, Wallace JF, Bloom BS, Neumann PJ, Sullivan SD, et al. Development and validation of a grading system for the quality of cost-effectiveness studies. Med Care. 2003;41:32-44.
26. Ofman JJ, Sullivan SD, Neumann PJ, Chiou CF, Henning JM, Wade SW, et al. Examining the value and quality of health economic analyses: implications of utilizing the QHES. J Manag Care Pharm. 2003;9:53-61.
 
