
Better Information for Better Health Care: The Evidence-based Practice Center Program and the Agency for Healthcare Research and Quality

David Atkins, MD, MPH; Kenneth Fink, MD, MGA, MPH; and Jean Slutsky, PA, MSPH

From the Agency for Healthcare Research and Quality, Rockville, Maryland.


Disclaimer: The opinions expressed in this article are those of the authors and do not represent the official policy of the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.

Potential Financial Conflicts of Interest: Authors of this paper have received funding for Evidence-based Practice Center reports.

Requests for Single Reprints: David Atkins, MD, MPH, Agency for Healthcare Research and Quality, 540 Gaither Road, Rockville, MD 20850; e-mail, datkins@ahrq.gov.

Current Author Addresses: Drs. Atkins and Fink and Ms. Slutsky: Agency for Healthcare Research and Quality, 540 Gaither Road, Rockville, MD 20850.


Ann Intern Med. 2005;142(12_Part_2):1035-1041. doi:10.7326/0003-4819-142-12_Part_2-200506211-00002

To provide decision makers with the best available evidence, the Agency for Healthcare Research and Quality established a network of Evidence-based Practice Centers across North America. The centers perform systematic reviews on important questions posed by partner organizations about clinical, organizational, and policy interventions in health care. The Agency works closely with partners and other decision makers to help translate that evidence into practice or policy. In this paper, we review important lessons we have learned over the past 7 years about how to increase the efficiency and impact of systematic reviews. Lessons concern selecting the right topics and scope, working effectively with partners, and balancing consistency and flexibility in methods. We also examine the continuing evolution of the program and the anticipated impact of work on comparative effectiveness to be performed under the Medicare Modernization Act of 2003.

In 1989, the U.S. Congress established the Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality [AHRQ]) with the ambitious mission “to improve the quality, safety, efficiency, and effectiveness of health care for all Americans” (http://www.ahrq.gov/about/budgtix.htm). Motivated by research showing that practice varied widely across the United States, lawmakers hoped that AHRQ could improve quality and efficiency by clarifying the evidence about what works and what doesn't work in health care (1, 2). Improving quality and reducing costs have proven to be formidable challenges, reflecting systemic problems in the health care system that will not be solved by evidence alone. Nonetheless, a 2001 Institute of Medicine report on how to address this “quality chasm” identified evidence-based decision making as one of the core components of delivering safe, effective, efficient, and patient-centered care (3, 4). Recent examples of high-dose chemotherapy for breast cancer, postmenopausal hormone therapy, and cyclooxygenase-2 inhibitors have starkly illustrated the dangers to patients and the system at large when health care decisions are driven by advocacy and marketing rather than a balanced examination of the evidence.

The Agency for Healthcare Research and Quality has pursued its commitment to evidence-based practice through research and programs that have produced important new information about the benefits, risks, costs, and cost-effectiveness of specific treatments and technologies for common and costly conditions such as back pain, prostate disease, and pneumonia (5-7). Perhaps the most visible commitment to evidence-based practice was AHRQ's support of a clinical practice guideline program that produced 19 guidelines from 1990 to 1996 (http://www.ahrq.gov/clinic/cpgonline.htm) on important conditions such as heart failure and otitis media (8). Initially requested by Congress, the guidelines proved popular with many stakeholders, and their explicit methods helped elevate the standards for evidence-based guidelines. At the same time, they also revealed limitations of centrally developed, government-sponsored guidelines. The process of convening individual panels, commissioning background reports, and developing guidelines was slow and expensive. Although panel members were all independent nongovernment experts, critics assumed that the primary purpose of the guidelines was to help the government save money by discouraging expensive procedures. Bringing primary care clinicians, specialists, nurses, and methodologists together was an important advance but increased the perceived threat to some specialists. More important, no single guideline could anticipate all the considerations that influence clinical practice recommendations. Organizations that implemented AHRQ guidelines often modified them to address local issues and facilitate adoption by their physicians (9). The greatest challenge to AHRQ's guideline program occurred when several guidelines became lightning rods amidst a larger movement to shrink government's role in clinical policy.

Although AHRQ narrowly escaped an attempt in 1995 to eliminate its funding (10), it was already redesigning its guideline program to address the barriers to getting evidence into practice. In 1997, the Agency began preparations for the National Guideline Clearinghouse, launched the first of a series of research programs on how to translate research into practice, and announced that it would cease producing guidelines while increasing the production of evidence through a new Evidence-based Practice Center (EPC) program.

The EPC program was designed to provide the best available evidence to decision makers while increasing the roles for a wide variety of stakeholders. The Agency selected 12 centers across North America to develop systematic reviews of the evidence on important health care questions. Questions to be addressed would be nominated by professional societies, health plans, insurers, federal and state agencies, and other private and public groups, who would now assume the responsibility of using the evidence to improve practice—through guidelines, quality initiatives, coverage decisions, research programs, advocacy, and other activities.

The initial EPCs were chosen on the basis of their broad expertise in research methods and systematic reviews. They consisted of academic centers, including the 4 North American Cochrane Centers at that time, and private nonprofit research organizations. In June 2002, AHRQ awarded new 5-year contracts to 13 centers, including renewed contracts for 10 of the original EPCs (Table 1).

Table 1. Evidence-based Practice Centers

The systematic reviews produced by the EPCs are intended to inform a wide range of health care decisions. Of the current 13 EPCs, 3 conduct technology assessments for the Centers for Medicare & Medicaid Services to inform coverage decisions for new technologies. Another center is dedicated to supporting the work of the U.S. Preventive Services Task Force, and the remaining 9 “generalist” EPCs conduct reviews on a more diverse range of topics nominated by outside partners. In addition, various federal agencies fund systematic reviews through the 13 EPCs to support consensus conferences, research planning, policy initiatives, and other programs. Table 2 lists the reports released in 2004, along with their partners; the AHRQ Web site lists all 119 reports released to date (http://www.ahrq.gov/clinic/epcix.htm).

Table 2. Reports Released by the Evidence-based Practice Center Program in 2004

Nominating organizations describe why the issue is important, what preliminary questions should be addressed, and how they plan to implement the findings of an evidence report. The Agency prioritizes nominated topics according to clinical and economic burden of the condition; controversy over existing evidence; relevance to AHRQ priority populations and federal health care programs; and the potential for the review to change practice, including the partner's plan for using and evaluating the impact of the report. An EPC Coordinating Center (run by the Lewin Group, a health care and human services consulting firm) conducts an abbreviated literature search to ensure that evidence for a systematic review is sufficient; surveys existing guidelines and reviews and consults with experts to determine which key questions are worth addressing; assesses areas of controversy or practice variation; and scans for important ongoing studies that may justify deferring a review.

Once a topic's key questions are developed and the work is assigned, the EPC convenes a panel of 5 to 8 content experts nominated by the EPC, AHRQ, and the partner or partners. The panel helps refine and prioritize questions, provides advice on which types of studies to include or exclude, and suggests other analyses that may be useful. The EPCs conduct a comprehensive, structured search of MEDLINE, EMBASE, and additional databases as appropriate to the topic. Articles are selected according to the predefined inclusion and exclusion criteria, the reviewers rate the quality of individual studies, and results are summarized by using quantitative (that is, meta-analytic) or qualitative methods as deemed appropriate. Draft reviews are circulated widely for peer review to content experts, representatives of relevant specialty organizations, federal agencies, and AHRQ scientific staff. A report detailing the disposition of reviewers' comments is submitted to AHRQ with the final report. The timelines for EPC reports are challenging. We usually request final reports within 12 months from the time of topic assignment. Reports are released on the AHRQ Web site and are made available in print by request.
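
To make the quantitative option concrete, the following is a generic sketch of the most common pooling method (standard meta-analytic practice, not a method mandated for the EPCs): in a fixed-effect, inverse-variance meta-analysis, each study's effect estimate $\theta_i$ with variance $v_i$ receives weight $w_i = 1/v_i$, and

$$\hat{\theta} = \frac{\sum_i w_i \theta_i}{\sum_i w_i}, \qquad \operatorname{var}(\hat{\theta}) = \frac{1}{\sum_i w_i}.$$

Larger, more precise studies therefore dominate the pooled estimate; when studies are heterogeneous, random-effects methods instead use $w_i = 1/(v_i + \hat{\tau}^2)$, where $\hat{\tau}^2$ estimates the between-study variance.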

The EPC program exists within a growing international array of programs that develop evidence-based information to help guide policy and practice. Many EPC researchers are active in the Cochrane Collaboration (http://www.cochrane.org), and EPCs regularly search the Cochrane Controlled Trials Register to identify relevant trials. Where Cochrane or other high-quality reviews have addressed a question of interest, reports will incorporate or update that information and focus on areas where the research has not been synthesized. Several EPC reports have involved formal collaborations with Cochrane and the Health Technology Assessment program of the United Kingdom National Health Service (11). The Agency is an active member of INAHTA, the International Network of Agencies for Health Technology Assessment (http://www.inahta.org), and reports draw on work of other INAHTA organizations where appropriate. In addition to being available through the National Library of Medicine Bookshelf, AHRQ EPC reports are included in the University of York's Centre for Reviews and Dissemination databases (http://www.york.ac.uk/inst/crd).

Despite similarities, the AHRQ EPC program differs in important ways from these programs. Although supported by government, many of our partners are private organizations. As the primary funder, AHRQ exerts more centralized control to ensure that reports address the needs of users, in contrast to a more decentralized “bottom-up” approach in the Cochrane Collaboration. The EPC reports summarize the evidence but do not translate the findings into specific clinical recommendations or guidance, as done by the United Kingdom National Institute for Health and Clinical Excellence. The range of evidence considered by EPC reports is also unique, necessitated by the diversity of clinical and policy questions they have been asked to address, from bioterrorism training to health literacy (12).

An important goal of the EPC program is to advance the methods for conducting and reporting systematic reviews. In response to a request from Congress, the Research Triangle Institute/University of North Carolina EPC produced a report on systems for classifying the strength of scientific evidence. This report examined both empirically derived and consensus-derived criteria for rating the quality of individual studies and for characterizing the overall strength of a body of evidence (13). Both AHRQ staff and EPC members have also participated in the Grades of Recommendation Assessment, Development and Evaluation (GRADE) initiative, which recently released recommendations on grading quality of evidence (14), and they continue to refine approaches for assessing the quality of individual studies and for characterizing the strength of a body of evidence.

As reviewed in another paper in this supplement (15), the frequent inclusion of nonrandomized studies has stimulated a variety of approaches to evaluating the quality of observational studies and other study designs (15, 16). To evaluate the safety of ephedra, the RAND EPC developed a system for grading the validity of adverse event reports (17). For a report on carotid endarterectomy, the Oregon Health & Science University EPC developed an instrument to rate the quality of surgical case series (18). This EPC is now examining whether scores calculated by using this instrument are associated with reported adverse event rates in other surgical and nonsurgical case series.

Priorities for future research also include examining more efficient review strategies. The marginal value of increasingly intensive efforts to locate all potentially eligible studies is an important but largely unresolved question. The EPC reviews have generally used somewhat narrower search strategies than Cochrane reviews. Whether locating smaller studies, unpublished studies, and studies without English abstracts substantially reduces bias or alters conclusions of reviews is a question we are exploring, along with other factors that may contribute to discordant conclusions of systematic reviews on the same topic.

Systematic reviews do little to improve health care unless their findings are incorporated into practice or policy. Unfortunately, the link between describing a body of evidence and translating that evidence into practice or policy has been tenuous (19). Our experience confirms that the impact of a review is not necessarily a function of its comprehensiveness or rigor. Reviews must also produce knowledge that is relevant to specific clinical and policy decisions and present this information in a concise and easily understood format. Tools to facilitate using that evidence at the point of care often need to be developed—for example, incorporating findings into guidelines, quality measures, computerized reminders, or patient education materials.

Planning for translation and implementation should be part of the initial planning of the review. Partners are involved early in the EPC process to ensure that the report addresses relevant clinical or policy issues. Moreover, anticipating how the partner will use the findings of the report can help the EPC decide what evidence to consider, how to present results, and what additional analyses to perform. Guidelines and practice recommendations are one of the more common tools for translation, as in EPC reports prepared for the U.S. Preventive Services Task Force, the Consensus Development Program of the National Institutes of Health (NIH), and professional societies. Other partners, however, have distinctly different goals in translation: The NIH Office of Dietary Supplements has used many EPC reports to develop its research agenda, the National Quality Forum uses them to develop quality measures, and the Centers for Medicare & Medicaid Services and health plans look to them to support coverage decisions. The article by Matchar and colleagues in this supplement details additional uses of EPC reports (20).

Given the systemic roots of the quality gap in health care, targeting evidence only to clinicians or any single party is unlikely to substantially improve practice. We will need to do a better job at distilling relevant messages from EPC reports for the multiple decision makers who influence practice: clinicians, health plan administrators, purchasers, patients, and those making public policy. The audience for the full evidence reports, technical documents that often run over 100 pages, will probably remain limited to those who need to understand the detailed evidence in order to create guidelines or coverage decisions. Patients and policymakers need much more concise information, framed in the appropriate perspective. The Agency produces fact sheets (http://www.ahrq.gov/news/factix.htm) on selected evidence reports and has recently begun to develop tools for patients and providers based on EPC reports, such as “Managing Obesity: A Clinician's Aid” (http://www.ahrq.gov/clinic/obesaid.htm). Organizational and system change may be the most critical factor in promoting implementation of evidence, however. The Stanford/University of California, San Francisco, EPC has completed the first 3 of a series of planned reports on “Closing the Quality Gap” that examine the most effective interventions to improve quality of care in high-priority conditions (21).

Measuring the ultimate impact of EPC reports remains a continuing challenge. Through the Coordinating Center, AHRQ monitors the use of EPC reports through follow-up with partners and tracking of products such as publications, guidelines, and implementation efforts. There is anecdotal evidence of situations in which reports led to important policy changes (withdrawal of ephedra, coverage of specific new technologies) or contributed to successful initiatives to change practice (improving antibiotic use for acute otitis media), but more comprehensive assessment of areas where reports have contributed to changing practice is difficult. Our impact and ability to measure change are greatest when AHRQ and the EPCs can serve as the science partner to larger collaborative efforts to improve care, as was done with reports on coronary heart disease in women (22).

Perhaps the greatest challenge of all is creating a demand for evidence-based information while shaping realistic expectations about what it can and cannot do. Clinicians and policymakers are dismayed at how frequently reports conclude that there is insufficient evidence to answer critical questions, but evidence is never the only element in clinical and policy decisions. Reports need to do a better job clarifying the implications of what we know and don't know for different types of decisions.

Both AHRQ and the EPCs have learned many useful lessons concerning the support and conduct of systematic reviews in the first 7 years of this program.

Identifying the Right Targets for Evidence

Choosing the right topics is probably the most important yet most difficult part of the process. Systematic reviews are a tool but must be matched to the right problem. Uncertainty about the evidence is only one of many factors that may contribute to the frequent gaps between typical practice and the best evidence (23). Evidence reports are less useful when the primary barriers to evidence-based practice are problems such as limited access, misaligned financial incentives, or organizational barriers. The impact of a systematic review can be further undermined by a lack of sufficient good-quality evidence; imminent results of a major, ongoing study; lack of involvement of a critical clinical partner; or lack of a well-thought-out plan to change practice.

Defining Questions and Scope of a Review

Defining the appropriate scope of a review early in the process is critical. There is a continual tension between the desire to be exhaustive and rigorous and the aim of producing reports that meet the needs of partners at a reasonable cost. It is essential to identify the particular added value of each EPC report and to understand the precise context within which the partners need information. The benefit of a new review may be concentrated in only a few questions where the evidence has changed or is confusing, but addressing a comprehensive set of questions may be of greatest value when the intention is to create a guideline. A new analytic approach, such as a cost-effectiveness analysis, decision analysis, or careful assessment of benefits and harms, may fill an important gap. For example, a review of new technologies for cervical cancer screening used modeling to illustrate that new technologies would be cost-effective only if they were used to lengthen the customary interval between Papanicolaou tests (24).
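
To make the modeling logic concrete, consider the standard incremental cost-effectiveness ratio (a generic formulation; the cited report's model was more detailed):

$$\mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{standard}}}{E_{\text{new}} - E_{\text{standard}}},$$

where $C$ is the expected cost and $E$ the expected health benefit (for example, life-years) of each strategy. A more sensitive but costlier screening test raises the numerator, while lengthening the interval between tests cuts lifetime costs and gives up little benefit, which is why the new cervical cancer screening technologies appeared cost-effective only when paired with longer screening intervals.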

Working with Partners

Working with partners can also involve a delicate balancing act. Partners often come in with exaggerated hopes that the evidence will support clear and definitive recommendations or support the practices of their constituents. Reports need to address the needs of partners without excessively tailoring their scope to the narrow interest of a single organization. Matching the questions to the nature of the literature and the needs of the partner has required close working relationships among the EPC, AHRQ staff, and partner representatives.

The Agency tries to leverage its investment by facilitating collaboration among multiple partners. The most successful efforts are those that have brought together different constituencies with a common interest. For example, the American Academy of Family Physicians and the American Academy of Pediatrics used an EPC report to jointly develop a guideline on diagnosing and managing acute otitis media (25). A report on perinatal depression involved collaboration of 10 organizations within the U.S. Department of Health and Human Services working on the Safe Motherhood Initiative (26). Reports have had a smaller impact when we were unable to get the early buy-in of a major stakeholder.

Balancing Consistency and Flexibility in Methods

Over the first 5 years of the EPC program, AHRQ did not attempt to impose uniform methods on the centers. Empirical evidence to define the best way to rate studies or bodies of research was limited, and the breadth of topics addressed by the EPC program required flexibility. The primary criteria used to evaluate specific study designs have nonetheless been relatively consistent across reports; for example, randomization, blinding, and dropout rate are consistently assessed for randomized trials. The Agency and other users have also recognized the value of a more consistent approach to describing the strength of a body of evidence. Working with federal partners, we have identified common elements within the criteria used to describe overall strength of evidence by groups such as the U.S. Preventive Services Task Force, the GRADE initiative, the Physician Data Query program of the National Cancer Institute, the report by the Research Triangle Institute, and the Strength of Recommendation Taxonomy (SORT) from family medicine journals (27): study design, quality of individual studies (internal validity), strength of association, consistency of findings, directness of the evidence, and applicability (external validity) of findings to the population or intervention of interest.

Choosing Inclusion and Exclusion Criteria

The most common criticism of EPC reports has been that they exclude too many studies, including some that reviewers feel are informative. Many reports have restricted evidence to randomized, controlled trials when addressing effectiveness questions in order to protect against sources of bias in other study designs. When there are few well-done trials, however, users are usually dissatisfied if reports conclude the evidence is insufficient without considering other study designs, especially since these may be the very studies that are influencing current policy debates. We have encouraged EPCs to take a “best-evidence” approach that expands inclusion criteria where higher-quality studies are lacking. As illustrated in the papers in this supplement, EPCs have recognized the value of alternative study designs for capturing information important to clinicians, such as the rate of adverse effects of specific treatments, the proportion of patients having a good outcome after surgery, or the outcomes of diagnostic tests. In other cases, the primary value of reviewing nonrandomized studies may be to clearly describe their limitations and to recommend the types of studies that would provide better evidence.

Involving Specialized Experts

The Agency elected to emphasize broad, general expertise among the EPCs rather than establishing centers specializing in a single content area, such as cardiology. Generalist centers give the program more flexibility and can better serve the interests of a broad group of stakeholders. They may also be less likely to be perceived as having a bias than a specialized center that regularly works with a dominant partner. Knowledgeable content experts, however, need to be involved in all stages of the review. Reports will not be credible with the community they seek to influence if they do not include substantial input from experts. The challenge is to involve experts who understand the nuances of the research but are open-minded enough to critically reexamine some of the accepted conclusions in their field.

The landscape for health care information has changed substantially since 1997, when the EPC program was initiated. Information on different treatments and tests is increasingly available through the Internet and in traditional media. Patients, providers, and purchasers all want more direct access to the evidence but need the assistance of a credible and objective synthesis as a starting point for their decisions (28). Moreover, changes in the health care market and in clinical practice will require patients to become active participants in a widening array of health care decisions: whether to enroll in a consumer-driven health plan, which pharmacy benefit plan to select, or whether to undergo surgery for their prostate cancer. The EPC program will need to pay increasing attention to patients and the public as important customers for the evidence produced.

These changes are also reflected in the Medicare Modernization Act of 2003, which ushers in a variety of significant changes to the Medicare program, including a new outpatient drug benefit. Recognizing that critical clinical questions (for example, What is the best treatment for gastroesophageal reflux? Should hospitals establish specialized stroke centers?) often cannot be answered by the types of studies required for approval of a new drug or device, the Act directs new attention to questions of comparative effectiveness of health care services and strategies for improving how services are organized and delivered (http://www.medicare.gov/medicarereform). The Act requires that AHRQ 1) develop a process to identify the highest-priority topics to address, 2) establish a timetable for completing initial systematic reviews of the evidence in priority areas, and 3) continually consult with relevant stakeholders. In fiscal year 2005, $15 million was appropriated for this work. The systematic reviews performed under section 1013 of the Act will not only highlight what is known but will also identify the knowledge gaps where new research is most critical for making informed decisions. The Act also makes clear that the evaluations generated by this process must be appropriate for various decision makers, from patients to managers of pharmacy benefits. These goals will push the EPC program to shape reports and develop new products that make key findings accessible to varied audiences. The lessons we learn from this work will benefit our efforts to implement all EPC reports.

The ultimate verdict on the value of the EPC program will come from the clinicians, policymakers, and patients who are seeking the best evidence to help them make difficult health care decisions. The success of future reports is not likely to hinge on further methodologic innovations but will depend on the ability of EPC reports to address specific clinical or policy questions in a timely manner and communicate clearly to these distinct audiences. The goal remains the same—better health outcomes through better-informed decisions—but getting there will require that we continually adapt our process and products to the individual needs of our partners.


References

1. Gray BH. The legislative battle over health services research. Health Aff (Millwood). 1992;11:38-66.

2. Kosterlitz J. Cookbook medicine. Natl J (Wash). 1991;23:574-7.

3. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.

4. Adams K, Corrigan JM, eds. Priority Areas for National Action: Transforming Health Care Quality. Washington, DC: Institute of Medicine; 2003.

5. Wennberg JE, Barry MJ. Outcomes research [Letter]. Science. 1994;264:758-9.

6. Salive ME, Mayfield JA, Weissman NW. Patient Outcomes Research Teams and the Agency for Health Care Policy and Research. Health Serv Res. 1990;25:697-708.

7. Stryer D, Tunis S, Hubbard H, Clancy C. The outcomes of outcomes and effectiveness research: impacts and lessons from the first decade. Health Serv Res. 2000;35:977-93.

8. Nutting PA. AHCPR clinical practice guidelines [Editorial]. Am Fam Physician. 1992;46:57-8.

9. Brown JB, Shye D, McFarland B. The paradox of guideline implementation: how AHCPR's depression guideline was adapted at Kaiser Permanente Northwest Region. Jt Comm J Qual Improv. 1995;21:5-21.

10. Gray BH, Gusmano MK, Collins SR. AHCPR and the changing politics of health services research. Health Aff (Millwood). 27 October 2003. doi:10.1377/hlthaff.w3.283. Accessed at http://content.healthaffairs.org/cgi/content/abstract/hlthaff.w3.283v1 on 19 April 2005.

11. Shekelle PG, Morton SC, Maglione MA, Suttorp M, Tu W, Li Z, et al. Pharmacological and surgical treatment of obesity. Evidence Report/Technology Assessment No. 103 (Prepared by the Southern California–RAND Evidence-based Practice Center under contract 290-02-0003). Rockville, MD: Agency for Healthcare Research and Quality; July 2004. AHRQ publication no. 04-E028-2.

12. Berkman ND, DeWalt DA, Pignone MP, Sheridan SL, Lohr KN, Lux L, et al. Literacy and health outcomes. Evidence Report/Technology Assessment No. 87 (Prepared by the Research Triangle Institute–University of North Carolina Evidence-based Practice Center under contract 290-02-0016). Rockville, MD: Agency for Healthcare Research and Quality; January 2004. AHRQ publication no. 04-E007-2.

13. West S, King V, Carey TS, Lohr KN, McKoy N, Sutton S, et al. Systems to rate the strength of scientific evidence. Evidence Report/Technology Assessment No. 47 (Prepared by the Research Triangle Institute–University of North Carolina Evidence-based Practice Center under contract 290-97-0011). Rockville, MD: Agency for Healthcare Research and Quality; April 2002. AHRQ publication no. 02-E016.

14. Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ. 2004;328:1490.

15. Norris SL, Atkins D. Challenges in using nonrandomized studies in systematic reviews of treatment interventions. Ann Intern Med. 2005;142:1112-9.

16. Deeks JJ, Dinnes J, D'Amico R, Sowden AJ, Sakarovitch C, Song F, et al. Evaluating non-randomised intervention studies. Health Technol Assess. 2003;7:1-186.

17. Shekelle P, Hardy M, Morton SC, Maglione M, Suttorp M, Roth E, et al. Ephedra and ephedrine for weight loss and athletic performance enhancement: clinical efficacy and side effects. Evidence Report/Technology Assessment No. 76 (Prepared by the Southern California–RAND Evidence-based Practice Center under contract 290-97-0001). Rockville, MD: Agency for Healthcare Research and Quality; March 2003. AHRQ publication no. 03-E022.

18. Meenan RT, Saha S, Chou R, Swartztrauber K, Krages KP, O'Keeffe-Rosetti M, et al. Effectiveness and cost-effectiveness of echocardiography and carotid imaging in the management of stroke. Evidence Report/Technology Assessment No. 49 (Prepared by the Oregon Health & Science University Evidence-based Practice Center under contract 290-97-0018). Rockville, MD: Agency for Healthcare Research and Quality; 2002.

19. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8:iii-iv, 1-72.

20. Matchar DB, Westermann-Clark EV, McCrory DC, Patwardhan M, Samsa G, Kulasingam S, et al. Dissemination of Evidence-based Practice Center reports. Ann Intern Med. 2005;142:1120-5.

21. Shojania KG, McDonald KM, Wachter RM, Owens DK. Closing the quality gap: a critical analysis of quality improvement strategies. Volume 1. Series overview and methodology. Technical Review No. 9. Rockville, MD: Agency for Healthcare Research and Quality; August 2004. AHRQ publication no. 04-0051-1.

22. Grady D, Chaput L, Kristof M. Results of systematic review of research on diagnosis and treatment of coronary heart disease in women. Evidence Report/Technology Assessment No. 80 (Prepared by the University of California, San Francisco–Stanford Evidence-based Practice Center under contract 290-97-0013). Rockville, MD: Agency for Healthcare Research and Quality; May 2003. AHRQ publication no. 03-0035.

23. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635-45.

24. McCrory DC, Matchar DB, Bastian L, Datta S, Hasselblad V, Hickey J, et al. Evaluation of cervical cytology. Evidence Report/Technology Assessment No. 5 (Prepared by the Duke University Evidence-based Practice Center under contract 290-97-0014). Rockville, MD: Agency for Health Care Policy and Research; February 1999. AHCPR publication no. 99-E010.

25. American Academy of Pediatrics Subcommittee on Management of Acute Otitis Media. Diagnosis and management of acute otitis media. Pediatrics. 2004;113:1451-65.

26. Gaynes BN, Gavin N, Meltzer-Brody S, Lohr KN, Swinson T, Gartlehner G, et al. Perinatal depression: prevalence, screening accuracy, and screening outcomes. Evidence Report/Technology Assessment No. 119 (Prepared by the RTI–University of North Carolina Evidence-based Practice Center under contract 290-02-0016). Rockville, MD: Agency for Healthcare Research and Quality; February 2005. AHRQ publication no. 05-E006-2.

27. Ebell MH, Siwek J, Weiss BD, Woolf SH, Susman J, Ewigman B, et al. Strength of recommendation taxonomy (SORT): a patient-centered approach to grading evidence in the medical literature. Am Fam Physician. 2004;69:548-56.

28. Von Knoop C, Lovich D, Silverstein MB, Tutty M. Vital Signs: E-Health in the United States. Boston: Boston Consulting Group; 2003.
 
