Dissemination of Evidence-based Practice Center Reports

In 1997, the Agency for Healthcare Research and Quality (AHRQ) created 12 Evidence-based Practice Centers (EPCs). The objective of the EPC program was to provide a public service information resource for policymakers, clinicians, and other decision makers. According to AHRQ, the EPCs' mission is to "develop evidence reports and technology assessments on topics relevant to clinical, social science/behavioral, economic, and other health care organization and delivery issues" (1).
The principal products of the EPCs are called "evidence reports" or "technology assessments"; for simplicity's sake, we refer to them here as "evidence reports." These evidence reports are detailed evaluations of the scientific literature on specific clinical, behavioral, organizational, and financing topics of interest to policymakers, clinicians, and other decision makers. The reports are requested by organizations representing these interested parties, and these organizations are called "partners" in the context of the EPC process.
The real benefits of an evidence report are achieved through dissemination: public distribution of the report and its incorporation into efforts aimed at some specific objective, such as practice improvement (sometimes termed "implementation"). The Agency for Healthcare Research and Quality plays an intermediary role between the EPCs and partners and other private and public users "in their efforts to improve the quality, effectiveness, and appropriateness of health care by synthesizing the evidence and facilitating the translation of evidence-based research findings" (1). From the outset, dissemination has been part of the EPC program, but not as a funded component. During the first 5 years of the program, each evidence report project included a "dissemination plan" and, later, an "implementation plan" as defined work products. In the current 5-year cycle, AHRQ expects that EPCs will "facilitate translation of the reports into quality improvement tools, educational programs, and reimbursement policies" (2). This underscores the general thrust of the EPC program as an information resource that generates products intended to serve the needs of users. Product dissemination, however, has usually been left in the hands of the defined partners and other users of evidence reports.
In this article, we briefly review dissemination activities since the EPC program's inception and consider one case study in greater detail to identify factors that promote or inhibit the successful dissemination of EPC evidence reports. In the absence of a current census of dissemination activities linked to EPC products, we base our review of such activities on semistructured telephone interviews, conducted from April to October 2004, with 22 colleagues, including EPC directors, AHRQ representatives, and participants from partner organizations. We also support our observations with references to the literature on successful dissemination of evidence reports.

INTERVIEWS WITH COLLEAGUES
The EPCs have been involved in producing 107 evidence reports, of which 101 have been disseminated by AHRQ in printed form (both full report and executive summary, with or without codissemination by partners) and are available on a publicly available Web site (2). On the basis of our interviews with colleagues in EPCs, AHRQ staff, and representatives of partner organizations, we identified several distinct categories of dissemination activities (Table 1).
Our interviews also yielded a list of factors that promote effective dissemination of evidence reports. Table 2 lists factors mentioned by at least one interviewee. Consistent with the interview script, we have organized these factors according to which party ultimately takes responsibility for ensuring their execution.
All interviewees agreed that the success of an evidence report is ultimately measured by the degree to which it influences or is directly used to induce action, that is, by the degree to which it is effectively disseminated. However, as noted above, dissemination activities are not built into each EPC project; specifically, these activities are not funded under the EPC program. This leaves dissemination of the reports to the good graces of the participants. As Table 1 illustrates, the successes thus achieved can be impressive. It is important to recognize, however, that failing to integrate research and dissemination goals can derail efforts to translate an evidence report into meaningful action, while actively integrating them can promote more effective dissemination. To illustrate the latter point, we briefly consider a series of projects conducted by the EPC at Duke University in Durham, North Carolina, that together serve as an example of the benefits of integration.

CASE STUDY: CHRONIC KIDNEY DISEASE EVIDENCE REPORT
The following example shows how a report can be effectively disseminated when an EPC and partner share goals, plan sequential steps carefully, participate actively and equally, and plan each step with the ultimate objective of improving practice.
The Renal Physicians Association (RPA), a professional society, solicited proposals to improve care of individuals with advanced chronic kidney disease, based on best evidence. In partnership with the Duke EPC, RPA pursued 4 steps: 1) development of a comprehensive evidence report on the care of people who have advanced chronic kidney disease but are not yet receiving dialysis ("Appropriate Patient Preparation for Renal Replacement Therapy" [28], approved as an EPC report in 2001); 2) production of a clinical practice guideline and a set of clinical performance measures based on this evidence report (29); 3) creation of a set of tools for all individuals involved in care (ranging from calculators for estimating renal function for primary care providers, to referral-letter templates and flow sheets for renal specialists, to diaries and educational materials for patients), called "The Advanced CKD [chronic kidney disease] Patient Management Tool Kit"; and 4) pilot testing of the tool kit in community settings. This will be followed by a national rollout of the tool kit in 2005.
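The renal-function calculators in step 3 are typically built on published estimating equations. As an illustration only (the report does not describe the tool kit's actual implementation, and the function name and parameters below are our own), the widely used Cockcroft-Gault equation for creatinine clearance can be coded in a few lines:

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimate creatinine clearance (mL/min) with the Cockcroft-Gault equation.

    CrCl = (140 - age) * weight / (72 * serum creatinine),
    multiplied by 0.85 for female patients.
    """
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return 0.85 * crcl if female else crcl

# Example: a 60-year-old, 72-kg man with serum creatinine 1.0 mg/dL
print(round(cockcroft_gault(60, 72, 1.0, female=False), 1))  # 80.0
```

Embedding an equation like this in a simple calculator is the kind of point-of-care tool the case study describes for primary care providers.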
This series of projects has several features that appear to have influenced the overall success of the effort. First, the EPC and RPA shared a common goal: developing an evidence report that would ultimately be used by clinicians to optimize the care of patients with advanced chronic kidney disease. Furthermore, each step in the process, from the evidence report to the national rollout of the tool kit, was planned with the next step in mind. When the evidence report was created, the EPC and RPA were aware that a clinical practice guideline would follow, and when the guideline and performance measures were developed, both parties were aware that a set of tools would follow. This knowledge guided the wording of the recommendation statements of the clinical practice guidelines so that, for example, the performance measures could be directly derived from them. Indeed, the performance measures were part of the tool kit for evaluating the impact of the whole effort.
In addition, both parties (the EPC and RPA) partnered effectively in these sequentially planned steps in order to develop useful products for practicing clinicians. For all steps, RPA provided expert advice, and the EPC ensured that scientifically sound procedures were used. During the entire process, over a period of 4 years, both partners reported on the progress of the systematic process at various scientific meetings, including the Health Technology Assessment International meeting, the AHRQ Translating Research Into Practice (TRIP) meeting, and the American Society of Nephrology meeting. Presentation of milestones encouraged both parties to regularly articulate and clarify their goals.

Table 1. Dissemination Activities Related to Evidence-based Practice Center Evidence Reports

Report in professional publications: "Management of Chronic Hypertension During Pregnancy" (3) was presented in an American College of Obstetrics and Gynecology Practice Bulletin (4, 5).
Report in the lay press: "Ephedra and Ephedrine for Weight Loss and Athletic Performance Enhancement" (6) was covered in newspapers and on the Internet (7).
"Evaluation of Cervical Cytology" (11): intermediate products of the evidence report were provided to another EPC (12), and a cost-effectiveness analysis from this report has been detailed in peer-reviewed publications.

FACTORS INHIBITING SUCCESSFUL DISSEMINATION OF EPC PRODUCTS
Evidence-based Practice Centers strive for meticulous care and methodologic excellence in evidence reports, and they share with AHRQ and partner organizations a fundamental desire to ensure that EPC products are used, rather than relegated to shelves to gather dust. Our interviews document that in the past 7 years, EPC products have been disseminated successfully many times. In addition, we gleaned important insights, which we share below.

Cultural Factors
Cultural factors influence dissemination efforts because producers of evidence reports (typically academic researchers) and partners and other users of the reports (usually policymakers, clinicians, or other decision makers) often have different values, goals, and perspectives that are shaped by their professional cultures (30). As exemplified in the case study of chronic kidney disease management, when producers and users communicate effectively, they can create a common vision of products, such as tool kits. However, academic researchers who do not have professional experience in providing medical care cannot always easily create effective clinical tools, and both parties must struggle to develop a common goal. Furthermore, producers of evidence reports are generally not trained to think about how different presentations of evidence will affect users' behavior (30), and they may need to be educated to consider innovative ways to present evidence, including video and Internet formats (31, 32).
Table 2. Factors Influencing Successful Dissemination of Evidence Reports

Responsibility of EPC
Ensure that rigorous scientific methods are used to create the evidence report
Ensure that the evidence report includes all available evidence so that the partner can weigh options; the report should interpret evidence but not recommend specific actions or policies
Strive to understand the clinical or policy context of the research question in order to increase the likelihood that the report will help answer real-life problems
Enlist the help of a content expert who can straddle both research and political arenas and who understands the goals of all players
Finish the report on time

Responsibility of partner
Clearly define and articulate the questions to be answered in the evidence report
Clearly define and articulate the goals of any potential dissemination activity, if needed
Facilitate completion of the evidence report within the original timeline by not requesting additional services beyond the original scope
Actively involve the EPC in dissemination efforts, such as presenting the evidence report results at a conference
Actively involve the EPC in educating the partner and the wider audience about the principles of evidence-based medicine
Facilitate publication of results
Separate its need for high-quality evidence from its political objectives, if any
Strive to understand the principles of evidence-based medicine

Responsibility of both EPC and partner
Establish common research goals and dissemination goals early
As goals evolve, keep each other abreast of progress and any changes in goals

Researchers who produce evidence reports are intimately familiar with the principles of evidence-based medicine. Those who use evidence reports, however, do not necessarily understand evidence-based principles. Even when they do, they often approach evidence differently. Walshe and Rundall (33) have addressed the problem of "overuse, underuse, and misuse" of reviews by the British National Health Service Centre for Reviews and Dissemination, and they assert that slow adoption of evidence-based principles is the main reason for dissemination failures. Several EPC colleagues we spoke with expressed dismay that they had to spend significant time educating partners about evidence-based principles. Similarly, some partners expressed regret that EPC staff often do not understand the clinical or political context in which decisions are made. The two parties must educate one another. The necessary education is best transmitted at an early face-to-face meeting, as determined by our interviews and the available literature (34, 35).
Cultural differences mold the types of key questions that EPC researchers and partners, who use the reports to make their decisions, tend to formulate. Partners reported that evidence reports could not be used for decision making when the key questions addressed did not correspond to their real questions. The Canadian Health Services Research Foundation has helped develop the theoretical basis for understanding the role of research in evidence-based decision making, and it has stressed the need for researchers and decision makers to cross cultural bridges in forming key questions (30, 36-38). The Foundation comments that decision makers sometimes do not understand that some questions are not researchable, while researchers sometimes answer questions other than the one posed by the decision makers.
Academic researchers generally prefer to identify a "single information gap," a simple researchable question, while decision makers are trained to address multiple complex issues together (37). Researchers may prioritize intellectually satisfying research questions, while their partners, the decision makers, prioritize questions that will solve real-life problems. Several researchers articulated this sentiment, and the literature supports it (37). This difference results partially from different types of analytic training, but also partially from distinct incentive structures. Researchers strive to push the envelope of knowledge and to publish in peer-reviewed journals; they may also be subject to institutional incentives that discourage them from conducting translational research (36, 39). Goering and colleagues have suggested using a "policy forum" to encourage dissemination and to create an arena for discussion that broadens key questions so that they answer the nuanced issues policymakers must face (40).

Strategic Factors
Early development of a shared conceptual framework appears key to a successful dissemination effort. Evidence-based Practice Center researchers need help understanding the framework within which the partner must work, the political forces that constrain or compel the partner, and the population the partner serves (34). Several authors have stressed the need for academic researchers to more thoroughly consider the social implications of their work (41, 42). While serendipity can lead to useful outcomes, practice improvement is less like basic science than like engineering. Like engineers, researchers must tailor their efforts to the needs of the population they serve.
Some research on dissemination of evidence has suggested that researchers may need to actively seek opportunities to participate in dissemination efforts (43). The EPC colleagues we spoke with expressed satisfaction when partners invited them to participate in dissemination efforts, such as conferences, but such involvement seemed to be the exception rather than the rule. Dissemination efforts are generally left to the partners. However, some authors suggest that certain types of collaborative research, such as "action research" and "participatory-action research," are changing traditions in academia and are requiring researchers to be involved in dissemination (44).
Another strategic problem is the rapid turnover of staff in partner organizations (36, 45). Our EPC colleagues hesitated to spend the time required to foster relationships with decision makers who might disappear midway through the project. Incentives may need to be changed somewhat to ensure that both researchers and decision makers in partner organizations find collaboration a worthwhile investment.

Resource Factors
Resource factors can strongly influence the success of any dissemination effort. It has been a traditional practice for organizations seeking to develop various products (such as clinical practice guidelines) to rely on the goodwill of their academic grantees when it comes to providing expert input for dissemination efforts. Several EPC participants suggested that this unfortunate historical precedent caused partners to underestimate the true economic cost of an EPC evidence report and associated dissemination efforts. The EPC program itself attests that even a willing academic is not likely to be allowed to donate the hundreds of hours required to create a rigorous evidence report without being formally compensated for this cost. The same should be true for dissemination efforts. Creative financial incentive structures may need to be developed to encourage researchers and partners to participate in dissemination research (43).

CONCLUSION
The EPC program has successfully fostered relationships between researchers and partners and has produced many well-received evidence reports. Success of dissemination efforts has varied, but numerous examples show that evidence reports can be translated into clinical practice guidelines, clinical improvement tools, priority-setting tools for research agendas, and tools for establishing coverage and reimbursement policies. Imaginative collaborative efforts between EPCs and partners could maximize the impact of evidence reports, particularly if the specific changes