Tiffani J. Bright, PhD; Anthony Wong, MTech; Ravi Dhurjati, PhD; Erin Bristow, BA; Lori Bastian, MD, MS; Remy R. Coeytaux, MD, PhD; Gregory Samsa, PhD; Vic Hasselblad, PhD; John W. Williams, MD, MHS; Michael D. Musty, BA; Liz Wing, MA; Amy S. Kendrick, RN, MSN; Gillian D. Sanders, PhD; David Lobach, MD, PhD
Disclaimer: The authors of this report are responsible for its content. Statements in the report should not be construed as endorsements by the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.
Acknowledgment: The authors thank Connie Schardt, MSLS, for help with the literature search and retrieval.
Grant Support: This project was funded under contract 290-2007-10066-I from the Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services.
Potential Conflicts of Interest: Disclosures can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M11-1215.
Requests for Single Reprints: Gillian D. Sanders, PhD, Director, Duke Evidence-based Practice Center, Duke Clinical Research Institute, 2400 Pratt Street, Durham, NC 27705; e-mail, firstname.lastname@example.org.
Current Author Addresses: Dr. Bright: 16 Kenilworth Drive, Hampton, VA 23666.
Mr. Wong: 2618 Briar Trail, Apartment 202, Schaumburg, IL 60173.
Dr. Dhurjati: D330-1 Mayo (MMC 729), 420 Delaware Street SE, University of Minnesota, Minneapolis, MN 55455.
Ms. Bristow: 728 Irolo Street, Apartment D, Los Angeles, CA 90005.
Dr. Bastian: Health Services Research & Development, 152 Veterans Affairs Medical Center, 508 Fulton Street, Durham, NC 27705.
Drs. Coeytaux, Hasselblad, Williams, and Sanders; Mr. Musty; Ms. Wing; and Ms. Kendrick: Duke Evidence-based Practice Center, Duke Clinical Research Institute, 2400 Pratt Street, Durham, NC 27705.
Dr. Samsa: Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27705.
Dr. Lobach: Klesis LLC, 6 Harvey Place, Durham, NC 27705.
Author Contributions: Conception and design: T.J. Bright, A. Wong, J.W. Williams, G.D. Sanders, D. Lobach.
Analysis and interpretation of the data: T.J. Bright, A. Wong, R. Dhurjati, G. Samsa, V. Hasselblad, G.D. Sanders, D. Lobach.
Drafting of the article: T.J. Bright, A. Wong, E. Bristow, G.D. Sanders, D. Lobach.
Critical revision of the article for important intellectual content: T.J. Bright, G. Samsa, G.D. Sanders, D. Lobach.
Final approval of the article: T.J. Bright, G. Samsa, G.D. Sanders, D. Lobach.
Provision of study materials or patients: T.J. Bright, M.D. Musty.
Statistical expertise: G. Samsa, V. Hasselblad.
Obtaining of funding: G.D. Sanders, D. Lobach.
Administrative, technical, or logistic support: T.J. Bright, M.D. Musty, L. Wing, A.S. Kendrick, G.D. Sanders.
Collection and assembly of data: T.J. Bright, A. Wong, R. Dhurjati, E. Bristow, L. Bastian, R.R. Coeytaux, M.D. Musty.
Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, et al. Effect of Clinical Decision-Support Systems: A Systematic Review. Ann Intern Med. 2012;157:29-43. doi: 10.7326/0003-4819-157-1-201207030-00450
Background: Despite increasing emphasis on the role of clinical decision-support systems (CDSSs) for improving care and reducing costs, evidence to support widespread use is lacking.
Purpose: To evaluate the effect of CDSSs on clinical outcomes, health care processes, workload and efficiency, patient satisfaction, cost, and provider use and implementation.
Data Sources: MEDLINE, CINAHL, PsycINFO, and Web of Science through January 2011.
Study Selection: Investigators independently screened reports to identify randomized trials published in English of electronic CDSSs that were implemented in clinical settings; used by providers to aid decision making at the point of care; and reported clinical, health care process, workload, relationship-centered, economic, or provider use outcomes.
Data Extraction: Investigators extracted data about study design, participant characteristics, interventions, outcomes, and quality.
Data Synthesis: 148 randomized, controlled trials were included. A total of 128 (86%) assessed health care process measures, 29 (20%) assessed clinical outcomes, and 22 (15%) measured costs. Both commercially and locally developed CDSSs improved health care process measures related to performing preventive services (n = 25; odds ratio [OR], 1.42 [95% CI, 1.27 to 1.58]), ordering clinical studies (n = 20; OR, 1.72 [CI, 1.47 to 2.00]), and prescribing therapies (n = 46; OR, 1.57 [CI, 1.35 to 1.82]). Few studies measured potential unintended consequences or adverse effects.
Limitation: Studies were heterogeneous in interventions, populations, settings, and outcomes. Publication bias and selective reporting cannot be excluded.
Conclusion: Both commercially and locally developed CDSSs are effective at improving health care process measures across diverse settings, but evidence for clinical, economic, workload, and efficiency outcomes remains sparse. This review expands knowledge in the field by demonstrating the benefits of CDSSs outside of experienced academic centers.
Primary Funding Source: Agency for Healthcare Research and Quality.
Despite increasing emphasis on clinical decision-support systems (CDSSs) in improving care and reducing costs, evidence supporting widespread use is limited. A CDSS is “any electronic system designed to aid directly in clinical decision making, in which characteristics of individual patients are used to generate patient-specific assessments or recommendations that are then presented to clinicians for consideration” (1). This review examines 3 types of decision-support interventions.
Classic CDSSs include alerts, reminders, order sets, drug-dose calculations that automatically remind the clinician of a specific action, or care summary dashboards that provide performance feedback on quality indicators. Information retrieval tools, such as an “infobutton” embedded in a clinical information system, are designed to aid clinicians in the search and retrieval of context-specific knowledge from information sources based on patient-specific information from a clinical information system. Knowledge resources, such as UpToDate, Epocrates, and MD Consult, consist of distilled primary literature that allows selection of content germane to a specific patient to facilitate decision making at the point of care or for a specific care situation.
Until recently, most studies of CDSSs came from 4 benchmark settings (Brigham and Women's Hospital/Partners HealthCare, the Department of Veterans Affairs, LDS Hospital/Intermountain Healthcare, and the Regenstrief Institute) (2). Although several reviews have examined the effects of CDSSs (1, 3–9), many questions remain about their impact. This systematic review adds to the literature by summarizing trials of CDSSs implemented in a clinical setting to aid decision making at the point of care or for a specific care situation.
We developed and followed a standard protocol for our review. Full details of our methods, search strategies, results, and conclusions are provided in a report commissioned by the Agency for Healthcare Research and Quality, available at www.effectivehealthcare.ahrq.gov.
We searched for studies done between January 1976 and January 2011 in MEDLINE accessed through PubMed, CINAHL, PsycINFO, and Web of Science.
We identified randomized trials of CDSSs implemented in a real clinical setting and used by health care providers to aid decision making at the point of care or for a specific care situation. Studies had to report at least one of the following types of outcomes: clinical (length of stay, morbidity, mortality, health-related quality of life, and adverse events), health care process (recommended preventive care, clinical study, or treatment ordered or completed), user workload and efficiency (user knowledge, number of patients seen, clinician workload, and efficiency), relationship-centered (patient satisfaction), economic (cost and cost-effectiveness), or use and implementation by a health care provider (acceptance, satisfaction, use, and implementation). We excluded studies that described nonelectronic CDSSs, included fewer than 50 participants, were not published in English, described closed-loop systems that did not involve a provider, evaluated systems that required mandatory compliance with the CDSS, or evaluated only the performance of the system as opposed to its effect on clinical practice.
Data related to study setting and design, sample characteristics, intervention characteristics, comparators, and outcomes were extracted by 1 reviewer and confirmed by another. Two reviewers used a standardized approach to independently categorize the quality of individual studies as good, fair, or poor (10) and evaluated the overall strength of evidence for each outcome as high, moderate, low, or insufficient (11). Reviewers also identified issues related to study setting, interventions, and outcomes that limited applicability of evidence (10, 12).
A priori–defined outcomes believed to be important in measuring the effect of CDSSs in improving clinical practice guided our synthesis process. Studies with a common outcome were grouped together to facilitate qualitative analysis. Quantitative analysis was done where 4 or more studies assessed the same outcome in the same manner, regardless of the specific CDSS intervention. Summary estimates were calculated by using the DerSimonian and Laird (13) random-effects model as implemented in Comprehensive Meta-analysis, version 2.2.055 (Biostat, Englewood, New Jersey).
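The DerSimonian and Laird random-effects model described above can be computed directly from per-study log odds ratios and their variances. The sketch below is a minimal, hypothetical Python re-implementation for illustration only; the review itself used Comprehensive Meta-analysis, and the function name and example data here are assumptions.

```python
import math

def dersimonian_laird(log_ors, variances):
    """Pool per-study log odds ratios with a DerSimonian and Laird
    random-effects model; returns (pooled OR, 95% CI low, 95% CI high)."""
    k = len(log_ors)
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    sum_w = sum(w)
    theta_f = sum(wi * y for wi, y in zip(w, log_ors)) / sum_w
    # Cochran's Q measures observed heterogeneity around the fixed-effect mean
    q = sum(wi * (y - theta_f) ** 2 for wi, y in zip(w, log_ors))
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    tau2 = max(0.0, (q - (k - 1)) / c)  # method-of-moments between-study variance
    # Random-effects weights add tau^2 to each study's sampling variance
    w_re = [1.0 / (v + tau2) for v in variances]
    theta = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return math.exp(theta), math.exp(theta - 1.96 * se), math.exp(theta + 1.96 * se)

# Hypothetical example: two studies with identical precision
pooled_or, ci_low, ci_high = dersimonian_laird([0.2, 0.4], [0.04, 0.04])
```

When study effects are more dispersed than sampling error alone would predict, tau² grows, the weights become more nearly equal across studies, and the confidence interval widens accordingly.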
Primary funding was provided by the Agency for Healthcare Research and Quality. The funding source formulated the initial study questions but otherwise had no role in the design, analysis, or interpretation of the data or in the decision to submit the manuscript for publication.
We screened 15 176 abstracts, evaluated 1407 full-text articles, and included 160 articles, representing 148 unique studies (Appendix Figure). Appendix Table 1 summarizes important characteristics of the studies and their quality rating. A total of 128 studies (86%) assessed health care process measures, 29 (20%) assessed clinical outcomes, and 22 (15%) measured costs. Many studies (n = 51) were performed in environments with established health information technology (IT); many were multisite studies involving multiple institutions (n = 46).
Appendix Figure. Summary of evidence search and selection. CDSS = clinical decision-support system; KQ = key question; RCT = randomized, controlled trial.
Appendix Table 1.
The Table summarizes the most important findings and the strength of supporting evidence for those findings, whereas Appendix Table 2 provides examples of interventions that assessed the outcomes of interest. Both commercially and locally developed CDSSs improved health care process measures related to performing preventive services, ordering clinical studies, and prescribing therapies. Few studies (n = 15) indicated conceptualization of potential unintended consequences or measured potential adverse effects of implementing decision-support tools.
Table. Summary of Evidence, by Outcome
Appendix Table 2. Examples of Clinical Decision-Support Interventions
Several studies assessed morbidity outcomes (14–38), such as hospitalizations, Apgar scores, surgical site infections, cardiovascular events, colorectal cancer, deep venous thrombosis, and hypoglycemia events. Topics addressed included diagnosis (16, 21, 29, 30, 34), pharmacotherapy (15, 18–22, 25, 28, 33), chronic disease management (14, 19, 20, 22, 26–28, 31, 32, 36, 38), laboratory test ordering (19, 37), immunizations (19, 35), preventive care (17, 19, 28–30, 37), and discharge planning (23, 24). Approximately 50% of the studies were performed in an academic setting.
Many studies evaluated locally developed interventions implemented in the ambulatory environment. Typical interventions were automatically delivered, system-initiated recommendations provided synchronously at the point of care to enable decision making during the health care provider–patient encounter. Three such interventions required a mandatory response (that is, required that the user respond to the given recommendation, whether that response was to accept or dismiss the recommendation or to modify the user's action) (17, 18, 37).
Comparators included usual care or no CDSS and the same CDSS with additional features. Limitations included short follow-up, low statistical power to detect important differences, and the potential for contamination of control-group providers, whose performance may have improved because of knowledge of the interventions.
Meta-analysis of these heterogeneous studies (n = 16) suggested that CDSSs improved morbidity outcomes (relative risk, 0.88 [95% CI, 0.80 to 0.96]). We rated this level of evidence as moderate. Most studies were good quality, and many of the interventions were evaluated in multiple institutions. However, the interventions were often paper-based or standalone systems implemented in academic or Veterans Affairs settings and were targeted toward a single condition.
Seven studies (17, 20–22, 33, 39, 40) reported mortality outcomes. Issues addressed included diagnosis (21), pharmacotherapy (20–22, 33, 40), chronic disease management (20, 22), preventing deep venous thrombosis (17), and detecting and notifying clinicians of critical laboratory values (39). Most CDSSs were locally developed and integrated into a computerized physician order entry (CPOE) or electronic health record (EHR) system and had system-initiated recommendations delivered synchronously at the point of care that did not require a clinician response.
Interventions were evaluated against usual care or no CDSS, except for 2 studies (20, 22) that compared the same intervention with additional features. Limitations included small sample size, duration shorter than 1 year, and possible contamination of control providers.
Meta-analysis of these heterogeneous studies (n = 6) suggested no significant effect of CDSSs on mortality, although CIs were wide (odds ratio [OR], 0.79 [CI, 0.54 to 1.15]). Of note, 2 studies reported a significant reduction in mortality with use of CDSSs (20, 22). We rated the overall strength of evidence related to mortality outcomes as low. Most studies were conducted in a single academic or Veterans Affairs setting with a comprehensive, well-established health IT infrastructure.
Five studies assessed the effectiveness of CDSSs in reducing or preventing adverse events (23, 24, 39–42). Studies were mostly implemented in an academic, inpatient setting. Studies evaluated the effect of these interventions to improve the timing of warfarin therapy (41), improve discharge planning (23, 24), prevent adverse drug events (42), detect critical laboratory values (39), and detect potentially inappropriate or inadequate antimicrobial therapy (40).
Typical interventions were locally developed, were integrated into a CPOE or an EHR system, and automatically delivered system-initiated recommendations in real time to enable decision making during the provider–patient encounter. Only 1 study clearly required a mandatory response (39).
All of the CDSSs were evaluated against usual care or no CDSS. Limitations included evaluation at a single institution, evaluation periods of less than 1 year, and potential improvement in physician performance owing to knowledge of the intervention.
Meta-analysis of these heterogeneous studies estimated a relative risk of 1.01 (CI, 0.90 to 1.14), and neither this summary nor any individual studies demonstrated a significant effect. We rated this level of evidence as low. Most studies were good quality, and 2 were conducted at multiple sites; however, these interventions primarily contained locally developed knowledge, and results may not be generalizable to nonteaching settings.
Forty-three studies examined the effect of CDSSs on the rates of ordering or completing recommended preventive care services (17, 19, 27, 28, 35, 37, 43–82). Most studies were conducted in the academic or ambulatory environment. Topics addressed included diagnosis (43, 47, 62, 71, 82), pharmacotherapy (19, 28, 49, 53, 54, 70, 77, 80), chronic disease management (19, 26, 28, 43, 44, 48, 51, 70, 72, 74, 76), laboratory test ordering (19, 37, 55, 60, 70, 71, 74, 75, 80, 81), preventive care (17, 19, 28, 37, 43, 45, 48, 52–60, 62–65, 68–71, 73–75, 79–82), immunizations (19, 35, 48–50, 52, 56, 66, 67, 78, 79, 81), and initiating discussions with patients (60, 61, 72).
Most interventions were locally developed, paper-based, or standalone systems and automatically delivered recommendations in real time to enable decision making during the health care provider–patient encounter. Only 7 of the interventions required a mandatory response (17, 37, 47, 49) or justification (64, 65, 68, 69, 75) for not adhering to the recommendation.
Comparators included usual care or no CDSS, direct comparison against the same CDSS with additional features, or comparison of the same CDSS for different conditions. Limitations included sparse data measuring patient or economic outcomes; few assessments of long-term outcomes of interventions; and the Hawthorne effect, which probably stimulated more comprehensive preventive care across groups.
In meta-analysis of these 25 heterogeneous studies (17, 27, 28, 35, 37, 43–63), the effect of CDSSs on preventive care services was significant (OR, 1.42 [CI, 1.27 to 1.58]) (Figure 1). We rated this level of evidence as high. Approximately one half of the studies were good quality, one third were evaluated in multicenter trials, and one fourth addressed multiple clinical conditions. However, most CDSSs were locally developed, not integrated into a CPOE or an EHR, and evaluated in academic medical centers, all of which can affect the generalizability of these findings.
Figure 1. Results of studies that examined whether recommended preventive care services were ordered.
Studies reporting the odds ratio of adhering to recommendations for ordering or completing preventive care services of CDSS vs. control groups. In the 25 studies comparing CDSS with control groups, the random-effects–combined odds ratio of adherence to preventive care recommendations was 1.42 (95% CI, 1.27 to 1.58). CDSS = clinical decision-support system.
Twenty-nine studies evaluated the effect of CDSSs on ordering and completing recommended clinical studies (26, 31, 32, 83–109). Many studies were conducted in the academic setting, and most were evaluated in the ambulatory environment. Topics addressed included diagnosis (85, 89–91, 95, 98, 101–103), pharmacotherapy (94, 98), chronic disease management (26, 31, 32, 84, 85, 103), laboratory test ordering (83, 87, 89, 92–94, 96, 97, 99–101, 104–108), initiating discussions with patients (108, 109), and additional clinical tasks (86, 88, 90, 109).
Typical interventions were locally developed, were integrated CDSS recommendations in a CPOE or an EHR system, and automatically delivered system-initiated recommendations to enable decision making during the provider–patient encounter. Eight interventions required a mandatory response (90, 99, 100, 104, 106, 108) or justification (83, 105) for not adhering to the recommendation.
Comparators included usual care or no CDSS, direct comparison against the same CDSS with additional features, or comparison of the same CDSS for different conditions. Limitations of the evidence base included diverse metrics to assess adherence to ordering and completing a recommended action, studies not designed to evaluate the clinical or economic outcomes associated with the CDSS interventions, and limited evidence of the effect of CDSSs on a broad set of conditions.
Meta-analysis of these 20 heterogeneous studies (26, 31, 32, 83, 84, 88, 89, 91, 92, 94–96, 98–103, 105, 108, 109) found a significant effect of CDSSs on ordering or completing of clinical studies (OR, 1.72 [CI, 1.47 to 2.00]) (Figure 2). We rated this level of evidence as moderate. Most studies were good quality, approximately one third were implemented in multiple sites, and almost one fourth included a direct comparison of the effectiveness of the CDSS against the same intervention with additional features. However, we noted a strong suggestion of publication bias in these studies, and these results may not be generalizable to all settings because most studies were either evaluated in environments with a well-established health IT infrastructure or conducted outside of the United States.
Figure 2. Results of studies that examined whether recommended clinical studies were ordered.
Studies reporting the odds ratio of adhering to recommendations for ordering or completing recommended clinical studies of CDSS vs. control groups. In the 20 studies comparing CDSS with control groups, the random-effects–combined odds ratio of adherence to clinical study recommendations was 1.72 (95% CI, 1.47 to 2.00). CDSS = clinical decision-support system.
Sixty-seven studies evaluated the effect of CDSSs on ordering and prescribing therapy (14, 18, 20–22, 25–28, 33, 36, 38–41, 43, 44, 53, 54, 70, 80, 82, 84, 88, 94, 98, 110–154). Many studies were conducted in the academic setting, and most were evaluated in the ambulatory environment. Topics addressed included diagnosis (21, 43, 82, 98, 112, 123, 129, 138, 151, 152), pharmacotherapy (18, 20–22, 25, 28, 33, 40, 53, 54, 70, 80, 94, 98, 110, 111, 114, 116, 117, 119, 122, 123, 125–134, 136, 138–150, 153, 154), laboratory test ordering (70, 80, 94, 110, 130), chronic disease management (14, 20, 22, 26–28, 36, 38, 43, 44, 70, 84, 110, 112, 113, 115, 118, 120, 121, 123–125, 132, 133, 135, 141), preventive care (28, 43, 53, 54, 70, 80, 82, 120, 148), and additional clinical tasks (39, 41, 82, 88, 110, 137, 151, 152).
Typical interventions were locally developed, were integrated into a CPOE or an EHR system, and automatically delivered system-initiated recommendations in real time to enable decision making during the provider–patient encounter. Eighteen CDSSs required a mandatory response (18, 39, 119, 122, 134, 139–143, 147, 148, 151, 152) or justification (112, 113, 136, 137, 145) for not adhering to the recommendation. Limitations included inadequate follow-up periods to observe sustained results, sparse data demonstrating how changes in clinician ordering and prescribing led to improvements in clinical or economic outcomes, and study designs that did not capture the extent to which nonadherence with recommended therapy resulted in adverse events.
Meta-analysis of 46 heterogeneous studies (14, 18, 20–22, 25–28, 33, 36, 38, 40, 43, 44, 53, 54, 70, 82, 84, 94, 98, 110, 112–117, 121–124, 129, 130, 134–136, 141–144, 146, 148, 151–153) showed that intervention providers with decision support were more likely to order the appropriate treatment or therapy (OR, 1.57 [CI, 1.35 to 1.82]) (Figure 3). We rated this level of evidence as high. Most studies were good quality, and most were evaluated in multisite trials. However, generalizability may be limited because most studies were implemented in the ambulatory environment, were evaluated in settings where clinicians were experienced EHR users or provided care in an established health IT infrastructure, and incorporated knowledge that was targeted toward specific conditions.
Figure 3. Results of studies that examined whether recommended treatments were ordered.
Studies reporting the odds ratio of adhering to recommendations for ordering or prescribing treatment of CDSS vs. control groups. In the 46 studies comparing CDSS with control groups, the random-effects–combined odds ratio of adherence to treatment recommendations was 1.57 (95% CI, 1.35 to 1.82). CDSS = clinical decision-support system.
Evidence on the effect of CDSSs on clinician knowledge or confidence in managing patient care was insufficient (71, 72, 86, 155, 156). Seven studies examined the effect of CDSSs on efficiency (23, 24, 40, 106, 141, 155–157). Limitations included contamination of control-group clinicians, whose performance may have improved because of knowledge of the intervention; evaluation periods too brief to demonstrate an effect on efficiency; and small clinician sample sizes.
We rated the level of evidence as low. Most interventions contained locally developed knowledge, such as protocols or algorithms derived on the basis of local performance, quality, and outcome data not representative of other sites, and were evaluated in academic settings.
Twenty-two studies reported costs (21, 26, 27, 31, 32, 36, 40, 43, 53, 54, 71, 83, 90, 106, 109, 113, 118, 130, 141, 158–163). Objectives of the CDSSs included diagnosis (21, 43, 71, 90, 160), pharmacotherapy (21, 40, 53, 54, 130, 141), chronic disease management (26, 27, 31, 32, 36, 43, 113, 118, 141, 161, 162), laboratory test ordering (71, 83, 106, 130, 160, 163), preventive care (43, 53, 54, 71, 158, 159), initiating discussions with patients (109, 159), and additional clinical tasks (90, 109). One study reported reduced hospitalization expenses with CDSS use (31, 32), and 12 studies reported that use had a positive effect on costs compared with control groups and other non-CDSS groups (21, 27, 40, 53, 54, 83, 90, 106, 113, 130, 141, 159, 160).
Modest evidence from academic and community inpatient and ambulatory settings showed that locally and commercially developed CDSSs were associated with lower treatment costs and total costs compared with control groups and other non-CDSS intervention groups. Most studies were conducted in the academic ambulatory setting and evaluated locally developed CDSSs integrated into CPOE or EHR systems that automatically delivered system-initiated recommendations synchronously at the point of care and did not require a mandatory clinician response.
Six studies (53, 54, 56, 57, 78, 95, 96, 161, 162) examined the cost-effectiveness of CDSSs or their effect on cost-effectiveness of care. These demonstrated conflicting findings, with 3 studies suggesting that CDSSs were cost-effective (53, 54, 78, 95) and 3 reporting that CDSSs were not cost-effective (56, 57, 161, 162). Objectives included diagnosis (95), pharmacotherapy (53, 54), chronic disease management (161, 162), preventive care (53, 54, 56, 57), and immunizations (56, 78).
Twenty-four studies assessed provider acceptance of CDSSs (19, 41, 55, 62, 75, 90, 105, 113, 119, 120, 136, 137, 145–147, 149, 158, 159, 164–170). Topics addressed included diagnosis (62, 90, 166), pharmacotherapy (19, 119, 136, 145–147, 149, 164, 165), chronic disease management (19, 113, 120, 168–170), laboratory test ordering (19, 55, 75, 105), preventive care (19, 55, 62, 75, 120, 147, 158, 159, 167), immunizations (19), initiating discussions with patients (159), and additional clinical tasks (41, 90, 137).
Comparators included usual care or no CDSS and direct comparison with the same CDSS with additional features. One half of the studies required a mandatory response (55, 90, 119, 147, 166) or justification (75, 105, 113, 120, 136, 137, 145) for not adhering to the recommendation; however, there was no significant effect on provider acceptance. Limitations included an inconsistent definition of provider acceptance; small sample sizes; and scarce data on clinical outcomes, such as morbidity, length of stay, or adverse events. We rated this level of evidence as low. Most of these studies were fair quality and were evaluated in academic medical settings with established health IT infrastructures and experienced EHR users, which may limit the generalizability of the findings.
Provider satisfaction with CDSSs was examined in 19 studies (14, 23–25, 37, 43, 80, 86, 105, 109, 112, 119, 127, 128, 141, 151–153, 155, 156, 158, 165). Topics addressed included diagnosis (43, 112, 151, 152), pharmacotherapy (25, 80, 119, 127, 128, 141, 153, 165), chronic disease management (14, 43, 112, 141), laboratory test ordering (37, 80, 105), preventive care (37, 43, 80), and initiating discussions with patients (109).
Comparators included usual care or no CDSS and direct comparison with the same CDSS with additional features. Seven CDSSs required a mandatory response (37, 119, 141, 151, 152, 155) or justification (105, 112) for not adhering to the recommendation. Limitations included the narrow assessment of the role of provider satisfaction with CDSSs on patient-specific outcomes and small sample sizes of clinicians.
Twelve studies demonstrated provider satisfaction with CDSSs (25, 37, 80, 109, 112, 119, 127, 128, 141, 155, 156, 158, 165); 4 showed a significant effect on satisfaction among intervention providers compared with control providers (109, 112, 155, 165). Provider dissatisfaction with CDSSs was also reported in 6 studies (14, 43, 86, 105, 151–153). We rated this level of evidence as moderate. Most studies were good quality and evaluated CDSSs integrated into CPOE or EHR systems in multiple institutions outside of environments with an established and robust health IT infrastructure. However, most CDSSs were locally developed and implemented in the ambulatory setting.
Seventeen studies examined provider use of CDSSs by using such metrics as the number of times the CDSS was accessed by the clinician or provided a recommendation to the clinician (51, 71, 80, 86, 107, 110, 117, 119, 123, 138, 142, 145, 156, 165, 168–172). Objectives included diagnosis (71, 123, 138), pharmacotherapy (80, 110, 117, 119, 123, 138, 142, 145, 165), chronic disease management (51, 110, 123, 168–172), laboratory test ordering (71, 80, 107, 110), preventive care (71, 80), and additional clinical tasks (86, 110, 156).
Comparators included usual care or no CDSS, direct comparison with the same CDSS with additional features, or comparison of the same CDSS for different conditions. Limitations included sparse data demonstrating how provider use translated into more appropriate patient care and small sample sizes of clinicians. We rated this level of evidence as low.
Among the 12 studies (80, 107, 117, 119, 123, 138, 145, 168–172) that provided statistical data about provider use, 8 (80, 110, 119, 123, 145, 165, 168–170) documented low use: the CDSS was accessed in less than 50% of the clinician's time or of patient visits, or fewer than 50% of clinicians used the CDSS or received alerts to guide therapeutic action. Most of these studies were fair quality and evaluated locally developed interventions in multiple community and ambulatory settings. Additional results are available from the technical report at www.effectivehealthcare.ahrq.gov.
Our systematic review investigated the continuum of information support for clinical care, including traditional CDSSs, as well as information retrieval systems and knowledge resources developed for access at the point of care. Studies were primarily conducted outside of institutions with an established health IT infrastructure. Most interventions targeted specific medical conditions and were evaluated in single settings.
Clinical decision support had a favorable effect on prescribing treatments, facilitating preventive care services, and ordering clinical studies across diverse venues and systems. This finding contrasts with that of another review (2), which showed that most reports of successful CDSS implementation were based on locally developed systems at 4 sites.
Evidence demonstrating positive effects of CDSSs on clinical and economic outcomes remains surprisingly sparse, although this could be because of the relative difficulty of implementing randomized, controlled trials in real clinical settings, as well as the logistics of measuring the direct clinical effect of CDSSs. Evidence was also limited in showing an effect of CDSSs on clinical workload and efficiency. Furthermore, available evidence is insufficient to draw conclusions about the potential negative effect of implementing decision-support tools, which is necessary to truly fulfill the goal of evaluating these interventions and to better address implementation challenges (173).
Our findings are important in light of the increasing political interest and financial investment of the U.S. government in resources for health IT. Decisions about the meaningful use of CDSSs need to be objectively informed by evidence about the role that these systems can and should play in reshaping health care delivery. Further understanding is increasingly important to optimally define their role in the context of meaningful use of EHRs.
Our systematic review has several limitations. The heterogeneity of the studies limited general observations about CDSSs. We minimized this limitation in our meta-analyses by including studies that assessed the same outcome in the same manner. Although this investigation was a comprehensive review of randomized, controlled trials that provided the best evidence on CDSS effectiveness, these studies may provide less information about issues related to CDSS implementation, effect on workflow, and factors affecting usability.
Most studies (76%) evaluated the effectiveness of the intervention against usual care rather than a direct comparator, which may contribute to more positive results. Finally, we acknowledge the possibility of selective reporting or publication bias. However, a recent review by Buntin and colleagues (173) found that 62% of included studies on health IT reported positive effects, in which the technology was associated with improvement in 1 or more aspects of care. Formal assessment using funnel plots found no consistent bias for most outcomes, except for ordering or completing of clinical studies, for which there was a strong suggestion of publication bias.
Significant research is still required to promote widespread use of CDSSs and to augment their clinical effectiveness. Future studies should investigate how to expand CDSS content to accommodate multiple comorbid conditions simultaneously and to determine which members of the care team should receive clinical decision support, what effect CDSSs have on clinical and economic outcomes, and how CDSSs can be most effectively integrated into workflow and deployed across diverse settings. Further work is also needed to understand how CDSSs can aid in the transformation of care delivery models, such as accountable care organizations and patient-centered medical homes; how to incorporate CDSSs into workflow tools, such as medical registries and provider–provider messaging capabilities; and how to integrate CDSSs with workflow-oriented quality improvement programs.
Promoting extensive use of CDSSs will require a better definition of the clinical decision-support infrastructure. Such infrastructure could include consistent underlying frameworks for describing CDSSs, such as the “Clinical Decision Support Five Rights” (174), to aid in the aggregation and synthesis of results; development and evaluation of models for porting CDSSs across settings; and improved identification of characteristics of the environment and workflow into which a CDSS is deployed, as well as characteristics of the intended users.
In summary, evidence demonstrated the efficacy of CDSSs on health care process outcomes across diverse settings by using both commercially and locally developed systems, but data showing an effect on clinical and economic outcomes were sparse. Broad penetration of clinical decision-support tools will require aggressively seeking a better understanding of what the right information is and when and how it should be delivered to the right person, and a critical examination of the unintended consequences of CDSS implementation.
This article was published at www.annals.org on 24 April 2012.
Edward Hoffer, MD, FACP
July 9, 2012
To the Editor
The article by Bright et al on clinical decision-support systems (CDSSs) (1) was timely and interesting, but it had an important omission: no mention was made of CDSSs that focus on helping physicians make the correct diagnosis. Such systems have been broadly available for some 25 years, including QMR/Internist-I, ILIAD, and DXplain. Currently, at least two general-purpose diagnostic decision-support systems, DXplain and ISABEL, are widely available. Reviews (2, 3) have shown that these systems suggest important diseases the clinician had not previously considered, and a study (4) from the Mayo Clinic found that use of DXplain lowered length of stay and hospital costs.
1. Bright TJ, et al. "Effect of Clinical Decision-Support Systems." Ann Intern Med. 2012;157:29-43.
2. Berner ES, et al. "Performance of Four Computer-Based Diagnostic Systems." N Engl J Med. 1994;330:1792-6.
3. Bond WF, et al. "Differential Diagnosis Generators: an Evaluation of Currently Available Computer Programs." J Gen Intern Med. 2011;27:213-9.
4. Elkin PL, et al. "The Introduction of a Diagnostic Decision Support System (DXplain) into the Workflow of a Teaching Hospital Service can Decrease the Cost of Service for Diagnostically Challenging DRGs." Int J Med Inform. 2010;79:772-
Copyright © 2016 American College of Physicians. All Rights Reserved.
Print ISSN: 0003-4819 | Online ISSN: 1539-3704