Basit Chaudhry, MD; Jerome Wang, MD; Shinyi Wu, PhD; Margaret Maglione, MPP; Walter Mojica, MD; Elizabeth Roth, MA; Sally C. Morton, PhD; Paul G. Shekelle, MD, PhD
Disclaimer: The authors of this article are responsible for its contents. No statement in this article should be construed as an official position of the Agency for Healthcare Research and Quality. Statements made in this publication do not represent the official policy or endorsement of the Agency or the U.S. government.
Acknowledgments: The authors thank the Veterans Affairs/University of California, Los Angeles, Robert Wood Johnson Clinical Scholars Program, the University of California, Los Angeles, Division of General Internal Medicine and Health Services Research, and RAND for their support during this research. They also thank Drs. Robert Brook, Kenneth Wells, and Kavita Patel for their review of the manuscript.
Grant Support: This work was produced under Agency for Healthcare Research and Quality contract no. 2002. In addition to the Agency for Healthcare Research and Quality, this work was also funded by the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, and the Office of Disease Prevention and Health Promotion, U.S. Department of Health and Human Services.
Potential Financial Conflicts of Interest: None disclosed.
Requests for Single Reprints: Basit Chaudhry, MD, Division of General Internal Medicine, University of California, Los Angeles, 911 Broxton Avenue, 2nd Floor, Los Angeles, CA 90095; e-mail, BChaudhry@mednet.ucla.edu.
Current Author Addresses: Dr. Chaudhry: Division of General Internal Medicine, University of California, Los Angeles, 911 Broxton Avenue, 2nd Floor, Los Angeles, CA 90095.
Dr. Wang: Cedars-Sinai Health System, 8700 Beverly Boulevard, Los Angeles, CA 90048.
Drs. Wu, Mojica, and Shekelle, Ms. Maglione, and Ms. Roth: RAND Corporation, 1776 Main Street, Santa Monica, CA 90401.
Dr. Morton: RTI International, 3040 Cornwallis Road, Research Triangle Park, NC 27709.
Chaudhry B., Wang J., Wu S., Maglione M., Mojica W., Roth E., Morton S., Shekelle P.; Systematic Review: Impact of Health Information Technology on Quality, Efficiency, and Costs of Medical Care. Ann Intern Med. 2006;144:742-752. doi: 10.7326/0003-4819-144-10-200605160-00125
Health information technology has been shown to improve quality by increasing adherence to guidelines, enhancing disease surveillance, and decreasing medication errors.
Much of the evidence on quality improvement relates to primary and secondary preventive care.
The major efficiency benefit has been decreased utilization of care.
Effect on time utilization is mixed.
Empirically measured cost data are limited and inconclusive.
Most of the high-quality literature regarding multifunctional health information technology systems comes from 4 benchmark research institutions.
Little evidence is available on the effect of multifunctional commercially developed systems.
Little evidence is available on interoperability and consumer health information technology.
A major limitation of the literature is its limited generalizability.
Health care experts, policymakers, payers, and consumers consider health information technologies, such as electronic health records and computerized provider order entry, to be critical to transforming the health care industry (1-7). Information management is fundamental to health care delivery (8). Given the fragmented nature of health care, the large volume of transactions in the system, the need to integrate new scientific evidence into practice, and other complex information management activities, the limitations of paper-based information management are intuitively apparent. While the benefits of health information technology are clear in theory, adapting new information systems to health care has proven difficult and rates of use have been limited (9-11). Most information technology applications have centered on administrative and financial transactions rather than on delivering clinical care (12).
The Agency for Healthcare Research and Quality asked us to systematically review evidence on the costs and benefits associated with use of health information technology and to identify gaps in the literature in order to provide organizations, policymakers, clinicians, and consumers an understanding of the effect of health information technology on clinical care (see evidence report at http://www.ahrq.gov). From among the many possible benefits and costs of implementing health information technology, we focus here on 3 important domains: the effects of health information technology on quality, efficiency, and costs.
We used expert opinion and literature review to develop analytic frameworks (Table) that describe the components involved with implementing health information technology, types of health information technology systems, and the functional capabilities of a comprehensive health information technology system (13). We modified a framework for clinical benefits from the Institute of Medicine's 6 aims for care (2) and developed a framework for costs using expert consensus that included measures such as initial costs, ongoing operational and maintenance costs, fraction of health information technology penetration, and productivity gains. Financial benefits were divided into monetized benefits (that is, benefits expressed in dollar terms) and nonmonetized benefits (that is, benefits that could not be directly expressed in dollar terms but could be assigned dollar values).
We performed 2 searches (in November 2003 and January 2004) of the English-language literature indexed in MEDLINE (1995 to January 2004) using a broad set of terms to maximize sensitivity. (See the full list of search terms and sequence of queries in the full evidence report at http://www.ahrq.gov.) We also searched the Cochrane Central Register of Controlled Trials, the Cochrane Database of Abstracts of Reviews of Effects, and the Periodical Abstracts Database; hand-searched personal libraries kept by content experts and project staff; and mined bibliographies of articles and systematic reviews for citations. We asked content experts to identify unpublished literature. Finally, we asked content experts and peer reviewers to identify newly published articles up to April 2005.
Two reviewers independently selected for detailed review the following types of articles that addressed the workings or implementation of a health technology system: systematic reviews, including meta-analyses; descriptive “qualitative” reports that focused on exploration of barriers; and quantitative reports. We classified quantitative reports as “hypothesis-testing” if the investigators compared data between groups or across time periods and used statistical tests to assess differences. We further categorized hypothesis-testing studies (for example, randomized and nonrandomized controlled trials and controlled before-and-after studies) according to whether a concurrent comparison group was used. Hypothesis-testing studies without a concurrent comparison group included those using simple pre–post, time-series, and historical control designs. Remaining hypothesis-testing studies were classified as cross-sectional or other designs. We classified quantitative reports as a “predictive analysis” if they used methods such as statistical modeling or expert panel estimates to predict what might happen with implementation of health information technology rather than what has happened. These studies typically used hybrid methods—frequently mixing primary data collection with secondary data collection plus expert opinion and assumptions—to make quantitative estimates for data that had otherwise not been empirically measured. Cost-effectiveness and cost-benefit studies generally fell into this group.
Two reviewers independently appraised and extracted details of selected articles using standardized abstraction forms and resolved discrepancies by consensus. We then used narrative synthesis methods to integrate findings into descriptive summaries. Each institution that accounted for more than 5% of the total sample of 257 papers was designated as a benchmark research leader. We grouped syntheses by institution and by whether the systems were commercially or internally developed.
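The benchmark-designation rule described above reduces to a simple threshold: an institution qualifies if it contributed more than 5% of the 257 included papers (that is, 13 papers or more, since 0.05 × 257 ≈ 12.85). The sketch below illustrates that calculation; the institution names and paper counts are illustrative assumptions, not figures from the review.

```python
# Illustrative sketch of the benchmark-designation rule: an institution is a
# "benchmark research leader" if it accounts for more than 5% of the 257
# included papers. Paper counts here are hypothetical examples only.
TOTAL_PAPERS = 257
THRESHOLD = 0.05  # more than 5% of the total sample

paper_counts = {
    "Regenstrief Institute": 20,          # hypothetical count
    "Brigham and Women's/Partners": 18,   # hypothetical count
    "Other institution": 5,               # hypothetical count
}

benchmarks = [
    institution
    for institution, n in paper_counts.items()
    if n / TOTAL_PAPERS > THRESHOLD
]
```

Under these assumed counts, the first two institutions clear the ~13-paper threshold and the third does not.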
This work was produced under Agency for Healthcare Research and Quality contract no. 2002. In addition to the Agency for Healthcare Research and Quality, this work was also funded by the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, and the Office of Disease Prevention and Health Promotion, U.S. Department of Health and Human Services. The funding sources had no role in the design, analysis, or interpretation of the study or in the decision to submit the manuscript for publication.
Of 867 articles, we rejected 141 during initial screening: 124 for not having health information technology as the subject, 4 for not reporting relevant outcomes, and 13 for miscellaneous reasons (categories not mutually exclusive). Of the remaining 726 articles, we excluded 469 descriptive reports that did not examine barriers (Figure). We recorded details of and summarized each of the 257 articles that we did include in an interactive database (http://healthit.ahrq.gov/tools/rand) that serves as the evidence table for our report (14). Twenty-four percent of all studies came from the following 4 benchmark institutions: 1) the Regenstrief Institute, 2) Brigham and Women's Hospital/Partners Health Care, 3) the Department of Veterans Affairs, and 4) LDS Hospital/Intermountain Health Care.
*Includes 1 descriptive or quantitative article with costs outcomes.
The reports addressed the following types of primary systems: decision support aimed at providers (63%), electronic health records (37%), and computerized provider order entry (13%). Specific functional capabilities of systems that were described in reports included electronic documentation (31%), order entry (22%), results management (19%), and administrative capabilities (18%). Only 8% of the described systems had specific consumer health capabilities, and only 1% had capabilities that allowed systems from different facilities to connect with each other and share data interoperably. Most studies (n = 125) assessed the effect of the systems in the outpatient setting. Of the 213 hypothesis-testing studies, 84 contained some data on costs.
Several studies assessed interventions with limited functionality, such as stand-alone decision support systems (15-17). Such studies provide limited information about issues that today's decision makers face when selecting and implementing health information technology. Thus, we preferentially highlight in the following paragraphs studies that were conducted in the United States, that had empirically measured data on multifunctional systems, and that included health information and data storage in the form of electronic documentation or order-entry capabilities. Predictive analyses were excluded. Seventy-six studies met these criteria: 54 from the 4 benchmark leaders and 22 from other institutions.
The health information technology systems evaluated by the benchmark leaders shared many characteristics. All the systems were multifunctional and included decision support, all were internally developed by research experts at the respective academic institutions, and all had capabilities added incrementally over several years. Furthermore, most reported studies of these systems used research designs with high internal validity (for example, randomized, controlled trials).
Appendix Table 1(18-71) provides a structured summary of each study from the 4 benchmark institutions. This table also includes studies that met inclusion criteria not highlighted in this synthesis (26, 27, 30, 39, 40, 53, 62, 65, 70, 71). The data supported 5 primary themes (3 directly related to quality and 2 addressing efficiency). Implementation of a multifunctional health information technology system had the following effects: 1) increased delivery of care in adherence to guidelines and protocols, 2) enhanced capacity to perform surveillance and monitoring for disease conditions and care delivery, 3) reductions in rates of medication errors, 4) decreased utilization of care, and 5) mixed effects on time utilization.
The major effect of health information technology on quality of care was its role in increasing adherence to guideline- or protocol-based care. Decision support, usually in the form of computerized reminders, was a component of all adherence studies. The decision support functions were usually embedded in electronic health records or computerized provider order-entry systems. Electronic health records systems were more frequently examined in the outpatient setting; provider order-entry systems were more often assessed in the inpatient setting. Improvements in processes of care delivery ranged from absolute increases of 5 to 66 percentage points, with most increases clustering in the range of 12 to 20 percentage points.
Twelve of the 20 adherence studies examined the effects of health information technology on enhancing preventive health care delivery (18, 21-25, 29, 31-33, 35, 37). Eight studies included measures for primary preventive care (18, 21-25, 31, 33), 4 studies included secondary preventive measures (29, 33, 35, 37), and 1 study assessed screening (not mutually exclusive) (32). The most common primary preventive measures examined were rates of influenza vaccination (improvement, 12 to 18 percentage points), pneumococcal vaccinations (improvement, 20 to 33 percentage points), and fecal occult blood testing (improvement, 12 to 33 percentage points) (18, 22, 24).
Three studies examined the effect of health information technology on secondary preventive care for complications related to hospitalization. One controlled clinical trial that used computerized surveillance and identification of high-risk patients plus alerts to physicians demonstrated a 3.3–percentage point absolute decrease (from 8.2% to 4.9%) in a combined primary end point of deep venous thrombosis and pulmonary embolism in high-risk hospitalized patients (29). One time-series study showed a 5–percentage point absolute decrease in rates of pressure ulcers among hospitalized patients (35), and another showed a 0.4–percentage point absolute decrease in postoperative infections (37).
While most evidence for health information technology–related quality improvement through enhanced adherence to guidelines focused on preventive care, other studies covered a diverse range of types of care, including hypertension treatment (34), laboratory testing for hospitalized patients, and use of advance directives (see Appendix Table 1 for the numeric effects) (19).
The second theme showed the capacity of health information technology to improve quality of care through clinical monitoring based on large-scale screening and aggregation of data. These studies demonstrated how health information technology can support new ways of delivering care that are not feasible with paper-based information management. In one study, investigators screened more than 90 000 hospital admissions to identify the frequency of adverse drug events (43); they found a rate of 2.4 events/100 admissions. Adverse drug events were associated with an absolute increase in crude mortality of 2.45 percentage points and an increase in costs of $2262, primarily due to a 1.9-day increase in length of stay. Two studies from Evans and colleagues (44, 45) reported using an electronic health record to identify adverse drug events, examine their cause, and develop programs to decrease their frequency. In the first study, the researchers designed interventions on the basis of electronic health record surveillance that increased absolute adverse drug event identification by 2.36 percentage points (from 0.04% to 2.4%) and decreased absolute adverse drug event rates by 5.4 percentage points (from 7.6% to 2.2%) (44). The report did not describe details of the interventions used to reduce adverse drug events. In the second study, the researchers used electronic health record surveillance of nearly 61 000 inpatient admissions to determine that adverse drug events cause a 1.9-day increase in length of hospital stay and an increase of $1939 in charges (45).
Three studies from the Veterans Affairs system examined the surveillance and data aggregation capacity of health information technology systems for facilitating quality-of-care measurement. Automated quality measurement was found to be less labor intensive, but 2 of the studies found important methodologic limitations that affected the validity of automated quality measurement. For example, 1 study found high rates of false-positive results with use of automated quality measurement and indicated that such approaches may yield biased results (41). The second study found that automated queries from computerized disease registries underestimated completion of quality-of-care processes when compared with manual chart abstraction of electronic health records and paper chart sources (42).
Finally, 2 studies examined the role of health information technology surveillance systems in identifying infectious disease outbreaks. The first study found that use of a county-based electronic system for reporting results led to a 29–percentage point absolute increase in cases of shigellosis identified during an outbreak and a 2.5-day decrease in identification and public health reporting time (38). The second study showed a 14–percentage point absolute increase in identification of hospital-acquired infections and a 65% relative decrease in identification time (from 130 to 46 hours) (46).
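The studies above report effects in two distinct ways: absolute change (a difference in raw units or percentage points) and relative change (the difference expressed as a fraction of the baseline). The sketch below shows both calculations, reproducing the hospital-acquired-infection timing example (130 hours to 46 hours, a roughly 65% relative decrease); the helper function names are ours, not from any study.

```python
# Sketch of the two effect measures used throughout this review.
# Function names are illustrative; the numbers reproduce the example above.

def absolute_decrease(before, after):
    """Difference in raw units (or percentage points when inputs are percentages)."""
    return before - after

def relative_decrease(before, after):
    """Difference expressed as a fraction of the baseline value."""
    return (before - after) / before

hours_saved = absolute_decrease(130, 46)        # 84 hours
fraction_saved = relative_decrease(130, 46)     # ~0.65, i.e., a ~65% relative decrease
```

Keeping the two measures distinct matters when comparing studies: a 0.6–percentage point absolute decrease (0.9% to 0.3%) is simultaneously a 67% relative decrease.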
The third health information technology–mediated effect on quality was a reduction in medication errors. Two studies of computerized provider order entry from LDS Hospital (51, 52) showed statistically significant decreases in adverse drug events, and a third study by Bates and colleagues (49) showed a non–statistically significant trend toward decreased drug events and a large decrease in medication errors. The first LDS Hospital study used a cohort design with historical controls to evaluate the effect of computerized alerts on antibiotic use (52). Compared with a 2-year preintervention period, many statistically significant improvements were noted, including a decrease in antibiotic-associated adverse drug events (from 28 to 4 events), decreased length of stay (from 13 to 10 days), and a reduction in total hospital costs (from $35 283 to $26 315). The second study from LDS Hospital demonstrated a 0.6–percentage point (from 0.9% to 0.3%) absolute decrease in antibiotic-associated adverse drug events (51).
Bates and colleagues examined adverse events and showed a 17% non–statistically significant trend toward a decrease in these events (49). Although this outcome did not reach statistical significance, adverse drug events were not the main focus of the evaluation. The primary end point for this study was a surrogate end point for adverse drug events: nonintercepted serious medication errors. This end point demonstrated a statistically significant 55% relative decrease. The results from this trial were further supported by a second, follow-up study by the same researchers examining the long-term effect of the implemented system (48). After the first published study, the research team analyzed adverse drug events not prevented by computerized provider order entry and increased the level of decision support. This second study used a time-series design and found an 86% relative decrease in nonintercepted serious medication errors.
Health information technology systems also decreased medication errors by improving medication dosing. Improvements in dosing ranged from 12% to 21%; the primary outcome examined was doses prescribed within the recommended range and centered on antibiotics and anticoagulation (47, 50, 51).
Studies examined 2 primary types of technology-related effects on efficiency: utilization of care and provider time. Eleven studies examined the effect of health information technology systems on utilization of care. Eight showed decreased rates of health services utilization (54-61); computerized provider order-entry systems that provided decision support at the point of care were the primary interventions leading to decreased utilization. Types of decision support included automated calculation of pretest probability for diagnostic tests, display of previous test results, display of laboratory test costs, and computerized reminders. Absolute decreases in utilization rates ranged from 8.5 to 24 percentage points. The primary services affected were laboratory and radiology testing. Most studies did not judge the appropriateness of the decrease in service utilization but instead reported the effect of health information technology on the level of utilization. Most studies did not directly measure cost savings. Instead, researchers translated nonmonetized decreases in services into monetized estimates through the average cost of the examined service at that institution. One large study from Tierney and colleagues examined direct total costs per admission as its main end point and found a 12.7% relative decrease (from $6964 to $6077) in costs associated with a 0.9-day decrease in length of stay (57).
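The monetization approach described above amounts to multiplying the number of avoided service units by the institution's average cost per unit. A minimal sketch follows; the test counts and unit cost are illustrative assumptions, not figures from any of the cited studies.

```python
# Sketch of the monetization method described in the text: nonmonetized
# decreases in service utilization are translated into dollar estimates
# using the average cost of the service at that institution.
# All numbers below are illustrative assumptions.

def monetized_savings(baseline_units, post_units, avg_cost_per_unit):
    """Estimated savings from a decrease in service utilization."""
    avoided_units = baseline_units - post_units
    return avoided_units * avg_cost_per_unit

# Example: 1000 laboratory tests before implementation, 850 after,
# at an assumed average cost of $25 per test.
savings = monetized_savings(1000, 850, 25.0)  # 150 avoided tests
```

Note the limitation the review flags: such estimates reflect the level of utilization, not whether the avoided services were appropriate, and average cost is a rough proxy for true marginal savings.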
The effect of health information technology on provider time was mixed. Two studies from the Regenstrief Institute examining inpatient order entry showed increases in physician time related to computer use (57, 64). Another study on outpatient use of electronic health records from Partners Health Care showed a clinically negligible increase in clinic visit time of 0.5 minute (67). Studies suggested that time requirements decreased as physicians grew used to the systems, but formal long-term evaluations were not available. Two studies showed slight decreases in documentation-related nursing time (68, 69) that were due to the streamlining of workflow. One study examined overall time to delivery of care and found an 11% decrease in time to deliver treatment through the use of computerized order entry with alerts to physician pagers (66).
Data on costs were more limited than the evidence on quality and efficiency. Sixteen of the 54 studies contained some data on costs (20, 28, 31, 36, 43, 47, 50-52, 54-58, 63, 71). Most of the cost data available from the institutional leaders were related to changes in utilization of services due to health information technology. Only 3 studies had cost data on aspects of system implementation or maintenance. Two studies provided computer storage costs; these were more than 20 years old, however, and therefore were of limited relevance (28, 58). The third reported that system maintenance costs were $700 000 (31). Because these systems were built, implemented, and evaluated incrementally over time, and in some cases were supported by research grants, it is unlikely that total development and implementation costs could be calculated accurately and in full detail.
Appendix Table 2 summarizes the 22 studies (72-93) from the other institutions. Most of these studies evaluated internally developed systems in academic institutions. The types of benefits found in these studies were similar to those demonstrated in benchmark institutions, although an additional theme was related to initial implementation costs. Unlike most studies from the benchmark institutions, which used randomized or controlled clinical trial designs, the most common designs of the studies from other institutions were pre–post and time-series designs that lacked a concurrent comparison group. Thirteen of the 22 studies evaluated internally developed systems (72-84). Only 9 evaluated commercial health information technology systems. Because many decision makers are likely to consider implementing a commercially developed system rather than internally developing their own, we detail these 9 studies in the following paragraphs.
Two studies examined the effect of systems on utilization of care (85, 86). Both were set in Kaiser Permanente's Pacific Northwest region and evaluated the same electronic health record system (Epic Systems Corp., Verona, Wisconsin) during different periods using time-series designs. One study (1994–1997) supported the findings of the benchmark institutions, showing decreased utilization of 2 radiology tests after implementation of electronic health records (85), while the second study (2000–2004) showed no conclusive decreases in utilization of radiology and laboratory services (86). Unlike the reports from the benchmark institutions, this second study also showed no statistically significant improvements in 3 process measures of quality. It did find a statistically significant decrease in age-adjusted total office visits per member: a relative decrease of 9% in year 2 after implementation of the electronic health record. Telephone-based care showed a relative increase of 65% over the same time. A third study evaluated this electronic health record and focused on efficiency; it showed that physicians took 30 days to return to their baseline level of productivity after implementation and that visit time increased on average by 2 minutes per encounter (87).
Two studies that were part of the same randomized trial from Rollman and colleagues, set at the University of Pittsburgh, examined the use of an electronic health record (MedicaLogic Corp., Beaverton, Oregon) with decision support in improving care for depression (88, 89). The first study evaluated electronic health record–based monitoring to enhance depression screening. As in the monitoring studies from the benchmark institutions, electronic health record screening was found to support new ways of organizing care. Physicians agreed with 65% of the computer-screened diagnoses 3 days after receiving notification of the results. In the second phase of the trial, 2 different electronic health record–based decision support interventions were implemented to improve adherence to guideline-based care for depression. Unlike the effects on adherence seen in the benchmark institutions, neither intervention showed statistically significant differences when compared with usual care.
Two pre–post studies from Ohio State University evaluated the effect of a commercial computerized order-entry system (Siemens Medical Solutions Health Services Corp., Malvern, Pennsylvania) on time utilization and medication errors (90, 91). As in the benchmark institutions, time to care dramatically decreased compared with the period before the order-entry system was implemented. Relative decreases in time-related outcomes were as follows: medication turnaround time, 64% (90) and 73% (91); radiology completion time, 43% (90) and 24% (91); and results reporting time, 25% (90). Use of computerized provider order entry had large effects on medication errors in both studies. Before implementation, 11.3% (90) and 13% (91) of orders had transcription errors; afterward, these errors were entirely eliminated. One study assessed length of stay and found that it decreased 5%; total cost of hospitalization, however, showed no statistically significant differences (90). In contrast, a third study examining the effect of order entry on nurse documentation time showed no benefits (92).
In contrast to all previous studies on computer order-entry systems, a study by Koppel and colleagues used a mixed quantitative-qualitative approach to investigate the possible role of such a system (Eclipsys Corp., Boca Raton, Florida) in facilitating medication prescribing errors (93). Twenty-two types of medication error risks were found to be facilitated by computer order entry, relating to 2 basic causes: fragmentation of data and flaws in the human–machine interface.
These 9 studies infrequently reported or measured data on costs and contextual factors. Two reported information on costs (90, 92). Neither described the total initial costs of purchasing or implementing the system being evaluated. Data on contextual factors such as reimbursement mix, degree of capitation, and barriers encountered during implementation were scant; only 2 studies included such information. The study by Koppel and colleagues (93) included detailed contextual information related to human factors. One health record study reported physician classroom training time of 16 hours before implementation (87). Another order-entry study reported that nurses received 16 hours of training, clerical staff received 8 hours, and physicians received 2 to 4 hours (91).
To date, the health information technology literature has shown many important quality- and efficiency-related benefits as well as limitations relating to generalizability and empirical data on costs. Studies from 4 benchmark leaders demonstrate that implementing a multifunctional system can yield real benefits in terms of increased delivery of care based on guidelines (particularly in the domain of preventive health), enhanced monitoring and surveillance activities, reduction of medication errors, and decreased rates of utilization for potentially redundant or inappropriate care. However, the method used by the benchmark leaders to get to this point—the incremental development over many years of an internally designed system led by academic research champions—is unlikely to be an option for most institutions contemplating implementation of health information technology.
Studies from these 4 benchmark institutions have demonstrated the efficacy of health information technology for improving quality and efficiency. However, the effectiveness of these technologies in the practice settings where most health care is delivered remains less clear. Effectiveness and generalizability are of particular importance in this field because health information technologies are tools that support the delivery of care—they do not, in and of themselves, alter states of disease or of health. As such, how these tools are used and the context in which they are implemented are critical (94-96).
For providers considering a commercially available system installed as a package, only a limited body of literature is available to inform decision making. The available evidence comes mainly from time-series or pre–post studies, derives from a staff-model managed care organization or academic health centers, and concerns a limited number of process measures. These data, in general, support the findings of studies from the benchmark institutions on the effect of health information technology in reducing utilization and medication errors. However, they do not support the findings of increased adherence to protocol-based care. Published evidence on the information needed to make informed decisions about acquiring and implementing health information technology in community settings is nearly nonexistent. For example, potentially important evidence related to initial capital costs, effect on provider productivity, resources required for staff training (such as time and skills), and workflow redesign is difficult to locate in the peer-reviewed literature. Also lacking are key data on financial context, such as degree of capitation, which has been suggested by a model to be an important factor in defining the business case for electronic health record use (97).
Several systematic reviews related to health information technology have been done. However, they have been limited to specific systems, such as computerized provider order entry (98); capabilities, such as computerized reminders (99, 100); or clinical specialty (101). No study to date has reviewed a broad range of health information technologies. In addition, to make our findings as relevant as possible to the broad range of stakeholders interested in health information technology, we developed a Web-hosted database of our research findings. This database allows different stakeholders to find the literature most relevant to their implementation circumstances and their information needs.
This study has several important limitations. The first relates to the quantity and scope of the literature. Although we did a comprehensive search, we identified only a limited set of articles with quantitative data. In many important domains, we found few studies. This was particularly true of health information technology applications relevant to consumers and to interoperability, areas critical to the capacity for health information technology to fundamentally change health care. A second limitation relates to synthesizing the effect of a broad range of technologies. We attempted to address this limitation by basing our work on well-defined analytic frameworks and by identifying not only the systems used but also their functional capabilities. A third relates to the heterogeneity in reporting. Descriptions of health information technology systems were often very limited, making it difficult to assess whether some system capabilities were absent or simply not reported. Similarly, limited information was reported on the overall implementation process and organizational context.
This review raises many questions central to a broad range of stakeholders in health care, including providers, consumers, policymakers, technology experts, and private sector vendors. Adoption of health information technology has become one of the few widely supported, bipartisan initiatives in the fragmented, often contentious health care sector (102). Currently, numerous pieces of state and federal legislation under consideration seek to expand adoption of health information technology (103-105). Health care improvement organizations such as the Leapfrog Group are strongly advocating adoption of health information technology as a key aspect of health care reform. Policy discussions are addressing whether physician reimbursement should be altered, with higher reimbursements for those who use health information technology (106). Two critical questions that remain are 1) what will be the benefits of these initiatives and 2) who will pay and who will benefit?
Regarding the former, a disproportionate amount of the literature on realized benefits comes from a small set of early-adopter institutions that implemented internally developed health information technology systems. These institutions had considerable expertise in health information technology and implemented their systems over long periods in a gradual, iterative fashion. Missing from this literature are data on how to implement multifunctional health information technology systems in other health care settings. Internally developed systems are unlikely to be feasible as models for broad-scale use of health information technology. Most practices and organizations will adopt a commercially developed system, and, given logistic constraints and budgetary issues, their implementation cycles will be much shorter. The limited quantitative and qualitative description of the implementation context significantly limits the extent to which the literature on health information technology can inform decision making by the broad array of stakeholders interested in this field.
With respect to the business case for health information technology, we found little information that could empower stakeholders to judge for themselves the financial effects of adoption. For instance, basic cost data needed to determine the total cost of ownership of a system or of the return on investment are not available. Without these data, the costs of health information technology systems can be estimated only through complex predictive analysis and statistical modeling methods, techniques generally not available outside of research. One of the chief barriers to adoption of health information technology is the misalignment of incentives for its use (107, 108). Specifying policies to address this barrier is hindered by the lack of cost data.
This review suggests several important future directions in the field. First, additional studies need to evaluate commercially developed systems in community settings, and additional funding for such work may be needed. Second, more information is needed regarding the organizational change, workflow redesign, human factors, and project management issues involved with realizing benefits from health information technology. Third, a high priority must be the development of uniform standards for the reporting of research on implementation of health information technology, similar to the Consolidated Standards of Reporting Trials (CONSORT) statement for randomized, controlled trials and the Quality of Reporting of Meta-analyses (QUOROM) statement for meta-analyses (109, 110). Finally, additional work is needed on interoperability and consumer health technologies, such as the personal health record.
The advantages of health information technology over paper records are readily discernible. However, without better information, stakeholders interested in promoting or considering adoption may not be able to determine what benefits to expect from health information technology use, how best to implement the system in order to maximize the value derived from their investment, or how to direct policy aimed at improving the quality and efficiency delivered by the health care sector as a whole.
Copyright © 2016 American College of Physicians. All Rights Reserved.
Print ISSN: 0003-4819 | Online ISSN: 1539-3704