Kathryn M. McDonald, MM; Brian Matesic, BS; Despina G. Contopoulos-Ioannidis, MD; Julia Lonhart, BS, BA; Eric Schmidt, BA; Noelle Pineda, BA; John P.A. Ioannidis, MD, DSc
Note: The AHRQ reviewed contract deliverables to ensure adherence to contract requirements and quality, and a copyright release was obtained from the AHRQ before the manuscript was submitted for publication.
Disclaimer: All statements expressed in this work are those of the authors and should not in any way be construed as official opinions or positions of Stanford University, the AHRQ, or the U.S. Department of Health and Human Services.
Financial Support: From the AHRQ, U.S. Department of Health and Human Services (contract HHSA-290-2007-100621).
Potential Conflicts of Interest: Ms. McDonald: Grant (money to institution): AHRQ. Mr. Schmidt: Grant (money to institution): AHRQ. All other authors had no disclosures to report. Disclosures can also be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M12-2571.
Requests for Single Reprints: Kathryn M. McDonald, MM, Stanford University, 117 Encina Commons, Stanford, CA 94305-6019; e-mail, Kathryn.McDonald@stanford.edu.
Current Author Addresses: Ms. McDonald, Ms. Lonhart, and Mr. Schmidt: Stanford Center for Health Policy/Center for Primary Care and Outcomes Research, Stanford University, 117 Encina Commons, Stanford, CA 94305-6019.
Mr. Matesic and Ms. Pineda: School of Medicine, Stanford University, 291 Campus Drive, Stanford, CA 94305.
Dr. Contopoulos-Ioannidis: Department of Pediatrics, Division of Infectious Diseases, Stanford University School of Medicine, 300 Pasteur Drive, G312, Stanford, CA 94305.
Dr. Ioannidis: Stanford Prevention Research Center, Department of Medicine, School of Medicine, Stanford University, 1265 Welch Road, X306, Stanford, CA 94305.
Author Contributions: Conception and design: K.M. McDonald, B. Matesic, D.G. Contopoulos-Ioannidis, J. Lonhart, J.P.A. Ioannidis.
Analysis and interpretation of the data: K.M. McDonald, B. Matesic, D.G. Contopoulos-Ioannidis, J. Lonhart, E. Schmidt, J.P.A. Ioannidis.
Drafting of the article: K.M. McDonald, B. Matesic, D.G. Contopoulos-Ioannidis, J. Lonhart, E. Schmidt, J.P.A. Ioannidis.
Critical revision of the article for important intellectual content: K.M. McDonald, B. Matesic, D.G. Contopoulos-Ioannidis, J.P.A. Ioannidis.
Final approval of the article: K.M. McDonald, B. Matesic, D.G. Contopoulos-Ioannidis, J. Lonhart, E. Schmidt, N. Pineda, J.P.A. Ioannidis.
Provision of study materials or patients: J. Lonhart.
Statistical expertise: D.G. Contopoulos-Ioannidis, J.P.A. Ioannidis.
Obtaining of funding: K.M. McDonald.
Administrative, technical, or logistic support: K.M. McDonald, B. Matesic, J. Lonhart, E. Schmidt, N. Pineda.
Collection and assembly of data: K.M. McDonald, B. Matesic, J. Lonhart, E. Schmidt, N. Pineda, J.P.A. Ioannidis.
McDonald K., Matesic B., Contopoulos-Ioannidis D., Lonhart J., Schmidt E., Pineda N., Ioannidis J.; Patient Safety Strategies Targeted at Diagnostic Errors: A Systematic Review. Ann Intern Med. 2013;158:381-389. doi: 10.7326/0003-4819-158-5-201303051-00004
Missed, delayed, or incorrect diagnosis can lead to inappropriate patient care, poor patient outcomes, and increased cost. This systematic review analyzed evaluations of interventions to prevent diagnostic errors. Searches used MEDLINE (1966 to October 2012), the Agency for Healthcare Research and Quality's Patient Safety Network, bibliographies, and prior systematic reviews. Studies that evaluated any intervention to decrease diagnostic errors in any clinical setting and with any study design were eligible, provided that they addressed a patient-related outcome. Two independent reviewers extracted study data and rated study quality.
There were 109 studies that addressed 1 or more intervention categories: personnel changes (n = 6), educational interventions (n = 11), technique (n = 23), structured process changes (n = 27), technology-based systems interventions (n = 32), and review methods (n = 38). Of 14 randomized trials, which were rated as having mostly low to moderate risk of bias, 11 reported interventions that reduced diagnostic errors. Evidence seemed strongest for technology-based systems (for example, text message alerting) and specific techniques (for example, testing equipment adaptations). Studies provided no information on harms, cost, or contextual application of interventions. Overall, the review showed a growing field of diagnostic error research and categorized and identified promising interventions that warrant evaluation in large studies across diverse settings.
Missed, delayed, or incorrect diagnosis can lead to inappropriate patient care, poor patient outcomes, and increased cost.
Patient safety strategies targeting diagnostic errors have only recently been studied.
Approaches to reduce errors may involve technical, cognitive, and systems-oriented strategies tailored to specific conditions or settings.
A framework that organizations might use to classify intervention strategies aimed at reducing diagnostic errors includes technique, personnel, education, structured process, technology-based systems, and review methods.
Limited evidence from randomized, controlled trials shows that some interventions, such as text messaging—a technology-based systems strategy—can reduce diagnostic errors in certain situations.
Very few studies of interventions to reduce diagnostic errors have examined clinical outcomes (for example, morbidity, mortality) or evaluated the utility of engaging patients and families in prevention of diagnostic errors.
The family of patient safety targets that includes diagnostic errors has unclear boundaries. An operational definition includes diagnoses that are “unintentionally delayed (sufficient information was available earlier), wrong (another diagnosis was made before the correct one), or missed (no diagnosis was ever made), as judged from the eventual appreciation of more definitive information” (1, 2).
Although the definition is somewhat fluid, the scope of the problem is undoubtedly large. A systematic review of 53 series of autopsies reported a median antemortem error rate of 23.5% (range, 4.1% to 49.8%) for major errors (clinically missed diagnoses involving a principal underlying disease or primary cause of death) and 9.0% (range, 0% to 20.7%) for incorrect diagnoses that are likely to have affected patient outcomes (3). Disease-specific studies show that 2% to 61% of patients experience missed or delayed diagnoses (4). In a survey of pediatricians, 54% admitted making a diagnostic error at least once per month, and 45% noted making diagnostic errors that harmed patients at least once per year (5). Lack of pertinent historical or clinical information and team processes (for example, inadequate care coordination) contributed to errors (5).
Furthermore, research on variation in patient outcomes related to diagnosis timing suggests that there is room for improvement for some high-risk conditions. For example, early identification of sepsis may decrease mortality in surgical intensive care (6).
Problems in care related to diagnosis are particularly prevalent among precipitating causes for lawsuits; 25% to 59% of malpractice claims are attributable to diagnostic errors (4, 7, 8). A recent study of 91 082 diagnosis-related malpractice claims from 1986 to 2005 estimated payments summing to $34.5 billion (inflation-adjusted to 2010 U.S. dollars) (9). Among 10 739 malpractice claims from the 2005–2009 National Practitioner Data Bank, diagnosis-related problems accounted for 45.9% of paid claims from outpatient settings and 21.1% of paid claims from inpatient settings (10).
Some authors have asserted that diagnostic errors are both more likely to result in patient harms and more preventable than treatment-related errors (such as wrong-site surgery or incorrect medication dose), making the problem particularly important to address (11). Given this potential, the purpose of this review is to assess the multitude of interventions to prevent diagnostic errors and better understand their effectiveness.
There is a broad array of patient safety strategies (PSSs) that could affect diagnostic errors. Approaches might involve technical, cognitive, and systems-oriented strategies, usually tailored to specific conditions or settings.
Strategies might address specific types of diagnostic error, root causes of the error, or particular technologies that are available. Strategies might target clinician errors related to assessment (for example, failure or delay in considering an important diagnosis) or laboratory and radiology testing (including failure to order needed tests, technical errors in processing specimens or tests, or erroneous reading of tests) (2). Interventions that target such failure areas might include tools that generate differential diagnosis lists based on algorithms and checklists; electronic monitoring of test result follow-up; and redesigned documentation systems that efficiently aggregate relevant evidence and aid cognitive interpretation (2). Broad-based strategies might target changes in residency training, board certification, and even patient and family engagement in diagnostic problem solving.
Finally, many strategies could incorporate advances in medical problem solving (including heuristics and metacognition), decision analytic or normative decision making, and clinical diagnostic decision support (12–14). Strategies in this area—computerized diagnosis management—could include computerized physician order entry with clinical decision support.
We captured relevant literature for review through 2 main mechanisms. First, we identified 2 key systematic reviews that summarized data on system-related interventions addressing organizational vulnerabilities to diagnostic errors (15) and cognitively related interventions that could affect diagnosis (16). Then, we used broad search strategies to identify additional literature. We searched MEDLINE (1966 to October 2012), the Agency for Healthcare Research and Quality (AHRQ) Patient Safety Network (www.psnet.ahrq.gov/), and bibliographies of background articles and previous systematic reviews to identify literature on effects of interventions targeting diagnostic errors and/or diagnostic delays. The major Medical Subject Heading terms were “diagnostic errors” and “delayed diagnosis.”
Eligible studies were those that evaluated any intervention to decrease diagnostic errors (incorrect diagnoses or missed diagnoses) in any clinical setting and with any study design, provided that they addressed patient-related outcomes (that is, the correct diagnosis was eventually confirmed through patient follow-up testing, surgery, autopsy, or other means) or proxy measures of patient-related outcomes. We also considered studies that evaluated interventions intended to affect the time to correct diagnosis or appropriate clinical action. We excluded studies in which there was no intervention or no real patients (for example, simulations), the intervention was not aimed to reduce diagnostic errors, or there were no patient outcomes or proxies thereof.
Two independent investigators screened articles for eligibility at the title and abstract level, and any discrepancies about selection were resolved through discussion with the entire research team. We also screened all of the studies included in the reviews by Singh and colleagues (15) and Graber and associates (16) and identified 23 studies that were evaluations of interventions.
In total, we identified 109 articles that met inclusion criteria. The Supplement provides a complete description of the search strategies, article flow diagram, and evidence tables.
We used AMSTAR, a tool that addresses such items as the comprehensiveness of the search, the assessment of the quality of included studies, and the methods for synthesizing the results, to assess the methodological quality of the 2 key systematic reviews (17). We used a standard risk of bias assessment to evaluate quality of the randomized trials (Table 3 of the Supplement) (18). We developed and used a categorization scheme to classify, from an organizational perspective, interventions that target diagnostic errors (Table). Categories included changes that an organization might consider generically to reduce errors. Such changes include techniques investment; personnel configurations; additional review steps for higher reliability; structured processes; education of professionals, patients, and families; and information and communications technology–based enhancements.
Table. Categories of Organizational Interventions to Decrease Diagnostic Errors
This review was supported by the AHRQ, which had no role in the selection or review of the evidence or the decision to submit this manuscript for publication.
Singh and colleagues (15) considered 43 diagnostic error studies of systems interventions related to provider–patient encounters, diagnostic test performance and interpretation, follow-up and tracking, referral-related issues, and patient-related issues. Their high-quality review (score of 9 out of 9 relevant AMSTAR criteria) identified only 6 evaluations of interventions that met eligibility criteria for our review. Three of the 6 reported diagnostic outcomes, such as incidence of delayed diagnosis of injury, incidence of missed injuries, or misdiagnosis rates. None provided information on patients' downstream clinical course.
Graber and colleagues (16) summarized 141 articles on improving cognition and human factors affecting diagnosis. Their high-quality review (score of 9 out of 9 relevant AMSTAR criteria) included 42 evaluations of interventions. These investigators classified interventions in 3 dimensions. For interventions to increase knowledge and expertise, only 1 (19) of 7 studies provided information on diagnostic outcomes and clinical course for actual patients. For interventions to improve intuitive and deliberate considerations, none of the 5 identified studies reported effects on documented diagnoses with actual patients during clinical course of care. In the largest group of studies—interventions on getting help from colleagues, consultants, and tools—16 of the 28 identified studies evaluated diagnostic outcomes in actual patients (20–35).
Graber and colleagues noted the current scarcity of evidence for any single intervention targeting cognitive and human factors in reducing diagnostic error. They highlighted potential for interventions that target content-focused training, feedback on performance, simulation-based training, metacognitive training, second opinion or group decision making, and the use of decision support tools and computer-aided technologies.
We identified 109 studies, including 14 randomized trials, of interventions that targeted diagnostic errors and addressed patient-related outcomes (see Tables 1 to 4 of the Supplement). Of the 6 categories of interventions, most studies pertained to interventions in the categories of technology-based systems and additional review methods (Figure 1). Figure 2 shows increases over time in available evidence related to the categories of additional review methods, structured process changes, technique, and technology-based systems interventions.
Interventions, by type.
The percentage of studies as categorized by the 6 types of interventions.
Intervention studies, by year.
Timeline of the included studies categorized by the 6 types of interventions.
Patient-related outcomes and their proxies can be categorized as diagnostic accuracy outcomes (for example, false-positive and false-negative results), management outcomes (for example, use of further diagnostic tests or therapeutic interventions), and direct patient outcomes (for example, death, disease progression, or deterioration). An intervention that leads to better diagnosis does not automatically change management or improve patient outcomes. Management change depends on treatment options and the feasibility of implementing those options. Improvements in direct patient outcomes depend also on effectiveness of treatment or management. Outcomes that were assessed in the 109 studies varied markedly, but few studies (5 randomized, controlled trials and 8 other designs) evaluated direct patient-level clinical outcomes (6, 31, 36–46).
Primary and secondary outcomes that were assessed in the 14 randomized trials are summarized in Table 2 of the Supplement. Eight trials (9 comparisons) addressed diagnostic accuracy outcomes, and 3 trials (5 comparisons) addressed outcomes related to further diagnostic test use. Six trials (8 comparisons) addressed outcomes related to further therapeutic management. Five trials (7 comparisons) addressed direct patient-related outcomes. Three trials addressed composite outcomes (diagnostic accuracy and therapeutic management, and therapeutic management and patient outcome). One trial addressed time to correct therapeutic management, and another trial addressed time to diagnosis.
Trials evaluated various interventions. The control group used most often was usual care. No trial had a high risk of bias; 9 trials had moderate risk of bias and 5 had low risk of bias.
Statistically significant improvements were seen for at least 1 outcome in all but 3 trials. Of the 3 trials without statistically significant improvements, 1 was a noninferiority trial showing that no more diagnostic errors occurred during work-up of abdominal pain among patients given morphine than among those not given morphine (47). Two trials that involved patients with mental conditions (46, 48) reported no beneficial diagnostic error effects from computerized decision-support systems. Only 1 trial (42) reported improvements in direct patient outcomes; whether improvements were related to the comparison against the randomized concurrent control group or a preintervention period was unclear.
There were 23 studies of interventions related to medical techniques (39, 47, 49–69). Most of these studies, including 3 randomized trials (47, 49, 55), found that these interventions can enhance diagnosis (for example, visual enhancements via ultrasonography-guided biopsy, changes to number of biopsy cores, and cap-fitted colonoscopy) or not make it worse (for example, medical interventions for pain relief in patients with abdominal pain).
Six studies (44, 45, 70–73) compared the effect on diagnosis of substituting 1 type of professional for another, or adding another professional to the care team. The 3 studies (71–73) in which a specialist was added to examine the interpretation of a test result reported an increase in case detection, although the studies were quite small and targeted narrow patient populations. There was only 1 randomized trial, which showed that emergency nurse practitioners performed better than junior physicians (45).
Eleven studies (19, 43, 74–82) used educational interventions for various targets: patients, parents, community doctors, and intensive care unit doctors and nurses. Strategies targeted at professionals produced improvements, but the studies were nonrandomized. Two randomized trials that targeted consumers found that parent education improved discrimination of serious symptoms necessitating physician diagnosis and patient education improved the performance of breast cancer screening (74, 78).
Twenty-seven studies (43, 44, 46, 48, 56–59, 73, 77, 79, 83–98) examined interventions that added structure to the diagnostic process. Structure included, among other things, triage protocols, feedback steps, and quality improvement processes. Most interventions included the addition of a tool, often a checklist or a form (for example, to guide and standardize physical examination of a patient). Some of the studies centered on laboratory processes, whereas others occurred during clinical management, often in situations related to trauma patients. Beneficial effects on diagnosis-related outcomes were seen in most nonrandomized studies, but of the 3 randomized trials, 2 did not show benefit for improving diagnosis of mental illness (46, 48) and 1 had mixed results for a protocol for ordering radiography in injured patients (84).
Thirty-two studies (6, 29–36, 40–42, 44, 46, 60, 71, 78, 80, 97, 99–111) included computerized decision support systems and alerting systems (for example, for abnormal laboratory results), most of which were associated with improvements to processes on the diagnostic pathway (for example, relaying a critical laboratory value to the clinician in a more timely manner). Some interventions related to specific symptoms (for example, a computer-aided diagnostic tool for abdominal pain interpretation), whereas others intervened at the level of a particular test (for example, an electronic medical record alert for a positive result on a fecal occult blood screen for cancer). All 4 randomized trials (31, 36, 42, 100) reported beneficial diagnostic error effects (see Table 2 of the Supplement).
The most common type of intervention that was evaluated was the introduction of redundancy in interpreting test results (6, 20–28, 34, 37–39, 72, 73, 76, 78, 79, 81, 95, 96, 109, 112–126). Most studies showed that an additional review step (usually by a separate reader, from the same specialty or from another specialty) had a positive effect on diagnostic performance. However, in some cases, false-positive results also increased. Tradeoffs between sensitivity and specificity were reported erratically. Some studies targeted higher-risk patients for enriched review. However, the systems to support such targeting were neither described nor evaluated. Randomized evidence was weak, based on 1 comparison group in a single trial showing statistically significant benefit (no effect size reported) for an audit and feedback approach (78).
Twenty-four studies (6, 34, 39, 43, 44, 46, 56–60, 71–73, 76–80, 95–97, 109, 127) combined approaches in a variety of ways and covered diverse clinical areas, with mixed results. These studies are also included in the categories covered above. Twenty of the 24 studies combined 2 categories of intervention in almost every permutation possible (11 of 15 combinations). With only 1 to 4 studies for any combination set, it is not possible to draw conclusions about whether benefits are enhanced with more complex interventions. Moreover, complex approaches may be more costly, but this information was not reported.
Another potential grouping of PSSs focuses on the interface between the system and the patient, such as strategies that involve patient notification of test results (128). No studies with comparative designs evaluated this intervention. The review by Singh and colleagues (15) identified 7 studies of patient preferences or satisfaction with different options for receipt of test results. They also found no studies that tested ways to reduce error using an intervention that affected test notification.
Casalino and colleagues (129) found a 7.1% rate of apparent failures to inform patients of an abnormal test result and identified a positive association between use of simple processes by physician practices for managing results and lower failure rates. A systematic review that examined failures to follow up test results with ambulatory care patients reported that failed follow-up ranged from 1.0% to 62.0%, depending on the type of test result, including failures associated with missed cancer diagnoses (130). None of the studies included in that systematic review evaluated patient-oriented interventions.
No studies in our review evaluated direct patient harm. Studies generally did not assess unintended adverse effects, although some reported false-positive rates.
The context in which a safety strategy is implemented depends on the specific type of diagnostic error and practice being examined. The studies that we reviewed covered a range of subspecialties, settings, patient populations, and interventions. Context varied greatly. Most interventions were not tested in more than 1 site. Many studies were small, early proof-of-concept evaluations. No information was reported on the cost of implementing the reviewed PSSs; costs would probably vary greatly, depending on the particular strategy or practice.
This review identified over 100 evaluations of interventions to reduce diagnostic errors, many of which had a reported positive effect on at least 1 end point, including statistically significant improvements in at least 1 end point in 11 of the 14 randomized trials. Mortality and morbidity end points were seldom reported.
We also identified 2 previous systematic reviews of cognitive and systems-oriented approaches to improve diagnostic accuracy that mostly found proof-of-concept strategies not yet tested in practice. Our review built on the previous systematic reviews by grouping PSSs targeting diagnostic errors from an organizational perspective into changes that an organization might consider more generically (techniques investment; personnel configurations; additional review steps for higher reliability; structured processes; education of professionals, patients, families; and information and communications technology–based enhancements), as opposed to individual clinicians looking for ways to improve their own cognitive processing in specific diagnostic contexts. Although many of the PSSs tested thus far target diagnostic pathways for specific symptoms or conditions, grouping interventions into common leverage points will support future development in this field by the various stakeholders who seek to reduce diagnostic problems. Involvement of patients and families has received minimal attention, with only 2 studies addressing education of consumers.
Data synthesis is difficult because few studies have used randomized designs, comparable outcomes, or similar intervention packages. The existing literature may be susceptible to reporting biases favoring “positive” results for different interventions. With heightened awareness of the problem, the number of studies in this field is expected to increase, including more randomized trials and studies testing different approaches: for example, policy-level efforts. However, the range of outcomes assessed in the studies that we reviewed highlights the known lack of tools to routinely measure the effect of interventions to decrease diagnostic errors. Additional work is needed on appropriate measurements of diagnostic errors and consequential delays in diagnosis. A final limitation, especially for synthesis, is the diversity of interventions that are reverse-engineered on the basis of the many diagnostic targets; the diverse tailored needs for each clinical situation (for example, protocols designed for specific work-up pathways); and the variety of specialized personnel, and even patients, receiving educational or cognitive-support approaches.
Evidence is also lacking on the costs of interventions and implementation, particularly how to reduce diagnostic errors without producing other diagnostic problems, such as overuse of tests. Eventually reaching the correct diagnosis with inefficient testing strategies (for example, some sequences of multiple test ordering) is not the appropriate pathway to improved diagnostic safety. Our review found a paucity of studies that assessed both sensitivity and specificity of interventions addressing diagnostic performance in the context of mitigating diagnostic errors. Thus, although we found several promising interventions, evaluations need to be strengthened before any specific PSSs are scaled up in this domain.
In conclusion, our review demonstrates that the nascent field of diagnostic error research is growing, with new interventions being tested that involve technical, cognitive, and systems-oriented strategies. The framework of intervention types developed in the review provides a basis for categorizing and designing new studies, especially randomized, controlled trials, in these areas.
Elisa Piva, MD, Mario Plebani, MD
University Hospital Padova, Italy
July 29, 2013
The paper by Kathryn M. McDonald et al. deals with the topic of patient safety strategies that aim to reduce diagnostic errors. Given the complexity of the issue and the nature of diagnostic errors, the key point that “approaches to reduce errors may involve technical, cognitive, and system-oriented strategies tailored to specific conditions or settings” (1) represents a crucial strategy with which we are in complete agreement. Focusing on technology-based systems interventions, a body of evidence has been collected to demonstrate the importance of timely and safe notification of laboratory critical values to clinicians. The Joint Commission (2) and many accreditation agencies (3) agree that critical value reporting is an important mission of the clinical laboratory. More recently, the possible harmonization of existing policies, based on robust evidence, has been advocated for improving quality and patient safety (4). However, there are few reports on the relationship between the notification of critical values, clinicians’ reactions, and improved clinical outcomes. We recently performed a clinical audit aimed at evaluating the effectiveness of a computerized notification system in reporting critical values within our University Hospital, as previously described (5). In particular, we evaluated over 200 critical values over a three-month period for inpatients, of which 75% were from Internal Medicine Departments and 25% from Surgical Departments. In both settings, 43% of the critical values were unexpected by clinicians, and therapy was modified in 90% of the patients admitted to the Internal Medicine wards and in 96% of the patients in the Surgery department, respectively. Our data underline the importance of timely and safe notification of critical values to clinical outcomes, namely immediate changes in therapy or patient management, and as a quality indicator in laboratory medicine.
Therefore, timeliness of laboratory results, especially for critical values and critical tests, should always be correlated with clinical effectiveness, and procedures should provide the best clinical outcomes at the lowest reasonable cost. Further initiatives to promote the harmonization of laboratory practices, including the reporting of critical values, should help to further improve the quality of care and patient safety.

Elisa Piva and Mario Plebani
Department of Laboratory Medicine, University-Hospital, Padova, Italy

References
1. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, Lonhart J, Schmidt E, Pineda N, Ioannidis JP. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med. 2013;158:381-9.
2. Singh H, Vij MS. Eight recommendations for policies for communicating abnormal test results. Jt Comm J Qual Patient Saf. 2010;36:226-32.
3. International Organization for Standardization. ISO 15189:2012: Medical laboratories: particular requirements for quality and competence. Geneva, Switzerland: International Organization for Standardization; 2012.
4. Plebani M. Harmonization in laboratory medicine: the complete picture. Clin Chem Lab Med. 2013;51:741-51.
5. Piva E, Sciacovelli L, Zaninotto M, Laposata M, Plebani M. Evaluation of effectiveness of a computerized notification system for reporting critical values. Am J Clin Pathol. 2009;131:43