Medicine and Public Issues

Public Reporting of Health Care–Associated Surveillance Data: Recommendations From the Healthcare Infection Control Practices Advisory Committee

Thomas R. Talbot, MD, MPH; Dale W. Bratzler, DO, MPH; Ruth M. Carrico, PhD, RN; Daniel J. Diekema, MD; Mary K. Hayden, MD; Susan S. Huang, MD, MPH; Deborah S. Yokoe, MD, MPH; Neil O. Fishman, MD, for the Healthcare Infection Control Practices Advisory Committee

From Vanderbilt University School of Medicine, Nashville, Tennessee; University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma; University of Louisville School of Medicine, Louisville, Kentucky; University of Iowa Carver College of Medicine, Iowa City, Iowa; Rush University Medical Center, Chicago, Illinois; University of California, Irvine, School of Medicine, Orange, California; Brigham and Women's Hospital, Boston, Massachusetts; and University of Pennsylvania Health System, Philadelphia, Pennsylvania.

Disclaimer: The findings in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention or the Department of Health and Human Services.

Acknowledgment: The authors thank the other advisory and liaison members of HICPAC for their thoughtful discussion and insight on this topic and white paper.

Potential Conflicts of Interest: Disclosures can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M13-1120.

Requests for Single Reprints: Thomas R. Talbot, MD, MPH, Vanderbilt University Medical Center, A2200 Medical Center North, 1161 21st Avenue South, Nashville, TN 37232; e-mail, tom.talbot@vanderbilt.edu.

Current Author Addresses: Dr. Talbot: Vanderbilt University Medical Center, A2200 Medical Center North, 1161 21st Avenue South, Nashville, TN 37232.

Dr. Bratzler: OU Physicians Executive Office, Oklahoma University Health Sciences Center, 1200 North Children's Avenue, Suite 3200, Oklahoma City, OK 73104.

Dr. Carrico: Division of Infectious Diseases, University of Louisville School of Medicine, 501 East Broadway, Suite 120, Louisville, KY 40202.

Dr. Diekema: University of Iowa Carver College of Medicine, 200 Hawkins Drive, Iowa City, IA 52242.

Dr. Hayden: Rush University Medical Center, 1653 West Congress Parkway, Chicago, IL 60612.

Dr. Huang: Division of Infectious Diseases, University of California, Irvine, 101 The City Drive, City Tower, Suite 400, ZC 4081, Orange, CA 92868.

Dr. Yokoe: Brigham and Women's Hospital, Channing Laboratory, 181 Longwood Avenue, Boston, MA 02115.

Dr. Fishman: University of Pennsylvania, One Convention Avenue, Penn Tower, Suite 101, Philadelphia, PA 19104.

Author Contributions: Conception and design: T.R. Talbot, D.W. Bratzler, R.M. Carrico, D.J. Diekema, N.O. Fishman.

Analysis and interpretation of the data: T.R. Talbot, M.K. Hayden, N.O. Fishman.

Drafting of the article: T.R. Talbot, D.S. Yokoe, N.O. Fishman.

Critical revision of the article for important intellectual content: T.R. Talbot, D.W. Bratzler, R.M. Carrico, D.J. Diekema, M.K. Hayden, S.S. Huang, D.S. Yokoe, N.O. Fishman.

Final approval of the article: T.R. Talbot, D.W. Bratzler, R.M. Carrico, D.J. Diekema, M.K. Hayden, S.S. Huang, D.S. Yokoe, N.O. Fishman.

Administrative, technical, or logistic support: T.R. Talbot.

Collection and assembly of data: T.R. Talbot, R.M. Carrico.


Ann Intern Med. 2013;159(9):631-635. doi:10.7326/0003-4819-159-9-201311050-00011

Health care–associated infection (HAI) rates are used as measures of a health care facility's quality of patient care. Recently, these outcomes have been used to publicly rank quality efforts and determine facility reimbursement. The value of comparing HAI rates among health care facilities is limited by many factors inherent to HAI surveillance, and incentives that reward low HAI rates can lead to unintended consequences that can compromise medical care surveillance efforts, such as the use of clinical adjudication panels to veto events that meet HAI surveillance definitions.

The Healthcare Infection Control Practices Advisory Committee, a federal advisory committee that provides advice and guidance to the Centers for Disease Control and Prevention (CDC) and the Secretary of the Department of Health and Human Services about strategies for surveillance, prevention, and control of HAIs, assessed the challenges associated with using HAI surveillance data for external quality reporting, including the unintended consequences of clinician veto and clinical adjudication panels. Discussions with stakeholder liaisons and committee members were then used to formulate recommended standards for the use of HAI surveillance data for external facility assessment to ensure valid comparisons and to provide as level a playing field as possible.

The final recommendations advocate for consistent, objective, and independent application of CDC HAI definitions with concomitant validation of HAIs and surveillance processes. The use of clinician veto and adjudication is discouraged.


Health care–associated infections (HAIs) cause substantial morbidity and mortality among patients in all types of health care facilities, and a large proportion of HAIs can be prevented with the use of evidence-based practices (1, 2). Prevention of HAIs has become a major focus of quality and patient safety programs, and HAI rates are increasingly used by payers, consumers, and quality improvement organizations to rank a hospital's quality efforts. Of note, the value of comparing HAI rates among health care facilities can be limited by many factors inherent to HAI surveillance, and incentives that reward low HAI rates can lead to unintended consequences that can compromise the integrity of medical care surveillance efforts (3). To be confident in assessment of efforts to eliminate preventable HAIs, we must guarantee that surveillance and reporting are unbiased and transparent.

Health care–associated infections are attractive quality measurements for several reasons, including their substantial burden, the large evidence base of prevention practices, and the long-standing use of a standardized HAI surveillance process (the Centers for Disease Control and Prevention [CDC]'s National Nosocomial Infections Surveillance [NNIS] System/National Healthcare Safety Network [NHSN]) (4, 5). First used in 1970 (Table 1) (4), the NNIS definitions were originally designed to classify internal facility-specific quality metrics to guide local prevention efforts. Initially, many programs tracked all HAIs (also known as “hospitalwide” surveillance) (4, 6); however, as the complexity of patient care and subsequent data collection burden increased, HAI surveillance programs refined their focus to selected HAIs (6). Through the NNIS/NHSN system, facilities can benchmark their internal HAI performance against a deidentified national pool of member facilities. As advocated previously (7), NHSN definitions (as opposed to other metrics, such as administrative coding) continue to be the best choice for HAI outcome measurement because of their long-established application and acceptance among infection prevention and health care epidemiology experts. They are field-tested and, when applied consistently, can provide a careful assessment of HAI burden as well as the effect of prevention efforts.

Table 1. Timeline of Key Events in CDC HAI Surveillance

Over the past decade, HAI surveillance data have been increasingly used as publicly reported metrics for comparing the quality of patient care among health care facilities, such as through mandatory reporting of hospital-specific data to state health departments, public access to hospital-specific HAI rates, and use of such data by insurers and payers to influence reimbursement (8, 9), culminating in the addition of several HAI-specific outcomes to the Centers for Medicare & Medicaid Services (CMS) Hospital Inpatient Quality Reporting (IQR) Program (Table 1). The CMS requires hospitals to submit these HAI data to receive their full annual reimbursement updates (pay for reporting), and these data will eventually be incorporated into value-based purchasing metrics (pay for performance). Hospital-specific HAI rates are also accessible to the public (www.hospitalcompare.hhs.gov) (10). Additional HAI metrics, including outcomes in non–acute care patients, will be added to this list in the future. Using HAI surveillance data for these purposes has clearly broadened HAI awareness beyond the infection prevention and control communities and has helped to garner greater support for institutional efforts aimed at reducing HAIs and improving patient safety.

Although NHSN HAI surveillance provides a standardized process to determine the occurrence of an HAI, implementing NHSN surveillance definitions is associated with interpretive variation independent of the quality of care (6, 11, 12). First, some definition components are subjective, such as “purulent drainage from the deep incision” to determine the presence of a surgical site infection (13). Second, determining the presence of an HAI often relies on documentation of a provider's clinical assessment, and the variability between individual clinician determinations and documentation of those assessments can be considerable. Third, various data sources are required to apply surveillance definitions, and the ease of accessing this information can vary greatly among hospitals. Facilities with robust electronic medical record or electronic surveillance systems will be more likely to capture data used to determine the presence of an HAI and will thus report higher HAI rates than other facilities with limited access to a patient's record. Health care–associated infection surveillance is also resource-intensive, requiring trained reviewers to review many medical records, and the effort available for surveillance can vary substantially, affecting the completeness of case ascertainment. Finally, with limited patient-specific data included in the surveillance definitions, risk adjustment is incomplete. Efforts should be made to improve risk adjustment as necessary to prevent potentially misleading interfacility comparisons.

As a result of these challenges, there is marked variation and low interrater reliability in the interpretation of HAI criteria, even among experienced infection preventionists (12, 14, 15). Recognizing the need for more objective HAI definitions that correlate with clinical outcomes (16), the CDC has partnered with key stakeholders and content experts to revise NHSN's HAI definitions to enhance clinical relevance and reduce subjectivity (17). These modifications are essential to ensure the quality of reported HAI data and improve the clinical credibility of surveillance data.

Another important concept that is underappreciated by many clinicians is the distinction between HAI surveillance definitions and clinical diagnoses. Clinical diagnoses are based, in part, on the subjective judgment of clinicians and are used to guide treatment of individual patients. Surveillance definitions are used to assess a facility's HAI burden and the need for and effect of prevention efforts. Of note, HAI surveillance definitions are not intended for clinical diagnosis or to guide patient treatment. Surveillance definitions should ideally depend on objective data, demonstrate high interrater reliability, use readily accessible data to ascertain an event, and enable risk adjustment to account for varying case mix and underlying comorbid conditions that affect infection risk independent of the quality of care. If the definitions are applied consistently, the assessment of outcome trends over time should be reliable and instances of misclassification should be minimized. With public reporting and consequences for poor performance, however, new challenges with the use of HAI surveillance data have emerged.

Ideally, the alignment of the increasing focus on HAI rates and financial incentives to reduce these outcomes should motivate hospitals to invest in HAI prevention efforts. Many health care facilities have used the emphasis on HAI reduction to implement or complement existing comprehensive programs and have made dramatic reductions in HAIs and their associated morbidity. With public reporting of HAI surveillance data and consequences for poor performance, however, there can be skewed incentives to reduce HAI rates by excluding or reclassifying events as opposed to preventing actual negative outcomes. This potential risk is magnified by the inherent subjectivity and potential variability of HAI surveillance described previously. In a system where there is great disincentive to have unfavorable outcome data, persons responsible for ascertaining the presence of an HAI come under increased scrutiny and pressure to exclude an event when reporting it could have dramatic financial and public consequences. As stakes increase for poor performance, the pressure, implicit or explicit, on infection prevention programs to reclassify and exclude specific HAIs from reported data will be amplified.

One practice that has increased with public reporting of HAI data is the use of clinical adjudication panels or clinician veto by personnel external to the infection prevention and control program to make the final determination of HAI occurrence. As part of this adjudication, events that meet the HAI surveillance definitions are presented to facility leaders or clinicians to judge whether they are considered HAIs. These reviews frequently confuse the distinction between medical care surveillance and clinical diagnosis (Table 2). Of note, this type of clinical adjudication must be contrasted with the important discussions among the infection prevention and control personnel trained in HAI surveillance and health care epidemiology during data review to examine whether an event strictly meets the NHSN definition criteria. This latter form of decision making occurs in the initial assessment of potential HAIs and allows for consistency in application of the NHSN definitions but must avoid clinical overinterpretation of an individual patient's findings to guide HAI determination.

Table 2. Examples of Clinician Veto of an NHSN-Defined HAI Surveillance Event

Whether intentional or unintentional, the pressure to adjudicate cases by persons without familiarity with or strict adherence to NHSN criteria is problematic. Such individuals are not trained in HAI surveillance and may have difficulty distinguishing clinical assessment from application of surveillance definitions. Of note, adjudicators can be consciously or unconsciously biased if they are held accountable for institutional HAI performance. This clear conflict of interest creates a disincentive to adjudicate on the side of infection.

Unfortunately, such clinical adjudication practices are common. A recent survey of infectious disease specialists found that 70% of respondent infection prevention and control programs incorporated clinical judgment in the form of clinician veto or consensus adjudication into assessments of central line–associated bloodstream infections (CLABSIs) rather than strictly adhering to NHSN criteria (18). The issues of clinical adjudication and clinician veto are somewhat new, as reflected by the limited discussion of these issues in traditional guidance surrounding the implementation of HAI surveillance data (7).

In addition, the increasing push to reduce and even “eliminate” all HAIs can have unintended consequences (19). The goal of elimination must be contrasted with eradication, as in such other public health initiatives as tuberculosis elimination. Although many HAIs are preventable, intrinsic patient risk factors, the medical need for invasive devices, procedures that breach patients’ usual defense mechanisms, and major remaining gaps in our knowledge about preventing infections imply that eradicating all HAIs may not be realistic (2, 20). Although we must still strive to eliminate all preventable HAIs, the drive to “reach zero” can exacerbate the pressure to err on the side of underreporting HAIs described earlier. With the increasing pressure for excellent performance, continued clarification about the distinction between surveillance definitions and clinical judgment and additional guidance on implementation of HAI surveillance definitions and the use of HAI data for public reporting are necessary.

With the issues outlined earlier, additional guidance on the implementation and interpretation of HAI surveillance data could help ensure that patients are provided with more comparable data in order to make informed health care choices and provide as level a playing field as possible for comparison. The Healthcare Infection Control Practices Advisory Committee (HICPAC) is a federal advisory committee that provides advice and guidance to the CDC and the Secretary of the Department of Health and Human Services about the practice of infection control and strategies for surveillance, prevention, and control of HAIs. To address concerns surrounding the increasing use of HAI surveillance outcomes for transparent public reporting, HICPAC convened members to discuss further recommendations to guide such surveillance. At several standing triannual committee meetings, these issues were discussed with committee members and stakeholder liaisons and formal guidance was developed.

This HICPAC guidance complements and serves as an adjunct to the 2005 guidance on public reporting of healthcare-associated infections (7) and recommends the following standards of practice for the use of HAI surveillance data for internal and external performance measurement.

Recommendation 1: Hospital infection prevention and control staff should use NHSN definitions (5) for HAI outcome measurement.

Recommendation 1a: It should be recognized that these surveillance definitions serve a different purpose from clinical disease diagnosis; complete concordance between surveillance-defined and clinically defined outcomes is therefore not expected.

Recommendation 1b: Although NHSN provides training cases to improve interrater reliability for persons ascertaining HAIs, additional tools (for example, standardized case reviews or audits) that would lead to more balanced interfacility comparisons should also be considered.

Recommendation 2: Hospital administrative leadership should clearly assign authority for final decision making about whether an event meets an HAI surveillance definition to individuals with specific content expertise and training in health care epidemiology and infection prevention and control.

Recommendation 3: Hospital leadership should enable health care epidemiology and infection prevention and control staff to maintain the integrity of HAI surveillance data through strict adherence to surveillance definitions regardless of financial or other ramifications.

Recommendation 4: Persons responsible for determining whether specific events meet the NHSN definitions should systematically document which definition criteria are met or reasons for an event's exclusion to maintain consistency of surveillance over time and to provide clear and consistent assessment of the surveillance process.

Recommendation 5: Although discussion of challenging cases among health care epidemiology and infection prevention and control staff to determine whether the NHSN definition is met is encouraged, facilities should not use clinical adjudication panels or clinician veto to determine whether a given event should be reported as an HAI.

Recommendation 6: Reported data should be systematically validated to assess for variations in the use and interpretation of HAI surveillance data and such practices as post hoc clinical adjudication.

Recommendation 6a: Such a validation program should be conducted by an impartial, independent party, such as a state health department or CMS surveyor.

Recommendation 6b: Validation should include an evaluation of whether reported HAIs meet NHSN definitions and an assessment of potentially unreported events (such as thorough review of positive blood culture results to assess the presence of an unreported CLABSI). It should also include a review of the facility's surveillance methods and operations.

Recommendation 6c: Additional metrics to assess for potential manipulation of reported data should be examined (for example, examination of total number of bloodstream infections [BSIs] and total number of such BSIs classified as secondary to another infection when assessing CLABSI surveillance data; low CLABSI rates in the setting of increasing secondary BSI rates may be an indication of gaming the data).

Recommendation 6d: Frank review of any claims of institutional pressure to underreport HAIs is also extremely important.
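The screening idea in Recommendation 6c can be sketched in code. The sketch below is purely illustrative and is not part of the HICPAC guidance: all counts, field names, and the flagging threshold are hypothetical assumptions, and a real validation program would apply risk adjustment and statistical testing rather than a fixed cutoff.

```python
# Illustrative screen for the pattern described in Recommendation 6c: reported
# CLABSIs falling while the fraction of bloodstream infections (BSIs)
# classified as "secondary to another infection" rises, which may indicate
# reclassification ("gaming") rather than true prevention.
# All numbers, dictionary keys, and the 10-point threshold are hypothetical.

def secondary_bsi_fraction(total_bsis, secondary_bsis):
    """Fraction of all BSIs classified as secondary to another infection."""
    return secondary_bsis / total_bsis if total_bsis else 0.0

def flag_for_review(prior, current, rise_threshold=0.10):
    """Flag a facility whose reported CLABSI count fell while its
    secondary-BSI fraction rose by more than rise_threshold
    (an arbitrary cutoff chosen for illustration)."""
    clabsis_fell = current["clabsis"] < prior["clabsis"]
    rise = (secondary_bsi_fraction(current["total_bsis"], current["secondary_bsis"])
            - secondary_bsi_fraction(prior["total_bsis"], prior["secondary_bsis"]))
    return clabsis_fell and rise > rise_threshold

# Hypothetical two-period summary for one facility
prior = {"clabsis": 12, "total_bsis": 40, "secondary_bsis": 8}    # 20% secondary
current = {"clabsis": 4, "total_bsis": 38, "secondary_bsis": 19}  # 50% secondary

print(flag_for_review(prior, current))  # True: CLABSIs fell while secondary fraction rose
```

A flagged facility would not be presumed to be gaming its data; the flag would simply prioritize it for the independent chart-level validation described in Recommendations 6a and 6b.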

A key component of these recommendations is ensuring validation of reported data. Studies of other reported infection surveillance data have shown the variability in reported outcomes and illustrate the need for validation (21–24). In Connecticut, a third-party review of facility-reported CLABSI data identified greater than 50% underreporting, primarily related to misinterpretation of the NHSN definitions, whereas in Oregon, validation increased the statewide reported CLABSI rate by 27% (21, 22). Validation of reported HAI data is essential to verify complete reporting of selected HAIs; to examine whether clinical adjudication practices are present; and, most important, to provide fair comparisons of HAI prevention efforts among health care facilities and to ensure that high performance is actually due to improved patient care.

The use of HAI surveillance data as publicly reported measurements of health care quality has been a positive step toward improving patient care and reducing morbidity and mortality. With expanded use of these data, we must ensure data integrity and reliability to provide a level playing field for facility-based comparisons. Unbiased and transparent reporting of HAI rates based on standardized surveillance definitions, proscription of post hoc clinical adjudication, and data validation are critical. Investments will be necessary to create such a level playing field and will have to occur at multiple levels, including additional investment in efforts to maintain and expand NHSN, state health departments for data validation and infection prevention activities, and health care facilities to support informatics infrastructure and staffing of infection prevention programs while ensuring unbiased assessments of outcome ascertainment.

References

1. Cardo D, Dennehy PH, Halverson P, Fishman N, Kohn M, Murphy CL, et al; HAI Elimination White Paper Writing Group. Moving toward elimination of healthcare-associated infections: a call to action. Infect Control Hosp Epidemiol. 2010;31:1101-5.

2. Umscheid CA, Mitchell MD, Doshi JA, Agarwal R, Williams K, Brennan PJ. Estimating the proportion of healthcare-associated infections that are reasonably preventable and the related mortality and costs. Infect Control Hosp Epidemiol. 2011;32:101-14.

3. Caper P. The epidemiologic surveillance of medical care [Editorial]. Am J Public Health. 1987;77:669-70.

4. Hughes JM. Nosocomial infection surveillance in the United States: historical perspective. Infect Control. 1987;8:450-3.

5. Centers for Disease Control and Prevention. National Healthcare Safety Network Patient Safety Component Manual: NHSN Overview. Atlanta: Centers for Disease Control and Prevention; 2013. Accessed at www.cdc.gov/nhsn/PDFs/pscManual/1PSC_OverviewCurrent.pdf on 15 March 2012.

6. Nosocomial infection rates for interhospital comparison: limitations and possible solutions. A report from the National Nosocomial Infections Surveillance (NNIS) System. Infect Control Hosp Epidemiol. 1991;12:609-21.

7. McKibben L, Horan T, Tokars JI, Fowler G, Cardo DM, Pearson ML, et al; Healthcare Infection Control Practices Advisory Committee. Guidance on public reporting of healthcare-associated infections: recommendations of the Healthcare Infection Control Practices Advisory Committee. Am J Infect Control. 2005;33:217-26.

8. Julian KG, Brumbach AM, Chicora MK, Houlihan C, Riddle AM, Umberger T, et al. First year of mandatory reporting of healthcare-associated infections, Pennsylvania: an infection control-chart abstractor collaboration. Infect Control Hosp Epidemiol. 2006;27:926-30.

9. Association for Professionals in Infection Control and Epidemiology. Legislative Maps. Washington, DC: Association for Professionals in Infection Control and Epidemiology; 2013. Accessed at www.apic.org/Advocacy/Legislative-Map on 16 March 2012.

10. Centers for Medicare & Medicaid Services. Hospital Inpatient Prospective Payment Systems for Acute Care Hospitals and the Long-Term Care Hospital Prospective Payment System and FY 2012 Rates. Baltimore, MD: Centers for Medicare & Medicaid Services; 2011. Accessed at www.gpo.gov/fdsys/pkg/FR-2011-08-18/pdf/2011-19719.pdf on 16 March 2012.

11. Sexton DJ, Chen LF, Anderson DJ. Current definitions of central line-associated bloodstream infection: is the emperor wearing clothes? Infect Control Hosp Epidemiol. 2010;31:1286-9.

12. Klompas M. Interobserver variability in ventilator-associated pneumonia surveillance. Am J Infect Control. 2010;38:237-9.

13. Centers for Disease Control and Prevention. National Healthcare Safety Network Patient Safety Component Manual: Surgical Site Infection (SSI) Event. Atlanta: Centers for Disease Control and Prevention; 2013. Accessed at www.cdc.gov/nhsn/PDFs/pscManual/9pscSSIcurrent.pdf on 5 March 2013.

14. Lin MY, Hota B, Khan YM, Woeltje KF, Borlawsky TB, Doherty JA, et al; CDC Prevention Epicenter Program. Quality of traditional surveillance for public reporting of nosocomial bloodstream infection rates. JAMA. 2010;304:2035-41.

15. Mayer J, Greene T, Howell J, Ying J, Rubin MA, Trick WE, et al; CDC Prevention Epicenters Program. Agreement in classifying bloodstream infections among multiple reviewers conducting surveillance. Clin Infect Dis. 2012;55:364-70.

16. Magill SS, Fridkin SK. Improving surveillance definitions for ventilator-associated pneumonia in an era of public reporting and performance measurement [Editorial]. Clin Infect Dis. 2012;54:378-80.

17. Klompas M, Khan Y, Kleinman K, Evans RS, Lloyd JF, Stevenson K, et al; CDC Prevention Epicenters Program. Multicenter evaluation of a novel surveillance paradigm for complications of mechanical ventilation. PLoS One. 2011;6:e18062.

18. Beekmann SE, Diekema DJ, Huskins WC, Herwaldt L, Boyce JM, Sherertz RJ, et al; Infectious Diseases Society of America Emerging Infections Network. Diagnosing and reporting of central line-associated bloodstream infections. Infect Control Hosp Epidemiol. 2012;33:875-82.

19. Klompas M. Unintended consequences in the drive for zero [Editorial]. Thorax. 2009;64:463-5.

20. Klompas M. Is a ventilator-associated pneumonia rate of zero really possible? Curr Opin Infect Dis. 2012;25:176-82.

21. Backman LA, Melchreit R, Rodriguez R. Validation of the surveillance and reporting of central line-associated bloodstream infection data to a state health department. Am J Infect Control. 2010;38:832-8.

22. Oh JY, Cunningham MC, Beldavs ZG, Tujo J, Moore SW, Thomas AR, et al. Statewide validation of hospital-reported central line-associated bloodstream infections: Oregon, 2009. Infect Control Hosp Epidemiol. 2012;33:439-45.

23. Stenhem M, Ortqvist A, Ringberg H, Larsson L, Olsson-Liljequist B, Haeggman S, et al; Swedish study group on MRSA epidemiology. Validity of routine surveillance data: a case study on Swedish notifications of methicillin-resistant Staphylococcus aureus. Euro Surveill. 2009;14:19281.

24. Haley VB, Van Antwerpen C, Tserenpuntsag B, Gase KA, Hazamy P, Doughty D, et al. Use of administrative data in efficient auditing of hospital-acquired surgical site infections, New York State 2009-2010. Infect Control Hosp Epidemiol. 2012;33:565-71.
