
Accepting Critically Ill Transfer Patients: Adverse Effect on a Referral Center's Outcome and Benchmark Measures

Andrew L. Rosenberg, MD; Timothy P. Hofer, MD; Cathy Strachan, MSRN; Charles M. Watts, MD; and Rodney A. Hayward, MD

From the University of Michigan and the Department of Veterans Affairs Health Services Research & Development Service, Veterans Affairs Ann Arbor Healthcare System, Ann Arbor, Michigan.


Acknowledgments: The authors thank the members of the University of Michigan Medical Center's Office of Clinical Affairs for collecting the data used in this study and the Consortium for Health Outcomes, Innovation, and Cost Effectiveness Studies (CHOICES) for database support.

Grant Support: Dr. Rosenberg is supported by a grant from the Robert Wood Johnson Foundation and by the Department of Veterans Affairs. Dr. Hofer is supported by a Career Development Grant from the Health Services Research & Development Service of the Department of Veterans Affairs.

Potential Financial Conflicts of Interest: None disclosed.

Requests for Single Reprints: Andrew Rosenberg, MD, Department of Anesthesiology and Critical Care, University of Michigan Medical Center, Room 1G323, Box 0048, Ann Arbor, MI 48109-0048; e-mail, arosen@umich.edu.

Current Author Addresses: Dr. Rosenberg: Department of Anesthesiology and Critical Care, University of Michigan Medical Center, Room 1G323, Box 0048, Ann Arbor, MI 48109-0048

Drs. Hofer and Hayward: Ann Arbor Veterans Affairs Health Services Research & Development Service, 3rd Floor, Lobby L, PO Box 130170, Ann Arbor, MI 48113.

Ms. Strachan: Office of Clinical Affairs, University of Michigan Medical Center, 1500 East Medical Center Drive, Ann Arbor, MI 48109.

Dr. Watts: Office of Clinical Affairs, Northwestern Memorial Hospital, 251 East Huron Street, Chicago, IL 60611.

Author Contributions: Conception and design: A.L. Rosenberg.

Analysis and interpretation of the data: A.L. Rosenberg, T.P. Hofer, R.A. Hayward.

Drafting of the article: A.L. Rosenberg, C.M. Watts.

Critical revision of the article for important intellectual content: A.L. Rosenberg, T.P. Hofer, C. Strachan, C.M. Watts, R.A. Hayward.

Final approval of the article: A.L. Rosenberg, T.P. Hofer, R.A. Hayward.

Provision of study materials or patients: C. Strachan.

Statistical expertise: A.L. Rosenberg, T.P. Hofer, R.A. Hayward.

Obtaining of funding: A.L. Rosenberg, C.M. Watts, R.A. Hayward.

Administrative, technical, or logistic support: C. Strachan, C.M. Watts, R.A. Hayward.

Collection and assembly of data: A.L. Rosenberg, C. Strachan.


Ann Intern Med. 2003;138(11):882-890. doi:10.7326/0003-4819-138-11-200306030-00009
Editors' Notes
Context

  • Benchmarking compares performance of providers or systems with a standard. Are such comparisons fair, even if they are adjusted for varying case mix and severity of illness of patients?

Contribution

  • This prospective study showed that patients who were transferred to an intensive care unit from other hospitals had worse outcomes than those who were directly admitted. In modeling analyses, benchmarking adjusted with sophisticated case-mix and severity-of-illness information, but not admission source, penalized units with a 25% transfer rate (versus a 0% rate) by 14 “excess deaths” per 1000 admissions.

Implications

  • Benchmarking of intensive care unit performance should account for transfer patients.

–The Editors

Payers and consumers of health care are increasingly seeking information on quality of care (1-3). Since the Centers for Medicare & Medicaid Services (formerly the Health Care Financing Administration) began reporting hospital mortality rates in the mid-1980s (4), an increasing number of health care providers, health plans, third-party payers, and regulatory agencies are collecting and disseminating information used to judge and compare quality of care (5-8). The converging influences of an expanding use of expensive technology in medicine and the resulting rapid growth in health care costs are driving these initiatives (9-14).

Purchasers, providers, and the government are relying on various benchmarking methods to compare outcomes, quality, and costs of health care (15-18). Benchmarking, the process by which the performance of an individual, group, hospital, or health system is compared with a standard, is the cornerstone of many efforts to measure and compare the quality and efficiency of care (19-21). Principally, benchmarking accounts for variation in outcomes, processes, or costs of care (22, 23). The results of these benchmarks are often used to prepare profiles ("report cards") that compare a provider's, hospital's, or health system's outcomes and costs (24, 25).

Frequently, this type of profiling is performed by using limited administrative data, such as age, sex, and diagnosis-related group categories that are primarily collected for billing purposes (26). These profiling methods have met with substantial criticism (24, 27-29). Most commonly, critics cite the lack of sufficiently accurate case-mix adjustment and severity-of-illness measures on which to compare actual and predicted outcomes for different types of care and different groups of patients (30, 31). These considerations are especially important for tertiary referral centers that care for patients with complex or severe illnesses, many of whom have been referred or transferred from another hospital or area (26, 32, 33). Several previous studies have shown that transfer patients often require more resources (33-35) but have worse outcomes than nontransferred patients (36-38). The level of severity adjustment, however, is often not detailed in these studies (39).

How detailed must risk adjustment be to adequately control for case-mix differences? For several reasons, the intensive care unit (ICU) should be the ideal setting in which to examine this question for transfer patients. Currently, the most sophisticated and highly validated set of clinical and physiologic case-mix measures available has been developed for ICU patients (40-43). This is partly because an extensive amount of physiologic information is available from the repetitive and often invasive monitoring unique to the ICU. Furthermore, lead-time and selection bias (44-47) (the reason why patient location before ICU admission has been included in some risk-adjustment models [43]) may account for some of the difficulty in accounting for ICU patients' outcomes. However, few studies have used current, state-of-the-art ICU outcome prediction models (35, 48) to evaluate the ability to measure severity of illness among patients admitted to an ICU after previously receiving care on a general medical ward or at another hospital (43, 44, 46, 47). Furthermore, among the most rigorously developed and tested ICU prediction models, there has been no detailed evaluation of the "transfer effect" at an individual hospital level. Even a small bias in comparisons of observed versus expected deaths, as well as other outcomes, can substantially affect how high-quality institutions compare with peer hospitals.

Therefore, we studied the incremental improvement that increasingly detailed clinical and physiologic case-mix adjustment could provide in evaluating the greater severity of illness of transfer patients. We sought to examine whether full case-mix adjustment could adequately account for the increased risk for poor outcomes among transfer patients (that is, patients who received medical care before transfer to a referral center medical intensive care unit [MICU]). In addition, we investigated the potential bias in benchmarking outcomes produced by a transfer effect even with the use of state-of-the-art case-mix measures.

Study Design and Patient Information

We obtained the data for this study from a prospectively developed cohort of 4579 consecutive admissions to the MICU at a tertiary care university hospital (800 beds, 24 of which are MICU beds) from 1 January 1994 to 1 April 1998. A study group of 4208 patients was available after previously published exclusion criteria were met (49) (Appendix). This MICU primarily admits non–cardiac care, medical patients with various conditions and is a referral center for patients with acute respiratory distress syndrome and hepatic failure. Surgical and trauma patients are admitted to other, specialized ICUs. Measurements of all dependent and independent variables were collected prospectively, whereas hypotheses were generated before data analyses but after the data were collected (secondary or post hoc analyses of a prospective cohort study). The data were obtained from the university's ICU clinical database, APACHE Medical Systems (APACHE Medical Systems, Inc., Vienna, Virginia). The data quality was ensured by using previously established practices (40). The need for informed consent was waived by the institutional review board.

Standard Acute Physiology, Age, and Chronic Health Evaluation (APACHE) methods were used to construct case-mix adjustment and severity-of-illness scores (40). A daily Acute Physiology Score (APS) based on the most abnormal values of 17 specific physiologic variables during the previous 24-hour period measured disease severity. The age and comorbidity score was derived from previously validated weights for the patients' ages and one of seven severe comorbid conditions (hepatic failure, cirrhosis, immunosuppressive conditions, hematologic malignant conditions, lymphoma, metastatic cancer, and AIDS). The type and amount of monitoring and treatment received over each 24-hour period were recorded by using the Therapeutic Intervention Scoring System (TISS) (50). Patients were characterized as receiving either monitored care or active treatment, such as ventilator management (51). Each patient's primary reason for MICU admission (MICU admission diagnosis) was recorded as 1 of 430 diseases, injuries, surgical procedures, or events most immediately threatening to the patient and requiring the services of the MICU.

Variables

Our outcome measures were MICU and hospital lengths of stay, MICU and hospital mortality rates, and MICU readmission rates. Intensive care unit and hospital length of stay is also an established proxy measure of overall resource use (52, 53). Our primary predictor variables included age, comorbid conditions, diagnosis variables, and admission APS. General ICU severity models are primarily developed to predict mortality rate and length of stay. Although we did not use the APACHE model for most of our analyses, we did use the APS component of the APACHE score as an essential variable to control for severity of illness.

Data collected for all admissions to the MICU also included patient demographic characteristics, hospital and MICU admission dates and times, location from which a patient was admitted to the MICU, and the time from admission at the original source to the MICU admission at the study hospital (hours to MICU). Patients admitted directly to the MICU from the emergency department or an ambulatory clinic were categorized as “direct admissions.” Patients admitted from any of the non-ICU general nursing floors or telemetry units were categorized as “floor admissions.” Finally, patients transferred from another hospital were categorized as “transfer admissions,” 98% of whom were from another hospital's ICU.

Quantifying Risk Adjustment Offered by More Precise Biophysiologic Information

We used three predictive models to develop benchmarks to study the incremental effect of increasingly precise risk adjustment on predicting outcomes for directly admitted, floor, and transfer patients. For these analyses, we consolidated the admission diagnoses to 19 categories to have sufficient sample size for each group. First, we examined the effect of admission source on outcomes without any diagnostic or physiologic information in an unadjusted model. Second, in the partial-adjustment model, we controlled for age, sex, comorbid conditions, and diagnosis only. This traditional adjustment is similar to, or even more precise than, many general hospital case-mix and severity-adjustment methods that rely on administratively derived diagnosis-related group information. Finally, we added more precise acute physiology clinical information (APS) to adjust for physiologic derangements, yielding a full case-mix and severity-adjustment model. We also used the predicted mortality rates and length-of-stay information from the actual APACHE III model itself, which includes an average ICU admission source correction. The results of this last model are reported in the Results section but not in Table 3.
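The logic of comparing a cruder model with a severity-adjusted one can be illustrated with a stratified analysis. The sketch below uses entirely hypothetical counts (not the study's data, and a Mantel-Haenszel summary rather than the logistic models actually fitted) to show the general pattern the three-model comparison probes: adjusting for severity shrinks, but need not eliminate, the apparent transfer effect.

```python
# Hypothetical 2x2 tables of hospital deaths among transfer vs. direct
# admissions, stratified by illness severity (illustrative counts only).
# Each stratum: (transfer_died, transfer_lived, direct_died, direct_lived)
strata = [
    (5, 45, 20, 380),   # low-severity stratum
    (40, 60, 30, 70),   # high-severity stratum
]

# Crude (unadjusted) odds ratio from the pooled table.
a = sum(s[0] for s in strata)  # transfer deaths
b = sum(s[1] for s in strata)  # transfer survivors
c = sum(s[2] for s in strata)  # direct deaths
d = sum(s[3] for s in strata)  # direct survivors
crude_or = (a * d) / (b * c)

# Mantel-Haenszel odds ratio: a severity-adjusted summary across strata.
num = sum(td * dl / (td + tl + dd + dl) for td, tl, dd, dl in strata)
den = sum(tl * dd / (td + tl + dd + dl) for td, tl, dd, dl in strata)
mh_or = num / den

print(f"crude OR = {crude_or:.2f}, severity-adjusted OR = {mh_or:.2f}")
# Adjustment removes much of the transfer effect, yet an excess risk remains.
```

With these invented counts the crude odds ratio of about 3.9 falls to about 1.7 after stratification, mirroring the qualitative behavior of moving from the unadjusted to the full-adjustment model.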

Quantifying the Influence of Different Transfer Rates on Excess Mortality Rates and Lengths of Stay

We sought to demonstrate the magnitude of the bias in estimated mortality rates created by a profiling system that does not account for admission source. We evaluated the difference in expected hospital mortality rate between a medical center that receives approximately 25% of its MICU patients as transfers and one that receives no transfer patients at all. Using the coefficients from the full case-mix-adjusted logistic regression model, we calculated the expected mortality rate if 25% of patients were transfer patients and the expected mortality rate if there were no transfer patients. The difference between these two values represents the magnitude of the error introduced when a profiling program does not account for transfer status.
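Arithmetically, the comparison works as follows. This sketch assumes a hypothetical baseline mortality rate and a transfer odds ratio of 2 (similar in size to the odds ratio reported in the Results); these are illustrative inputs, not the fitted model's coefficients, and with them the bias comes out near the 14 excess deaths per 1000 admissions described in the Results.

```python
def mortality_with_transfer_mix(p_base, odds_ratio, transfer_frac):
    """Expected mortality when a fraction of admissions are transfer
    patients whose odds of death are odds_ratio times the baseline."""
    odds_base = p_base / (1.0 - p_base)
    odds_transfer = odds_ratio * odds_base
    p_transfer = odds_transfer / (1.0 + odds_transfer)
    return (1.0 - transfer_frac) * p_base + transfer_frac * p_transfer

# Assumed, illustrative inputs:
p_base = 0.065        # hypothetical mortality for non-transfer admissions
odds_ratio = 2.0      # hypothetical transfer effect on the odds of death

expected_25 = mortality_with_transfer_mix(p_base, odds_ratio, 0.25)
expected_0 = mortality_with_transfer_mix(p_base, odds_ratio, 0.0)
excess_per_1000 = 1000 * (expected_25 - expected_0)
print(f"apparent excess deaths per 1000 admissions: {excess_per_1000:.1f}")
```

A profiling system blind to admission source would attribute this entire difference to quality of care rather than to case mix.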

Statistical Analysis

We performed descriptive analysis of patient characteristics and patient outcomes (MICU and hospital mortality rates, MICU and hospital lengths of stay, and MICU readmission rate) grouped by admission source. We performed bivariate analysis to evaluate associations between categorical outcomes (death and MICU readmission) and predictor variables by using the chi-square test. Logistic regression was used to evaluate independent associations for categorical predictor variables. Potential interactions between APS and MICU lengths of stay, as well as APS and being discharged from the MICU with a do-not-resuscitate order were evaluated for the mortality models. Interactions between APS and being discharged from the MICU with a do-not-resuscitate order were also evaluated for the length of stay models. There were no significant differences when these interaction terms were included in the models. The Student t-test or one-way analysis of variance was used to compare continuous outcomes (lengths of stay), and linear regression was used to evaluate independent associations for continuous variables. Since hospital and MICU lengths of stay are skewed, these outcome variables were log transformed before we conducted statistical analyses. Therefore, percentage increases in lengths of stay are reported. In all of the analyses, P values were two tailed and considered significant if they were less than 0.05. P values were adjusted for multiple comparisons. The database was exported from the APACHE Medical Systems Database to SPSS, version 9.0 (SPSS, Inc., Chicago, Illinois), for all the above statistical analyses.
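Because the length-of-stay models were fit on the log scale, a coefficient or group difference of b on that scale back-transforms to a (e^b - 1) x 100% change in length of stay, which is why percentage increases are reported. A minimal sketch with simulated skewed stays (assumed lognormal distributions, not the study data):

```python
import math
import random
import statistics

random.seed(42)

# Simulated skewed lengths of stay in days; lognormal is a common model
# for such data (parameters below are arbitrary, for illustration only).
direct = [random.lognormvariate(1.0, 0.8) for _ in range(500)]
transfer = [random.lognormvariate(1.5, 0.8) for _ in range(500)]

# Difference of group means on the log scale...
b = (statistics.mean(math.log(x) for x in transfer)
     - statistics.mean(math.log(x) for x in direct))

# ...back-transforms to a multiplicative effect, reported as a percentage.
pct_longer = (math.exp(b) - 1.0) * 100.0
print(f"transfer stays about {pct_longer:.0f}% longer (log-scale diff {b:.2f})")
```

The same back-transformation applies to a regression coefficient on a log-transformed outcome, such as the length-of-stay effects in Table 3.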

Role of the Funding Source

The funding source had no role in the design, conduct, or reporting of the study or in the decision to submit the manuscript for publication.

Differences in Demographic and Clinical Characteristics between Patient Groups

Table 1 shows demographic and clinical information for the three patient groups. As expected, transfer and floor patients had significantly longer courses of treatments before MICU admission than patients directly admitted to the MICU. Furthermore, transfer and floor patients were sicker at the time of MICU admission and at MICU discharge and had 20% to 30% larger declines, respectively, in their APS during the MICU stay. In addition, the admission diagnoses for these groups of patients often differed. Transfer patients were more likely to be admitted with complex medical conditions, such as severe sepsis; acute respiratory distress syndrome; and various complications due to hepatic failure, especially upper gastrointestinal bleeding. Both floor and transfer patients also had significantly more comorbid conditions.

Table 1. Characteristics of Medical Intensive Care Unit Patients Directly Admitted from the Emergency Department or Clinic, Admitted from the General Medicine Floor, or Transferred from Another Hospital
Differences in Length of Stay, Mortality, and Readmission Rates

Compared with directly admitted patients, transfer patients had mean MICU stays 1.5 times longer and stayed almost twice as long in the hospital (Table 2). Even compared with floor patients, transfer patients had 20% longer MICU stays. Standardized mortality ratios comparing observed with predicted hospital mortality were 0.98 (CI, 0.83 to 1.14) for transfer patients, 0.89 (CI, 0.77 to 1.02) for floor patients, and 0.82 (CI, 0.42 to 1.22) for directly admitted patients. Transfer patients had 1.4- to 2.5-fold higher hospital mortality rates than any other group (Table 3). Transfer patients were almost 80% more likely to be readmitted to the MICU than directly admitted patients and had hospital mortality rates nearly two times higher.
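A standardized mortality ratio is simply observed deaths divided by model-predicted deaths. A minimal sketch of the computation, using a Poisson approximation for the confidence interval and hypothetical counts (the study's underlying observed and expected counts are not reported here):

```python
import math

def smr_with_ci(observed_deaths, expected_deaths, z=1.96):
    """Standardized mortality ratio with an approximate 95% CI,
    treating the observed death count as Poisson-distributed."""
    smr = observed_deaths / expected_deaths
    half_width = z * math.sqrt(observed_deaths) / expected_deaths
    return smr, smr - half_width, smr + half_width

# Hypothetical counts for a patient group (illustrative only).
smr, lo, hi = smr_with_ci(observed_deaths=150, expected_deaths=153.0)
print(f"SMR = {smr:.2f} (CI, {lo:.2f} to {hi:.2f})")
# → SMR = 0.98 (CI, 0.82 to 1.14)
```

An interval that includes 1.0, as all three groups' intervals do, means observed mortality is statistically compatible with the model's prediction.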

Table 2. Resource Use and Mortality of Medical Intensive Care Unit Patients Directly Admitted from the Emergency Department or Clinic, Admitted from the General Medicine Floor, or Transferred from Another Hospital
Table 3. Intensive Care Unit and Hospital Outcomes for Patients Transferred from Another Hospital to the Medical Intensive Care Unit Compared with Direct and Combined Direct and Floor Intensive Care Unit Admissions, Adjusting for Case Mix and Physiologic Illness Severity

The multivariate analyses show that increasingly detailed and sophisticated case-mix and severity measures were still ineffective in adjusting for admission source on length of stay when transfer patients were compared with directly admitted patients (Table 3). Only by including an admission source variable, such as in the actual APACHE III model, could the transfer effect for MICU length of stay be eliminated. Hospital stay, however, was still longer for transfer patients than for directly admitted patients, although the APACHE III transfer variable reduced this effect significantly (5% increased length of stay [CI, 1% to 16%]). When we compared transfer patients with all other patients (Table 3), the effect of admission source on length of stay was reduced but mortality rate was not. To test whether length of stay differences were affected by differences in mortality rates, we analyzed length of stay with and without patients who died. Our results were not significantly different when patients who died either in the MICU or after MICU discharge were included or excluded from the analyses.

We also found that admission source was a strongly significant independent predictor of hospital mortality (Table 3). Even after incorporating the standard APACHE III admission source correction and adjustment for the duration of therapy at the previous location, transfer patients had significantly higher MICU mortality rates (odds ratio, 2.0 [CI, 1.5 to 2.6]) and were more likely than directly admitted patients to die before discharge from the hospital.

Effect of Transfer Patients on Risk-Adjusted Outcome Measures

Bias in observed versus expected deaths can be substantial. Even with identical efficiency and quality, a referral hospital with a 25% MICU transfer rate (similar to our current level), compared with another hospital with a 0% transfer rate, would be penalized by 14 excess deaths per 1000 admissions when a benchmarking program adjusts only for case mix and severity of illness and not for the source of admission.

Severity of illness and case-mix measures are used extensively for creating and comparing the risk-adjusted outcomes of different health care providers. The assumption is that the risk-adjustment method will account for enough of the differences in patient case mix that the residual differences in outcomes are mainly due to quality. This study demonstrates that a substantial underestimation of transfer patients' resource use and outcomes occurs even when the best risk-prediction measures available are used. This underestimate is almost certainly worse when only administrative or diagnosis-related group information is used (6, 27). Moreover, this underestimate can substantially penalize referral centers with more transfer patients. Even a small number of excess deaths (such as the 1.4% increase we found) is enough to make a high-quality institution look worse than its peer hospitals.

Patients transferred to a MICU from another hospital had hospital stays twice as long, and hospital mortality rates twice as high, as patients directly admitted to the MICU. These data support previous findings that transfer patients, especially those transferred to an ICU, may be among the most resource-intensive patients in a medical system (26, 33, 36). Arguably, the ICU is the best place to evaluate the adequacy of risk adjustment for important outcomes, such as length of stay and mortality. In the ICU, validated clinical-physiologic measures are readily available and are most precisely measured because of the intensity of one-on-one care, the frequency with which the measures can be taken, and the unique use of invasive monitors.

The importance of the admission source in predicting clinical outcomes has been reported previously (26, 33-36, 44, 47). Bernard and colleagues (26) and Gordon and Rosenthal (36) found major transfer effects for individual patients but included only modest case-mix adjustment. The bias created by not accounting for referral patients in large populations has also been reported by Bailey and colleagues (54), who showed that patients who travel from one site to another within the TennCare health plan were frequently sicker than nonreferral patients. Escarce and Kelley's study (47) was among the first to use more sophisticated case-mix adjustment, using the Acute Physiology Scores from APACHE II to show the importance of admission source. During the development of the APACHE III predictive model, weighting for the patient's location before ICU admission (patient origin) and length of stay before ICU admission was incorporated to lessen the effect of lead-time and selection bias on predicting outcome (40, 55). Because this weighting is based on the average association of a surrogate case-mix measure across many hospitals, there are concerns that such a surrogate severity measure (for example, the APACHE admission source weight) may not accurately account for the underlying severity of illness at individual hospitals (40).

It is appropriate to ask why state-of-the-art diagnosis and physiologic severity adjusters cannot account for the greater risk for adverse outcomes of transfer patients in our hospital. What is being missed? One hypothesis relates to differences in quality of care. However, the argument that the worse outcomes for transfer patients are due to poorer quality of care seems unlikely. First, the outcomes for transfer patients at the study hospital were worse than those for directly admitted patients but were still good on the basis of the standardized mortality ratios from the APACHE III model. Second, referral centers tend to pride themselves on succeeding where others have failed. Third, the resource use and care activity ratings suggest that these patients received very active care. There is certainly no evidence of being quick to “give up.”

It has been suggested by others that admission source reflects lead-time bias when patients have received various durations of care at a previous location (44, 45). Our finding that longer stays at the original admission source were associated with worse outcomes may be evidence of lead-time bias. However, when the amount of time the patient spent at the admitting location was analyzed independently from the origin of a patient's previous care, the lead-time variable did not substantially diminish transfer patients' poorer outcomes.

It seems more likely that transfer to another ICU is often associated with some unmeasured aspect of severity of illness. Perhaps failure to respond to initial treatment and management underlies this phenomenon. Although the most validated ICU prediction models (such as APACHE III, the Mortality Prediction Model, and the Simplified Acute Physiology Score) include detailed clinical measures, they do not include a measure of which treatments have already been attempted, nor do they incorporate information on response to previous therapies or physiologic reserve. This may also explain why floor patients had outcomes similar to those of transfer patients. Floor and transfer patients are much more likely not to have responded to standard therapy in settings where care durations are longer than in an emergency department or clinic (26, 36, 47), or they may have less physiologic reserve. Although APACHE III includes a correction factor for transfer status that adjusts for the average transfer effect (40, 48), it is not surprising that a surrogate severity measure such as “transfer patient” does not work well at all hospitals because admission source probably represents a heterogeneous phenomenon. Patients may be transferred because of insurance reasons, for tertiary care, to receive a specific procedure or treatment, to be closer to home, or because of patient or family distrust of or unhappiness with the transferring facility. The average effect of being transferred from another hospital may vary dramatically from institution to institution. The higher rate of hepatic failure and lung injury or acute respiratory distress syndrome among patients transferred to our institution supports the hypothesis that higher proportions of patients with more complex medical disorders may inordinately bias prediction models developed with different patient populations. The inclusion of a “transfer” variable in the predictive model is simply a best attempt to control for some undefined, and unmeasured, source of severity.

The implications of our findings extend beyond the ICU. Most areas of health care are being scrutinized for both their costs and quality. Our finding that even the best available ICU biophysiologic measures of severity of illness could not adjust for the effect of admission source at a referral center suggests that it is probably even more difficult to adequately risk-stratify non-ICU patients or groups of outpatients, especially if only diagnosis-related group and administrative data are used (24, 27, 56). This contention is supported by our data that showed minimal improvements in case-mix adjustment when using only age, sex, and diagnosis, a method similar to or better than many practices used by payers and regulators. Health policy on quality measurement has often been driven by the idea that if more time and money were spent to improve case-mix adjustment, these problems could be solved. Critics have suggested that other issues beyond case mix may be influencing this imprecision (24, 27). Current statistical methods may not account for multilevel relationships, and the residual errors in many current models may not be related to practices or phenomena that are captured by current conceptual models.

Using conventional case-mix measures will not exclude the possibility of substantial bias due to unmeasured severity of illness (27, 57). Until such measures are available, administrators and researchers should exclude transferred patients or assign them to their original hospital when creating profiles (26, 36) (Table 4). However, this may still not be an adequate intervention. Referral of refractory patients may be occurring at multiple levels (that is, hospital to hospital, emergency department to emergency department, clinic to clinic, and health plan to health plan). Patients who are not satisfied with the care they receive, or for whom treatments have been ineffective, may be more likely to seek care elsewhere (36, 44, 58, 59). The ability to compare institutions and health plans may be more valid and clinically useful if their processes are evaluated rather than trying to incrementally improve or individually calibrate outcome prediction models (24, 29, 57, 60, 61).

Table 4. Recommendations for Profiling and Research Using Risk-Adjusted Outcome Measures

We acknowledge several possible limitations in this study. Because our data are from only one referral center, our results may not generalize to hospitals that do not receive transfer patients. However, our data suggest that significant referral bias can exist even with the best available risk adjustment and that the effect on a referral center or county hospital that cares for similar patients (26) could substantially distort profiling. To date, no studies of multicenter data sets have evaluated whether this phenomenon exists in a heterogeneous fashion between referral and nonreferral hospitals. We are also limited by not being able to accurately characterize the reasons for transfer because few data from the transferring hospital were collected. Future studies would benefit from refining which subgroups of transferred patients have significantly greater severity of illness. This might include patients transferred after local therapeutic failure versus brief courses of stabilization. Similarly, differences among patients transferred from larger institutions or other referral centers might be compared with those from smaller hospitals. Our data did not include sufficient patients with short stays at the referring institution to allow us to analyze this. Finally, results from a MICU may not be generalizable to other types of ICUs, such as general surgical units, where transferred patient populations are usually much smaller (48) unless the unit is particularly specialized (35).

In summary, even in a setting with the most thorough diagnosis-based case-mix adjustment and the most physiologically precise severity-of-illness data, we found that patients who were transferred from another hospital had significantly greater resource use and worse outcomes than patients directly admitted from the emergency department or clinics. Benchmarking that does not adequately adjust for the increased severity of illness intrinsic to many transfer patients may substantially penalize referral centers with higher numbers of these patients and, more important, create barriers for patients being transferred to institutions that can supply the specialized care they need.

Appendix
Exclusion Criteria

For the length of stay and mortality rate analyses, we excluded 48 patients whose medical ICU admission was an ICU readmission from a previous ICU stay during the same hospitalization, but for which we did not have initial ICU physiology. We also excluded the 323 subsequent admissions for readmitted patients; however, their first admission was included in all analyses. This left 4208 patients eligible for the study. In addition, for all analyses evaluating risk for MICU readmission, we excluded 787 patients who died during their first MICU admission (since they were not at risk for readmission), as well as 111 patients who were admitted for a medication or drug overdose since these patients have very low rates of ICU readmission or adverse outcomes (62). This left a study cohort of patients for the readmission analyses who were discharged alive from the MICU and at risk for a subsequent MICU readmission.

Consolidated Admission Diagnoses

These included chronic obstructive pulmonary disease, respiratory failure with and without ventilator use, pneumonia, acute respiratory distress syndrome, gastrointestinal bleeding, sepsis, stroke or intracranial hemorrhage, other neurologic disorders, hepatic failure, cardiac ischemia, congestive heart failure, cardiac arrest, renal failure, metabolic disorders, medication toxicities, drug overdose, and an “other” category.

Use of Actual APACHE III Predictions

Predictions of APACHE III risk-adjusted ICU and hospital deaths and lengths of stay were based on the logistic regression models incorporated in the APACHE prognostic system (APACHE Medical Systems Inc., Vienna, Virginia) (63). These predictions include an average correction for ICU admission source.


Tables

Table 1.  Characteristics of Medical Intensive Care Unit Patients Directly Admitted from the Emergency Department or Clinic, Admitted from the General Medicine Floor, or Transferred from Another Hospital
Table 2.  Resource Use and Mortality of Medical Intensive Care Unit Patients Directly Admitted from the Emergency Department or Clinic, Admitted from the General Medicine Floor, or Transferred from Another Hospital
Table 3.  Intensive Care Unit and Hospital Outcomes for Patients Transferred from Another Hospital to the Medical Intensive Care Unit Compared with Direct and Combined Direct and Floor Intensive Care Unit Admissions, Adjusting for Case Mix and Physiologic Illness Severity
Table 4.  Recommendations for Profiling and Research Using Risk-Adjusted Outcome Measures

References

Blumenthal D.  Part 1: Quality of care—what is it? [Editorial]. N Engl J Med. 1996; 335:891-4. PubMed
 
Rubin HR, Gandek B, Rogers WH, Kosinski M, McHorney CA, Ware JE Jr.  Patients' ratings of outpatient visits in different practice settings. Results from the Medical Outcomes Study. JAMA. 1993; 270:835-40. PubMed
 
Spoeri RK, Ullman R.  Measuring and reporting managed care performance: lessons learned and new initiatives. Ann Intern Med. 1997; 127:726-32. PubMed
 
Jencks SF, Daley J, Draper D, Thomas N, Lenhart G, Walker J.  Interpreting hospital mortality data. The role of clinical risk adjustment. JAMA. 1988; 260:3611-6. PubMed
 
Hannan EL, O'Donnell JF, Kilburn H Jr, Bernard HR, Yazici A.  Investigation of the relationship between volume and mortality for surgical procedures performed in New York State hospitals. JAMA. 1989; 262:503-10. PubMed
 
Iezzoni LI, Ash AS, Shwartz M, Daley J, Hughes JS, Mackiernan YD.  Judging hospitals by severity-adjusted mortality rates: the influence of the severity-adjustment method. Am J Public Health. 1996; 86:1379-87. PubMed
 
Green J, Wintfeld N, Krasner M, Wells C.  In search of America's best hospitals. The promise and reality of quality assessment. JAMA. 1997; 277:1152-5. PubMed
 
Pennsylvania Health Care Cost Containment Council (PHC4). Focus on heart attack in Western Pennsylvania 1993 summary report for health benefits purchasers, health care providers, policy makers and consumers. 1995, Pennsylvania Health Care Cost Containment Council.
 
Schwartz WB.  The inevitable failure of current cost-containment strategies. Why they can provide only temporary relief. JAMA. 1987; 257:220-4. PubMed
 
Jacobs P, Noseworthy TW.  National estimates of intensive care utilization and costs: Canada and the United States. Crit Care Med. 1990; 18:1282-6. PubMed
 
Rubenfeld GD, Angus DC, Pinsky MR, Curtis JR, Connors AF Jr, Bernard GR.  Outcomes research in critical care: results of the American Thoracic Society Critical Care Assembly Workshop on Outcomes Research. The Members of the Outcomes Research Workshop. Am J Respir Crit Care Med. 1999; 160:358-67. PubMed
 
Clemmitt M.  Perspectives. Burgeoning technology will strain Medicare more than previously admitted, says panel. Med Health. 2001; 55:suppl1-4. PubMed
 
Chernew ME, Hirth RA, Sonnad SS, Ermann R, Fendrick AM.  Managed care, medical technology, and health care cost growth: a review of the evidence. Med Care Res Rev. 1998; 55:259-88; discussion 289-97. [PMID: 9727299]
 
Huerta JA.  The role of technology in rising healthcare costs. J Clin Eng. 1995; 20:48-56. PubMed
 
Goodman DC, Fisher ES, Bubolz TA, Mohr JE, Poage JF, Wennberg JE.  Benchmarking the US physician workforce. An alternative to needs-based or demand-based planning. JAMA. 1996; 276:1811-7. PubMed
 
Rosenthal GE, Harper DL.  Cleveland health quality choice: a model for collaborative community-based outcomes assessment. Jt Comm J Qual Improv. 1994; 20:425-42. PubMed
 
Schroeder J, Lamb S.  Data initiatives: HEDIS and the New England Business Coalition. Am J Med Qual. 1996; 11:S58-62. PubMed
 
100 top hospitals. National benchmarks. Mod Healthc. 2001; Suppl:6-12. [PMID: 11246764]
 
Weissman NW, Allison JJ, Kiefe CI, Farmer RM, Weaver MT, Williams OD, et al.  Achievable benchmarks of care: the ABCs of benchmarking. J Eval Clin Pract. 1999; 5:269-81. PubMed
 
Fihn SD.  The quest to quantify quality [Editorial]. JAMA. 2000; 283:1740-2. PubMed
 
Hayward RA, Williams BC, Gruppen LD, Rosenbaum D.  Measuring attending physician performance in a general medicine outpatient clinic. J Gen Intern Med. 1995; 10:504-10. PubMed
 
Zimmerman JE, Wagner DP.  Prognostic systems in intensive care: how do you interpret an observed mortality that is higher than expected? [Editorial]. Crit Care Med. 2000; 28:258-60. PubMed
 
Krakauer H, Lin MJ, Schone EM, Park D, Miller RC, Greenwald J, et al.  ‘Best clinical practice’: assessment of processes of care and of outcomes in the US Military Health Services System. J Eval Clin Pract. 1998; 4:11-29. PubMed
 
Hofer TP, Hayward RA, Greenfield S, Wagner EH, Kaplan SH, Manning WG.  The unreliability of individual physician “report cards” for assessing the costs and quality of care of a chronic disease. JAMA. 1999; 281:2098-105. PubMed
 
Rosenthal GE, Quinn L, Harper DL.  Declines in hospital mortality associated with a regional initiative to measure hospital performance. Am J Med Qual. 1997; 12:103-12. PubMed
 
Bernard AM, Hayward RA, Rosevear J, Chun H, McMahon LF.  Comparing the hospitalizations of transfer and non-transfer patients in an academic medical center. Acad Med. 1996; 71:262-6. PubMed
 
Iezzoni LI.  The risks of risk adjustment. JAMA. 1997; 278:1600-7. PubMed
 
Bindman AB.  Can physician profiles be trusted? [Editorial]. JAMA. 1999; 281:2142-3. PubMed
 
Hofer TP, Hayward RA.  Identifying poor-quality hospitals. Can hospital mortality rates detect quality problems for medical diagnoses? Med Care. 1996; 34:737-53. PubMed
 
Romano PS, Roos LL, Luft HS, Jollis JG, Doliszny K.  A comparison of administrative versus clinical data: coronary artery bypass surgery as an example. Ischemic Heart Disease Patient Outcomes Research Team. J Clin Epidemiol. 1994; 47:249-60. PubMed
 
Hofer TP, Bernstein SJ, Hayward RA, DeMonner S.  Validating quality indicators for hospital care. Jt Comm J Qual Improv. 1997; 23:455-67. PubMed
 
Hurley J, Linz D, Swint E.  Assessing the effects of the Medicare Prospective Payment System on the demand for VA inpatient services: an examination of transfers and discharges of problem patients. Health Serv Res. 1990; 25:239-55. PubMed
 
Duncan RP, McKinstry AK.  Inpatient transfers and uncompensated care. Hosp Health Serv Adm. 1988; 33:237-48. PubMed
 
Jencks SF, Bobula JD.  Does receiving referral and transfer patients make hospitals expensive? Med Care. 1988; 26:948-58. PubMed
 
Borlase BC, Baxter JK, Kenney PR, Forse RA, Benotti PN, Blackburn GL.  Elective intrahospital admissions versus acute interhospital transfers to a surgical intensive care unit: cost and outcome prediction. J Trauma. 1991; 31:915-8; discussion 918-9. [PMID: 2072429]
 
Gordon HS, Rosenthal GE.  Impact of interhospital transfers on outcomes in an academic medical center. Implications for profiling hospital quality. Med Care. 1996; 34:295-309. PubMed
 
Obremskey W, Henley MB.  A comparison of transferred versus direct admission orthopedic trauma patients. J Trauma. 1994; 36:373-6. PubMed
 
Schiff RL, Ansell DA, Schlosser JE, Idris AH, Morrison A, Whitman S.  Transfers to a public hospital. A prospective study of 467 patients. N Engl J Med. 1986; 314:552-7. PubMed
 
Miller MG, Miller LS, Fireman B, Black SB.  Variation in practice for discretionary admissions. Impact on estimates of quality of hospital care. JAMA. 1994; 271:1493-8. PubMed
 
Knaus WA, Wagner DP, Draper EA, Zimmerman JE, Bergner M, Bastos PG, et al.  The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults. Chest. 1991; 100:1619-36. PubMed
 
Teres D, Lemeshow S.  Using severity measures to describe high performance intensive care units. Crit Care Clin. 1993; 9:543-54. PubMed
 
Le Gall JR, Lemeshow S, Saulnier F.  A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multicenter study. JAMA. 1993; 270:2957-63. PubMed
 
Lemeshow S, Le Gall JR.  Modeling the severity of illness of ICU patients. A systems update. JAMA. 1994; 272:1049-55. PubMed
 
Dragsted L, Jörgensen J, Jensen NH, Bönsing E, Jacobsen E, Knaus WA, et al.  Interhospital comparisons of patient outcome from intensive care: importance of lead-time bias. Crit Care Med. 1989; 17:418-22. PubMed
 
Rapoport J, Teres D, Lemeshow S, Harris D.  Timing of intensive care unit admission in relation to ICU outcome. Crit Care Med. 1990; 18:1231-5. PubMed
 
Tunnell RD, Millar BW, Smith GB.  The effect of lead time bias on severity of illness scoring, mortality prediction and standardised mortality ratio in intensive care—a pilot study. Anaesthesia. 1998; 53:1045-53. PubMed
 
Escarce JJ, Kelley MA.  Admission source to the medical intensive care unit predicts hospital death independent of APACHE II score. JAMA. 1990; 264:2389-94. PubMed
 
Zimmerman JE, Wagner DP, Draper EA, Wright L, Alzola C, Knaus WA.  Evaluation of acute physiology and chronic health evaluation III predictions of hospital mortality in an independent database. Crit Care Med. 1998; 26:1317-26. PubMed
 
Rosenberg AL, Hofer TP, Hayward RA, Strachan C, Watts CM.  Who bounces back? Physiologic and other predictors of intensive care unit readmission. Crit Care Med. 2001; 29:511-8. PubMed
 
Keene AR, Cullen DJ.  Therapeutic Intervention Scoring System: update 1983. Crit Care Med. 1983; 11:1-3. PubMed
 
Zimmerman JE, Wagner DP, Draper EA, Knaus WA.  Improving intensive care unit discharge decisions: supplementing physician judgment with predictions of next day risk for life support. Crit Care Med. 1994; 22:1373-84. PubMed
 
Oye RK, Bellamy PE.  Patterns of resource consumption in medical intensive care. Chest. 1991; 99:685-9. PubMed
 
Rapoport J, Gehlbach S, Lemeshow S, Teres D.  Resource utilization among intensive care patients. Managed care vs traditional insurance. Arch Intern Med. 1992; 152:2207-12. PubMed
 
Bailey JE, Van Brunt DL, Mirvis DM, McDaniel S, Spears CR, Chang CF, et al.  Academic managed care organizations and adverse selection under Medicaid managed care in Tennessee. JAMA. 1999; 282:1067-72. PubMed
 
Knaus W, Draper E, Wagner D.  APACHE III study design: analytic plan for evaluation of severity and outcome in intensive care unit patients. Introduction. Crit Care Med. 1989; 17:S176-80. PubMed
 
Langenbrunner JC, Willis P, Jencks SF, Dobson A, Iezzoni L.  Developing payment refinements and reforms under Medicare for excluded hospitals. Health Care Financ Rev. 1989; 10:91-107. PubMed
 
Shapiro MF, Park RE, Keesey J, Brook RH.  The effect of alternative case-mix adjustments on mortality differences between municipal and voluntary hospitals in New York City. Health Serv Res. 1994; 29:95-112. PubMed
 
Albertson GA, Lin CT, Kutner J, Schilling LM, Anderson SN, Anderson RJ.  Recognition of patient referral desires in an academic managed care plan frequency, determinants, and outcomes. J Gen Intern Med. 2000; 15:242-7. PubMed
 
Kulu-Glasgow I, Delnoij D, de Bakker D.  Self-referral in a gatekeeping system: patients' reasons for skipping the general-practitioner. Health Policy. 1998; 45:221-38. PubMed
 
Mant J, Hicks N.  Detecting differences in quality of care: the sensitivity of measures of process and outcome in treating acute myocardial infarction. BMJ. 1995; 311:793-6. PubMed
 
Angus DC.  Scoring system fatigue … and the search for a way forward [Editorial]. Crit Care Med. 2000; 28:2145-6. PubMed
 
Franklin C, Jackson D.  Discharge decision-making in a medical ICU: characteristics of unexpected readmissions. Crit Care Med. 1983; 11:61-6. PubMed
 
Knaus WA, Wagner DP, Draper EA, Zimmerman JE, Bergner M, Bastos PG, et al.  The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults. Chest. 1991; 100:1619-36. PubMed
 

Summary for Patients

Accepting Critically Ill Transfer Patients

The summary below is from the full report titled “Accepting Critically Ill Transfer Patients: Adverse Effect on a Referral Center's Outcome and Benchmark Measures.” It is in the 3 June 2003 issue of Annals of Internal Medicine (volume 138, pages 882-890). The authors are A.L. Rosenberg, T.P. Hofer, C. Strachan, C.M. Watts, and R.A. Hayward.
