Andrew L. Rosenberg, MD; Timothy P. Hofer, MD; Cathy Strachan, MSRN; Charles M. Watts, MD; Rodney A. Hayward, MD
Acknowledgments: The authors thank the members of the University of Michigan Medical Center's Office of Clinical Affairs for collecting the data used in this study and the Consortium for Health Outcomes, Innovation, and Cost Effectiveness Studies (CHOICES) for database support.
Grant Support: Dr. Rosenberg is supported by a grant from the Robert Wood Johnson Foundation and by the Department of Veterans Affairs. Dr. Hofer is supported by a Career Development Grant from the Health Services Research & Development Service of the Department of Veterans Affairs.
Potential Financial Conflicts of Interest: None disclosed.
Requests for Single Reprints: Andrew Rosenberg, MD, Department of Anesthesiology and Critical Care, University of Michigan Medical Center, Room 1G323, Box 0048, Ann Arbor, MI 48109-0048; e-mail, firstname.lastname@example.org.
Current Author Addresses: Dr. Rosenberg: Department of Anesthesiology and Critical Care, University of Michigan Medical Center, Room 1G323, Box 0048, Ann Arbor, MI 48109-0048
Drs. Hofer and Hayward: Ann Arbor Veterans Affairs Health Services Research & Development Service, 3rd Floor, Lobby L, PO Box 130170, Ann Arbor, MI 48113.
Ms. Strachan: Office of Clinical Affairs, University of Michigan Medical Center, 1500 East Medical Center Drive, Ann Arbor, MI 48109.
Dr. Watts: Office of Clinical Affairs, Northwestern Memorial Hospital, 251 East Huron Street, Chicago, IL 60611.
Author Contributions: Conception and design: A.L. Rosenberg.
Analysis and interpretation of the data: A.L. Rosenberg, T.P. Hofer, R.A. Hayward.
Drafting of the article: A.L. Rosenberg, C.M. Watts.
Critical revision of the article for important intellectual content: A.L. Rosenberg, T.P. Hofer, C. Strachan, C.M. Watts, R.A. Hayward.
Final approval of the article: A.L. Rosenberg, T.P. Hofer, R.A. Hayward.
Provision of study materials or patients: C. Strachan.
Statistical expertise: A.L. Rosenberg, T.P. Hofer, R.A. Hayward.
Obtaining of funding: A.L. Rosenberg, C.M. Watts, R.A. Hayward.
Administrative, technical, or logistic support: C. Strachan, C.M. Watts, R.A. Hayward.
Collection and assembly of data: A.L. Rosenberg, C. Strachan.
Rosenberg AL, Hofer TP, Strachan C, Watts CM, Hayward RA. Accepting Critically Ill Transfer Patients: Adverse Effect on a Referral Center's Outcome and Benchmark Measures. Ann Intern Med. 2003;138:882-890. doi: 10.7326/0003-4819-138-11-200306030-00009
Benchmarking compares performance of providers or systems with a standard. Are such comparisons fair, even if they are adjusted for varying case mix and severity of illness of patients?
This prospective study showed that patients who were transferred to an intensive care unit from other hospitals had worse outcomes than those who were directly admitted. In modeling analyses, benchmarking adjusted with sophisticated case-mix and severity-of-illness information, but not admission source, penalized units with a 25% transfer rate (versus a 0% rate) by 14 “excess deaths” per 1000 admissions.
Benchmarking of intensive care unit performance should account for transfer patients.
Payers and consumers of health care are increasingly seeking information on quality of care (1-3). Since the Centers for Medicare & Medicaid Services (formerly the Health Care Financing Administration) began reporting hospital mortality rates in the mid-1980s (4), an increasing number of health care providers, health plans, third-party payers, and regulatory agencies have been collecting and disseminating information used to judge and compare quality of care (5-8). The converging influences of an expanding use of expensive technology in medicine and the resulting rapid growth in health care costs are driving these initiatives (9-14).
Purchasers, providers, and the government are relying on various benchmarking methods to compare outcomes, quality, and costs of health care (15-18). Benchmarking, the process by which the performance of an individual, group, hospital, or health system is compared with a standard, is the cornerstone of many efforts to measure and compare the quality and efficiency of care (19-21). Principally, benchmarking accounts for variation in outcomes, processes, or costs of care (22, 23). The results of these benchmarks are often used to prepare profiles (“report cards”) that compare a provider's, hospital's, or health system's outcomes and costs (24, 25).
Frequently, this type of profiling is performed by using limited administrative data, such as age, sex, and diagnosis-related group categories that are primarily collected for billing purposes (26). These profiling methods have met with substantial criticism (24, 27-29). Most commonly, critics cite the lack of sufficiently accurate case-mix adjustment and severity-of-illness measures on which to compare actual and predicted outcomes for different types of care and different groups of patients (30, 31). These considerations are especially important for tertiary referral centers that care for patients with complex or severe illnesses, many of whom have been referred or transferred from another hospital or area (26, 32, 33). Several previous studies have shown that transfer patients often require more resources (33-35) but have worse outcomes than nontransferred patients (36-38). The level of severity adjustment, however, is often not detailed in these studies (39).
How detailed must risk adjustment be to adequately control for case-mix differences? For several reasons, the intensive care unit (ICU) should be the ideal setting in which to examine this question for transfer patients. Currently, the most sophisticated and highly validated set of clinical and physiologic case-mix measures available has been developed for ICU patients (40-43). This is partly because an extensive amount of physiologic information is available from the repetitive and often invasive monitoring unique to the ICU. Furthermore, lead-time and selection bias (44-47) (the reason why patient location before ICU admission has been included in some risk-adjustment models) may account for some of the difficulty in accounting for ICU patients' outcomes. However, few studies have used current, state-of-the-art ICU outcome prediction models (35, 48) to evaluate how well severity of illness is measured among patients admitted to an ICU after previously receiving care on a general medical ward or at another hospital (43, 44, 46, 47). Furthermore, among the most rigorously developed and tested ICU prediction models, there has been no detailed evaluation of the “transfer effect” at the individual hospital level. Even a small bias in comparisons of observed versus expected deaths, as well as other outcomes, can substantially affect how high-quality institutions compare with peer hospitals.
Therefore, we studied the incremental improvement that increasingly more detailed clinical and physiologic case-mix adjustment could provide in evaluating the greater severity of illness of transfer patients. We sought to examine whether full case-mix adjustment could adequately account for the increased risk for poor outcomes among transfer patients (that is, patients who received medical care before transfer to a referral center medical intensive care unit [MICU]). In addition, we investigated the potential bias in benchmarking outcomes produced by a transfer effect even with the use of state-of-the-art case-mix measures.
We obtained the data for this study from a prospectively developed cohort of 4579 consecutive admissions to the MICU at a tertiary care university hospital (800 beds, 24 of which are MICU beds) from 1 January 1994 to 1 April 1998. A study group of 4208 patients was available after previously published exclusion criteria were met (49) (Appendix). This MICU primarily admits non–cardiac care, medical patients with various conditions and is a referral center for patients with acute respiratory distress syndrome and hepatic failure. Surgical and trauma patients are admitted to other, specialized ICUs. Measurements of all dependent and independent variables were collected prospectively, whereas hypotheses were generated before data analyses but after the data were collected (secondary or post hoc analyses of a prospective cohort study). The data were obtained from the university's ICU clinical database, APACHE Medical Systems (APACHE Medical Systems, Inc., Vienna, Virginia). The data quality was ensured by using previously established practices (40). The need for informed consent was waived by the institutional review board.
Standard Acute Physiology, Age, and Chronic Health Evaluation (APACHE) methods were used to construct case-mix adjustment and severity-of-illness scores (40). A daily Acute Physiology Score (APS) based on the most abnormal values of 17 specific physiologic variables during the previous 24-hour period measured disease severity. The age and comorbidity score was derived from previously validated weights for the patients' ages and one of seven severe comorbid conditions (hepatic failure, cirrhosis, immunosuppressive conditions, hematologic malignant conditions, lymphoma, metastatic cancer, and AIDS). The type and amount of monitoring and treatment received over each 24-hour period were recorded by using the Therapeutic Intervention Scoring System (TISS) (50). Patients were characterized as receiving either monitored care or active treatment, such as ventilator management (51). Each patient's primary reason for MICU admission (MICU admission diagnosis) was recorded as 1 of 430 diseases, injuries, surgical procedures, or events most immediately threatening to the patient and requiring the services of the MICU.
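The daily APS construction described above — taking the most abnormal value of each physiologic variable over a 24-hour period and summing its severity points — can be sketched as follows. The variables, value bands, and point weights here are illustrative placeholders, not the actual published APACHE weights:

```python
def daily_aps(readings, weight_tables):
    """Score one 24-hour period: for each physiologic variable, take the
    most abnormal recorded value and look up its severity points.

    readings: dict mapping variable name -> list of values over 24 h
    weight_tables: dict mapping variable name -> list of
        ((low, high), points) bands; the band containing a value
        gives its points, with 0 points for the normal range.
    """
    def points_for(var, value):
        for (low, high), pts in weight_tables[var]:
            if low <= value <= high:
                return pts
        return 0

    score = 0
    for var, values in readings.items():
        # "most abnormal" = the value earning the most severity points
        score += max(points_for(var, v) for v in values)
    return score

# Illustrative bands (NOT real APACHE weights): heart rate and creatinine
weights = {
    "heart_rate": [((0, 39), 8), ((40, 54), 5), ((55, 109), 0),
                   ((110, 139), 5), ((140, 999), 7)],
    "creatinine": [((0.0, 1.4), 0), ((1.5, 1.9), 4), ((2.0, 3.4), 7),
                   ((3.5, 99.0), 10)],
}
day = {"heart_rate": [88, 132, 101], "creatinine": [1.2, 2.1]}
print(daily_aps(day, weights))  # 5 (HR 132) + 7 (Cr 2.1) = 12
```

The scoring keeps only the single worst value per variable per day, which is why the APS declines as a patient's physiology normalizes during the MICU stay.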
Our outcome measures were MICU and hospital lengths of stay, MICU and hospital mortality rates, and MICU readmission rates. Intensive care unit and hospital length of stay is also an established proxy measure of overall resource use (52, 53). Our primary predictor variables included age, comorbid conditions, diagnosis variables, and admission APS. General ICU severity models are primarily developed to predict mortality rate and length of stay. Although we did not use the APACHE model for most of our analyses, we did use the APS component of the APACHE score as an essential variable to control for severity of illness.
Data collected for all admissions to the MICU also included patient demographic characteristics, hospital and MICU admission dates and times, location from which a patient was admitted to the MICU, and the time from admission at the original source to the MICU admission at the study hospital (hours to MICU). Patients admitted directly to the MICU from the emergency department or an ambulatory clinic were categorized as “direct admissions.” Patients admitted from any of the non-ICU general nursing floors or telemetry units were categorized as “floor admissions.” Finally, patients transferred from another hospital were categorized as “transfer admissions,” 98% of whom were from another hospital's ICU.
We used three predictive models to develop benchmarks to study the incremental effect of increasingly more precise risk adjustment on predicting outcomes for the directly admitted, floor, and transfer patients. For these analyses, we consolidated the admission diagnoses to 19 categories to have sufficient sample size for each group. First, we examined the effect of admission source on outcomes without any diagnostic or physiologic information in an unadjusted model. Second, in the partial-adjustment model, we controlled for age, sex, comorbid conditions, and diagnosis only. This traditional adjustment is similar to or even more precise than many general hospital case-mix and severity-adjustment methods that rely on administratively derived diagnosis-related group information. Finally, we added more precise acute physiology clinical information (APS) to adjust for physiologic derangements, yielding a full case-mix and severity-adjustment model. We also used the predicted mortality rates and length of stay information from the actual APACHE III model itself, which include an average ICU admission source correction. The results of this last model are reported in the Results section but not in Table 3.
We sought to demonstrate the magnitude of a bias in estimating mortality rates created by a profiling system that does not account for admission source. We evaluated the difference in hospital mortality rate for a medical center that receives approximately 25% of its MICU patients as transfers from another hospital that receives no transfer patients at all. By using the coefficients from the full case-mix–adjusted logistic regression model, we calculated the expected mortality rate if 25% of patients were transfer patients and the expected mortality rate if there were no transfer patients. The difference between these two values represented the magnitude of error that might be mistakenly assumed if a profiling program does not account for transfer status.
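The excess-death calculation described above can be sketched as follows. The baseline logit and the transfer coefficient here are hypothetical illustrations, not the paper's fitted values; the structure — average predicted mortality under a 25% transfer mix minus predicted mortality with no transfers — is what matters:

```python
import math

def predicted_mortality(logit_base, transfer_coef, is_transfer):
    """Predicted death probability from a logistic model:
    p = 1 / (1 + exp(-(baseline logit + transfer effect)))."""
    logit = logit_base + (transfer_coef if is_transfer else 0.0)
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical values for illustration only: a baseline logit giving
# 15% mortality for a fully case-mix-adjusted patient, and a positive
# coefficient for transfer status.
logit_base = math.log(0.15 / 0.85)
transfer_coef = 0.45

# Expected mortality for a hospital with 25% transfers vs. 0% transfers
p_direct = predicted_mortality(logit_base, transfer_coef, False)
p_transfer = predicted_mortality(logit_base, transfer_coef, True)
rate_25 = 0.75 * p_direct + 0.25 * p_transfer
rate_0 = p_direct

# "Excess deaths" per 1000 admissions attributable solely to ignoring
# admission source in the benchmark
excess_per_1000 = (rate_25 - rate_0) * 1000
print(round(excess_per_1000, 1))
```

With these made-up coefficients the penalty is on the order of 17 deaths per 1000 admissions; the paper's fitted model yields 14.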
We performed descriptive analysis of patient characteristics and patient outcomes (MICU and hospital mortality rates, MICU and hospital lengths of stay, and MICU readmission rate) grouped by admission source. We performed bivariate analysis to evaluate associations between categorical outcomes (death and MICU readmission) and predictor variables by using the chi-square test. Logistic regression was used to evaluate independent associations for categorical predictor variables. Potential interactions between APS and MICU lengths of stay, as well as APS and being discharged from the MICU with a do-not-resuscitate order were evaluated for the mortality models. Interactions between APS and being discharged from the MICU with a do-not-resuscitate order were also evaluated for the length of stay models. There were no significant differences when these interaction terms were included in the models. The Student t-test or one-way analysis of variance was used to compare continuous outcomes (lengths of stay), and linear regression was used to evaluate independent associations for continuous variables. Since hospital and MICU lengths of stay are skewed, these outcome variables were log transformed before we conducted statistical analyses. Therefore, percentage increases in lengths of stay are reported. In all of the analyses, P values were two tailed and considered significant if they were less than 0.05. P values were adjusted for multiple comparisons. The database was exported from the APACHE Medical Systems Database to SPSS, version 9.0 (SPSS, Inc., Chicago, Illinois), for all the above statistical analyses.
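Because length of stay was log-transformed before regression, a coefficient b on a predictor is reported as a percentage change in length of stay via (exp(b) - 1) × 100. A minimal sketch of that back-transformation, using made-up coefficients for the transfer effect under two levels of adjustment:

```python
import math

def coef_to_percent(b):
    """Convert a coefficient from a regression on log(length of stay)
    into the percentage change in length of stay it implies."""
    return (math.exp(b) - 1.0) * 100.0

# Hypothetical coefficients for illustration: the effect of transfer
# status on log(LOS) in a partial- and a full-adjustment model.
for label, b in [("partial adjustment", 0.40), ("full adjustment", 0.18)]:
    print(f"{label}: {coef_to_percent(b):.0f}% longer stay")
```

This is why the results are phrased as percentage increases in length of stay rather than as additional days.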
The funding source had no role in the design, conduct, or reporting of the study or in the decision to submit the manuscript for publication.
Table 1 shows demographic and clinical information for the three patient groups. As expected, transfer and floor patients had significantly longer courses of treatments before MICU admission than patients directly admitted to the MICU. Furthermore, transfer and floor patients were sicker at the time of MICU admission and at MICU discharge and had 20% to 30% larger declines, respectively, in their APS during the MICU stay. In addition, the admission diagnoses for these groups of patients often differed. Transfer patients were more likely to be admitted with complex medical conditions, such as severe sepsis; acute respiratory distress syndrome; and various complications due to hepatic failure, especially upper gastrointestinal bleeding. Both floor and transfer patients also had significantly more comorbid conditions.
Compared with directly admitted patients, transfer patients had mean MICU stays 1.5 times longer and hospital stays almost twice as long (Table 2). Even compared with floor patients, transfer patients had 20% longer MICU stays. Standardized mortality ratios comparing observed with predicted hospital mortality were 0.98 (CI, 0.83 to 1.14) for transfer patients, 0.89 (CI, 0.77 to 1.02) for floor patients, and 0.82 (CI, 0.42 to 1.22) for directly admitted patients. Transfer patients had 1.4- to 2.5-fold higher hospital mortality rates than any other group (Table 3). Transfer patients were almost 80% more likely to be readmitted to the MICU than directly admitted patients and had hospital mortality rates nearly two times higher.
The multivariate analyses show that increasingly detailed and sophisticated case-mix and severity measures were still ineffective in adjusting for admission source on length of stay when transfer patients were compared with directly admitted patients (Table 3). Only by including an admission source variable, such as in the actual APACHE III model, could the transfer effect for MICU length of stay be eliminated. Hospital stay, however, was still longer for transfer patients than for directly admitted patients, although the APACHE III transfer variable reduced this effect significantly (5% increased length of stay [CI, 1% to 16%]). When we compared transfer patients with all other patients (Table 3), the effect of admission source on length of stay was reduced but mortality rate was not. To test whether length of stay differences were affected by differences in mortality rates, we analyzed length of stay with and without patients who died. Our results were not significantly different when patients who died either in the MICU or after MICU discharge were included or excluded from the analyses.
We also found that admission source was a strongly significant independent predictor of hospital mortality (Table 3). Even after incorporating the standard APACHE III admission source correction and adjustment for the duration of therapy at the previous location, transfer patients had significantly higher MICU mortality rates (odds ratio, 2.0 [CI, 1.5 to 2.6]) and were more likely than directly admitted patients to die before discharge from the hospital.
Bias in observed versus expected deaths can be substantial. Even with identical efficiency and quality, a referral hospital with a 25% MICU transfer rate (similar to our current level), compared with another hospital with a 0% transfer rate, would be penalized by 14 excess deaths per 1000 admissions when a benchmarking program adjusts only for case mix and severity of illness and not for the source of admission.
Severity of illness and case-mix measures are used extensively for creating and comparing the risk-adjusted outcomes of different health care providers. The assumption is that the risk-adjustment method will account for enough of the differences in patient case mix that the residual differences in outcomes are mainly due to quality. This study demonstrates that a substantial underestimation of transfer patients' resource use and outcomes occurs even when the best risk-prediction measures available are used. This underestimate is almost certainly worse when only administrative or diagnosis-related group information is used (6, 27). Moreover, this underestimate can substantially penalize referral centers with more transfer patients. Even a small excess in deaths (such as the 1.4% increase we found) is all it takes to make a high-quality institution look worse than its peer hospitals.
Patients transferred to a MICU from another hospital had hospital stays twice as long as, and hospital mortality rates twice as high as, those of patients directly admitted to the MICU. These data support previous findings that transfer patients, especially those transferred to an ICU, may be among the most resource-intensive patients in a medical system (26, 33, 36). Arguably, the ICU is the best place to evaluate the adequacy of risk adjustment for important outcomes, such as length of stay and mortality. In the ICU, validated clinical–physiologic measures are readily available and are most precisely measured because of the intensity of one-on-one care, the frequency with which measures can be taken, and the unique use of invasive monitors.
The importance of the admission source in predicting clinical outcomes has been reported previously (26, 33-36, 44, 47). Bernard and colleagues (26) and Gordon and Rosenthal (36) found major transfer effects for individual patients but included only modest case-mix adjustment. The bias created by not accounting for referral patients in large populations has also been recorded by Bailey and colleagues (54), who showed that patients who travel from one site to another within the TennCare health plan were frequently sicker than nonreferral patients. Escarce and Kelley's study (47) was among the first to use more sophisticated case-mix adjustment, employing the Acute Physiology Scores from APACHE II to show the importance of admission source. During the development of the APACHE III predictive model, weighting for the patient's location before ICU admission (patient origin) and length of stay before ICU admission was incorporated to lessen the effect of lead-time and selection bias on predicting outcome (40, 55). Because this weighting is based on the average association of a surrogate case-mix measure across many hospitals, there are concerns that this surrogate severity measure (for example, the APACHE admission source weight) may not accurately account for the underlying severity of illness at individual hospitals (40).
It is appropriate to ask why state-of-the-art diagnosis and physiologic severity adjusters cannot account for the greater risk for adverse outcomes of transfer patients in our hospital. What is being missed? One hypothesis relates to differences in quality of care. However, the argument that the worse outcomes for transfer patients are due to poorer quality of care seems unlikely. First, the outcomes for transfer patients at the study hospital were worse than those for directly admitted patients, but they were still good on the basis of the standardized mortality ratios from the APACHE III model. Second, referral centers tend to pride themselves on succeeding where others have failed. Third, the resource use and care activity ratings suggest that these patients received very active care. There is certainly no evidence of being quick to “give up.”
It has been suggested by others that admission source reflects lead-time bias when patients have received various durations of care at a previous location (44, 45). Our finding that a longer stay at the original admission source was associated with worse outcomes may be evidence of lead-time bias. However, when the amount of time the patient spent at the admitting location was analyzed independently from the origin of a patient's previous care, the lead-time variable did not substantially diminish transfer patients' poorer outcomes.
It seems more likely that transfer to another ICU is often associated with some unmeasured aspect related to severity of illness. Perhaps failure to respond to initial treatment and management underlies this phenomenon. Although the most validated ICU prediction models (such as APACHE III, the Mortality Prediction Model, and the Simplified Acute Physiology Score) include detailed clinical measures, they do not include a measure of which treatments have already been attempted, nor do they incorporate information on response to previous therapies or physiologic reserve. This may also explain why floor patients had outcomes similar to those of transfer patients. Floor and transfer patients are much more likely to have not responded to standard therapy in settings where care durations are longer than in an emergency department or clinic (26, 36, 47), or they may have less physiologic reserve. Although APACHE III includes a correction factor for transfer status that adjusts for the average transfer effect (40, 48), it is not surprising that a surrogate severity measure such as “transfer patient” does not work well at all hospitals because admission source probably represents a heterogeneous phenomenon. Patients may be transferred because of insurance reasons, for tertiary care, to receive a specific procedure or treatment, to be closer to home, or because of patient or family distrust of or unhappiness with the transferring facility. The average effect of being transferred from another hospital may vary dramatically from institution to institution. The higher rate of hepatic failure and lung injury or acute respiratory distress syndrome among patients transferred to our institution supports the hypothesis that higher proportions of patients with more complex medical disorders may inordinately bias prediction models developed with different patient populations. 
The inclusion of a “transfer” variable in the predictive model is simply a best attempt to control for some undefined, and unmeasured, source of severity.
The implications of our findings extend beyond the ICU. Most areas of health care are being scrutinized for both their costs and quality. Our finding that even the best available ICU biophysiologic measures of severity of illness could not adjust for the effect of admission source at a referral center suggests that it is probably even more difficult to adequately risk-stratify non-ICU patients or groups of outpatients, especially if only diagnosis-related group and administrative data are used (24, 27, 56). This contention is supported by our data that showed minimal improvements in case-mix adjustment when using only age, sex, and diagnosis, a method similar to or better than many practices used by payers and regulators. Health policy on quality measurement has often been driven by the idea that if more time and money were spent to improve case-mix adjustment, these problems could be solved. Critics have suggested that other issues beyond case mix may be influencing this imprecision (24, 27). Current statistical methods may not account for multilevel relationships, and the residual errors in many current models may not be related to practices or phenomena that are captured by current conceptual models.
Using conventional case-mix measures will not exclude the possibility of substantial bias due to unmeasured severity of illness (27, 57). Until such measures are available, administrators and researchers should exclude transferred patients or assign them to their original hospital when creating profiles (26, 36) (Table 4). However, this may still not be an adequate intervention. Referral of refractory patients may be occurring at multiple levels (that is, hospital to hospital, emergency department to emergency department, clinic to clinic, and health plan to health plan). Patients who are not satisfied with the care they receive, or for whom treatments have been ineffective, may be more likely to seek care elsewhere (36, 44, 58, 59). The ability to compare institutions and health plans may be more valid and clinically useful if their processes are evaluated rather than trying to incrementally improve or individually calibrate outcome prediction models (24, 29, 57, 60, 61).
We acknowledge several possible limitations in this study. Because our data are from only one referral center, our results may not generalize to other hospitals, particularly those that receive few or no transfer patients. However, our data suggest that significant referral bias can exist even with the best available risk adjustment and that the effect on a referral center or county hospital that cares for similar patients (26) could be substantially negative for profiling. To date, no studies of multicenter data sets have evaluated whether this phenomenon exists in a heterogeneous fashion between referral and nonreferral hospitals. We are also limited by not being able to accurately characterize the reasons for transfer because few data from the transferring hospital were collected. Future studies would benefit from refining which subgroups of transferred patients have significantly greater severity of illness. These might include patients transferred after local therapeutic failure versus those transferred after brief courses of stabilization. Similarly, patients transferred from larger institutions or other referral centers might be compared with those from smaller hospitals. Our data did not include sufficient patients with short stays at the referring institution to allow us to analyze this. Finally, results from a MICU may not be generalizable to other types of ICUs, such as general surgical units, where transferred patient populations are usually much smaller (48) unless the units are particularly specialized (35).
In summary, even in a setting with the most thorough diagnosis-based case-mix adjustment and the most physiologically precise severity-of-illness data, we found that patients who were transferred from another hospital had significantly greater resource use and worse outcomes than patients directly admitted from the emergency department or clinics. Benchmarking that does not adequately adjust for the increased severity of illness intrinsic to many transfer patients may substantially penalize referral centers with higher numbers of these patients and, more important, create barriers for patients being transferred to institutions that can supply the specialized care that they need.
For the length of stay and mortality rate analyses, we excluded 48 patients whose medical ICU admission was an ICU readmission from a previous ICU stay during the same hospitalization, but for which we did not have initial ICU physiology. We also excluded the 323 subsequent admissions for readmitted patients; however, their first admission was included in all analyses. This left 4208 patients eligible for the study. In addition, for all analyses evaluating risk for MICU readmission, we excluded 787 patients who died during their first MICU admission (since they were not at risk for readmission), as well as 111 patients who were admitted for a medication or drug overdose since these patients have very low rates of ICU readmission or adverse outcomes (62). This left a study cohort of patients for the readmission analyses who were discharged alive from the MICU and at risk for a subsequent MICU readmission.
These included chronic obstructive pulmonary disease, respiratory failure with and without ventilator use, pneumonia, acute respiratory distress syndrome, gastrointestinal bleeding, sepsis, stroke or intracranial hemorrhage, other neurologic disorders, hepatic failure, cardiac ischemia, congestive heart failure, cardiac arrest, renal failure, metabolic disorders, medication toxicities, drug overdose, and an “other” category.
Predictions of APACHE III risk-adjusted ICU and hospital deaths and lengths of stay were based on logistic regression models incorporated in the APACHE prognostic system (APACHE Medical Systems Inc., Vienna, Virginia) (63). These predictions include an average ICU admission source correction.