William Rollow, MD, MPH; Terry R. Lied, PhD; Paul McGann, SM, MD; James Poyer, MS, MBA; Lawrence LaVoie, PhD; Robert T. Kambic, MSH; Dale W. Bratzler, DO, MPH; Allen Ma, PhD; Edwin D. Huff, PhD; Lawrence D. Ramunno, MD, MPH
Disclaimer: The opinions herein are those of the authors and are not necessarily those of the Centers for Medicare & Medicaid Services.
Disclosure: William Rollow, MD, MPH, had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Acknowledgments: The authors acknowledge Judith B. Kaplan, MS, Centers for Medicare & Medicaid Services; Mark Gottlieb, PhD, New Mexico Medical Review Association; Meghan B. Harris, MS, Ohio KePRO; Michael J. McInerney, PhD, Mountain Pacific Quality Health Foundation; and Rodney J. Presley, PhD, Georgia Medical Care Foundation, for their insights and helpful suggestions. They also thank the data team at the Colorado Foundation for Medical Care (Kris Mattivi, Beth Stevens, Steve Anderson, and Laura Palmer) and Sean Hunt of Quality Insights of Pennsylvania for expert assistance in trending the nursing home and home health measures.
Grant Support: All funding for this work was provided by the Centers for Medicare & Medicaid Services.
Potential Financial Conflicts of Interest: All authors work for the federal government on administering the Medicare Quality Improvement Organization Program or work in a Quality Improvement Organization as one of the program contractors.
Requests for Single Reprints: William Rollow, MD, MPH, Centers for Medicare & Medicaid Services, Mail Stop S3-02-01, 7500 Security Boulevard, Baltimore, MD 21244-1850; e-mail, firstname.lastname@example.org.
Current Author Addresses: Drs. Rollow, Lied, and McGann, Mr. Poyer, and Mr. Kambic: Centers for Medicare & Medicaid Services, Mail Stop S3-02-01, 7500 Security Boulevard, Baltimore, MD 21244-1850.
Dr. LaVoie: Centers for Medicare & Medicaid Services, Kansas City Regional Office, 601 East 12th Street, Suite 235, Kansas City, MO 64106.
Drs. Bratzler and Ma: Oklahoma Foundation for Medical Quality, 14000 Quail Springs Parkway, Suite 400, Oklahoma City, OK 73134.
Dr. Huff: Centers for Medicare & Medicaid Services, Boston Regional Office, JFK Federal Building, Boston, MA 02203.
Dr. Ramunno: Northeast Health Care Quality Foundation, 15 Old Rollinsford Road, Suite 302, Dover, NH 03820.
Rollow W, Lied T, McGann P, Poyer J, LaVoie L, Kambic R, Bratzler D, Ma A, Huff E, Ramunno L. Assessment of the Medicare Quality Improvement Organization Program. Ann Intern Med. 2006;145(5):342-353. doi:10.7326/0003-4819-145-5-200609050-00134
Recent reports have highlighted deficiencies in quality of health care in the United States (1-2). Several reports of nationwide improvements have also been published by such organizations as the Agency for Healthcare Research and Quality, the National Committee for Quality Assurance, the Joint Commission on Accreditation of Healthcare Organizations, and the Medicare Quality Improvement Organization (QIO) Program. The extent to which these improvements are attributable to the efforts of health plans, accreditors, or QIOs is unclear, given the absence of comparison groups (3-11).
The Centers for Medicare & Medicaid Services (CMS), the federal agency responsible for administering Medicare, Medicaid, and several other health care–related programs, seeks to improve the quality of health care for Medicare beneficiaries through contracts with QIOs (12)—state-based organizations staffed with clinicians, analysts, and others with expertise in case review and quality improvement. The 53 QIO contracts cover the 50 U.S. states, the District of Columbia, Puerto Rico, and the Virgin Islands. A single organization can hold more than 1 QIO contract. Appendix Figure 1 shows the locations of QIO lead offices.
The size of each QIO is represented by the size of each star. Maine and Vermont QIO coverage is directed from the New Hampshire QIO office (Maine, New Hampshire, and Vermont are similar in QIO size).
The most recently concluded QIO contract period, the 7th Scope of Work, began in 2002. At various points during this period, the CMS began public reporting of provider performance on quality measures for nursing homes, home health agencies, and hospitals (13-15). We evaluated the effect of the QIO Program in 4 clinical settings by using performance data for 41 quality measures and explored the implications of these findings for future Program evaluations.
For the 7th Scope of Work, the CMS expanded the QIO contract, which was previously limited to hospitals and physician offices, to include nursing homes and home health agencies. For each of the 4 settings, the CMS required QIOs to offer assistance to all interested providers in their state or jurisdiction. In the nursing home, home health agency, and physician office settings, QIOs were also required to recruit subsets of providers, known as identified participant groups (IPGs), that would receive focused assistance related to clinical quality measures. There was no IPG requirement for the hospital setting.
The 53 QIOs recruited voluntary IPGs among nursing homes, home health agencies, and physician offices during the initial months of the 7th Scope of Work. The CMS used outcome measures and clinical process measures to evaluate the performance of each QIO as part of a performance-based service contract. The QIOs were evaluated on improving statewide and IPG clinical process measures and provider satisfaction with their QIO.
Each QIO was required to recruit an IPG of at least 10% but no more than 15% of the nursing homes in its state or jurisdiction and at least 5% but no more than 7.5% of primary care practitioners (physicians, nurse practitioners, and physicians' assistants). For the home health setting, the minimum number of providers in the IPG was 30% of the agencies in the state; there was no maximum number.
The CMS provided general guidance to the QIOs on the selection of IPG providers but did not control the selection process. The QIOs shared information among themselves about appropriate factors, such as willingness to commit resources to quality improvement and baseline performance, for which there were opportunities for improvement.
In this study, we classified providers not participating in an IPG as non-IPG providers. Nursing homes and home health agencies that were participants in an IPG were required to select 1 or more quality measures to target for improvement. For these 2 settings, we subdivided providers in IPGs by measure into 2 subgroups: IPG-select and IPG-other (Appendix Figure 2). For a given measure, the IPG-select subgroup consists of IPG providers that elected to focus on that measure, and the IPG-other subgroup consists of IPG providers that selected other measures.
Quality improvement organizations (QIOs) were required to recruit a limited number of nursing homes, home health agencies, and physician offices into identified participant groups (IPGs) for focused quality improvement interventions. Facilities not participating in the IPG for a given setting are labeled “non-IPG” for that setting. For a given quality measure, IPG nursing homes and home health agencies are subdivided into those focusing on a specified quality measure (IPG-select) and those not focusing on the specified measure (IPG-other). *Low-volume nursing homes and home health agencies and non–primary care physicians were not eligible for the IPG for contractual reasons.
We collected information on the intensity of QIO assistance for nursing homes, but not for the other provider settings, so that we could classify the non-IPG and IPG providers according to 4 levels of QIO intervention (high, medium, low, and none). The highest level of activity involved on-site visits to the provider by QIO staff or planned multicontact educational interventions in a group setting; low-level activity was often limited to sending written or electronic material to the nursing home. Overall, there was a strong relationship between participant status and level of QIO intervention: only 32.5% of the non-IPG facilities received a high level of QIO intervention, whereas 97.3% of the IPG facilities did. We did not collect information on non-QIO quality improvement programs in which non-IPG providers may have participated during the 7th Scope of Work.
At the statewide level, QIOs promoted quality improvement in the 4 settings through such activities as partnerships with provider organizations, work with business and consumer groups, regional educational meetings, and direct QIO communication with providers (10, 16-17). Development and dissemination of information on best practices and improvement tools gave providers resources that were useful in improvement work (6-7, 18-26). With the IPG providers, QIOs conducted more focused activities.
Quality measures selected by the IPGs were driven by different factors in each setting, including contractual direction and limitations, baseline performance, and method of improvement. Data are reported on 5 nursing home measures, 11 home health agency measures, 21 hospital measures, and 4 outpatient measures. One measure (infection) was not reported because it was measured in more than 1 way, and another measure (delirium) was not reported because very few providers specifically worked to improve performance in this area. The Appendix provides details on the selection and reporting of measures by setting.
We used data from nursing homes and home health agencies that were reported to the CMS through the systems required for Medicare payment: the Minimum Data Set (27) and the Outcome and Assessment Information Set (28). Medicare- and Medicaid-certified nursing homes are required to conduct Minimum Data Set assessments of all residents on admission and at mandated intervals. The Outcome and Assessment Information Set provides a comprehensive assessment of adult home care patients; like the Minimum Data Set assessment, its use is required on admission and at mandated intervals.
The hospital data were abstracted by clinical data abstraction contractors, who provide data support to the CMS. As in the 6th Scope of Work evaluation (8), we used random samples of 125 inpatient records per state per quarter for Medicare patients with a diagnosis of acute myocardial infarction, heart failure, or pneumonia or who had undergone surgery. Sample cases were weighted according to their probability of selection in the national quarterly sample. For the physician office setting, we analyzed CMS National Claims History data to assign each Medicare beneficiary to the practitioner who provided the majority of his or her care. Information on assignment of beneficiaries to practitioners and performance on physician office quality measures was compiled quarterly.
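The weighting described above is standard inverse-probability weighting: each sampled chart counts in proportion to the reciprocal of its chance of selection, so the weighted mean estimates the national rate. A minimal sketch (the function and data names are illustrative assumptions, not the study's actual analysis code):

```python
# Inverse-probability weighting of sampled cases. Each case carries a weight
# of 1/p, where p is its selection probability in the national quarterly
# sample; the weighted pass rate then estimates the national rate.

def weighted_rate(cases):
    """cases: list of (passed_measure: bool, selection_probability: float)."""
    total_weight = 0.0
    weighted_passes = 0.0
    for passed, prob in cases:
        w = 1.0 / prob                      # inverse-probability weight
        total_weight += w
        weighted_passes += w * (1.0 if passed else 0.0)
    return weighted_passes / total_weight

# Illustration: charts from a heavily sampled small state (p = 0.5) and a
# lightly sampled large state (p = 0.1).
sample = [(True, 0.5), (False, 0.5), (True, 0.1), (True, 0.1)]
# weighted_rate(sample) -> 22/24, about 0.917
```

Without the weights, the raw pass rate of this sample (3 of 4) would understate the contribution of the lightly sampled state's charts.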
For nursing homes, home health agencies, hospitals, and physician offices, we report only baseline and remeasurement data, because of space limitations and because the CMS evaluated QIO performance on the basis of improvement from baseline to remeasurement. The baseline and remeasurement periods were separated by about 2 years for nursing homes, home health agencies, and hospitals and by 3 years for physician offices.
For nursing homes, we used the second quarter of 2002 as the baseline period and the second quarter of 2004 as the remeasurement period. For home health, 1 May 2001 to 30 April 2002 was the baseline period and 1 February 2004 to 31 January 2005 was the remeasurement period. For the hospital setting, the first quarter of 2002 was the baseline period and the fourth quarter of 2004 was the remeasurement period. For the physician office setting, the baseline and remeasurement periods varied depending on the QIO contract cycle. The selection of baseline and remeasurement periods varied by setting because of contractual reasons and data set limitations.
For the nursing home setting, QIO contracts and publicly reported data required a minimum denominator of 30 for the chronic care measures and 20 for the acute care measures to create stable rates (qualifying providers). Approximately 13 000 of 16 000 nursing facilities, or about 80%, were included for each long-stay measure. The 1 short-stay measure, pain, had approximately 3100 qualifying providers. For nursing homes, the percentage of providers with excluded data because they did not meet the denominator requirements (exclusion rate) varied by measure from 17.8% to 44.9% for non-IPG providers and from 5.4% to 34.9% for IPG providers. For contractual and reporting reasons, in the home health setting, agencies had to have at least 30 episodes of care in their denominator for a particular measure to be included in the calculations. For the 11 home health agency measures in our study, approximately 6000 home health agencies (about 80%) were included in the rate calculations. The exclusion rate varied by measure from 10.4% to 17.2% for non-IPG providers and from 0.5% to 1.8% for IPG providers.
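The qualifying-provider rule amounts to a simple minimum-denominator filter. The sketch below (hypothetical field names and thresholds drawn from the nursing home setting described above, not CMS's actual processing code) applies the threshold and computes the resulting exclusion rate:

```python
# Providers whose measure denominator falls below the threshold (30 for
# chronic care measures, 20 for acute care measures in the nursing home
# setting) are excluded to avoid unstable rates. Field names illustrative.

MIN_DENOMINATOR = {"chronic": 30, "acute": 20}

def qualifying_providers(providers, measure_type):
    """providers: list of dicts with 'id' and 'denominator' keys."""
    threshold = MIN_DENOMINATOR[measure_type]
    return [p for p in providers if p["denominator"] >= threshold]

def exclusion_rate(providers, measure_type):
    kept = qualifying_providers(providers, measure_type)
    return 1.0 - len(kept) / len(providers)

homes = [{"id": "A", "denominator": 45},
         {"id": "B", "denominator": 12},
         {"id": "C", "denominator": 31}]
# exclusion_rate(homes, "chronic") -> 1 - 2/3, about 0.333
```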
In the hospital setting, the only missing information was from an occasional chart that was not sent to the clinical data abstraction contractors. The physician office setting measurement was based on claims data, and we have no measure of claims that were not submitted.
For nursing homes and home health agencies, the national mean baseline and remeasurement rates were calculated, by quality measure, for the participant groups (non-IPG, IPG-other, and IPG-select). National means were calculated as means of individual facility rates. The sample for both nursing homes and home health agencies consisted of qualifying providers that reported quality measure rates at both baseline and remeasurement. For the hospital setting, we report data on the number of patients sampled nationwide with primary diagnoses of acute myocardial infarction, heart failure, and pneumonia, as well as those eligible for surgical infection prevention measures. Descriptive statistics for the physician office setting are the number of eligible patients at baseline and the baseline and remeasurement rates for Medicare beneficiaries who received most of their care from practitioners in IPG and non-IPG physician offices.
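Because national means were computed as unweighted means of facility-level rates, restricted to providers with valid rates at both time points, the calculation reduces to the following sketch (variable names are illustrative, not from the study's analysis code):

```python
from statistics import mean

# facility_rates maps facility_id -> (baseline_rate, remeasure_rate), with
# None marking a period in which the facility did not qualify. Only
# facilities with valid rates at BOTH baseline and remeasurement enter the
# national means.

def national_means(facility_rates):
    paired = [(b, r) for b, r in facility_rates.values()
              if b is not None and r is not None]
    baselines = [b for b, _ in paired]
    remeasures = [r for _, r in paired]
    return mean(baselines), mean(remeasures)

rates = {"NH1": (13.0, 6.2), "NH2": (29.7, 23.0), "NH3": (None, 8.0)}
# NH3 is dropped (no qualifying baseline rate), so the means are taken over
# NH1 and NH2 only: national_means(rates) -> (21.35, 14.6)
```

Restricting to the same facilities at both time points keeps the baseline-to-remeasurement comparison within a fixed cohort rather than confounding it with entry and exit of providers.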
We also analyzed time-trend data for nursing homes and home health agencies. These data included all nursing homes and home health agencies that met the denominator requirement for at least 1 period, regardless of the number of periods in which they reported data; therefore, the denominator counts are not the same as those in the previous analyses.
Table 1 shows provider characteristics across non-IPG and IPG nursing homes, home health agencies, and physician offices. Compared with non-IPG nursing homes, IPG nursing homes tended to have a lower proportion of Medicaid residents and were larger, more often found in urban locations, and more likely to be located in a hospital. The IPG home health agencies tended to be larger than non-IPG home health agencies. In the physician office setting, the predominant specialty codes were similar for the 2 groups: general practice, family practice, internal medicine, and obstetrics/gynecology. However, IPG physician offices had a higher proportion of family practice and internal medicine specialists compared with non-IPG physician offices.
Appendix Table 1 shows the total number of nursing homes and the number of IPG nursing homes for the second quarters of 2002 and 2004, by state or jurisdiction. Appendix Table 2 shows similar data for home health agencies. Appendix Table 3 shows the number of primary care physicians at baseline, IPG practitioners, and IPG practitioners as a percentage of total primary care physicians, by state or jurisdiction.
Appendix Table 1.
Appendix Table 2.
Appendix Table 3.
Table 2 shows quality measures for IPG and non-IPG nursing homes. For all 5 quality measures, IPG nursing homes experienced greater improvement from baseline to remeasurement than did non-IPG nursing homes. Non-IPG nursing homes improved in 3 measures, whereas IPG nursing homes improved in all 5 measures. The IPG-select nursing homes showed greater improvement than did IPG-other nursing homes in all 5 measures. The most substantial improvements for the IPG-select nursing homes were in the chronic care pain measure (from 13.0% of residents at baseline to 6.2% at remeasurement), the short-stay (post–acute care) pain measure (from 29.7% to 23.0%), and the restraints use measure (from 16.5% to 8.4%). Figures 1, 2, and 3 show time-trend results for the nursing home measures.
Quarterly means were calculated from facilities with a reportable score in that quarter. Lower scores indicate better performance. Numbers in parentheses are the average numbers of facilities across quarters. IPG = identified participant group. For a given measure, the IPG-select subgroup consists of IPG providers that elected to focus on that measure, and the IPG-other subgroup consists of IPG providers that selected other measures.
Table 3 shows mean data on quality measures at baseline and remeasurement for IPG and non-IPG home health agencies. Both groups showed improvement from baseline to remeasurement in mean facility rates for 10 of 11 measures. In addition, for 10 of 11 measures, improvement was greater for IPG agencies than non-IPG agencies. For the 1 measure with an overall decline in performance (acute care hospitalizations), IPG and non-IPG agencies had the same rate of hospitalization at baseline and remeasurement. Comparisons within IPG agencies show that the IPG-select agencies improved more than IPG-other agencies on all 11 quality measures. Appendix Figures 3, 4, 5, 6, 7, and 8 show time trends for the home health agency quality measures.
All rates are calculated as 12-month averages, ending with the last day of the month shown. “0” represents the 12-month period ending the month before the start of the contract. For the national group, the average number of facilities across all periods is given. All other groups include only home health agencies with valid data in the first and last time points shown. For these quality measures, lower scores indicate better performance. IPG = identified participant group; SOW = Scope of Work. For a given measure, the IPG-select subgroup consists of IPG providers that elected to focus on that measure, and the IPG-other subgroup consists of IPG providers that selected other measures.
All rates are calculated as 12-month averages, ending with the last day of the month shown. “0” represents the 12-month period ending the month before the start of the contract. For the national group, the average number of facilities across all periods is given. All other groups include only home health agencies with valid data in the first and last time points shown. For these quality measures, higher scores indicate better performance. IPG = identified participant group; SOW = Scope of Work. For a given measure, the IPG-select subgroup consists of IPG providers that elected to focus on that measure, and the IPG-other subgroup consists of IPG providers that selected other measures.
All rates are calculated as 12-month averages, ending with the last day of the month shown. “0” represents the 12-month period ending the month before the start of the contract. For the national group, the average number of facilities across all periods is given. All other groups include only home health agencies with valid data in the first and last time points shown. For this quality measure, higher scores indicate better performance. IPG = identified participant group; SOW = Scope of Work. For a given measure, the IPG-select subgroup consists of IPG providers that elected to focus on that measure, and the IPG-other subgroup consists of IPG providers that selected other measures.
There were no comparison groups for the hospital setting; therefore, overall trends in hospital performance were examined. Table 4 shows the nationally weighted mean performance rates for the 21 hospital quality measures from the first quarter of 2002 and the fourth quarter of 2004. Nineteen of the 21 measures improved during this period. The 2 measures for which no improvement was seen were use of angiotensin-converting enzyme inhibitors for left ventricular systolic dysfunction in heart failure and selection of prophylactic antibiotics for surgical patients.
Table 5 shows baseline and remeasurement performance for IPG and non-IPG physician offices. The IPG offices showed improvement in all 4 measures, whereas the non-IPG offices showed improvement in 2 of the 4 measures but slight worsening in the screening mammography and diabetic retinal eye examination measures. Baseline performance was generally better for IPG offices than non-IPG offices. The greatest improvements for IPG offices occurred in the diabetic hemoglobin A1c testing measure (improvement of 8.7 percentage points) and the diabetic lipid profile determination measure (improvement of 11.2 percentage points). Procedures performed outside of primary care practices (diabetic retinal eye examination and screening mammography) had more modest improvements.
We assessed whether national clinical quality measures had improved in the QIO 7th Scope of Work and whether QIOs contributed to this improvement. We found that clinical quality improved for Medicare beneficiaries on 34 of 41 measures. These findings are consistent with published findings for a previous contract period for the inpatient hospital and outpatient (physician office) settings (8). However, for the first time, they now include the clinical performance results for nursing homes and home health agencies.
Assessment of the contribution of the QIO Program to quality improvement is challenging. Two types of contributions are possible. One type derives from the work that the Program does in partnership with stakeholder organizations and in support of CMS quality initiatives. This work includes the convening of partnerships and other activities that focus provider attention on the need to improve in a given area, the development of quality measures and support for the infrastructure by which provider performance data are publicly reported, and the development of improvement methods and tools. The other contribution is direct provision of technical assistance to providers.
As an example of the first type of contribution, in preparation for and during the 7th Scope of Work, the Program played a leading national role in creating the measurement, reporting, and improvement infrastructure for efforts to prevent surgical infection in hospitals. In 2000, there were no nationally available measures of surgical infection prevention, nor were there reliable estimates of the extent to which preventable infections occur. Through a QIO developmental project, in partnership with the Centers for Disease Control and Prevention, national performance measures were formulated that were broadly endorsed by major surgical and medical specialty societies (29). In another QIO project, a retrospective medical record review demonstrated a substantial opportunity to improve the quality of care for delivery of prophylactic antibiotics (30). A QIO-sponsored national collaborative that included 56 hospitals and 43 QIOs representing 50 states tested change ideas to prevent surgical site infections and to facilitate the spread of improvement methods between hospitals and QIOs across the United States (31-32). These activities created the foundation for the inclusion of surgical infection prevention measures in the QIO 7th Scope of Work. The substantial improvement that occurred during the contract period preceded the adoption of surgical infection measures by the Joint Commission on Accreditation of Healthcare Organizations and public reporting of hospital performance on these measures.
Although it is difficult to distinguish the unique contribution of the QIO Program through such activities, the Program has played a leading role with other stakeholders in national quality improvement initiatives. Increasingly, however, reviewers have focused on the second type of contribution that the Program may make: technical assistance by QIOs to providers.
Snyder and Anderson (9) attempted to assess the contribution of QIOs to the improvement in hospital measures in the QIO 6th Scope of Work by comparing the amount of improvement achieved by hospitals receiving different levels of QIO assistance. They did not find evidence of such a contribution. Their study, however, was criticized for its small and unrepresentative sample of 5 U.S. states, the short interval assessed (half of the contract period), and use of retrospective assignment by QIOs of the intensity of assistance that hospitals received in which the reliability of the assignment was not tested (33). Furthermore, in the context of a contract that required statewide improvement in which QIOs may have directed their resources away from hospitals that had internal capacity for improvement, a comparison among hospitals receiving different levels of assistance would not be a true test of the effects of QIOs (34). In another study of the effect of QIOs on hospitals in the 6th Scope of Work, some hospital quality improvement directors reported QIO assistance as being helpful, and others said that they did not need it (10).
The 2005 National Healthcare Quality Report recently released by the Agency for Healthcare Research and Quality reported 4-fold greater improvement on measures on which QIOs worked than on other measures included in the report (35). The Institute of Medicine, however, in a statutorily mandated review of the QIO Program whose results were released in February 2006, concluded that “the quality of health care received by Medicare beneficiaries has improved over time” but “the existing evidence is inadequate to determine the extent to which the QIO program has contributed directly to those improvements” (36).
We assessed the effect of technical assistance by comparing results among providers that received greater or lesser amounts of assistance from QIOs. Such a comparison was not possible in the hospital setting because QIOs were responsible for working with all hospitals in the state and we did not have reliable data about the relative intensity of assistance in this setting. In other settings, however, we found that providers that were recruited specifically by the QIO for receipt of assistance (those in an IPG) improved more on 18 of 20 measures than did providers who were not recruited, and improvement on the other 2 measures was similar. Nursing homes and home health agencies improved more on all measures on which they chose to work with the QIO (IPG-select) than on other measures (IPG-other).
Although there are potential limitations to the data used to assess trends in clinical measure results, evidence suggests that our findings are generally valid and reliable. Data for nursing homes and home health agencies are self-reported and therefore subject to reporting bias, which may be heightened by public reporting. However, these data are linked to payment, and providers may be penalized if they report incorrect information. According to Sangl and colleagues (37), the reliability of the Outcome and Assessment Information Set (home health agencies) and Minimum Data Set (nursing homes) data is acceptable, although evidence for the validity of the quality measures themselves is mixed. Kinatukara and associates (38) demonstrated low interrater reliability among experts using the Outcome and Assessment Information Set in test situations. The clinical hospital data are abstracted through independent review of medical records by the clinical data abstraction contractors, who use standardized data collection procedures with rigorous internal quality control (8). The trends in hospital measures are consistent with those recently reported by the Joint Commission on Accreditation of Healthcare Organizations, which used data self-reported by hospitals seeking accreditation during this period (5). Physician office claims data sometimes understate utilization (39).
Another potential limitation is the dearth of quantitative information on the intensity of assistance received by the participant groups. By contract, QIOs were required to improve statewide rates for the 7th Scope of Work quality measures and, more specifically, to improve them within an identified subset of providers, the IPG. Contract monitoring showed that IPG providers received greater assistance than did non-IPG providers, and we further confirmed this finding retrospectively for nursing homes (but not for other settings). Because non-IPG providers also received some QIO assistance, the observed difference between the 2 groups may be less than it would be if non-IPGs had received no assistance.
Our findings are consistent with an effect of QIO technical assistance on performance. For many programs, identifying recipients of assistance, helping them, and finding improvement might be a sufficient demonstration of program impact. However, there are alternative explanations for our findings. Chance is an unlikely explanation, given that we are working with population data sets and all of the trends are in the same direction. Secular trends, particularly public reporting, may explain some of the observed improvement, but not all of the measures improved, and the greater improvement for the IPG-select providers argues against this as the sole explanation.
Our findings could be influenced by selection bias and regression to the mean. The IPG was presumably selected in part by the QIOs because they viewed these providers as likely to improve on the basis of their baseline performance and the QIO's assessment of their internal capacity and motivation for improvement. The baseline performance for the IPG was generally similar to that of the non-IPG; the finding of greater improvement by this group for virtually all measures is therefore inconsistent with regression to the mean as a sufficient explanation for our findings. Selection bias could account for some of the difference between the IPG and non-IPG providers, but within the group of providers that the QIO recruited—the IPG—the greater improvement among IPG-select providers than IPG-other providers is not explainable simply by a selection effect.
The IPG-select providers, however, had worse baseline performance than the IPG-other providers, and there may be interactive effects with other factors, such as provider size and differences among QIOs at the state level. We attempted to explore these relationships through more detailed analysis. We stratified the various comparison groups into quartiles by baseline performance and further stratified by provider size. We also used inferential statistics to control for differences among these groups in baseline performance, size, and other factors. Ultimately, we decided not to report such analyses, given the lack of an a priori experimental design that would have allowed unbiased estimates of QIO effect.
In summary, improvements on most measures in the 7th Scope of Work were greater for the providers with which the QIO worked closely and were greater for the measures for which providers requested and received QIO technical assistance. These findings are consistent with an effect of the QIO Program and an effect of QIO technical assistance. The Program's effect is conjoined with that of other CMS initiatives and those of other stakeholders. Secular trend, regression to the mean, and selection bias may also have contributed to our findings related to QIO technical assistance.
Our study demonstrates the difficulty in evaluating a program that aims to offer assistance to those who request it and to achieve the maximum possible improvement nationally and within each U.S. state. To mitigate this difficulty, program interventions and related data collection techniques must be prospectively designed. Evidence of effect is most difficult to distinguish from other factors when historical controls are used, easier to distinguish when comparison groups are well matched to the intervention group, and clearest when selection bias is eliminated through randomized selection of the intervention group.
To improve our ability to assess program impact in the QIO 8th Scope of Work, which was launched in August 2005, we have made 3 changes. First, we have reduced the contract requirements for statewide improvement and will prospectively collect information on the level of assistance received by IPG and non-IPG providers. Second, we will seek to match IPG providers with controls in the non-IPG group. Finally, we will use an independently administered survey that will ask providers to assess the extent to which they would have achieved improvement without QIO assistance.
For the QIO 9th Scope of Work, which will begin in August 2008, we are in the process of convening an evaluation workgroup that will make recommendations on program design for that contract period, on the basis of the work of a contractor currently in place and on advice from a group of independent technical experts. Through that process, we will consider the potential for more fundamental redesign, such as randomized selection of the IPG, delayed implementation of assistance for a subset of providers, or both.
Performance data for nursing home and home health agency measures were publicly reported for the first time in 2002 and 2003, respectively. Criteria for selection of a particular measure for quality improvement were discussed among QIOs in “community of practice” national telephone conferences, e-mail listservs, and national meetings. For nursing homes, the CMS had no specific requirements for measure selection; the principal advice was to choose a measure for which there was significant “room for improvement.” In practice, this meant avoiding measures for which a facility's current performance was well above the national or state average or for which the provider had achieved the best possible performance. The guidance for home health agencies was found in the formal Outcome-Based Quality Improvement training system. Every home health agency in the IPG was encouraged to use the Outcome-Based Quality Improvement system, which includes a data-driven procedure for identifying measures for which an agency had significant room for improvement. In the outpatient setting, QIOs were evaluated on the relative improvement of all IPG practices on all 4 measures.
We report data on 5 nursing home measures. Four of these measures applied to long-term residents: the percentage of residents whose need for help with daily activities had increased from the previous assessment, the percentage with moderate pain daily or severe pain in the past 7 days, the percentage with pressure ulcers in the past 7 days, and the percentage who were physically restrained daily in the past 7 days. The fifth measure was the percentage of short-stay (post–acute care) residents with moderate or severe pain. We do not report data for 3 nursing home quality measures: percentage of short-stay residents with delirium (because few nursing homes selected this measure), percentage of chronic care residents with an infection (because of varying definitions of this measure across states), and percentage of residents whose ability to move about in and around their room got worse (because of concerns about the reliability of the measure). Four hundred thirty-four nursing homes targeted decline in activities of daily living for improvement during the 7th Scope of Work, 1809 targeted pain in chronic care residents, 609 targeted use of physical restraints, 1380 targeted development of pressure ulcers, and 668 targeted post–acute care pain.
We report data for 11 of 41 publicly reported measures. These 11 measures account for more than 75% of all home health agencies that selected measures for improvement. Four measures are related to mobility: improved ambulation, improved transferring, improved toileting, and improvement in pain interfering with activity. Another 4 measures are related to daily needs: improved upper-body dressing, improved bathing, improved management of oral medications, and stabilization in bathing. Two measures are related to medical emergencies: acute hospital admission and emergent care. One measure, improvement in confusion, is related to mental status. Improved toileting, upper-body dressing, confusion frequency, stabilization in bathing, and emergent care were each targeted by 50 to 100 home health agencies. Improved ambulation, transferring, pain, bathing, and medication management, as well as acute care hospitalization, were each targeted by 250 to 700 home health agencies.
We report national rates for 21 of 23 publicly reported measures: 6 acute myocardial infarction measures, 4 heart failure measures, 8 pneumonia measures, and 3 surgical infection prevention measures. Two measures were excluded owing to very low numbers of eligible cases: timely thrombolysis and timely percutaneous coronary intervention.
We report national rates for all 4 physician office measures, based on analysis of Medicare claims for beneficiaries enrolled in traditional fee-for-service Medicare. Three of these measures pertain to patients with diabetes: biennial retinal eye examination by an eye care professional, annual testing of hemoglobin A1c, and biennial lipid profile. The fourth measure was biennial mammography for women 52 to 69 years of age at the end of the 2-year period. The QIOs were also assessed on the statewide performance of these 4 measures on the basis of analysis of Medicare claims. In addition, QIOs were required to improve their state's rates of influenza and pneumococcal vaccination of Medicare beneficiaries 65 years of age or older as assessed by the Consumer Assessment of Health Plans Survey. The QIOs were not required to improve vaccination rates for their IPG practitioners because we did not have a reliable measure of vaccination rates for each IPG practitioner.
Copyright © 2016 American College of Physicians. All Rights Reserved.
Print ISSN: 0003-4819 | Online ISSN: 1539-3704