Medicine and Public Policy

The Role of Reputation in U.S. News & World Report's Rankings of the Top 50 American Hospitals

Ashwini R. Sehgal, MD

From Case Western Reserve University and MetroHealth Medical Center, Cleveland, Ohio.


Reproducible Research Statement: Study protocol: Not available. Statistical code: Available from Dr. Sehgal (e-mail, axs81@cwru.edu). Data set: Available at http://static.usnews.com/documents/health/2009-best-hospitals-methodology.pdf.

Potential Conflicts of Interest: None disclosed. Forms can also be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M09-2201.

Requests for Single Reprints: Ashwini R. Sehgal, MD, Division of Nephrology, 2500 MetroHealth Drive, MetroHealth Medical Center, Cleveland, OH 44109; e-mail, axs81@cwru.edu.


Ann Intern Med. 2010;152(8):521-525. doi:10.7326/0003-4819-152-8-201004200-00009

Background: U.S. News & World Report's annual rankings of the top 50 American hospitals in 12 specialties are based on a combination of subjective and objective measures of quality. Although the rankings have been criticized for emphasizing the subjective reputation of hospitals too strongly, the role of reputation in determining the relative standings of the top 50 hospitals has not been quantified.

Objective: To quantify the role of reputation in determining the relative standings of the top 50 hospitals in the 2009 edition of U.S. News & World Report's rankings.

Design: Cross-sectional study.

Setting: The top 50 hospitals in each of 12 specialties.

Measurements: Rankings based on the total U.S. News score and on a subjective reputation score.

Results: On average, rankings based on reputation score alone agreed with U.S. News & World Report's overall rankings 100% of the time for the top hospital in each specialty, 97% for the top 5 hospitals, 91% for the top 10 hospitals, and 89% for the top 20 hospitals. Hospital reputation was minimally associated with objective quality measures (mean Spearman ρ² = 0.03).
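As context for how summary statistics of this kind could be computed, the following is a minimal sketch with invented scores; the column names, values, and the position-by-position definition of agreement are assumptions for illustration, not the study's actual data or code (the statistical code is available from Dr. Sehgal, per the Reproducible Research Statement).

```python
# Minimal sketch with invented data: order hospitals by total U.S. News score
# and by reputation score alone, measure position-by-position agreement within
# the top k, and correlate reputation with one objective quality measure.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical scores for five hospitals in one specialty.
df = pd.DataFrame({
    "hospital":   ["A", "B", "C", "D", "E"],
    "total":      [100.0, 92.3, 88.1, 84.6, 80.2],  # overall U.S. News score
    "reputation": [66.2, 40.1, 35.8, 30.4, 22.9],   # % of specialists naming the hospital
    "mortality":  [0.62, 0.71, 0.55, 0.80, 0.66],   # stand-in objective measure
})

by_total = df.sort_values("total", ascending=False)["hospital"].tolist()
by_rep = df.sort_values("reputation", ascending=False)["hospital"].tolist()

def top_k_agreement(a, b, k):
    """Fraction of the top k positions holding the same hospital in both orderings."""
    return sum(a[i] == b[i] for i in range(k)) / k

print("Top-5 agreement:", top_k_agreement(by_total, by_rep, 5))

# Spearman correlation between reputation and the objective measure, squared.
rho, p_value = spearmanr(df["reputation"], df["mortality"])
print("Spearman rho^2:", rho ** 2)
```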

Limitation: The findings apply primarily to interpretations about the relative standings of the 50 top-ranked hospitals in each specialty and not necessarily to the hundreds of unranked hospitals.

Conclusion: The relative standings of the top 50 hospitals largely reflect the subjective reputations of those hospitals. Moreover, little relationship exists between subjective reputation and objective measures of hospital quality among the top 50 hospitals.

Primary Funding Source: None.

Topics

hospitals

Figures

Figure. Total U.S. News score vs. reputation score among the 50 top-ranked cancer hospitals.

Comments

Academic Ranking: A Reproducible Metric of Thought Leadership
Posted on February 7, 2011
Alexander Kutikov
Division of Urologic Oncology, Department of Surgical Oncology, Fox Chase Cancer Center, Philadelphia
Conflict of Interest: None Declared

The U.S. News & World Report rankings of U.S. hospitals have become integral to the marketing strategies of many healthcare systems (1). Moreover, patients, physicians, and administrators survey, quote, and legitimize this annual list. Nevertheless, we and others in the medical community have long felt that the methodology used to compile this list is highly flawed.

We read with great interest the recent article by Sehgal entitled "The Role of Reputation in U.S. News & World Report's Rankings of the Top 50 American Hospitals" (2). The author demonstrates that, of the three "quality domains" the magazine uses to generate hospital rankings (structure, process, and outcomes), the process domain has a dominant effect on the final rankings. Furthermore, the process domain is measured overwhelmingly by the hospital's reputation score (2). As such, the rankings in U.S. News & World Report essentially reflect a hospital's national reputation, a characteristic that is only further reinforced by the magazine's rankings. The findings were compelling, as a nearly perfect correlation existed between the reputation score and the final hospital ranking for all 12 specialties examined. For example, in urology, 100% of the top 10 hospitals and 95% of the top 20 hospitals were ordered identically by reputation score and overall ranking.

Despite the near-total dependence of the final ranking on reputation, it is not hard to argue that reputation is an imperfect proxy for quality, outcomes, and patient satisfaction. The reputation metric is created by surveying approximately 250 specialists in each field and asking them to name the top 5 hospitals in their respective specialty. Only 40-50% of specialists respond to the survey (2). These responses form the basis of U.S. News & World Report's reputation score and, according to Sehgal's findings, its overall rankings (2). This correlation of ranking with reputation could be due to a focus on ensuring "face validity": that is, if the programs that decision makers expected to be at the top were not at the top after a ranking methodology was developed, the methodology would be revised (3, 4).
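As a rough illustration of how a survey-based reputation score of this kind could be tallied, the sketch below counts "top 5" nominations and scores each hospital by the share of respondents naming it. The ballots are invented and the scoring rule is an assumption, not the magazine's published formula.

```python
# Sketch (assumed scoring rule): each responding specialist names up to five
# hospitals; a hospital's reputation score is the share of respondents who
# named it. The nomination lists below are invented for illustration.
from collections import Counter

responses = [
    ["Hospital A", "Hospital B", "Hospital C", "Hospital D", "Hospital E"],
    ["Hospital A", "Hospital C", "Hospital F", "Hospital B", "Hospital G"],
    ["Hospital B", "Hospital A", "Hospital H", "Hospital C", "Hospital D"],
]

nominations = Counter(h for ballot in responses for h in ballot)
n_respondents = len(responses)

reputation_score = {h: n / n_respondents for h, n in nominations.items()}
for hospital, score in sorted(reputation_score.items(), key=lambda x: -x[1]):
    print(f"{hospital}: named by {score:.0%} of respondents")
```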

Yet, within the subspecialty of urology, the U.S. News & World Report rankings seem to reflect reality particularly poorly (Table 4) (2). In fact, every year the rankings generate competitive discussions within the specialty, as some strong programs appear low on the list (or do not appear at all), while larger, "reputable," but less accomplished departments are placed high in the rankings.

Given the increased awareness and availability of contribution, outcomes, and process metrics, opportunities exist to improve the ranking methodology. Instead of relying largely on highly subjective and bias-prone survey responses, as U.S. News & World Report currently does, we propose a more objective metric we call "Academic Ranking." This metric attempts to capture a department's level of scientific contribution to its specialty relative to other departments nationwide. The Academic Ranking is a simple and reproducible measurement of thought leadership and is calculated by identifying the number of publications originating from a specific department/division within an institution, normalized by the Impact Factor™ of the peer-reviewed journal.

Using entirely publicly available data on the web, we calculated the Academic Ranking (2005-2010) among departments/divisions of urology in the U.S. and noted dramatic differences in rank compared to the U.S. News & World Report list. Two of the top three programs in the U.S. News & World Report rankings dropped out of the top 10. Meanwhile, the top 10 academically ranked programs dropped or rose an average of more than 5 positions (range 0 to 17) compared to their position in the U.S. News & World Report rankings. Indeed, no statistical correlation was seen between the programs ranked in the top 10 by U.S. News & World Report and our objective and reproducible Academic Ranking method (Spearman's rho -0.1, p=0.75) (5). Even when Academic Ranking was adjusted on a per-FTE basis to eliminate the benefit of size, the rank lists remained uncorrelated (Spearman's rho -0.33, p=0.23) (data and methodology available at www.hospitalacademicrank.com).

Quantification of each department's recent academic contribution, as with Academic Ranking, better reflects the thought leadership of existing faculty and the prominence of a department than a historical reputation score does. Furthermore, we submit that thought leadership measured in this manner is more relevant to providing high-quality, state-of-the-art patient care than national reputation derived from the opinions of 125 physicians. It affords an objective metric of a quality that until now has been assessed almost entirely subjectively. Integration of such objective measures into an overall ranking system would replace subjective opinions marred by historical biases and "face validity" with up-to-date, merit-based assessments.
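To illustrate the kind of calculation the proposed Academic Ranking involves, a minimal sketch follows: it sums journal impact factors over a department's publications, optionally normalizes per FTE, and compares the resulting ordering with another rank list using Spearman's rho. All department names, publication counts, and impact factors below are invented, and the exact weighting scheme is an assumption; the authors' own data and methodology are described at www.hospitalacademicrank.com.

```python
# Sketch of an impact-factor-weighted publication score (hypothetical data).
# A department's score is the sum of the journal Impact Factors of its
# publications; dividing by faculty FTE removes the advantage of size.
from scipy.stats import spearmanr

# One impact factor per publication, plus faculty FTE, per department (invented).
departments = {
    "Dept A": {"impact_factors": [8.7, 8.7, 4.0, 4.0, 2.5], "fte": 20},
    "Dept B": {"impact_factors": [4.0, 2.5, 2.5],           "fte": 8},
    "Dept C": {"impact_factors": [8.7, 2.5, 2.5, 2.5],      "fte": 12},
    "Dept D": {"impact_factors": [12.0, 8.7],               "fte": 15},
    "Dept E": {"impact_factors": [2.5, 2.5, 2.5, 2.5],      "fte": 6},
}

def academic_score(dept, per_fte=False):
    """Impact-factor-weighted publication score, optionally normalized per FTE."""
    total = sum(dept["impact_factors"])
    return total / dept["fte"] if per_fte else total

ranked = sorted(departments, key=lambda d: academic_score(departments[d]), reverse=True)
academic_rank = {d: i + 1 for i, d in enumerate(ranked)}

# Compare the Academic Ranking order with another rank list (e.g., a
# reputation-based ordering) using Spearman's rho.
other_rank = {"Dept A": 1, "Dept B": 4, "Dept C": 2, "Dept D": 5, "Dept E": 3}
names = sorted(academic_rank)
rho, p = spearmanr([academic_rank[d] for d in names], [other_rank[d] for d in names])
print("Academic ranks:", academic_rank)
print(f"Spearman rho vs. other list: {rho:.2f} (p={p:.2f})")
```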

In summary, as we move towards increasingly accountable, evidence-based care, we invite the medical community to develop, test, and publish other objective performance assessments - such as the Academic Ranking - to better inform the public of our strengths and identify our weaknesses. Let's change marketing to "metric-ing."

References

1. Larson RJ, Schwartz LM, Woloshin S, Welch HG. Advertising by academic medical centers. Arch Intern Med. 2005;165(6):645-51.

2. Sehgal AR. The role of reputation in U.S. News & World Report's rankings of the top 50 American hospitals. Ann Intern Med. 2010;152(8):521-5.

3. Sweitzer K, Volkwein J. Prestige among graduate and professional schools: comparing the U.S. News' graduate school reputation ratings between disciplines. Res High Educ. 2009;50:812-36.

4. Brooks R. Measuring university quality. Rev High Educ. 2005;29:1-21.

5. Kutikov A, Rozenfeld B, Egleston BL, Sirohi M, Hwang R, Uzzo RG. "Academic Ranking": a reproducible metric of thought leadership. Manuscript in preparation.

