John W. Peabody, MD, PhD; Jeff Luck, MBA, PhD; Peter Glassman, MBBS, MSc; Sharad Jain, MD; Joyce Hansen, MD; Maureen Spell, MD; Martin Lee, PhD
Acknowledgments: The authors thank Elizabeth O'Gara (University of California, Los Angeles), Julianne Arnall (Stanford University), and Molly Bates Efrusy and Ojig Yeretsian (University of California, San Francisco), who trained and coordinated the standardized patient visits; the staff of the hospitals that worked so diligently to simulate the visits, provide their laboratory data, and introduce the simulated medical records; Bret Lewis, Ed La Calle, and Dan Bertenthal for their programming assistance and keen insights; Anne Sunderland and Miriam Polon for their assistance with the manuscript; and the actors for their fine performances.
Grant Support: By grant 11R 98118-1 from the Veterans Affairs Health Service Research and Development Service, Washington, DC. Dr. Peabody is also a recipient of a Senior Research Associate Career Development Award (1998 to 2001) from the Department of Veterans Affairs.
Potential Financial Conflicts of Interest: None disclosed.
Requests for Single Reprints: John W. Peabody, MD, PhD, Institute for Global Health, 74 New Montgomery, Suite 508, San Francisco, CA 94105; e-mail, email@example.com.
Current Author Addresses: Dr. Peabody: Institute for Global Health, 74 New Montgomery Street, Suite 508, San Francisco, CA 94105.
Dr. Luck: Department of Health Services, University of California, Los Angeles, School of Public Health, PO Box 951772, Los Angeles, CA 90095-1772.
Dr. Glassman: Division of General Internal Medicine (111G), Veterans Affairs Greater Los Angeles, 11301 Wilshire Boulevard, Los Angeles, CA 90073.
Dr. Jain: Medical Service (111), Veterans Affairs Medical Center, 4150 Clement Street, San Francisco, CA 94121.
Dr. Hansen: 3801 Sacramento Street, Suite 309, San Francisco, CA 94118.
Dr. Spell: 4950 Sunset Boulevard, Los Angeles, CA 90027.
Dr. Lee: Center for the Study of Healthcare Provider Behavior, Veterans Affairs Medical Center (152), 16111 Plummer Street, Building 25, Sepulveda, CA 91343-2036.
Background: Worldwide efforts are under way to improve the quality of clinical practice. Most quality measurements, however, are poorly validated, expensive, and difficult to compare among sites.
Objective: To determine, by comparison with standardized patients (the gold standard method), whether clinical vignettes accurately measure the quality of clinical practice, and whether vignettes are a more or less accurate method than medical record abstraction.
Design: Prospective, multisite study.
Setting: Outpatient primary care clinics in 2 Veterans Affairs medical centers and 2 large, private medical centers.
Participants: 144 of 163 eligible physicians agreed to participate; of these, 116 were randomly selected to see standardized patients, to complete vignettes, or both.
Measurements: Scores, expressed as the percentage of explicit quality criteria correctly completed, were obtained by using the 3 methods.
Results: Among all physicians, the quality of clinical practice as measured by the standardized patients was 73% correct (95% CI, 72.1% to 73.4%). By using exactly the same criteria, physicians scored 68% (CI, 67.9% to 68.9%) when measured by the vignettes but only 63% (CI, 62.7% to 64.0%) when assessed by medical record abstraction. These findings were consistent across all diseases and were independent of case complexity or physician training level. Vignettes also accurately measured unnecessary care. Finally, vignettes seem to capture the range in the quality of clinical practice among physicians within a site.
Limitations: Despite finding variation in the quality of clinical practice, we did not determine whether poorer quality translated into worse health status for patients. In addition, the quality scores are based on measurements from 1 patient–provider interaction. As with all other scoring criteria, vignette criteria must be regularly updated.
Conclusions: Vignettes are a valid tool for measuring the quality of clinical practice. They can be used for diverse clinical settings, diseases, physician types, and situations in which case-mix variation is a concern. They are inexpensive and easy to use. Vignettes are particularly useful for comparing quality among and within sites and may be useful for longitudinal evaluations of interventions intended to change clinical practice.
Planned study design showing sites and physician sample by level of training and clinical case for the 3 quality measurement methods.
Direct comparison of scores, overall and by disease, using 3 measurement methods: standardized patients, vignettes, and chart abstraction.
Comparison of vignette scores to standardized patient and chart scores, stratified by case complexity and training level.
Comparison of variations among and within the 4 sites by measurement method.
Distribution of unnecessary items ordered by participants while caring for cases depicted by vignettes compared with all tests and referrals entered directly into the medical record after standardized patient visits.
Peabody JW, Luck J, Glassman P, et al. Measuring the Quality of Physician Practice by Using Clinical Vignettes: A Prospective Validation Study. Ann Intern Med. 2004;141:771–780. doi: https://doi.org/10.7326/0003-4819-141-10-200411160-00008