Lisa M. Kern, MD, MPH; Sameer Malhotra, MD, MA; Yolanda Barrón, MS; Jill Quaresimo, RN, JD; Rina Dhopeshwarkar, MPH; Michelle Pichardo, MPH; Alison M. Edwards, MStat; Rainu Kaushal, MD, MPH
Presented in part at the Annual Symposium of the American Medical Informatics Association, Washington, DC, 22–26 October 2011.
Note: The authors had full access to all of the data in the study and take responsibility for the integrity of the data and accuracy of the data analysis.
Acknowledgment: The authors thank Jonah Piascik for his assistance with data collection.
Grant Support: By the Agency for Healthcare Research and Quality (grant R18 HS 017067).
Potential Conflicts of Interest: Disclosures can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M12-1178.
Reproducible Research Statement: Study protocol and statistical code: Available from Dr. Kern (e-mail, email@example.com). Data set: Not available.
Requests for Single Reprints: Lisa M. Kern, MD, MPH, Department of Public Health, Weill Cornell Medical College, 425 East 61st Street, Suite 301, New York, NY; e-mail, firstname.lastname@example.org.
Current Author Addresses: Drs. Kern and Kaushal and Ms. Edwards: Center for Healthcare Informatics and Policy, Weill Cornell Medical College, 425 East 61st Street, Suite 301, New York, NY 10065.
Dr. Malhotra: Weill Cornell Medical College, 575 Lexington Avenue, Box 110, New York, NY 10022.
Ms. Barrón: Center for Home Care and Research, Visiting Nurse Service of New York, 1250 Broadway, 20th Floor, New York, NY 10001.
Ms. Quaresimo: 4 Cleveland Drive, Poughkeepsie, NY 12601.
Ms. Dhopeshwarkar: 2665 Prosperity Avenue, Apartment 337, Fairfax, VA 22031.
Ms. Pichardo: Institute for Family Health, 22 West 19th Street, 8th Floor, New York, NY 10011.
Author Contributions: Conception and design: L.M. Kern, S. Malhotra, R. Kaushal.
Analysis and interpretation of the data: L.M. Kern, S. Malhotra, Y. Barrón, R. Dhopeshwarkar, M. Pichardo, A.M. Edwards, R. Kaushal.
Drafting of the article: L.M. Kern, S. Malhotra, M. Pichardo, R. Kaushal.
Critical revision of the article for important intellectual content: L.M. Kern, S. Malhotra, Y. Barrón, R. Dhopeshwarkar, R. Kaushal.
Final approval of the article: L.M. Kern, Y. Barrón, A.M. Edwards, R. Kaushal.
Provision of study materials or patients:
Statistical expertise: Y. Barrón, A.M. Edwards.
Obtaining of funding: L.M. Kern, R. Kaushal.
Administrative, technical, or logistic support: S. Malhotra, R. Dhopeshwarkar, M. Pichardo.
Collection and assembly of data: L.M. Kern, S. Malhotra, Y. Barrón, J. Quaresimo, M. Pichardo.
Kern L, Malhotra S, Barrón Y, Quaresimo J, Dhopeshwarkar R, Pichardo M, Edwards A, Kaushal R. Accuracy of Electronically Reported “Meaningful Use” Clinical Quality Measures: A Cross-sectional Study. Ann Intern Med. 2013;158(2):77-83. doi:10.7326/0003-4819-158-2-201301150-00001
Background: The federal Electronic Health Record Incentive Program requires electronic reporting of quality from electronic health records, beginning in 2014. Whether electronic reports of quality are accurate is unclear.
Objective: To measure the accuracy of electronic reporting compared with manual review.
Setting: A federally qualified health center with a commercially available electronic health record.
Patients: All adult patients eligible in 2008 for 12 quality measures (using 8 unique denominators) were identified electronically. One hundred fifty patients were randomly sampled per denominator, yielding 1154 unique patients.
Measurements: Receipt of recommended care, assessed by both electronic reporting and manual review. Sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and absolute rates of recommended care were measured.
Results: Sensitivity of electronic reporting ranged from 46% to 98% per measure. Specificity ranged from 62% to 97%, positive predictive value from 57% to 97%, and negative predictive value from 32% to 99%. Positive likelihood ratios ranged from 2.34 to 24.25 and negative likelihood ratios from 0.02 to 0.61. Differences between electronic reporting and manual review were statistically significant for 3 measures: Electronic reporting underestimated the absolute rate of recommended care for 2 measures (appropriate asthma medication [38% vs. 77%; P < 0.001] and pneumococcal vaccination [27% vs. 48%; P < 0.001]) and overestimated care for 1 measure (cholesterol control in patients with diabetes [57% vs. 37%; P = 0.001]).
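The accuracy metrics reported above all derive from a 2×2 concordance table that treats manual chart review as the reference standard. A minimal sketch of those calculations follows; the counts in the example are illustrative only and are not taken from the study's data.

```python
# Accuracy of electronic reporting against manual review (reference standard),
# derived from a 2x2 concordance table. Counts below are illustrative only.

def accuracy_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, predictive values, and likelihood ratios.

    tp: care provided and detected electronically (true positive)
    fp: care not provided but reported electronically (false positive)
    fn: care provided but missed electronically (false negative)
    tn: care not provided and correctly absent electronically (true negative)
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)  # positive predictive value
    npv = tn / (tn + fn)  # negative predictive value
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": ppv,
        "npv": npv,
        "LR+": lr_pos,
        "LR-": lr_neg,
    }

# Hypothetical counts: 90 true positives, 10 false positives,
# 30 false negatives, 70 true negatives.
m = accuracy_metrics(tp=90, fp=10, fn=30, tn=70)
print({k: round(v, 2) for k, v in m.items()})
```

With these hypothetical counts, sensitivity is 90/120 = 0.75 and specificity is 70/80 = 0.875; the measure-by-measure ranges in the abstract reflect how much these quantities varied across the 12 measures studied.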
Limitation: This study addresses the accuracy of the measure numerator only.
Conclusion: Wide measure-by-measure variation in accuracy threatens the validity of electronic reporting. If variation is not addressed, financial incentives intended to reward high quality may not be given to the highest-quality providers.
Primary Funding Source: Agency for Healthcare Research and Quality.
Robert H Dolin, MD, FACP
Lantana Consulting Group
February 13, 2013
The Achilles heel of quality reporting is data capture
To the Editor,
Let’s face it. While we can challenge Kern and colleagues’ methods (e.g., by claiming that EHRs are much different today than they were in 2008), I don’t think we can challenge their findings. The Achilles heel of end-to-end quality reporting from EHRs lies with data capture. The reasons for the data capture challenges are many: divergent data requirements (across quality measures, decision support rules, clinical practice guidelines, etc.), time pressures, and skepticism (e.g., are all these data elements really necessary?), all of which can leave providers overwhelmed and resistant.

Quality reporting is required under the federal HITECH (Meaningful Use) regulations.
A certified EHR must be able to export standardized quality reports, which can then be fed into a calculation engine to compute various aggregate scores (e.g., the number of patients meeting the numerator and denominator criteria). The interoperability standards for these export, calculation, and reporting steps have been carefully crafted within the Health Level Seven (HL7) standards organization and widely vetted, but we have to acknowledge that garbage in equals garbage out: quality reporting standards cannot compensate for inconsistent or missing data at the source.
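The export-then-calculate pipeline described above can be sketched in a few lines. The record fields below ("eligible", "met_numerator") are hypothetical placeholders, not HL7-defined names; a real export would carry patient-level data in an HL7-standardized format from which these flags are evaluated per measure.

```python
# Sketch of the aggregation step a quality-measure calculation engine performs:
# given patient-level records exported from an EHR, count denominator-eligible
# patients and those also meeting the numerator criterion, then report the rate.
# Field names are hypothetical placeholders, not HL7-defined names.

def aggregate_measure(records):
    denominator = sum(1 for r in records if r["eligible"])
    numerator = sum(1 for r in records if r["eligible"] and r["met_numerator"])
    rate = numerator / denominator if denominator else None
    return {"numerator": numerator, "denominator": denominator, "rate": rate}

patients = [
    {"eligible": True, "met_numerator": True},
    {"eligible": True, "met_numerator": False},
    {"eligible": False, "met_numerator": False},  # excluded from denominator
]
print(aggregate_measure(patients))  # 1 of 2 eligible patients met the measure
```

Note that the arithmetic here is trivial; as the letter argues, the hard part is upstream, in capturing the data that sets those flags correctly in the first place.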
So how is the standards community addressing the data capture challenge? We are addressing it head on, by providing the definitive source of truth and direction needed by software vendors and clinicians. Standardization drives convergence on the key data elements needed for transitions in care, quality reporting, and decision support. Rather than a multitude of independent use cases, each with its own data requirements, converging on the point-of-care provider, standardization yields a single set of key data elements. This sets a clearer path for vendors and user interface designers and lessens the data capture burden on clinicians by focusing capture on data elements known to be of value for a variety of purposes. In other words, interoperability standards are relevant on both the afferent and efferent limbs of the EHR. Focusing on standards is a tractable approach to the data capture challenge, and it therefore provides a strategy for addressing the very real problems identified by Kern and colleagues.
Robert H Dolin, MD, FACP
President and Chief Medical Officer, Lantana Consulting Group
Chair-Elect, Health Level Seven International
Lisa M. Kern, MD, MPH; Rainu Kaushal, MD, MPH
Weill Cornell Medical College
May 10, 2013
Authors' Reply: The Accuracy of Electronic Reporting of Quality Measures
Thank you for your comments (1) related to our article on the accuracy of automated reporting of quality data from electronic health records (EHRs) (2). We agree that interoperability standards are one critical component for improving the accuracy of quality reporting.
We think that other important strategies for improving the accuracy of quality reporting include: 1) changing clinical workflow to facilitate documentation of the care provided, 2) changing EHRs to create new structured fields for important variables previously captured only by free text, 3) improving specifications for automated reporting of quality measures, and 4) testing the accuracy of those specifications.
These strategies need not be pursued in sequence; rather, they can and should all be pursued concurrently. The accuracy of automated electronic reporting can be improved now with the technologies we currently have, and it can be iteratively refined with newer technologies over time.
As quality measurement evolves in this electronic era, it is important to ensure that automated reports accurately reflect the health care that is provided. Patients and providers will increasingly depend on that.
Lisa M. Kern, MD, MPH and Rainu Kaushal, MD, MPH
Center for Healthcare Informatics and Policy, Weill Cornell Medical College
1. Dolin RH. The Achilles heel of quality reporting is data capture [comment]. Ann Intern Med. 2013.
2. Kern LM, Malhotra S, Barron Y, et al. Accuracy of electronically reported "meaningful use" clinical quality measures: a cross-sectional study. Ann Intern Med. 2013;158(2):77-83.