Original Research

Accuracy of Electronically Reported “Meaningful Use” Clinical Quality Measures: A Cross-sectional Study

Lisa M. Kern, MD, MPH; Sameer Malhotra, MD, MA; Yolanda Barrón, MS; Jill Quaresimo, RN, JD; Rina Dhopeshwarkar, MPH; Michelle Pichardo, MPH; Alison M. Edwards, MStat; and Rainu Kaushal, MD, MPH
Article and Author Information

From the Center for Healthcare Informatics and Policy, Weill Cornell Medical College, Institute for Family Health, and New York-Presbyterian Hospital, New York, and Taconic Independent Practice Association, Fishkill, New York.

Presented in part at the Annual Symposium of the American Medical Informatics Association, Washington, DC, 22–26 October 2011.

Note: The authors had full access to all of the data in the study and take responsibility for the integrity of the data and accuracy of the data analysis.

Acknowledgment: The authors thank Jonah Piascik for his assistance with data collection.

Grant Support: By the Agency for Healthcare Research and Quality (grant R18 HS 017067).

Potential Conflicts of Interest: Disclosures can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M12-1178.

Reproducible Research Statement: Study protocol and statistical code: Available from Dr. Kern (e-mail, lmk2003@med.cornell.edu). Data set: Not available.

Requests for Single Reprints: Lisa M. Kern, MD, MPH, Department of Public Health, Weill Cornell Medical College, 425 East 61st Street, Suite 301, New York, NY; e-mail, lmk2003@med.cornell.edu.

Current Author Addresses: Drs. Kern and Kaushal and Ms. Edwards: Center for Healthcare Informatics and Policy, Weill Cornell Medical College, 425 East 61st Street, Suite 301, New York, NY 10065.

Dr. Malhotra: Weill Cornell Medical College, 575 Lexington Avenue, Box 110, New York, NY 10022.

Ms. Barrón: Center for Home Care and Research, Visiting Nurse Service of New York, 1250 Broadway, 20th Floor, New York, NY 10001.

Ms. Quaresimo: 4 Cleveland Drive, Poughkeepsie, NY 12601.

Ms. Dhopeshwarkar: 2665 Prosperity Avenue, Apartment 337, Fairfax, VA 22031.

Ms. Pichardo: Institute for Family Health, 22 West 19th Street, 8th Floor, New York, NY 10011.

Author Contributions: Conception and design: L.M. Kern, S. Malhotra, R. Kaushal.

Analysis and interpretation of the data: L.M. Kern, S. Malhotra, Y. Barrón, R. Dhopeshwarkar, M. Pichardo, A.M. Edwards, R. Kaushal.

Drafting of the article: L.M. Kern, S. Malhotra, M. Pichardo, R. Kaushal.

Critical revision of the article for important intellectual content: L.M. Kern, S. Malhotra, Y. Barrón, R. Dhopeshwarkar, R. Kaushal.

Final approval of the article: L.M. Kern, Y. Barrón, A.M. Edwards, R. Kaushal.

Provision of study materials or patients:

Statistical expertise: Y. Barrón, A.M. Edwards.

Obtaining of funding: L.M. Kern, R. Kaushal.

Administrative, technical, or logistic support: S. Malhotra, R. Dhopeshwarkar, M. Pichardo.

Collection and assembly of data: L.M. Kern, S. Malhotra, Y. Barrón, J. Quaresimo, M. Pichardo.


Ann Intern Med. 2013;158(2):77-83. doi:10.7326/0003-4819-158-2-201301150-00001

Background: The federal Electronic Health Record Incentive Program requires electronic reporting of quality from electronic health records, beginning in 2014. Whether electronic reports of quality are accurate is unclear.

Objective: To measure the accuracy of electronic reporting compared with manual review.

Design: Cross-sectional study.

Setting: A federally qualified health center with a commercially available electronic health record.

Patients: All adult patients eligible in 2008 for 12 quality measures (using 8 unique denominators) were identified electronically. One hundred fifty patients were randomly sampled per denominator, yielding 1154 unique patients.
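As a rough illustration only (not the authors' protocol, and using hypothetical patient identifiers), the per-denominator sampling can be sketched in a few lines of Python; because a patient may be eligible for more than one denominator, 8 samples of 150 yield fewer than 1200 unique patients:

import random

# Hypothetical eligible-patient lists, one per measure denominator
# (in the study these were identified electronically from the EHR).
eligible_by_denominator = {
    "diabetes": ["pt001", "pt002", "pt003"],  # truncated example data
    "asthma": ["pt002", "pt004", "pt005"],
    # ... 6 more denominators in the actual study
}

SAMPLE_SIZE = 150

sampled = {}
for denominator, patients in eligible_by_denominator.items():
    # Sample up to 150 patients per denominator, without replacement.
    k = min(SAMPLE_SIZE, len(patients))
    sampled[denominator] = random.sample(patients, k)

# A patient can appear in several denominators, so the union of the
# samples is smaller than 8 x 150 (1154 unique patients in the study).
unique_patients = set().union(*sampled.values())
print(len(unique_patients))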

Measurements: Receipt of recommended care, assessed by both electronic reporting and manual review. Sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and absolute rates of recommended care were measured.

Results: Sensitivity of electronic reporting ranged from 46% to 98% per measure. Specificity ranged from 62% to 97%, positive predictive value from 57% to 97%, and negative predictive value from 32% to 99%. Positive likelihood ratios ranged from 2.34 to 24.25 and negative likelihood ratios from 0.02 to 0.61. Differences between electronic reporting and manual review were statistically significant for 3 measures: Electronic reporting underestimated the absolute rate of recommended care for 2 measures (appropriate asthma medication [38% vs. 77%; P < 0.001] and pneumococcal vaccination [27% vs. 48%; P < 0.001]) and overestimated care for 1 measure (cholesterol control in patients with diabetes [57% vs. 37%; P = 0.001]).
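For reference, the accuracy statistics reported above are standard 2 x 2 contingency-table quantities, computed per measure with manual review as the reference standard. A minimal sketch of the definitions (illustrative only, with made-up counts rather than study data):

def accuracy_stats(tp, fp, fn, tn):
    """Accuracy of electronic reporting against manual review.

    tp: both electronic report and manual review indicate recommended care
    fp: electronic report indicates care that manual review does not confirm
    fn: electronic report misses care that manual review found
    tn: both agree that recommended care was not received
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
    return sensitivity, specificity, ppv, npv, lr_pos, lr_neg

# Example with hypothetical counts (not from the study):
print(accuracy_stats(tp=80, fp=10, fn=20, tn=40))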

Limitation: This study addresses the accuracy of the measure numerator only.

Conclusion: Wide measure-by-measure variation in accuracy threatens the validity of electronic reporting. If variation is not addressed, financial incentives intended to reward high quality may not be given to the highest-quality providers.

Primary Funding Source: Agency for Healthcare Research and Quality.


Comments

The Achilles heel of quality reporting is data capture
Posted on February 13, 2013
Robert H Dolin, MD, FACP
Lantana Consulting Group
Conflict of Interest: None Declared

To the Editor,

Let’s face it. While we can challenge Kern and colleagues’ methods (e.g., by claiming that EHRs are much different today than they were in 2008), I don’t think we can challenge their findings. The Achilles heel of end-to-end quality reporting from EHRs lies with data capture. Reasons for the data capture challenges are many – divergent data requirements (across quality measures, decision support rules, clinical practice guidelines, etc.), time pressures, and skepticism (e.g., are all these data elements really necessary?) – which can leave providers overwhelmed and resistant. Quality reporting is required under the federal HITECH (Meaningful Use) regulations.

A certified EHR must be able to export standardized quality reports, which can then be fed into a calculation engine to compute various aggregate scores (e.g., the number of patients meeting the numerator and denominator criteria). Interoperability standards for the required export, calculation, and reporting steps have been carefully crafted within the Health Level Seven (HL7) standards organization and widely vetted, but we have to acknowledge that garbage in equals garbage out and that quality reporting standards cannot compensate for inconsistent or missing data at the source.
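To make that export-then-calculate flow concrete, here is a minimal sketch of how a calculation engine might aggregate exported patient-level data into a measure rate; the record format and field names are hypothetical, not the HL7 standard itself:

# Hypothetical patient-level records exported from a certified EHR.
# Each record flags whether the patient meets a measure's denominator
# (eligible population) and numerator (recommended care received).
exported_records = [
    {"patient_id": "pt001", "measure": "pneumococcal_vaccination",
     "in_denominator": True, "in_numerator": True},
    {"patient_id": "pt002", "measure": "pneumococcal_vaccination",
     "in_denominator": True, "in_numerator": False},
    # ... more exported records
]

def aggregate(records, measure):
    # Aggregate score: patients meeting the numerator over those
    # meeting the denominator, for one quality measure.
    denom = sum(r["in_denominator"] for r in records if r["measure"] == measure)
    numer = sum(r["in_denominator"] and r["in_numerator"]
                for r in records if r["measure"] == measure)
    return numer, denom, (numer / denom if denom else None)

print(aggregate(exported_records, "pneumococcal_vaccination"))

Garbage in, garbage out applies at exactly this point: if the numerator flag is wrong because the underlying care was documented only in free text, the aggregate rate will be wrong no matter how well the export format is standardized.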

So how is the standards community addressing the data capture challenge? We are addressing it head-on, by providing the definitive source of truth and direction needed by software vendors and clinicians. Through standardization comes a convergence on the key data elements needed for transitions in care, quality reporting, and decision support. Rather than a multitude of independent use cases, each with its own data requirements, converging on the point-of-care provider, standardization leads to a convergence of key data elements. This sets a clearer path for vendors and user interface designers and lessens the data capture burden on clinicians by focusing data capture on data elements known to be of value for a variety of purposes. In other words, interoperability standards are relevant on both the afferent and efferent limbs of the EHR. Focusing on standards is a tractable approach to the data capture challenge and therefore provides a strategy for addressing the very real problems identified by Kern and colleagues.

Robert H Dolin, MD, FACP

President and Chief Medical Officer,

Lantana Consulting Group

Chair-Elect,

Health Level Seven International

Author's Reply: The Accuracy of Electronic Reporting of Quality Measures
Posted on May 10, 2013
Lisa M. Kern, MD, MPH, and Rainu Kaushal, MD, MPH
Weill Cornell Medical College
Conflict of Interest: None Declared

Thank you for your comments (1) related to our article on the accuracy of automated reporting of quality data from electronic health records (EHRs) (2). We agree that interoperability standards are one critical component for improving the accuracy of quality reporting.

We think that other important strategies for improving the accuracy of quality reporting include: 1) changing clinical workflow to facilitate documentation of the care provided, 2) changing EHRs to create new structured fields for important variables previously captured only by free text, 3) improving specifications for automated reporting of quality measures, and 4) testing the accuracy of those specifications.

These strategies do not need to be pursued in sequence; rather, they can and should all be pursued concurrently. The accuracy of automated electronic reporting can be improved now with the technologies we currently have, and it can be iteratively refined with newer technologies over time.

As quality measurement evolves in this electronic era, it is important to ensure that automated reports accurately reflect the health care that is provided. Patients and providers will increasingly depend on this.

Lisa M. Kern, MD, MPH, and Rainu Kaushal, MD, MPH

Center for Healthcare Informatics and Policy, Weill Cornell Medical College

References

1. Dolin RH. The Achilles heel of quality reporting is data capture [comment]. Ann Intern Med. 2013.

2. Kern LM, Malhotra S, Barron Y, et al. Accuracy of electronically reported "meaningful use" clinical quality measures: a cross-sectional study. Ann Intern Med. 2013;158(2):77-83.

 

 
