Academia and the Profession

Unlocking Clinical Data from Narrative Reports: A Study of Natural Language Processing

George Hripcsak, MD; Carol Friedman, PhD; Philip O. Alderson, MD; William DuMouchel, PhD; Stephen B. Johnson, PhD; and Paul D. Clayton, PhD
Article and Author Information

From Columbia-Presbyterian Medical Center, New York, New York, and Queens College, Flushing, New York. Requests for Reprints: George Hripcsak, MD, Department of Medical Informatics, Columbia-Presbyterian Medical Center, 161 Fort Washington Avenue, AP-1310, New York, NY 10032. Grant Support: National Library of Medicine grants LM04419, LM05397, and LM05627; grant #6-61483 from the Research Foundation of City University of New York.


Ann Intern Med. 1995;122(9):681-688. doi:10.7326/0003-4819-122-9-199505010-00007

Objective: To evaluate the automated detection of clinical conditions described in narrative reports.

Design: Automated methods and human experts detected the presence or absence of six clinical conditions in 200 admission chest radiograph reports.

Study Subjects: A computerized, general-purpose natural language processor; 6 internists; 6 radiologists; 6 lay persons; and 3 other computer methods.

Main Outcome Measures: Intersubject disagreement was quantified by “distance” (the average number of clinical conditions per report on which two subjects disagreed) and by sensitivity and specificity with respect to the physicians.
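
As an illustration only (a sketch under assumed data structures, not the study's software), the "distance" measure could be computed as follows, with each subject's judgments stored as one list per report containing six 0/1 flags, one per clinical condition:

```python
# Sketch of the intersubject "distance" described above (an illustration,
# not the authors' code). Each subject is a list of reports; each report
# is a list of six 0/1 flags, one per clinical condition.

def distance(subject_a, subject_b):
    """Average number of conditions per report on which two subjects
    disagree; ranges from 0 (full agreement) to 6 (full disagreement)."""
    disagreements = sum(
        sum(a != b for a, b in zip(report_a, report_b))
        for report_a, report_b in zip(subject_a, subject_b)
    )
    return disagreements / len(subject_a)

# Toy example with 2 reports and 6 conditions:
s1 = [[1, 0, 0, 0, 0, 0], [0, 1, 1, 0, 0, 0]]
s2 = [[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]]
print(distance(s1, s2))  # 0.5: one disagreement over two reports
```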

Results: Using a majority vote, physicians detected 101 conditions in the 200 reports (0.51 per report); the most common condition was acute bacterial pneumonia (prevalence, 0.14), and the least common was chronic obstructive pulmonary disease (prevalence, 0.03). Pairs of physicians disagreed on the presence of at least 1 condition for an average of 20% of reports. The average intersubject distance among physicians was 0.24 (95% CI, 0.19 to 0.29) out of a maximum possible distance of 6. No physician had a significantly greater distance than the average. The average distance of the natural language processor from the physicians was 0.26 (CI, 0.21 to 0.32; not significantly greater than the average among physicians). Lay persons and alternative computer methods had significantly greater distance from the physicians (all >0.5). The natural language processor had a sensitivity of 81% (CI, 73% to 87%) and a specificity of 98% (CI, 97% to 99%); physicians had an average sensitivity of 85% and an average specificity of 98%.
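
As a further hedged sketch (assumed names and layout; the paper does not publish code), sensitivity and specificity against a reference standard formed by physician majority vote could be computed like this:

```python
# Illustrative only: sensitivity and specificity of one subject against a
# reference standard built by majority vote among the physicians.

def majority_vote(votes):
    """votes: 0/1 calls from each physician for one report/condition pair.
    Returns 1 ('present') if more than half of the physicians voted 1."""
    return int(sum(votes) > len(votes) / 2)

def sensitivity_specificity(subject, reference):
    """subject, reference: parallel flat lists of 0/1 calls, one entry
    per report/condition pair."""
    tp = sum(1 for s, r in zip(subject, reference) if s == 1 and r == 1)
    tn = sum(1 for s, r in zip(subject, reference) if s == 0 and r == 0)
    fp = sum(1 for s, r in zip(subject, reference) if s == 1 and r == 0)
    fn = sum(1 for s, r in zip(subject, reference) if s == 0 and r == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: four report/condition pairs, three physicians voting on each.
reference = [majority_vote(v) for v in ([1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0])]
subject = [1, 0, 1, 1]
print(sensitivity_specificity(subject, reference))  # (1.0, 0.5)
```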

Conclusions: Physicians disagreed on the interpretation of narrative reports, but this was not caused by outlier physicians or a consistent difference in the way internists and radiologists read reports. The natural language processor was not distinguishable from the physicians and was superior to all other comparison subjects. Although the domain of this study was restricted (six clinical conditions in chest radiographs), natural language processing seems to have the potential to extract clinical information from narrative reports in a manner that will support automated decision-support systems and clinical research.

Figures

Figure 1. Average distance of subjects from physicians.

The average distance and 95% CIs from each of the subjects to the physicians are shown. A greater distance implies worse performance (further from physician consensus).

Figure 2. Sensitivity and specificity plotted on receiver-operating characteristic curve axes (specificity is listed in reverse order).

Ideal performance is in the upper left corner of both graphs. The first graph (top) shows the full receiver-operating characteristic curve, whereas the second graph (bottom) is an expansion of the area near ideal performance (specificity has been expanded five times as much as sensitivity).
