George Hripcsak, MD; Carol Friedman, PhD; Philip O. Alderson, MD; William DuMouchel, PhD; Stephen B. Johnson, PhD; Paul D. Clayton, PhD
Objective: To evaluate the automated detection of clinical conditions described in narrative reports.
Design: Automated methods and human experts detected the presence or absence of six clinical conditions in 200 admission chest radiograph reports.
Subjects: A computerized, general-purpose natural language processor; 6 internists; 6 radiologists; 6 lay persons; and 3 other computer methods.
Measurements: Intersubject disagreement was quantified by "distance" (the average number of clinical conditions per report on which two subjects disagreed) and by sensitivity and specificity with respect to the physicians.
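The distance metric described above can be sketched in code. This is an illustrative reconstruction, not the authors' implementation; the condition names and data below are hypothetical, and each subject's readings are modeled as one set of detected conditions per report.

```python
# Sketch of the intersubject "distance" metric: the average number of
# clinical conditions per report on which two subjects disagree.
# Data and condition names are hypothetical, for illustration only.

def distance(subject_a, subject_b):
    """Average per-report count of conditions on which two subjects disagree.

    Each subject is a list of reports; each report is the set of conditions
    that subject judged present (absent conditions are omitted).
    """
    assert len(subject_a) == len(subject_b)
    disagreements = sum(len(a ^ b) for a, b in zip(subject_a, subject_b))
    return disagreements / len(subject_a)

def sensitivity_specificity(subject, reference, conditions):
    """Sensitivity and specificity of `subject` against a reference standard,
    pooled over all (report, condition) pairs."""
    tp = fp = fn = tn = 0
    for rep_s, rep_r in zip(subject, reference):
        for c in conditions:
            said_present, truly_present = c in rep_s, c in rep_r
            tp += said_present and truly_present
            fp += said_present and not truly_present
            fn += (not said_present) and truly_present
            tn += (not said_present) and (not truly_present)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 3 reports, 2 of the 6 conditions shown.
reference = [{"pneumonia"}, set(), {"copd"}]
nlp_output = [{"pneumonia"}, {"copd"}, {"copd"}]
print(distance(reference, nlp_output))  # 1 disagreement over 3 reports
print(sensitivity_specificity(nlp_output, reference, ["pneumonia", "copd"]))
```

Modeling each report as a set makes disagreement a symmetric difference, so the metric is symmetric in the two subjects, as a distance should be.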
Results: Using a majority vote, physicians detected 101 conditions in the 200 reports (0.51 per report); the most common condition was acute bacterial pneumonia (prevalence, 0.14), and the least common was chronic obstructive pulmonary disease (prevalence, 0.03). Pairs of physicians disagreed on the presence of at least 1 condition for an average of 20% of reports. The average intersubject distance among physicians was 0.24 (95% CI, 0.19 to 0.29) out of a maximum possible distance of 6. No physician had a significantly greater distance than the average. The average distance of the natural language processor from the physicians was 0.26 (CI, 0.21 to 0.32; not significantly greater than the average among physicians). Lay persons and alternative computer methods had significantly greater distance from the physicians (all >0.5). The natural language processor had a sensitivity of 81% (CI, 73% to 87%) and a specificity of 98% (CI, 97% to 99%); physicians had an average sensitivity of 85% and an average specificity of 98%.
Conclusions: Physicians disagreed on the interpretation of narrative reports, but this was not caused by outlier physicians or a consistent difference in the way internists and radiologists read reports. The natural language processor was not distinguishable from the physicians and was superior to all other comparison subjects. Although the domain of this study was restricted (six clinical conditions in chest radiographs), natural language processing seems to have the potential to extract clinical information from narrative reports in a manner that will support automated decision support and clinical research.
Hripcsak G, Friedman C, Alderson PO, et al. Unlocking Clinical Data from Narrative Reports: A Study of Natural Language Processing. Ann Intern Med. 1995;122:681–688. doi: https://doi.org/10.7326/0003-4819-122-9-199505010-00007