John P.A. Ioannidis, MD, DSc; Yuan Jin Tan, BS; Manuel R. Blum, MD
Financial Support: The Meta-Research Innovation Center at Stanford (METRICS) has been supported by a grant from the Laura and John Arnold Foundation. Dr. Blum's work is supported by grant P2BEP3_175289 from the Swiss National Science Foundation. Mr. Tan is supported by the Stanford Graduate Fellowship from Stanford University.
Disclosures: Authors not named here have disclosed no conflicts of interest. Disclosures can also be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M18-2159.
Corresponding Author: John P.A. Ioannidis, MD, DSc, Stanford Prevention Research Center and Meta-Research Innovation Center at Stanford (METRICS), 1265 Welch Road, Medical School Office Building, Room X306, Stanford, CA 94305; e-mail, email@example.com.
Current Author Addresses: Dr. Ioannidis: Stanford Prevention Research Center and Meta-Research Innovation Center at Stanford (METRICS), 1265 Welch Road, Medical School Office Building, Room X306, Stanford, CA 94305.
Mr. Tan: Redwood Building, 150 Governor's Lane, Stanford, CA 94305.
Dr. Blum: Stanford University, 150 Governor's Lane, Stanford, CA 94305.
Author Contributions: Conception and design: J.P.A. Ioannidis, Y.J. Tan, M.R. Blum.
Drafting of the article: J.P.A. Ioannidis, Y.J. Tan, M.R. Blum.
Critical revision of the article for important intellectual content: J.P.A. Ioannidis, Y.J. Tan, M.R. Blum.
Final approval of the article: J.P.A. Ioannidis, Y.J. Tan, M.R. Blum.
The E-value was recently introduced on the basis of earlier work as “the minimum strength of association…that an unmeasured confounder would need to have with both the treatment and the outcome to fully explain away a specific treatment–outcome association, conditional on the measured covariates.” E-values have been proposed for wide application in observational studies evaluating causality. However, they have limitations and are prone to misinterpretation. E-values have a monotonic, almost linear relationship with effect estimates and thus offer no additional information beyond what effect estimates can convey. Whereas effect estimates are based on real data, E-values may make unrealistic assumptions. No general rule can exist about what is a “small enough” E-value, and users of the biomedical literature are not familiar with how to interpret a range of E-values. Problems arise for any measure dependent on effect estimates and their CIs—for example, bias due to selective reporting and dependence on choice of exposure contrast and level of confidence. The automation of E-values may give an excuse not to think seriously about confounding. Moreover, biases other than confounding may still undermine results. Instead of misused or misinterpreted E-values, the authors recommend judicious use of existing methods for sensitivity analyses with careful assumptions; systematic assessments of whether and how known confounders have been handled, along with consideration of their prevalence and magnitude; thorough discussion of the potential for unknown confounders considering the study design and field of application; and explicit caution in making causal claims from observational studies.
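The monotonic, near-linear relationship between E-values and effect estimates that the authors describe follows directly from the E-value formula proposed by VanderWeele and Ding (E = RR + sqrt(RR × (RR − 1)) for a risk ratio RR ≥ 1, with the reciprocal taken first for protective estimates). A minimal sketch of that point-estimate formula:

```python
import math

def e_value(rr: float) -> float:
    """E-value for a point estimate of the risk ratio
    (VanderWeele & Ding's formula: RR + sqrt(RR * (RR - 1))).

    For RR < 1 the reciprocal is used first, as is standard.
    """
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    if rr < 1:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# The monotonic, almost linear relationship noted in the abstract:
for rr in (1.5, 2.0, 3.0, 4.0):
    print(f"RR = {rr:.1f}  ->  E-value = {e_value(rr):.2f}")
```

Because the mapping is deterministic and monotonic, the printed E-values (roughly 2.37, 3.41, 5.45, 7.46) carry no information beyond the risk ratios themselves, which is the authors' first criticism.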
Ioannidis JP, Tan YJ, Blum MR. Limitations and Misinterpretations of E-Values for Sensitivity Analyses of Observational Studies. Ann Intern Med. 2019;170:108–111. [Epub ahead of print 1 January 2019]. doi: 10.7326/M18-2159
Research and Reporting Methods.
Copyright © 2019 American College of Physicians. All Rights Reserved.