Tyler J. VanderWeele, PhD; Peng Ding, PhD
Acknowledgment: The authors thank Sander Greenland, James Robins, 2 reviewers, and the editors for helpful comments on an earlier draft of this paper.
Grant Support: By National Institutes of Health grant ES017876.
Disclosures: Authors have disclosed no conflicts of interest. Forms can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M16-2607.
Requests for Single Reprints: Tyler J. VanderWeele, PhD, Harvard T.H. Chan School of Public Health, 677 Huntington Avenue, Boston, MA 02115; e-mail, firstname.lastname@example.org.
Current Author Addresses: Dr. VanderWeele: Harvard T.H. Chan School of Public Health, 677 Huntington Avenue, Boston, MA 02115.
Dr. Ding: University of California, Berkeley, 425 Evans Hall, Berkeley, CA 94720.
Author Contributions: Conception and design: T.J. VanderWeele, P. Ding.
Analysis and interpretation of the data: T.J. VanderWeele, P. Ding.
Drafting of the article: T.J. VanderWeele.
Critical revision for important intellectual content: T.J. VanderWeele.
Final approval of the article: T.J. VanderWeele, P. Ding.
Provision of study materials or patients: T.J. VanderWeele.
Statistical expertise: T.J. VanderWeele, P. Ding.
Obtaining of funding: T.J. VanderWeele.
Administrative, technical, or logistic support: T.J. VanderWeele.
Collection and assembly of data: T.J. VanderWeele, P. Ding.
Sensitivity analysis is useful in assessing how robust an association is to potential unmeasured or uncontrolled confounding. This article introduces a new measure called the “E-value,” which is related to the evidence for causality in observational studies that are potentially subject to confounding. The E-value is defined as the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the treatment and the outcome to fully explain away a specific treatment–outcome association, conditional on the measured covariates. A large E-value implies that considerable unmeasured confounding would be needed to explain away an effect estimate. A small E-value implies that little unmeasured confounding would be needed to explain away an effect estimate. The authors propose that in all observational studies intended to produce evidence for causality, the E-value be reported or some other sensitivity analysis be used. They suggest calculating the E-value for both the observed association estimate (after adjustment for measured confounders) and the limit of the confidence interval closest to the null. If this were to become standard practice, the ability of the scientific community to assess evidence from observational studies would improve considerably, and ultimately, science would be strengthened.
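The calculation the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' software: it applies the E-value formula for risk ratios, E-value = RR + sqrt(RR × (RR − 1)), inverting estimates below 1 first, and returns 1 for a confidence limit that crosses the null (the numbers in the usage comment are illustrative, not results from this article).

```python
import math

def e_value(rr, ci_limit=None):
    """E-value for a risk ratio and, optionally, the CI limit closest to the null.

    Uses E-value = RR + sqrt(RR * (RR - 1)). Estimates below 1 are
    inverted first, since the formula assumes RR >= 1.
    """
    def one(r):
        if r < 1:
            r = 1.0 / r
        return r + math.sqrt(r * (r - 1.0))

    point = one(rr)
    if ci_limit is None:
        return point
    # If the interval includes the null (RR = 1), no unmeasured
    # confounding is needed to explain the association away: E-value = 1.
    crosses = (rr >= 1 and ci_limit <= 1) or (rr < 1 and ci_limit >= 1)
    return point, (1.0 if crosses else one(ci_limit))

# Illustrative use: an estimate of RR = 3.9 with lower CI limit 1.8
# gives e_value(3.9, 1.8) == (approx. 7.26, 3.0): an unmeasured confounder
# associated with both treatment and outcome by risk ratios of 7.26-fold
# each could explain away the estimate, and 3.0-fold could shift the
# confidence interval to include the null.
```

Reporting the pair (point estimate E-value, CI-limit E-value), as the authors suggest, lets readers judge both the estimate and its uncertainty against plausible confounding strengths.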
Tyler J. VanderWeele, Peng Ding. Sensitivity Analysis in Observational Research: Introducing the E-Value. Ann Intern Med. 2017;167:268–274. doi: 10.7326/M16-2607
Published at www.annals.org on 11 July 2017
Research and Reporting Methods.
Copyright © 2017 American College of Physicians. All Rights Reserved.
Print ISSN: 0003-4819 | Online ISSN: 1539-3704