Steven N. Goodman, MD, MHS, PhD
Disclaimer: The views and content herein are solely the responsibility of the author.
Acknowledgments: The author thanks Drs. Donald Berry, Thomas Louis, and Joel Greenhouse for their helpful comments on earlier versions of the manuscript.
Potential Financial Conflicts of Interest: None disclosed.
Requests for Single Reprints: Steven N. Goodman, MD, MHS, PhD, Johns Hopkins University Schools of Medicine and Public Health, 550 North Broadway, Suite 1103, Baltimore, MD 21205; e-mail, firstname.lastname@example.org.
This commentary reviews the argument that clinical trials with data monitoring committees that use statistical stopping guidelines should generally not be stopped early for large observed efficacy differences because efficacy estimates may be exaggerated and there is minimal information on treatment harms. Overall, the average of estimates from trials that use these boundaries differs minimally from the true value. Estimates from a given trial that seem implausibly high can be moderated by using Bayesian methods. Data monitoring committees are not ethically required to precisely estimate a large efficacy difference if that difference differs convincingly from zero, and the requirement to detect harms and balance efficacy against harm depends on whether the nature of the harm is known or unknown before the trial.
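The abstract notes that implausibly high efficacy estimates can be moderated with Bayesian methods. The sketch below illustrates the general idea with a conjugate normal–normal update under a skeptical prior; all numbers are illustrative assumptions, not values from the article.

```python
import math

# Hypothetical interim result: observed log hazard ratio and its standard error
# (illustrative values only).
obs_effect = -0.60   # a seemingly large benefit
obs_se = 0.25

# Skeptical prior centered on "no effect"; large effects deemed unlikely a priori.
prior_mean = 0.0
prior_sd = 0.20

# Conjugate normal-normal update: posterior precision is the sum of precisions,
# and the posterior mean is a precision-weighted average of prior and data.
w_data = 1 / obs_se ** 2
w_prior = 1 / prior_sd ** 2
post_mean = (w_data * obs_effect + w_prior * prior_mean) / (w_data + w_prior)
post_sd = math.sqrt(1 / (w_data + w_prior))

print(f"posterior mean {post_mean:.3f}, sd {post_sd:.3f}")
```

The posterior mean is pulled toward the prior, shrinking an extreme interim estimate toward more plausible values, which is the moderating effect the abstract refers to.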
Figure. Distribution of observed effects in trials with and without stopping rules.
The trials were designed to have 90% power to detect a 10% mortality benefit (for example, 50% vs. 40%). Each panel corresponds to a different underlying true difference: no difference (top), 10% difference (middle), and 20% difference (bottom). The distribution of results is shown for trials of 2 designs: 1 using a 4-look O'Brien–Fleming stopping rule ("stopping") and 1 using a fixed sample size of n = 1040 ("no stopping"). The median effect size and the 2.5th and 97.5th percentiles of each estimate are reported in parentheses; the mean sample size is reported for the "stopping" design only.
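A simulation along the lines of the figure can be sketched as follows. This is a rough illustration, not the article's exact design: the z-boundaries are the commonly cited approximate 4-look O'Brien–Fleming values, and the look schedule is an assumption.

```python
import math
import random
import statistics

random.seed(1)

# Approximate 4-look O'Brien-Fleming z-boundaries (overall two-sided alpha ~ 0.05).
BOUNDS = [4.05, 2.86, 2.34, 2.02]
LOOKS = [130, 260, 390, 520]   # patients per arm at each analysis (n = 1040 total)

def one_trial(p_ctrl, p_trt):
    """Run one group-sequential trial; return (stop_look, observed risk difference)."""
    ctrl = [random.random() < p_ctrl for _ in range(LOOKS[-1])]
    trt = [random.random() < p_trt for _ in range(LOOKS[-1])]
    for i, (n, bound) in enumerate(zip(LOOKS, BOUNDS)):
        pc = sum(ctrl[:n]) / n
        pt = sum(trt[:n]) / n
        se = math.sqrt(pc * (1 - pc) / n + pt * (1 - pt) / n)
        z = (pc - pt) / se if se > 0 else 0.0
        if abs(z) >= bound or i == len(LOOKS) - 1:
            return i, pc - pt

# True mortality: 50% control vs. 40% treatment (a true 10% benefit).
results = [one_trial(0.50, 0.40) for _ in range(2000)]
all_est = [d for _, d in results]
early = [d for look, d in results if look < 3]   # stopped before the final look

print(f"mean estimate, all trials:    {statistics.mean(all_est):.3f}")
print(f"mean estimate, early-stopped: {statistics.mean(early):.3f}")
```

The point of the exercise matches the abstract: averaged over all trials the estimate is close to the true 10% difference, but trials that stop early tend to report exaggerated effects.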
If reliable estimates are required for each treatment then it seems inevitable that a substantial number of patients must receive the inferior treatment … Then it must be recognized that the risks undertaken by volunteers in the experiment are mainly associated with estimation, rather than the need to discover which of the treatments is superior.
Goodman SN. Stopping at Nothing? Some Dilemmas of Data Monitoring in Clinical Trials. Ann Intern Med. 2007;146:882–887. doi: 10.7326/0003-4819-146-12-200706190-00010