Penny Whiting, MSc; Anne W.S. Rutjes, MSc; Johannes B. Reitsma, MD, PhD; Afina S. Glas, MD, PhD; Patrick M.M. Bossuyt, PhD; Jos Kleijnen, MD, PhD
Disclaimer: The views expressed in this paper are those of the authors and not necessarily those of the Standing Group, the Commissioning Group, or the Department of Health.
Acknowledgments: The authors thank Kath Wright (Centre for Reviews and Dissemination) for carrying out literature searches. They also thank the advisory panel to the review for their help during various stages, including commenting on the protocol and draft report.
Grant Support: Commissioned and funded by the National Health Service R&D Health Technology Assessment Programme (project number 98/27/99).
Potential Financial Conflicts of Interest: None disclosed.
Requests for Single Reprints: Penny Whiting, MSc, Centre for Reviews and Dissemination, University of York, York YO10 5DD, United Kingdom; e-mail, firstname.lastname@example.org.
Current Author Addresses: Ms. Whiting and Dr. Kleijnen: Centre for Reviews and Dissemination, University of York, York YO10 5DD, United Kingdom.
Ms. Rutjes and Drs. Reitsma, Glas, and Bossuyt: Department of Clinical Epidemiology & Biostatistics, Academic Medical Center, University of Amsterdam, P.O. Box 22700, 1100 DE Amsterdam, the Netherlands.
Background: Studies of diagnostic accuracy are subject to different sources of bias and variation than studies that evaluate the effectiveness of an intervention. Little is known about the effects of these sources of bias and variation.
Purpose: To summarize the evidence on factors that can lead to bias or variation in the results of diagnostic accuracy studies.
Data Sources: MEDLINE, EMBASE, BIOSIS, and the methodologic databases of the Centre for Reviews and Dissemination and the Cochrane Collaboration. Methodologic experts in diagnostic tests were also contacted.
Study Selection: Studies that investigated the effects of bias and variation on measures of test performance were eligible for inclusion. Eligibility was assessed by one reviewer and checked by a second; discrepancies were resolved through discussion.
Data Extraction: Data extraction was conducted by one reviewer and checked by a second reviewer.
Data Synthesis: The best-documented effects of bias and variation were found for demographic features, disease prevalence and severity, partial verification bias, clinical review bias, and observer and instrument variation. For other sources, such as distorted selection of participants, absent or inappropriate reference standard, differential verification bias, and review bias, the amount of evidence was limited. Evidence was lacking for other features, including incorporation bias, treatment paradox, arbitrary choice of threshold value, and dropouts.
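To illustrate how one of these sources distorts measures of test performance, the sketch below works through partial verification bias: when all test-positive patients receive the reference standard but only a fraction of test-negatives do, naively computed sensitivity is inflated and specificity deflated. All counts and the verification fraction are invented for illustration and do not come from the review.

```python
# Hypothetical illustration of partial (work-up) verification bias.
# All 2x2 counts below are invented for illustration only.

def accuracy(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from 2x2 table counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Complete verification: every patient receives the reference standard.
sens, spec = accuracy(tp=80, fp=10, fn=20, tn=90)

# Partial verification: all test-positives are verified, but only 10%
# of test-negatives, so the FN and TN counts shrink proportionally
# while TP and FP are unchanged.
f = 0.10
sens_b, spec_b = accuracy(tp=80, fp=10, fn=20 * f, tn=90 * f)

print(f"complete verification: sens={sens:.2f}, spec={spec:.2f}")
print(f"partial verification:  sens={sens_b:.2f}, spec={spec_b:.2f}")
```

With these numbers, sensitivity rises from 0.80 to about 0.98 and specificity falls from 0.90 to about 0.47, showing why uncorrected estimates from partially verified cohorts can be misleading.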
Conclusions: Many issues in the design and conduct of diagnostic accuracy studies can lead to bias or variation; however, the empirical evidence about the size and effect of these issues is limited.
Whiting P, Rutjes AW, Reitsma JB, et al. Sources of Variation and Bias in Studies of Diagnostic Accuracy: A Systematic Review. Ann Intern Med. 2004;140:189–202. doi: 10.7326/0003-4819-140-3-200402030-00010