Rochelle P. Walensky, MD, MPH; A. David Paltiel, PhD
Acknowledgments: The authors thank Kenneth A. Freedberg, MD, MSc, for critical review of the manuscript and Lauren Mercincavage for technical support.
Grant Support: By the National Institute of Allergy and Infectious Diseases (K23 AI01794, R01 AI42006, and P30 AI060354); the National Institute of Mental Health (R01 MH65869); the Doris Duke Charitable Foundation, Clinical Scientist Development Award; and the National Institute on Drug Abuse (R01 DA015612).
Potential Financial Conflicts of Interest: None disclosed.
Corresponding Author: Rochelle P. Walensky, MD, MPH, Division of General Medicine, Massachusetts General Hospital, 50 Staniford Street, 9th Floor, Boston, MA 02114; e-mail, firstname.lastname@example.org.
Current Author Addresses: Dr. Walensky: Division of General Medicine, Massachusetts General Hospital, 50 Staniford Street, 9th Floor, Boston, MA 02114.
Dr. Paltiel: Yale School of Medicine, 60 College Street, New Haven, CT 06520.
Walensky R., Paltiel A.; Rapid HIV Testing at Home: Does It Solve a Problem or Create One?. Ann Intern Med. 2006;145:459-462. doi: 10.7326/0003-4819-145-6-200609190-00010
The U.S. Food and Drug Administration (FDA) is considering approval of an over-the-counter, rapid HIV test for home use. To date, testimony presented before the FDA has been overwhelmingly supportive. Advocates have argued enthusiastically that there is value in empowering individuals to manage their HIV risks and have suggested that the availability of a rapid home HIV test will dramatically increase rates of disease detection in communities that have proven difficult to reach and to link to appropriate care. The authors offer a more cautious perspective. According to what is already known about the market demand for over-the-counter HIV testing kits, their costs, and the performance of rapid HIV tests in that market, the authors do not anticipate that the rapid home test will have a profound impact either on the HIV public health crisis or on the populations in greatest need. Home HIV testing will attract a predominantly affluent clientele, composed disproportionately of HIV-uninfected new couples and "worried well" persons, as well as very recently infected persons with undetectable disease. The authors illustrate how testing in these populations may have the perverse effect of increasing both false-positive and false-negative results. A poorly functioning home HIV test may thereby undermine confidence in the reliability of HIV testing more generally and weaken critical efforts to expand HIV detection and linkage to lifesaving care for the estimated 300,000 U.S. citizens with unidentified HIV infection.
David H Lander
Virginia College of Osteopathic Medicine
October 3, 2006
False-Positive HIV Tests
To the Editors:
As someone who worries about the impact of false-positive HIV tests, I thank Drs. Walensky and Paltiel for their thoughtful Perspective on home HIV testing (1). I would like to amplify one of their points, and add one quibble.
Their table nicely shows the outcomes at differing prevalences. As described by Gigerenzer and Edwards (2), physicians need to look at numbers that describe risk or statistical probability in a way that they can understand concretely and communicate effectively.
Take a more extreme case: Kleinman and colleagues (3) describe the results of screening over 5 million blood donors in North Carolina. There were 421 samples found to have a positive Western blot test, making the prevalence 421/5,020,421 = 0.00008386, or about 0.008%. Suppose we apply that prevalence to the OraQuick home test, using its given values for sensitivity and specificity, and choosing a nice round 5 million people to be tested. In this scenario, of the 10,396 positive home tests, only 397 are true-positives: for every 100 positive home tests, only about 4 will really have the infection.
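The arithmetic above can be reproduced with natural frequencies. The letter does not state the sensitivity and specificity it assumed for OraQuick; the values below (94.7% and 99.8%) are assumptions backed out from its reported totals of 397 true-positives and 10,396 total positives, and are used here only to illustrate the calculation:

```python
# Worked sketch of the letter's home-test scenario (assumed test characteristics).
prevalence = 421 / 5_020_421   # Western blot positives among screened donors
sensitivity = 0.947            # ASSUMED, inferred from the letter's totals
specificity = 0.998            # ASSUMED, inferred from the letter's totals
n = 5_000_000                  # round number of people tested

infected = n * prevalence                       # ~419 truly infected
true_pos = infected * sensitivity               # ~397 true-positive results
false_pos = (n - infected) * (1 - specificity)  # ~9,999 false-positive results
total_pos = true_pos + false_pos                # ~10,396 positive results
ppv = true_pos / total_pos                      # positive predictive value

print(round(true_pos))   # 397
print(round(total_pos))  # 10396
print(f"{ppv:.1%}")      # 3.8% -- about 4 in 100 positives are real
```

At this prevalence, the false-positives from 5 million uninfected people swamp the handful of true-positives, which is the letter's central point.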
In the Kleinman study, the samples with a positive Western blot were subjected to RNA PCR, and on that basis (and other evidence) 20 were judged false-positive. The specificity of the Western blot test was a hefty 5,020,000/5,020,020 = 0.999996, or 99.9996%! In the face of that overwhelming degree of "accuracy", how many physicians, if asked by a patient "What is the chance that my positive HIV test is untrue?", would answer "about one in 20"? But that is what you get: the 20 false-positives out of 421 total positives give a positive predictive value (PPV) of 95.25%, or a false-positive rate of 4.75%.
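These Western blot figures follow directly from the counts quoted above; a minimal check, using only the numbers the letter itself reports:

```python
# PPV and specificity from the Kleinman blood-donor data as quoted in the letter.
total_screened = 5_020_421
wb_positive = 421              # positive Western blot results
false_positive = 20            # judged false-positive by RNA PCR
true_positive = wb_positive - false_positive          # 401
true_negative = total_screened - wb_positive          # 5,020,000 correct negatives

# Specificity: correct negatives among all who are truly uninfected.
specificity = true_negative / (true_negative + false_positive)
# PPV: true-positives among all positive results.
ppv = true_positive / wb_positive

print(f"{specificity:.6f}")  # 0.999996 -- "more than 99.99% accurate"
print(f"{ppv:.2%}")          # 95.25% -- about 1 in 20 positives is false
```

The contrast between the six-nines specificity and the roughly 1-in-20 chance of a false-positive result is exactly the gap in intuition the letter describes.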
How hard would it be for most (even educated) people to grasp the fact that, on the one hand, a test is "more than 99.99% accurate," while on the other hand it is highly likely that their own test result is false-positive? How should a warning label on the OraQuick be phrased? "Attention: if your lifestyle is very low-risk for HIV, then a positive result on this test is probably an error." Would people understand this?
Making the jump between very high test accuracy and the variable, and possibly low, positive predictive value can be flubbed by otherwise knowledgeable physicians. I fear that we can be lulled into complacency when experts seem to gloss over the false-positive possibility with statements such as: "standard testing for HIV infection has a sensitivity and specificity greater than 99% and that false-positive test results are rare, even in low-risk settings" (4). The test is so accurate, so why worry? An excellent review of HIV counseling in the BMJ (5) didn't even mention the false-positive issue.
Where should this take us? I postulate, without evidence, that the diagnosis of HIV is still perceived by many people in our culture as representing a "death sentence". One hundred percent certainty is rarely going to be possible in medicine, so how far should we go? Should all persons with a positive Western blot not be considered HIV-positive until they are tested with RNA PCR? (This is, after all, a crucial next step in their management (6).) If RNA PCR is needed for positive Western blots, how about our home testers?
My quibble regarding the paper of Drs. Walensky and Paltiel concerns the authors' statement that sensitivity and specificity "are independent of the population in which the test is conducted." Coming after their reference to a test's "inherent accuracy", I think they might give the reader a false impression.
Certainly a test can have a sensitivity and a specificity that have been demonstrated to be stable when the test is studied in a wide spectrum of both healthy and diseased patients. This is theoretically possible, and in the case of OraQuick might be true. However, the traditional concept that they are inherent properties of a test is no longer to be blandly accepted.
The late Dr. Alvan Feinstein explained that it had been "assumed that the nosologic values of sensitivity and specificity were constant for each disease and for each non-diseased control group, regardless of the spectrum of patients who were tested. This assumption was incorrect. The nosologic indices are not constant: they will vary with variations in the clinical, pathological, or comorbid attributes of the patients in different parts of the spectrum for each disease and for the complementary states of non-disease" (7).
Perhaps the best way for practicing physicians to look at this issue is in the most general, and least technical, way. The "sensitivity and specificity" are just numbers, generated by clinical research. Consider: if one states that "the absolute risk reduction (ARR) of stroke induced by aspirin therapy is X," it is clear that this simply refers to the particular results of one or more clinical trials of aspirin, not an inherent property of aspirin, or of humans-on-aspirin. Such an ARR is to be considered as possibly valid pending our review of the evidence and of its applicability to our particular patient. So it is with sensitivity and specificity: they are not some immutable property of the test, but merely the results of clinical research.
If used on a very low-prevalence group, the OraQuick may be just as specific as stated. We just need to be skeptical, and aware of the ever-present possibility of false-positives with even the most accurate tests. And, when false-positives have potentially devastating emotional impact, we must understand and then effectively communicate the statistics to our patients, and help to assure appropriate confirmation. I hope the OraQuick people and the FDA think this all through carefully.
David Lander, MD, FACP, FACEP
Associate Professor, Edward Via Virginia College of Osteopathic Medicine
Blacksburg, Virginia
Home Address: 5773 Franklin Pike Floyd, Virginia 24091
Potential Financial Conflicts of Interest: None.
1. Walensky RP, Paltiel AD. Rapid HIV testing at home: does it solve a problem or create one? Ann Intern Med. 2006;145:459-62.
2. Gigerenzer G, Edwards A. Simple tools for understanding risks: from innumeracy to insight. BMJ. 2003;327:741-4.
3. Kleinman S, Busch MP, et al. False-positive HIV-1 test results in a low-risk screening setting of voluntary blood donation. JAMA. 1998;280:1080-5.
4. U.S. Preventive Services Task Force (USPSTF). Screening for HIV: recommendation statement. Ann Intern Med. 2005;143:32-7.
5. Chippendale S, French L. ABC of AIDS: HIV counselling and the psychosocial management of patients with HIV or AIDS. BMJ. 2001;322:1533-5.
6. Hammer SM. Management of newly diagnosed HIV infection. N Engl J Med. 2005;353:1702-10.
7. Feinstein AR. Misguided efforts and future challenges for research on "diagnostic tests". J Epidemiol Community Health. 2002;56:330-2.