Donald A. Redelmeier, MD
Acknowledgments: The author thanks Tracy Willson for administrative support, and Chris Denny, MD, MSc; Edward Etchells, MD, MSc; Damon Scales, MD; Steven Shumak, MD; and Matthew Stanbrook, MD, PhD, for commenting on drafts of this manuscript.
Grant Support: By the Canada Research Chair in Medical Decision Sciences, the Error Management Unit of Sunnybrook and Women's College Health Sciences Centre, and the Canadian Institutes of Health Research. These funding sources had no role in the design, conduct, or reporting of this project. Funding for the Quality Grand Rounds series is supported by the California HealthCare Foundation as part of its Quality Initiative. The author is supported by general institutional funds.
Potential Financial Conflicts of Interest: None disclosed.
Requests for Single Reprints: Donald A. Redelmeier, MD, Sunnybrook and Women's College Health Sciences Centre, Room G-151, 2075 Bayview Avenue, Toronto, Ontario M4N 3M5 Canada; e-mail, firstname.lastname@example.org.
Redelmeier DA. The Cognitive Psychology of Missed Diagnoses. Ann Intern Med. 2005;142:115-120. doi: 10.7326/0003-4819-142-2-200501180-00010
Appendix: Questions and Answers from the Conference
“Quality Grand Rounds” is a series of articles and companion conferences designed to explore a range of quality issues and medical errors. Presenting actual cases drawn from institutions around the United States, the articles integrate traditional medical case histories with results of root-cause analyses and, where appropriate, anonymous interviews with the involved patients, physicians, nurses, and risk managers. Cases do not come from the discussants' home institutions.
A 65-year-old man who was followed in a dermatology clinic for moderately severe lichen planus was called back to the emergency department after blood cultures grew Staphylococcus aureus. The patient's ultimate diagnosis was initially missed through a series of errors in diagnostic reasoning.
Mr. Davis, a 65-year-old African-American man, presented to the emergency department of an academic medical center and reported several days of upper back pain and general body aches. He also reported a sore throat with odynophagia and subjective fever. On physical examination, he was noted to be afebrile, with mild oropharyngeal erythema and clear lung fields. No back examination was documented. He received a diagnosis of an upper respiratory tract infection, and, after blood and throat cultures were obtained, he was discharged with instructions to maintain fluid intake and to take ibuprofen for pain.
The next day, the hospital laboratory notified the treating resident, Dr. Rand, that Mr. Davis's blood cultures were positive. The organism was tentatively identified as Staphylococcus aureus. Dr. Rand contacted Mr. Davis at home and instructed him to return to the emergency department immediately. Upon his return later that day, Mr. Davis reported symptoms identical to those listed the previous day, with possible worsening of his neck and upper back pain.
This case begins with an unexpected and dramatic finding—positive blood cultures in a patient who received an initial diagnosis of viral pharyngitis. The case does not describe what motivated the providers to double-check their initial diagnosis by obtaining blood cultures. Perhaps blood cultures were routinely ordered in this particular emergency department, a practice that would emphasize sensitivity (few patients with bacteremia will be missed) over specificity (many results will be contaminants or “false positives”). Perhaps the clinician identified something intangible yet unsettling in the patient's presentation, a phenomenon often called the “eyeball test.” Regardless of explanation, the case exemplifies that misdiagnoses (for example, viral pharyngitis) can occur and can be corrected with follow-up.
The case is also notable for the scanty physical examination recorded. Ideally, a full physical examination should be documented in all patients; however, in reality, the record often omits details when patients present with findings that are not unusual or particularly worrisome, such as a sore throat. Indeed, charting the initial examination may seem inadequate only in hindsight, once a more complex diagnosis is established. Such insufficient chart notes frustrate many attempts to retrospectively reconstruct cases for the purposes of education, litigation, communication, insight, or creating new systems for error reduction (as in a root-cause analysis). Scanty documentation also suggests poor-quality care, but the relationship between charting and other quality problems is uncertain (1-4).
Cognitive psychology is the science that examines how people reason, formulate judgments, and make decisions. The term “science” implies that cognitive errors may be predictable in some situations—not the result of ignorance or the acts of a few bad performers. Instead, some pitfalls are sufficiently systematic that most people repeatedly make them in both routine and extraordinary situations. Thus, suspending a clinician after a missed diagnosis on the theory that he or she is simply a poor diagnostician may do little to improve future patient care. When the error results from cognitive errors to which all clinicians may fall prey, a replacement clinician may be just as fallible (5). The goal, therefore, is not to demonize or deny such errors in medical decision making but to understand how these mistakes typically are made and to take corrective action to avoid them.
Unfortunately, the seemingly straightforward goal to learn from mistakes is difficult. Some forms of errors, such as miscommunication, can be reduced through direct, unambiguous feedback and system controls (for example, wrong-patient errors can generally be prevented through read-backs and mechanistic constraints) (6). Selected technical procedures (for example, intravenous line insertion) offer a similar opportunity to immediately see one's mistakes and improve. Learning from past diagnostic errors, by contrast, presents greater difficulty because the mistakes may often be too distal in time or place for the erring clinician to be aware of them (let alone learn from them). In the case of viral pharyngitis, past misdiagnoses may not come to light at all (for example, the true diagnosis of sinusitis is never established) or the patient may return to see a different clinician. One might argue that the second clinician could learn from the mistakes of the first clinician, but such cases are often obvious only in retrospect, thus limiting the lesson (for example, florid Candida esophagitis is easy to diagnose once conservative therapy has failed and the condition has progressed).
Bayesian reasoning is an alternative way to learn from mistakes by numerically expressing uncertainty as a probability for each decision point. Its main strength is to provide a language and logic for considering potential diagnoses by using numerical representations of the relevant uncertainty. Its primary limitation is a lack of reliable data to characterize unique patients or, even when such data are available, the complex process of interpreting subtle interactions. In this case, for example, the clinician's subjective opinion of the probability of viral pharyngitis was not recorded and the patient's objective probability of viral pharyngitis was not known. Hence, a discrepancy between perception and reality that would indicate faulty judgment is impossible to detect, let alone correct. More generally, Bayesian reasoning may replace faulty judgment with accurate data, but the clinicians in this case (like so many of their peers) did not use the more numerical approach.
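To make the Bayesian alternative concrete, the calculation itself takes only a few lines. The numbers below are purely illustrative assumptions, not drawn from the case or from published data: suppose a 5% pretest probability of true bacteremia in this setting, and treat blood cultures as 90% sensitive and 95% specific.

```python
def posterior(prior, sensitivity, specificity):
    """Bayes' theorem: probability of disease given a positive test.

    P(D|+) = P(+|D) * P(D) / [P(+|D) * P(D) + P(+|not D) * P(not D)]
    """
    p_pos_given_disease = sensitivity          # true-positive rate
    p_pos_given_healthy = 1 - specificity      # false-positive rate
    numerator = p_pos_given_disease * prior
    denominator = numerator + p_pos_given_healthy * (1 - prior)
    return numerator / denominator

# Hypothetical, illustrative inputs only.
p = posterior(prior=0.05, sensitivity=0.90, specificity=0.95)
print(round(p, 2))  # → 0.49
```

Under these assumed numbers, a single positive culture raises the probability of true bacteremia from 5% to roughly 49%, which also illustrates why, in a low-prevalence setting, a large fraction of positive cultures are contaminants.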
Studying cognitive pitfalls in reasoning is a different way to learn from mistakes. Its focus is on the recognition of misleading intuitions. Consider, for example, an analogy between errors in vision and errors in thinking. When people view a road surface on a hot summer day, the mirage of water may appear in the distance. Regardless of effort, a person cannot help but see the optical illusion (and real problems can result if a driver acts on this impression and abruptly changes lanes). Drivers soon teach themselves that their eyes can be fooled, and they use this awareness to facilitate safer driving. The acuity of their vision does not increase; rather, drivers bring the insight gained from experience to the task and override their known fallibilities. Similarly, studying cognitive psychology may not lead to foolproof reasoning in all cases, but it may provide awareness that can help avoid predictable pitfalls in specific cognitive settings.
Many errors in diagnostic thinking seem attributable to shortcuts in reasoning, tempting teachers and clinicians to believe that eliminating such shortcuts would eliminate diagnostic mistakes. Yet cognitive psychology studies suggest that faulty judgments arise precisely because the underlying shortcuts are typically correct and produce the desired results with a minimum of delay, cost, and anxiety. By analogy, the hot road illusion persists because shimmering reflections on outside terrain usually, in fact, indicate water. In many patients, for example, odynophagia does resolve with the standard treatment for pharyngitis (for example, fluids and rest). Moreover, extensive deliberation for each case could induce substantial delays or excessive testing. Rather than discarding the shortcuts that serve well in the main, clinicians might be better served by recognizing the potential diagnostic hazards that arise from their reliance on specific shortcuts in reasoning and override them when appropriate.
Mr. Davis's medical history was notable for Hodgkin disease, which was diagnosed 10 years earlier as stage IIA, after the patient had presented with cervical lymphadenopathy. He received combined-modality treatment with radiation and chemotherapy, with no suggestion of recurrence during follow-up. His oncologist regarded him as cured.
In addition, the patient had a long history of moderately severe lichen planus, for which he was followed in the dermatology clinic. Over the years, he had used various therapies, including over-the-counter creams to reduce pruritus, topical steroids, and phototherapy. He had been intermittently nonadherent with follow-up and treatment recommendations, and his pruritus had worsened in recent weeks.
On examination, his face was flushed and he appeared to be experiencing mild discomfort (which he attributed to neck and back pain). His temperature was 36.9°C. The remaining vital signs were blood pressure, 170/90 mm Hg; heart rate, 98 beats/min; and respiratory rate, 17 breaths/min, with oxygen saturation of 97% on room air. He was alert and oriented, with no abnormalities of cranial nerves II to XII. His neck was supple with full range of motion. His oropharynx appeared normal, without pharyngeal erythema or exudate. His lungs were clear, and a cardiac examination revealed no murmurs. His abdomen was soft and nontender, with no hepatosplenomegaly. There was no warmth or swelling of the joints, and his back examination was described as unremarkable.
Skin examination was notable for scaling and erythematous papules on the wrists, as well as hyperpigmentation and hypertrophic plaques on the anterior legs. Scattered excoriations on the arms and trunk were consistent with the patient's recent pruritus. The patient had no needle-track marks suggestive of intravenous drug use or peripheral signs of endocarditis. The results of laboratory tests obtained at the time were as follows: leukocyte count, 9 × 10⁹ cells/L; hematocrit, 0.40; platelet count, 190 × 10⁹ cells/L; and sodium level, 133 mmol/L. Urinalysis showed 3 to 10 erythrocytes per high-power field, with 6 to 10 squamous epithelial cells per high-power field.
Cognitive psychologists refer to shortcuts in reasoning as “heuristics,” of which the availability heuristic is a prime example. Specifically, the availability heuristic leads people to judge likelihood by the ease with which examples spring to mind (7). A classic demonstration is to ask people whether the English language has more words that start with the letter “r” or more words that have “r” in the third position (8). Because people find it easier to retrieve words starting with “r” (for example, “red”) than words with “r” in the third position (for example, “car”), they incorrectly believe that there are more of the former than the latter (the actual ratio is about 1:2). Estimating the likelihood of a diagnosis by ease of recall is much more convenient than systematically collecting and memorizing probabilities from a rigorous epidemiologic study. Furthermore, the shortcut in reasoning is often appropriate, since familiar diagnoses tend to be those that are frequently encountered. In this case, the clinicians may have attributed the patient's upper back pain and myalgias to viremia (without sufficiently considering other causes) because such an association is a familiar experience that comes easily to mind.
A second shortcut in reasoning—the anchoring heuristic—may also have occurred. This heuristic leads people to stick with initial impressions once they are solidly formed (9). Doing so is far easier than integrating the sensitivity and specificity of every new finding encountered. However, the anchoring heuristic is fallible because it conflicts with the scientific principle of checking for disconfirming evidence. A classic demonstration of the anchoring heuristic is to ask people with arthritis whether their pain is related to the weather. Many people can remember 1 day of intense pain and stormy weather and formulate an idea that the weather led to the pain. The association is compelling, so they may forget pain that occurred on a fair day and scrutinize future days with a predilection to confirm the pain–weather link. As a result, people become convinced of a relationship, even though studies find no consistent objective correlation (10). In this case, the clinicians may have set their impression around their initial tentative diagnosis of pharyngitis and become cognitively mired there, even after the back pain persisted and the positive blood cultures returned.
The case may also illustrate a third shortcut in reasoning that highlights how people confront complexity. The concept is called a framing effect, in which people tend to come to different decisions depending on how information is presented, or framed (11). A classic demonstration of framing effects is a study in which participants were asked to choose between surgery and radiation for lung cancer treatment (12). The outcome data appeared in a “mortality frame” for half of the participants (for example, “10% chance of dying”) and a “survival frame” for the other half (for example, “90% chance of surviving”). Otherwise, both groups received identical information. The main finding was that respondents' decisions to elect surgery increased from 58% to 75% when the information was framed in survival rather than mortality terms. Through framing effects, small changes in wording alter decisions about management.
Substantial skill is required to both collect clinical findings and frame them correctly. Improperly framed data often underlie provider-to-provider miscommunication (for example, when the emergency department labels a patient as having pneumonia by framing the presentation as “fever, shortness of breath, and cough,” and the accepting physician does not consider pulmonary embolism despite clear lung fields that are evident on a chest radiograph). Even without provider-to-provider communication, clinicians may miss the diagnosis because they frame their own interpretation of cases and thereby do not consider alternative explanations. This patient, for example, could be legitimately framed as “pharyngitis, myalgias, and blood cultures positive for Staphylococcus” (which might lead the clinician to suspect a viral illness and contaminated blood cultures) or “fever, back pain, and hematuria” (generating a different diagnosis list, including hypernephroma, lupus erythematosus, and tuberculosis).
A chest radiograph revealed a tortuous aorta, a normal cardiac silhouette, and no pulmonary infiltrates or effusions. Plain films of the cervical and lumbar spine showed degenerative changes but no acute fractures or lesions that might indicate malignancy.
Mr. Davis was hospitalized and started on a course of empirical vancomycin, which was replaced with intravenous nafcillin when cultures identified methicillin-sensitive S. aureus.
On morning rounds the next day, Mr. Davis's inpatient physician, Dr. Douglas, noted a new II/VI holosystolic murmur heard best at the apex. Dr. Douglas ordered a transthoracic echocardiogram. A subsequent chart note from the same physician states “no murmur today; change from yesterday is no longer appreciated.” Transthoracic echocardiography performed later that day showed no valvular lesions or vegetations.
Mr. Davis's symptoms improved, and, after 4 days of intravenous antibiotics, he was discharged to complete a 2-week course of oral dicloxacillin. The presumptive diagnosis was staphylococcal bacteremia secondary to excoriations from his chronic dermatologic condition. He was also given an appointment to see his primary care physician in 3 weeks and to have surveillance blood cultures drawn at that time. Dr. Douglas felt that the transthoracic echocardiogram ruled out endocarditis, especially in light of the fact that Mr. Davis reported feeling better, remained afebrile, and had a plausible portal of infection (the skin).
At this point in the patient's course, the clinicians settled on their final diagnosis as sufficient to fully explain the situation—specifically, S. aureus bacteremia without abscess formation. The team apparently felt that the negative transthoracic echocardiogram, coupled with the patient's response to antibiotic therapy, ruled out endocarditis. In addition, the negative plain films of the spine seem to have removed the possibility of osteomyelitis in their minds. Beyond these 2 exclusions, the clinicians generated a pathophysiologic explanation for the patient's signs, symptoms, and laboratory findings—namely, S. aureus bacteremia secondary to excoriations. This creative and coherent stream of reasoning far exceeds the powers of even the most advanced existing artificial intelligence program and underscores the irreplaceable role of clinicians' powers of reasoning.
In creating a coherent theory of the case, however, the clinicians also downplayed several dissonant notes. Staphylococcal bacteremia in an elderly patient who does not have diabetes mellitus is unusual and justifies some reconsideration. Lichen planus leading to bacteremia is also unusual and leaves unexplained why the situation had never developed previously. Moreover, the clinicians seem to have relied on negative findings on 2 diagnostic tests (echocardiography and the plain films of the spine) that both lack the sensitivity for their corresponding target diseases (endocarditis and osteomyelitis, respectively) to truly exclude those possibilities in any patient with more than a tiny pretest probability of the disorder (13-15). The apparent over-reliance on diagnostic technology results (and underappreciation of technology's limitations) may be akin to blind obedience, which leads people to stop thinking when confronted with an apparent authority that may be human (for example, an assertive colleague) or technological (for example, an objective test result) (16).
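The arithmetic behind this caution about insensitive tests can be made explicit. The sketch below uses assumed, round numbers chosen only to show the shape of the calculation (a 25% pretest probability of endocarditis, and a transthoracic echocardiogram treated as 60% sensitive and 95% specific); they are hypothetical, not measured performance figures for any test in this case.

```python
def prob_after_negative(prior, sensitivity, specificity):
    """Probability the disease is still present despite a negative test.

    P(D|-) = P(-|D) * P(D) / [P(-|D) * P(D) + P(-|not D) * P(not D)]
    """
    false_negative = (1 - sensitivity) * prior   # diseased, but the test missed it
    true_negative = specificity * (1 - prior)    # healthy, and the test agrees
    return false_negative / (false_negative + true_negative)

# Assumed illustrative values only.
residual = prob_after_negative(prior=0.25, sensitivity=0.60, specificity=0.95)
print(f"{residual:.0%}")  # → 12%
```

Under these assumptions, a negative study lowers the probability of endocarditis from 25% to roughly 12%: reassuring, but nowhere near “ruled out.” Only when the pretest probability is tiny does a negative insensitive test drive the residual probability toward zero, which is the point the text makes about both the echocardiogram and the plain films.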
The general pattern at this point exemplifies a particular form of anchoring bias, sometimes denoted as premature closure, which is characterized by a reluctance to pursue alternative possibilities once a commitment is made (17). Premature closure can be paradoxically more compelling in situations where several options are available. When just 1 alternative is available, generally it will be checked; when many alternatives are available, the inclination is to do nothing. A classic demonstration of premature closure when confronted by excess choice involves a hypothetical patient with hip pain being considered for orthopedic surgery (18). Half of the participants were told that the patient had not tried 1 medication (ibuprofen). The other half were told that the patient had not tried 2 medications (ibuprofen and piroxicam). Clinicians decided to forego medications completely far more often (72% vs. 53%) when 2 options were available rather than just a single option. Apparently, the difficulty in choosing between ibuprofen and piroxicam caused some respondents to give up on both medications. For this case, premature closure may explain the lack of a thorough search for a nidus of S. aureus since so many alternative possibilities were available (for example, osteomyelitis, endocarditis, and perinephric abscess).
Mr. Davis saw his primary care physician as scheduled. Although he reported that his symptoms had initially improved, his generalized fatigue and neck and back pain had recurred, and he now felt tingling sensations in his fingers and had difficulty urinating. He was not admitted, but laboratory tests were ordered and he was sent home. One of 2 surveillance blood cultures returned positive for S. aureus, so Mr. Davis was again called at home and referred to the hospital (this time for direct admission).
Mr. Davis again received empirical vancomycin (pending susceptibilities) and underwent urgent magnetic resonance imaging, which showed changes consistent with an epidural abscess and osteomyelitis at C6–7 accompanied by spinal cord impingement. Decadron was added to his medications and neurosurgery staff was consulted. A second transthoracic echocardiogram performed during this hospitalization again showed no valvular lesions or vegetations. Mr. Davis was reluctant to undergo transesophageal echocardiography, and the physicians did not insist since he would receive prolonged antibiotics for his osteomyelitis anyway.
Mr. Davis declined the surgical intervention that was recommended by the neurosurgical consultant and the attending physician, and he elected instead to receive medical treatment only, with steroids for the spinal cord compression and a 6-week course of intravenous cefazolin. At the last follow-up, he reported some persistent paresthesias and mild weakness of his arms but no constitutional symptoms. Surveillance cultures on 3 subsequent occasions have been negative.
Follow-up may be a feasible strategy to prevent cognitive shortcuts from causing harm, since it allows clinicians to reconsider the entire picture from an alternative perspective (Table). For example, follow-up may offset the availability heuristic by providing an opportunity to verify with legitimate sources, such as MEDLINE; may mitigate the anchoring heuristic by providing more distance from initial impressions; and may counteract premature closure by allowing the clinician to reconsider the case when he or she is less fatigued. Many specific corrective strategies also have direct analogues to popular clinical maxims. Diligent follow-up is no panacea for mistakes in reasoning, but it allows for corrective intervention for the patient at hand and the opportunity to learn from mistakes for the benefit of future patients.
The caveat to follow-up is that it requires appropriate timing, because some clinical problems become irreparable if intervention is delayed too long. When timed correctly, follow-up provides clinicians with all the joys of lucky hits (for example, cases in which the shortcut in reasoning proves correct and the physician seems brilliant). In addition, properly timed follow-up can make failures seem reasonable and anticipated (for example, by declaring “that's exactly why I scheduled you for follow-up—I was worried that something else was going on”). Furthermore, follow-up is high in patient satisfaction, relatively low in cost, and congruent with the strongest traditions in medicine. Indeed, the general failure to follow up is the root cause that allows cognitive errors to sometimes run amok in human judgment and decision making.
A conventional summary of this case might stress “never forget osteomyelitis” and emphasize the need for a more exhaustive history and examination. A cognitive psychology review, in contrast, focuses on being aware of the shortcuts in reasoning that prevail in decision making. For example, framing effects can be construed, in hindsight, as the failure to document a diligent clinical assessment. Yet framing is indispensable to succinct and coherent communication. Insights based on cognitive psychology, arguably, can provide versatile and practical corrective strategies, such as the reminder to examine cases from alternative perspectives. By adopting such corrective strategies, clinicians can strive to intercept their own errors and activate robust solutions. In fact, far from discarding the shortcuts in reasoning, the seasoned clinician simply adds safeguards to minimize reflexive decision making in nuanced situations. For instance, coming to the emergency department, especially if the patient has access to any other site of care, might be a flag to consider a more serious condition than viral pharyngitis.
Ideally, scientific evidence would eliminate the uncertainty that leads to missed diagnoses in medical practice. However, perfect knowledge of relevant probabilities is unlikely, given the expense and delays in gathering medical information. Even if patients never declined any recommendation, it is not realistic to order magnetic resonance imaging in every case. Notwithstanding swift Internet access to the medical literature, the principles of evidence-based medicine are not universally practiced. Moreover, exuberant testing and exhaustive combing of the literature would probably generate an abundance of false-positive reports and conflicting data that would increase—rather than decrease—the cognitive load on clinicians. In the future, therefore, clinicians will continue to use shortcuts in reasoning, to experience them as double-edged swords, and to benefit from strategies that mitigate the potential harms.
Dr. Lee Goldman, University of California, San Francisco (Moderator): You've made the point that follow-up will give you a second chance, which suggests that we can learn and be open to new information. From your studies, is there any data as to how often this actually occurs versus how often these heuristics persist in such a way that you could have follow-up forever but never be able to change the system?
Dr. Redelmeier (Discussant): The evidence shows that short procedures, such as a lecture like this, are awfully limited in what they can do. One of my mentors, a man who won a Nobel Prize in economics for a great deal of this work, tells me that he still falls prey to every one of those cognitive illusions, so if they can't be beaten out of Daniel Kahneman, I don't think they can be beaten out of the rest of us—nor necessarily should they, because these things usually work so well. What can be beaten out of us, though, is this enormous sense of arrogance. In particular, I think adding in a spice of humility allows the opportunity for both circumspection and perhaps error interception. We all know that our eyes can play tricks on us, so that's why we double-check and use assistive technology, such as eyeglasses.
Dr. Goldman: Do you have any reflections on how second opinions might play into this?
Dr. Redelmeier: Second opinions, and the opportunity to view things from an alternative perspective, can be remarkably helpful, particularly for framing effects. But the opinion truly has to be independent. It can't be groupthink, as is all too common in command structures. With framing effects in particular, there is some evidence that viewing a case from other perspectives can help correct some of the distortions.
A rheumatologist: One of the things I thought I heard you say is that misdiagnoses are not events from which you can learn, but if I look back at my own career, in fact, the most effective learning that I have done has been from my mistakes. In fact, when you were talking, you described a number of events that were mistakes from which you obviously learned a great deal. I'm not advocating making mistakes in order to learn, but I disagree that we don't learn a lot from those mistakes that happen.
Dr. Redelmeier: I think it's a great point, and certainly I learned a great deal from those mistakes. Sometimes you can learn the wrong message, of course. What I want to emphasize is that we make so many mistakes that we're oblivious of, that we get lulled into a false sense of security. A classic example is that 90% of people believe that they are above-average drivers in skill. Of course, that is contrary to the laws of statistics. The genesis of the misconception, though, is that every time we inadvertently cut somebody off, we're oblivious of that fact. We never get feedback from that error. Whereas everybody else who cuts us off, we're painfully aware of, so that a lifetime of experience creates this impression that we truly are better than average drivers. The feedback of our mistakes often doesn't get back to us, because people are afraid to report to us what we've done wrong. So that's why after every talk I've given as a visiting professor here, I've searched out the chief resident and asked her, “How am I doing and what am I doing wrong?” You know, it's been kind of awkward for her to give feedback. She's been polite, so a lot of my mistakes will last a lifetime.
A rheumatologist: I think follow-up deserves special emphasis because many of us, as I like to say, make the same mistake over and over again and call it “experience.” So follow-up done correctly allows us not only to learn from our mistakes but also does something else, and that is to see the natural history of disease in this age of technological pervasiveness, given the uncontrollable urge to do a lab test or a high-tech procedure.
Dr. Redelmeier: I agree totally. This is particularly true when the length of time for the encounter is shrinking, so that you no longer have half an hour with a patient, you've only got 5 minutes. Then you maybe only have 2 minutes' worth of work time, so you then ask the patient to come back next week. In a 5-minute encounter, you can't possibly get it right, so you must rely on follow-up. In terms of the role of technology, I think that technology is a great big help. I'm a great fan of computerized lab reports and entries, but technology is a wonderful assistive device, never a panacea. We all have wristwatches, yet we're all late for many, many sorts of things. Watches don't solve the problem even if they do go a long way toward helping people out.
Dr. Robert M. Wachter, Quality Grand Rounds Editor: One aspect of the case that struck me is that you essentially have 2 errors: The initial doctor says viral pharyngitis, which then gets replaced by the finding of Staphylococcus bacteremia; then, after some work-up that was probably inadequate, the diagnosis of osteomyelitis is missed. Do you think the finding of an initial zig decreases the chances that you will then find the second zag, because we are limited in how many surprises we are capable of keeping our minds open to?
Dr. Redelmeier: Yes, because of the exhaustion factor. We don't really know what's going on, and we still don't have the final word on this patient, right? For all we know, there may be some sort of IV drug use going on in the background or some other process that we are yet to see—Mother Nature has no mercy. Just because we've identified and rectified the first 3 mistakes doesn't mean that mistakes 4 and 5 aren't out there as well. And it's not just the zigs and the zags, it's all these red herrings that are out there as well.
David D Derauf
Kokua Kalihi Valley Health Center
January 19, 2005
Cogitating on Cognitive Errors
Dr. Redelmeier is to be thanked for his recent article "The Cognitive Psychology of Missed Diagnoses." I hope it will prove useful in my own clinical practice as well as in teaching clinical skills to medical students and residents.
Regarding the framing pitfall, perhaps another clinical maxim to consider would be: "Never choose a frame too small for the picture." In my experience, framing is often the act of narrowing a diagnosis through the choice of words, the utility of which must be weighed against the danger of prematurely eliminating alternative explanations. In this sense, framing appears closely linked to premature closure.
Regarding the underlying rationales for each type of cognitive error, the article mentions the ability to "produce the desired results with a minimum of delay, cost and anxiety." In addition, our perceived need as physicians to appear omniscient, or at least highly self-confident, may often underlie common cognitive errors in arriving at a diagnosis. Expressing confidence in a diagnosis may be thought to help heal the patient, or at least to increase patient satisfaction with the visit, though the evidence for both seems scanty (1). The extent to which this underlies common cognitive errors in diagnosis awaits further research.
1. Beck RS, Daughtridge R, Sloane PD. Physician-patient communication in the primary care office: a systematic review. J Am Board Fam Pract. 2002;15(1):25-38.
Ronald M Epstein
University of Rochester, Departments of Family Medicine and Psychiatry
February 18, 2005
Non-cognitive factors in diagnostic errors
The recent article by Redelmeier (The Cognitive Psychology of Missed Diagnoses. Ann Intern Med. 2005;142:115-120) provides a useful guide to the mental processes that accompany cognitive errors in diagnostic reasoning. Pragmatically, however, two pieces are missing that might translate those insights into practical strategies for error reduction.
First, some consideration should be given to the involvement of emotions in rational decision making (1). Lack of sufficient emotional involvement can lead to carelessness and uncritical reliance on heuristics, whereas overactive emotions can generate paralyzing anxiety and a similar loss of perspective (2). Thus, some capacity to recognize one's own emotions, both quantitatively and qualitatively, can influence one's ability to recognize and accommodate the heuristics being unconsciously employed.
Second is the embedded assumption that knowledge of the kinds of tricks one's mind can play is sufficient to prevent them. The level of self-awareness that the author suggests is usually achieved only with some effort and should not be assumed to occur spontaneously. Some methods of achieving self-awareness, familiar to performing artists, psychotherapists, and athletes, have been adapted to medical training in a variety of forms: group formats that allow reflection on the emotional content of clinical encounters (3), honing one's observational skills by viewing art or reading poetry (4), and writing narratives (5). Weick describes organizational features that promote mindfulness (rewarding transparency and disclosure, encouraging vigilance, and fostering resilience) that could be adapted to medical settings (6). Although the effort to reduce errors has increasingly focused on institutional factors, creating institutional cultures that foster individual self-awareness may also be an important ingredient in promoting recognition of, and appropriate responses to, faulty diagnostic reasoning.
Ronald Epstein, MD
(1) Damasio AR. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Brace; 1999.
(2) Borrell-Carrio F, Epstein RM. Preventing Errors in Clinical Practice: A Call for Self-Awareness. Ann Fam Med 2004 July 1;2(4):310-6.
(3) Novack DH, Suchman AL, Clark W, Epstein RM, Najberg E, Kaplan C. Calibrating the physician: personal awareness and effective patient care. JAMA. 1997;278(6):502-9.
(4) Connelly J. Being in the present moment: developing the capacity for mindfulness in medicine. Acad Med 1999 April;74(4):420-4.
(5) DasGupta S, Charon R. Personal illness narratives: using reflective writing to teach empathy. Acad Med. 2004;79(4):351-6.
(6) Weick KE, Sutcliffe KM. Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco: Jossey-Bass; 2001.
Donald A Redelmeier
University of Toronto
April 27, 2005
Response from Original Article Author
To the Editor: Derauf provides some insightful feedback in a kindly manner. His introduction of another maxim is particularly welcome, as are his remarks about the role of confidence in medicine. In moderation, confidence is necessary because facing trouble requires courage, the stakes in medical care are high, and clinicians do not want to act capriciously. Taken to an extreme, however, any self-deception is a source of fallibility. A valid rationale is what distinguishes confidence from self-deception.
Psychology research also shows that people are sometimes overly dependent on the rationale. In one study (1), students were offered an attractive vacation following a tough examination. By random assignment, a third were told they had passed; in this case, most accepted the offer (presumably as a reward). A third were told they had failed; in this case, most still accepted the offer (presumably as a consolation). The final third were told the examination results were delayed; in this case, most declined the offer. Apparently, the lack of a rationale dissuaded some students from accepting a vacation that was otherwise attractive regardless of circumstance. The general pattern is that even minor decisions require the presence of a rationale.
Biology is complex and patient presentations are uncertain; hence, a clinician may seek or construct all sorts of rationales. Once a rationale is obtained, the clinician tends to lack the circumspection of a dispassionate reviewer. As Derauf mentions, this self-deception underpins a basic vulnerability to framing effects and a failure to intercept errors. As Derauf also emphasizes, self-deception is sometimes reinforced by the patient. Alas, self-deception is a resource many of us have in abundance.
Reference
1. Tversky A, Shafir E. The disjunction effect in choice under uncertainty. Psychol Sci. 1992;3:305-9.
Copyright © 2016 American College of Physicians. All Rights Reserved.
Print ISSN: 0003-4819 | Online ISSN: 1539-3704