As a young doctor, I had an elderly patient who complained of discomfort under her breastbone. I examined her, performed several tests and quickly concluded that she had indigestion. The antacids I prescribed brought little relief, but my mind was so fixed that her persistent complaints sounded to me like a nail scratching a chalkboard.
Several weeks later, I was paged to the emergency room. The woman was in shock. The discomfort under her breastbone, it turned out, had been caused by a tear in her aorta. After she died, my colleagues commiserated, saying that a torn aorta can be hard to diagnose, that the woman was so old that she probably would not have survived surgery to repair the tear. But that provided cold comfort, and I have never forgotten, nor forgiven myself.
In some hospitals, mistakes are categorized as “E.T.” for errors in technique and “E.J.” for errors in judgment. Errors in technique might involve placing a needle too far into the chest and puncturing a lung or inserting a breathing tube into the esophagus instead of the trachea — mistakes that, with practice, doctors can learn to stop making.
Errors in judgment are not so easily avoided, because we have largely failed to learn anything about how we think. Modern clinical practice has incorporated DNA analysis to illuminate the causes of disease, robotics to facilitate operations in the brain and computers to refine M.R.I. images, but we have paid scant attention to the emerging science of cognitive psychology, which could help us explore how we make decisions.
This science has grown from the work of Amos Tversky and Daniel Kahneman, who some three decades ago began a series of experiments to examine how people make choices when they are uncertain. Economists have used their work to understand why people in the marketplace often make irrational decisions. People invest in a company because their relatives did in the past, for example, or they choose a fund manager simply because he outperformed the market two years in a row.
This growing body of research can illuminate many irrational aspects of medical decision-making, too. The snap judgments that doctors make, for example, can be understood as “anchoring errors”; the first symptoms anchor the doctor’s mind on an incorrect diagnosis. Doctors also fall into a cognitive trap known as “availability,” meaning that we too readily recall our most recent or dramatic clinical experiences and assume they correspond to a new patient’s problem.
We make “affective” errors, too, letting our feelings color our thinking. Such feelings may be drawn from stereotypes — the Connecticut pediatrician casting my wife as overanxious or my viewing my elderly patient as a chronic complainer — or they may be excessively positive. Too much empathy may keep a doctor from performing an uncomfortable procedure that is vital to making the correct diagnosis.
I have started teaching these concepts of cognitive psychology in continuing medical education courses, and recently used my misdiagnosis of the torn aorta to illustrate one of these common thinking traps. My wife, Pam, has introduced fourth-year medical students at our hospital to the cognitive detours doctors commonly take. But such instruction needs to be widespread. In classes and on hospital rounds, medical schools and hospitals should teach doctors why some diagnoses succeed and why others fail. And as part of the assessment of clinical competency for obtaining a license, doctors should be expected to demonstrate their fluency in the application of cognitive science, as they are required to do in other sciences.