IT’S ESTIMATED THAT UP TO 5% OF MEDICAL DIAGNOSES, affecting more than 12 million adults each year in the United States alone, are incorrect. Sometimes doctors don’t know what they are seeing. More often, they’re pressed for time and miss a key clue, or leap to a premature and overconfident conclusion.

It’s an all-too-human problem, and one that can be addressed with the help of computerized decision support systems: artificial intelligences that dispassionately consider relevant information and suggest what may be wrong with a patient. “The theoretical foundation is clearly there. It’s possible and it’s doable,” says Edward Hoffer, M.D., a medical informaticist at the Massachusetts General Hospital Laboratory of Computer Science.

Hoffer is one of the architects of DXplain, a medical diagnostic support program developed at MGH with original encouragement from the American Medical Association. First released in 1986, it’s now used in dozens of hospitals and medical schools around the world. Proto talked to Hoffer about DXplain and its medical possibilities.

Proto: How does DXplain work?
Edward Hoffer: We have created our own database of nearly 2,500 disease descriptions. For a given disease, we’ll list findings—clinical symptoms, physical findings, laboratory test abnormalities, imaging results—associated with it. Each finding has two numbers linked to it: the frequency with which it’s expected to be present, and an “Evoking Strength” rating that represents the finding’s importance in suggesting the disease. Both are rated on a scale of one to nine. Taking those numbers into account, the program analyzes the findings entered by a doctor and ranks the likely causes. In a close call, more life-threatening diseases are ranked ahead of less serious ones.
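
That description lends itself to a rough sketch. The Python below is an illustration of the general idea only, not DXplain’s actual scoring: each disease lists findings with a frequency and an evoking-strength rating on the one-to-nine scale Hoffer describes, entered findings add to a disease’s score, and a hypothetical severity rank stands in for the life-threatening tie-break. All diseases, findings, and numbers are invented.

```python
# Minimal sketch of finding-based diagnostic scoring. NOT DXplain's actual
# algorithm: the scoring formula, the severity tie-break, and every disease,
# finding, and number here are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class Disease:
    name: str
    severity: int  # hypothetical rank: higher = more life-threatening
    # finding -> (frequency, evoking_strength), both on a 1-9 scale
    findings: dict = field(default_factory=dict)


# Toy knowledge base; real entries would come from a curated database.
KNOWLEDGE_BASE = [
    Disease("disease A", severity=3,
            findings={"fever": (7, 4), "jaundice": (5, 8)}),
    Disease("disease B", severity=8,
            findings={"fever": (6, 3), "chest pain": (8, 7)}),
]


def rank_diagnoses(entered_findings, kb=KNOWLEDGE_BASE):
    """Score each disease against the findings a clinician entered and rank them."""
    scored = []
    for disease in kb:
        # Naive scoring: add frequency + evoking strength for every entered
        # finding the disease explains. Real systems weight this far more subtly.
        score = sum(
            freq + evoking
            for finding, (freq, evoking) in disease.findings.items()
            if finding in entered_findings
        )
        if score > 0:
            scored.append((disease, score))
    # Sort by score, highest first; exact ties go to the more severe disease,
    # a crude stand-in for ranking life-threatening causes first in a close call.
    scored.sort(key=lambda pair: (pair[1], pair[0].severity), reverse=True)
    return scored


if __name__ == "__main__":
    for disease, score in rank_diagnoses({"fever", "chest pain"}):
        print(f"{disease.name}: score {score}")
```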

Proto: How does your database grow and evolve?
EH: We depend largely on a combination of medical literature and, where the literature can’t help us, expert opinion. We scan over 100 medical journals regularly and search others. When there isn’t good information, we try to get a consensus from the experts. As the medical editors, the pediatrician Mitchell Feldman and I translate the vague descriptors used in textbooks—words like often, seldom and the like—into numbers. When necessary, we call upon specialists at MGH for their opinion.
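
As a purely illustrative example of that translation step, a hypothetical descriptor-to-rating table might look like the following in code; the specific values are assumptions for the sake of example, not the ratings DXplain’s editors actually assign.

```python
# Purely illustrative: a toy mapping from textbook frequency words to a
# one-to-nine frequency rating. The values are invented for this example.
DESCRIPTOR_TO_FREQUENCY = {
    "rarely": 1,
    "seldom": 2,
    "occasionally": 4,
    "often": 6,
    "usually": 8,
    "almost always": 9,
}


def frequency_rating(descriptor: str, default: int = 5) -> int:
    """Look up a textbook descriptor; fall back to a middling rating if unknown."""
    return DESCRIPTOR_TO_FREQUENCY.get(descriptor.lower(), default)
```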

Proto: So it’s not a comprehensive analysis of every study out there?
EH: Very little in the published literature is relevant to clinical diagnosis. Much of it is obscure research that may point to useful things in the future, but isn’t relevant now, or it’s a synthesis of other articles.

When you want to know how to diagnose pancreatic cancer and which findings may confirm the diagnosis, it’s irrelevant that there are 200,000 articles in PubMed talking about pancreatic cancer. What is relevant are the four articles from major medical centers reviewing their last 200 patients with pancreatic cancer. That’s what we seek out.

We don’t pretend we’re the font of all medical wisdom, that you can come to us with any question and we’ll answer it. But come to us with one of these 2,500 diseases, and we can do a pretty good job with that.

Proto: A lot of medical decision support systems have come online since DXplain broke new ground in 1986. But most doctors still don’t use them. Why is that?
EH: One of the biggest problems with diagnostic decision support is getting people to know they need it. There was a 2013 JAMA Internal Medicine article in which a series of case vignettes was presented to doctors, who diagnosed each case and indicated their confidence. For the fairly straightforward cases, doctors were correct about 55% of the time; their confidence was 7.2 on a scale of one to ten. For challenging cases, their accuracy plummeted to 6%, yet their confidence was still 6.4.

In other words, “I may not be right, but I’m usually certain.” That’s a problem. If someone is sure they’re right, why would they use diagnostic support? We see our future as integrating with electronic medical record systems, so DXplain can be lurking in the background, looking at every encounter and alerting the doctor.

Proto: Couldn’t these systems drive up medical costs? A wary doctor might order unnecessary tests to account for every suggested diagnosis.
EH: Medical professionals can filter the information they get. They won’t say, “These 12 diseases are listed, so I’ll do tests for all of them.”

Several years ago, a study at the Mayo Clinic compared medical residents who used DXplain with those who didn’t when considering challenging cases. Over a six-month period, when DXplain was used, the average length of stay was about a half day less, saving about $1,200. Having a better diagnosis made care more efficient. If your initial differential diagnosis list is comprehensive and contains the correct diagnosis, you waste less time and money pursuing dead ends.

Proto: Do DXplain and other diagnostic support tools reduce the role of doctors?
EH: No, because diagnostic decision support systems are tools, like a stethoscope, not “oracles.” A generation ago, Larry Weed taught us that in the modern era, it is critical that physicians not expect to carry all the information they need in their heads. We use all sorts of references all the time. If I prescribe a drug with which I am not completely familiar, I look up the proper dosing and possible interactions with other medications being taken. If the patient has a constellation of findings that are not absolutely clear-cut (and sometimes even if I think they are!), I should use a diagnostic support system to be sure I have not forgotten a disease that should be considered. These tools expand a physician’s capabilities; they do not replace the physician.