Published on July 23, 2010
TO NONMATHEMATICIANS, THE WORD ALGORITHM (from a latinized version of the name of the ninth-century Persian astronomer who wrote a treatise on calculation) may seem arcane and off-putting, its definition difficult to pin down. Yet the thing itself, if not the term, pops up everywhere. Across the spectrum of human activity, algorithms run vital decision-making processes behind the scenes. If you’ve taken out a home mortgage, an algorithm was applied to your financial records to determine how to price your loan. If you were stranded after the eruption of Iceland’s Eyjafjallajökull volcano, algorithms were responsible for the rerouting of thousands of planes and crews to get you home. If you own a Volvo S60 sedan, algorithms are used to scan for pedestrians in your path and hit the brakes even if you don’t. In every modern industry, including medicine, algorithms rule.
An algorithm is any step-by-step procedure that applies rules to data, producing an unambiguous solution to a problem, and there is now a vast universe of clinical examples. The Medical Algorithm Project (MedAL), which stores peer-reviewed algorithms in an online database, contains more than 14,400. These tools can help physicians make diagnoses, choose treatments, calculate dosages, predict outcomes and monitor side effects. More are being developed every day.
Like their counterparts in mathematics, medical algorithms take myriad shapes. They can look like equations, scales, truth tables, checklists, scoring systems or decision trees. The simplest are performed with pen and paper, and the answers they provide may seem intuitive, something experienced physicians might come up with on their own, at least when dealing with familiar conditions. The widely used body mass index calculation, for instance, uses a straightforward ratio—mass in kilograms divided by the square of height in meters—to produce a number that physicians can use to see where a patient falls in a range from dangerously thin to morbidly obese.
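The BMI calculation is simple enough to write down directly. A minimal sketch in Python, using the formula given above; the category cutoffs follow the standard WHO bands, which are an assumption not spelled out in the text:

```python
def bmi(mass_kg: float, height_m: float) -> float:
    """Body mass index: mass in kilograms divided by the square of height in meters."""
    return mass_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Standard WHO cutoffs (an assumption; the article gives only the formula)."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"
```

For a 70-kilogram patient who is 1.75 meters tall, `bmi(70, 1.75)` comes to about 22.9, which falls in the normal range.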
Other clinical algorithms, however, are more complex and can help specialists keep up with a knowledge base that’s expanding exponentially. These formulas are computerized and often sift huge amounts of data and alternative approaches before reaching conclusions. For example, the algorithms that drive automated external defibrillators analyze the pattern of a patient’s heart rhythm to determine the number and strength of shocks required to restore normal functioning.
But simple or complicated, and despite their proliferation in textbooks, journals and, increasingly, electronic databases, most formal algorithms don’t get used. To critics such as Herbert L. Fred of the University of Texas Health Science Center, that’s a good thing. Fred, a professor of internal medicine, has written that algorithms lead physicians to interact with numbers, not patients, and has urged medicine to “give algorithms back to the mathematicians.” But advocates, including John Svirbely, medical director of laboratories at the McCullough-Hyde Memorial Hospital in Oxford, Ohio, and co-founder of MedAL, argue that algorithms save time, money and lives—or would, if they were integrated into everyday practice.
THERE’S NOTHING NEW ABOUT THE NOTION OF A CLINICAL ALGORITHM. Hippocrates himself, some 2,500 years ago, devised a systematic protocol for diagnosing and treating head injuries. And physicians constantly use algorithm-like mental processes that lead systematically to solutions. In a 2005 article in the British Medical Journal, researcher Christopher Gill argues that physicians are natural Bayesian statisticians who apply elaborate mathematical reasoning in making decisions. As they collect findings from interviews and examinations, Gill points out, physicians use each piece of information to refine probability estimates—a process that helps them calculate, for instance, the odds that a patient with a fever may have a rare viral infection rather than a common cold.
“If you decide to prescribe drug A rather than drug B, you’ve done so based on some calculation,” says David Osser, an associate professor of psychiatry at Harvard University who has developed several psychopharmacology algorithms. “But ask doctors to expose their thinking, and it may not be based on the most rigorous analysis of the pertinent information. Maybe drug A worked on another of your patients—or maybe you heard about it yesterday from a drug company guy.”
Formal algorithms are designed to supply that missing logical link. They are created by teams of experts based on explicit reasoning (rather than intuition), establish a record that can be checked (unlike mental judgments) and rely on the analysis of years of research data (not personal experience).
Consider an ER patient with chest pain. That could indicate the early stages of a heart attack, in which case timely diagnosis and treatment are essential—or it could be nothing, in which case aggressive action would be unnecessary and potentially harmful. The physician on duty can order an electrocardiogram and cardiac enzyme tests, but the results may not be definitive. It’s left to the physician to make the call—and sometimes she’ll be wrong. “It happens all the time,” says Mark Graber, chief of medicine at the Veterans Affairs Medical Center in Northport, N.Y. “We use our intuition, send patients home, and over the next several days, some have heart attacks.”
As an alternative, the ER physician could use a 30-year-old computer-driven algorithm known as the Goldman Cardiac Risk Index. The doctor feeds in data about a patient’s history, symptoms, medical condition and test results, then the algorithm produces a point score quantifying the risk of death from heart-related causes. According to one study, this algorithm reduces unnecessary hospital admissions by 16.5%—and without missing any patient who’s actually having a heart attack.
Sriram Iyengar, an assistant professor of health information sciences at the University of Texas Health Science Center in Houston and Svirbely’s partner on the MedAL project, says algorithms don’t just aggregate variables but also offer precision. “Saying energy and mass are connected is different from saying E=mc²,” Iyengar says. “The latter is an exact relationship.” By the same token, knowing that age, body surface area, weight and height all influence the time it takes for a drug to filter out of the body is not the same as using an algorithm to predict a specific clearance time based on those factors.
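The article doesn't name a particular clearance formula, but the Cockcroft-Gault estimate of creatinine clearance, long used to adjust drug doses for kidney function, is a representative example of turning patient factors into an exact number:

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance in mL/min (Cockcroft-Gault formula).

    CrCl = (140 - age) * weight / (72 * serum creatinine), times 0.85 for women.
    """
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl
```

For a 60-year-old, 70-kilogram man with a serum creatinine of 1.0 mg/dL, the formula yields about 78 mL/min, the sort of specific prediction Iyengar contrasts with merely knowing that the variables "are connected."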
One startlingly powerful algorithm can break down electroencephalographic data from epileptic patients and determine the probability of a seizure occurring during the next few hours; another can scan a kidney exchange market of 10,000 people and assemble an intricate chain of compatible donors and recipients. Some algorithms even incorporate information from patients’ genetic profiles. For example, it’s notoriously difficult to determine the ideal dose of the common anticoagulant warfarin for a particular individual. The proper amount may vary by as much as a factor of 10, and nearly half of patients do better on a dose that’s significantly higher or lower than the standard amount. What’s more, the stakes are high, with the wrong choice possibly resulting in life-threatening bleeding or blood clots. One algorithm matches new patients against thousands of existing patients known to have received ideal doses. The formula produces personalized recommendations that reduce the average size of dosing errors by nearly 35%.
TO CREATE THE WARFARIN ALGORITHM, RESEARCHERS APPLIED A HOST of tongue-twisting statistical methods—numerical support vector regression, regression trees, model trees and multivariate adaptive regression splines—to model the relationships among nearly 40 demographic, clinical, phenotypic and genetic characteristics. They then used the models to build a predictor that employs variables to calculate a patient’s ideal dose. Such sophisticated number crunching can’t be performed inside physicians’ heads—which are, in any case, stuffed with more and more facts as medical science moves forward.
Remembering everything may be a particular problem for generalists, who must try to diagnose and treat patients across a spectrum of conditions, some of which are rare. But because algorithms aggregate large amounts of information, they can be very powerful in dealing with such unusual phenomena. Consider a tool that evaluates the results of a hemoglobin electrophoresis blood test, commonly done to test for sickle cell anemia. In a tiny percentage of patients, the test will show elevated levels of a key protein—yet that finding doesn’t necessarily mean the patient has the disease. A primary care physician who had never encountered that result, or who couldn’t remember what it meant, would likely order a consultation with a hematologist. But with the algorithm, he could simply enter the protein level and responses to two yes-or-no questions (such as whether the protein is distributed evenly across all blood cells and whether the patient has symptoms of anemia) and receive a definitive answer about the patient’s status.
If an algorithm allowed a primary care physician to avoid an expensive referral, that cost savings would never enter into the medical record. In terms of reducing the high U.S. rate of medical errors, however, the effect of algorithms may be easier to measure. For example, drug-related mistakes in hospitals cause preventable injuries in 1.5 million Americans each year and cost $3.5 billion annually in unnecessary medical costs, according to a 2006 Institute of Medicine report. Research has shown that computerized physician order entry (CPOE) systems that use built-in algorithms to calculate dosages and check for drug interactions can cut the rate of all medication errors by 83% and serious errors by 55%. Yet according to a recent survey by KLAS, an independent health care technology research group, only 17.5% of U.S. hospitals with 200 or more beds use CPOEs, which require doctors to enter treatment orders into a computer system linked to electronic medical records.
ONE REASON MORE HOSPITALS DON’T ADOPT SUCH TECHNOLOGY is that it costs tens of millions of dollars to convert to electronic health records and institute a CPOE system. But another is physician resistance. In part, it’s a matter of inconvenience, and to breach this barrier, the VA worked with Svirbely and Iyengar to place a direct link to the MedAL database within its internal patient record system. “Two clicks and I’m there,” Graber explains. “But there are still doctors who say that’s one click too many.”
If physicians got into the habit of using algorithms, they potentially could save time, but that tends to be a hard sell. It’s also difficult to dispel the notion that using algorithms amounts to “cookbook medicine.” According to Carl Salzman, a professor of psychiatry at Harvard: “An algorithmic approach says that if you have a symptom from column A, you treat it with a drug from column B. The patient becomes nothing more than the sum of her test results.”
Salzman describes a patient who was given more and more medications for bipolar disorder because his symptoms, plugged into an algorithm, suggested those drugs. But the treatment only made the patient suicidal. If the psychiatrist involved had taken other factors into account, Salzman says, such as the patient’s family background and social life, he would have realized that the algorithm-derived diagnosis was wrong.
Part of the problem, adds Salzman, is that algorithms may be based on flawed or insufficient evidence. Consider, for instance, the Wells rule, an algorithm that calculates a risk score for patients whose symptoms suggest they could have the potentially fatal condition deep vein thrombosis. But an evaluation of Wells found that as many as 12% of the patients the algorithm slotted into the lowest risk category actually turned out to have DVT. The rule was based on data collected from patients in outpatient clinics only, and the sample included few elderly patients, women or patients who had undergone any previous surgery.
Perhaps the biggest knock on algorithms is that there are so many available, making it difficult to separate the wheat from the chaff. The MedAL database contains no fewer than 10 algorithms to differentiate iron deficiency from the inherited blood disorder thalassemia minor, 15 to predict survival rates for patients with severe renal failure, and 56 to estimate blood loss and future blood transfusion needs for an individual patient. To be sure of using the best tool for a given task, a physician would have to read through the original papers in which each algorithm was published and seek out additional references attesting to its validity.
BUT WHAT IF A HOSPITAL, SAY, DEEMED PARTICULAR TOOLS TO BE SAFE AND EFFECTIVE? At Massachusetts General Hospital, each time physicians order a radiological imaging test like a CT or MRI scan, they’re prompted by a computer to enter information about a series of clinical criteria. An algorithm aggregates the criteria and determines whether the circumstances warrant the scan in question. Then the algorithm literally gives the doctor a red (1–3 points), yellow (4–6 points) or green light (7–9 points) and, in the case of a low score, offers alternative procedures.
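The red, yellow and green lights are a simple banding of the aggregate score. A sketch of that final step, taking the 1-to-9 score as given (how MGH's algorithm assigns the underlying points is not described in the article):

```python
def utility_light(score: int) -> str:
    """Map a 1-9 appropriateness score to an MGH-style traffic light."""
    if not 1 <= score <= 9:
        raise ValueError("score must be between 1 and 9")
    if score <= 3:
        return "red"     # low utility: prompt the doctor with alternatives
    if score <= 6:
        return "yellow"  # intermediate utility
    return "green"       # circumstances warrant the scan
```

The virtue of the design is that the physician sees a single unambiguous signal rather than a raw score that requires interpretation.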
Physicians can override the algorithm to proceed with a low-rated scan, but they have to enter additional information justifying their choice. The MGH has studied the benefits of its radiology algorithm, which was implemented in 2004, and it seems to be working. The percentage of CT scans that show no significant clinical findings has been reduced, and more scans now turn up important data.
Yet there are relatively few places where fully integrated algorithms have become the rule. Even the VA, which has enthusiastically adopted algorithm-based electronic support systems, leaves it up to physicians whether to use a particular algorithm. How many VA doctors use algorithms on a daily basis? “I have a hunch it’s the minority,” says Graber. “The reason algorithms work so well is that they average out huge amounts of patient data. But as physicians, we tend to think we’re wiser than a rule.”
To overcome such barriers, algorithms will have to be fully integrated into the everyday practice of health care, suggest Svirbely and Iyengar. Every doctor would carry a smartphone or some other hand-held device that was directly connected to both an electronic medical records system and a carefully vetted database of algorithms. Then each time the doctor saw a patient, got back a lab result or had to deal with an adverse drug reaction, the device would query the database and choose the most relevant calculation. A doctor would only have to look down at the flashing device, then accept or override the result.
In such a world, proponents say, algorithms would reach their potential to save time, money and lives because they would blend seamlessly into the background. “If you mention the word algorithm, it pushes people’s buttons,” says Svirbely. “If an algorithm is running automatically, no one thinks twice about it.”
If the history of algorithms in other industries is any guide, that assimilation may be inevitable. Not long ago, it would have seemed shocking to cede control of an airplane cockpit to anything other than a human operator, yet today it’s algorithms, as much as pilots, that keep us safe. Ultimately, medicine’s choice isn’t between algorithms on one hand and physician judgment on the other, but between different ways of responding to the inexorable forward march of algorithms—resisting all the way, or carefully taking charge of these powerful tools.
“Why Physicians Do Not Follow Some Guidelines and Algorithms,” by David N. Osser, Drug Benefit Trends, Dec. 6, 2009. Osser explores the reasons behind the pervasive neglect of psychopharmacological algorithms, including issues of work flow and time management, and a preference for relying on personal clinical experience.
“Thinking About Diagnostic Thinking: A 30-Year Perspective,” by Arthur S. Elstein, Advances in Health Sciences Education, September 2009. The author reviews several types of diagnostic errors caused by faulty clinical reasoning and argues that algorithms can help remedy deficiencies in human judgment.
“The Limited Role of Expert Guidelines in Teaching Psychopharmacology,” by Carl Salzman, Academic Psychiatry, June 2005. Salzman reviews formal decision-making tools in psychopharmacology, arguing that algorithms overemphasize new drugs and lack utility in diagnosing patients with complex symptoms.