
Published On Jan 28, 2015

Clinical Research

Dr. Darwin

Can a refresher course in the laws of natural selection help doctors better understand human health and illness?

ANDREW READ RECENTLY SPENT SIX MONTHS on the wards at the University of Michigan Medical Center, observing doctors who treat infectious disease. He recalls one patient, a woman whose bacterial pneumonia resisted treatment by every antibiotic available, and who perished after 18 months. “She died of uncontrolled evolution,” says Read, an evolutionary biologist at Pennsylvania State University who studies antimicrobial resistance. “One of my colleagues said, ‘This was a failure of our science,’ and I agree entirely. We did not know how to slow evolution down.”

“Evolution,” in this case, refers to the refusal of harmful pathogens to sit still and wait to be destroyed by antibiotics or other drugs. They evolve to survive, and Read’s research suggests that resistance to antibiotics occurs because the standard approach to prescribing—administering high doses for long periods in hopes of wiping out every last infectious bug—creates an environment in which resistant strains that somehow do avoid destruction can reproduce freely. Harmful species, now well defended, bounce back to mount ever more potent attacks. That demonstrates the principles of natural selection that Charles Darwin spelled out in On the Origin of Species more than 150 years ago, and Read says that we ignore Darwin’s lessons at our great peril. “We’re picking a fight with natural selection,” says Read. “Going into that fight without Darwin is like going to the moon without Newton.”

Read is part of a growing community of scientists and physicians who believe that medical research and practice have largely ignored evolution, to the detriment of patients. While interest in applying Darwin’s ideas to medicine has waxed and waned almost since their introduction, many agree that the contemporary movement to promote the science of evolutionary medicine began with a paper by evolutionary biologist George C. Williams and psychiatrist Randolph M. Nesse, “The Dawn of Darwinian Medicine,” which appeared in The Quarterly Review of Biology in 1991. That article was followed by a popular and influential book by the same authors, Why We Get Sick: The New Science of Darwinian Medicine.

What Williams and Nesse meant by Darwinian or evolutionary medicine was the systematic application of the principles of evolution—competition among organisms, with those best suited for survival living to pass along their genes—to problems in medicine. This approach involves looking both at the human organism as a product of evolution, and at a host of cellular communities in which evolutionary forces are constantly at work.

In their landmark book, Williams and Nesse posed a question that has puzzled scientists and philosophers alike since Darwin’s day: Why has evolution left the human body so vulnerable to disease? Logically, natural selection would favor the hardiest among us, systematically eliminating genetic tendencies to have heart problems, cancer or neurodegenerative disorders. But that hasn’t happened, and building on the work of biologists such as Paul Ewald and Richard Dawkins, the authors of Why We Get Sick wrote that their goal was to challenge “the common view that evolution tends toward a world of health, harmony and stability.” Instead, they argued, natural selection preserves genes that increase reproductive success, even if some of those very genes may make us susceptible to disease.

Two decades later, interest in evolutionary medicine is flourishing. Nesse was recently named head of the new Center for Evolution & Medicine at Arizona State University. (Williams, of New York’s Stony Brook University, died in 2010.) Other centers for the study of evolution and medicine have sprung up at UCLA, Temple University, the University of California at San Francisco, and other institutions in the United States and abroad. Several new journals now cover the field, with Oxford University Press publishing Evolution, Medicine, and Public Health. And the International Society for Evolution, Medicine, & Public Health will hold its inaugural meeting in Tempe, Ariz., in March 2015.

Some skeptics wonder what all the fuss is about, insisting that the theory of evolution already informs much of modern medicine and questioning whether greater emphasis on Darwinian principles will result in better treatments. But Read, Nesse and others argue that if medical students learn more about evolution and researchers start thinking more like Darwin, then answers to some of medicine’s intractable problems might emerge more rapidly, especially in treating infectious diseases, cancer and autoimmune disorders.

ON THE ORIGIN OF SPECIES, WHICH would emerge as one of the most influential books ever published, was an immediate hit when it appeared in 1859. Physicians, among others, were intrigued by Darwin’s theory of natural selection, which suggests that traits in an organism that promote survival and reproduction tend to be preserved (or “selected”) and passed along to succeeding generations, while less useful traits are likely to disappear. 

Darwinism in medicine fell out of favor in the mid-twentieth century for a variety of reasons, among them the discovery of DNA and the rise of molecular and cell biology, says Yale University evolutionary biologist Stephen Stearns, editor-in-chief of the journal Evolution, Medicine, and Public Health. Biologists began to focus on the features common to all living processes and were less interested in the variations produced by evolution that set organisms apart, says Stearns. Molecular and cell biologists received prizes and funding, he notes, while evolutionary biology “wasn’t seen as important or having as much of a payoff.”

Of course, that doesn’t mean Darwin’s insights are entirely missing from medicine. Today’s focus on genomics and the crucial role of gene variations on sickness and health exists in the context of species that are constantly being subjected to the forces of natural selection. “Evolution is implicit in a lot of biomedical research,” says evolutionary biologist Irene Eckstrand, outgoing director of research grants in population genetics and evolutionary biology at the National Institute of General Medical Sciences. Using mice or bacteria as model organisms to study disease, for example, acknowledges that all living organisms share genetic similarities because of a common ancestry. Moreover, Eckstrand notes, researchers have long used branching evolutionary “trees,” or phylogenies, to study historical relationships within families or across populations and search for genes that underlie disease. “Understanding how those genes evolve in humans informs biomedical science,” says Eckstrand.

Yet medical schools don’t require students to study the specific ways in which natural selection continues to shape humans and all other species, and that leaves doctors at a disadvantage, says Nesse. He became interested in evolution as an undergraduate at Carleton College in 1967, an interest that eventually led him to Williams. The two men became collaborators and, in their papers and in Why We Get Sick, they offer several explanations for evolution’s role in modern illnesses.

For example, one familiar theory that has transcended medicine and seeped into mainstream culture argues that humans can’t evolve fast enough to keep up with rapidly changing environments. According to the so-called mismatch theory, we’re saddled with Stone Age bodies designed to hunt and gather our food, then store calories efficiently in case we run out. Yet today most of us live in worlds with automobiles, desk jobs and 24-hour supermarkets. Darwinian thinkers argue that the incongruity between our bodies and our environment helps explain rising rates of many modern maladies, including obesity, type 2 diabetes and heart disease. (The mismatch theory also spawned the popular Paleo diet, which preaches the health benefits of eating like our distant ancestors, eschewing sugar, refined carbohydrates and other modern dietary staples in favor of lean protein and produce.)

A related theory, the hygiene hypothesis, is built on the idea that humans evolved with guts full of worms and a diverse population of bacteria. As a result, we developed powerful innate defenses to manage these pathogens. Cleaner water and living conditions have eliminated exposure to many of these bugs, but our defense systems remain vigilant and sometimes overreact, which may explain the existence of asthma, allergies and a range of autoimmune disorders. Some evidence suggests that a return to our dirty old ways could help some patients with hard-to-treat conditions. For instance, a preliminary University of Wisconsin study suggests that giving patients the eggs of helminths, or intestinal worms, may improve brain MRI scans in some patients with multiple sclerosis. Helminth therapy has been investigated for other autoimmune disorders, such as inflammatory bowel disease.

But all organisms, whether in temporal sync with their environment or not, face another unfortunate by-product of evolution: genetic compromises that ultimately increase risk for particular diseases. That is, certain genes that can make you sick may not have been eliminated from the human species by natural selection because they also confer some benefit. The classic example of such a trade-off is the gene for sickle cell anemia, which also protects against malaria. Other hypothesized trade-offs are more controversial. For example, a 2012 study found that women with the BRCA1 gene mutation, which increases the risk of breast cancer, had more children than other women, suggesting that the mutation somehow conferred greater reproductive success—the result of evolution. That finding conflicts with other research, however, and there are other potential explanations for the relationship.

SOME EVOLUTIONARY-MINDED scientists believe trade-offs may explain why cancer exists in the first place. “Cancer is a problem of multicellularity,” says Athena Aktipis, director of human and social evolution at the Center for Evolution and Cancer at the University of California at San Francisco’s Helen Diller Family Comprehensive Cancer Center. To evolve from single-cell organisms, larger creatures required cells that can proliferate, differentiate into other types of cells, and move about the body. Unfortunately, those are all characteristics inherent to cancer cells too. Multicellular organisms evolved systems to suppress uncontrolled cell growth and movement, says Aktipis, but that task becomes more challenging in large animals with complex biologies.

Aktipis is particularly interested in the significance of heterogeneity within tumors—that is, the observation that some cancerous tumors contain a diverse collection of cells, each competing to proliferate and survive within the tumor’s own microenvironment. “Tumors with more diversity have more variation in the population, meaning they’re probably more evolvable,” she says. Having more kinds of cells also makes a tumor more likely to be aggressive: if one kind of cell is deterred by the body’s defenses or by attacks from cancer drugs, others may escape those countermeasures and proliferate. She and several colleagues are embarking on a study to determine whether cell diversity in breast cancer can be used to predict whether a tumor will progress, and thus guide therapeutic decisions.

Diversity also creates competition among species, an essential concept in evolutionary thought. But doctors must weigh how their therapies intervene in these cellular struggles for dominance, and one problem with the current approach to treating cancer is that it interferes with competition among tumor cells, says radiologist Robert Gatenby of the Moffitt Cancer Center in Tampa. Gatenby began thinking about how evolutionary principles might be applied to cancer therapies in the early 1990s, but had a breakthrough several years ago while reading about the diamondback moth, a common crop pest. Farmers discovered that trying to wipe out this bug was fruitless; there were always some moths resistant to insecticides, which survived, propagated and resumed destroying crops.

Gatenby says the same thing occurs in malignant tumors, which are typically made up of some cells that are destroyed by chemotherapy and others that resist treatment. That suggested to him that simply killing all sensitive tumor cells was the wrong approach. “You’re maximizing the ability of the resistant cells to proliferate and repopulate the tumor,” says Gatenby. His idea: Instead of eradicating tumors, stabilize them. Use an adequate amount of chemotherapy to kill some sensitive cancer cells, but leave behind enough to compete with the resistant cells, an idea he calls adaptive therapy.

Gatenby tested this strategy in mice with ovarian cancer. He gave one group a regimen of chemotherapy calibrated to mimic standard treatment in humans. A second group got lower doses, which were adjusted as necessary to maintain the tumor at a stable size, not growing or shrinking. Tumors treated with standard therapy shrank at first, but they then grew back and the mice died. Mice treated with adaptive therapy fared much better. “We can actually maintain them indefinitely,” says Gatenby, “and we find we can do that with progressively lower doses of drugs.”
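The dosing logic described above can be sketched as a toy simulation. Everything here is an assumption for illustration (the model, the parameters and the dosing threshold are invented, not taken from Gatenby’s work): two cell populations share one carrying capacity, the drug kills only drug-sensitive cells, and the adaptive policy doses only when the total tumor burden rises past a set size.

```python
# Toy model of tumor-cell competition under two chemotherapy policies.
# Sensitive (S) and resistant (R) cells share one carrying capacity;
# the drug kills only sensitive cells. All numbers are invented for
# illustration -- this is not Gatenby's actual model.

def simulate(policy, steps=300, dt=0.1):
    S, R = 900.0, 10.0      # resistant cells start rare
    K = 1000.0              # shared carrying capacity
    r_s, r_r = 1.0, 0.3     # resistance carries a growth cost
    kill = 2.0              # drug kill rate on sensitive cells
    for _ in range(steps):
        dose = policy(S + R)
        crowding = 1.0 - (S + R) / K
        S += dt * (r_s * crowding * S - kill * dose * S)
        R += dt * (r_r * crowding * R)
        S, R = max(S, 0.0), max(R, 0.0)
    return S, R

def max_dose(total):
    """'Firebomb' strategy: always treat at full dose."""
    return 1.0

def adaptive(total):
    """Adaptive strategy: treat only when the tumor grows past 800 cells."""
    return 1.0 if total > 800.0 else 0.0

S1, R1 = simulate(max_dose)
S2, R2 = simulate(adaptive)
```

In this sketch the always-dose policy wipes out the sensitive cells, after which the resistant population expands unopposed toward the carrying capacity; the adaptive policy leaves sensitive cells alive to crowd out the resistant ones, keeping the tumor stable, which echoes the stable mouse tumors described above.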

Gatenby is preparing to test his adaptive strategy in a group of men with advanced prostate cancer, using the drug abiraterone acetate (Zytiga). Abiraterone shrinks tumors, but they grow back in a year or two, and men then require chemotherapy. Gatenby plans to give men just enough abiraterone to stabilize their tumors and see whether this strategy provides a longer period before their cancer progresses. Gatenby believes adaptive therapy could give patients with some cancers extra years and perhaps turn other forms of the disease into manageable chronic conditions.

Read thinks that adopting a similar approach to antibiotic treatment could help reduce the number of Americans who die from drug-resistant infections annually—which is currently as high as 100,000, he says. Read takes particular issue with the standard advice that patients should finish the entire course of medication their doctor prescribes, even if they feel better after a few days of treatment. Read says that this admonishment originates with none other than Alexander Fleming, discoverer of penicillin, who said in his Nobel Prize lecture, “If you use penicillin, use enough.”

The problem with this aggressive approach is that untreated infections often feature both drug-resistant and drug-sensitive bugs, which compete with one another. Prior to treatment with antibiotics, Read explains, drug-sensitive germs help keep some of their drug-resistant counterparts in check. “But if you hit the infection with high levels of drugs—really firebomb it”—and thus kill off all of the drug-sensitive bugs—“then any resistant bugs that can survive suddenly have the world to themselves,” he says. “You just got rid of all their competitors. You took away the thing that was keeping the resistant bugs under control.”

In a 2013 study published in PLoS Pathogens, Read and colleagues injected mice with malaria parasites. They then treated some mice aggressively, giving them antimalarial drugs for five to seven days, while others got what Read called a “light touch”: just one day of treatment. In the end, the aggressively treated mice were 162 times more likely to transmit resistant parasites, yet there was no difference in the health of the mice in the two groups. Read believes his paper offers some of the first hard data to question whether eliminating all pathogens as fast as possible should always be the goal of antimicrobial therapy.

Convincing doctors to leave behind some infectious bugs for the sake of healthy evolutionary competition will take some doing, however. Henry Chambers, chief of the infectious disease division at the University of California at San Francisco and chair of the Infectious Diseases Society of America’s antimicrobial resistance committee, notes that patients with tuberculosis, for example, often feel better after two months of antibiotic treatment, but recent research indicates that it takes at least six months of therapy to eradicate the disease. For many other diseases, however, far less is known about the minimum required course of antibiotics. “The reality is that we don’t know how long to treat many infections,” says Chambers. “Many, I know, we overtreat.” With few new antibiotics in the drug pipeline, Read says there’s an urgent need to discover the minimum necessary course of treatment for all common infections. Doing so, he believes, will reduce the selective pressure that antibiotic therapy exerts on pathogens, a pressure that favors the most drug-resistant bugs.

IN 2009, THE ASSOCIATION OF American Medical Colleges recommended that premedical students be able to “Demonstrate an understanding of how the organizing principle of evolution by natural selection explains the diversity of life on earth.” In addition, a few years ago, Nesse joined a working group of like-minded colleagues from around the nation formed to develop new ways of “infusing medical education with evolutionary thinking.”

The group has made progress, but Nesse is not optimistic that medical schools will soon require students to study evolution. “Most medical education authorities have told us there’s no way to include this well-established science in the curriculum as its own course,” says Nesse. Medical students are already overburdened, deans tell him, and Nesse wonders whether some school officials worry that adding evolution to the curriculum would anger donors who don’t believe in it. (According to a 2014 Gallup poll, 42% of Americans “believe that God created humans in their present form 10,000 years ago.”) To fill the gap, Nesse and his colleagues are creating brief online courses that medical schools can use to provide an evolutionary perspective on specific medical topics. When studying difficulties in childbirth, for instance, students would learn why women evolved to have narrow birth canals. When studying the cardiovascular system, students could be briefed on how our modern environment increases the risk for heart disease.

Nesse says that talking about evolutionary causes of disease with patients he has treated for psychiatric disorders over the years has made him a more effective doctor. For example, he used to tell patients with panic disorder that the thumping in their chests wasn’t a sign of heart disease. “Then they would ask if I knew a good cardiologist,” he says. “They weren’t convinced.” So he started explaining to patients that humans evolved with a fight-or-flight system that is essential in the event of danger, but that the system can trigger false alarms and produce symptoms such as a pounding heart. “Patients would say, ‘Oh, that makes sense,’” says Nesse. In fact, about one in five who got the message no longer needed medication to control their panic and other symptoms. Simply understanding evolution and its impact on the human body, it seems, was the only cure they needed.



1. “Evolutionary Medicine: Its Scope, Interest and Potential,” by Stephen C. Stearns, Proceedings of the Royal Society B, September 2012. A professor of evolutionary biology at Yale explains how Darwinian concepts complement molecular and cell biology.

2. Why We Get Sick: The New Science of Darwinian Medicine, by Randolph M. Nesse and George C. Williams (Crown, 1995). In a brisk 291 pages, the authors make the case for bringing Darwin’s theories into the clinic and lab.

3. VIDEO: Researchers Athena Aktipis and Andrew Read discuss how evolution makes cancer a challenge to defeat.  
