FOR THOSE WHO DREAM OF A FUTURE without animal testing, the Jackson Laboratory, in Bar Harbor, Maine, may seem an unlikely place for this story to begin. Jackson is home to legions of mice, many of them genetically altered to exhibit particular traits that make them ideal research subjects. And Gary Churchill, a statistician-turned-biologist who serves as one of the laboratory’s principal investigators, spends much of his time devising experiments for the sometimes odd-looking creatures.

For example, there was a 2005 Jackson study involving a long, lean, muscular mouse—affectionately named Adonis—and a short, round mouse that looked rather like an ottoman. Wondering why only some people on a particular diet become obese, Churchill crossbred several hundred pairs of Adonises and ottomans to produce offspring with a mixed bag of body types and cholesterol profiles. He fed the offspring a buttery diet; tracked each mouse’s weight, muscle mass and bone density; and noted where on its body pockets of fat accumulated. Then he used computers to scan the mice’s genomes and scour their genetic variations in search of combinations that might act as an adiposity factor, or fat factor, predisposing some animals to become obese.

The computer model then looked for causal connections and interactions among these genetic patterns and the mice’s physiological traits. Did a single genetically programmed characteristic, such as the length of a leg bone, determine muscle mass or another trait related to body weight, or were other factors involved? Was the influence of one trait on another a one-way street, or did it work in the other direction as well? The computer displayed the answers graphically, with arrows indicating all links and bolder arrows representing the strong connections. The analytical model further untangled those correlations and identified a few genes that seemed to determine which mice would become fat, independent of body type.
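For readers who want a feel for what such a genome scan involves, here is a minimal sketch in Python. It uses invented genotypes and a single simulated obesity-linked marker, and it simply regresses fat mass on each marker in turn; the interaction and causal-network modeling described above goes well beyond this.

```python
# A toy single-marker genome scan, loosely in the spirit of the study described
# above. All data here are simulated; the actual analysis used far richer models
# that also capture gene-gene interactions and causal networks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_mice, n_markers = 300, 50

# Genotypes coded as 0, 1 or 2 copies of a hypothetical "Adonis" allele
genotypes = rng.integers(0, 3, size=(n_mice, n_markers))

# Simulated fat mass: only marker 17 has a real effect; the rest is noise
fat_mass = 5.0 + 1.2 * genotypes[:, 17] + rng.normal(0, 1.5, n_mice)

# Regress the trait on each marker; small p-values flag candidate loci
scan = []
for m in range(n_markers):
    slope, intercept, r, p, se = stats.linregress(genotypes[:, m], fat_mass)
    scan.append((m, p))

for marker, p in sorted(scan, key=lambda pair: pair[1])[:3]:
    print(f"marker {marker}: p = {p:.2e}")
```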

Could this still hypothetical adiposity factor predict obesity in mice—and perhaps someday in humans? Investigating that question is on Churchill’s agenda, and it will require more animal experiments—and a lot more mice. “You have to keep testing your model in mice,” Churchill says. “When you find a gene, you have to know its context in the body. What does a gene that helps clear cholesterol from the liver do in other organs? And what are the effects of environment and diet? You need animals to learn that.”

It’s Churchill’s approach to testing, which employs sophisticated computer models as well as animals, that could well become the norm for biomedical research in decades to come. And for those who dislike animal testing, that’s not the worst-case scenario.


THE KILLING OF MILLIONS OF MICE and other laboratory animals—cats, dogs, frogs, birds and monkeys—is controversial, to say the least. Many who find the practice abhorrent contend that it’s unnecessary. They point to advances in testing drugs and other products without the use of animals as indications of what could be. A determined quest for alternative ways to gauge toxicity, allergies and drug interactions has eliminated virtually all animal testing for cosmetics. Now, instead of dropping a dollop of shampoo in a rabbit’s eye to check for an allergic reaction, the shampoo goes into a dish containing cultured human cells or artificial skin tissue. And during preliminary drug development, researchers may feed a compound’s chemical makeup into a computer that crunches data about how the body metabolizes compounds and predicts whether a drug would be toxic to the liver or would interact with other drugs.
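The kind of in silico prediction described above can be sketched, very roughly, as a classifier trained on molecular descriptors. The example below is a toy: the descriptors, compounds and outcomes are all invented, and real predictive-toxicology systems rely on large curated chemical databases and far more sophisticated models.

```python
# Toy sketch of structure-based toxicity prediction: fit a classifier on a few
# invented molecular descriptors and use it to score a new, untested compound.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: molecular weight, logP (fat solubility), number of aromatic rings
descriptors = np.array([
    [180.2, 1.2, 1],   # hypothetical compound A
    [451.6, 4.8, 3],   # hypothetical compound B
    [302.3, 0.4, 0],   # hypothetical compound C
    [389.9, 5.1, 2],   # hypothetical compound D
])
liver_toxic = np.array([0, 1, 0, 1])  # outcomes from earlier (invented) testing

model = LogisticRegression().fit(descriptors, liver_toxic)

# Estimated probability that a new compound would be toxic to the liver
new_compound = np.array([[410.0, 4.5, 2]])
print("predicted toxicity risk:", model.predict_proba(new_compound)[0, 1])
```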

Those methods have fueled expectations that in vitro testing and computer models could also reduce the need for animals in studying diseases. With ever greater power and speed, computers should be able to sort through bewildering mazes of data about genetic and environmental factors to help determine how their effects in one organ influence other physiological systems.

“The sticking point is that, for many human diseases, we just don’t have enough information to feed into computer models,” says Rakesh K. Jain, a cancer biologist at the Massachusetts General Hospital. Jain trained as an engineer, and like Churchill, he is at the forefront of efforts to use analytical means to understand human disease. Jain has alternated between computer modeling and animal experiments in developing new strategies, including one, now in several human clinical trials, for delivering chemotherapy to tumors more effectively. But he notes that most modern killers—cancer, heart disease, diabetes and neurodegenerative diseases, not to mention neuropsychiatric disorders—involve still undeciphered interactions of multiple genes with the environment, life experiences and behavior. Each factor by itself may have just a small effect, and making sense of it all is extraordinarily difficult. Even relatively straightforward genetic diseases resist analytical modeling.

Take cystic fibrosis, a genetic disease that creates a life-threatening accumulation of mucus in the lungs and is known to result from a single mutation in a single gene. Researchers have managed to insert that gene into mice to make a living, breathing model of a human disease that mice would otherwise never acquire. That, in turn, has let the scientists tease out how the genetic mutation leads to the production of a protein that causes the physiological defects of the disease.

Theoretically, a computer model should be able to process this genetic information and predict what will happen to a child born with cystic fibrosis. But even this disease defies modeling, Churchill says. One child with cystic fibrosis may die in infancy, while another becomes a competitive gymnast, and no one yet understands what other factors account for the difference.


THE MOUSE MODEL FOR CYSTIC FIBROSIS, as with models for many diseases, owes its existence to a technique called gene targeting, which was developed in the 1980s by Mario Capecchi, a professor of human genetics and biology at the University of Utah who won the 2007 Nobel Prize in Physiology or Medicine for his work. Gene targeting makes it possible to study the effects of single genes, including those that don’t occur naturally in mice.

Capecchi recently used this approach to solve a biological mystery about a rare, aggressive childhood cancer called synovial sarcoma. For reasons no one had been able to discover, this cancer’s tumors usually settle near joints, and it was unclear where the cancer originated, or when. But researchers did know the genetic mutation that induced it, so Capecchi inserted that mutation into the embryonic stem cells of mice, then coaxed those cells into becoming embryos. Implanted in a surrogate mother, the cells grew into offspring that carried the mutation in every cell of their bodies.

Capecchi had equipped the target gene with “stop” codes that kept the mutation turned off until he gave the mice an enzyme that snipped out the code and allowed the mutation to become active. He and others had developed that technique, known as conditional expression, to turn a gene or mutation on or off only in a desired tissue or at specific life stages. That’s important, because many mutations cause harm only in specific organs or at particular times, as in early- or late-onset diseases.

In this experiment, Capecchi used different sets of mice to turn on the mutation in three developmental stages—early embryonic, prenatal and postnatal. And in different mice, he activated the gene in different tissues. To his surprise, the cancer arose only in immature muscle cells that occur mainly in early development. Usually, the activated gene immediately kills its host cells, which (self-defeatingly for the mutation) puts a stop to the cancer. But joints secrete a still unidentified protective factor that keeps the cells with the mutation alive, so only mutated cells near joints survive to grow into tumors.

Having started with many sets of mice, Capecchi narrowed the search to a single set that now serves as an animal model of synovial sarcoma. Researchers, able to concentrate on that one model, can learn more about the cancer’s pathology and then design drugs that protect cells from harm. Many other types of cancers also remain biological mysteries, and Capecchi expects that each may require its own animal model, as will other complex diseases and neurodegenerative and neuropsychiatric disorders. So, instead of leveling off, it appears that the number of animal models will proliferate rapidly.

THOUGH NO NATIONAL REGISTRY TRACKS the numbers of rats and mice employed in research, a few years ago the National Association for Biomedical Research estimated that more than 30 million rodents were bred for this purpose, and that number has undoubtedly risen. But according to the U.S. Department of Agriculture, the number of experimental cats, dogs, rabbits, guinea pigs, hamsters, sheep, swine and primates has been dropping, from 1.5 million in 1973 (a figure that excluded many farm animals now counted) to 1 million in 2006. The number of cats and dogs, each species accounting for well under 1% of research animals, has been cut by two-thirds in three decades; the proverbial guinea pig, by half. Just two-thirds of 1% of research animals are primates.

One reason for these declining numbers is the increasingly sophisticated use of zebrafish, fruit flies, snails, worms and other simple organisms. Work with these creatures, as well as with mice, has decreased the need to do basic research in larger vertebrates and primates, explains Steve Niemi, who directs the Center for Comparative Medicine and serves as chief veterinarian at the Massachusetts General Hospital.

Among those lower creatures, the first fruit fly (Drosophila melanogaster) model of a human neurodegenerative disease debuted in 1998 in the lab of Nancy Bonini, professor of biology at the University of Pennsylvania. “Drosophila approximates many of the fundamental mechanisms of early development in vertebrates,” Bonini says. “So we thought, why can’t we use it to study late-onset neurodegenerative diseases?”

She chose to look at spinocerebellar ataxia type 3 (SCA3), a “genetic stutter” disease, similar to Huntington’s and fragile X syndrome, in which a triplet of DNA bases gets repeated too often. Those repeats lead to sticky, misfolded proteins that clump in neurons and kill them. As neurons degenerate, a person’s movements become erratic or ataxic. The longer the stutter, the worse the symptoms and the earlier in life they appear.
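At the sequence level, the “stutter” is simply a run of the same three DNA letters repeated many more times than normal. The snippet below, using an invented sequence, counts consecutive CAG triplets; in SCA3 the expanded repeat lies in the ATXN3 gene, and longer runs are associated with earlier, more severe disease.

```python
# Count the longest run of consecutive CAG triplets in a DNA sequence.
# The sequences below are invented, with repeat lengths chosen to fall in the
# typical healthy range (~20) and the range reported in SCA3 patients (~72).
import re

def longest_cag_run(dna: str) -> int:
    """Return the length, in triplets, of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(run) // 3 for run in runs), default=0)

healthy = "GGTACC" + "CAG" * 20 + "TTGACC"
expanded = "GGTACC" + "CAG" * 72 + "TTGACC"

print("healthy:", longest_cag_run(healthy), "repeats")
print("expanded:", longest_cag_run(expanded), "repeats")
```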

Researchers had recently identified the SCA3 gene in humans, and when Bonini inserted that gene into a fly, she saw the effects of neurons collapsing as the fly entered adulthood. “If you could imagine what a fly with ataxia might look like, that’s what we saw,” she says. “The effects were spectacular.” The fruit flies developed normally but later began staggering as if mimicking humans with the disease, and died early. And the stutter often lengthened in successive generations, as it does in humans.

Because the fly modeled the key features of the human disease, researchers can now screen proteins and molecules quickly to see which might have therapeutic effects. “The fly is an extremely powerful way to establish a ‘proof of principle’ for possible therapeutics,” Bonini explains. “We can try many things and narrow down a few promising ones to test in vertebrates. That’s a necessary next step, because vertebrates’ brains are more complicated, and it’s more challenging to get therapeutic compounds through their blood-brain barrier.”

Bonini is collaborating with mouse researchers, and while she doesn’t discount the idea of someday also using her findings to develop analytical computer models, she thinks it’s too early to move in that direction. “We still know so little about brain diseases that it will take a while before we can confidently develop analytic models of such complex biology,” she says.


“I WISH ANALYTICAL MODELS COULD replace animals,” Capecchi says. “It’s a legitimate goal to reduce the animals used in research, for ethical and animal-welfare reasons and for costs. We don’t want to do trivial things with animals, or waste them in ill-conceived experiments. Analytical computer models can help with that, but the success of those models depends on knowledge that is not yet available.”

For now, expanding what we know about disease means continuing to design experiments involving mice, fruit flies and even primates. Churchill and Jain, though both firm believers in analytical models, don’t expect computers to replace animals, at least not for the next 100 years. Indeed, success with analytical models—and with animals genetically engineered to serve very particular experimental purposes—may be as likely to spur animal experiments as to reduce their numbers, though both can help researchers design more targeted experiments that waste fewer subjects. “Computers help extract the maximum amount of data from the minimum amount of animals,” Churchill says. “They also find patterns that researchers would miss in animal experiments.” Future models, it seems, will come in many varieties, ranging from several kinds of animals to in vitro systems to computers, with each approach building on the others. It should be a productive formula.