IN 2014 THE LANDMARK research of Philip Kennedy had come to a full stop. For decades, the Atlanta neurologist had worked to harness the brainpower of disabled adults, at one point pioneering the science of brain-computer interfaces by implanting electrodes he had invented into the brain of a paralyzed Vietnam veteran, who then was able to move a computer cursor with his mind. Next, Kennedy wanted to transform thoughts into spoken language, and he used a combination of surgery and software to enable a patient to think about making sounds that were then converted into a rough form of synthesized speech.

But Kennedy’s plans to push his research further were thwarted when the Food and Drug Administration revoked its approval for using his implants on human subjects. A new rule from the agency, which regulates medical devices as well as drugs, required him to demonstrate that the implants were safe and sterile, which he could not do because the company that supplied them would not release that information. So, with many years of work at stake, Kennedy found a work-around: Leave the country and become his own guinea pig. “I realized most people wouldn’t want to volunteer, so the next person available was myself,” Kennedy says.

Three years ago, Kennedy, then a healthy 67-year-old, put himself under the knife at a small hospital in Belize. He had trained a surgeon in the Central American nation in the particulars of electrode implantation, a 12-hour procedure that required sawing off the side of Kennedy’s skull. A spike in his blood pressure caused his brain to swell dangerously, and Kennedy awoke unable to speak or move.

With his work in Belize, Kennedy joined a long line of other scientists who have risked their health and often their lives by deciding to put themselves under the microscope. Hundreds of physicians and medical researchers have documented their self-experimentation through the centuries, and their work has led to crucial discoveries in infectious disease, anesthesiology, physiology, radiology, pharmacology, oncology and other areas.

Many of these pioneers lived in the early days of medicine, before the current era of regulations and ethical guidelines for research trials that have tended to discourage the researcher as research subject. These policies cite concerns for the potential physical risks that an investigator may overlook in a self-experiment, as well as the inherent bias of a study that focuses on a single person.

Yet the tradition endures, and it could even see a comeback in the future. Personalized medicine is the watchword of the genomic revolution, with its promise of cures tailored to a single patient. An “n-of-1” study—one that involves just one subject—may prove to be an alluring fit.

“Self-experimentation by a single individual tucked away in his laboratory seems almost quaint, a relic of the past,” says Allen B. Weisse, a New Jersey cardiologist who has researched the practice. “But not every advance in medicine is made by teams working along conventional lines.”

THE HISTORY OF MEDICINE is full of those who put themselves at the center of their experiments. Many embraced self-experimentation as a matter of conscience. “Before testing their theories and interventions in other people, scientists thought they ought to expose themselves to experimental hazards and work out the bugs,” says Rebecca Dresser, a medical ethicist and professor at the Washington University School of Law in St. Louis. And there were also practical considerations—nothing could be easier to observe or more reliable than the subject in the mirror.

In his review of more than 400 cases spanning several centuries, Weisse notes that nine out of 10 self-experiments succeeded, yielding valuable data or positive results to support a hypothesis. One of the first may have been Santorio Santorio, known today as the father of the science of metabolism, who in 16th- and 17th-century Italy took daily measurements of his weight, food intake and bodily waste.

A dozen of these pioneers have received Nobel Prizes for what they’ve put themselves through. In 1929, for example, German medical resident Werner Forssmann inserted a catheter into a vein in his elbow, snaked it into the right atrium of his heart and documented the accomplishment with an X-ray of his chest—a very early version of cardiac catheterization, and work that led to Forssmann’s sharing the 1956 Nobel Prize in Physiology or Medicine. More than half a century later, in 1984, Australian physician Barry Marshall guzzled a potent cocktail of Helicobacter pylori bacteria and then became ill with stomach inflammation—thus proving his theory that this bacterium caused ulcers. His Nobel was awarded in 2005.

But for others, self-experimentation has led to disability or death. In 1767, John Hunter, a British surgeon regarded as the founder of scientific surgery, allegedly injected himself with what he thought was gonorrhea to study venereal disease but contracted both gonorrhea and syphilis, complications from which he continued to suffer until his death. Jesse Lazear, an American Army surgeon, became fatally ill in 1900 in Cuba after he supposedly allowed himself to be bitten by mosquitoes suspected of carrying yellow fever—a sacrifice that ultimately contributed to the development of a yellow fever vaccine. The delayed effects of radioactivity, unknown at the time, resulted in French physicist Marie Curie’s death from leukemia in 1934. Walter James Dodd, a physician at Massachusetts General Hospital whose work greatly advanced the state of the art of X-ray technology, suffered severe damage to his skin and intense pain from his work; he died in 1916.


As modern medicine progressed, the practice of self-experimentation waxed and waned, often in response to the introduction of new medical methods or technologies. For example, the arrival of general anesthesia in the mid-19th century, followed by local anesthesia in the 1880s, prompted a raft of self-experiments, which dropped off as anesthesiology was absorbed into standard medical practice, according to Weisse’s study. By the second half of the 20th century, however, decidedly fewer researchers were using themselves as subjects, the study shows.

Nazi atrocities performed in the name of medical research during World War II played a role. The work of Josef Mengele and others shone a harsh spotlight on the human rights of research subjects. In 1954, the National Institutes of Health established policies about what could and could not be done in human trials and began requiring subjects’ written consent. Institutional review boards, or IRBs, also came into being, bringing together independent experts at hospitals and other institutions to assess the scientific and ethical legitimacy of proposed research before allowing it to proceed. The National Research Act, passed by Congress in 1974, made IRB review and approval a legal requirement for all federally funded research involving human subjects, and current regulations provide additional protections for vulnerable groups such as pregnant women, children and prisoners. Against this regulatory backdrop, it became more and more difficult for would-be self-experimenters to proceed without violating rules that, among other things, often served as prerequisites for research funding and publication in reputable medical journals.

U.S. law doesn’t explicitly ban self-experimentation by a physician or medical researcher, and regulations remain fuzzy about the researcher as subject, says Washington University’s Dresser. But the gold standard, the federal protections for all human research subjects, should apply at all times, she says. “An investigator’s self-experimentation should be scientifically justifiable and ethically acceptable.”

Still, even if there is room for interpretation in the rules and guidelines, many IRBs have categorically excluded investigators from their own experiments, while others consider the issue case by case. “Self-experimentation is something we don’t like to see and that we generally discourage, particularly when it involves medical risks,” says Elizabeth Hohmann, director of Partners Human Research Committees in Boston. Nonetheless, n-of-1 proposals still cross her desk several times a year, even if she just as routinely rejects them for many practical reasons, including concerns about research bias. “Why make a problem when none need exist?” she asks. “It isn’t essential for investigators to take part in their own studies.”

YET RESEARCHERS CONTINUE TO offer themselves as subjects, for a variety of reasons. For Sushrut Waikar, a nephrologist at Brigham and Women’s Hospital in Boston, it stemmed from a desire to test the protocols of his study before involving other human volunteers. Earlier this year, Waikar proposed serving as the first subject in his lab’s study of human kidney function.

Already approved by the hospital’s IRB, the study had more than two dozen healthy volunteers standing by. They would be injected with a harmless iodine contrast agent and would have their urine and blood sampled several times during the hours before and after they drank a protein shake. Waikar thought the process sounded cumbersome, and that if he experienced it himself, he might be able to work out potential kinks. He went so far as to write an amendment to his protocol, seeking IRB approval for an n-of-1 pilot trial.

But when word got out about what he was doing, some members of his staff were far from enthusiastic about having the boss be the subject. In an online survey of the fellows, junior faculty, research coordinators and nurses on the team, opinions varied—ranging from no concern to some doubts about the need for Waikar’s participation to squeamishness about performing procedures on a colleague. “I took their comments quite seriously,” says Waikar, who bowed out and let the study proceed with the research subjects it had enrolled.

In September 2012, Russell Poldrack, a neuroscientist then at the University of Texas at Austin, embarked on a study to track changes in his own brain over many months. Normal alterations could happen when a subject came down with a disease, had a change in environment or just underwent a mood shift. And whereas conventional brain imaging studies tended to average together snapshots of multiple subjects’ brains, Poldrack wanted to know what would happen inside just one person’s head. “If we want to understand fluctuations in disease, the first thing we need to do is to understand how a healthy person’s brain function fluctuates,” he says.

Poldrack reasoned that no volunteers would want to come in twice weekly over many months to have their blood drawn and their brains scanned through magnetic resonance imaging (MRI). But because he ran the UT-Austin Imaging Research Center, Poldrack had easy access to MRI equipment, and he submitted a proposal to the university’s IRB to become his own research subject. The review board, however, concluded that his self-experimentation didn’t fit its criteria for research on human subjects. “Aside from being an n-of-1, it was also basically a fishing expedition,” Poldrack says. “It was discovery science, not testing a hypothesis, and those kinds of grants are generally impossible to get approved and funded.”

Poldrack chose to go ahead on his own, hoping he could not just map changes in his brain but also analyze and draw connections between brain functions and gene expression. Twice a week he climbed into an MRI machine, for 30 minutes at a time, and he kept at it for 18 months. (The scans happened before 8 a.m., to qualify for a discounted rate of $150.) Though he wasn’t particularly worried about the risks of having more than 100 MRIs, there were several possible side effects, including anxiety, vertigo and hearing loss from repeated exposure to the claustrophobic machine and its noise.

Poldrack also fasted and had his blood drawn every week, and he tracked virtually every aspect of his daily life, logging vital signs, diet, sleep and stress. To avoid changing the imaging patterns in his MRI series, he even tried to control what he was thinking about while he was in the scanner. “I went out of my way to just zone out,” he says. He also looked at the data he was compiling as seldom as possible—because thinking about that, too, might alter what showed up on his MRIs.

Six months into his study, researchers from Washington University in St. Louis, who knew of his work, approached him about using his data to supplement their own neurological studies. That collaboration ultimately produced the most detailed map ever made of functional brain connectivity in a single person. The results showed a correlation between changes in his brain and alterations in the expression of many families of genes in his blood. For example, when his psoriasis flared up, the expression of genes related to inflammation and other immune responses also increased.

Poldrack and his colleagues published their findings in two leading journals, and other academic papers have cited the work. And he and the Washington University scientists have made the entire data set and tools to analyze it available through Poldrack’s lab at Stanford University, where he now teaches psychology. “The payoff is that the data set is useful for a lot of groups testing other types of questions,” says Poldrack.


POLDRACK CITES COLLEAGUE Michael Snyder, professor and chair of the genetics department at Stanford, as his inspiration for self-experimentation. Snyder’s n-of-1, in 2010, involved sequencing his own genome and then, over the next 14 months, contributing 20 blood samples on which three billion measurements were taken. His colleagues analyzed in detail how proteins, RNA, metabolites and an array of other molecular components interacted in his body when he was in good health and bad.

The self-experiment showed the value of looking at a patient’s genome and then monitoring that person’s health status in light of those results, Snyder says. Yet one leading journal declined to publish the study, and when it finally did make it into print, many people criticized his approach as well as the results. “I got blasted by several people,” says Snyder.

But he defends the value of beginning with an extensive exploration of his own health and genetic makeup. Snyder and his team have since expanded the study to include 100 people from whom they’re collecting a continuous stream of data provided by wearable biosensors, as well as from standard lab tests of blood chemistry, gene expression and other information. The scientists have found distinctive patterns of deviations from normal that seem to correlate with particular health problems, he says. And that’s the point—that “normal” may be different for different people, and charting individual changes and the possible impact of environmental factors can lead to better diagnosis and treatment.

To look for such differences, experiments on a single person may be exactly the right size, Snyder says. “Scientists are used to thinking you need to study 1,000 people and 1,000 controls to learn anything about disease markers, for example, and that’s exactly what we don’t want to do,” he adds. “We want to understand what a healthy state is for one individual, and then to measure in detail deviations from that state. That lets us catch when something is going on, when someone has a predisposition for a disease or is on the verge of developing it.”

Atlanta’s Philip Kennedy, meanwhile, eventually recovered from his surgery in Belize and moved forward with his research. The brain seizures from his first surgery didn’t dissuade him from flying back to Belize four months later for a second procedure, to insert a radio transceiver just under his scalp that he used to record his brain’s neural patterns. He had hoped to leave the device in place indefinitely, but his incision failed to heal and the electrodes and transceiver had to be removed at a local Georgia hospital. He continues to analyze information from his self-experiments, and the results have been encouraging, he says—well worth the risks, scars and $60,000 in out-of-pocket costs.

Others, however, question the reliability and replicability of Kennedy’s data. “No one else is working with his electrodes currently, so part of the problem is verifying his results,” says Laura Specker Sullivan, a research fellow at Harvard Medical School’s Center for Bioethics. “Other researchers would have to take up his work for it to bear scientific fruit.” And Sullivan and others point to new technology, including sensors that can measure brain activity from just under the surface of the skull, that may have rendered his invasive approach unnecessary.

Yet Kennedy has his defenders. “He cares for nothing but the well-being of his patients,” says Lee Miller, distinguished professor of neuroscience at the Feinberg School of Medicine at Northwestern University in Chicago, who has known Kennedy for many years. “Phil’s heroic. His motivation was to come up with solutions to hard problems, having run out of other options.” And for Kennedy, as for so many other self-experimenters before and after him, that is reason enough, even if those solutions are inevitably overtaken by others.