THERE’S A LONG AND WINDING ROAD to Food and Drug Administration approval for a new drug. First the medication has to be tested in animals, followed by three phases of human clinical trials with progressively higher hurdles. Ultimately, a manufacturer must demonstrate that a new drug is safe and effective, that its benefits outweigh its risks, that labeling is accurate, and that it can be produced with consistent quality and purity.

Most would-be therapies don’t make it through that gauntlet, and though there are widely varying estimates of the expense of getting a new drug approved, in 2009 the Tufts Center for the Study of Drug Development pegged the average at more than $1 billion. Yet almost no one would argue that today’s system is adequate to ensure that new drugs really work as they are supposed to, and that unexpected side effects won’t make the cure worse than the disease. Clinical trials routinely exclude the very young and the old, pregnant women, patients with multiple diseases and those taking medications that might interact with the drug being tested. The biggest problem is how few people get the therapy and how short a time they take it. The acid test comes after approval, when the population exposed to the drug expands exponentially.

The FDA’s process of “pharmacovigilance” is long and involved and is estimated to detect fewer than 10 in 100 adverse reactions to prescription drugs. The poster child for what’s wrong with post-approval monitoring—the arthritis drug Vioxx—won approval after clinical trials during which some 5,000 people took it. And though cardiovascular events were high on the list of side effects reported during the trials, it wasn’t clear whether Vioxx itself was the cause. Then, during the first six months that the drug was available, 2.5 million prescriptions were filled, a number that would swell to 105 million during the five years it was on the market. Having huge numbers of people taking Vioxx was the real test, and it’s estimated to have caused at least 88,000 heart attacks before it was pulled from shelves in late 2004. More recently, there have been unexpected problems with the diabetes drug Avandia, which appears to increase cardiovascular risk, and with cholesterol-lowering statin drugs, which increase the risk of muscular injury when taken with certain HIV or hepatitis C drugs.

Already, more than two million injuries, hospitalizations and deaths each year are attributed to prescription drug reactions, and the situation could be getting worse. “Adverse event reports” to the FDA from physicians, patients and drug companies about medication side effects tripled from 2000 to 2010, with 758,890 reports filed in 2010, compared with 266,866 a decade earlier. Part of that reflects an aging population taking more prescription medicines. Some 30% of elderly patients are on six or more drugs, and about a third of harmful drug reactions result from problems with multiple prescriptions.

“With the number of people using prescription drugs and the number of drugs being consumed per individual rising, adverse drug events could go up exponentially,” says Nigam Shah, an assistant professor of biomedical informatics at Stanford University. “We need active surveillance to monitor drugs almost in real time so that we can reduce how long patients are at risk.”

Where could that real-time intelligence come from? Shah is among several scientists who believe that at least part of the answer lies in the Internet and social media. Four out of five adult Internet users go online to research ailments and prescription drugs. In addition, many people blog about their conditions; network with others through chat rooms, Facebook and other online communities; or tweet about their illnesses. Now researchers are investigating whether those data channels can be harnessed to provide timely red flags about prescription drugs and predict harmful side effects.

To extract nuggets of useful information from billions of bytes of data, scientists must filter out spam and other “noise”; make sense of Web users’ phrasings, jargon and misspellings; and link comments to potential side effects. Researchers have constructed complex models that use algorithms and natural language processing to scour Websites and detect key patterns or relationships between words in social media posts.
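
As a rough illustration of what such a pipeline does, here is a minimal Python sketch that filters obvious noise, patches a couple of misspellings and counts posts that mention both a drug and a symptom term. The term lists, misspelling map and spam rule are invented placeholders; the research systems described here rely on much richer lexicons and statistical language models.

```python
import re
from collections import Counter

# Illustrative term lists only; real systems use curated medical vocabularies.
DRUG_TERMS = {"vioxx", "avandia", "paroxetine"}
SYMPTOM_TERMS = {"heart attack", "chest pain", "rash"}
MISSPELLINGS = {"parocetine": "paroxetine", "hart attack": "heart attack"}

def looks_like_spam(post: str) -> bool:
    """Crude noise filter: drop posts dominated by links or promo phrases."""
    return post.count("http") > 2 or "buy now" in post.lower()

def normalize(post: str) -> str:
    """Lowercase, patch known misspellings, collapse whitespace."""
    text = post.lower()
    for wrong, right in MISSPELLINGS.items():
        text = text.replace(wrong, right)
    return re.sub(r"\s+", " ", text)

def drug_symptom_pairs(posts):
    """Count posts that mention both a drug and a symptom term."""
    counts = Counter()
    for post in posts:
        if looks_like_spam(post):
            continue
        text = normalize(post)
        for drug in DRUG_TERMS:
            if drug not in text:
                continue
            for symptom in SYMPTOM_TERMS:
                if symptom in text:
                    counts[(drug, symptom)] += 1
    return counts

sample = [
    "Started parocetine last month and now I have a rash that won't go away",
    "buy now!! cheap meds http://a http://b http://c",
]
print(drug_symptom_pairs(sample))  # the spam post is dropped; the first yields one pairing
```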

Web searches are another potential source of useful data, and patients’ electronic medical records can also be mined. The FDA is aggressively exploring these new channels, in part to comply with the Food and Drug Administration Amendments Act of 2007, which called for the agency to establish a new drug safety surveillance system. It will likely take a combination of several kinds of heightened monitoring to provide an effective post-approval system, and many of the latest efforts are in very early stages. Yet Shah is optimistic that at least some of the current research will pay off. “The good news is that this is an exciting time of innovation,” he says.

THE FDA’S CURRENT TOOL FOR KEEPING TABS on approved drugs is MedWatch, a system for reporting safety issues with drugs and other medical products. A crucial part of that effort is the FDA Adverse Event Reporting System, or FAERS. A computer database developed in 1998, FAERS includes more than six million reports of adverse drug reactions and medication errors. Health care professionals and consumers can file complaints through the MedWatch Website or by e-mail, phone, fax or letter. In addition, pharmaceutical companies typically have 15 days to report serious side effects. That information from drugmakers accounts for about 80% of the most serious complaints.

Though a gold mine of information, FAERS has many limitations. It can provide raw data, but the database can’t be readily queried. And FDA criteria for what constitutes a “reportable” adverse event are insufficient to determine whether a fever, a rash, dizziness, a seizure or other problem was a true side effect of a drug, a result of combining it with other medications or something that arose because of another factor.

Still, pharmaceutical companies regularly monitor FAERS data, and the nonprofit Institute for Safe Medication Practices publishes Quarter Watch, an electronic newsletter that identifies serious trends it has discovered when analyzing FAERS reports. The FDA also posts on its Website a list of all drugs it has flagged to evaluate for potential safety issues.

Epidemiologist John Brownstein, an associate professor at Harvard Medical School, and his team are working with the FDA to streamline the cumbersome reporting interface on the MedWatch Website, which he says often deters patients and physicians from passing along complaints. The research group is also developing data-mining tools that can look past duplications, spelling errors and other problems that make it difficult to analyze FAERS data. The goal is to detect early signals of trouble with a particular drug and correlate those with demographic information, so that the MedWatch site could include descriptions of a drug’s side effects based on personal characteristics such as age and gender. “We want you to be able to see what adverse effects a drug may have for a person like you,” says Nabarun Dasgupta, a pharmacoepidemiologist and co-founder of Epidemico, a health data analytics company based in Chapel Hill, N.C., and Boston.
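
The deduplication problem can be pictured with a small Python sketch: treat two reports as duplicates when their key fields form nearly identical strings, so that spelling variants collapse into one record. The field names, similarity threshold and sample records below are assumptions for illustration, not the group’s actual tooling.

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Treat two report summaries as duplicates if they are nearly identical strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def summarize(report: dict) -> str:
    # Assumed field names; a real FAERS record has many more.
    return f"{report['drug']} {report['reaction']} {report['age']} {report['sex']}"

def deduplicate(reports):
    """Keep one representative per cluster of near-identical reports."""
    kept = []
    for report in reports:
        if not any(similar(summarize(report), summarize(k)) for k in kept):
            kept.append(report)
    return kept

reports = [
    {"drug": "rofecoxib", "reaction": "myocardial infarction", "age": 67, "sex": "M"},
    {"drug": "rofecoxib", "reaction": "myocardial infraction", "age": 67, "sex": "M"},  # spelling variant
    {"drug": "rosiglitazone", "reaction": "heart failure", "age": 71, "sex": "F"},
]
print(len(deduplicate(reports)))  # 2: the first two records collapse into one
```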

Brownstein hopes that MedWatcher, a mobile application launched in 2010, will make it much easier for patients and physicians to report problems. He developed the app in collaboration with the University of North Carolina and says it should take less than three minutes for a user to enter key variables about side effects from any of more than 10,000 medications. “We need to get people more actively engaged in drug safety and reporting an event,” Brownstein says. That includes not only people taking a drug but also, say, a physician who has just seen a patient and heard about a problem. So far, more than 30,000 people have downloaded the app.

GETTING MORE PEOPLE TO REPORT DRUG ISSUES to the FDA could certainly help detect post-approval problems. But many patients, for various reasons, aren’t likely to make those reports, no matter how painless the process becomes, says Shawndra Hill, an assistant professor of operations and information management at the University of Pennsylvania Wharton School. Often people prefer informal networks such as message boards and social media for discussing drug side effects, and in several recent and ongoing studies, researchers have been looking for ways to mine those communications.

A study that Hill and others published in 2011 in the Journal of Biomedical Informatics focused on information about the side effects of four hormonal breast cancer treatment drugs that researchers extracted from 1.1 million messages on popular breast cancer Internet message boards. They found that about one in four side effects mentioned in message board comments hadn’t been listed in labeling information for the drugs. Most of those problems, however, weren’t serious. “That could mean one of two things,” says Hill. “If the side effect is really adverse, people may be too sick to talk about it. Or the FDA trials are doing a good job at uncovering the things that are really problematic.” The research also couldn’t determine whether the side effects were the direct result of taking one of the four drugs.
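
In spirit, that comparison works like the short Python sketch below: collect side-effect terms extracted from forum posts and check which ones do not appear in the drug’s labeling. All of the terms and counts here are invented placeholders, not data from the study.

```python
LABEL_TERMS = {"hot flashes", "nausea", "joint pain"}   # assumed labeling entries
FORUM_MENTIONS = {                                       # term -> number of posts (invented)
    "hot flashes": 812,
    "joint pain": 455,
    "insomnia": 120,
    "dry mouth": 60,
}

# Side effects discussed online but absent from the label are the interesting ones.
unlabeled = {term: n for term, n in FORUM_MENTIONS.items() if term not in LABEL_TERMS}
share = len(unlabeled) / len(FORUM_MENTIONS)
print(f"{share:.0%} of mentioned side effects are not in the labeling: {unlabeled}")
```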

Brownstein and his team, meanwhile, are six months into a three-year project to explore what they can glean by monitoring online patient discussions. A similar study at the University of Virginia and West Virginia University is using many of the same tools Brownstein’s group is applying—computer algorithms, natural language processing, data parsing—in trying to extract information about adverse drug events from social media chatter. That research, in turn, builds on a study analyzing online posts from 2000 to early 2012 to find mentions of 20 drugs. The retrospective survey identified known adverse drug reactions 80% of the time—and its methods would have detected problems much earlier than FDA warnings were issued.

A related approach is to monitor Web searches, as the information patients are seeking may suggest what problems they’re having. For a study published this year in the Journal of the American Medical Informatics Association, researchers from Stanford and other institutions partnered with Microsoft to analyze the search logs of 6 million Web users who agreed to take part in the project. Scientists analyzed 82 million queries about drugs, symptoms and conditions in 2010. They paid particular attention to pairings of paroxetine, an antidepressant, and pravastatin, a cholesterol-lowering drug—because in 2011, the combination of the two drugs was reported to cause hyperglycemia (high blood sugar).

Researchers found that about 5% of people who searched for paroxetine or its brand names also searched for a hyperglycemia-related term; for pravastatin and its brand names, the rate was below 4%. But for those who searched for both drugs, suggesting they might be taking both, the search rate for hyperglycemia doubled to 10%—offering further, but far from conclusive, evidence. “The challenge now is to figure out what application this has in continuous monitoring for such side effects,” says Stanford’s Shah. To be more effective, he says, search histories will need to be compared with data from other sources—social media, patient support forums, electronic health records and FAERS. “If a warning signal shows up in one and can be confirmed in another,” he says, “there’s a high probability that it’s true and worth investigating in more detail.”
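
A toy version of that query-log arithmetic, with invented user IDs standing in for millions of anonymized search logs, might look like this in Python. It simply computes, for each group of users, the fraction that also searched for a hyperglycemia-related term.

```python
def related_search_rate(users: set, related_searchers: set) -> float:
    """Fraction of a user group that also searched for a hyperglycemia-related term."""
    return len(users & related_searchers) / len(users) if users else 0.0

# Tiny invented user-ID sets; the real analysis covered 82 million queries.
paroxetine_users    = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
pravastatin_users   = {8, 9, 10, 11, 12, 13, 14, 15}
hyperglycemia_users = {3, 8, 9}

both = paroxetine_users & pravastatin_users
print("paroxetine only :", related_search_rate(paroxetine_users - pravastatin_users, hyperglycemia_users))
print("pravastatin only:", related_search_rate(pravastatin_users - paroxetine_users, hyperglycemia_users))
print("both drugs      :", related_search_rate(both, hyperglycemia_users))
```

The pattern of interest is the same one the study reported: the rate among users who searched for both drugs is markedly higher than for either drug alone.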

YET ANOTHER POTENTIALLY RICH SOURCE OF DATA about drug problems is patient records. The FDA’s Sentinel Initiative, launched in 2008, is charged with developing tools and policies for a new national database that will utilize health information routinely collected by hospitals, insurance companies, physician offices and other sites. “Vioxx was something we should have known about, but no one was looking at the data, which was embedded in health insurance claims,” says Jeff Brown, an associate professor of population medicine at Harvard Medical School. “Our goal is to monitor those data sources.”

Brown is involved in what’s known as Mini-Sentinel, a pilot program joining Harvard Pilgrim, Kaiser Permanente, Humana, Aetna and other large health plans in efforts to amass data from electronic health records, administrative and insurance claims, and health registries. Mini-Sentinel has established a network of electronic health records of more than 130 million people, and researchers are using those to monitor the safety of medical products. Because patient records aren’t designed to capture adverse drug events, researchers must use epidemiological methods to find problems that may be inferred from billing codes in insurance claims.

Recently, Mini-Sentinel assessed the risk of serious bleeding associated with the use of the anticoagulant dabigatran (Pradaxa). After the 2010 approval of Pradaxa, the FDA received a flood of reports that the drug caused bleeding. The Mini-Sentinel partners searched their data about patients who’d taken the drug, but analysis ultimately found no evidence that bleeding rates for new users of dabigatran were any higher than for new users of warfarin, another common anticoagulant—a result that was consistent with observations from the clinical trial used to approve the drug. The study took several weeks to conduct, though Brown says upgrades could eventually reduce that time frame to days.
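
A highly simplified Python sketch of that kind of new-user comparison appears below: flag a bleeding event when an illustrative diagnosis code shows up in a patient’s claims after the first dispensing, then compare event rates between the two cohorts. The records, code list and follow-up rule are assumptions for illustration, not the Mini-Sentinel protocol.

```python
BLEEDING_CODES = {"578.9", "459.0"}   # illustrative ICD-9 diagnosis codes for bleeding

def bleeding_rate(cohort):
    """Fraction of new users with a bleeding diagnosis code dated after their first fill."""
    events = sum(
        1 for patient in cohort
        if any(code in BLEEDING_CODES and date > patient["first_fill"]
               for code, date in patient["claims"])
    )
    return events / len(cohort) if cohort else 0.0

# ISO-format date strings compare correctly as plain strings in this sketch.
dabigatran_cohort = [
    {"first_fill": "2011-01-05", "claims": [("578.9", "2011-03-01")]},
    {"first_fill": "2011-02-10", "claims": [("401.9", "2011-04-01")]},  # unrelated code
]
warfarin_cohort = [
    {"first_fill": "2011-01-20", "claims": [("459.0", "2011-02-15")]},
    {"first_fill": "2011-03-02", "claims": []},
]
print("dabigatran new users:", bleeding_rate(dabigatran_cohort))
print("warfarin new users  :", bleeding_rate(warfarin_cohort))
```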

In another study utilizing patient information, a 2009 pilot known as ASTER (Adverse Drug Event Spontaneous Triggered Event Reporting), sponsored by Pfizer, Partners HealthCare and others, evaluated whether electronic health records could improve physician reporting of drug problems. A special screen was added to the records so that physicians who had dropped a medication because of a patient’s adverse reaction could enter details about the problem and send them immediately to the FDA. Though only 2 of the 26 physicians participating in the study had submitted an adverse event report in the prior year, during the seven-month study the full group filed 200, with one in five deemed serious. “As a proof of concept, it was a complete success,” says Michael Ibara, head of Pharmacovigilance Information Management at Pfizer. But he notes that turning that concept into reality would require scrutinizing regulations pertaining to adverse event reporting, among other obstacles.

THAT’S HOW IT IS FOR MOST ATTEMPTS to improve post-approval drug monitoring. Though there are many promising possibilities, few have yet proved they can quickly detect serious problems so that the FDA can decide whether to beef up its warnings on a therapy or pull it off the market.

To detect post-approval drug events reliably and at an early stage will almost certainly require combining several different data sources, and the potential of a multipronged approach was demonstrated in a study published in Science Translational Medicine in 2012. Researchers at Stanford University School of Medicine used a computer algorithm to sort through more than 1.8 million reports in the FAERS database from 2004 through 2009, as well as to look at a smaller Canadian database of 300,000 adverse event reports and chemical databases used to correlate drug side effects. Those sources enabled them to identify 47 pairs of previously unsuspected drug-to-drug interactions. The algorithm compared FAERS data from reports filed by patients taking a single drug and experiencing side effects against a separate database, a control group of one or more patients with the same condition and other matching factors, such as sex and age, but who weren’t taking the same drug. “It sounds obvious, but if a lot more people on the drug reported side effects than those not taking the drug, the medication is the likely cause,” says Nicholas Tatonetti, a lead researcher on the study and now an assistant professor at Columbia University.
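
The core comparison can be boiled down to a generic disproportionality calculation, sketched below in Python with invented counts. The study’s actual algorithm is considerably more sophisticated, but the intuition is the same: if a side effect turns up far more often in reports from patients on the drug than in matched controls, the drug becomes a suspect.

```python
def reporting_odds_ratio(exposed_with, exposed_without, control_with, control_without):
    """Odds that the side effect is reported among exposed patients vs. matched controls."""
    return (exposed_with / exposed_without) / (control_with / control_without)

# Invented counts: reports that do / do not mention the side effect,
# for patients taking the drug and for matched patients who are not.
ror = reporting_odds_ratio(exposed_with=120, exposed_without=880,
                           control_with=40, control_without=960)
print(f"Reporting odds ratio: {ror:.1f}")   # values well above 1 suggest a possible signal
```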

In a universe of 1,332 drugs, Tatonetti and his colleagues found an average of 329 new adverse events for each medicine, dwarfing the 69, on average, that are listed on drug labels. They then applied the same algorithm to mine FAERS for drug-to-drug interactions against another database of off-label side effects. That produced 1,301 adverse events from an analysis of 59,220 pairs of drugs. Those discoveries were then tested against “real patients” by mining lab test results in the electronic records of patients at Stanford Hospital & Clinics. That final step confirmed 47 previously unknown drug-to-drug combinations that seemed particularly likely to cause problems. The worst involved diuretics called thiazides, often prescribed to treat high blood pressure, and selective serotonin reuptake inhibitors, used for depression. Patients who took both drugs were significantly more likely to develop a heart condition known as prolonged QT, associated with increased risk of irregular heartbeats and sudden death. The FDA is evaluating those findings to see whether updated drug labeling is warranted.

Online conversations about drugs and their side effects could add still another useful source of intelligence that might speed detection of serious problems. But before that can happen, there will need to be further testing and refining of computational tools, and regulatory and legal issues will also have to be addressed. “In digitizing data, we’re looking at a paradigm shift and a new business model as these tools get validated,” says Pfizer’s Ibara.