Published on December 7, 2018
IN 2014, REGINA BARZILAY LEARNED THAT SHE HAD BREAST CANCER. As a professor of computer science at the Massachusetts Institute of Technology, her next step was to look for data points. Each stage of her cancer treatment would require some kind of decision—to pursue a more aggressive treatment, to explore experimental or personalized medicines—and in an ideal world, the findings from past patients and their outcomes could serve her as a guide.
What she found in reality was disappointing. Although thousands of women in the United States get a similar diagnosis every year, the information that guides cancer care, according to the American Society of Clinical Oncology, is largely based on the 3% of patients who participate in clinical trials. The data—and lessons—from the experiences of the other 97% are largely ignored. In a world in which analysis of “big data” influences even the most banal aspects of life, cancer treatment lags far behind.
After her recovery, Barzilay connected with Constance (“Connie”) Lehman, who had just been appointed chief of the breast imaging division at Massachusetts General Hospital. In the hospital’s trove of patient mammograms they saw an immediate opportunity. Their ongoing collaboration has already led to a tool that uses artificial intelligence to help sort out a tricky, high-risk subset of patients.
How did you two meet?
Connie Lehman: A mutual friend and colleague at MGH, Alphonse Taghian, suggested it. We realized immediately that we shared a passion to change the current paradigm for detecting, diagnosing and treating breast cancer.
So we started a conversation. Which problems could we address? In which areas could AI most improve our care of women at risk for breast cancer? How could we collect quality data with clear outcomes regarding our patients at risk?
Why is breast cancer treatment a good candidate for data science?
Regina Barzilay: Women in the United States are screened for breast cancer pretty regularly. It’s one of the rare areas in which we have images for an entire risk population over time. This gives us both control images with no disease, and images of cancer at all stages of development. That’s a great fit for “deep learning,” a technique that trains machines to find patterns in images by showing them lots of examples. To create an artificial intelligence model to assess, say, breast density or cancer risk, we need just that kind of large, high-quality data set.
CL: At MGH, we screen large numbers of women with high-quality digital mammography techniques. That gives us good baseline images. Many of our patients come back for regular screenings, and we have the advantage of linkage to tumor registries, so we know what happens to patients over time.
Experts in breast imaging interpret all our mammograms, producing high cancer detection rates and a low percentage of false positives. We try to keep reports structured, consistent and accurate. In short, we had the kind of reliable data that Regina and I needed for our planned work in AI.
What has your collaboration produced so far?
CL: We’ve recently published work in Radiology about detecting breast density. The tissue in some women’s breasts is more dense than in others, and there are two reasons that’s important. First, cancers can be hidden in dense breast tissue on the mammogram. Cancers present as white spots on a mammogram, as does dense breast tissue (as opposed to fatty tissue, which is relatively dark). Although most cancers can be seen in dense breast tissue, some can be missed or “masked” by the white dense tissue. Second, dense breast tissue, by itself, is a risk factor for developing breast cancer in the future. Currently, over 30 states in the United States require by law that women receive some form of notification about their breast density.
RB: We set out to have our AI learn to assess breast density from mammograms. This is tricky even for radiologists, and we wanted our program to recognize it at the same rate as the experts. We looked at more than 40,000 mammograms, and by the end, our model was picking up the same dense breasts as the radiologists about 95% of the time.
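The "about 95%" figure is an agreement rate: the fraction of mammograms for which the model's density category matches the radiologist's. As a minimal sketch (the four-category a–d scale is the standard BI-RADS breast density scale, but the ratings and variable names here are hypothetical), computing it is straightforward:

```python
# Fraction of mammograms on which the model's density category matches
# the radiologist's. Categories follow the standard BI-RADS density scale:
# "a" almost entirely fatty, "b" scattered density,
# "c" heterogeneously dense, "d" extremely dense.

def agreement_rate(model_ratings, radiologist_ratings):
    """Share of scans where the two raters assigned the same category."""
    if len(model_ratings) != len(radiologist_ratings):
        raise ValueError("rating lists must be the same length")
    matches = sum(m == r for m, r in zip(model_ratings, radiologist_ratings))
    return matches / len(model_ratings)

# Hypothetical ratings for five scans (one disagreement on the third):
model = ["b", "c", "c", "a", "d"]
readers = ["b", "c", "b", "a", "d"]
print(agreement_rate(model, readers))  # 0.8
```

On the real data set, the same calculation runs over tens of thousands of reads rather than five.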
How would a radiologist use this tool?
CL: We have been using it at MGH since the beginning of this year. Typically, radiologists review a mammogram, assign a breast density rating and search the image for signs of cancer. Now, the AI tool intercepts each scan before it reaches the radiologist and assigns a density rating in a fraction of a second. The radiologists see the assigned rating, which they can then accept or reject. More than 90% of the time, the tool and the radiologist agree. Radiologists still have the final word on the rating, but having this AI support helps them focus on the important task of searching the mammogram for cancer.
RB: This has been helpful at MGH, and we think it might be a real lifesaver at small and rural hospitals. Big hospitals may have the luxury of having breast specialists read mammograms, whereas in many other places they’re read by general radiologists who don’t have as much experience with breast tissue. That’s why breast density assessment is subject to a lot of human reader variation and error. We hope our tool can lead to more consistent assessments everywhere it’s used.
This is the first time an AI model based on deep learning has been successfully implemented in any kind of clinic.
What other projects are you working on?
CL: We are just getting started. There is incredibly rich information embedded in every woman’s mammogram, unique to her, which could help us more accurately assess her future risk for cancer and other diseases. We have also developed a model to support more precise and better-informed decisions about treatment—which women need surgical excision of certain types of breast lesions, and which can safely be monitored instead.
AI is likely the most important tool we’ve got to realize the promise of precision medicine. We can’t translate big data to knowledge without this kind of technology—elegant methods of data analytics that are a step above those we have used historically.
RB: We can also start to link these images to patient records to help us make even better predictions. What is the likelihood that a woman of a particular age, with a certain breast density rating, will get cancer in, say, two years? Or further down the line? Should she get an MRI as her risk increases? Can a woman whose breasts aren’t very dense wait two years before getting her next mammogram? Being able to make accurate predictions about risks and benefits can help everyone—patients, health care systems, providers, insurance companies.
Previously, there was no technology to look at a mammogram and say what might come next for that patient. Now we can go beyond breast density and let the model figure out things based on the image alone.