Doctors rely on visual information to help understand what’s happening with a patient. Trained eyes can discern that the fuzzy area on an X-ray is a tumor, for instance, or that a curious cell under the microscope points to a rare infectious disease.

But manually reading images lacks the data-driven precision of much of twenty-first-century medicine. For vital signs, which can easily be turned into numbers, computers can give doctors a big assist, helping them interpret test results in the context of countless other measurements. Medical images are harder to quantify, and making sense of them still depends mostly on what a clinician’s eye can see.

Last month, however, Massachusetts General Hospital launched the Clinical Data Science Center. Among its goals is to use advanced technology—assisted by the rich repository of more than 2 billion MRIs, X-rays, slides and other diagnostic measurements in the hospital’s medical records—to help computers make sense of images. Keith Dreyer, vice chairman of radiology at MGH and director of the new center, explains how the center’s team plans to teach computers to see as doctors do—and to explore a new frontier for diagnostics and clinical discoveries.

What will your new data center do with more than 2 billion clinical images?
We want to take these historical images from our archives and make them useful to doctors today. Part of that involves using them to teach computers how to recognize patterns.

How can that help doctors?
In the past, if I wanted to find a new way to identify patients with pancreatic disease before they show symptoms, I would come up with a hypothesis and test it in new patients. But now we’re asking the machine to look into the archives for the answers. I can take millions of past images that show healthy and diseased pancreases and, along with additional medical data on the patients, load them into software algorithms. With that information, the algorithms can learn to recognize patients with pancreatic disease and to identify markers that are precursors to the condition. They can then look at current patients’ medical records and pinpoint those who might develop pancreatic disease and are in need of diagnostic evaluation.
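To make that workflow concrete, here is a minimal sketch in Python of the kind of step Dreyer describes: training a classifier on labeled historical records, then scoring current patients so the highest-risk ones can be sent for evaluation. The features, labels, and choice of a scikit-learn random forest are illustrative assumptions, not a description of MGH’s actual system.

```python
# Illustrative sketch only: a classifier fit on labeled historical records,
# then used to flag current patients for follow-up. Features, labels, and
# the model choice are hypothetical, not MGH's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for archived data: image-derived measurements plus other medical
# data, labeled by whether pancreatic disease was later diagnosed.
historical_features = rng.normal(size=(10_000, 12))   # e.g., texture, density, lab values
historical_labels = rng.integers(0, 2, size=10_000)   # 1 = disease developed, 0 = did not

# "Ask the machine to look into the archives": learn patterns from the labeled history.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(historical_features, historical_labels)

# Apply the learned patterns to current patients and rank them by estimated risk,
# so those most likely to develop disease can be referred for diagnostic evaluation.
current_features = rng.normal(size=(500, 12))
risk = model.predict_proba(current_features)[:, 1]
flagged = np.argsort(risk)[::-1][:20]   # top 20 patients by predicted risk
print("Patients to review first:", flagged)
```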

How do the machines learn?
One technique we use is called “deep learning.” If I show you 100 apples and 100 oranges, you will soon learn the unique features that make an apple an apple, and an orange an orange. If I then show you a new fruit, you’ll be able to tell me whether it’s an apple or an orange, even if you have never seen that specific apple or orange before.

In the same way, feeding our computers vast numbers of clinical images can train them to recognize subtle differences and arrive at algorithms that help them interpret new images.
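For readers who want to see what the apples-and-oranges idea looks like in practice, here is a small deep-learning sketch in Python using PyTorch. The network architecture, image size, and random training data are assumptions chosen for brevity; a real system would train on large numbers of labeled clinical images rather than synthetic ones.

```python
# A minimal deep-learning sketch of the apples-vs-oranges example.
# Architecture, image size, and synthetic data are illustrative assumptions.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Two small convolutional stages learn visual features from the images.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # two classes: "apple" vs. "orange"

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Synthetic stand-ins for 200 labeled 64x64 grayscale images.
images = torch.randn(200, 1, 64, 64)
labels = torch.randint(0, 2, (200,))

model = TinyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Show the network the labeled examples repeatedly; it gradually learns
# the distinguishing features of each class.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Given a new, never-before-seen image, predict which class it belongs to.
new_image = torch.randn(1, 1, 64, 64)
prediction = model(new_image).argmax(dim=1)
print("Predicted class:", prediction.item())
```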

What are your biggest challenges?
In radiology, which relies heavily on images, the data from a scanner is inherently digital—it’s in a form that can be processed by our computers. But pathology also relies on clinicians making visual assessments, and although there are processes that can convert, say, a tumor biopsy into digital information, there are limitations today. That’s largely because the resolution of the image data is quite high, and that makes it difficult to digitize precisely. So the logistics of lining up data from multiple specialties correctly to perform deep learning will be an interesting and ongoing challenge—as will integrating the processes we’re developing into the medical mainstream.

Could this technology eventually take the place of human physicians?
The plan is not to replace humans. It is becoming clear that you achieve the greatest accuracy when you can combine the best attributes of both machines and humans.