Electronic health records (EHRs) weren’t originally designed to predict disease risk or guide more precise treatment. But when combined with artificial intelligence (AI), EHR data could do both, transforming health care in the process.
At the Icahn School of Medicine at Mount Sinai, a project referred to as “Deep Patient” is using AI to comb through mountains of health data, uncovering new insights and connections to predict disease risk. More specifically, Deep Patient uses deep learning, a machine-learning approach that loosely mimics the neural networks of the brain, allowing the computer system to learn new things without being explicitly programmed to do so.
A study about cats offers an easier way to understand deep learning, according to Joel Dudley, PhD, associate professor in the Department of Genetics and Genomic Sciences at Mount Sinai and director of the school’s Institute for Next Generation Healthcare. In 2012, computer scientists at Google fed millions of YouTube images into a giant computerized neural network to see if it could learn what cats looked like without being given any information on cat characteristics. The network achieved nearly 75% accuracy in identifying cats. Now, Dudley wants to bring that kind of learning to medical practice.
With Deep Patient, scientists fed deidentified data from 700,000 EHRs into a computer neural network, which randomly combined data to make and test new variables for disease risk. The hope was that the machine could train itself to understand all the data in a way that facilitated predictive modeling.
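The kind of self-training described above can be illustrated with an autoencoder: a network that learns a compressed representation of its input by trying to reconstruct it. The published Deep Patient work used stacked denoising autoencoders; the sketch below is a deliberately simplified, single-layer version on invented toy data (random binary “patient features” standing in for real EHR variables), using only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for EHR data: 200 "patients" x 30 binary clinical features.
# (Invented data for illustration -- real EHR inputs are far richer.)
X = (rng.random((200, 30)) < 0.2).astype(float)

n_in, n_hidden = X.shape[1], 10
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in))
b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(Xb):
    H = sigmoid(Xb @ W1 + b1)    # compressed "patient representation"
    Xhat = sigmoid(H @ W2 + b2)  # reconstruction of the input features
    return H, Xhat

def loss(X, Xhat):
    return np.mean((X - Xhat) ** 2)

lr = 1.0
_, Xhat0 = forward(X)
initial = loss(X, Xhat0)

for _ in range(500):
    # Denoising step: randomly zero out ~10% of inputs and train the
    # network to recover the original, uncorrupted features.
    mask = (rng.random(X.shape) > 0.1).astype(float)
    Xc = X * mask
    H, Xhat = forward(Xc)
    # Backpropagation of squared-error loss through both sigmoid layers.
    dXhat = (Xhat - X) * Xhat * (1 - Xhat)
    dW2 = H.T @ dXhat / len(X)
    db2 = dXhat.mean(axis=0)
    dH = dXhat @ W2.T * H * (1 - H)
    dW1 = Xc.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, Xhat1 = forward(X)
final = loss(X, Xhat1)
print(f"reconstruction error: {initial:.3f} -> {final:.3f}")
```

Once trained, the hidden layer `H` is the learned patient representation; in the Deep Patient study, it was representations like this, rather than raw EHR values, that fed the downstream risk-prediction models.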
In a 2016 study published in Scientific Reports, Dudley and colleagues evaluated the newly trained Deep Patient using more than 76,000 test patients who had 78 diseases. They found that Deep Patient “significantly outperformed” evaluations based only on raw EHR data, doing particularly well at predicting severe diabetes, schizophrenia, and various cancers.
“We’re at a point right now where doctors can’t keep up with everything they need to know. We need a machine to help us do that.”
Kevin Hughes, MD
Massachusetts General Hospital
The ultimate goal, Dudley says, is that a system like Deep Patient would replace the traditional EHR. “I’m not as interested in dragging 2017 medicine into the future as I am in building a new system that replaces [EHRs],” he said. “There’s really fertile ground right now for a pretty big shift in health care.”
Deep learning unlocks big data
A growing body of research on AI and medicine underscores its transformative potential. For example, in a study published in August 2017 in Radiology, researchers used more than 1,000 deidentified patient X-rays to train a deep-learning network to detect tuberculosis. The network achieved close to 100% accuracy, which could be especially promising in areas with shortages of radiologists, according to researchers. Another study, published in April 2017 in PLOS ONE, found that machine-learning algorithms improved the accuracy of cardiovascular risk prediction.
“We’re at a point right now where doctors can’t keep up with everything they need to know,” says Kevin Hughes, MD, codirector of the Avon Comprehensive Breast Evaluation Center at Massachusetts General Hospital. “We need a machine to help us do that.”
Hughes is collaborating on a project to improve cancer care with machine learning. Led by Regina Barzilay, PhD, a scientist at the Massachusetts Institute of Technology, the effort uses natural language processing to teach computers how to read and interpret EHR data, including nonstandardized parts known as “free text.” In other words, algorithms learn to read breast pathology reports and accurately identify if cancer is present.
That’s a big deal, according to Hughes, because it means researchers and clinicians can use AI to sort and identify massive amounts of relevant pathology data that were once intelligible only to humans—and that holds the potential for revealing new insights into cancer prevention, detection, and treatment. Hughes adds that right now, only about 3% of cancer patients participate in clinical trials, which means much of the field’s therapeutic recommendations are based on a small cohort of patients. With deep learning, however, researchers can better leverage and learn from data on the other 97%.
“Right now, we have all the data in one place, but it still has to be interpreted by a human,” Hughes says. “This is the next step beyond that.”
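The pathology-report reading described above relies on natural language processing. A much simpler stand-in for that idea is a bag-of-words classifier: turn each report into word counts and learn which words signal malignancy. The snippets below are invented toy examples (real reports are longer and messier), and the model here is plain logistic regression, not the MIT team's actual system.

```python
import numpy as np

# Invented toy "pathology report" snippets, labeled 1 = cancer present.
reports = [
    ("invasive ductal carcinoma identified in specimen", 1),
    ("ductal carcinoma in situ present at margin", 1),
    ("benign fibroadenoma no malignancy identified", 0),
    ("no evidence of carcinoma benign breast tissue", 0),
    ("atypical ductal hyperplasia no carcinoma seen", 0),
    ("metastatic carcinoma involving lymph node", 1),
]

# Bag-of-words vocabulary built from the training snippets.
vocab = sorted({w for text, _ in reports for w in text.split()})
idx = {w: i for i, w in enumerate(vocab)}

def vectorize(text):
    """Map free text to a vector of word counts over the vocabulary."""
    v = np.zeros(len(vocab))
    for word in text.split():
        if word in idx:
            v[idx[word]] += 1
    return v

X = np.array([vectorize(text) for text, _ in reports])
y = np.array([label for _, label in reports], dtype=float)

# Logistic regression trained by plain gradient descent.
w = np.zeros(len(vocab))
b = 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

def predict(text):
    p = 1.0 / (1.0 + np.exp(-(vectorize(text) @ w + b)))
    return "cancer" if p > 0.5 else "no cancer"
```

Note how the model must learn that “carcinoma” alone isn’t decisive: negations like “no evidence of carcinoma” flip the meaning, which is exactly the kind of free-text subtlety that makes real clinical NLP hard.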
Future of medicine has arrived
Today, about 7,000 clinicians from 500 institutions worldwide are using and building a diagnostic and management tool that uses AI to glean useful information from the world’s collective medical knowledge.
Known as the Human Diagnosis Project, or Human Dx, the online tool lets physicians ask the project’s community a clinical question and upload any supporting information. Shortly after, they receive a report of aggregated and prioritized responses.
“We’re crowdsourcing clinician insight and intelligence,” says Seiji Hayashi, MD, MPH, director of medicine at Human Dx and a family physician at Mary’s Center, a community health center in Washington, D.C. “In a sense, we already do this—if I’m in the clinic and I’m stumped, I go next door and talk with a colleague. Human Dx lets us do that simpler, faster, and without being constrained by geography.”
“I’m not as interested in dragging 2017 medicine into the future as I am in building a new system that replaces it. There’s really fertile ground right now for a pretty big shift in health care.”
Joel Dudley, PhD
Icahn School of Medicine at Mount Sinai
Over the past several years, the AAMC has been working with health systems to improve communication and coordination between providers through Project CORE (Coordinating Optimal Referral Experiences). Project CORE is working with Human Dx to promote the use of high-quality eConsults to support high-value care, while providing clinical insights to the AI system. Currently, a primary care physician uses the eConsult system to discuss a particular patient case with a specialist rather than referring the patient for an in-person visit. Under the Project CORE–Human Dx partnership, AI will eventually enable primary care physicians to send an eConsult and receive consensus from multiple specialists simultaneously.
In addition to its clinical benefits, Human Dx is an educational gold mine. Every morning, Human Dx sends out its Global Morning Report (GMR), a collection of teaching scenarios based on real-life situations. To date, Hayashi says, about 40% of internal medicine residency programs in the United States subscribe to GMR.
“We think that it’s probably the only system out there that can actually track clinical reasoning,” Hayashi says.
Of course, AI in medicine raises its own questions. A big one is that scientists don’t yet understand exactly how deep learning does what it does.
“The problem is we don’t know why it’s predicting things so well—it just is,” says Dudley, who oversees the Deep Patient project. “The challenge with deep learning in general is that it’s hard to peek inside the box.”
But Dudley doesn’t think that should keep scientists from using deep learning in research or clinical settings. In fact, he predicts that the largest health care company of the future will be focused on data and won’t own any hospitals—a situation similar to Uber, the world’s largest taxi company, which doesn’t actually own any vehicles.
“Health is defined as the absence of disease, and diseases are defined by different symptoms. But it always seemed strange to me that we can’t quantify health,” Dudley says. “Now, I think [AI] can help do that.”