5 Q’s for Mihaela van der Schaar, Professor of Machine Learning, AI, and Medicine at the University of Cambridge

by Hodan Omaar

The Center for Data Innovation spoke with Dr. Mihaela van der Schaar, Professor of Machine Learning, AI and Medicine at the University of Cambridge and a fellow at The Alan Turing Institute in London, where she leads the effort on data science and machine learning for personalized medicine. Van der Schaar discussed the challenges and opportunities for machine learning in medicine and her Cambridge team’s recent partnership with NHS Digital and Public Health England to fight COVID-19.

This interview has been lightly edited. 

Hodan Omaar: Problems in medicine, such as prognosis or disease trajectory, do not seem to be as well-posed as other problems machine learning (ML) has had great success in solving, like spam detection. What challenges does this bring, where are the opportunities for AI and ML in healthcare, and how did you get involved in this field?

Mihaela van der Schaar: I have worked in machine learning for the past 17 years. My NSF CAREER Award (an early-career award for U.S. academics) was about developing new multi-agent learning methods in scenarios where agents are strategic, have limited information, and compete for scarce resources. At the time I received this award, I was probably the only researcher working on multi-agent reinforcement learning in strategic, competitive environments. Now, 17 years later, this is finally becoming a “hot” area to work on. However, I have moved on. Over the past few years I have worked primarily on machine learning for medicine.

Machine learning has, of course, already achieved very impressive results in areas where problems are easily stated and solutions are well-defined and easily verifiable. These areas include not only spam detection, but also computer vision and image recognition (e.g. recognizing pictures of cats vs. dogs), playing games (e.g. AlphaGo), teaching robots how to act (e.g. imitation learning), etc. Unfortunately, in medicine the problems are not well-posed, the solutions are often not well-defined, and they aren’t easy to verify. Furthermore, we are unable to go into the wild and collect new data at will.  

This is what makes medicine such a complex challenge, but also the most exciting area for anyone who is really interested in exploring the boundaries of machine learning. We’ve left the map and have had to discover entirely new ways to navigate, and the great thing is, these new methods we’ve developed can be brought back and shared for others to apply.

Another exciting aspect about machine learning in medicine is that we are given real-world problems to formalize and solve. Not only that, the solutions are ones that are societally important and potentially impact us all: just think of COVID-19!

Omaar: Your Cambridge team recently partnered with NHS Digital and Public Health England to tackle the real challenge of COVID-19: managing limited healthcare resources. How is your team using machine learning to help hospitals cope with COVID-19 and what types of data has the NHS provided for your algorithms?

Van der Schaar: We have adapted our state-of-the-art predictive analytics system, Cambridge Adjutorium, to provide hospital managers with forward guidance about usage of scarce resources like ventilators and ICU beds. This is important in ensuring that the NHS can withstand a potential increase in COVID-19 cases, as well as allowing hospitals to eventually start to accept patients with serious medical conditions unrelated to COVID-19.

Cambridge Adjutorium was initially used as a predictive tool for cardiovascular diseases, cystic fibrosis, and breast cancer, but is essentially adaptable to any disease for which usable data can be obtained. In the case of COVID-19, we trained Cambridge Adjutorium using depersonalized patient data from Public Health England’s CHESS dataset. We have recently been given access to additional intensive care datasets which have enabled us to further improve the system’s predictive accuracy.
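
To make the shape of this concrete: Adjutorium itself is a proprietary, AutoML-driven system, so the sketch below is only an illustration of the general pattern it describes, training a risk model on depersonalized patient features and then aggregating per-patient risks into a forward demand estimate. The feature names, synthetic data, and choice of scikit-learn model are all assumptions for illustration, not the actual system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for depersonalized admissions data; the three
# features (age, comorbidity count, oxygen saturation) are illustrative.
n = 2000
X = np.column_stack([
    rng.normal(60, 15, n),   # age in years
    rng.poisson(1.5, n),     # number of comorbidities
    rng.normal(94, 4, n),    # SpO2 at admission
])
# Toy outcome: older, sicker, more hypoxic patients need ICU care more often.
logit = 0.04 * (X[:, 0] - 60) + 0.5 * X[:, 1] - 0.2 * (X[:, 2] - 94)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Per-patient risk scores, which a hospital can sum into a forward
# estimate of how many ICU beds or ventilators it will need.
risk = model.predict_proba(X_te)[:, 1]
print("AUROC:", round(roc_auc_score(y_te, risk), 3))
print("expected ICU demand in this cohort:", round(risk.sum(), 1))
```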

I should also note that, in addition to NHS Digital and Public Health England, we are in discussions with teams from several other countries about implementing this system. This is entirely possible as the system can be applied across a variety of contexts and will even stay current as the nature of COVID-19 itself potentially changes over time.

These kinds of applications of AI and machine learning are based on proven methodologies and make good use of the wealth of data available across healthcare systems. Unlike one-off solutions specifically addressing the current pandemic, a system like Cambridge Adjutorium can be used within healthcare systems on an ongoing basis. Our hope is that the current pandemic can at least drive positive changes that improve the robustness and digital capabilities of healthcare systems going forward.

You can find out more about our partnership on our site (or from the NHS press release). We also have a short video featuring a clinical collaborator and some of our lab members who created the system.

Omaar: How is your lab, and the ML community more generally, developing solutions that demonstrate to the non-ML community that models can be trained on health data while preserving the privacy of patients?

Van der Schaar: This is a twofold challenge. First, we must develop methods that are fully capable of ensuring that the privacy of provided datasets is not compromised. Second, we need to be able to communicate their efficacy convincingly to numerous groups of stakeholders, including the general public.

We are coming close to solving the first challenge. The most common way to mitigate the risk of sharing sensitive data is to de-identify it, but it is by now well known that de-identified records can easily be re-identified by linking them to other identifiable datasets. It is our belief that to keep data safe, we should use differential privacy, a notion of privacy that is immune to post-processing or manipulation and that makes no assumptions about any auxiliary data that may (publicly) exist about a given individual. To address these challenges, my team has been developing frameworks for generating “synthetic” data that closely resembles the original healthcare data while ensuring differential privacy guarantees. In this way, datasets can be made publicly available for research while considerably lowering the risk of breaching patient confidentiality.
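
To illustrate the guarantee itself rather than our synthetic-data framework, here is the textbook Laplace mechanism: a counting query has sensitivity one, so adding Laplace noise scaled to 1/ε makes the released count almost equally likely whether or not any single patient's record is present. The toy records and epsilon value are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    Adding or removing one record changes the true count by at most 1
    (sensitivity 1), so Laplace(1/epsilon) noise masks any individual.
    """
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Toy "patient records": (age, has_condition); values are made up.
records = [(34, True), (61, False), (45, True), (70, True), (52, False)]

# Smaller epsilon means stronger privacy and noisier answers.
print(f"noisy count: {private_count(records, lambda r: r[1], epsilon=0.5):.2f}")
```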

Addressing the second challenge, however, is something that must be tackled in a distributed, grass-roots manner by the machine learning community at large. Realistically, the only viable approach is frequent and close collaboration across the board with patients, with clinical counterparts, and with stakeholders throughout the medical community. This is something our lab is already doing, but for us, it is a case of “being the change you want to see.” 

Omaar: In a recent Turing talk, you explained that different users of ML models—ranging from clinicians to medical researchers to patients—want different types of interpretations from these models. Can you explain the difference between ML interpretability and explainability in the medical context? What has your research lab been doing to ensure that users can trust and understand the recommendations made by ML models?

Van der Schaar: The problem we face is neatly summarized in a 2018 editorial from The Lancet, “Machine learning is frequently referred to as a black box—data goes in, decisions come out, but the processes between input and output are opaque.” More specifically, we often do not know how machine learning models make predictions, how reliable those predictions are, or what we can learn from them. They lack interpretability.

Since our lab works extensively with clinicians, we’re keenly aware of the importance of interpretability. As a result, I think it’s somewhere we excel.

For any machine learning model we develop, we must ensure that the output is understandable and actionable by its users. These could be clinicians, researchers, or patients, and they all need different explanations about the information provided by machine learning models. A clinician may want to know why a specific treatment is recommended for the patient at hand; a researcher may need to use similar information to make a data-induced hypothesis; and a patient may want to use the same information to provide informed consent or make lifestyle improvements. This kind of tailored interpretability is what we call explainability. 

In our view, user-friendly interpretable models should do the following:

  • Ensure transparency: users need to understand how the model makes predictions.
  • Enable risk understanding: users need to understand, quantify, and manage risk.
  • Avoid implicit bias: users should be confident that the model won’t learn biases.
  • Support discovery: users need to distill insights and new knowledge from the model.

In addition, interpretable models should be trustworthy, which means users should have a good idea of how reliable they are.

Last year, my team made an initial breakthrough with the development of INVASE. INVASE is a new method that uses reinforcement learning (remember AlphaGo?) to examine black box machine learning models and work out why they make specific predictions for patients. It does this by using an actor-critic method, which simultaneously makes decisions and evaluates the effectiveness of those decisions. Specifically, the “actor” looks at recommendations made by a black box model and evaluates the importance of selected patient features (e.g. age, weight, blood pressure). The “critic” then assesses the effectiveness of the actor’s selections and compares the outcome to the original recommendations made by the black box model. This process is repeated until INVASE has determined which features are most important and reached a level of accuracy comparable to the original black box model. 

While there are other methods that also examine the importance of individual patient features, the unique thing about INVASE is that it can also determine the set of important features for each patient.

Building on our work with INVASE, we have made astonishing progress over the last year using a technique called symbolic metamodeling. This is an approach that takes black boxes and unpacks them into transparent equations. In essence, symbolic metamodeling replaces an accurate but opaque model with a similarly accurate and transparent model. 
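
As a toy stand-in for what metamodeling does (the published method searches a far richer family of symbolic expressions than the fixed-degree polynomial used here, and the data and black-box model below are invented for the example), one can fit a transparent equation directly to a black box's predictions and then read the recovered coefficients straight off:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# An opaque "black box" trained on a hidden relationship y = x^2 + 0.5x.
x = rng.uniform(-2, 2, size=(500, 1))
y = x[:, 0] ** 2 + 0.5 * x[:, 0] + rng.normal(0, 0.05, 500)
black_box = RandomForestRegressor(random_state=0).fit(x, y)

# Metamodeling step: fit a transparent equation to the black box's own
# predictions over a grid of inputs, replacing opacity with an equation.
grid = np.linspace(-2, 2, 200).reshape(-1, 1)
a, b, c = np.polyfit(grid[:, 0], black_box.predict(grid), deg=2)
print(f"metamodel: y ~= {a:.2f}x^2 + {b:.2f}x + {c:.2f}")
```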

In a sense, using such methods means we can now have our cake and eat it: we can keep the highly-accurate black box models that reach conclusions humans can’t, but at the same time we can now gain insight into how those conclusions were reached and repurpose that insight for different users with specific needs. 

Omaar: Looking to the future, in what ways do you envision machine learning will be used by clinicians and medical researchers? What benefits will this bring to patients?

Van der Schaar: It’s impossible to list all the areas of medicine where machine learning will make a transformative impact and there are probably many that we haven’t even thought of yet.

A few examples (really just the tip of the iceberg) come to mind:

  • Improving personalized diagnosis and prognosis.
  • Mapping personalized disease trajectories.
  • Determining personalized treatment courses.
  • Determining personalized screening policies.
  • Optimizing healthcare systems, making them more reliable and more effective.
  • Optimizing training for medical professionals.
  • Optimizing clinical trials.
  • Enabling new drug discoveries. 

Put more simply, the goal of my lab is to empower healthcare stakeholders (patients, clinicians, researchers, policymakers, and beyond) with actionable intelligence and reliable decision support.

I’d like to sign off by mentioning that we have fully funded Ph.D. studentships and post-doctoral positions available. We have an incredible team and our research is world-leading. If you are up for helping us discover new frontiers, visit our site to learn more, and get in touch.
