
Nvidia uses federated learning to create medical imaging AI

Image Credit: Khari Johnson / VentureBeat



AI researchers from Nvidia and King’s College London have used federated learning to train a neural network for brain tumor segmentation, a milestone Nvidia claims is a first for medical image analysis. The technique allows hospitals and research institutions to collaborate on training models while keeping patient data private.

Federated learning is a machine learning technique that, in a client-server setup, removes the need to pool data into a single data lake for training. Instead, models are trained locally at each participating site, and only the resulting model updates are sent to a central server, which combines them into a shared model.
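A minimal sketch of that round-trip, using federated averaging on a toy linear model, looks like the following. The function names and toy data are illustrative assumptions, not Nvidia's implementation:

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """Local training at one hospital: refine the shared weights on
    private data and return only the weight delta, never the data."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w = w - lr * grad
    return w - global_weights

def federated_round(global_weights, datasets):
    """One round of federated averaging across participating sites."""
    updates = [local_update(global_weights, X, y) for X, y in datasets]
    return global_weights + np.mean(updates, axis=0)

# Toy data standing in for three hospitals' private scans and labels
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    datasets.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, datasets)
print(w)  # approaches true_w without the data ever leaving the "hospitals"
```

Each site's raw data stays where it is; only the weight deltas travel to the server.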

“You need to get to these innovations, and I believe there’s kind of two ways. One, which we released last August, is create the best generalizable model that you have today and just send it to each one of these hospitals, where they can localize it for their own patients,” Nvidia director of healthcare Abdul Halabi told VentureBeat in a phone interview. “The other one is to say: ‘Let’s fight together from the beginning, build this robust model [or generalizable model] as much as we can.’ And I think this research shows that it’s possible to actually do that here. It’s possible for you to achieve a high-quality model without really bringing the data all together, which is why it’s really exciting.”

The model uses a data set from the BraTS (Multimodal Brain Tumor Segmentation) Challenge of 285 patients with brain tumors. The work will be presented at the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) conference that begins today in Shenzhen, China.


“Most of the experiments that have been done [are] on synthetic data, or just randomize data sets,” Halabi told VentureBeat. “But applying this was experimenting with real hospital data, using the BraTS challenge — to my knowledge, there’s no work out there that goes into the privacy direction.”

The potential impact of machine learning in healthcare is already evident: some computer vision systems have been shown to outperform human radiology experts at specific tasks. But diverse data sets spanning hundreds of thousands of cases aren't always available, because of strict privacy requirements in healthcare.

That’s why many researchers in the field have used synthetic data sets or those compiled for challenges, Nvidia senior researcher Nicola Rieke explained.

“So we’re saying that this research is really an important step toward the deployment of secure federated learning, and we hope that it will enable data-driven precision at [a] very large scale,” she said.

The work explores the limits of differential privacy, a technique that adds noise to the model updates shared during training in order to make federated learning more secure. Research has shown that without differential privacy, neural networks can still leak insights about the underlying data they were trained on.

Apple and Google apply the same technique to federated learning for keyboard customization models on Android and iOS devices. Rieke said federated learning for medical image analysis comes with its own set of challenges, like the size of 3D medical images and the need for much more compute power.

“We do this by injecting noise [into] each participating node, and this way it stores the updates and limits the granularity of the information that we actually share among the institutions,” Rieke said.
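A standard way to realize this is the Gaussian mechanism: clip each site's update to bound any single contribution, then add calibrated noise before sharing it. The sketch below illustrates that general recipe; the function name and parameters are assumptions for illustration, not the exact scheme from the paper:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Illustrative Gaussian-mechanism step: bound the update's norm,
    then add noise so the shared update reveals less about local data."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```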

The work Nvidia and King’s College are conducting looks specifically at whether an accurate global model can still be trained when portions of each participant’s model updates are purposely withheld.

“If you only see, let’s say 50% or 60% of the model updates, can we still combine the contributions in the way that the global model converges? And we found out ‘Yes, we can.’ It’s actually quite impressive. So it’s even possible to aggregate the model in a way if you only share 10% of the model,” she said. “So it’s possible to only share 40% of the model and we still reach the same accuracy or the same performance as if the model were to be trained on the pooled data.”
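The partial sharing Rieke describes can be pictured as each site transmitting only a fraction of its update per round. One common way to do this, used here purely as an illustration rather than as the paper's exact method, is to keep the largest-magnitude changes and zero out the rest:

```python
import numpy as np

def partial_share(update, fraction=0.4):
    """Keep only the top `fraction` of the update's entries by magnitude,
    so each site exposes a limited slice of its model changes per round."""
    flat = update.ravel().copy()
    k = max(1, int(fraction * flat.size))
    keep = np.argsort(np.abs(flat))[-k:]   # indices of the largest changes
    shared = np.zeros_like(flat)
    shared[keep] = flat[keep]
    return shared.reshape(update.shape)
```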

In other medical imaging news, the American College of Radiology (ACR) Data Science Institute said this spring it will incorporate Nvidia’s Clara AI toolkit into its platform.

