Anatomical context protects deep learning from adversarial perturbations in medical imaging

  • Yi Li
  • Huahong Zhang
  • Camilo Bermudez
  • Yifan Chen
  • Bennett A. Landman
  • Yevgeniy Vorobeychik
Research output: Contribution to journal › Article › peer-review

38 Scopus citations

Abstract

Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks are susceptible to small adversarial perturbations of the input image. We study the impact of such adversarial perturbations in medical image processing, where the goal is to predict an individual's age from a 3D MRI brain image. We consider two models: a conventional deep neural network, and a hybrid deep learning model that additionally uses features informed by anatomical context. We find that adding imperceptible noise to an image can introduce significant errors in predicted age, that a single perturbation can accomplish this even for large batches of images, and that the hybrid model is much more robust to adversarial perturbations than the conventional deep neural network. Our work highlights limitations of current deep learning techniques in clinical applications, and suggests a path forward.
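The single-image attack the abstract describes can be illustrated with a one-step FGSM-style perturbation, a standard gradient-sign technique and not necessarily the paper's exact method. The sketch below uses a toy linear age regressor (weights `w`, bias `b` are made-up stand-ins for a trained network) so the gradient of the squared-error loss with respect to the input is available in closed form:

```python
import numpy as np

def fgsm_perturb(x, w, b, y_target, eps):
    """One FGSM step for a linear regressor pred = w @ x + b.

    The gradient of the squared error (pred - y_target)^2 with respect
    to x is 2 * (pred - y_target) * w; stepping x along the sign of
    that gradient increases the loss as much as possible under an
    L-infinity budget of eps (the "imperceptible noise" bound).
    """
    pred = float(w @ x + b)
    grad = 2.0 * (pred - y_target) * w
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
d = 64                        # toy stand-in for flattened MRI voxels
w = rng.normal(size=d)        # hypothetical "age predictor" weights
b = 30.0
x = rng.normal(size=d)        # one clean input image
y_true = float(w @ x + b)     # clean prediction, taken as ground truth

# Nudge the target slightly so the loss gradient is nonzero, then attack.
x_adv = fgsm_perturb(x, w, b, y_true + 1e-3, eps=0.01)
err = abs(float(w @ x_adv + b) - y_true)
print(err)  # predicted-age error induced by a 0.01-bounded perturbation
```

Even with a per-voxel change of at most 0.01, the prediction shifts by roughly `eps * sum(|w|)`, which is why small input noise can produce large age errors; for a deep network the gradient would come from backpropagation instead of the closed form used here.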

Original language: English
Pages (from-to): 370-378
Number of pages: 9
Journal: Neurocomputing
Volume: 379
DOIs
State: Published - Feb 28 2020

Keywords

  • Adversarial deep learning
  • Medical image processing

