TY - JOUR
T1 - ICAM-Reg: Interpretable Classification and Regression With Feature Attribution for Mapping Neurological Phenotypes in Individual Scans
AU - Bass, Cher
AU - da Silva, Mariana
AU - Sudre, Carole
AU - Williams, Logan Z.J.
AU - Sousa, Helena S.
AU - Tudosiu, Petru Daniel
AU - Alfaro-Almagro, Fidel
AU - Fitzgibbon, Sean P.
AU - Glasser, Matthew F.
AU - Smith, Stephen M.
AU - Robinson, Emma C.
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2023/4/1
Y1 - 2023/4/1
AB - An important goal of medical imaging is to precisely detect patterns of disease specific to individual scans; however, this is challenged in brain imaging by the degree of heterogeneity of shape and appearance. Traditional methods based on image registration historically fail to detect variable features of disease, as they utilise population-based analyses suited primarily to studying group-average effects. In this paper we therefore take advantage of recent developments in generative deep learning to develop a method for simultaneous classification, or regression, and feature attribution (FA). Specifically, we explore the use of a VAE-GAN (variational autoencoder-generative adversarial network) for image-to-image translation, called ICAM, to explicitly disentangle class-relevant features from background confounds, for improved interpretability and regression of neurological phenotypes. We validate our method on the tasks of Mini-Mental State Examination (MMSE) cognitive test score prediction for the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort, as well as brain age prediction, for both neurodevelopment and neurodegeneration, using the developing Human Connectome Project (dHCP) and UK Biobank datasets. We show that the generated FA maps can be used to explain outlier predictions and demonstrate that the inclusion of a regression module improves the disentanglement of the latent space. Our code is freely available on GitHub: https://github.com/CherBass/ICAM.
KW - Brain imaging
KW - deep generative models
KW - feature attribution
KW - image-to-image translation
UR - http://www.scopus.com/inward/record.url?scp=85142816631&partnerID=8YFLogxK
U2 - 10.1109/TMI.2022.3221890
DO - 10.1109/TMI.2022.3221890
M3 - Article
C2 - 36374873
AN - SCOPUS:85142816631
SN - 0278-0062
VL - 42
SP - 959
EP - 970
JO - IEEE Transactions on Medical Imaging
JF - IEEE Transactions on Medical Imaging
IS - 4
ER -