TY - GEN
T1 - Self-explaining neural network with concept-based explanations for ICU mortality prediction
AU - Kumar, Sayantan
AU - Yu, Sean C.
AU - Kannampallil, Thomas
AU - Abrams, Zachary
AU - Michelson, Andrew
AU - Payne, Philip R.O.
N1 - Publisher Copyright:
© 2022 Owner/Author.
PY - 2022/8/7
Y1 - 2022/8/7
N2 - Complex deep learning models achieve high predictive performance on various clinical prediction tasks, but their inherent complexity makes it challenging to explain model predictions to clinicians and healthcare providers. Existing research on the explainability of deep learning models in healthcare has two major limitations: it relies on post-hoc explanations and uses raw clinical variables as the units of explanation, both of which are often difficult for humans to interpret. In this work, we designed a self-explaining deep learning framework that uses expert-knowledge-driven clinical concepts, or intermediate features, as the units of explanation. The self-explaining nature of our proposed model comes from generating both explanations and predictions within the same architectural framework via joint training. We tested our proposed approach on a publicly available Electronic Health Records (EHR) dataset for predicting patient mortality in the ICU. To analyze the performance-interpretability trade-off, we compared our proposed model with a baseline having the same set-up but without the explanation components. Experimental results suggest that adding explainability components to a deep learning framework does not impact prediction performance, and the explanations generated by the model can provide clinicians with insights into the possible reasons behind patient mortality.
AB - Complex deep learning models achieve high predictive performance on various clinical prediction tasks, but their inherent complexity makes it challenging to explain model predictions to clinicians and healthcare providers. Existing research on the explainability of deep learning models in healthcare has two major limitations: it relies on post-hoc explanations and uses raw clinical variables as the units of explanation, both of which are often difficult for humans to interpret. In this work, we designed a self-explaining deep learning framework that uses expert-knowledge-driven clinical concepts, or intermediate features, as the units of explanation. The self-explaining nature of our proposed model comes from generating both explanations and predictions within the same architectural framework via joint training. We tested our proposed approach on a publicly available Electronic Health Records (EHR) dataset for predicting patient mortality in the ICU. To analyze the performance-interpretability trade-off, we compared our proposed model with a baseline having the same set-up but without the explanation components. Experimental results suggest that adding explainability components to a deep learning framework does not impact prediction performance, and the explanations generated by the model can provide clinicians with insights into the possible reasons behind patient mortality.
KW - Deep learning
KW - Electronic health records
KW - Intermediate concepts
KW - Mortality prediction
KW - Self-explainable
UR - http://www.scopus.com/inward/record.url?scp=85137373998&partnerID=8YFLogxK
U2 - 10.1145/3535508.3545547
DO - 10.1145/3535508.3545547
M3 - Conference contribution
AN - SCOPUS:85137373998
T3 - Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB 2022
BT - Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB 2022
PB - Association for Computing Machinery, Inc
T2 - 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB 2022
Y2 - 7 August 2022 through 8 August 2022
ER -