TY - GEN
T1 - Learning numerical observers using unsupervised domain adaptation
AU - He, Shenghua
AU - Zhou, Weimin
AU - Li, Hua
AU - Anastasio, Mark A.
N1 - Funding Information:
This research was supported in part by NIH awards EB020604, EB023045, NS102213, EB028652, R01CA233873, R21CA223799, and NSF award DMS1614305.
Publisher Copyright:
© 2020 SPIE.
PY - 2020
Y1 - 2020
N2 - Medical imaging systems are commonly assessed by use of objective image quality measures. Supervised deep learning methods have been investigated to implement numerical observers for task-based image quality assessment. However, labeling large amounts of experimental data to train deep neural networks is tedious, expensive, and prone to subjective errors. Computer-simulated image data can potentially be employed to circumvent these issues; however, it is often difficult to computationally model complicated anatomical structures, noise sources, and the response of real-world imaging systems. Hence, simulated image data will generally possess physical and statistical differences from the experimental image data they seek to emulate. Within the context of machine learning, these differences between the two sets of images are referred to as domain shift. In this study, we propose and investigate the use of an adversarial domain adaptation method to mitigate the deleterious effects of domain shift between simulated and experimental image data for deep learning-based numerical observers (DL-NOs) that are trained on simulated images but applied to experimental ones. In the proposed method, a DL-NO will initially be trained on computer-simulated image data and subsequently adapted for use with experimental image data, without the need for any labeled experimental images. As a proof of concept, a binary signal detection task is considered. The success of this strategy as a function of the degree of domain shift present between the simulated and experimental image data is investigated.
AB - Medical imaging systems are commonly assessed by use of objective image quality measures. Supervised deep learning methods have been investigated to implement numerical observers for task-based image quality assessment. However, labeling large amounts of experimental data to train deep neural networks is tedious, expensive, and prone to subjective errors. Computer-simulated image data can potentially be employed to circumvent these issues; however, it is often difficult to computationally model complicated anatomical structures, noise sources, and the response of real-world imaging systems. Hence, simulated image data will generally possess physical and statistical differences from the experimental image data they seek to emulate. Within the context of machine learning, these differences between the two sets of images are referred to as domain shift. In this study, we propose and investigate the use of an adversarial domain adaptation method to mitigate the deleterious effects of domain shift between simulated and experimental image data for deep learning-based numerical observers (DL-NOs) that are trained on simulated images but applied to experimental ones. In the proposed method, a DL-NO will initially be trained on computer-simulated image data and subsequently adapted for use with experimental image data, without the need for any labeled experimental images. As a proof of concept, a binary signal detection task is considered. The success of this strategy as a function of the degree of domain shift present between the simulated and experimental image data is investigated.
KW - Adversarial learning
KW - Image quality assessment
KW - Numerical observers
KW - Unsupervised domain adaptation
UR - http://www.scopus.com/inward/record.url?scp=85085246817&partnerID=8YFLogxK
U2 - 10.1117/12.2549812
DO - 10.1117/12.2549812
M3 - Conference contribution
AN - SCOPUS:85085246817
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2020
A2 - Samuelson, Frank W.
A2 - Taylor-Phillips, Sian
PB - SPIE
T2 - Medical Imaging 2020: Image Perception, Observer Performance, and Technology Assessment
Y2 - 19 February 2020 through 20 February 2020
ER -