TY - JOUR
T1 - Rectal Cancer Treatment Management: Deep-Learning Neural Network Based on Photoacoustic Microscopy Image Outperforms Histogram-Feature-Based Classification
AU - Leng, Xiandong
AU - Amidi, Eghbal
AU - Kou, Sitai
AU - Cheema, Hassam
AU - Otegbeye, Ebunoluwa
AU - Chapman, William, Jr.
AU - Mutch, Matthew
AU - Zhu, Quing
N1 - Funding Information:
Research reported in this publication was partially supported by the Siteman Cancer Center and the Foundation for Barnes-Jewish Hospital, and by NIH R01 CA237664 and NCI T32CA009621.
Publisher Copyright:
Copyright © 2021 Leng, Amidi, Kou, Cheema, Otegbeye, Chapman, Mutch and Zhu.
PY - 2021/9/23
Y1 - 2021/9/23
AB - We have developed a novel photoacoustic microscopy/ultrasound (PAM/US) endoscope to image post-treatment rectal cancer for surgical management of residual tumor after radiation and chemotherapy. Paired with a deep-learning convolutional neural network (CNN), the PAM images accurately differentiated pathological complete responders (pCR) from incomplete responders. However, the role of CNNs compared with traditional histogram-feature-based classifiers needs further exploration. In this work, we compare the performance of CNN models with that of generalized linear models (GLMs) across 24 ex vivo specimens and 10 in vivo patient examinations. First-order statistical features were extracted from histograms of PAM and US images to train, validate, and test the GLMs, while PAM and US images were used directly to train, validate, and test the CNN models. The PAM-CNN model performed best, with an AUC of 0.96 (95% CI: 0.95-0.98), compared with the best PAM-GLM model, which used kurtosis and achieved an AUC of 0.82 (95% CI: 0.82-0.83). We also found that both CNNs and GLMs derived from photoacoustic data outperformed those using ultrasound alone. We conclude that a deep-learning neural network paired with photoacoustic images is the optimal analysis framework for determining the presence of residual cancer in the treated human rectum.
KW - machine learning
KW - photoacoustic imaging of rectal cancer
KW - rectal cancer
KW - regression analysis
KW - ultrasound imaging
UR - http://www.scopus.com/inward/record.url?scp=85116574891&partnerID=8YFLogxK
U2 - 10.3389/fonc.2021.715332
DO - 10.3389/fonc.2021.715332
M3 - Article
C2 - 34631543
AN - SCOPUS:85116574891
SN - 2234-943X
VL - 11
JO - Frontiers in Oncology
JF - Frontiers in Oncology
M1 - 715332
ER -