TY - GEN
T1 - Image quality improvement in cone-beam CT using deep learning
AU - Lei, Yang
AU - Wang, Tonghe
AU - Harms, Joseph
AU - Shafai-Erfani, Ghazal
AU - Dong, Xue
AU - Zhou, Jun
AU - Patel, Pretesh
AU - Tang, Xiangyang
AU - Liu, Tian
AU - Curran, Walter J.
AU - Higgins, Kristin
AU - Yang, Xiaofeng
N1 - Publisher Copyright:
© SPIE. Downloading of the abstract is permitted for personal use only.
PY - 2019
Y1 - 2019
N2 - We propose a deep learning method to generate corrected CBCT (CCBCT) images with the goal of improving the image quality and clinical utility of on-board CBCT. The proposed method integrates a residual block concept into a cycle-consistent generative adversarial network (cycle-GAN) framework, named Res-cycle GAN in this study. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which further constrains the learning model. A fully convolutional network (FCN) with residual blocks is used in the generator to enable end-to-end transformation. An FCN is used in the discriminator to distinguish the planning CT (ground truth) from the corrected CBCT (CCBCT) produced by the generator. The proposed algorithm was evaluated using 12 sets of patient data with CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) index and spatial non-uniformity (SNU) in selected regions of interest (ROIs) were used to quantify the correction accuracy of the proposed algorithm. Overall, the MAE, PSNR, NCC and SNU were 20.8±3.4 HU, 32.8±1.5 dB, 0.986±0.004 and 1.7±3.6%, respectively. We have developed a novel deep learning-based method to generate CCBCT with high image quality. The proposed method increases on-board CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could lead to quantitative adaptive radiotherapy.
AB - We propose a deep learning method to generate corrected CBCT (CCBCT) images with the goal of improving the image quality and clinical utility of on-board CBCT. The proposed method integrates a residual block concept into a cycle-consistent generative adversarial network (cycle-GAN) framework, named Res-cycle GAN in this study. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which further constrains the learning model. A fully convolutional network (FCN) with residual blocks is used in the generator to enable end-to-end transformation. An FCN is used in the discriminator to distinguish the planning CT (ground truth) from the corrected CBCT (CCBCT) produced by the generator. The proposed algorithm was evaluated using 12 sets of patient data with CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) index and spatial non-uniformity (SNU) in selected regions of interest (ROIs) were used to quantify the correction accuracy of the proposed algorithm. Overall, the MAE, PSNR, NCC and SNU were 20.8±3.4 HU, 32.8±1.5 dB, 0.986±0.004 and 1.7±3.6%, respectively. We have developed a novel deep learning-based method to generate CCBCT with high image quality. The proposed method increases on-board CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could lead to quantitative adaptive radiotherapy.
KW - Artifact correction
KW - Cone-beam CT
KW - Cycle consistent adversarial network
KW - Residual network
UR - https://www.scopus.com/pages/publications/85068370012
U2 - 10.1117/12.2512545
DO - 10.1117/12.2512545
M3 - Conference contribution
AN - SCOPUS:85068370012
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2019
A2 - Schmidt, Taly Gilat
A2 - Chen, Guang-Hong
A2 - Bosmans, Hilde
PB - SPIE
T2 - Medical Imaging 2019: Physics of Medical Imaging
Y2 - 17 February 2019 through 20 February 2019
ER -