Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy

Sven Olberg, Hao Zhang, William R. Kennedy, Jaehee Chun, Vivian Rodriguez, Imran Zoberi, Maria A. Thomas, Jin Sung Kim, Sasa Mutic, Olga L. Green, Justin C. Park

Research output: Contribution to journal › Article › peer-review


Abstract

Purpose: The superior soft-tissue contrast achieved using magnetic resonance imaging (MRI) compared to x-ray computed tomography (CT) has led to the popularization of MRI-guided radiation therapy (MR-IGRT), especially in recent years with the advent of first- and second-generation MRI-based therapy delivery systems for MR-IGRT. The expanding use of these systems is driving interest in MRI-only RT workflows in which MRI is the sole imaging modality used for treatment planning and dose calculations. To enable such a workflow, synthetic CT (sCT) data must be generated from a patient's MRI data so that dose calculations can be performed using the electron density information derived from CT images. In this study, we propose a novel deep spatial pyramid convolutional framework for the MRI-to-CT image-to-image translation task and compare its performance to the well-established U-Net architecture within a generative adversarial network (GAN) framework.

Methods: Our proposed framework applies atrous convolution in a method named atrous spatial pyramid pooling (ASPP) to significantly reduce the total number of parameters required to describe the model while effectively capturing rich, multi-scale structural information in a manner that is not possible in the conventional framework. The generative model consists of stacked encoders and decoders separated by the ASPP module, in which atrous convolution is applied at increasing rates in parallel to encode large-scale features. The performance of the proposed method is compared to that of the conventional GAN framework in terms of the time required to train the model and the image quality of the generated sCT, measured by the root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) as a function of training data set size. Dose calculations based on sCT data generated using the proposed architecture are also compared to clinical plans to evaluate the dosimetric accuracy of the method.

Results: Significant reductions in training time and improvements in image quality are observed at every training data set size when the proposed framework is adopted in place of the conventional framework. Over 1042 test images, values of 17.7 ± 4.3 HU, 0.9995 ± 0.0003, and 71.7 ± 2.3 are observed for the RMSE, SSIM, and PSNR metrics, respectively. Dose distributions calculated from sCT data generated by the proposed framework demonstrate passing rates of 98% or greater using the 3D gamma index with a 2%/2 mm criterion.

Conclusions: The deep spatial pyramid convolutional framework proposed here demonstrates improved performance compared to the conventional GAN framework that has been applied to the image-to-image translation task of sCT generation. Adopting the method is a first step toward an MRI-only RT workflow that enables widespread clinical applications for MR-IGRT, including online adaptive therapy.
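The core ASPP idea described in the Methods can be illustrated in a few lines. The following toy NumPy sketch (an illustration of the general technique only, not the authors' implementation, which operates on 2-D image features) applies the same small kernel at several dilation rates in parallel, showing how atrous convolution widens the receptive field at each rate without adding parameters:

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution: taps of `kernel` are spaced `rate`
    samples apart, so the receptive field grows with the rate while the
    number of kernel parameters stays fixed. Zero padding keeps the
    output the same length as the input."""
    k = len(kernel)
    eff = (k - 1) * rate + 1          # effective (dilated) kernel length
    pad = eff // 2
    xp = np.pad(x, pad)
    out = np.zeros(len(x), dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * rate]
    return out

def aspp1d(x, kernel, rates=(1, 2, 4, 8)):
    """Toy ASPP module: apply the same kernel at several dilation rates in
    parallel and stack the multi-scale responses."""
    return np.stack([atrous_conv1d(x, kernel, r) for r in rates])
```

In the paper's setting the parallel multi-scale responses would be combined and passed on to the decoder stack; the specific rates used here are illustrative placeholders.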

Original language: English
Pages (from-to): 4135-4147
Number of pages: 13
Journal: Medical physics
Volume: 46
Issue number: 9
State: Published - Sep 1 2019

Keywords

  • MRI
  • MRI-guided RT
  • MRI-only RT
  • machine learning
  • synthetic CT

