Semi-supervised semantic segmentation of prostate and organs-at-risk on 3D pelvic CT images

Zhuangzhuang Zhang, Tianyu Zhao, Hiram Gay, Weixiong Zhang, Baozhou Sun

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

The recent development of deep learning approaches has revolutionized medical data processing, including semantic segmentation, by dramatically improving performance. Automated segmentation can assist radiotherapy treatment planning by reducing manual contouring effort and intra-observer and inter-observer variations. However, training effective deep learning models usually requires a large amount of high-quality labeled data, which is often costly to collect. We developed a novel semi-supervised adversarial deep learning approach for 3D pelvic CT image semantic segmentation. Unlike supervised deep learning methods, the new approach can utilize both annotated and un-annotated data for training. It generates un-annotated synthetic data through a data augmentation scheme based on generative adversarial networks (GANs). We applied the new approach to segmenting multiple organs in male pelvic CT images. CT images without annotations and GAN-synthesized un-annotated images were used in semi-supervised learning. Experimental results, evaluated by three metrics (Dice similarity coefficient, average Hausdorff distance, and average surface Hausdorff distance), showed that the new method achieved comparable performance with substantially fewer annotated images, or better performance with the same amount of annotated data, outperforming existing state-of-the-art methods.
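Of the three evaluation metrics the abstract names, the Dice similarity coefficient is the most commonly reported for segmentation. As an illustration only (this helper is not taken from the paper), a minimal Dice computation for binary masks might look like:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient for two binary segmentation masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|).
    `eps` guards against division by zero when both masks are empty.
    This is an illustrative sketch, not the paper's implementation.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Example: two overlapping masks on a 4x4 grid
a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True  # 4 foreground voxels
b = np.zeros((4, 4), dtype=bool); b[:2, :] = True   # 8 foreground voxels
# intersection = 4, so Dice = 2*4 / (4+8) ≈ 0.667
```

The same formula extends directly to 3D volumes, since the masks are flattened by the element-wise operations regardless of dimensionality.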

Original language: English
Article number: 065023
Journal: Biomedical Physics and Engineering Express
Volume: 7
Issue number: 6
State: Published - Nov 2021

Keywords

  • deep learning
  • generative adversarial networks
  • multi-organ segmentation

