Observer study-based evaluation of TGAN architecture used to generate oncological PET images

Roberto Fedrigo, Fereshteh Yousefirizi, Ziping Liu, Abhinav K. Jha, Robert V. Bergen, Jean Francois Rajotte, Raymond T. Ng, Ingrid Bloise, Sara Harsini, Dan J. Kadrmas, Carlos Uribe, Arman Rahmim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

The application of computer-vision algorithms in medical imaging has increased rapidly in recent years. However, algorithm training is challenging due to limited sample sizes, a lack of labeled samples, and privacy concerns regarding data sharing. To address these issues, we previously developed (Bergen et al. 2022) a synthetic PET dataset for Head & Neck (H&N) cancer using the temporal generative adversarial network (TGAN) architecture and evaluated its performance for lesion segmentation and radiomics feature analysis in the synthesized images. In this work, a two-alternative forced-choice (2AFC) observer study was performed to quantitatively evaluate the ability of human observers to distinguish between real and synthesized oncological PET images. In the study, eight trained readers, including two board-certified nuclear medicine physicians, read 170 real/synthetic image pairs presented as 2D transaxial slices using a dedicated web app. For each image pair, the observer was asked to identify the "real" image and report their confidence level on a 5-point Likert scale. P-values were computed using the binomial test and the Wilcoxon signed-rank test. A heat map was used to compare the response accuracy distribution for the signed-rank test. Response accuracy across observers ranged from 36.2% [27.9-44.4] to 63.1% [54.8-71.3]. Six of the eight observers failed to identify the real image with statistical significance, indicating that the synthetic dataset was reasonably representative of oncological PET images. Overall, this study adds validity to the realism of our simulated H&N cancer dataset, which may be used in the future to train AI algorithms while favoring patient confidentiality and privacy protection.
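The per-observer significance test described in the abstract can be illustrated with a short sketch. Assuming each observer reads 170 independent pairs and chance performance is 50%, an exact two-sided binomial test can be computed with the Python standard library alone. The observer tallies below are hypothetical illustrations, not the study's actual per-observer counts:

```python
from math import comb

def binom_2afc_pvalue(n_pairs: int, n_correct: int) -> float:
    """Exact two-sided binomial test against chance (p = 0.5) for a
    2AFC study: the probability of a result at least as extreme as
    n_correct if the observer were guessing at random."""
    # By symmetry at p = 0.5, double the more extreme one-sided tail.
    tail_start = max(n_correct, n_pairs - n_correct)
    tail = sum(comb(n_pairs, i) for i in range(tail_start, n_pairs + 1))
    return min(1.0, 2 * tail / 2 ** n_pairs)

# Hypothetical tallies out of 170 pairs (illustrative only):
print(binom_2afc_pvalue(170, 90))   # ~53% accuracy: not significant
print(binom_2afc_pvalue(170, 107))  # ~63% accuracy: significant at alpha = 0.05
```

Under these assumptions, an observer performing near chance (e.g. 90/170 correct) yields a large p-value, consistent with the paper's finding that most observers could not reliably distinguish real from synthetic images.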

Original language: English
Title of host publication: Medical Imaging 2023
Subtitle of host publication: Image Perception, Observer Performance, and Technology Assessment
Editors: Claudia R. Mello-Thoms, Yan Chen
Publisher: SPIE
ISBN (Electronic): 9781510660397
State: Published - 2023
Event: Medical Imaging 2023: Image Perception, Observer Performance, and Technology Assessment - San Diego, United States
Duration: Feb 21 2023 – Feb 23 2023

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 12467
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2023: Image Perception, Observer Performance, and Technology Assessment
Country/Territory: United States
City: San Diego
Period: 02/21/23 – 02/23/23

Keywords

  • image perception
  • image quality
  • neural networks
  • observers
  • oncology
  • PET
