Breast density assessment via deep learning: Head-to-head model comparisons in full-field digital mammograms and synthetic mammograms

Krisha Anant, Juanita Hernandez Lopez, Sneha Das Gupta, Debbie L. Bennett, Aimilia Gastounioti

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

To enhance reproducibility and robustness in mammographic density assessment, various deep learning models have been proposed to automatically classify mammographic images into BI-RADS density categories. Our study aims to compare the performance of different deep learning models in classifying breast density from full-field digital mammography (FFDM) versus synthetic mammography (SM), the newer 2D mammographic image format acquired with digital breast tomosynthesis (DBT). We retrospectively analyzed negative (BI-RADS 1 or 2) routine screening mammography exams (Selenia or Selenia Dimensions; Hologic) acquired at sites within the Barnes-Jewish/Christian (BJC) Healthcare network in St. Louis, MO, from 2015 to 2018. Radiologists' BI-RADS breast density assessments were obtained from BJC's mammography reporting software (Magview 7.1). For each mammographic imaging modality, a balanced dataset of 4,000 women was selected so that the four BI-RADS density categories contained equal numbers of women and each woman had at least one mediolateral oblique (MLO) and one craniocaudal (CC) view per breast in that modality. Previously validated pre-processing steps were applied to all FFDM and SM images to standardize image orientation and intensity. Images were then split into training, validation, and test sets at ratios of 80%, 10%, and 10%, respectively, while maintaining the distribution of breast density categories and ensuring that all images of the same woman appeared in only one set. ResNet-50 and EfficientNet-B0 architectures were optimized, trained, and evaluated separately for each imaging modality. Overall, the models had comparable performance, though ResNet-50 performed slightly better in most cases. Furthermore, classification accuracy was higher for FFDM images than for SM images. Our preliminary findings suggest that further deep learning development and optimization may be needed as breast density deep learning models are extended to the newer mammographic imaging modality, DBT.
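The woman-level, density-stratified split described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the `women_to_density` mapping, and the fixed seed are all assumptions introduced for the example.

```python
import random
from collections import defaultdict

def split_by_woman(women_to_density, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split women into train/val/test sets so that each set preserves
    the BI-RADS density distribution and no woman appears in more than
    one set. `women_to_density` maps a woman ID to her density category.
    (Hypothetical helper; illustrative of the split described above.)
    """
    # Group women by density category so each category is split 80/10/10.
    by_density = defaultdict(list)
    for woman, density in women_to_density.items():
        by_density[density].append(woman)

    rng = random.Random(seed)
    train, val, test = set(), set(), set()
    for women in by_density.values():
        rng.shuffle(women)
        n_train = int(len(women) * ratios[0])
        n_val = int(len(women) * ratios[1])
        train.update(women[:n_train])
        val.update(women[n_train:n_train + n_val])
        test.update(women[n_train + n_val:])
    return train, val, test
```

Because the split is performed per density category over woman IDs (rather than over individual images), the resulting sets are stratified by density and grouped by woman, which is what prevents images of the same woman from leaking across sets.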

Original language: English
Title of host publication: Medical Imaging 2024
Subtitle of host publication: Computer-Aided Diagnosis
Editors: Weijie Chen, Susan M. Astley
Publisher: SPIE
ISBN (Electronic): 9781510671584
State: Published - 2024
Event: Medical Imaging 2024: Computer-Aided Diagnosis - San Diego, United States
Duration: Feb 19, 2024 to Feb 22, 2024

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 12927
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2024: Computer-Aided Diagnosis
Country/Territory: United States
City: San Diego
Period: 02/19/24 to 02/22/24

Keywords

  • artificial intelligence
  • deep learning
  • digital mammogram
  • mammographic density
  • synthetic mammogram
