Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence

Peter M. Maloca, Philipp L. Müller, Aaron Y. Lee, Adnan Tufail, Konstantinos Balaskas, Stephanie Niklaus, Pascal Kaiser, Susanne Suter, Javier Zarranz-Ventura, Catherine Egan, Hendrik P.N. Scholl, Tobias K. Schnitzer, Thomas Singer, Pascal W. Hasler, Nora Denk

Research output: Contribution to journal › Article › peer-review

23 Scopus citations

Abstract

Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and smart data visualization (‘neural recording’). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
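The pairwise comparison the abstract describes — Hamming distances among graders and the algorithm, reported as percentage disagreement — can be sketched as below. This is a minimal illustration, not the authors' implementation; the function name and the integer label encoding are assumptions for the example.

```python
import numpy as np

def hamming_distance_pct(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Per-pixel disagreement between two segmentation masks, in percent.

    Masks are integer arrays of identical shape where each pixel holds a
    compartment label (hypothetical encoding; the paper's exact label
    scheme is not specified here).
    """
    if mask_a.shape != mask_b.shape:
        raise ValueError("masks must have the same shape")
    # Fraction of pixels where the two labelings differ, scaled to percent
    return 100.0 * np.count_nonzero(mask_a != mask_b) / mask_a.size

# Two toy 4x4 segmentations differing in a single pixel: 1/16 = 6.25%
grader_1 = np.zeros((4, 4), dtype=np.uint8)
grader_2 = grader_1.copy()
grader_2[0, 0] = 1
print(hamming_distance_pct(grader_1, grader_2))  # 6.25
```

Averaging this measure over all grader–grader pairs versus all grader–algorithm pairs would yield summary figures analogous to the 2.02% and 1.75% variabilities reported in the abstract.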

Original language: English
Article number: 170
Journal: Communications Biology
Volume: 4
Issue number: 1
State: Published - Dec 2021
