PA-NeRF, a neural radiance field model for 3D photoacoustic tomography reconstruction from limited Bscan data

Yun Zou, Yixiao Lin, Quing Zhu

Research output: Contribution to journal › Article › peer-review


Abstract

We introduce a novel deep-learning-based photoacoustic tomography method called Photoacoustic Tomography Neural Radiance Field (PA-NeRF) for reconstructing 3D volumetric PAT images from limited 2D Bscan data. In conventional 3D volumetric imaging, a 3D reconstruction requires transducer element data obtained from all directions. Our model employs a NeRF-based PAT 3D reconstruction method, which learns the relationship between transducer element positions and the corresponding 3D image. Compared with convolution-based deep-learning models, such as U-Net and TransUNet, PA-NeRF does not learn an interpolation process but instead draws on 3D photoacoustic imaging principles. Additionally, we introduce a forward loss that improves reconstruction quality. Both simulation and phantom studies validate the performance of PA-NeRF. Further, we apply the PA-NeRF model to clinical examples to demonstrate its feasibility. To the best of our knowledge, PA-NeRF is the first method in photoacoustic tomography to successfully reconstruct a 3D volume from sparse Bscan data.
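
To illustrate the core idea described in the abstract, the following is a minimal sketch in Python (PyTorch) of a NeRF-style network that maps a voxel coordinate together with a transducer element position to a photoacoustic amplitude. It is an illustrative assumption only, not the authors' published architecture: the class and parameter names (PANeRFSketch, num_freqs, hidden) are hypothetical, and the forward loss and training procedure are omitted.

# Minimal sketch of a NeRF-style network conditioned on transducer element
# position. All names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """Standard NeRF sinusoidal encoding applied to each input dimension."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device) * torch.pi
    angles = x.unsqueeze(-1) * freqs                    # (..., dims, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                    # (..., dims * 2 * num_freqs)


class PANeRFSketch(nn.Module):
    """Maps (voxel coordinate, transducer element position) to a PA amplitude."""

    def __init__(self, num_freqs: int = 6, hidden: int = 256):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 2 * (3 * 2 * num_freqs)                # encoded voxel xyz + element xyz
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                       # predicted photoacoustic amplitude
        )

    def forward(self, voxel_xyz: torch.Tensor, element_xyz: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [positional_encoding(voxel_xyz, self.num_freqs),
             positional_encoding(element_xyz, self.num_freqs)],
            dim=-1,
        )
        return self.mlp(feats)


# Example query: a batch of voxels "seen" from a single transducer element.
model = PANeRFSketch()
voxels = torch.rand(1024, 3)                            # normalized volume coordinates
element = torch.zeros(1024, 3)                          # one element position, repeated per voxel
pred = model(voxels, element)                           # (1024, 1) predicted amplitudes

Conditioning the network on the element position, rather than interpolating between Bscans, is what lets such a model be queried at element positions never acquired, which is the sparse-data setting the abstract targets.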

Original language: English
Pages (from-to): 1651-1667
Number of pages: 17
Journal: Biomedical Optics Express
Volume: 15
Issue number: 3
DOIs
State: Published - Mar 1 2024
