Learning-based image reconstruction via parallel proximal algorithm

Abstract

In the past decade, sparsity-driven regularization has advanced image reconstruction algorithms. Traditionally, such regularizers rely on analytical models of sparsity [e.g., total variation (TV)]. More recent methods, however, are increasingly built on data-driven priors inspired by deep learning. In this letter, we propose to generalize TV regularization by replacing the ℓ1 penalty with a trainable alternative prior. Specifically, our method learns the prior by extending the recently proposed fast parallel proximal algorithm to incorporate data-adaptive proximal operators. The proposed framework requires no additional inner iterations to evaluate the proximal mappings of the learned prior. Moreover, our formulation ensures that training and reconstruction share the same algorithmic structure, making the end-to-end implementation intuitive. As an example, we demonstrate our algorithm on image deconvolution in fluorescence microscopy.
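To make the idea concrete, the following is a minimal sketch of the general pattern the abstract describes: an iterative proximal reconstruction in which the proximal (shrinkage) step is the slot a trained prior would occupy. This is an illustrative ISTA-style loop for 1-D deconvolution, not the authors' exact algorithm; the names `soft_threshold` and `deconv_ista` and all parameter values are ours for illustration, and the classical soft-thresholding stands in for the learned, data-adaptive proximal operator.

```python
import numpy as np

def soft_threshold(v, tau):
    # Placeholder proximal operator: classic soft-thresholding (the ℓ1 prox).
    # In the learned setting, this pointwise nonlinearity would be
    # parameterized and trained end-to-end instead of fixed analytically.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def deconv_ista(y, h, step=0.5, tau=0.01, n_iter=300):
    """ISTA-style reconstruction for 1-D deconvolution y = h * x + noise.

    Each iteration alternates a gradient step on the data-fidelity term
    0.5 * ||y - h * x||^2 with a proximal step on the (here: ℓ1) prior.
    """
    x = np.zeros_like(y)
    for _ in range(n_iter):
        # Gradient of the data-fidelity term: H^T (H x - y),
        # where H applies convolution with h ('same' mode, odd kernel).
        r = np.convolve(x, h, mode="same") - y
        grad = np.convolve(r, h[::-1], mode="same")
        # Proximal step: the stage a trained prior would replace.
        x = soft_threshold(x - step * grad, step * tau)
    return x
```

Because the same loop is run at training and at reconstruction time, learning the prior amounts to making the proximal step trainable while keeping this algorithmic structure fixed.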
| Original language | English |
|---|---|
| Pages (from-to) | 989-993 |
| Number of pages | 5 |
| Journal | IEEE Signal Processing Letters |
| Volume | 25 |
| Issue number | 7 |
| DOIs | |
| State | Published - Jul 2018 |
Keywords
- Image reconstruction
- inverse problems
- iterative shrinkage
- learning
- statistical modeling