Scalable Plug-and-Play ADMM with Convergence Guarantees

Yu Sun, Zihui Wu, Xiaojian Xu, Brendt Wohlberg, Ulugbek Kamilov

Research output: Contribution to journal › Article › peer-review

68 Scopus citations

Abstract

Plug-and-play priors (PnP) is a broadly applicable methodology for solving inverse problems by exploiting statistical priors specified as denoisers. Recent work has reported the state-of-the-art performance of PnP algorithms using pre-trained deep neural nets as denoisers in a number of imaging applications. However, current PnP algorithms are impractical in large-scale settings due to their heavy computational and memory requirements. This work addresses this issue by proposing an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to problems involving a large number of measurements. We theoretically analyze the convergence of the algorithm under a set of explicit assumptions, extending recent theoretical results in the area. Additionally, we show the effectiveness of our algorithm with nonsmooth data-fidelity terms and deep neural net priors, its fast convergence compared to existing PnP algorithms, and its scalability in terms of speed and memory.
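To make the idea in the abstract concrete, the following is a minimal sketch of an incremental PnP-ADMM loop, assuming a least-squares data fit split across blocks of measurements and a generic pretrained denoiser passed in as a callable. The names (A_blocks, y_blocks, denoise, gamma) and the random block selection are illustrative assumptions, not the paper's exact formulation or implementation.

```python
import numpy as np

def incremental_pnp_admm(A_blocks, y_blocks, denoise, gamma=1.0, iters=100):
    """Sketch of an incremental PnP-ADMM loop.

    A_blocks, y_blocks : lists of measurement matrices and data vectors,
                         one entry per block of measurements.
    denoise            : callable image -> image acting as the prior.
    gamma              : ADMM penalty parameter (illustrative default).
    """
    n = A_blocks[0].shape[1]
    x = np.zeros(n)   # primal variable updated with the data term
    z = np.zeros(n)   # primal variable updated by the denoiser
    u = np.zeros(n)   # scaled dual variable
    for k in range(iters):
        # Use a single measurement block per iteration instead of all data,
        # which is what makes the iteration cheap in memory and computation.
        i = np.random.randint(len(A_blocks))
        A, y = A_blocks[i], y_blocks[i]
        # Proximal step for the block's least-squares data-fidelity term:
        # argmin_x 0.5*||A x - y||^2 + (1/(2*gamma))*||x - (z - u)||^2
        v = z - u
        x = np.linalg.solve(A.T @ A + np.eye(n) / gamma, A.T @ y + v / gamma)
        # The denoiser plays the role of the proximal operator of the prior.
        z = denoise(x + u)
        # Scaled dual update.
        u = u + x - z
    return z
```

In the setting described by the paper, the denoise argument would be a pre-trained deep neural net denoiser; any image-to-image denoiser with a fixed noise level can be plugged in here.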

Original language: English
Article number: 9473005
Pages (from-to): 849-863
Number of pages: 15
Journal: IEEE Transactions on Computational Imaging
Volume: 7
DOIs
State: Published - 2021

Keywords

  • deep learning
  • plug-and-play priors
  • regularization parameter
  • regularized image reconstruction
