TY - JOUR
T1 - Scalable Plug-and-Play ADMM with Convergence Guarantees
AU - Sun, Yu
AU - Wu, Zihui
AU - Xu, Xiaojian
AU - Wohlberg, Brendt
AU - Kamilov, Ulugbek
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Plug-and-play priors (PnP) is a broadly applicable methodology for solving inverse problems by exploiting statistical priors specified as denoisers. Recent work has reported the state-of-the-art performance of PnP algorithms using pre-trained deep neural nets as denoisers in a number of imaging applications. However, current PnP algorithms are impractical in large-scale settings due to their heavy computational and memory requirements. This work addresses this issue by proposing an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to problems involving a large number of measurements. We theoretically analyze the convergence of the algorithm under a set of explicit assumptions, extending recent theoretical results in the area. Additionally, we show the effectiveness of our algorithm with nonsmooth data-fidelity terms and deep neural net priors, its fast convergence compared to existing PnP algorithms, and its scalability in terms of speed and memory.
AB - Plug-and-play priors (PnP) is a broadly applicable methodology for solving inverse problems by exploiting statistical priors specified as denoisers. Recent work has reported the state-of-the-art performance of PnP algorithms using pre-trained deep neural nets as denoisers in a number of imaging applications. However, current PnP algorithms are impractical in large-scale settings due to their heavy computational and memory requirements. This work addresses this issue by proposing an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to problems involving a large number of measurements. We theoretically analyze the convergence of the algorithm under a set of explicit assumptions, extending recent theoretical results in the area. Additionally, we show the effectiveness of our algorithm with nonsmooth data-fidelity terms and deep neural net priors, its fast convergence compared to existing PnP algorithms, and its scalability in terms of speed and memory.
KW - deep learning
KW - plug-and-play priors
KW - regularization parameter
KW - regularized image reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85112627812&partnerID=8YFLogxK
U2 - 10.1109/TCI.2021.3094062
DO - 10.1109/TCI.2021.3094062
M3 - Article
AN - SCOPUS:85112627812
SN - 2573-0436
VL - 7
SP - 849
EP - 863
JO - IEEE Transactions on Computational Imaging
JF - IEEE Transactions on Computational Imaging
M1 - 9473005
ER -