Online First
Convergence of a Generalized Primal-Dual Algorithm with an Improved Condition for Saddle Point Problems
Fan Jiang, Yueying Luo, Xingju Cai and Tanxing Wang

Numer. Math. Theor. Meth. Appl. DOI: 10.4208/nmtma.OA-2024-0105

Publication Date : 2025-02-20

  • Abstract

We consider a general convex-concave saddle point problem that frequently arises in large-scale image processing. First-order primal-dual algorithms have garnered significant attention due to their promising results in solving saddle point problems. Notably, these algorithms exhibit improved performance with larger step sizes. In a recent series of articles, the upper bound on the step sizes has been increased, thereby relaxing the condition that guarantees convergence. This paper analyzes the generalized primal-dual method proposed in [B. He, F. Ma, S. Xu, X. Yuan, SIAM J. Imaging Sci. 15 (2022)] and introduces an improved condition to ensure its convergence. This enhanced condition also encompasses the optimal upper bound of the step sizes in the primal-dual hybrid gradient method. We establish both the global convergence of the iterates and the ergodic $\mathcal{O}(1/N)$ convergence rate for the objective function value in the generalized primal-dual algorithm under the enhanced condition.
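To make the setting concrete, here is a minimal sketch of the classical primal-dual hybrid gradient (PDHG) iteration for $\min_x \max_y f(x) + \langle Kx, y\rangle - g(y)$, under the classical step-size condition $\tau\sigma\|K\|^2 < 1$ (the paper's contribution is a weaker condition for a generalized variant; that is not reproduced here). The instance $f(x) = \tfrac{1}{2}\|x-b\|^2$, $g(y) = \tfrac{1}{2}\|y\|^2$ is a hypothetical toy choice made so both proximal maps are closed-form and the saddle point solves $(I + K^\top K)x = b$.

```python
import numpy as np

# Toy saddle point problem: min_x max_y 0.5*||x-b||^2 + <Kx, y> - 0.5*||y||^2.
# K and b are arbitrary test data (assumptions, not from the paper).
rng = np.random.default_rng(0)
K = rng.standard_normal((3, 4))
b = rng.standard_normal(4)

L = np.linalg.norm(K, 2)       # spectral norm ||K||
tau = sigma = 0.9 / L          # classical condition: tau * sigma * ||K||^2 < 1

x = np.zeros(4)
y = np.zeros(3)
for _ in range(5000):
    # Primal step: prox of tau*f applied to a gradient-like step.
    x_new = (x - tau * K.T @ y + tau * b) / (1 + tau)
    # Dual step with extrapolated primal point 2*x_new - x; prox of sigma*g.
    y = (y + sigma * K @ (2 * x_new - x)) / (1 + sigma)
    x = x_new

# The saddle point in x solves (I + K^T K) x = b.
x_star = np.linalg.solve(np.eye(4) + K.T @ K, b)
print(np.allclose(x, x_star, atol=1e-6))
```

The extrapolation factor in the dual update (here fixed at the classical value of 1, i.e., the point $2x^{k+1} - x^k$) is precisely what the generalized method of He et al. parameterizes, and the analyzed condition couples that parameter with the admissible step sizes.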

  • Copyright

COPYRIGHT: © Global Science Press