Numer. Math. Theor. Meth. Appl., 11 (2018), pp. 187-210.
Published online: 2018-11
The primal-dual hybrid gradient (PDHG) method is a classic way to tackle saddle-point problems. However, its convergence is not guaranteed in general, and restrictions on the step-size parameters, e.g., $\tau\sigma \le 1/\|A^TA\|$, are usually imposed to guarantee convergence. In this paper, a new convergent method with no restriction on the parameters is proposed, so the expensive computation of $\|A^TA\|$ is avoided. Like other primal-dual methods, the new method produces a predictor, but in a parallel fashion, which has the potential to speed up the iteration. The predictor is then updated by a simple correction step to guarantee convergence. Moreover, the parameters are adjusted dynamically to enhance both the efficiency and the robustness of the method. The generated sequence converges monotonically to the solution set, and a worst-case $\mathcal{O}(1/t)$ convergence rate in the ergodic sense is established under mild assumptions. The numerical efficiency of the proposed method is verified by applications to the LASSO problem and the Steiner tree problem.
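For concreteness, the sketch below shows a standard PDHG iteration (in its Chambolle-Pock extrapolated form) applied to the LASSO problem $\min_x \lambda\|x\|_1 + \tfrac{1}{2}\|Ax-b\|^2$, with step sizes chosen to satisfy the conventional restriction $\tau\sigma\|A^TA\| \le 1$. It only illustrates the baseline that the proposed method improves upon; the parallel predictor, the correction step, and the dynamic parameter adjustment of the paper are not reproduced here, and all variable names are illustrative.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def pdhg_lasso(A, b, lam, n_iter=500):
    """Standard PDHG for the LASSO saddle-point reformulation
        min_x max_y  lam*||x||_1 + <Ax - b, y> - 0.5*||y||^2,
    equivalent to  min_x lam*||x||_1 + 0.5*||Ax - b||^2.
    Step sizes obey the conventional restriction tau*sigma*||A^T A|| <= 1,
    which requires estimating the spectral norm of A^T A."""
    m, n = A.shape
    x = np.zeros(n)
    y = np.zeros(m)
    x_bar = x.copy()

    # ||A||^2 = ||A^T A||: this is the quantity whose (possibly expensive)
    # computation the proposed parameter-free method avoids.
    L = np.linalg.norm(A, 2) ** 2
    tau = sigma = 0.99 / np.sqrt(L)       # ensures tau*sigma*||A^T A|| < 1

    for _ in range(n_iter):
        # Dual step: prox of the conjugate of 0.5*||. - b||^2.
        y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
        # Primal step: prox of lam*||.||_1 is soft-thresholding.
        x_new = soft_threshold(x - tau * (A.T @ y), tau * lam)
        # Extrapolation (theta = 1, Chambolle-Pock variant).
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# Toy usage: recover a sparse vector from noisy linear measurements.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = pdhg_lasso(A, b, lam=0.1)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```

The call `np.linalg.norm(A, 2)` computes exactly the spectral-norm estimate whose cost the proposed method is designed to avoid.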