Numerical Mathematics: Theory, Methods and Applications, Vol. 11 (2018), Iss. 1 : pp. 187–210
Abstract
The primal-dual hybrid gradient (PDHG) method is a classic approach to saddle-point problems. However, its convergence is not guaranteed in general: restrictions on the step-size parameters, e.g., $\tau\sigma \le 1/\|A^\top A\|$, are imposed to guarantee convergence. In this paper, a new convergent method with no restriction on the parameters is proposed, so the expensive computation of $\|A^\top A\|$ is avoided. Like other primal-dual methods, this method produces a predictor, but in a parallel fashion, which has the potential to speed up the computation. The iterate is then updated by a simple correction that guarantees convergence. Moreover, the parameters are adjusted dynamically to enhance both the efficiency and the robustness of the method. The generated sequence converges monotonically to the solution set, and a worst-case $\mathcal{O}(1/t)$ convergence rate in the ergodic sense is established under mild assumptions. The numerical efficiency of the proposed method is verified by applications to the LASSO problem and the Steiner tree problem.
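For context, the step-size restriction mentioned in the abstract appears in the standard (textbook) primal-dual iteration, not in the paper's proposed correction scheme. Below is a minimal sketch of a conventional extrapolated primal-dual iteration (Chambolle–Pock style) applied to the LASSO problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$, written as the saddle-point problem $\min_x \max_y \langle Ax, y\rangle + \lambda\|x\|_1 - (\tfrac12\|y\|^2 + \langle b, y\rangle)$. All names here (`pdhg_lasso`, the random test data) are illustrative assumptions; this is not the paper's parameter-free method.

```python
import numpy as np

def pdhg_lasso(A, b, lam, tau, sigma, iters=2000):
    """Standard extrapolated primal-dual iteration for LASSO
    (a sketch for illustration, NOT the paper's corrected method).
    Convergence of this classic scheme requires the step-size
    restriction tau*sigma <= 1/||A^T A|| noted in the abstract."""
    m, n = A.shape
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(iters):
        # Dual step: prox of sigma*g* with g*(y) = 0.5*||y||^2 + <b, y>
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # Primal step: prox of tau*lam*||.||_1 is soft-thresholding
        x_old = x
        v = x - tau * (A.T @ y)
        x = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
        # Extrapolation step used by the classic convergent variant
        x_bar = 2.0 * x - x_old
    return x

# Tiny usage sketch on random data (seed fixed for reproducibility).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
L = np.linalg.norm(A.T @ A, 2)      # ||A^T A||: the quantity the paper avoids computing
tau = sigma = 0.99 / np.sqrt(L)     # satisfies tau*sigma <= 1/||A^T A||
x = pdhg_lasso(A, b, lam=0.1, tau=tau, sigma=sigma)
```

Note the explicit computation of $\|A^\top A\|$ (the spectral norm) needed to set `tau` and `sigma`; avoiding exactly this computation is one of the paper's stated contributions.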
Journal Article Details
Publisher Name: Global Science Press
Language: English
DOI: https://doi.org/10.4208/nmtma.2018.m1621
Published online: 2018-01
Copyright: © Global Science Press
Pages: 24