Proximal-Proximal-Gradient Method

Year:    2019

Author:    Ernest K. Ryu, Wotao Yin

Journal of Computational Mathematics, Vol. 37 (2019), Iss. 6 : pp. 778–812

Abstract

In this paper, we present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG generalizes the proximal-gradient method and ADMM and is applicable to minimization problems written as a sum of many differentiable and many non-differentiable convex functions; the non-differentiable functions can be coupled. We furthermore present a related stochastic variant, which we call stochastic PPG (S-PPG). S-PPG can be interpreted as a generalization of Finito and MISO to sums of many coupled non-differentiable convex functions.
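The abstract does not reproduce the problem template itself; for orientation, the standard form targeted by PPG is sketched below, with notation introduced here rather than taken from this page: $r$ and $g_1,\dots,g_n$ are convex, closed, and proper (possibly non-differentiable but proximable), and $f_1,\dots,f_n$ are convex and differentiable.

```latex
% Problem template PPG addresses (a paraphrase of the standard setup,
% not quoted from the paper): a proximable regularizer r plus an
% average of smooth pieces f_i and possibly coupled nonsmooth pieces g_i.
\begin{equation*}
  \underset{x \in \mathbb{R}^d}{\operatorname{minimize}}
  \quad r(x) + \frac{1}{n} \sum_{i=1}^{n} \bigl( f_i(x) + g_i(x) \bigr)
\end{equation*}
```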
We present many applications that can benefit from PPG and S-PPG and prove convergence for both methods. We demonstrate the empirical effectiveness of both methods through experiments on a CUDA GPU. A key strength of PPG and S-PPG, compared to existing methods, is their ability to directly handle a large sum of non-differentiable, non-separable functions with a constant step size independent of the number of functions. This non-diminishing step size makes them fast.
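Since the full text is not accessible from this page, the following is only an illustrative sketch of the three-step iteration commonly attributed to PPG (a prox of r applied to the average of per-function variables z_i, a proximal-gradient step on each f_i + g_i, and an additive correction to each z_i). It instantiates the template above on a lasso-type toy problem with g_i = 0; all names, the data, and the step-size choice are assumptions of this sketch, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Synthetic instance of: minimize lam*||x||_1 + (1/n) sum_i 0.5*||A_i x - b_i||^2,
# i.e. r = lam*||.||_1, f_i = 0.5*||A_i x - b_i||^2, g_i = 0 (prox of g_i is the identity).
rng = np.random.default_rng(0)
n, m, d = 10, 20, 50
A = [rng.standard_normal((m, d)) for _ in range(n)]
x_true = np.zeros(d)
x_true[:5] = rng.standard_normal(5)           # sparse ground truth
b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]
lam = 0.1

# Constant step size, independent of n (the property the abstract emphasizes);
# choosing alpha = 1/L with L the largest gradient Lipschitz constant is an
# assumption of this sketch, not the paper's stated step-size rule.
L = max(np.linalg.norm(Ai, 2) ** 2 for Ai in A)
alpha = 1.0 / L

z = [np.zeros(d) for _ in range(n)]           # one z_i per summand
for k in range(500):
    z_bar = sum(z) / n
    x_half = soft_threshold(z_bar, alpha * lam)    # prox of alpha*r at the average
    for i in range(n):
        grad_i = A[i].T @ (A[i] @ x_half - b[i])   # gradient of f_i at x_half
        x_i = 2 * x_half - z[i] - alpha * grad_i   # prox of alpha*g_i is the identity here
        z[i] += x_i - x_half                       # correction step on z_i

print("recovered support:", np.nonzero(np.abs(x_half) > 1e-3)[0])
```

Note that the inner loop over i touches each z_i independently, which is what makes the scheme "simple to parallelize": on a GPU, the n proximal-gradient updates can run concurrently, with only the averaging step requiring a reduction.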

Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/10.4208/jcm.1906-m2018-0282

Published online:    2019-01

Copyright:    © Global Science Press

Pages:    35

Keywords:    Proximal-gradient, ADMM, Finito, MISO, SAGA, Operator splitting, First-order methods, Distributed optimization, Stochastic optimization, Almost sure convergence, Linear convergence.

Author Details

Ernest K. Ryu

Wotao Yin