A Convergence Study of SGD-Type Methods for Stochastic Optimization

Year:    2023

Authors:    Tiannan Xiao, Guoguo Yang

Numerical Mathematics: Theory, Methods and Applications, Vol. 16 (2023), Iss. 4 : pp. 914–930

Abstract

In this paper, we first reinvestigate the convergence of the vanilla SGD method in the $L^2$ sense under more general learning rate conditions and a more general convexity assumption, which relaxes the conditions on the learning rate and does not require the problem to be strongly convex. Then, using the Lyapunov function technique, we establish the convergence of the momentum SGD and Nesterov accelerated SGD methods for convex and non-convex problems under the $L$-smooth assumption, which relaxes the bounded gradient restriction to a certain extent. The convergence of time-averaged SGD is also analyzed.
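For context, the SGD-type methods named in the abstract are commonly written with the following stochastic update rules, where $\eta_k$ denotes the learning rate, $\beta\in[0,1)$ a momentum parameter, and $g(x,\xi_k)$ an unbiased stochastic gradient of the objective. This is a standard textbook formulation given for orientation only; the precise iterations and learning-rate conditions analyzed in the paper may differ.

$$
\begin{aligned}
&\text{vanilla SGD:} && x_{k+1} = x_k - \eta_k\, g(x_k,\xi_k),\\
&\text{momentum SGD:} && v_{k+1} = \beta v_k - \eta_k\, g(x_k,\xi_k),\quad x_{k+1} = x_k + v_{k+1},\\
&\text{Nesterov accelerated SGD:} && y_k = x_k + \beta\,(x_k - x_{k-1}),\quad x_{k+1} = y_k - \eta_k\, g(y_k,\xi_k),\\
&\text{time-averaged SGD:} && \bar{x}_K = \frac{1}{K}\sum_{k=1}^{K} x_k.
\end{aligned}
$$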


Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/10.4208/nmtma.OA-2022-0179


Published online:    2023-01

AMS Subject Headings:   

Copyright:    © Global Science Press

Pages:    17

Keywords:    SGD, momentum SGD, Nesterov acceleration, time-averaged SGD, convergence analysis, non-convex.

Author Details

Tiannan Xiao

Guoguo Yang