Volume 38, Issue 2
An Adaptive Gradient Method with Energy and Momentum

Hailiang Liu & Xuping Tian

Ann. Appl. Math., 38 (2022), pp. 183-222.

Published online: 2022-04

[An open-access article; the PDF is free to any online user.]

  • Abstract

We introduce a novel algorithm for gradient-based optimization of stochastic objective functions. The method may be seen as a variant of SGD with momentum in which the learning rate is automatically adjusted by an ‘energy’ variable. It is simple to implement, computationally efficient, and well suited for large-scale machine learning problems, and it exhibits unconditional energy stability for any size of the base learning rate. We provide a regret bound on the convergence rate under the online convex optimization framework, and we establish an energy-dependent convergence rate of the algorithm to a stationary point in the stochastic non-convex setting. In addition, we give a sufficient condition that guarantees a positive lower threshold for the energy variable. Our experiments demonstrate that the algorithm converges quickly while generalizing better than or as well as SGD with momentum in training deep neural networks, and it also compares favorably with Adam.
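For a concrete picture of the kind of update described above, the following is a minimal, non-authoritative sketch of an energy-adaptive SGD-with-momentum step. It assumes an AEGD-style energy recursion (a per-coordinate energy variable shrunk by the squared, momentum-averaged gradient) and illustrative hyperparameter names (eta, beta, c); the paper's actual algorithm, its unconditional energy stability result, and the convergence rates are given in the full text.

```python
# Illustrative sketch only: an energy-adaptive SGD-with-momentum update in the
# spirit of the abstract. The exact recursion, variable names, and default
# hyperparameters below are assumptions, not the authors' specification.
import numpy as np

def energy_momentum_sgd(grad_fn, loss_fn, theta0, eta=0.1, beta=0.9, c=1.0, n_steps=100):
    """Minimize a stochastic objective with an energy-controlled step size.

    grad_fn(theta) -> stochastic gradient estimate
    loss_fn(theta) -> stochastic loss estimate (used to define the energy)
    c              -> assumed shift making the loss strictly positive
    """
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)                                 # momentum buffer
    r = np.sqrt(loss_fn(theta) + c) * np.ones_like(theta)    # per-coordinate 'energy'
    for _ in range(n_steps):
        g = grad_fn(theta)
        m = beta * m + (1.0 - beta) * g                      # heavy-ball style averaging
        v = m / (2.0 * np.sqrt(loss_fn(theta) + c))
        r = r / (1.0 + 2.0 * eta * v * v)                    # energy is non-increasing,
                                                             # which caps the effective step
        theta = theta - 2.0 * eta * r * v                    # effective learning rate: eta * r
    return theta

# Toy usage: a noisy quadratic
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.sum(x ** 2)
g = lambda x: x + 0.01 * rng.standard_normal(x.shape)
print(np.round(energy_momentum_sgd(g, f, theta0=np.ones(5)), 3))
```

In this sketch the base learning rate eta only enters through the product eta * r, and the energy r can only decrease, which is one plausible reading of how stability can hold "for any size of the base learning rate"; the precise mechanism is established in the paper.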

  • AMS Subject Headings

65K10, 90C15, 68Q25

  • Copyright

© Global Science Press

  • Keywords

Stochastic optimization, SGD, energy stability, momentum.

  • Citation
Hailiang Liu & Xuping Tian. (2022). An Adaptive Gradient Method with Energy and Momentum. Annals of Applied Mathematics. 38 (2). 183-222. doi:10.4208/aam.OA-2021-0095