Convergence of Gradient Method with Momentum for Back-Propagation Neural Networks

Year:    2008

Journal of Computational Mathematics, Vol. 26 (2008), Iss. 4 : pp. 613–623

Abstract

In this work, a gradient method with momentum for BP neural networks is considered. The momentum coefficient is chosen in an adaptive manner to accelerate and stabilize the learning procedure of the network weights. Corresponding convergence results are proved.
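The paper's specific adaptive rule is not given in this abstract, so as an illustration of the general class of method described, the sketch below trains a single-layer sigmoid network by gradient descent with a momentum term whose coefficient is rescaled at each step. The rescaling rule used here (capping the momentum term so the update stays gradient-dominated) is a hypothetical stand-in for the paper's adaptive choice, not a reproduction of it.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_momentum(X, y, eta=1.0, mu=0.9, epochs=2000, seed=0):
    """Gradient descent with an adaptively scaled momentum coefficient
    for a single-layer sigmoid network (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    dw_prev = np.zeros_like(w)
    for _ in range(epochs):
        out = sigmoid(X @ w)
        err = out - y                      # squared-error loss gradient
        grad = X.T @ (err * out * (1.0 - out))
        # Hypothetical adaptive rule (an assumption, not the paper's):
        # shrink the momentum coefficient whenever the previous update
        # would dominate the current gradient step.
        gnorm = np.linalg.norm(grad)
        pnorm = np.linalg.norm(dw_prev)
        alpha = mu * min(1.0, eta * gnorm / pnorm) if pnorm > 0 else 0.0
        dw = -eta * grad + alpha * dw_prev
        w += dw
        dw_prev = dw
    return w

# Usage: learn the AND function; the third column is a bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w = train_momentum(X, y)
```

Because the momentum term is bounded by the gradient step, the update direction remains a descent direction, which is the kind of condition convergence proofs for such schemes typically rely on.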

Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/2008-JCM-8645

Published online:    2008-01

Copyright:    © Global Science Press

Pages:    11

Keywords:    Back-propagation (BP) neural networks, Gradient method, Momentum, Convergence.