Convergence of Gradient Method with Momentum for Back-Propagation Neural Networks
Abstract
In this work, a gradient method with momentum for back-propagation (BP) neural networks is considered. The momentum coefficient is chosen adaptively to accelerate and stabilize the learning of the network weights. Corresponding convergence results are proved.
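The abstract describes a momentum update whose coefficient is chosen adaptively at each step. As a minimal illustrative sketch (the specific adaptive rule below, which scales the momentum coefficient by the current gradient norm, is an assumption for illustration and not necessarily the paper's exact scheme):

```python
import numpy as np

def train_momentum(grad, w0, eta=0.1, mu=0.5, steps=100):
    """Gradient descent with an adaptive momentum coefficient.

    grad : function returning the gradient of the loss at w
    eta  : learning rate
    mu   : base momentum factor; the effective coefficient is scaled
           by the current gradient norm (illustrative assumption),
           so the momentum term vanishes as the method converges.
    """
    w = np.asarray(w0, dtype=float)
    dw_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        alpha = mu * np.linalg.norm(g)   # adaptive momentum coefficient
        dw = -eta * g + alpha * dw_prev  # gradient step plus momentum
        w = w + dw
        dw_prev = dw
    return w

# Usage: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w_star = train_momentum(lambda w: w, w0=[1.0, -2.0])
```

Because the momentum coefficient shrinks with the gradient norm, the momentum term cannot dominate near a minimizer, which is the kind of stabilizing behavior the adaptive choice is meant to provide.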
How to Cite
Convergence of Gradient Method with Momentum for Back-Propagation Neural Networks. (2018). Journal of Computational Mathematics, 26(4), 613-623. https://global-sci.com/index.php/JCM/article/view/11901