Year: 2010
Author: Hongmei Shao, Wei Wu, Lijun Liu
Communications in Mathematical Research, Vol. 26 (2010), Iss. 1: pp. 67–75
Abstract
The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve generalization performance and to decrease the magnitude of the network weights. In this paper, weight boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for a BP neural network with one hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with penalty during training is also guaranteed. Simulation results for a 3-bit parity problem are presented to support the theoretical results.
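The training scheme described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: online (sample-by-sample) gradient descent with an L2 weight penalty for a one-hidden-layer network, applied to the 3-bit parity problem mentioned in the abstract. The layer sizes, learning rate, penalty coefficient, and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-bit parity: target is 1 when the number of 1-bits in the input is odd.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
Y = (X.sum(axis=1) % 2).reshape(-1, 1)

n_in, n_hid, n_out = 3, 8, 1          # sizes are assumptions
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))

eta = 0.5      # learning rate (assumed)
lam = 1e-4     # penalty coefficient (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Samples are presented in a fixed order within each epoch,
    # matching the assumption in the abstract.
    for x, y in zip(X, Y):
        h = sigmoid(x @ W1)           # hidden-layer activations
        o = sigmoid(h @ W2)           # network output
        # Backpropagate the squared error; the +lam*W terms are the
        # gradient of the L2 penalty added to the error function.
        delta_o = (o - y) * o * (1 - o)
        delta_h = (delta_o @ W2.T) * h * (1 - h)
        W2 -= eta * (np.outer(h, delta_o) + lam * W2)
        W1 -= eta * (np.outer(x, delta_h) + lam * W1)

pred = sigmoid(sigmoid(X @ W1) @ W2)
print(np.round(pred.ravel()))
```

The penalty keeps the weight magnitudes bounded during training, which is the property the paper's boundedness theorems formalize.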
Journal Article Details
Publisher Name: Global Science Press
Language: English
DOI: https://doi.org/2010-CMR-19174
Published online: 2010-01
AMS Subject Headings: Global Science Press
Copyright: © Global Science Press
Pages: 9
Keywords: convergence, online gradient method, penalty, monotonicity