Convergence of Gradient Method for Double Parallel Feedforward Neural Network

Year:    2011

International Journal of Numerical Analysis and Modeling, Vol. 8 (2011), Iss. 3 : pp. 484–495

Abstract

The deterministic convergence of a Double Parallel Feedforward Neural Network (DPFNN) is studied. A DPFNN is a parallel connection of a multi-layer feedforward neural network and a single-layer feedforward neural network. The gradient method is used to train the DPFNN on a finite training sample set. The monotonicity of the error function during the training iteration is proved. Then, some weak and strong convergence results are obtained, indicating that the gradient of the error function tends to zero and the weight sequence converges to a fixed point, respectively. Numerical examples are provided which support our theoretical findings and demonstrate that the DPFNN has faster convergence and better generalization capability than the common feedforward neural network.
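The architecture described above can be sketched in code. The following is an illustrative NumPy implementation, not the paper's exact formulation: the output sums a hidden-layer path and a direct input-to-output path, y = v·g(Wx) + u·x, and the weights are updated by batch gradient descent on the squared error over a finite sample set. All names, shapes, and the sigmoid activation are our assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DPFNN:
    """Sketch of a double parallel feedforward network (assumed form):
    a multi-layer path (input -> hidden -> output) in parallel with a
    single-layer path (input -> output)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.5, size=(n_hidden, n_in))  # input -> hidden
        self.v = rng.normal(scale=0.5, size=n_hidden)          # hidden -> output
        self.u = rng.normal(scale=0.5, size=n_in)              # direct input -> output

    def forward(self, X):
        H = sigmoid(X @ self.W.T)             # hidden activations, shape (N, n_hidden)
        return H @ self.v + X @ self.u, H     # parallel sum of the two paths

    def error(self, X, t):
        y, _ = self.forward(X)
        return 0.5 * np.mean((y - t) ** 2)

    def train_step(self, X, t, eta=0.1):
        """One batch gradient-descent step on the mean squared error."""
        y, H = self.forward(X)
        d = (y - t) / len(t)                       # dE/dy for each sample
        dW = (np.outer(d, self.v) * H * (1 - H)).T @ X  # backprop through sigmoid
        self.v -= eta * (H.T @ d)
        self.u -= eta * (X.T @ d)
        self.W -= eta * dW
```

Training on a small regression problem and recording the error after each step illustrates the monotonic decrease of the error function that the paper proves (for a suitably small learning rate).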


Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/2011-IJNAM-697

Published online:    2011-01

AMS Subject Headings:    Global Science Press

Copyright:    © Global Science Press

Pages:    12

Keywords:    Double parallel feedforward neural network, gradient method, monotonicity, convergence.