Volume 8, Issue 3
Convergence of Gradient Method for Double Parallel Feedforward Neural Network

J. Wang, W. Wu, Z. Li & L. Li

Int. J. Numer. Anal. Mod., 8 (2011), pp. 484-495.

Published online: 2011-08

  • Abstract

The deterministic convergence of a Double Parallel Feedforward Neural Network (DPFNN) is studied. A DPFNN is a parallel connection of a multi-layer feedforward neural network and a single-layer feedforward neural network. The gradient method is used to train the DPFNN on a finite training sample set. The monotonicity of the error function during the training iteration is proved. Then, some weak and strong convergence results are obtained, showing that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point, respectively. Numerical examples are provided that support our theoretical findings and demonstrate that the DPFNN has a faster convergence speed and better generalization capability than the common feedforward neural network.
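The parallel structure described above can be sketched in code: the network output is the sum of a hidden-layer path (the multi-layer part) and a direct input-to-output path (the single-layer part), and both sets of weights are updated by gradient descent on the squared error. This is a minimal illustrative sketch; the layer sizes, sigmoid activation, learning rate, and synthetic regression data are assumptions for demonstration, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic regression data (illustrative): 50 samples, 3 inputs.
X = rng.standard_normal((50, 3))
y = np.tanh(X @ np.array([1.0, -2.0, 0.5]))

n_hidden = 5
V = 0.1 * rng.standard_normal((3, n_hidden))  # input -> hidden weights
w = 0.1 * rng.standard_normal(n_hidden)       # hidden -> output weights
u = 0.1 * rng.standard_normal(3)              # direct input -> output weights

eta = 0.05  # learning rate

def forward(X):
    H = sigmoid(X @ V)        # hidden-layer path
    return H @ w + X @ u, H   # parallel sum of the two paths

losses = []
for epoch in range(200):
    out, H = forward(X)
    err = out - y
    losses.append(0.5 * np.mean(err ** 2))
    # Batch gradients of the mean squared error w.r.t. each weight group.
    grad_w = H.T @ err / len(X)
    grad_u = X.T @ err / len(X)
    dH = np.outer(err, w) * H * (1 - H)   # back-propagate through sigmoid
    grad_V = X.T @ dH / len(X)
    w -= eta * grad_w
    u -= eta * grad_u
    V -= eta * grad_V

print(f"initial loss {losses[0]:.4f} -> final loss {losses[-1]:.4f}")
```

With a sufficiently small learning rate the recorded loss sequence decreases, which is the practical counterpart of the monotonicity result proved in the paper.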

  • AMS Subject Headings

68W40, 92B20, 62M45

  • Copyright

© Global Science Press

  • Email address
  • BibTex
  • RIS
  • TXT
@Article{IJNAM-8-484,
  author   = {J. Wang and W. Wu and Z. Li and L. Li},
  title    = {Convergence of Gradient Method for Double Parallel Feedforward Neural Network},
  journal  = {International Journal of Numerical Analysis and Modeling},
  year     = {2011},
  volume   = {8},
  number   = {3},
  pages    = {484--495},
  issn     = {2617-8710},
  url      = {http://global-sci.org/intro/article_detail/ijnam/697.html},
  abstract = {The deterministic convergence of a Double Parallel Feedforward Neural Network (DPFNN) is studied. A DPFNN is a parallel connection of a multi-layer feedforward neural network and a single-layer feedforward neural network. The gradient method is used to train the DPFNN on a finite training sample set. The monotonicity of the error function during the training iteration is proved. Then, some weak and strong convergence results are obtained, showing that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point, respectively. Numerical examples are provided that support our theoretical findings and demonstrate that the DPFNN has a faster convergence speed and better generalization capability than the common feedforward neural network.}
}
TY  - JOUR
T1  - Convergence of Gradient Method for Double Parallel Feedforward Neural Network
AU  - Wang, J.
AU  - Wu, W.
AU  - Li, Z.
AU  - Li, L.
JO  - International Journal of Numerical Analysis and Modeling
VL  - 8
IS  - 3
SP  - 484
EP  - 495
PY  - 2011
DA  - 2011/08
SN  - 2617-8710
UR  - https://global-sci.org/intro/article_detail/ijnam/697.html
KW  - Double parallel feedforward neural network
KW  - gradient method
KW  - monotonicity
KW  - convergence
AB  - The deterministic convergence of a Double Parallel Feedforward Neural Network (DPFNN) is studied. A DPFNN is a parallel connection of a multi-layer feedforward neural network and a single-layer feedforward neural network. The gradient method is used to train the DPFNN on a finite training sample set. The monotonicity of the error function during the training iteration is proved. Then, some weak and strong convergence results are obtained, showing that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point, respectively. Numerical examples are provided that support our theoretical findings and demonstrate that the DPFNN has a faster convergence speed and better generalization capability than the common feedforward neural network.
ER  - 

J. Wang, W. Wu, Z. Li and L. Li. (2011). Convergence of Gradient Method for Double Parallel Feedforward Neural Network. International Journal of Numerical Analysis and Modeling. 8 (3). 484-495.