Training feed-forward neural networks using the gradient descent method with the optimal stepsize
Abstract
The most widely used algorithm for training multilayer feedforward networks, Error BackPropagation (EBP), is by nature an iterative gradient descent algorithm. A variable stepsize is the key to fast convergence of BP networks. A new optimal stepsize algorithm is proposed to accelerate the training process. It modifies the objective function to reduce the computational complexity of the Jacobian and, consequently, of the Hessian matrices, and thereby computes the optimal iterative stepsize directly. The improved backpropagation algorithm helps alleviate the problems of slow convergence and oscillation. The analysis indicates that backpropagation with optimal stepsize (BPOS) is more efficient when handling large-scale samples. Numerical experiments on pattern recognition and function approximation problems show that the proposed algorithm converges quickly and requires less computation.
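The core idea can be illustrated with the standard result that, for a local quadratic model of the loss, the optimal stepsize along the gradient direction g is alpha* = (g^T g) / (g^T H g), where H is the Hessian. The sketch below is not the authors' formulation (the paper derives the Hessian from a modified Jacobian); it is a minimal illustration on a one-hidden-layer network for function approximation, where the curvature term g^T H g is instead approximated by a finite difference of the gradient, so no Hessian is formed explicitly. The network size, task, and hyperparameters are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy function-approximation task (assumed): learn y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(1, -1)   # inputs, shape (1, N)
Y = np.sin(X)                                        # targets, shape (1, N)

n_hidden = 16  # hidden units (assumed size, not from the paper)
params = {
    "W1": rng.normal(0.0, 0.5, (n_hidden, 1)),
    "b1": np.zeros((n_hidden, 1)),
    "W2": rng.normal(0.0, 0.5, (1, n_hidden)),
    "b2": np.zeros((1, 1)),
}

def loss_and_grad(p):
    """Forward pass and backprop for the mean squared error loss."""
    Z = p["W1"] @ X + p["b1"]          # hidden pre-activations
    A = np.tanh(Z)                     # hidden activations
    Yhat = p["W2"] @ A + p["b2"]       # network output
    R = Yhat - Y                       # residuals
    N = X.shape[1]
    loss = 0.5 * np.mean(R ** 2)
    dYhat = R / N
    dA = p["W2"].T @ dYhat
    dZ = dA * (1.0 - A ** 2)           # tanh'(z) = 1 - tanh(z)^2
    grads = {
        "W1": dZ @ X.T,
        "b1": dZ.sum(axis=1, keepdims=True),
        "W2": dYhat @ A.T,
        "b2": dYhat.sum(axis=1, keepdims=True),
    }
    return loss, grads

def flat(d):
    """Flatten a dict of arrays into one parameter vector."""
    return np.concatenate([v.ravel() for v in d.values()])

for step in range(500):
    loss, grads = loss_and_grad(params)
    g = flat(grads)
    # Curvature along the gradient, g^T H g, via a finite difference of the
    # gradient at a probe point in the descent direction (an approximation,
    # standing in for the paper's Jacobian-based Hessian computation).
    eps = 1e-4 / (np.linalg.norm(g) + 1e-12)
    probe = {k: v - eps * grads[k] for k, v in params.items()}
    _, grads_eps = loss_and_grad(probe)
    gHg = g @ (g - flat(grads_eps)) / eps
    # Optimal stepsize of the local quadratic model; fall back to a small
    # fixed step when the curvature estimate is not positive.
    alpha = (g @ g) / gHg if gHg > 0 else 1e-2
    for k in params:
        params[k] -= alpha * grads[k]

final_loss, _ = loss_and_grad(params)
print(f"final loss: {final_loss:.6f}")
```

Because alpha is recomputed each iteration from the current gradient and curvature, the step adapts to the local shape of the error surface, which is the mechanism behind the fast convergence and reduced oscillation claimed in the abstract.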
Paper Details
PaperID: 84859637207
Authors: Gong, L., Liu, C., Li, Y., Yuan, F.
Volume: 8
Issue: 4
Keywords: BP algorithm, Fast convergence, Feedforward neural networks, Hessian matrix computation, Optimal stepsize
Year: 2012
Month: April
Pages: 1359 - 1371