Towards the optimal learning rate for backpropagation

Danilo P. Mandic, Jonathon A. Chambers

Research output: Contribution to journal › Article › peer-review

31 Citations (Scopus)

Abstract

A backpropagation learning algorithm with an adaptive learning rate is derived for feedforward neural networks. The algorithm is based upon minimising the instantaneous output error and does not include any of the simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The adaptive learning rate, derived from a Taylor series expansion of the instantaneous output error, is shown to make backpropagation exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, for a linear neuron activation function, the derived optimal adaptive learning rate degenerates to the learning rate of the NLMS algorithm. By continuity, the optimal adaptive learning rate imposes additional stabilisation effects on the traditional backpropagation learning algorithm.
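The degenerate linear case mentioned in the abstract can be sketched as follows. This is an illustrative NLMS update for a single linear neuron (unit step size, with a small regularisation constant `eps` added by us to guard against a zero-norm input); it is not code from the paper, and the teacher weights `w_target` are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                        # adaptive weights
w_target = np.array([0.5, -1.0, 0.25]) # hypothetical teacher weights
eps = 1e-8                             # regularisation against ||x||^2 = 0

for _ in range(200):
    x = rng.normal(size=3)
    e = x @ w_target - x @ w           # instantaneous output error
    eta = 1.0 / (x @ x + eps)          # NLMS adaptive learning rate
    w += eta * e * x                   # normalised gradient step on 0.5*e^2

# w converges to w_target for persistently exciting inputs
```

Each update projects the weight error onto the hyperplane that zeroes the error on the current sample, which is what makes the normalised step inherently stable. For a nonlinear activation, the abstract's derivation additionally involves the activation's local slope, so the rate above is only the limiting linear case.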
Original language: English
Pages (from-to): 1-5
Number of pages: 5
Journal: Neural Processing Letters
Volume: 11
Issue number: 1
Publication status: Published - 1999
