Abstract
The authors provide relationships between the a priori and a posteriori errors of adaptation algorithms for real-time output-error nonlinear adaptive filters realised as feedforward (FF) or recurrent neural networks (RNN). The analysis is undertaken for a general nonlinear activation function of a neuron and for gradient-based learning algorithms, for both network classes. Moreover, the analysis considers both contractive and expansive forms of the nonlinear activation functions within the networks. The relationships so obtained provide upper and lower error bounds for general gradient-based a posteriori learning in neural networks.
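The distinction between the two error types can be illustrated with a minimal sketch (not the authors' derivation): for a single neuron with a contractive activation such as `tanh`, the a priori error is measured with the weights before the gradient update, and the a posteriori error is re-evaluated at the same input after the update. The function name, step size `eta`, and the specific numbers below are illustrative assumptions.

```python
import numpy as np

def tanh_neuron_step(w, x, d, eta):
    """One gradient step of an output-error nonlinear adaptive filter.

    Returns (e_prior, w_new, e_post): the a priori error, the updated
    weights, and the a posteriori error re-evaluated with w_new.
    Illustrative sketch only; 'eta' is an assumed small step size.
    """
    net = float(np.dot(w, x))
    y = np.tanh(net)                 # contractive activation
    e_prior = d - y                  # a priori (pre-update) error
    # instantaneous gradient descent on 0.5 * e_prior**2 w.r.t. w
    w_new = w + eta * e_prior * (1.0 - y**2) * x
    e_post = d - np.tanh(float(np.dot(w_new, x)))  # a posteriori error
    return e_prior, w_new, e_post

# Deterministic example: for a small step size, the a posteriori error
# is smaller in magnitude than the a priori error.
w = np.array([0.1, -0.2, 0.05])
x = np.array([0.5, 1.0, -0.3])
d = 0.4
e_prior, w_new, e_post = tanh_neuron_step(w, x, d, eta=0.2)
```

For a sufficiently small step size, `abs(e_post) < abs(e_prior)`, which is the kind of ordering the paper's bounds make precise for general activations and for both FF and recurrent structures.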
| Original language | English |
| ---|---|
Pages (from-to) | 293-296 |
Number of pages | 4 |
Journal | IEE Proceedings: Vision, Image and Signal Processing |
Volume | 146 |
Issue number | 6 |
DOIs | |
Publication status | Published - Dec 1999 |