Abstract
The generalisation properties of various types of neural network, such as radial basis function systems and the multilayer perceptron (MLP), are examined. It is concluded that their behaviour can be explained in terms of lowpass interpolation, in which discrete training examples of a function are implicitly convolved with the impulse response of a lowpass filter to produce an estimate of the function for previously unseen arguments. A different form of neural network, the single-layer lookup perceptron (SLLUP), is described and shown also to generalise by lowpass interpolation. However, the SLLUP learns reliably and rapidly compared with the MLP, and experiments are described which show that it compares well with the MLP on problems such as speech recognition and text-to-speech synthesis.
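The lowpass-interpolation view of generalisation described above can be illustrated with a minimal sketch: discrete training samples of a function are smoothed by a kernel (here a Gaussian, standing in for the lowpass impulse response) to estimate the function at unseen arguments. This is not code from the paper; the function name, the Gaussian kernel, and the normalisation step are illustrative assumptions.

```python
import numpy as np

def lowpass_interpolate(x_train, y_train, x_query, sigma=0.5):
    """Estimate f at x_query by kernel-smoothing discrete samples (x_train, y_train).

    Conceptually: convolve the training examples with a lowpass impulse
    response (here a Gaussian of width sigma) and normalise the weights.
    """
    # Gaussian kernel weights between every query point and every training point
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / sigma) ** 2)
    # Normalise so each estimate is a weighted average of the training outputs
    w /= w.sum(axis=1, keepdims=True)
    return w @ y_train

# Sparse training examples of f(x) = sin(x)
x_train = np.linspace(0.0, 2.0 * np.pi, 8)
y_train = np.sin(x_train)

# Estimates at previously unseen arguments
x_query = np.array([1.0, 4.0])
est = lowpass_interpolate(x_train, y_train, x_query)
```

The estimates track sin(x) between the samples, with the smoothing bias typical of lowpass interpolation: a wider `sigma` gives heavier smoothing and a poorer fit to rapid variation.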
Original language | English |
---|---|
Pages (from-to) | 46-54 |
Number of pages | 9 |
Journal | IEE Proceedings Part F: Radar and Signal Processing |
Volume | 138 |
Issue number | 1 |
Publication status | Published - Feb 1991 |