Abstract
The individual elements of pattern vectors generated by real systems often have widely different value ranges. Direct application of these patterns to a distance-based classifier such as a multilayer perceptron can cause the large value range elements to dominate the classification decision. A commonly used remedy is to normalise the variance of each pattern element before use. However, the author shows that this approach is often inappropriate and that better results can be obtained by nonlinearly scaling the pattern elements to render their probability distributions approximately uniform, as well as giving them the same variance.
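The abstract does not give the author's exact transform, but one standard way to make each pattern element approximately uniform (and hence give all elements the same variance) is a rank transform through the empirical CDF. The sketch below is a hypothetical illustration of that idea, not the paper's method; the function name `scale_uniform` is an assumption.

```python
import numpy as np

def scale_uniform(X):
    """Nonlinearly map each column of X to an approximately uniform
    distribution on (0, 1) via its empirical CDF (rank transform).
    Illustrative sketch only; not necessarily the transform used in
    the paper."""
    n = X.shape[0]
    # Rank of each value within its column (0 .. n-1)
    ranks = X.argsort(axis=0).argsort(axis=0)
    # Midpoint ranks give values strictly inside (0, 1)
    return (ranks + 0.5) / n

# Example: two pattern elements with wildly different value ranges
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(0.0, 1000.0, 500),   # large-range element
                     rng.uniform(0.0, 0.01, 500)])   # small-range element
U = scale_uniform(X)
# After scaling, each column is approximately uniform on (0, 1),
# so every element has (nearly) the same variance, about 1/12.
print(U.var(axis=0))
```

Because the transform depends only on ranks, it equalises both the range and the variance of the elements regardless of their original distributions, which is the property the abstract argues matters for distance-based classifiers.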
Original language | English
---|---
Pages (from-to) | 56-57
Number of pages | 2
Journal | IEE Electronics Letters
Volume | 30
Issue number | 1
Publication status | Published - Jan 1994