Abstract
Asymmetric margin error costs for positive and negative examples are often cited as an efficient heuristic for compensating for unrepresentative priors when training support vector classifiers. In this paper, we show that this heuristic is well justified via simple re-sampling ideas applied to the dual Lagrangian defining the 1-norm soft-margin support vector machine. This observation also provides a simple expression for the asymptotically optimal ratio of margin error penalties, eliminating the need for the trial-and-error experimentation normally encountered. The method allows the use of a smaller, balanced training data set in problems characterised by widely disparate prior probabilities, thereby reducing training time. Its usefulness is then demonstrated on a real-world benchmark problem.
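The abstract does not reproduce the penalty-ratio expression itself, so the sketch below illustrates the general idea using the standard re-sampling correction: weighting each class's margin errors by the factor needed to make the training-set class fractions behave like the true priors. The specific formula, the use of scikit-learn's per-class `class_weight` to express the ratio, and the synthetic data are all assumptions for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def penalty_ratio(true_pos_prior, train_pos_frac):
    """Assumed re-sampling correction (not quoted from the paper):
    C+/C- = (P+ / p+) / (P- / p-), where P+/P- are the true class
    priors and p+/p- the class fractions in the training sample."""
    p_pos, p_neg = true_pos_prior, 1.0 - true_pos_prior
    f_pos, f_neg = train_pos_frac, 1.0 - train_pos_frac
    return (p_pos / f_pos) / (p_neg / f_neg)

# A balanced (50/50) training set standing in for a population in which
# positives occur with prior 0.05: positives are over-represented, so
# their margin errors are penalised less.
ratio = penalty_ratio(true_pos_prior=0.05, train_pos_frac=0.5)

# scikit-learn's SVC sets the box constraint of class k to
# class_weight[k] * C, which is one way to realise asymmetric
# margin error costs in a 1-norm soft-margin SVM.
clf = SVC(kernel="rbf", C=1.0, class_weight={1: ratio, 0: 1.0})

# Hypothetical balanced two-class data, purely for illustration.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf.fit(X, y)
```

Because re-weighting in the dual is (approximately) equivalent to replicating examples, this lets the classifier be trained on the smaller balanced sample rather than one drawn at the true, highly skewed prior.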
| Original language | English |
| --- | --- |
| Pages | 2433-2438 |
| Number of pages | 6 |
| DOIs | |
| Publication status | Published - Jul 2001 |
| Event | 2001 International Joint Conference on Neural Networks, Washington DC, United States, 15 Jul 2001 → 19 Jul 2001 |
Conference
| Conference | 2001 International Joint Conference on Neural Networks |
| --- | --- |
| Abbreviated title | IJCNN-2001 |
| Country/Territory | United States |
| City | Washington DC |
| Period | 15/07/01 → 19/07/01 |