Empirical evaluation of resampling procedures for optimising SVM hyperparameters

Jacques Wainer, Gavin Cawley

Research output: Contribution to journal › Article › peer-review


Abstract

Tuning the regularisation and kernel hyperparameters is a vital step in optimising the generalisation performance of kernel methods, such as the support vector machine (SVM). This is most often performed by minimising a resampling/cross-validation based model selection criterion; however, there seems to be little practical guidance on the most suitable form of resampling. This paper presents the results of an extensive empirical evaluation of resampling procedures for SVM hyperparameter selection, designed to address this gap in the machine learning literature. We tested 15 different resampling procedures on 121 binary classification data sets in order to select the best SVM hyperparameters. We used three very different statistical procedures to analyse the results: the standard multi-classifier/multi-data-set procedure proposed by Demšar, the confidence intervals on the excess loss of each procedure in relation to 5-fold cross-validation, and the Bayes factor analysis proposed by Barber. We conclude that a 2-fold procedure is appropriate to select the hyperparameters of an SVM for data sets with 1000 or more data points, while a 3-fold procedure is appropriate for smaller data sets.
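As a hedged illustration of the procedure the abstract describes (not the authors' experimental code), the sketch below selects an RBF-kernel SVM's regularisation parameter C and kernel width gamma by minimising a k-fold cross-validation criterion with scikit-learn. The synthetic data set, the parameter grid, and the choice of 2 folds (which the paper recommends for data sets of 1000 or more points) are illustrative assumptions.

    # Illustrative sketch of resampling-based SVM hyperparameter selection
    # (not the paper's experimental code). 2-fold cross-validation is used
    # here on the assumption that the data set has >= 1000 points, per the
    # paper's conclusion.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.svm import SVC

    # Synthetic binary classification data (an assumption for illustration).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Grid over the regularisation (C) and RBF kernel width (gamma)
    # hyperparameters; the ranges are illustrative.
    param_grid = {
        "C": np.logspace(-2, 3, 6),
        "gamma": np.logspace(-4, 1, 6),
    }

    # 2-fold cross-validation as the model selection criterion.
    resampling = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=resampling,
                          scoring="accuracy")
    search.fit(X, y)

    print("Selected hyperparameters:", search.best_params_)
    print("2-fold CV accuracy:", search.best_score_)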
Original language: English
Pages (from-to): 1-35
Number of pages: 35
Journal: Journal of Machine Learning Research
Volume: 18
Issue number: 15
Publication status: Published - 1 Feb 2017

Keywords

  • Hyperparameters
  • SVM
  • Resampling
  • Cross-validation
  • k-fold
  • bootstrap
