Agnostic learning versus prior knowledge in the design of kernel machines

Gavin C. Cawley, Nicola L. C. Talbot

Research output: Contribution to conference › Paper

5 Citations (Scopus)

Abstract

The optimal model parameters of a kernel machine are typically given by the solution of a convex optimisation problem with a single global optimum. Obtaining the best possible performance is therefore largely a matter of designing a good kernel for the problem at hand, exploiting any underlying structure, and optimising the regularisation and kernel parameters, i.e. model selection. Fortunately, analytic bounds on, or approximations to, the leave-one-out cross-validation error are often available, providing an efficient and generally reliable means to guide model selection. However, the degree to which the incorporation of prior knowledge improves performance over that obtained using "standard" kernels with automated model selection (i.e. agnostic learning) is an open question. In this paper, we compare approaches using example solutions for all of the benchmark tasks on both tracks of the IJCNN-2007 Agnostic Learning versus Prior Knowledge Challenge.
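To illustrate the kind of analytic leave-one-out approximation the abstract refers to, the following is a minimal sketch, not the paper's exact formulation: for kernel ridge regression without a bias term, the leave-one-out residuals have the closed form r_i = (y_i - ŷ_i) / (1 - H_ii), where H = K(K + λI)^{-1} is the hat matrix, so a grid search over kernel and regularisation parameters can be guided by the LOO error at the cost of a single fit per candidate. The synthetic data and the candidate grids for gamma and lambda below are purely illustrative.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """RBF (Gaussian) kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def loo_mse(K, y, lam):
    """Closed-form leave-one-out MSE for kernel ridge regression (no bias term)."""
    n = len(y)
    H = K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n))   # hat matrix K (K + lam I)^{-1}
    residuals = (y - H @ y) / (1.0 - np.diag(H))               # virtual LOO residuals
    return np.mean(residuals**2)

# Illustrative model selection over hypothetical candidate hyper-parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

best = min(
    (loo_mse(rbf_kernel(X, g), y, lam), g, lam)
    for g in (0.01, 0.1, 1.0)
    for lam in (1e-3, 1e-2, 1e-1, 1.0)
)
print("best LOO MSE %.4f at gamma=%g, lambda=%g" % best)
```

Because the LOO error is obtained in closed form from a single training solve per hyper-parameter setting, this style of model selection avoids the n-fold refitting cost of explicit leave-one-out cross-validation.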
Original language: English
Pages: 1732-1737
Number of pages: 6
DOIs
Publication status: Published - 2007
Event: IEEE/INNS International Joint Conference on Neural Networks - Orlando, United States
Duration: 12 Aug 2007 – 17 Aug 2007

Conference

Conference: IEEE/INNS International Joint Conference on Neural Networks
Abbreviated title: IJCNN-2007
Country/Territory: United States
City: Orlando
Period: 12/08/07 – 17/08/07
