The optimal model parameters of a kernel machine are typically given by the solution of a convex optimisation problem with a single global optimum. Obtaining the best possible performance is therefore largely a matter of designing a good kernel for the problem at hand, exploiting any underlying structure, and optimising the regularisation and kernel parameters, i.e. model selection. Fortunately, analytic bounds on, or approximations to, the leave-one-out cross-validation error are often available, providing an efficient and generally reliable means of guiding model selection. However, the degree to which incorporating prior knowledge improves performance over that obtainable using "standard" kernels with automated model selection (i.e. agnostic learning) remains an open question. In this paper, we compare the two approaches using example solutions for all of the benchmark tasks on both tracks of the IJCNN-2007 Agnostic Learning versus Prior Knowledge Challenge.
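As an illustration of the kind of analytic leave-one-out shortcut the abstract alludes to, the sketch below computes exact LOO residuals for kernel ridge regression without refitting the model n times, via the PRESS identity: with C = (K + λI)⁻¹ and dual coefficients α = Cy, the i-th LOO residual is αᵢ/Cᵢᵢ. This is a minimal sketch, not the authors' implementation; the function names (`rbf_kernel`, `loo_residuals`) and the toy data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Z."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def loo_residuals(K, y, lam):
    """Exact leave-one-out residuals for kernel ridge regression.

    Uses the PRESS identity: with C = (K + lam*I)^-1 and dual
    coefficients alpha = C @ y, the i-th LOO residual (the error on
    point i of a model trained on all other points) equals
    alpha_i / C_ii -- no n-fold refitting required.
    """
    C = np.linalg.inv(K + lam * np.eye(len(y)))
    alpha = C @ y
    return alpha / np.diag(C)

# Toy model-selection loop: pick the regularisation parameter that
# minimises the LOO mean squared error (the PRESS statistic).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
K = rbf_kernel(X, X)

lams = [1e-3, 1e-2, 1e-1, 1.0]
press = [np.mean(loo_residuals(K, y, lam) ** 2) for lam in lams]
best_lam = lams[int(np.argmin(press))]
```

In practice the same closed-form criterion is also minimised over the kernel parameters (here `gamma`), which is what makes fully automated model selection for kernel machines computationally feasible.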
Number of pages: 6
Publication status: Published - 2007
Event: IEEE/INNS International Joint Conference on Neural Networks (IJCNN-2007), Orlando, Florida
Period: 12 Aug 2007 → 17 Aug 2007