Abstract
In this paper we present a simple hierarchical Bayesian treatment of the sparse kernel logistic regression (KLR) model based on the evidence framework introduced by MacKay. The principal innovation lies in the re-parameterisation of the model such that the usual spherical Gaussian prior over the parameters in the kernel-induced feature space also corresponds to a spherical Gaussian prior over the transformed parameters, permitting the straightforward derivation of an efficient update formula for the regularisation parameter. The Bayesian framework also allows the selection of good values for kernel parameters through maximisation of the marginal likelihood, or evidence, for the model. Results obtained on a variety of benchmark data sets indicate that the Bayesian KLR model is competitive with KLR models in which the hyper-parameters are selected via cross-validation, and with the support vector machine and relevance vector machine.
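The evidence-framework update described in the abstract can be illustrated with a minimal sketch: kernel logistic regression fitted by IRLS, with MacKay's re-estimate of the regularisation parameter, alpha = gamma / ||w||^2, where gamma counts the well-determined parameters from the eigenvalues of the data-term Hessian. This is an illustrative reconstruction under generic assumptions (RBF kernel, labels in {0, 1}), not the paper's exact re-parameterised algorithm; all function names here are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Z, width=1.0):
    """Gaussian RBF kernel matrix between rows of X and Z (hypothetical helper)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_bayesian_klr(X, y, width=1.0, n_outer=10, n_irls=25):
    """Sketch of KLR with a MacKay-style evidence update of alpha.

    Model: f = K a, p = sigmoid(f); penalised objective
        J(a) = -sum_i [y_i log p_i + (1 - y_i) log(1 - p_i)] + (alpha/2) a^T K a,
    where a^T K a = ||w||^2 in the kernel-induced feature space.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, width)
    a = np.zeros(n)
    alpha = 1.0                                     # initial prior precision
    for _ in range(n_outer):
        # Inner loop: IRLS (Newton) steps on the dual coefficients, alpha fixed.
        for _ in range(n_irls):
            f = np.clip(K @ a, -30.0, 30.0)
            p = 1.0 / (1.0 + np.exp(-f))
            W = np.clip(p * (1.0 - p), 1e-8, None)  # IRLS weights
            z = f + (y - p) / W                     # working response
            # Newton system (after cancelling one factor of K): (W K + alpha I) a = W z
            a = np.linalg.solve(W[:, None] * K + alpha * np.eye(n), W * z)
        # Evidence (MacKay) update: alpha <- gamma / ||w||^2, where
        # gamma = sum_i lam_i / (lam_i + alpha) is the effective number of
        # well-determined parameters; lam_i are the eigenvalues of the
        # data-term Hessian K^{1/2} W K^{1/2} (same spectrum as diag(W) K).
        lam = np.clip(np.real(np.linalg.eigvals(W[:, None] * K)), 0.0, None)
        gamma_eff = np.sum(lam / (lam + alpha))
        wTw = max(a @ K @ a, 1e-12)                 # ||w||^2 in feature space
        alpha = gamma_eff / wTw
    return a, alpha

# Toy usage: two Gaussian blobs in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, (40, 2)), rng.normal(1.5, 1.0, (40, 2))])
y = np.concatenate([np.zeros(40), np.ones(40)])
a, alpha = fit_bayesian_klr(X, y)
pred = (rbf_kernel(X, X) @ a > 0).astype(float)
```

Alternating the IRLS inner loop with the closed-form alpha re-estimate is the standard evidence-framework recipe; the paper's contribution is a re-parameterisation that keeps the prior spherical so this update stays cheap.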
Original language | English |
---|---|
Pages (from-to) | 119-135 |
Number of pages | 17 |
Journal | Neurocomputing |
Volume | 64 |
DOIs | |
Publication status | Published - Mar 2005 |