A greedy training algorithm for sparse least-squares support vector machines

Gavin C. Cawley, Nicola L. C. Talbot

Research output: Chapter in Book/Report/Conference proceeding

16 Citations (Scopus)


Suykens et al. [1] describe a form of kernel ridge regression known as the least-squares support vector machine (LS-SVM). In this paper, we present a simple but efficient greedy algorithm for constructing near-optimal sparse approximations of least-squares support vector machines, in which at each iteration the training pattern minimising the regularised empirical risk is introduced into the kernel expansion. The proposed method demonstrates superior performance when compared with the pruning technique described by Suykens et al. [1] on the motorcycle and Boston housing datasets.
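The greedy construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an RBF kernel, a regularised sum-of-squared-errors objective of the form ||y - K_S a - b||^2 + lam * a^T K_SS a, and exhaustive evaluation of each candidate pattern at every iteration; all function and parameter names here are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF kernel between the rows of X and the rows of Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_sparse(K, y, S, lam):
    """Solve the reduced regularised least-squares problem over basis set S.

    Model: f(x_n) = sum_{i in S} alpha_i k(x_n, x_i) + b
    Objective (an assumed form): ||y - A theta||^2 + lam * alpha^T K_SS alpha
    Returns the parameters theta = (alpha, b) and the regularised risk.
    """
    n = len(y)
    K_nS = K[:, S]                    # (n, |S|) design matrix
    K_SS = K[np.ix_(S, S)]            # (|S|, |S|) regularisation term
    A = np.hstack([K_nS, np.ones((n, 1))])   # append column of ones for bias b
    R = np.zeros((len(S) + 1, len(S) + 1))
    R[:len(S), :len(S)] = lam * K_SS
    theta = np.linalg.solve(A.T @ A + R, A.T @ y)
    resid = y - A @ theta
    risk = resid @ resid + lam * theta[:-1] @ K_SS @ theta[:-1]
    return theta, risk

def greedy_lssvm(X, y, n_basis=10, lam=1e-2, gamma=1.0):
    """Greedy forward selection: at each iteration, add the training
    pattern whose inclusion in the kernel expansion minimises the
    regularised empirical risk (a sketch of the paper's idea)."""
    K = rbf_kernel(X, X, gamma)
    S = []
    for _ in range(n_basis):
        best = None
        for j in range(len(y)):
            if j in S:
                continue
            _, risk = fit_sparse(K, y, S + [j], lam)
            if best is None or risk < best[1]:
                best = (j, risk)
        S.append(best[0])
    theta, _ = fit_sparse(K, y, S, lam)
    return S, theta[:-1], theta[-1]   # basis indices, alpha, bias
```

The resulting model uses only the |S| selected patterns rather than all training points, which is the source of the sparsity; the O(n) candidate sweep per iteration is the naive version of the search, which efficient implementations would accelerate with incremental matrix updates.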
Original language: English
Title of host publication: Artificial Neural Networks — ICANN 2002
Editors: José R. Dorronsoro
Publisher: Springer Berlin / Heidelberg
Number of pages: 6
ISBN (Print): 978-3-540-44074-1
Publication status: Published - 2002
Event: International Conference on Artificial Neural Networks - Madrid, Spain
Duration: 28 Aug 2002 - 30 Aug 2002

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Berlin / Heidelberg


Conference: International Conference on Artificial Neural Networks
Abbreviated title: ICANN 2002
