Scaling Boosting by Margin-based Inclusion of Features and Relations

Susanne Hoche, Stefan Wrobel

Research output: Chapter in Book/Report/Conference proceeding



Boosting is well known to increase the accuracy of propositional and multi-relational classification learners. However, the base learner’s efficiency vitally determines boosting’s efficiency, since the complexity of the underlying learner is amplified by the iterated calls of the learner in the boosting framework. The idea of restricting the learner to smaller feature subsets in order to increase efficiency is widely used. Surprisingly, little attention has been paid so far to exploiting characteristics of boosting itself to include features based on the current learning progress. In this paper, we show that the dynamics inherent to boosting offer ideal means to maximize the efficiency of the learning process. We describe how to utilize the training examples’ margins—which are known to be maximized by boosting—to reduce learning times without deteriorating learning quality. We suggest including features in the learning process stepwise, in response to a slowdown in the improvement of the margins. Experimental results show that this approach significantly reduces the learning time while maintaining or even improving the predictive accuracy of the underlying fully equipped learner.
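The core idea — boost with a small active feature set and unlock further features whenever the improvement of the training margins slows down — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the base learner (single-feature decision stumps), the mean-margin slowdown test, the `patience` threshold, and the fixed feature-unlock order are all simplified stand-ins chosen for this sketch.

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """Predict +/-1 with a single-feature threshold stump."""
    return sign * np.where(X[:, feat] > thresh, 1.0, -1.0)

def best_stump(X, y, w, active):
    """Exhaustively pick the weighted-error-minimizing stump over the
    currently active features only (the efficiency lever in the sketch)."""
    best, best_err = None, np.inf
    for feat in active:
        for thresh in np.unique(X[:, feat]):
            for sign in (1.0, -1.0):
                pred = stump_predict(X, feat, thresh, sign)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (feat, thresh, sign)
    return best, best_err

def margin_boost(X, y, n_rounds=10, patience=1e-3):
    """AdaBoost over stumps; a further feature is unlocked whenever the
    mean normalized margin improves by less than `patience` per round
    (a stand-in for the paper's margin-slowdown criterion)."""
    n, d = X.shape
    w = np.ones(n) / n
    active = [0]                 # start with a single feature
    F = np.zeros(n)              # unnormalized ensemble score
    alpha_sum, prev_margin = 0.0, -np.inf
    ensemble = []
    for _ in range(n_rounds):
        (feat, thresh, sign), err = best_stump(X, y, w, active)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, feat, thresh, sign)
        F += alpha * pred
        alpha_sum += alpha
        w = w * np.exp(-alpha * y * pred)   # standard AdaBoost reweighting
        w /= w.sum()
        ensemble.append((alpha, feat, thresh, sign))
        mean_margin = np.mean(y * F / alpha_sum)   # normalized margins in [-1, 1]
        if mean_margin - prev_margin < patience and len(active) < d:
            active.append(len(active))     # unlock the next feature in order
        prev_margin = mean_margin
    return ensemble, active

def predict(ensemble, X):
    F = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.sign(F)
```

The point of the sketch is the coupling: each boosting round already computes the margins as a by-product of reweighting, so the slowdown test costs essentially nothing, while restricting `best_stump` to the active subset keeps early rounds cheap.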
Original language: English
Title of host publication: Machine Learning: ECML 2002
Editors: Tapio Elomaa, Heikki Mannila, Hannu Toivonen
Publisher: Springer Berlin / Heidelberg
Number of pages: 13
ISBN (Print): 978-3-540-44036-9
Publication status: Published - 2002
Event: 13th European Conference on Machine Learning - Helsinki, Finland
Duration: 19 Aug 2002 – 23 Aug 2002

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Berlin / Heidelberg


Conference: 13th European Conference on Machine Learning
