Improving decision tree performance through induction and cluster-based stratified sampling

Abdul A. Gill, George D. Smith, Anthony J. Bagnall

Research output: Chapter in Book/Report/Conference proceeding › Chapter

9 Citations (Scopus)

Abstract

It is generally recognised that recursive partitioning, as used in the construction of classification trees, is inherently unstable, particularly for small data sets. Classification accuracy and, by implication, tree structure are sensitive to changes in the training data. Successful approaches to counteract this effect include multiple classifiers, e.g. boosting, bagging or windowing. The downside of these multiple classification models, however, is the plethora of trees that result, often making it difficult to extract the classifier in a meaningful manner. We show that, by using some very weak knowledge at the sampling stage, when the data set is partitioned into the training and test sets, a single decision tree classifier achieves more consistent and improved performance.
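As a rough illustration of the idea described in the abstract (not the authors' published procedure), the sketch below uses k-means cluster labels as the "weak knowledge" to stratify the train/test partition before fitting a single decision tree. The dataset, the cluster count of 5, the 1/3 test fraction, and the use of scikit-learn are all illustrative assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Derive "weak knowledge": k-means cluster labels act as strata
# for the split (an assumed stand-in for the paper's approach).
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Stratify the train/test partition on cluster membership so every
# cluster is proportionally represented in both sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, stratify=clusters, random_state=0
)

# A single decision tree trained on the cluster-balanced training set.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"test accuracy: {tree.score(X_test, y_test):.3f}")
```

Stratifying on cluster membership rather than on the class label alone is one way to make a single train/test split more representative of the data's structure, which is the stability effect the abstract points to.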
Original language: English
Title of host publication: Intelligent Data Engineering and Automated Learning – IDEAL 2004
Publisher: Springer
Pages: 339-344
Number of pages: 6
Volume: 3177
ISBN (Print): 978-3-540-22881-3
DOIs
Publication status: Published - 2004

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Verlag, Berlin