Abstract
Feature selection is increasingly important in data analysis and machine learning in the big data era. However, how to use the data in feature selection, i.e. whether to use ALL or only PART of a dataset, has become a serious and tricky issue. Whilst the conventional practice of using all the data in feature selection may lead to selection bias, using only part of the data may, under some conditions, lead to underestimating the relevant features. This paper investigates these two strategies systematically in terms of reliability and effectiveness, and then determines their suitability for datasets with different characteristics. Reliability is measured by the Average Tanimoto Index and the Inter-method Average Tanimoto Index, and effectiveness is measured by the mean generalisation accuracy of classification. The computational experiments are carried out on ten real-world benchmark datasets and fourteen synthetic datasets. The synthetic datasets are generated with a pre-set number of relevant features, varied numbers of irrelevant features and instances, and different levels of added noise. The results indicate that the PART approach is more effective in reducing selection bias when a dataset is small, but starts to lose its advantage as the dataset size increases.
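The abstract does not define the Average Tanimoto Index. As a minimal sketch, assuming it follows the standard set-overlap (Jaccard-style) formulation commonly used to measure feature-selection stability, it can be computed as below; the function names and the example fold subsets are illustrative, not taken from the paper.

```python
import itertools

def tanimoto(a, b):
    """Tanimoto (set-overlap) similarity between two feature subsets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def average_tanimoto_index(subsets):
    """Mean pairwise Tanimoto similarity over repeated selections.

    For the intra-method case, `subsets` holds the feature subsets selected
    across repeated runs (e.g. cross-validation folds); for an inter-method
    variant, they would come from different selection methods instead.
    """
    pairs = list(itertools.combinations(subsets, 2))
    return sum(tanimoto(a, b) for a, b in pairs) / len(pairs)

# Hypothetical feature subsets selected on three cross-validation folds
folds = [{"f1", "f2", "f3"}, {"f1", "f3", "f5"}, {"f1", "f2", "f5"}]
print(average_tanimoto_index(folds))  # 0.5 -> moderately stable selection
```

A value near 1 would indicate that repeated selections pick largely the same features (high reliability); a value near 0 would indicate that the selected subsets barely overlap.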
Field | Value
---|---
Original language | English
Pages (from-to) | 915–928
Journal | International Journal of Machine Learning and Cybernetics
Volume | 8
Issue number | 3
Early online date | 22 Dec 2015
DOIs |
Publication status | Published - Jun 2017
Keywords
- Feature selection
- Reliability
- Big data
- Cross-validation
- Classification
- Similarity measure
Profiles
- Wenjia Wang
  - School of Computing Sciences - Professor of Artificial Intelligence
  - Data Science and AI - Member
  - Person: Research Group Member, Academic, Teaching & Research