Determining appropriate approaches for using data in feature selection

Ghadah Aldehim, Wenjia Wang

Research output: Contribution to journal › Article › peer-review

29 Citations (Scopus)
16 Downloads (Pure)


Feature selection is increasingly important in data analysis and machine learning in the big data era. However, how to use the data in feature selection, i.e. using either ALL or PART of a dataset, has become a serious and tricky issue. Whilst the conventional practice of using all the data in feature selection may lead to selection bias, using only part of the data may, under some conditions, lead to underestimating the relevant features. This paper investigates these two strategies systematically in terms of reliability and effectiveness, and then determines their suitability for datasets with different characteristics. Reliability is measured by the Average Tanimoto Index and the Inter-method Average Tanimoto Index, and effectiveness is measured by the mean generalisation accuracy of classification. The computational experiments are carried out on ten real-world benchmark datasets and fourteen synthetic datasets. The synthetic datasets are generated with a pre-set number of relevant features, varied numbers of irrelevant features and instances, and different levels of added noise. The results indicate that the PART approach is more effective in reducing selection bias when the dataset is small, but it starts to lose this advantage as the dataset size increases.
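The Tanimoto Index used here as a reliability measure is, for feature subsets, the ratio of the intersection to the union of the features selected in two runs. A minimal sketch of how the Average Tanimoto Index could be computed over repeated selections is below; the function and variable names are illustrative assumptions, and the paper's exact formulation (e.g. of the Inter-method variant) may differ:

```python
from itertools import combinations

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two feature subsets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def average_tanimoto_index(subsets):
    """Mean pairwise similarity across the subsets selected in each run.

    Values near 1 indicate a stable (reliable) selection procedure;
    values near 0 indicate that different runs pick different features.
    """
    pairs = list(combinations(range(len(subsets)), 2))
    return sum(tanimoto(subsets[i], subsets[j]) for i, j in pairs) / len(pairs)

# Hypothetical example: features selected in three resampled runs.
runs = [{"f1", "f2", "f3"}, {"f1", "f2", "f4"}, {"f1", "f3", "f4"}]
print(average_tanimoto_index(runs))  # each pair shares 2 of 4 features -> 0.5
```

An inter-method comparison would apply the same pairwise average to subsets produced by two different selection methods rather than repeated runs of one method.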
Original language: English
Pages (from-to): 915–928
Journal: International Journal of Machine Learning and Cybernetics
Issue number: 3
Early online date: 22 Dec 2015
Publication status: Published - Jun 2017


Keywords:
  • Feature selection
  • Reliability
  • Big data
  • Cross-validation
  • Classification
  • Similarity measure
