Abstract
The performance of an ensemble can be affected by several factors, and diversity among its member models is considered a key factor. However, creating a high level of diversity is not a simple task, as models trained with a single learning algorithm on a given problem (so-called homogeneous models) tend to be closely correlated. Different learning algorithms are therefore used to generate methodologically different models (heterogeneous models) in the hope of increasing diversity and thereby improving ensemble accuracy. Ensemble applications implemented without considering the diversity between their member models are likely to yield little or no performance gain. This study evaluates the diversity-generating ability of different learning algorithms by quantitatively examining the diversity between homogeneous models and between heterogeneous models. Further, the characteristics of ten diversity definitions are evaluated and analysed with the aim of finding out which is most effective at improving ensemble performance. Fifteen data sets are used in this study to verify the consistency of the experimental findings.
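The abstract refers to quantifying pairwise diversity between ensemble members. As an illustration only (not the paper's code, and just one of the many diversity definitions studied), the sketch below computes the simple disagreement measure: the fraction of instances on which two classifiers' predictions differ, averaged over all member pairs. The example prediction vectors are invented for demonstration.

```python
from itertools import combinations


def disagreement(preds_a, preds_b):
    """Fraction of instances on which two models' predictions differ."""
    assert len(preds_a) == len(preds_b)
    differing = sum(a != b for a, b in zip(preds_a, preds_b))
    return differing / len(preds_a)


def mean_pairwise_diversity(all_preds):
    """Average pairwise disagreement over every pair of ensemble members."""
    pairs = list(combinations(range(len(all_preds)), 2))
    return sum(disagreement(all_preds[i], all_preds[j]) for i, j in pairs) / len(pairs)


# Hypothetical predictions: homogeneous models (one learning algorithm)
# tend to be closely correlated, so their pairwise disagreement is low.
homogeneous = [[1, 0, 1, 1, 0], [1, 0, 1, 1, 0], [1, 0, 1, 0, 0]]
# Heterogeneous models (different algorithms) typically disagree more.
heterogeneous = [[1, 0, 1, 1, 0], [0, 0, 1, 1, 1], [1, 1, 0, 1, 0]]

print(mean_pairwise_diversity(homogeneous))    # low diversity
print(mean_pairwise_diversity(heterogeneous))  # higher diversity
```

Higher mean pairwise disagreement indicates a more diverse ensemble, which is the property the study examines across homogeneous and heterogeneous model sets.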
Original language | English |
---|---|
Pages | 3078-3085 |
Number of pages | 8 |
DOIs | |
Publication status | Published - 2006 |
Event | 2006 International Joint Conference on Neural Networks - Vancouver, Canada Duration: 16 Jul 2006 → 21 Jul 2006 |
Conference
Conference | 2006 International Joint Conference on Neural Networks |
---|---|
Abbreviated title | IJCNN-2006 |
Country/Territory | Canada |
City | Vancouver |
Period | 16/07/06 → 21/07/06 |