Analysis of the IJCNN 2007 agnostic learning versus prior knowledge challenge

Isabelle Guyon, Amir Saffari, Gideon Dror, Gavin Cawley

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

We organized a challenge for IJCNN 2007 to assess the added value of prior domain knowledge in machine learning. Most commercial data mining programs accept data pre-formatted in the form of a table, with each example encoded as a linear feature vector. Is it worth spending time incorporating domain knowledge in feature construction or algorithm design, or can off-the-shelf programs working directly on simple low-level features do better than skilled data analysts? To answer these questions, we formatted five datasets using two data representations. The participants in the “prior knowledge” (PK) track used the raw data, with full knowledge of the meaning of the data representation. Conversely, the participants in the “agnostic learning” (AL) track used a pre-formatted data table, with no knowledge of the identity of the features. The results indicate that black-box methods using relatively unsophisticated features work quite well and rapidly approach the best attainable performance. The winners on the prior knowledge track used feature extraction strategies yielding a large number of low-level features. Incorporating prior knowledge in the form of generic coding/smoothing methods to exploit regularities in data is beneficial, but incorporating actual domain knowledge in feature construction is very time-consuming and seldom leads to significant improvements. The AL vs. PK challenge web site remains open for post-challenge submissions: http://www.agnostic.inf.ethz.ch/.
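To make the agnostic-learning setting concrete, the sketch below shows an off-the-shelf classifier trained directly on a flat, anonymized feature table, with no feature engineering. This is a minimal illustration, not the challenge's actual evaluation code: it assumes scikit-learn, uses synthetic data as a stand-in for the pre-formatted AL-track tables, and scores with the balanced error rate, a metric common in such challenges but not named in the abstract.

```python
# Illustrative "agnostic learning" baseline: a black-box learner applied to
# an anonymized feature table, with no domain knowledge or hand-built features.
# The synthetic data is a hypothetical stand-in for a challenge dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Rows are examples, columns are unnamed low-level features.
X, y = make_classification(n_samples=2000, n_features=100,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Off-the-shelf black-box method, used as-is.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# Balanced error rate (BER): average of the error rates on the
# positive and negative classes (assumed metric for illustration).
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
ber = 0.5 * (fp / (fp + tn) + fn / (fn + tp))
print(f"Balanced error rate: {ber:.3f}")
```

A PK-track entry would differ only upstream of this pipeline, replacing the anonymous columns with features constructed from the raw data and its documented meaning.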
Original language: English
Pages (from-to): 544-550
Number of pages: 7
Journal: Neural Networks
Volume: 21
Issue number: 2-3
DOIs
Publication status: Published - 2008
