Music is a complex form of communication through which both artists and cultures express their ideas and identity. When we listen to music we do not simply perceive the acoustics of the sound in a temporal pattern, but also its relationship to other sounds, songs, artists, cultures and emotions. Owing to the complex, culturally defined distribution of acoustic and temporal patterns amongst these relationships, it is unlikely that a general audio similarity metric will be suitable as a music similarity metric. Hence, we are unlikely to be able to emulate human perception of the similarity of songs without making reference to some historical or cultural context. The success of music classification systems demonstrates that this difficulty can be overcome by learning the complex relationships between audio features and the metadata classes to be predicted. We present two approaches to the construction of music similarity metrics based on the use of a classification model to extract high-level descriptions of the music. These approaches achieve a very high level of performance and do not produce the occasional spurious results, or 'hubs', that conventional music similarity techniques produce.
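The abstract's core idea can be illustrated with a minimal sketch: a classifier's per-class output (for example, genre probabilities) serves as a high-level description of each song, and similarity is computed between those descriptions rather than between raw audio features. The profile vectors and the choice of cosine similarity below are illustrative assumptions, not the paper's actual method or data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two class-probability profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical classifier outputs: P(genre | song) over three genres.
song_a = [0.70, 0.20, 0.10]
song_b = [0.60, 0.30, 0.10]  # similar genre profile to song_a
song_c = [0.05, 0.10, 0.85]  # dominated by a different genre

sim_ab = cosine(song_a, song_b)
sim_ac = cosine(song_a, song_c)
```

Here `sim_ab` exceeds `sim_ac`, reflecting that songs with similar high-level (genre) profiles are judged more similar, regardless of how their low-level acoustics compare.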
Number of pages: 8
Publication status: Published - 2006
Event: 1st ACM Workshop on Audio and Music Computing Multimedia, Santa Barbara, United States, 23 Oct 2006 – 27 Oct 2006