A comparison of human and automatic musical genre classification

Abstract
Recently there has been an increasing amount of work in the area of automatic genre classification of music in audio format. In addition to automatically structuring large music collections, such classification can be used as a way to evaluate features for describing musical content. However, the evaluation and comparison of genre classification systems is hindered by users' subjective perception of genre definitions. In this work we describe a set of experiments in automatic musical genre classification. An important contribution of this work is the comparison of the automatic results with human genre classifications on the same dataset. The results show that, although there is room for improvement, genre classification is inherently subjective, and therefore perfect results cannot be expected from either automatic or human classification. The experiments also show that features derived from an auditory model perform similarly to features based on Mel-Frequency Cepstral Coefficients (MFCCs).

Citation


Chicago
Lippens, Stefaan, Jean-Pierre Martens, Tom De Mulder, and G Tzanetakis. 2004. “A Comparison of Human and Automatic Musical Genre Classification.” In 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS, 233–236. New York, NY, USA: IEEE.
APA
Lippens, Stefaan, Martens, J.-P., De Mulder, T., & Tzanetakis, G. (2004). A comparison of human and automatic musical genre classification. 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS (pp. 233–236). Presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing, New York, NY, USA: IEEE.
Vancouver
1.
Lippens S, Martens J-P, De Mulder T, Tzanetakis G. A comparison of human and automatic musical genre classification. 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS. New York, NY, USA: IEEE; 2004. p. 233–6.
MLA
Lippens, Stefaan, Jean-Pierre Martens, Tom De Mulder, et al. “A Comparison of Human and Automatic Musical Genre Classification.” 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS. New York, NY, USA: IEEE, 2004. 233–236. Print.
@inproceedings{404062,
  abstract     = {Recently there has been an increasing amount of work in the area of automatic genre classification of music in audio format. In addition to automatically structuring large music collections, such classification can be used as a way to evaluate features for describing musical content. However, the evaluation and comparison of genre classification systems is hindered by users' subjective perception of genre definitions. In this work we describe a set of experiments in automatic musical genre classification. An important contribution of this work is the comparison of the automatic results with human genre classifications on the same dataset. The results show that, although there is room for improvement, genre classification is inherently subjective, and therefore perfect results cannot be expected from either automatic or human classification. The experiments also show that features derived from an auditory model perform similarly to features based on Mel-Frequency Cepstral Coefficients (MFCCs).},
  author       = {Lippens, Stefaan and Martens, Jean-Pierre and De Mulder, Tom and Tzanetakis, G},
  booktitle    = {2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS},
  isbn         = {0-7803-8484-9},
  issn         = {1520-6149},
  language     = {eng},
  location     = {Montr{\'e}al, QC, Canada},
  pages        = {233--236},
  publisher    = {IEEE},
  title        = {A comparison of human and automatic musical genre classification},
  url          = {http://dx.doi.org/10.1109/ICASSP.2004.1326806},
  year         = {2004},
}
