Ghent University Academic Bibliography

A comparison of human and automatic musical genre classification

Stefaan Lippens (UGent), Jean-Pierre Martens (UGent), Tom De Mulder and G Tzanetakis (2004) 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS. p. 233-236
abstract
Recently there has been an increasing amount of work in the area of automatic genre classification of music in audio format. In addition to automatically structuring large music collections, such classification can be used as a way to evaluate features for describing musical content. However, the evaluation and comparison of genre classification systems is hindered by the subjective perception of genre definitions by users. In this work we describe a set of experiments in automatic musical genre classification. An important contribution of this work is the comparison of the automatic results with human genre classifications on the same dataset. The results show that, although there is room for improvement, genre classification is inherently subjective and therefore perfect results cannot be expected from either automatic or human classification. The experiments also show that features derived from an auditory model perform similarly to features based on Mel-Frequency Cepstral Coefficients (MFCC).
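The abstract contrasts features derived from an auditory model with MFCC-based features for genre classification. As a minimal illustrative sketch only (not the authors' actual system), the following Python code shows what a generic MFCC-plus-classifier genre pipeline of this kind can look like; the use of librosa and scikit-learn, the file names, and the genre labels are assumptions made for the example.

import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_track_features(path, n_mfcc=13, sr=22050):
    """Summarise one audio file as the per-coefficient mean and standard deviation of its MFCCs."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training data: (file name, genre label) pairs -- placeholders only.
tracks = [("blues_001.wav", "blues"), ("rock_001.wav", "rock")]
X = np.vstack([mfcc_track_features(path) for path, _ in tracks])
y = [label for _, label in tracks]

clf = SVC(kernel="rbf").fit(X, y)  # any standard classifier would serve here
print(clf.predict([mfcc_track_features("unknown_track.wav")]))  # placeholder file name

Any off-the-shelf classifier could stand in for the SVM; the point is only that each track is reduced to a fixed-length MFCC summary vector before classification, which is the kind of MFCC-based front end the abstract compares against auditory-model features.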
Please use this URL to cite or link to this publication: http://hdl.handle.net/1854/LU-404062
author
Stefaan Lippens (UGent), Jean-Pierre Martens (UGent), Tom De Mulder and G Tzanetakis
organization
year
2004
type
conference
publication status
published
subject
in
2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS
pages
233 - 236
publisher
IEEE
place of publication
New York, NY, USA
conference name
IEEE International Conference on Acoustics, Speech, and Signal Processing
conference location
Montréal, QC, Canada
conference start
2004-05-17
conference end
2004-05-21
Web of Science type
Article
Web of Science id
000222179500059
ISSN
1520-6149
ISBN
0-7803-8484-9
DOI
10.1109/ICASSP.2004.1326806
language
English
UGent publication?
yes
classification
P1
id
404062
handle
http://hdl.handle.net/1854/LU-404062
date created
2008-05-14 16:22:00
date last changed
2010-06-30 13:57:08
@inproceedings{404062,
  abstract     = {Recently there has been an increasing amount of work in the area of automatic genre classification of music in audio format. In addition to automatically structuring large music collections such classification can be used as a way to evaluate features for describing musical content. However the evaluation and comparison of genre classification systems is hindered by the subjective perception of genre definitions by users. In this work we describe a set of experiments in automatic musical genre classification. An important contribution of this work is the comparison of the automatic results with human genre classifications on the same dataset. The results show that, although there is room for improvement, genre classification is inherently subjective and therefore perfect results can not be expected neither from automatic nor human classification. The experiments also show that features derived from an auditory model have similar performance with features based on Mel-Frequency Cepstral Coefficients (MFCC).},
  author       = {Lippens, Stefaan and Martens, Jean-Pierre and De Mulder, Tom and Tzanetakis, G},
  booktitle    = {2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS},
  isbn         = {0-7803-8484-9},
  issn         = {1520-6149},
  language     = {eng},
  location     = {Montr{\'e}al, QC, Canada},
  pages        = {233--236},
  publisher    = {IEEE},
  title        = {A comparison of human and automatic musical genre classification},
  url          = {http://dx.doi.org/10.1109/ICASSP.2004.1326806},
  year         = {2004},
}

Chicago
Lippens, Stefaan, Jean-Pierre Martens, Tom De Mulder, and G Tzanetakis. 2004. “A Comparison of Human and Automatic Musical Genre Classification.” In 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS, 233–236. New York, NY, USA: IEEE.
APA
Lippens, S., Martens, J.-P., De Mulder, T., & Tzanetakis, G. (2004). A comparison of human and automatic musical genre classification. 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS (pp. 233–236). Presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing, Montréal, QC, Canada. New York, NY, USA: IEEE.
Vancouver
1.
Lippens S, Martens J-P, De Mulder T, Tzanetakis G. A comparison of human and automatic musical genre classification. 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS. New York, NY, USA: IEEE; 2004. p. 233–6.
MLA
Lippens, Stefaan, Jean-Pierre Martens, Tom De Mulder, et al. “A Comparison of Human and Automatic Musical Genre Classification.” 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PROCEEDINGS. New York, NY, USA: IEEE, 2004. 233–236. Print.