Ghent University Academic Bibliography


A parameterizable spatiotemporal representation of popular dance styles for humanoid dancing characters

Joao Oliveira, Luiz Alberto Naveda (UGent), Fabien Gouyon, Luis Reis, Paulo Sousa and Marc Leman (UGent) (2012) EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING. 18.
abstract
Dance movements are a complex class of human behavior that conveys forms of non-verbal and subjective communication, performed as cultural vocabularies in all human cultures. The singularity of dance forms poses fascinating challenges to computer animation and robotics, which in turn present outstanding opportunities to deepen our understanding of dance by developing models, analyses and syntheses of motion patterns. In this article, we formalize a model for the analysis and representation of popular dance styles of repetitive gestures by specifying the parameters and validation procedures necessary to describe the spatiotemporal elements of the dance movement in relation to the temporal structure of its music (musical meter). Our representation model precisely describes the structure of dance gestures according to the structure of musical meter, at different temporal resolutions, and is flexible enough to convey the variability of the spatiotemporal relation between music structure and movement in space. It yields a compact, discrete mid-level representation of the dance that can be applied to algorithms for generating movements in different humanoid dancing characters. The validation of our representation model rests on two hypotheses: (i) the impact of metric resolution and (ii) the impact of variability on fully and naturally representing a particular dance style of repetitive gestures. We assess these hypotheses numerically and subjectively by analyzing solo dance sequences of Afro-Brazilian samba and American Charleston, captured with a MoCap (Motion Capture) system. From these analyses, we build a set of dance representations modeled with different parameters, and re-synthesize motion sequence variations of the represented dance styles.
To assess the metric hypothesis specifically, we compare the captured dance sequences with repetitive sequences of a fixed dance motion pattern, synthesized at different metric resolutions for both dance styles. To evaluate the variability hypothesis, we compare the same repetitive sequences with others synthesized with variability, by generating and concatenating stochastic variations of the represented dance pattern. The results support the proposition that different dance styles of repetitive gestures may require a minimum and sufficient metric resolution to be fully represented by the proposed representation model. They also suggest that additional information may be required to synthesize variability in the dance sequences while preserving the naturalness of the performance. Nevertheless, we found evidence supporting the use of the proposed dance representation for flexibly modeling and synthesizing dance sequences from different popular dance styles, with potential developments for the generation of expressive and natural movement profiles on humanoid dancing characters.
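The core idea of the abstract — storing a repetitive gesture as a pattern sampled at a chosen metric resolution per musical bar, then synthesizing sequences either by rigid repetition or by concatenating stochastic variations of that pattern — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the 1-D trajectory, and the Gaussian perturbation are hypothetical stand-ins for the paper's representation and variability model.

```python
import numpy as np

rng = np.random.default_rng(0)

def represent_pattern(trajectory, resolution):
    """Resample one bar of a joint trajectory onto `resolution` metric points.

    Higher resolutions capture finer subdivisions of the musical meter
    (beats, half-beats, ...), which is the 'metric resolution' parameter
    discussed in the abstract.
    """
    t_src = np.linspace(0.0, 1.0, len(trajectory))
    t_dst = np.linspace(0.0, 1.0, resolution)
    return np.interp(t_dst, t_src, trajectory)

def synthesize(pattern, n_bars, variability=0.0):
    """Concatenate `n_bars` repetitions of the pattern.

    With variability=0 this is the rigid, fixed-pattern repetition; a
    positive value perturbs each bar with Gaussian noise, a crude stand-in
    for the stochastic variations described in the abstract.
    """
    bars = [pattern + rng.normal(0.0, variability, size=pattern.shape)
            for _ in range(n_bars)]
    return np.concatenate(bars)

# One bar of a (here synthetic) repetitive up-down gesture.
captured = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
pattern = represent_pattern(captured, resolution=16)       # 16 points per bar
fixed = synthesize(pattern, n_bars=4)                      # rigid repetition
varied = synthesize(pattern, n_bars=4, variability=0.05)   # with variability
```

Under this toy setup, comparing `fixed` and `varied` against a captured sequence mirrors the two evaluations the abstract describes: the effect of the resolution parameter and the effect of injected variability.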
Please use this URL to cite or link to this publication: http://hdl.handle.net/1854/LU-2967171
author: Joao Oliveira, Luiz Alberto Naveda, Fabien Gouyon, Luis Reis, Paulo Sousa and Marc Leman
year: 2012
type: journalArticle (original)
publication status: published
keyword: MUSIC, MOTION SYNTHESIS, VARIABILITY, ANIMATION, BEAT
journal title: EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING (EURASIP J. Audio Speech Music Process.)
volume: 18
pages: 14 pages
Web of Science type: Article
Web of Science id: 000307304600001
JCR category: ENGINEERING, ELECTRICAL & ELECTRONIC
JCR impact factor: 0.63 (2012)
JCR rank: 172/242 (2012)
JCR quartile: 3 (2012)
ISSN: 1687-4722
DOI: 10.1186/1687-4722-2012-18
language: English
UGent publication?: yes
classification: A1
copyright statement: I have transferred the copyright for this publication to the publisher
id: 2967171
handle: http://hdl.handle.net/1854/LU-2967171
date created: 2012-08-03 08:00:56
date last changed: 2014-03-20 15:27:01
@article{2967171,
  abstract     = {Dance movements are a complex class of human behavior that conveys forms of non-verbal and subjective communication, performed as cultural vocabularies in all human cultures. The singularity of dance forms poses fascinating challenges to computer animation and robotics, which in turn present outstanding opportunities to deepen our understanding of dance by developing models, analyses and syntheses of motion patterns. In this article, we formalize a model for the analysis and representation of popular dance styles of repetitive gestures by specifying the parameters and validation procedures necessary to describe the spatiotemporal elements of the dance movement in relation to the temporal structure of its music (musical meter). Our representation model precisely describes the structure of dance gestures according to the structure of musical meter, at different temporal resolutions, and is flexible enough to convey the variability of the spatiotemporal relation between music structure and movement in space. It yields a compact, discrete mid-level representation of the dance that can be applied to algorithms for generating movements in different humanoid dancing characters. The validation of our representation model rests on two hypotheses: (i) the impact of metric resolution and (ii) the impact of variability on fully and naturally representing a particular dance style of repetitive gestures. We assess these hypotheses numerically and subjectively by analyzing solo dance sequences of Afro-Brazilian samba and American Charleston, captured with a MoCap (Motion Capture) system. From these analyses, we build a set of dance representations modeled with different parameters, and re-synthesize motion sequence variations of the represented dance styles.
To assess the metric hypothesis specifically, we compare the captured dance sequences with repetitive sequences of a fixed dance motion pattern, synthesized at different metric resolutions for both dance styles. To evaluate the variability hypothesis, we compare the same repetitive sequences with others synthesized with variability, by generating and concatenating stochastic variations of the represented dance pattern. The results support the proposition that different dance styles of repetitive gestures may require a minimum and sufficient metric resolution to be fully represented by the proposed representation model. They also suggest that additional information may be required to synthesize variability in the dance sequences while preserving the naturalness of the performance. Nevertheless, we found evidence supporting the use of the proposed dance representation for flexibly modeling and synthesizing dance sequences from different popular dance styles, with potential developments for the generation of expressive and natural movement profiles on humanoid dancing characters.},
  author       = {Oliveira, Joao and Naveda, Luiz Alberto and Gouyon, Fabien and Reis, Luis and Sousa, Paulo and Leman, Marc},
  issn         = {1687-4722},
  journal      = {EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING},
  keyword      = {MUSIC,MOTION SYNTHESIS,VARIABILITY,ANIMATION,BEAT},
  language     = {eng},
  pages        = {14},
  title        = {A parameterizable spatiotemporal representation of popular dance styles for humanoid dancing characters},
  url          = {http://dx.doi.org/10.1186/1687-4722-2012-18},
  volume       = {18},
  year         = {2012},
}

Chicago
Oliveira, Joao, Luiz Alberto Naveda, Fabien Gouyon, Luis Reis, Paulo Sousa, and Marc Leman. 2012. “A Parameterizable Spatiotemporal Representation of Popular Dance Styles for Humanoid Dancing Characters.” EURASIP Journal on Audio Speech and Music Processing 18.
APA
Oliveira, J., Naveda, L. A., Gouyon, F., Reis, L., Sousa, P., & Leman, M. (2012). A parameterizable spatiotemporal representation of popular dance styles for humanoid dancing characters. EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 18.
Vancouver
1. Oliveira J, Naveda LA, Gouyon F, Reis L, Sousa P, Leman M. A parameterizable spatiotemporal representation of popular dance styles for humanoid dancing characters. EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING. 2012;18.
MLA
Oliveira, Joao, Luiz Alberto Naveda, Fabien Gouyon, et al. “A Parameterizable Spatiotemporal Representation of Popular Dance Styles for Humanoid Dancing Characters.” EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING 18 (2012): n. pag. Print.