
A model-based sonification system for directional movement behavior

Pieter-Jan Maes (UGent), Marc Leman (UGent), and Micheline Lesaffre (UGent)
(2010)
Abstract
Computational algorithms are presented that create a virtual model of a person’s kinesphere (i.e., a concept from Laban denoting the space immediately surrounding a person’s body and reachable by the upper limbs). This model is approached as a virtual sound object/instrument (VSO) that can be “played” by moving the upper limbs in particular directions. As such, it provides an alternative to visual qualitative movement analysis tools, such as bar plots. This model-based sonification system emphasizes the role of interaction in sonification. Moreover, this study claims that integrating intentionality and expressivity into auditory biofeedback interaction systems is necessary to make the sonification process more precise and transparent. A method based on the embodied music cognition theory is proposed that does this without abandoning the scientific, systematic principles underlying the process of sonification.
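To make the idea concrete, the minimal Python sketch below illustrates the kind of direction-to-sound mapping the abstract describes: the hand’s position relative to the torso is quantized into kinesphere sectors, and each sector drives a synthesis parameter. All names, the sector count, and the pitch/amplitude mapping are illustrative assumptions, not the authors’ implementation.

import math

# Hypothetical sketch: quantize the direction of an upper-limb movement
# within the kinesphere and map it to sound parameters. Sector count and
# mappings are illustrative assumptions, not the paper's implementation.

def direction_sector(torso, hand, n_sectors=8):
    """Return the horizontal sector (0..n_sectors-1) of the hand relative
    to the torso, dividing the kinesphere into equal wedges (y is up)."""
    dx = hand[0] - torso[0]
    dz = hand[2] - torso[2]
    angle = math.atan2(dz, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_sectors))

def sonify(sector, speed, base_pitch=220.0):
    """Map a kinesphere sector to pitch and movement speed to amplitude.
    Returns (frequency in Hz, amplitude 0..1) for a synthesis engine."""
    freq = base_pitch * 2 ** (sector / 12)  # one semitone per sector
    amp = min(1.0, speed)                   # faster movement, louder sound
    return freq, amp

# Example: right hand 30 cm to the front-right of the torso, moving at 0.6 m/s
torso = (0.0, 1.2, 0.0)
hand = (0.2, 1.3, 0.2)
freq, amp = sonify(direction_sector(torso, hand), speed=0.6)
print(f"{freq:.1f} Hz at amplitude {amp:.2f}")

In a real interactive setup, the (frequency, amplitude) pair would be streamed continuously to a synthesis engine as the motion-capture feed updates, so that the directional movement itself “plays” the virtual sound object.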
Keywords
expressive gesture, sonification, human-computer interaction, embodiment

Downloads

  • Maes ISON.pdf: full text | open access | PDF | 1.94 MB

Citation


Chicago
Maes, Pieter-Jan, Marc Leman, and Micheline Lesaffre. 2010. “A Model-based Sonification System for Directional Movement Behavior.” Paper presented at the Interactive Sonification Workshop (ISon), KTH, Stockholm, Sweden.
APA
Maes, P.-J., Leman, M., & Lesaffre, M. (2010). A model-based sonification system for directional movement behavior. Presented at the Interactive Sonification Workshop (ISon).
Vancouver
1. Maes P-J, Leman M, Lesaffre M. A model-based sonification system for directional movement behavior. 2010.
MLA
Maes, Pieter-Jan, Marc Leman, and Micheline Lesaffre. “A Model-based Sonification System for Directional Movement Behavior.” Interactive Sonification Workshop (ISon), 2010.
@inproceedings{925845,
  abstract     = {Computational algorithms are presented that create a virtual model of a person's kinesphere (i.e., a concept from Laban denoting the space immediately surrounding a person's body and reachable by the upper limbs). This model is approached as a virtual sound object/instrument (VSO) that can be ``played'' by moving the upper limbs in particular directions. As such, it provides an alternative to visual qualitative movement analysis tools, such as bar plots. This model-based sonification system emphasizes the role of interaction in sonification. Moreover, this study claims that integrating intentionality and expressivity into auditory biofeedback interaction systems is necessary to make the sonification process more precise and transparent. A method based on the embodied music cognition theory is proposed that does this without abandoning the scientific, systematic principles underlying the process of sonification.},
  author       = {Maes, Pieter-Jan and Leman, Marc and Lesaffre, Micheline},
  booktitle    = {Proceedings of the Interactive Sonification Workshop (ISon)},
  keywords     = {expressive gesture, sonification, human-computer interaction, embodiment},
  language     = {eng},
  location     = {KTH, Stockholm, Sweden},
  pages        = {4},
  title        = {A model-based sonification system for directional movement behavior},
  year         = {2010},
}