Capturing Gestures for Expressive Sound Control

Session Title:

  • Sound and Interaction

Presentation Title:

  • Capturing Gestures for Expressive Sound Control

Abstract:

  • We present a novel approach for live performances, giving musicians or dancers extended control over the sound rendering of their performance. Unlike the usual sound rendering of a performance, where sounds are triggered externally by specific events in the scene, or typical augmented instruments, which track the gestures used to play the instrument in order to expand its possibilities, our approach lets performers configure the sound effects they produce in a way that involves the whole body.

    We developed a Max/MSP toolbox to receive, decode, and analyze the signals from a set of lightweight wireless sensors that can be worn by performers. Each sensor node contains a digital 3-axis accelerometer, magnetometer, and gyroscope, plus up to 6 analog channels for connecting additional external sensors (pressure, flexion, light, etc.). The received data is decoded and scaled, and reliable posture information is extracted by fusing the data from the sensors mounted on each node. A visualization system displays the posture/attitude of each node, as well as the smoothed and maximum values of the individual sensing axes. Unlike most commercial systems, our Max/MSP toolbox makes it easy for users to set the many available parameters, allowing them to tailor the system and optimize the bandwidth. Finally, we provide a real-time implementation of a gesture recognition tool based on Dynamic Time Warping (DTW), with an original "multi-grid" DTW algorithm that does not require prior segmentation. We also offer users different mapping tools for interactive projects, integrating 1-D, 2-D, and 3-D interpolation tools.
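
    As a point of reference for the DTW-based recognition described above, here is a minimal sketch of classic DTW between a recorded gesture template and an incoming gesture, written in Python with NumPy. All names, shapes, and values are illustrative assumptions; the toolbox's "multi-grid" variant, which avoids prior segmentation, is not reproduced here.

```python
import numpy as np

def dtw_distance(template: np.ndarray, gesture: np.ndarray) -> float:
    """Classic dynamic time warping distance between two gesture
    sequences of shape (frames, features), e.g. 3-axis accelerometer
    frames. Plain DTW assumes both sequences are already segmented."""
    n, m = len(template), len(gesture)
    # cost[i, j] = cumulative cost of the best alignment of
    # template[:i] against gesture[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - gesture[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # advance template only
                                 cost[i, j - 1],      # advance gesture only
                                 cost[i - 1, j - 1])  # match both frames
    return float(cost[n, m])

# Example: compare an incoming gesture against a recorded template.
template = np.random.randn(50, 3)  # reference gesture, 50 frames x 3 axes
gesture = np.random.randn(60, 3)   # live gesture; a different length is fine
print(dtw_distance(template, gesture))
```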

    We focused on extracting short-term features that detect hits and provide information about the intensity and direction of those hits, in order to drive percussive synthesis models. Unlike available systems, we propose a sound synthesis that takes into account the changes of direction and orientation immediately preceding the detected hits, so that the sounds produced depend on the preparation gestures. Because of real-time performance constraints, we direct our sound synthesis towards a granular approach, which manipulates atomic sound grains to compose sound events. Our synthesis procedure specifically targets consistent sound events, sound variety, and expressive rendering of the composition.
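
    To make the short-term feature idea concrete, the sketch below detects hits as local peaks of acceleration magnitude and summarizes the preparation movement immediately before each hit. The threshold, window length, and all names are illustrative assumptions, not the actual features or parameters of the system.

```python
import numpy as np

def detect_hits(acc: np.ndarray, threshold: float = 2.5, prep_window: int = 10):
    """Detect percussive hits in a (frames, 3) accelerometer stream.

    A hit is a local maximum of acceleration magnitude above
    `threshold`. For each hit we report its intensity and the mean
    direction of the preparation gesture over the preceding
    `prep_window` frames."""
    mag = np.linalg.norm(acc, axis=1)
    hits = []
    for t in range(1, len(mag) - 1):
        if mag[t] > threshold and mag[t] >= mag[t - 1] and mag[t] > mag[t + 1]:
            prep = acc[max(0, t - prep_window):t]  # preparation frames
            direction = prep.mean(axis=0)          # average movement direction
            norm = np.linalg.norm(direction)
            if norm > 0.0:
                direction = direction / norm       # normalize to a unit vector
            hits.append({"frame": t,
                         "intensity": float(mag[t]),
                         "direction": direction})
    return hits

# Example: scan a simulated accelerometer stream for hits. In a granular
# setting, intensity could scale grain amplitude while the preparation
# direction selects among grain corpora.
acc = np.random.randn(200, 3)
for hit in detect_hits(acc):
    print(hit["frame"], round(hit["intensity"], 2), hit["direction"])
```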

