Procedurally generated audio is an important method for automatically synthesizing realistic sounds for computer animations and virtual environments. While synthesis techniques for rigid bodies have been well studied, few publications have tackled the challenge of synthesizing sounds for soft bodies. In this paper, we propose a data-driven approach that automatically generates audio for given soft-body animations. Our method uses granular synthesis to extract a database of sound grains from real-world recordings and then retargets these grains according to the motion of the input animation. We demonstrate the effectiveness of this method on a variety of soft-body animations, including a bouncing basketball, apple slicing, hand clapping, and a jelly simulation.
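
To illustrate the general idea of granular synthesis driven by animation motion, the sketch below slices a recording into windowed grains and triggers them at impact events derived from an animation. This is only a minimal, hypothetical illustration of the pipeline described in the abstract; the grain segmentation criteria, motion features, and retargeting rules used in the paper are not reproduced here, and all function names, thresholds, and the toy input data are assumptions.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def extract_grains(recording, grain_len=2048, energy_thresh=0.01):
    """Slice a recording into short, Hann-windowed grains above an energy threshold.

    A simplified stand-in for building a grain database from real-world recordings.
    """
    grains = []
    for start in range(0, len(recording) - grain_len, grain_len // 2):
        grain = recording[start:start + grain_len]
        if np.sqrt(np.mean(grain ** 2)) > energy_thresh:
            grains.append(grain * np.hanning(grain_len))  # window to avoid clicks
    return grains

def retarget(grains, impact_times, impact_strengths, duration):
    """Place randomly chosen grains at animation-derived impact times, scaled by strength."""
    out = np.zeros(int(duration * SR))
    for t, s in zip(impact_times, impact_strengths):
        grain = grains[np.random.randint(len(grains))]
        i = int(t * SR)
        j = min(i + len(grain), len(out))
        out[i:j] += s * grain[:j - i]
    return out

# Toy usage: a synthetic "recording" and impact events from a hypothetical animation.
recording = np.random.randn(SR) * np.exp(-np.linspace(0, 8, SR))  # decaying noise burst
grains = extract_grains(recording)
audio = retarget(grains, impact_times=[0.2, 0.9, 1.5],
                 impact_strengths=[1.0, 0.6, 0.3], duration=2.0)
```

In a full system, the impact times and strengths would come from the soft-body simulation (e.g., contact events or deformation rates), and grain selection would be informed by the motion rather than chosen at random.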

Additional Metadata
Keywords Data-driven methods, Granular synthesis, Soft-body animation, Sound synthesis
Persistent URL dx.doi.org/10.1145/3243274.3243285
Conference 2018 International Audio Mostly Conference - A Conference on Interaction with Sound: Sound in Immersion and Emotion, AM 2018
Citation
Su, F. (Feng), & Joslin, C. (2018). Procedurally-generated audio for soft-body animations. In ACM International Conference Proceeding Series. doi:10.1145/3243274.3243285