We propose a method to extract emotional data from a piece of music and then use that data, via a remapping algorithm, to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspects of music and on our parametric behavioral head model for facial animation. We address the issue of affective communication remapping in general, i.e., the translation of affective content (e.g., emotions and mood) from one communication form to another. We report results from our MusicFace system, which uses these techniques to automatically create emotional facial animations from multi-instrument polyphonic music scores in MIDI format and a remapping rule set.
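
As a rough illustration of the kind of remapping the abstract describes (not the authors' actual rule set), the sketch below maps coarse musical features extracted from MIDI note events to a valence/arousal emotion estimate, and then to facial animation parameters. The feature choices (tempo and loudness driving arousal, register and loudness driving valence) echo common findings in music-emotion studies, but every name, weight, and threshold here is an illustrative assumption, as are the face parameter names a parametric head model might expose.

```python
from dataclasses import dataclass

# Hypothetical note event extracted from a MIDI score; field names are
# illustrative assumptions, not the MusicFace data model.
@dataclass
class NoteEvent:
    pitch: int       # MIDI note number (0-127)
    velocity: int    # MIDI velocity (0-127)
    start: float     # onset time in seconds
    duration: float  # note length in seconds

def estimate_emotion(notes, tempo_bpm):
    """Map coarse musical features to a (valence, arousal) pair in [0, 1].

    Weights and normalization constants are made up for illustration.
    """
    if not notes:
        return 0.5, 0.5
    avg_velocity = sum(n.velocity for n in notes) / len(notes)
    avg_pitch = sum(n.pitch for n in notes) / len(notes)
    arousal = min(1.0, tempo_bpm / 180.0) * 0.6 + (avg_velocity / 127.0) * 0.4
    valence = (avg_pitch / 127.0) * 0.5 + (avg_velocity / 127.0) * 0.5
    return valence, arousal

def remap_to_face(valence, arousal):
    """Translate the emotion estimate into facial animation parameters.

    Parameter names (smile, brow_raise, eye_openness) are placeholders
    for whatever a parametric behavioral head model would expose.
    """
    return {
        "smile": valence,                     # happier music -> stronger smile
        "brow_raise": arousal * 0.8,          # high-energy music lifts the brows
        "eye_openness": 0.5 + arousal * 0.5,  # arousal widens the eyes
    }

if __name__ == "__main__":
    notes = [NoteEvent(64, 90, 0.0, 0.5), NoteEvent(67, 100, 0.5, 0.5)]
    valence, arousal = estimate_emotion(notes, tempo_bpm=140)
    print(remap_to_face(valence, arousal))
```

In a full system such rules would be applied over sliding time windows of the score rather than the whole piece, so the face animates continuously as the music's affective content changes.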

affective communication, data-driven animation, facial animation, procedural art
dx.doi.org/10.1145/1183316.1183337
Sandbox Symposium 2006: ACM SIGGRAPH Video Game Symposium, Sandbox '06
Carleton University

DiPaola, S. (Steve), & Arya, A. (2006). Emotional remapping of music to facial animation. Presented at the Sandbox Symposium 2006: ACM SIGGRAPH Video Game Symposium, Sandbox '06. doi:10.1145/1183316.1183337