In this paper, we describe a modular, multi-dimensional parameter space for real-time, game-based facial animation. Faces are our most expressive communication tools. Therefore, a synthetic facial creation and animation system should have its own tailored authoring environment rather than relying on general-purpose tools from image editing, 2D, and 3D animation. This environment would take advantage of a knowledge space of face types, expressions, and behavior, encoding known facial knowledge and meaning into a comprehensive, intuitive facial language and set of user tools. Since faces and facial expression work on so many cognitive levels, we propose a multi-dimensional parameter space called FaceSpace as the basic face model, and a comprehensive authoring environment based on this model. We describe the underlying mechanisms of our environment, and also demonstrate its early game applications and content-creation process. Copyright 2007 ACM.
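The abstract gives no implementation details of FaceSpace, but the core idea of a multi-dimensional parameter space for faces can be sketched roughly as follows. This is a purely illustrative toy, not the authors' model: the parameter names (`brow_raise`, `mouth_open`, `smile`), the 0..1 value range, and the linear blending scheme are all assumptions for demonstration.

```python
# Hypothetical sketch of a face parameter space in the spirit of FaceSpace.
# Parameter names, ranges, and the blending rule are assumptions, not taken
# from the paper.
from dataclasses import dataclass, field


@dataclass
class FacePose:
    """A point in a hypothetical multi-dimensional face parameter space.

    Each named parameter is a normalized value in [0, 1].
    """
    params: dict = field(default_factory=dict)

    def blend(self, other: "FacePose", t: float) -> "FacePose":
        """Linearly interpolate toward another pose by factor t in [0, 1]."""
        keys = set(self.params) | set(other.params)
        return FacePose({
            k: (1 - t) * self.params.get(k, 0.0) + t * other.params.get(k, 0.0)
            for k in keys
        })


# Two example points in the space: a neutral face and a surprised face.
neutral = FacePose({"brow_raise": 0.0, "mouth_open": 0.0, "smile": 0.0})
surprise = FacePose({"brow_raise": 1.0, "mouth_open": 0.8, "smile": 0.1})

# An animation frame halfway between the two expressions.
half = neutral.blend(surprise, 0.5)
print(half.params)
```

In an authoring environment of the kind the abstract describes, expression presets like these would be points in the space, and real-time animation would amount to moving a current pose through that space over time.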

Additional Metadata
Keywords: communication systems, facial animation, gaming
Persistent URL: dx.doi.org/10.1145/1328202.1328225
Conference: 2007 Conference on Future Play, Future Play '07
Citation
DiPaola, S. (Steve), & Arya, A. (2007). A framework for socially communicative faces for game and interactive learning applications. Presented at the 2007 Conference on Future Play, Future Play '07. doi:10.1145/1328202.1328225