This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research was done in the context of game-like health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations such as averaging to combine the effects of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions facial expression units.
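The region-based idea described above can be contrasted with naive averaging in a minimal sketch. This is not the authors' implementation: the dimension labels, region centers, action names, and activation values below are all hypothetical placeholders, and the sketch reduces "regionalized facial actions" to a simple nearest-region lookup in a 3D emotion space.

```python
import math

# Hypothetical "expression units": sets of facial actions tied to regions
# of a 3D emotion space. Centers, action names, and weights are
# illustrative only, not values from the paper.
EXPRESSION_UNITS = {
    "joy_region":     {"center": (0.8, 0.5, 0.3),
                       "actions": {"lip_corner_pull": 1.0, "cheek_raise": 0.7}},
    "sadness_region": {"center": (-0.7, -0.4, -0.5),
                       "actions": {"inner_brow_raise": 0.8, "lip_corner_depress": 0.6}},
    "anger_region":   {"center": (-0.6, 0.7, 0.6),
                       "actions": {"brow_lower": 1.0, "lid_tighten": 0.5}},
}

def expression_for(point):
    """Select facial actions from the region nearest the emotion point,
    rather than averaging two prototype expressions parameter-by-parameter."""
    nearest = min(EXPRESSION_UNITS.values(),
                  key=lambda unit: math.dist(point, unit["center"]))
    return nearest["actions"]

# A mixed emotion located in the negative-valence, low-arousal part of the space
print(expression_for((-0.5, -0.3, -0.2)))
```

The contrast with averaging is the point: averaging two expressions' parameters can activate facial actions that no human would combine, whereas a region lookup always returns a set of actions that was validated together for that part of the emotion space.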

Additional Metadata
Journal International Journal of Computer Games Technology
Arya, A., DiPaola, S., & Parush, A. (2009). Perceptually valid facial expressions for character-based applications. International Journal of Computer Games Technology, (1). doi:10.1155/2009/462315