This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial-feature level. The personality and mood spaces draw on findings from behavioral psychology to relate perceived personality types and emotional states to facial actions and expressions through two-dimensional models of personality and emotion. The knowledge space encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three higher-level spaces provide flexible means of designing complex personality types, facial expressions, and dynamic interactive scenarios.
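The abstract mentions a specially designed XML-based language for the knowledge space but does not reproduce it. As a purely illustrative sketch of what such a scenario description might look like, where every element and attribute name is hypothetical and not taken from the paper, a fragment could read:

```xml
<!-- Hypothetical sketch only: element/attribute names are invented
     to illustrate the idea of an XML scenario language, not the
     actual language defined in the paper. -->
<scenario name="greeting">
  <!-- Personality and mood set points in two-dimensional models -->
  <personality dominance="0.7" affiliation="0.4"/>
  <mood valence="0.6" arousal="0.3"/>
  <!-- Knowledge: tasks and decision rules driving facial actions -->
  <task id="greet-user">
    <on event="user-approaches">
      <expression type="smile" intensity="0.8"/>
      <speak text="Hello!"/>
    </on>
  </task>
</scenario>
```

In such a design, the personality and mood elements would bias how the geometry-level (MPEG-4-style) facial parameters render each expression, while the task rules supply the decision-making logic.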

Additional Metadata
Persistent URL dx.doi.org/10.1155/2007/48757
Journal EURASIP Journal on Image and Video Processing
Citation
Arya, A., & DiPaola, S. (2007). Multispace behavioral model for face-based affective social agents. EURASIP Journal on Image and Video Processing, 2007. doi:10.1155/2007/48757