The suite of facial motions that constitutes a single conversational expression can be extremely complex. Fortunately, previous research has shown that only a small subset of the motions actually present in an expression is perceptually necessary for the expression to be properly recognized (Cunningham et al., APGV 2004). Knowing which areas of the face need to move can greatly ease the task of producing realistic facial animations in real-time.
We are using a variety of techniques to help determine the dynamic components of conversational facial expressions. One such technique involves the manipulation of real video sequences. To this end, we have developed a database of conversational facial expressions recorded at 30 Hz at full, non-interlaced, PAL resolution (for more information, see the Video Lab page).
Using software developed in-house by Mario Kleiner, we are able to selectively manipulate regions of the faces in the videos. For example, the first two videos below show one of our actresses agreeing. The first video is the original sequence. The second video shows what the expression looks like when the entire interior of the face is "frozen" (i.e., the face was replaced with a static, neutral texture, leaving the rigid head motion intact). The third video shows the same actress's thinking expression with various subregions of the face "frozen". (Note: the green dots on the hat are part of the tracking rig; for more information, see: http://www.kyb.mpg.de/projects.html?prj=146&user=kleinerm).
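The actual in-house software is considerably more sophisticated, but the core "freezing" idea can be sketched in a few lines. The following is a minimal illustration (not the software described above), assuming the frames have already been stabilized so that a single face-region mask aligns across the sequence; the function name and array layout are our own for this example:

```python
import numpy as np

def freeze_region(frames, neutral_frame, mask):
    """Replace the masked face region in every frame with pixels from a
    static neutral frame, leaving unmasked pixels (and thus any motion
    there) untouched.

    Simplifying assumption for this sketch: the head is stabilized, so
    one (H, W) boolean mask aligns with every frame.

    frames:        (T, H, W, C) uint8 array, the video sequence
    neutral_frame: (H, W, C) uint8 array, a static neutral expression
    mask:          (H, W) boolean array, True where the face is frozen
    """
    frozen = frames.copy()
    # Boolean indexing selects the masked pixels in every frame at once;
    # the (N, C) neutral pixels broadcast across the T frames.
    frozen[:, mask] = neutral_frame[mask]
    return frozen
```

In the real manipulation the rigid head motion is preserved, which would require warping the neutral texture to follow the tracked head pose rather than copying it in place.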
Contact Douglas Cunningham for more information.
DivX codecs and players are available for Windows, GNU/Linux, and Macintosh.
Last updated 28 October 2004. Please contact Mary Ellen Foster with any comments, complaints, or reports of broken links.