Intelligent Expression Blending For Performance Driven Facial Animation

KHANAM, ASSIA (2008) Intelligent Expression Blending For Performance Driven Facial Animation. Doctoral thesis, National University of Sciences and Technology (NUST), Rawalpindi.


Animated facial avatars are increasingly incorporated into applications including teleconferencing, entertainment, education, and web commerce. In many cases there is a tendency to replace human performers with these virtual actors, owing to the benefits they offer in cost and flexibility. However, these animated face models must exhibit convincing behaviour to be truly acceptable to users. One of the most important aspects of this believability is an avatar's use of facial expressions. It is highly desirable that an animated facial model be able, on the one hand, to understand the facial expressions shown by its user and, on the other, to show facial expressions to the user and to adapt its own expressions in response to any change in the situation in which it is immersed.

Motion-capture based Performance Driven Facial Animation (PDFA) provides a cost-effective and intuitive means of creating expressive facial avatars. In PDFA, the facial actions in an input facial video are tracked by various methods and re-targeted to any desired facial model, which may or may not be the same as the input face. However, a few limitations are hindering the progress of PDFA, and most of them arise from a lack of editing facilities: once an input video has been shot, it is all that can be generated for the target model. Existing PDFA systems must resort to frequent re-capture sessions to incorporate any deviation of the output animation from the input motion-capture data.

This thesis presents a new approach that adds flexibility and intelligence to PDFA by means of context-sensitive facial expression blending. The proposed approach uses a fuzzy-logic-based framework to automatically decide the facial changes that must be blended with the captured facial motion in light of the present context.
The required changes are translated into spatial movements of the facial features, which the synthesis module of the system uses to generate the enhanced facial expressions of the avatar. Within this scope, the work presented in this thesis covers the following areas: expression analysis, intelligent systems, digital image processing, and expression synthesis. The presented approach lends PDFA a flexibility it has so far lacked, and it has several potential applications in diverse areas including virtual tutoring, deaf communication, person identification, and the entertainment industry. Experimental results indicate very good analysis and synthesis performance.
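To make the fuzzy-logic blending idea concrete, the following is a minimal sketch, not the thesis's actual system: the membership functions, the rule base, the context variable, and the feature parameterisation are all hypothetical placeholders. It shows the general Mamdani-style pattern the abstract describes: fuzzify a context score, fire rules that propose a blend weight, defuzzify, and apply that weight when mixing an expression overlay into the captured feature motion.

```python
# Hypothetical sketch of context-sensitive expression blending via fuzzy logic.
# All variable names, rules, and numbers are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def blend_weight(context):
    """Map a context score in [0, 1] to a blend weight for an expression
    overlay, using three fuzzy sets and singleton rule consequents."""
    low = tri(context, -0.5, 0.0, 0.5)
    med = tri(context, 0.0, 0.5, 1.0)
    high = tri(context, 0.5, 1.0, 1.5)
    # Rules (hypothetical): low context -> weight 0.1,
    # medium -> 0.5, high -> 0.9. Defuzzify as a weighted average.
    num = low * 0.1 + med * 0.5 + high * 0.9
    den = low + med + high
    return num / den if den else 0.0

def blend_features(captured, expression, w):
    """Add a weighted expression template to captured feature displacements."""
    return [c + w * e for c, e in zip(captured, expression)]

# Usage: blend a "smile" template into tracked motion-capture displacements.
captured = [0.0, 1.2, -0.4]   # tracked feature displacements (made up)
smile = [0.5, 0.8, 0.3]       # expression template (made up)
w = blend_weight(0.8)         # strongly positive context -> strong blend
output = blend_features(captured, smile, w)
```

In a real system of this kind, the rule base would encode the context model (for example the emotional situation of the avatar), and the output weights would drive per-feature spatial offsets rather than a single global scalar.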

Item Type: Thesis (Doctoral)
Uncontrolled Keywords: Intelligent, Expression, Blending, Performance, Driven, Facial, Animation, facial avatars, Motion-capture, teleconferencing, entertainment
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Date Deposited: 25 Aug 2017 10:21
Last Modified: 25 Aug 2017 10:21
