Pakistan Research Repository
Title of Thesis: Intelligent Expression Blending for Performance Driven Facial Animation

Author(s): Assia Khanam

Institute/University/Department: Department of Computer Engineering, College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Rawalpindi

Session: 2008

Subject: Computer Engineering

Number of Pages: 117

Keywords (extracted from title, table of contents and abstract of thesis): intelligent, expression, blending, performance, driven, facial, animation, facial avatars, motion capture, teleconferencing, entertainment

Abstract
Animated facial avatars are increasingly being incorporated into many applications, including teleconferencing, entertainment, education, and web commerce. In many cases there is a tendency to replace human performers with these virtual actors, owing to the cost and flexibility benefits they offer. However, to be truly acceptable to users, these animated face models must exhibit convincing behavior. One of the most important aspects of this believability is the avatar's use of facial expressions. It is highly desirable that an animated facial model be able, on the one hand, to understand the facial expressions shown by its user and, on the other, to show facial expressions to the user and to adapt its own expressions in response to any change in the situation in which it is immersed.
Motion-capture-based Performance Driven Facial Animation (PDFA) provides a cost-effective and intuitive means of creating expressive facial avatars. In PDFA, the facial actions in an input video are tracked by various methods and re-targeted to any desired facial model, which may or may not be the same as the input face. However, a few limitations hinder the progress of PDFA, and most of them arise from its lack of editing facilities: once an input video has been shot, the output for the target model is limited to what was captured. Existing PDFA systems must resort to frequent re-capture sessions to incorporate any deviation of the output animation vis-à-vis the input motion-capture data. This thesis presents a new approach that adds flexibility and intelligence to PDFA by means of context-sensitive facial expression blending. The proposed approach uses a fuzzy-logic-based framework to automatically decide which facial changes must be blended with the captured facial motion in the light of the present context. The required changes are translated into spatial movements of the facial features, which the synthesis module of the system uses to generate the enhanced facial expressions of the avatar.
Within the above-mentioned scope, the work presented in this thesis covers the following areas:
• Expression Analysis
• Intelligent Systems
• Digital Image Processing
• Expression Synthesis
The presented approach lends PDFA a flexibility it has so far lacked. It has several potential applications in diverse areas, including virtual tutoring, deaf communication, person identification, and the entertainment industry. Experimental results indicate very good analysis and synthesis performance.
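The context-sensitive blending idea summarized above can be sketched in a few lines of code. This is a minimal illustration, not the thesis implementation: the FAP names, the triangular membership functions, the three linguistic terms, the smile prototype, and the 0.6 context weight are all illustrative assumptions. It shows the pipeline the abstract describes — fuzzify captured FAP displacements, mix their membership degrees with those of a context-chosen target expression, and defuzzify back into spatial feature movements.

```python
# Hedged sketch of fuzzy expression blending (all names and parameters
# below are illustrative assumptions, not taken from the thesis).

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy FAP model: each FAP displacement, normalized to [-1, 1],
# is described by three linguistic terms (feet, peak, feet).
TERMS = {
    "low":  (-1.0, -1.0, 0.0),
    "mid":  (-1.0,  0.0, 1.0),
    "high": ( 0.0,  1.0, 1.0),
}

def fuzzify(fap_value):
    """Membership degree of one FAP value in each linguistic term."""
    return {t: tri(fap_value, *p) for t, p in TERMS.items()}

def blend(captured, target, context_weight):
    """Blend captured FAPs toward a context-chosen target expression.
    context_weight in [0, 1]: 0 keeps the capture, 1 reaches the target.
    Membership degrees are mixed, then defuzzified by a centroid over
    the term peaks, yielding crisp feature displacements for synthesis."""
    out = {}
    for fap, cap_val in captured.items():
        tgt_val = target.get(fap, cap_val)
        cap_mu = fuzzify(cap_val)
        tgt_mu = fuzzify(tgt_val)
        mix = {t: (1 - context_weight) * cap_mu[t]
                  + context_weight * tgt_mu[t] for t in TERMS}
        num = sum(mu * TERMS[t][1] for t, mu in mix.items())
        den = sum(mix.values())
        out[fap] = num / den if den else 0.0
    return out

captured = {"lip_corner_raise": 0.2, "brow_lower": 0.1}  # from tracking
smile    = {"lip_corner_raise": 0.9, "brow_lower": 0.0}  # target prototype
blended  = blend(captured, smile, context_weight=0.6)
```

With `context_weight=0`, the blend reproduces the captured motion; raising it pulls the output toward the target prototype, which is the editing freedom the thesis argues plain motion re-targeting lacks.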

Download Full Thesis (1,059 KB)

1. Chapter 0: CONTENTS — page i (97.5 KB)

2. Chapter 1: INTRODUCTION — page 1 (135 KB)
   1.1 Animated Face Models
   1.2 Performance Driven Facial Animation (PDFA)
   1.3 PDFA Challenges and Motivation for the Present Research
   1.4 Problem Statement
   1.5 Organization of the Thesis
   1.6 Summary

3. Chapter 2: RELATED WORK — page 10 (158 KB)
   2.1 Expression Modeling Approaches
   2.2 Expression Parameterization Approaches
   2.3 Expression Analysis Approaches
   2.4 Expression Blending Approaches
   2.5 Approaches for Adding Flexibility to PDFA
   2.6 Summary

4. Chapter 3: THE ROLE OF UNCERTAINTY IN THE PROBLEM DOMAIN — page 22 (220 KB)
   3.1 Sources of Uncertainty
   3.2 Uncertainty Handling Techniques
   3.3 Basic Concepts in Fuzzy Sets and Fuzzy Logic
   3.4 Summary

5. Chapter 4: THEORETICAL FRAMEWORK AND COMPUTATIONAL MODEL FOR INTELLIGENT EXPRESSION BLENDING — page 43 (286 KB)
   4.1 Feature Domain
   4.2 Feature Animation Parameter (FAP) Quantification
   4.3 Fuzzy FAP Modeling
   4.4 Fuzzy Expression Modeling
   4.5 Expression Analysis
   4.6 Expression Blending
   4.7 Defuzzification
   4.8 Animation
   4.9 Summary

6. Chapter 5: IMPLEMENTATION — page 61 (296 KB)
   5.1 Experimental Testbed
   5.2 FAP Membership Functions
   5.3 Expression Model Creation
   5.4 Feature Extraction
   5.5 Summary

7. Chapter 6: RESULTS — page 72 (455 KB)
   6.1 Expression Analysis Results
   6.2 Expression Synthesis and Blending Results
   6.3 Summary

8. Chapter 7: CONCLUSION AND FUTURE WORK — page 85 (55 KB)

9. Chapter 8: APPENDICES & BIBLIOGRAPHY — page 88 (148 KB)