
Facial expression analysis with graphical models

Facial expression recognition has become an active research topic in recent years due to its applications in human-computer interfaces and data-driven animation. In this thesis, we focus on how to effectively use domain, temporal and categorical information about facial expressions to help computers understand human emotions. Over the past decades, many techniques (such as neural networks, Gaussian processes and support vector machines) have been applied to facial expression analysis. Recently, graphical models have emerged as a general framework for applying probabilistic models, and they provide a natural framework for describing the generative process of facial expressions. However, these models often suffer from too many latent variables or overly complex model structures, which makes learning and inference difficult. In this thesis, we analyze the deformation of facial expressions by introducing recently developed graphical models (e.g. the latent topic model) and by improving the recognition ability of widely used models (e.g. the HMM).

In this thesis, we develop three different graphical models with different representational assumptions: categories represented by prototypes, by sets of exemplars, or by topics in between. Our first model incorporates an exemplar-based representation into graphical models. To further improve the computational efficiency of the proposed model, we build it in a local linear subspace constructed by principal component analysis.
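As a rough illustration of the local linear subspace idea (not the thesis implementation), the following Python sketch fits a PCA subspace to facial feature vectors and compares a probe against stored exemplars inside that subspace; the feature dimensions, the choice of exemplars and the nearest-exemplar distance are illustrative assumptions.

    import numpy as np

    def pca_subspace(X, n_components):
        """Fit a linear subspace to feature vectors X (n_samples x n_dims)."""
        mean = X.mean(axis=0)
        # Principal directions from the SVD of the centred data.
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_components]

    def project(x, mean, basis):
        """Project a feature vector onto the learned subspace."""
        return basis @ (x - mean)

    # Illustrative use: match a probe face to exemplars in the subspace.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 60))      # hypothetical training features
    mean, basis = pca_subspace(features, n_components=10)
    exemplars = np.array([project(f, mean, basis) for f in features[:5]])
    probe = project(features[6], mean, basis)
    nearest = int(np.argmin(np.linalg.norm(exemplars - probe, axis=1)))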

The second model is an extension of the recently developed topic model: it introduces temporal and categorical information into the Latent Dirichlet Allocation (LDA) model. In our discriminative temporal topic model (DTTM), temporal information is integrated by placing an asymmetric Dirichlet prior over the document-topic distributions, and the discriminative ability is improved by a supervised term weighting scheme. We describe the resulting DTTM in detail and show how it can be applied to facial expression recognition.
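The exact form of the asymmetric prior is not given in this abstract, so the Python sketch below is only a plausible reading of the idea: the Dirichlet pseudo-counts for the current frame are biased toward the topic proportions estimated for the previous frame. The mixing weight lam, the base concentration base_alpha and the posterior-mean update are assumptions made for illustration.

    import numpy as np

    def asymmetric_prior(prev_theta, base_alpha=0.1, lam=0.5):
        """Dirichlet pseudo-counts biased toward the previous frame's topics."""
        K = len(prev_theta)
        return (1.0 - lam) * np.full(K, base_alpha) + lam * base_alpha * K * prev_theta

    def estimate_theta(topic_counts, alpha):
        """Posterior mean of the document-topic distribution under prior alpha."""
        post = topic_counts + alpha
        return post / post.sum()

    # Illustrative frame-by-frame update for a 4-topic model.
    theta_prev = np.full(4, 0.25)                      # uniform start
    for counts in [np.array([5., 1., 0., 2.]), np.array([4., 2., 1., 1.])]:
        alpha = asymmetric_prior(theta_prev)
        theta_prev = estimate_theta(counts, alpha)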

Our third model is a nonparametric discriminative variation of the HMM. An HMM can be viewed as a prototype model, with its transition parameters acting as the prototype for one category. To increase the discrimination ability of the HMM at both the class level and the state level, we introduce linear interpolation with maximum entropy (LIME) and membership coefficients into the HMM. Furthermore, we present a general formula for output probability estimation, which provides a way to develop new HMMs. Experimental results show that the performance of some existing HMMs can be improved by integrating the proposed nonparametric kernel method and parameter adaptation formula.
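The general output-probability formula itself is not stated in this abstract; the Python sketch below only illustrates the flavour of the idea, assuming each state's emission density is a linear interpolation of a parametric Gaussian and a nonparametric kernel estimate built from stored state exemplars. The weight w, the Gaussian emissions and the toy parameters are assumptions.

    import numpy as np

    def gaussian_pdf(x, mean, var):
        return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

    def output_prob(x, state, means, vars_, exemplars, w=0.7):
        """Interpolate a parametric density with a kernel estimate (bandwidth 0.5)."""
        kde = gaussian_pdf(x, exemplars[state], 0.25).mean()
        return w * gaussian_pdf(x, means[state], vars_[state]) + (1.0 - w) * kde

    def forward_loglik(obs, pi, A, means, vars_, exemplars, w=0.7):
        """Scaled forward algorithm using the interpolated output probabilities."""
        b = lambda x: np.array([output_prob(x, s, means, vars_, exemplars, w)
                                for s in range(len(pi))])
        alpha, loglik = pi * b(obs[0]), 0.0
        for x in obs[1:]:
            c = alpha.sum()
            loglik += np.log(c)
            alpha = ((alpha / c) @ A) * b(x)
        return loglik + np.log(alpha.sum())

    # Two-state toy model on scalar observations.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.2, 0.8]])
    means, vars_ = np.array([0.0, 2.0]), np.array([1.0, 1.0])
    exemplars = {0: np.array([-0.1, 0.2]), 1: np.array([1.8, 2.3])}
    ll = forward_loglik(np.array([0.1, 1.9, 2.2]), pi, A, means, vars_, exemplars)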

In conclusion, this thesis develops three different graphical models by (i) combining an exemplar-based model with graphical models, (ii) introducing temporal and categorical information into the Latent Dirichlet Allocation (LDA) topic model, and (iii) increasing the discrimination ability of the HMM at both the hidden-state level and the class level.

published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy

  1. 10.5353/th_b4784948
  2. b4784948
Identifier: oai:union.ndltd.org:HKU/oai:hub.hku.hk:10722/174506
Date: January 2012
Creators: Shang, Lifeng., 尚利峰.
Contributors: Chan, KP
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Source Sets: Hong Kong University Theses
Language: English
Detected Language: English
Type: PG_Thesis
Source: http://hub.hku.hk/bib/B47849484
Rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works., Creative Commons: Attribution 3.0 Hong Kong License
Relation: HKU Theses Online (HKUTO)
