Automatically determining the temporal characteristics of facial expressions has broad applications in domains such as human-machine interfaces for emotion recognition, face identification, and medical analysis. However, much of the literature has not addressed the step of determining when such expressions occur. This dissertation focuses on the problem of automatically segmenting macro- and micro-expression frames (or retrieving the expression intervals) in video sequences, without the need to train a model on a specific subset of such expressions. The proposed method exploits the non-rigid facial motion that occurs during facial expressions by modeling the strain observed during the elastic deformation of facial skin tissue. The method is capable of spotting both macro-expressions, which are typically associated with emotions such as happiness, sadness, anger, disgust, and surprise, and rapid micro-expressions, which are typically, but not always, associated with semi-suppressed macro-expressions. Additionally, we use this method to automatically retrieve strain maps generated from peak expressions for human identification. This dissertation also contributes a novel 3-D surface strain estimation algorithm that uses commodity 3-D sensors aligned with an HD camera. We demonstrate the feasibility of the method, as well as the improvements gained when using 3-D, through empirical and quantitative comparisons between 2-D and 3-D strain estimation.
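As a rough illustration of the strain-based idea described above (not the dissertation's exact pipeline), a per-pixel strain magnitude map can be sketched by estimating a dense optical flow field between two frames and taking the spatial derivatives of that displacement field as small-deformation strain components. The flow algorithm, window parameters, and the specific magnitude formula below are illustrative assumptions.

```python
import cv2
import numpy as np

def optical_strain_magnitude(prev_gray, next_gray):
    """Sketch of a per-pixel strain magnitude map between two grayscale frames.

    Assumptions (not taken from the dissertation): Farneback dense optical flow
    approximates the facial displacement field (u, v), and an infinitesimal
    (small-deformation) strain tensor is formed from its spatial derivatives.
    """
    # Dense optical flow as a proxy for the non-rigid displacement field.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]

    # Spatial derivatives of the displacement field (rows = y, cols = x).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)

    # Small-deformation strain tensor components.
    exx = du_dx
    eyy = dv_dy
    exy = 0.5 * (du_dy + dv_dx)

    # Per-pixel strain magnitude; regions of elastic skin deformation
    # (expressions) tend to produce elevated values.
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)
```

Thresholding or peak-detecting the summed strain magnitude over time within facial regions is one plausible way such a map could be used to flag candidate expression intervals.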
Identifier | oai:union.ndltd.org:USF/oai:scholarcommons.usf.edu:etd-5967 |
Date | 01 January 2013 |
Creators | Shreve, Matthew Adam |
Publisher | Scholar Commons |
Source Sets | University of South Florida |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Graduate Theses and Dissertations |
Rights | default |