
Facial Features Tracking using Active Appearance Models

This thesis aims to build a system capable of automatically extracting and parameterizing the position of a face and its features in images acquired from a low-end monocular camera. The challenge is justified by the importance and variety of possible applications, ranging from face and expression recognition to the animation of virtual characters from video of real actors. The implementation includes the construction of Active Appearance Models of the human face from training images. The existing face model Candide-3 is used as a starting point, which makes it easy to translate the tracking parameters into standard MPEG-4 Facial Animation Parameters.

The Inverse Compositional Algorithm is employed to fit the models to new images, working in a subspace where the appearance is "projected out" so that only shape is optimized.

The algorithm is tested on a generic model, aimed at tracking different people's faces, and on a specific model, considering one person only. In the former case, the need for improved robustness of the system is highlighted. By contrast, the latter case gives good results in both quality and speed, with real-time performance being a feasible goal for future development.
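For context on the fitting step, the project-out inverse compositional objective from the AAM literature (Matthews and Baker's general formulation; the thesis's own notation may differ) minimizes an appearance-free reconstruction error over the shape parameters p:

\[
\sum_{\mathbf{x}} \Big\| A_0(\mathbf{x}) - I\big(\mathbf{W}(\mathbf{x};\mathbf{p})\big) \Big\|^2_{\operatorname{span}(A_i)^{\perp}}
\]

Here \(A_0\) is the mean appearance image, the \(A_i\) span the learned appearance variation, \(I\) is the input image, and \(\mathbf{W}(\mathbf{x};\mathbf{p})\) is the warp controlled by the shape parameters. Because the error is evaluated in the subspace orthogonal to \(\operatorname{span}(A_i)\), the appearance parameters drop out of the optimization and only shape remains.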

Identifier: oai:union.ndltd.org:UPSALLA/oai:DiVA.org:liu-7658
Date: January 2006
Creators: Fanelli, Gabriele
Publisher: Linköping University, Department of Electrical Engineering, Universitetsbibliotek
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, text
