Rapid and robust detection and tracking of objects is a challenging problem in computer vision research. Techniques such as artificial neural networks, support vector machines, and Bayesian networks have been developed to enable interactive vision-based applications. In this thesis, we tackle this issue by devising a novel feature descriptor named directional edge maps (DEM). When combined with a modified AdaBoost training algorithm, the proposed descriptor produces effective results in many object detection and recognition tasks.
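The abstract does not spell out how the directional edge maps are binned or how the AdaBoost training was modified, so the following is only a minimal sketch under those assumptions: orientation-binned gradient maps stand in for DEM features, and standard discrete AdaBoost with decision stumps stands in for the modified training procedure. All function names are illustrative, not the thesis's actual implementation.

```python
# Illustrative sketch only: DEM binning and the AdaBoost modification are not
# specified in the abstract; this shows plain orientation-binned edge maps and
# standard discrete AdaBoost with 1-D threshold stumps.
import numpy as np

def directional_edge_maps(gray, n_bins=4):
    """Split gradient edge energy into n_bins orientation channels."""
    gx = np.gradient(gray.astype(float), axis=1)   # central-difference gradients
    gy = np.gradient(gray.astype(float), axis=0)
    mag = np.hypot(gx, gy)                          # edge magnitude
    ang = np.arctan2(gy, gx) % np.pi                # orientation in [0, pi)
    bin_idx = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    maps = np.zeros((n_bins,) + gray.shape)
    for b in range(n_bins):
        maps[b][bin_idx == b] = mag[bin_idx == b]
    return maps                                     # feature vector = maps.ravel()

def train_adaboost(X, y, n_rounds=50):
    """Discrete AdaBoost over threshold stumps; labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                         # uniform example weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                          # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)       # weight of this weak learner
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)              # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of the boosted stumps."""
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```

In a detector of this kind, `directional_edge_maps` would be evaluated over sliding windows and the boosted ensemble would accept or reject each window as a face.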
We have applied the newly developed method to two important object recognition problems, namely face detection and facial expression recognition. The DEM-based methodology conceived in this thesis is capable of detecting faces across multiple views. To test the efficacy of our face detection mechanism, we performed a comparative analysis against the Viola and Jones algorithm using the Carnegie Mellon University face database. The recall and precision of our approach are 79% and 90%, respectively, compared to 81% and 77% for the Viola and Jones algorithm. Our algorithm is also more efficient, requiring only 82 ms (versus 132 ms for Viola and Jones) to process a 512×384 image.
To achieve robust facial expression recognition, we combined component-based methods with action-unit model-based approaches. The component-based method is mainly utilized to locate important facial features and track their deformations; the action-unit model-based approach is then employed to carry out expression recognition. The accuracy of classifying the different emotion types is as follows: happiness 83.6%, sadness 72.7%, surprise 80%, and anger 78.1%. Anger and sadness turn out to be more difficult to distinguish, whereas happiness and surprise have higher recognition rates.
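The abstract does not give the thesis's actual action-unit rules, so the sketch below assumes the commonly cited FACS-style emotion prototypes and a simple overlap score; the AU sets, thresholds, and names are all assumptions for illustration.

```python
# Hypothetical sketch: maps detected action-unit (AU) activations to the four
# emotions via FACS-style prototypes. The thesis's actual rules may differ.
from typing import Dict, Set

# Prototype AU sets (EMFACS-style conventions); assumed, not from the thesis.
EMOTION_PROTOTYPES: Dict[str, Set[int]] = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 26},     # inner/outer brow raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
}

def classify_expression(au_scores: Dict[int, float], threshold: float = 0.5) -> str:
    """Pick the emotion whose prototype AUs are most strongly activated.

    au_scores maps an AU number to an activation in [0, 1], e.g. derived from
    component-based feature-point displacements.
    """
    active = {au for au, s in au_scores.items() if s >= threshold}
    def overlap(proto: Set[int]) -> float:
        return len(active & proto) / len(proto)  # fraction of prototype matched
    return max(EMOTION_PROTOTYPES, key=lambda e: overlap(EMOTION_PROTOTYPES[e]))

# Example: strong AU6 and AU12 activations should read as happiness.
print(classify_expression({6: 0.9, 12: 0.8, 4: 0.1}))
```

The overlapping prototypes (anger and sadness share AU4, for instance) are consistent with the reported confusion between those two emotions.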
Identifier | oai:union.ndltd.org:CHENGCHI/G0095971003
Creators | 王財得, Wang, Tsai-Te |
Publisher | 國立政治大學 (National Chengchi University)
Source Sets | National Chengchi University Libraries |
Language | Chinese (中文)
Detected Language | English |
Type | text |
Rights | Copyright © NCCU Library on behalf of the copyright holders