1 |
A Comparison Of Subspace Based Face Recognition Methods. Gonder, Ozkan, 01 September 2005
Different approaches to face recognition are studied in this thesis: PCA (Eigenface), Kernel Eigenface and Fisher LDA. Principal component analysis extracts the most important information contained in the face to construct a computational model that best describes it. In the Eigenface approach, the variation between face images is described by a set of characteristic face images, the eigenvectors (Eigenfaces) of the covariance matrix of the distribution spanned by a training set of face images. Every face image is then represented as a linear combination of these eigenvectors. Recognition is performed by projecting a new image into the face subspace spanned by the Eigenfaces and classifying the face by comparing its position in face space with the positions of known individuals. In the Kernel Eigenface method, a non-linear mapping of the input space is applied before PCA in order to handle non-linearly embedded properties of images (background differences, illumination changes, facial expressions, etc.). In Fisher LDA, LDA is applied after PCA to increase the discrimination between classes.
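The Eigenface pipeline described above (project onto the eigenvectors of the training covariance, then compare positions in face space) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the thesis code; the SVD route to the eigenvectors and the nearest-neighbour matching rule are assumptions:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_samples, n_pixels) row-stacked training images; keep top k."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the eigenvectors of the covariance
    # matrix without forming the large n_pixels x n_pixels matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                   # (k, n_pixels) orthonormal rows
    weights = centered @ eigenfaces.T     # projections of the training faces
    return mean, eigenfaces, weights

def recognize(face, mean, eigenfaces, weights, labels):
    """Project a probe face into face space and return the nearest label."""
    w = (face - mean) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(dists))]
```

Computing the SVD of the data matrix rather than the pixel covariance is the standard trick when the number of pixels far exceeds the number of training images.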
These methods are implemented on three databases: the Yale face database, the AT&T (formerly Olivetti Research Laboratory) face database, and the METU Vision Lab face database. Experimental results are compared with respect to the effects of changes in illumination, pose and expression.
Kernel Eigenface and Fisher LDA show slightly better performance than the Eigenfaces method under changes in illumination, while expression differences do not affect the performance of the Eigenfaces method.
The test results indicate that the Eigenfaces approach is an adequate method for face recognition systems due to its simplicity, speed and learning capability, which also makes it suitable for real-time systems.
|
2 |
3D Face Recognition. Ustun, Bulend, 01 January 2008
In this thesis, the effect of the registration process on 3D face recognition is evaluated, along with several recognition methods. Input faces are in point-cloud form and contain noise due to the nature of scanner technologies; they are noise-filtered and smoothed before the registration step. In order to register the faces, an average face model is obtained from all the images in the database; all faces are registered to this average model and stored in the database. Registration is performed with ICP (Iterative Closest Point), a rigid registration technique that is probably the most popular method for registering two 3D shapes. Furthermore, several variants of ICP are implemented and evaluated in terms of accuracy, time and the number of iterations needed for convergence. At the recognition step, several methods, namely Eigenface, Fisherface, NMF (Nonnegative Matrix Factorization) and ICA (Independent Component Analysis), are tested on registered and non-registered faces and their performances are evaluated.
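The ICP registration step mentioned above alternates two operations: match each point to its closest counterpart, then solve for the rigid transform that best aligns the matched pairs. A minimal point-to-point sketch (brute-force correspondence search and a Kabsch solve; the thesis' ICP variants and any KD-tree acceleration are not shown):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def icp(src, dst, iters=20):
    """Iteratively align 3D point cloud src to dst; returns the moved cloud."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force closest-point search (a KD-tree is used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        r, t = best_rigid_transform(cur, matched)
        cur = cur @ r.T + t
    return cur
```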
|
3 |
A STUDY OF CLASSIFIERS FOR AUTOMATIC FACE RECOGNITION. 04 November 2005
Identifying an individual from a face image is a simple task for humans and a very difficult one for Computer Vision. Since 1993, several research groups all over the world have studied this problem. Most of the methods proposed for recognizing the identity of an individual represent an n-pixel intensity image as an n-dimensional vector, where n is in general very large. Face images are highly redundant, since every individual has two eyes, one nose, one mouth and so on. Instead of using n intensity values, it is therefore generally possible to characterize an image by a set of p features, for p << n. This work studies face recognition systems consisting of a PCA stage for dimensionality reduction followed by a classifier. The PCA stage takes the n-pixel face images and produces the corresponding p most expressive features, based on the whole available training set. Three classifiers proposed in the literature are studied: the Euclidean and Mahalanobis distance classifiers, the RBF neural network, and the Fisher classifier. This work also proposes a new classifier, which introduces the concept of Mixture Covariance Matrices (MPM) in the Minimum Total Probability of Misclassification rule for normal populations. The four classifiers are evaluated on the Olivetti Face Database, varying their parameters over a wide range. In the experiments carried out to compare these approaches, the new classifier reached the best recognition rates and showed itself to be less sensitive to the choice of the training set.
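The Mahalanobis distance classifier studied above can be sketched as a nearest-class-mean rule in the PCA feature space. This is an illustrative sketch; the shared-covariance estimate and the class-mean interface are assumptions, and the thesis' MPM classifier (with mixture covariance matrices) is not reproduced here:

```python
import numpy as np

def mahalanobis_classify(x, class_means, cov):
    """Assign feature vector x to the class whose mean is nearest in
    Mahalanobis distance under the shared covariance cov."""
    inv = np.linalg.inv(cov)
    d2 = [(x - m) @ inv @ (x - m) for m in class_means]
    return int(np.argmin(d2))
```

With `cov` set to the identity matrix this reduces to the Euclidean distance classifier, which is how the two distance classifiers above are related.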
|
4 |
Holistic Face Recognition By Dimension Reduction. Gul, Ahmet Bahtiyar, 01 January 2003
Face recognition is a popular research area where there are different
approaches studied in the literature. In this thesis, a holistic Principal
Component Analysis (PCA) based method, namely Eigenface method is
studied in detail and three methods based on it are compared: Bayesian PCA, where a Bayesian classifier is applied after dimension reduction with PCA; Subspace Linear Discriminant Analysis (LDA), where LDA is applied after PCA; and Eigenface, where a Nearest Mean Classifier is applied after PCA. All the
three methods are implemented on the Olivetti Research Laboratory
(ORL) face database, the Face Recognition Technology (FERET)
database and the CNN-TURK Speakers face database. The results are
compared with respect to the effects of changes in illumination, pose and
aging. Simulation results show that Subspace LDA and Bayesian PCA perform slightly better than PCA under changes in pose; however, even Subspace LDA and Bayesian PCA do not perform well under changes in illumination and aging, although they still perform better than PCA.
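The Subspace LDA step above (LDA applied after PCA) finds directions maximizing between-class relative to within-class scatter. A generic scatter-matrix construction, not the thesis code; in practice it is applied to the PCA coefficients so that the within-class scatter stays invertible:

```python
import numpy as np

def fisher_directions(x, y, k):
    """Top-k Fisher (LDA) directions for data x (n, d) with integer labels y."""
    mean = x.mean(axis=0)
    sw = np.zeros((x.shape[1], x.shape[1]))   # within-class scatter
    sb = np.zeros_like(sw)                    # between-class scatter
    for c in np.unique(y):
        xc = x[y == c]
        mc = xc.mean(axis=0)
        sw += (xc - mc).T @ (xc - mc)
        diff = (mc - mean)[:, None]
        sb += len(xc) * (diff @ diff.T)
    # generalized eigenproblem: directions maximizing between/within ratio
    vals, vecs = np.linalg.eig(np.linalg.pinv(sw) @ sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:k]].real            # (d, k) projection matrix
```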
|
5 |
Face Recognition Using Eigenfaces And Neural Networks. Akalin, Volkan, 01 December 2003
A face authentication system based on principal component analysis and neural networks is developed in this thesis. The system consists of three stages: preprocessing, principal component analysis, and recognition. In the preprocessing stage, illumination normalization and head orientation correction are performed. Principal component analysis is applied to find the aspects of the face which are important for identification. Eigenvectors and eigenfaces are calculated from the initial face image set. New faces are projected onto the space spanned by the eigenfaces and represented by a weighted sum of them; these weights are used to identify the faces. A neural network is used to create the face database and to recognize and authenticate faces using these weights. In this work, a separate network is built for each person. The input face is first projected onto the eigenface space to obtain a new descriptor, which is used as input to each person's previously trained network. The network with the maximum output is selected and reported as the host if it passes a predefined recognition threshold. The algorithms developed are tested on the ORL, Yale and FERET face databases.
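The per-person decision rule described above (each person's network scores the eigenface descriptor, and the maximum output wins only if it passes the recognition threshold) can be sketched as follows; the scorer functions below are hypothetical stand-ins for the trained networks:

```python
import numpy as np

def authenticate(descriptor, person_nets, threshold):
    """Score the eigenface descriptor with every person's network and accept
    the best match only if its output clears the recognition threshold."""
    scores = {name: net(descriptor) for name, net in person_nets.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Hypothetical stand-ins for trained per-person networks: each one scores
# similarity of the descriptor to a stored template.
example_nets = {
    "alice": lambda d: float(np.exp(-np.linalg.norm(d - np.array([1.0, 0.0])))),
    "bob": lambda d: float(np.exp(-np.linalg.norm(d - np.array([0.0, 1.0])))),
}
```

Returning `None` when no network clears the threshold is what makes this an authentication (open-set) rule rather than plain closed-set recognition.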
|
6 |
Face Detection And Active Robot Vision. Onder, Murat, 01 September 2004
The main task in this thesis is to design a robot vision system with face detection and tracking capability. Hence there are two main parts to the thesis. First, a face must be detected in the image taken from the camera on the robot. This is a demanding real-time image processing task, so time constraints are critical: a processing rate of 1 frame/second is targeted, which requires a fast face detection algorithm. The Eigenface method and the Subspace LDA (Linear Discriminant Analysis) method are implemented, tested and compared for face detection, and the Eigenface method proposed by Turk and Pentland is chosen. The images are first passed through a number of preprocessing algorithms, such as skin detection and histogram equalization, to obtain better performance. After this filtering process, the face candidate regions are put through the face detection algorithm to decide whether the image contains a face. Some modifications are applied to the eigenface algorithm to detect faces better and faster.
Second, the robot must move towards the face in the image, which involves robot motion. The robot used for this purpose is a Pioneer 2-DX8 Plus, a product of ActivMedia Robotics Inc.; only the interfaces to move the robot are implemented in the thesis software. The robot must detect faces at different distances and adjust its position according to the distance between the human and the robot. Hence a scaling mechanism must be applied either to the training images or to the input image taken from the camera. Because of the timing constraint and the low camera resolution, only a limited number of scales is used in the face detection process, so faces of people who are very far from or very close to the robot are not detected. A background-independent face detection system was the aim; however, the resulting algorithm is slightly dependent on the background. There are no other constraints in the system.
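The face/non-face decision applied to candidate regions can be sketched with the classic "distance from face space" test of the Eigenface method: project the candidate onto the eigenfaces, reconstruct, and threshold the residual. An illustrative sketch; the error threshold and the orthonormal-row eigenface layout are assumptions:

```python
import numpy as np

def is_face(patch, mean, eigenfaces, max_error):
    """Distance-from-face-space test: project the mean-subtracted candidate
    onto the (orthonormal) eigenface rows, reconstruct, and threshold the
    residual. A small residual means the patch looks face-like."""
    centered = patch - mean
    w = eigenfaces @ centered                      # coefficients in face space
    residual = np.linalg.norm(centered - eigenfaces.T @ w)
    return residual <= max_error
```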
|
7 |
Facial recognition, eigenfaces and synthetic discriminant functions. Muller, Neil, 12 1900
Thesis (PhD), University of Stellenbosch, 2001. In this thesis we examine some aspects of automatic face recognition, with specific reference to the eigenface technique. We provide a thorough theoretical analysis of this technique, which allows us to explain many of the results reported in the literature. The analysis also suggests that clustering can improve the performance of the system, and we provide experimental evidence of this. From the analysis we also derive an efficient algorithm for updating the eigenfaces. We demonstrate the ability of an eigenface-based system to represent faces efficiently (using at most forty values in our experiments) and also demonstrate our updating algorithm.
Since we are concerned with aspects of face recognition, one of the important practical problems is locating the face in an image, subject to distortions such as rotation. We review two well-known methods for locating faces based on the eigenface technique. These algorithms are computationally expensive, so we illustrate how the Synthetic Discriminant Function (SDF) can be used to reduce the cost. For our purposes, we propose the concept of a linearly interpolating SDF and we show how this can be used not only to locate the face, but also to estimate the extent of the distortion. We derive conditions which ensure that an SDF is linearly interpolating. We show how many of the more popular SDF-type filters are related to the classic SDF and thus extend our analysis to a wide range of SDF-type filters. Our analysis suggests that by carefully choosing the training set to satisfy our condition, we can significantly reduce the size of the training set required. This is demonstrated by using the equidistributing principle to design a suitable training set for the SDF. All of this is illustrated with several examples.
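The classic SDF referred to above can be sketched directly: it is a composite filter, a linear combination of the training images, constrained to produce a prescribed correlation peak for each training image. This is the standard equal-constraint construction, not the linearly interpolating variant proposed in the thesis:

```python
import numpy as np

def sdf_filter(training, peaks):
    """Classic SDF: a composite filter h, a linear combination of the training
    images, whose correlation (inner product) with each training image x_i
    equals a prescribed peak u_i, i.e. X^T h = u.
    training: (d, n) column-stacked images; peaks: (n,) desired peak values."""
    x = training
    # h = X (X^T X)^{-1} u satisfies the peak constraints exactly
    return x @ np.linalg.solve(x.T @ x, peaks)
```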
Our results with the SDF allow us to construct a two-stage algorithm for locating faces. We use the SDF-type filters to obtain initial estimates of the location and extent of the distortion. This information is then used by one of the more accurate eigenface-based techniques to obtain the final location from a reduced search space, which significantly reduces the computational cost of the process.
|
8 |
Biometric Recognition of 3D Faces. Michálek, Martin, January 2014
This thesis is about biometric 3D face recognition. A general biometric system and its functioning are presented, and techniques used in 2D and 3D face recognition are described. Finally, an automatic biometric system for 3D face recognition is proposed and implemented. This system divides the face into areas according to the positions of detected landmarks; the individual areas are compared separately, and the final system fuses results from the Eigenfaces and ARENA algorithms.
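The fusion of the Eigenfaces and ARENA results mentioned above can be sketched with a generic score-level fusion rule: min-max normalize each matcher's scores to a common range, then combine with a weighted sum. The exact fusion rule used in the thesis is not stated, so this construction is an assumption:

```python
def fuse_scores(scores_a, scores_b, w=0.5):
    """Min-max normalize two matchers' score lists to [0, 1] and combine
    them with a weighted sum (weight w on the first matcher)."""
    def norm(s):
        lo, hi = min(s), max(s)
        return [(v - lo) / (hi - lo) for v in s]
    return [w * a + (1 - w) * b for a, b in zip(norm(scores_a), norm(scores_b))]
```

Normalization is the essential step here: raw scores from different matchers (distances vs. similarities, different ranges) are not directly comparable.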
|
9 |
Biometric Recognition of 3D Faces. Mráček, Štěpán, January 2010
This master's thesis was prepared during a study stay at Gjovik University College in Norway and is written in English. The work deals with 3D face recognition. A general biometric system is described, as well as the particular techniques used in 2D and 3D face recognition, and a method for 3D face recognition is then proposed. The algorithm is developed and tested on the Face Recognition Grand Challenge (FRGC) database. During preprocessing, facial landmarks are located and the three-dimensional model is aligned to a reference position. The input data are then compared with the biometric templates stored in the database using three basic face recognition techniques: the eigenface method (PCA), recognition based on the face histogram, and recognition based on anatomical features. Finally, the individual methods are fused into a single system whose overall performance exceeds that of each individual technique.
|
10 |
Real-time Detection And Tracking Of Human Eyes In Video Sequences. Savas, Zafer, 01 September 2005
Robust, non-intrusive human eye detection has been a fundamental and challenging problem in computer vision. Not only is it a problem in its own right, it can also ease the task of locating other facial features for recognition and human-computer interaction purposes. Many previous works can determine the locations of the human eyes, but the main task in this thesis is not only a vision system with eye detection capability: our aim is to design a real-time, robust, scale-invariant eye tracker that indicates human eye movement using the movements of the eye pupil. Our eye tracker is implemented using the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm proposed by Bradski and the Eigenface method proposed by Turk and Pentland. Previous works on scale-invariant object detection with the Eigenface method mostly depend on a limited number of user-predefined scales, which causes speed problems; to avoid this, an adaptive eigenface method using the information extracted from the CAMSHIFT algorithm is implemented to obtain fast and scale-invariant eye tracking.
First of all, the human face in the input image captured by the camera is detected using the CAMSHIFT algorithm, which tracks the outline of an irregularly shaped object that may change size and shape during tracking, based on the color of the object. The face area is passed through a number of preprocessing steps, such as color space conversion and thresholding, to obtain better results during the eye search. After these preprocessing steps, search areas for the left and right eyes are determined using the geometrical properties of the human face, and in order to locate each eye individually the training images are resized using the width information supplied by the CAMSHIFT algorithm. The search regions for the left and right eyes are passed individually to the eye detection algorithm to determine the exact location of each eye. After the eyes are detected, each eye area is passed to the pupil detection and eye area detection algorithms, which are based on the Active Contours method, to delineate the pupil and the eye area. Finally, by comparing the geometrical location of the pupil with the eye area, human gaze information is extracted.
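The core window-relocation step underlying CAMSHIFT can be sketched on a back-projected probability image: repeatedly move the search window to the centroid of the probability mass it covers. This is an illustrative pure-NumPy sketch of the mean-shift core only; CAMSHIFT additionally adapts the window size from the zeroth moment, and practical implementations typically use OpenCV's `cv2.CamShift`:

```python
import numpy as np

def mean_shift(prob, window, iters=10):
    """Move window (x, y, w, h) to the centroid of the back-projected
    probability mass it covers, repeating until it stops moving."""
    x, y, w, h = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()                      # zeroth moment of the window
        if m00 == 0:
            break                            # no mass under the window
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int((xs * roi).sum() / m00)     # centroid within the window
        cy = int((ys * roi).sum() / m00)
        nx = min(max(x + cx - w // 2, 0), prob.shape[1] - w)
        ny = min(max(y + cy - h // 2, 0), prob.shape[0] - h)
        if (nx, ny) == (x, y):
            break                            # converged
        x, y = nx, ny
    return x, y, w, h
```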
As a result of this thesis, a software package named "TrackEye" has been built, with a user interface that indicates the locations of the eye areas and pupils, various output screens for human-computer interaction, and controls for testing the effects of color space conversions and thresholding types during object tracking.
|