About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Methods for facial pose estimation

Choi, Kwang Nam January 2002 (has links)
No description available.
2

Highly automated method for facial expression synthesis

Ersotelos, Nikolaos January 2010 (has links)
The synthesis of realistic facial expressions has long been a challenging area for computer graphics scientists. Over the last three decades, several different construction methods have been formulated in order to obtain natural graphic results. Despite these advances, however, current techniques still require costly resources, heavy user intervention and specific training, and the outcomes are still not completely realistic. This thesis therefore aims to achieve an automated synthesis that produces realistic facial expressions at a low cost. It proposes a highly automated approach to realistic facial expression synthesis, which allows for enhanced performance in speed (a maximum processing time of 3 minutes) and quality with a minimum of user intervention. It also demonstrates a highly automated method of facial feature detection, allowing users to obtain their desired facial expression synthesis with minimal physical input, and describes a novel approach to normalising the illumination settings between source and target images, thereby allowing the algorithm to work accurately even under different lighting conditions. Finally, the results obtained from the proposed techniques are presented, together with the conclusions, at the end of the thesis.
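The abstract does not give the thesis's normalisation formula. As a minimal sketch, a common baseline for matching illumination between a source and target image is global mean/variance matching of intensities (the function name and toy patches below are hypothetical, not from the thesis):

```python
import numpy as np

def match_illumination(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift and scale source intensities so their mean and standard
    deviation match the target's -- a simple global normalisation."""
    s_mean, s_std = source.mean(), source.std()
    t_mean, t_std = target.mean(), target.std()
    normalised = (source - s_mean) / (s_std + 1e-8) * t_std + t_mean
    return np.clip(normalised, 0.0, 255.0)  # keep valid 8-bit range

# Example: a dark source patch normalised toward a brighter target.
src = np.array([[40.0, 60.0], [50.0, 70.0]])
tgt = np.array([[140.0, 160.0], [150.0, 170.0]])
out = match_illumination(src, tgt)
```

After normalisation the source patch shares the target's brightness statistics, so a synthesis algorithm comparing the two is less sensitive to the original lighting difference.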
3

3D facial feature extraction and recognition : an investigation of 3D face recognition : correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques

Al-Qatawneh, Sokyna M. S. January 2010 (has links)
Face recognition research using automatic or semi-automatic techniques has emerged over the last two decades. One reason for growing interest in this topic is the wide range of possible applications for face recognition systems. Another is the emergence of affordable hardware supporting digital photography and video, which has made the acquisition of high-quality, high-resolution 2D images much more ubiquitous. However, 2D recognition systems are sensitive to subject pose and illumination variations, and 3D face recognition, which is not directly affected by such environmental changes, could be used alone or in combination with 2D recognition. Recently, with the development of more affordable 3D acquisition systems and the availability of 3D face databases, 3D face recognition has been attracting interest as a way to tackle the performance limitations of most existing 2D systems. In this research, we introduce a robust automated 3D face recognition system that works on 3D data of faces with different facial expressions, hair, shoulders, clothing, etc., extracts features for discrimination and uses machine learning techniques to make the final decision. A novel system for automatic processing of 3D facial data has been implemented using a multi-stage architecture: in a pre-processing and registration stage, the data was standardised, spikes were removed, holes were filled and the face area was extracted. The nose region, which is anatomically more rigid than other facial regions, was then automatically located and analysed by computing the precise location of the symmetry plane, after which useful facial features and a set of effective 3D curves were extracted. Finally, the recognition and matching stage was implemented using cascade correlation neural networks and support vector machines for classification, and nearest neighbour algorithms for matching.
It is worth noting that the FRGC data set is the most challenging data set available supporting research on 3D face recognition, and that machine learning techniques are widely recognised as appropriate and efficient classification methods.
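The matching stage described above ends with a nearest neighbour step. A minimal sketch of Euclidean nearest-neighbour matching over extracted feature vectors follows; the gallery and probe features are hypothetical stand-ins for the thesis's 3D curve features, which are not specified here:

```python
import numpy as np

def nearest_neighbour(probe: np.ndarray, gallery: np.ndarray) -> int:
    """Return the index of the gallery feature vector closest to the
    probe under Euclidean distance -- the matching stage's final step."""
    distances = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(distances))

# Hypothetical feature vectors for three enrolled faces.
gallery = np.array([[0.9, 1.8, 3.5],
                    [2.5, 0.4, 1.2],
                    [0.3, 1.0, 2.9]])
probe = np.array([0.25, 1.05, 2.95])
match = nearest_neighbour(probe, gallery)  # index of closest enrolled face
```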
4

Rozpoznávání výrazu tváře / Facial Expression Recognition

Král, Jiří Unknown Date (has links)
Many approaches to facial expression recognition exist; this work presents one of them. Existing methods for representing the human face by a model are discussed. The AAM method is adopted, in which the final appearance model is created from a model of shape and a model of texture, each built by statistical analysis. This representation yields an effective, compact description of the face sought in a static image. The choice and combination of suitable features for classification is the key to facial expression recognition based on AAM. Two approaches to facial expression classification are compared: classification based on LDA and classification based on SVM. Together with the necessary face localisation using AdaBoost, these methods form an automated facial expression recogniser for images.
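The shape and texture models in AAM are built by statistical analysis, typically PCA over aligned training examples. As an illustrative sketch (the toy landmark data and function names below are hypothetical), a PCA shape model can be built from flattened landmark coordinates, assuming the shapes are already aligned:

```python
import numpy as np

def build_shape_model(shapes: np.ndarray, n_modes: int):
    """Statistical shape model: the mean shape plus the principal
    modes of variation, obtained by PCA (via SVD) over the
    mean-centred landmark vectors."""
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    # Right singular vectors are the eigenvectors of the covariance.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_modes]

def reconstruct(mean, modes, params):
    """Generate a shape instance from model parameters b: x = mean + b P."""
    return mean + params @ modes

# Toy data: 4 'faces', each with 3 (x, y) landmarks flattened to 6 values.
shapes = np.array([[0.0, 0.0, 1.0, 0.0, 0.5, 1.0],
                   [0.1, 0.0, 1.1, 0.0, 0.6, 1.0],
                   [0.0, 0.1, 1.0, 0.1, 0.5, 1.1],
                   [0.1, 0.1, 1.1, 0.1, 0.6, 1.1]])
mean, modes = build_shape_model(shapes, n_modes=2)
recon = reconstruct(mean, modes, np.zeros(2))  # zero parameters -> mean shape
```

A texture model is built the same way over shape-normalised pixel intensities; the combined shape and texture parameters then serve as features for the LDA or SVM classifier.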
5

Detection of facial expressions based on time dependent morphological features

Bozed, Kenz Amhmed January 2011 (has links)
Facial expression detection by a machine is a valuable topic for Human-Computer Interaction and has been a study issue in the behavioural sciences for some time. Recently, significant progress has been achieved in machine analysis of facial expressions, but there is still interest in studying the area in order to extend its applications. This work investigates the theoretical concepts behind facial expressions, leading to the proposal of new algorithms for face detection and facial feature localisation, and the design and construction of a prototype system to test these algorithms. The overall goal and motivation of this work is to introduce vision-based techniques able to detect and recognise facial expressions. In this context, a facial expression prototype system is developed that accomplishes facial segmentation (i.e. face detection and facial feature localisation), facial feature extraction and feature classification. To detect a face, a new simplified algorithm is developed to detect and locate its presence against the background by exploiting skin colour properties, which are then used to distinguish between face and non-face regions. This allows facial parts to be extracted from a face using elliptical and box regions whose geometrical relationships are then utilised to determine the positions of the eyes and mouth through morphological operations. The means and standard deviations of the segmented facial parts are then computed and used as features for the face. For images belonging to the same class, these features are applied to the K-means algorithm to compute the centroid point of each expression class. The Euclidean distance is then computed between each feature point and its cluster centre in the same expression class. This determines how close a facial expression is to a particular class, and the distances can be used as observation vectors for a Hidden Markov Model (HMM) classifier.
Thus, an HMM using distance features is built to evaluate an expression of a subject as belonging to one of six expression classes: Joy, Anger, Surprise, Sadness, Fear and Disgust. To evaluate the proposed classifier, experiments are conducted on new subjects using 100 video clips that contained a mixture of expressions. An average successful detection rate of 95.6% is measured over a total of 9142 frames contained in the video clips. The proposed prototype system processes facial feature parts and presents improved facial expression detection results compared with using whole facial features, as proposed by previous authors. This work has resulted in four contributions: the Ellipse Box Face Detection Algorithm (EBFDA), the Facial Features Distance Algorithm (FFDA), the facial feature extraction process, and the facial feature classification. These were tested and verified using the prototype system.
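The distance-feature step described above can be sketched as follows, assuming (mean, standard deviation) features per frame; the 'Joy' data below is hypothetical. The class centroid plays the role of the K-means cluster centre, and the per-frame Euclidean distances are what would feed the HMM as observations:

```python
import numpy as np

def class_centroid(features: np.ndarray) -> np.ndarray:
    """Centroid of one expression class's feature vectors
    (the cluster centre for that class)."""
    return features.mean(axis=0)

def distance_observations(features: np.ndarray, centroid: np.ndarray) -> np.ndarray:
    """Euclidean distance of each frame's features to the class centre;
    these distances serve as the HMM observation sequence."""
    return np.linalg.norm(features - centroid, axis=1)

# Hypothetical per-frame (mean, std) features for a 'Joy' sequence.
joy = np.array([[0.80, 0.20],
                [0.90, 0.25],
                [0.85, 0.22]])
centre = class_centroid(joy)
obs = distance_observations(joy, centre)  # one distance per frame
```

Small distances indicate frames close to the class centre; a sequence of such distances, one per class, lets the HMM score how well a clip matches each of the six expressions.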
6

Affect Recognition and Support in Intelligent Tutoring Systems

Zakharov, Konstantin January 2007 (has links)
Empirical research provides evidence of strong interaction between cognitive and affective processes in the human mind. Education research proposes a model of constructive learning that relates cognitive and affective processes in an evolving cycle of affective states. Intelligent Tutoring Systems (ITSs) are capable of providing comprehensive cognitive support. Affective support in ITSs, however, is lagging behind; the in-depth exploration of cognitive and affective processes in ITSs is yet to be seen. Our research focuses on the integration of affective support in an ITS enhanced with an affective pedagogical agent. In our work we adopt the dimensional (versus categorical) view of emotions for modelling the affective states of the agent and of the ITS's users. In two stages we develop and evaluate an affective pedagogical agent. The affective response of the first agent version is based on the appraisal of the interaction state; this agent's affective response is displayed as affective facial expressions. The pilot study at the end of the first stage of the project confirms the viability of our approach, which combines the dimensional view of emotions with the appraisal of the interaction state. In the second stage of the project we develop a facial feature tracking application for real-time emotion recognition in a video stream. Affective awareness of the second version of the agent is based on the output from the facial feature tracking application and the appraisal of the interaction state. This agent's response takes the form of affect-oriented messages designed to interrupt the state of negative flow. The evaluation of the affect-aware agent against an unemotional affect-unaware agent provides positive results, thus confirming the superiority of the affect-aware agent. Although the uptake of the agent was not unanimous, the agent established and maintained good rapport with the users in the role of a caring tutor.
The results of the pilot study and the final evaluation validate our choices in the design of affective interaction. In both experiments, the participants appreciated the addition of audible feedback messages, describing it as an enhancement which helped them save time and maintain their focus. Finally, we offer directions for future research on affective support which can be conducted within the framework developed in the course of this project.
7

Effect of isolated facial feature transformations in a change blindness experiment involving a person as the object of change

Kadosh, Hadar 29 May 2008 (has links)
Research has shown that people often fail to notice changes to visual scenes. This phenomenon is known as change blindness. This study investigated the effect of facial feature transformations on change blindness using change detection tasks involving a person as the object of change. 301 participants viewed a photo-story comprised of a few still frames. In the final frame, a selected facial feature of a character in the story was altered. Four different photo-stories were used, each utilising a different alteration. Questionnaires designed to determine whether the change was detected were administered. Results showed that changes to facial features considered to be more salient produced higher levels of change detection. A flicker test using the same images from the photo-story was administered to a further 75 participants and showed a similar pattern of results. It was concluded that in order to detect change, the changing stimuli have to be both salient and meaningful.
9

Automatic age progression and estimation from faces

Bukar, Ali M. January 2017 (has links)
Recently, automatic age progression has gained popularity due to its numerous applications. Among these is the frequent search for missing people; in the UK alone, up to 300,000 people are reported missing every year. Although many algorithms have been proposed, most of the methods are affected by image noise, illumination variations, and facial expressions. Furthermore, most of the algorithms use a pattern-caricaturing approach, which infers ages by manipulating the target image and a template face formed by averaging faces at the intended age. To this end, this thesis investigates the problem with a view to tackling the most prominent issues associated with the existing algorithms. Initially, using active appearance models (AAM), facial features are extracted and mapped to people's ages; afterwards, a formula is derived which allows the convenient generation of age-progressed images irrespective of whether the intended age exists in the training database or not. In order to handle image noise as well as varying facial expressions, a nonlinear appearance model called the kernel appearance model (KAM) is derived. To illustrate the real application of automatic age progression, both AAM- and KAM-based algorithms are then used to synthesise faces of two long-missing British and Irish children, Ben Needham and Mary Boyle. However, both statistical techniques exhibit image-rendering artefacts such as low-resolution output and the generation of inconsistent skin tone. To circumvent this problem, a hybrid texture enhancement pipeline is developed. To further ensure that the progressed images preserve people's identities while at the same time attaining the intended age, rigorous human- and machine-based tests are conducted; part of these tests resulted in the development of a robust age estimation algorithm. The results of this rigorous assessment reveal that the hybrid technique is able to handle all existing problems of age progression with minimal error.
National Information Technology Development Agency of Nigeria (NITDA)
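The abstract above describes mapping AAM facial features to ages via a derived formula. As a minimal stand-in for that idea, assuming a linear least-squares map from appearance parameters to age (toy one-dimensional data; the thesis's actual formula is not reproduced here):

```python
import numpy as np

def fit_age_model(params: np.ndarray, ages: np.ndarray) -> np.ndarray:
    """Least-squares linear map from appearance parameters to age,
    a simple illustrative stand-in for the thesis's ageing function."""
    X = np.hstack([params, np.ones((len(params), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X, ages, rcond=None)
    return w

def estimate_age(w: np.ndarray, b: np.ndarray) -> float:
    """Predict the age of one appearance-parameter vector."""
    return float(np.append(b, 1.0) @ w)

# Toy data: a 1-D appearance parameter roughly proportional to age.
params = np.array([[1.0], [2.0], [3.0], [4.0]])
ages = np.array([10.0, 20.0, 30.0, 40.0])
w = fit_age_model(params, ages)
pred = estimate_age(w, np.array([2.5]))
```

Because the fitted map is continuous, it can be evaluated (or inverted, to move parameters toward a target age) for ages that do not appear in the training set, which is the property the abstract highlights.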
10

A Multi-Modal Approach for Face Modeling and Recognition

Mahoor, Mohammad Hossein 14 January 2008 (has links)
This dissertation describes a new methodology for multi-modal (2-D + 3-D) face modeling and recognition. There are advantages in using each modality for face recognition. For example, the problems of pose variation and illumination condition, which cannot be resolved easily by using the 2-D data, can be handled by using the 3-D data. However, texture, which is provided by 2-D data, is an important cue that cannot be ignored. Therefore, we use both the 2-D and 3-D modalities for face recognition and fuse the results of face recognition by each modality to boost the overall performance of the system. In this dissertation, we consider two different cases for multi-modal face modeling and recognition. In the first case, the 2-D and 3-D data are registered. In this case we develop a unified graph model called Attributed Relational Graph (ARG) for face modeling and recognition. Based on the ARG model, the 2-D and 3-D data are included in a single model. The developed ARG model consists of nodes, edges, and mutual relations. The nodes of the graph correspond to the landmark points that are extracted by an improved Active Shape Model (ASM) technique. In order to extract the facial landmarks robustly, we improve the Active Shape Model technique by using the color information. Then, at each node of the graph, we calculate the response of a set of log-Gabor filters applied to the facial image texture and shape information (depth values); these features are used to model the local structure of the face at each node of the graph. The edges of the graph are defined based on Delaunay triangulation and a set of mutual relations between the sides of the triangles are defined. The mutual relations boost the final performance of the system. The results of face matching using the 2-D and 3-D attributes and the mutual relations are fused at the score level. In the second case, the 2-D and 3-D data are not registered. 
This lack of registration could be due to different reasons, such as a time lapse between the data acquisitions. Therefore, the 2-D and 3-D modalities are modeled independently. For the 3-D modality, we developed a fully automated system for 3-D face modeling and recognition based on ridge images. The problem with shape matching approaches such as the Iterative Closest Point (ICP) algorithm or the Hausdorff distance is their computational complexity. We model the face by 3-D binary ridge images and use them for matching. In order to match the ridge points (using either the ICP or the Hausdorff distance), we extract three facial landmark points, namely the two inner corners of the eyes and the tip of the nose, on the face surface using the Gaussian curvature. These three points are used for initial alignment of the constructed ridge images. As a result of using ridge points, which are just a fraction of the total points on the surface of the face, the computational complexity of the matching is reduced by two orders of magnitude. For the 2-D modality, we model the face using an Attributed Relational Graph. The results of the 2-D and 3-D matching are fused at the score level. There are various techniques to fuse the 2-D and 3-D modalities; in this dissertation, we fuse the matching results at the score level to enhance the overall performance of our face recognition system. We compare the Dempster-Shafer theory of evidence and the weighted sum rule for fusion. We evaluate the performance of the above techniques for multi-modal face recognition on various databases, such as the Gavab range database, FRGC (Face Recognition Grand Challenge) V2.0, and the University of Miami face database.
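The score-level fusion described above can be sketched with the weighted sum rule over min-max normalised match scores; the scores and weight below are hypothetical, and the normalisation step matters because the two matchers produce scores on different scales:

```python
import numpy as np

def weighted_sum_fusion(scores_2d, scores_3d, w_2d: float = 0.5) -> np.ndarray:
    """Score-level fusion: min-max normalise each modality's match
    scores onto [0, 1], then combine them with a weighted sum."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w_2d * minmax(scores_2d) + (1.0 - w_2d) * minmax(scores_3d)

# Hypothetical similarity scores for three gallery identities.
s2d = [0.9, 0.4, 0.7]   # 2-D (graph) matcher, scores in [0, 1]
s3d = [10.0, 3.0, 9.0]  # 3-D (ridge-image) matcher, a different scale
fused = weighted_sum_fusion(s2d, s3d)
best = int(np.argmax(fused))  # identity ranked first after fusion
```

The weight `w_2d` controls how much the 2-D matcher contributes; in practice it would be tuned on a validation set, and a Dempster-Shafer combination is the alternative the dissertation compares against.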
