1

Real-time Multi-face Tracking with Labels based on Convolutional Neural Networks

Li, Xile January 2017 (has links)
This thesis presents a real-time multi-face tracking system, which is able to track multiple faces in live video, broadcasts, real-time conference recordings, etc. The real-time output is one of its most significant advantages. Our proposed tracking system comprises three parts: face detection, feature extraction, and tracking. We deploy a three-layer Convolutional Neural Network (CNN) to detect a face, a one-layer CNN to extract the features of a detected face, and a shallow network to track the face based on its extracted feature maps. Our multi-face tracking system runs in real time without any online training. The algorithm does not require any parameter changes for different input video conditions, and its runtime cost is not significantly affected by an increase in the number of faces being tracked. In addition, our proposed tracker can overcome most of the generally difficult tracking conditions, including camera cuts, face occlusion, false positive face detections, and false negative face detections, e.g. due to faces at the image boundary or faces shown in profile. We use two commonly used metrics to evaluate the performance of our multi-face tracking system, demonstrating that it achieves accurate results. Our multi-face tracker achieves an average runtime cost of around 0.035 s with GPU acceleration, and this cost remains nearly stable as the number of tracked faces increases. All evaluation results and comparisons are obtained on four commonly used video data sets.
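The abstract gives no implementation details, but the detect, embed, and match structure it describes can be illustrated with a short, hedged sketch. The Python/PyTorch code below is an assumption-laden illustration rather than the thesis implementation: a small CNN (TinyFaceEmbedder, an invented name) turns detected face crops into feature vectors, and a simple matcher keeps track labels consistent across frames by cosine similarity; face detection itself is stubbed out with random crops.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyFaceEmbedder(nn.Module):
        """Small convolutional feature extractor (a stand-in for the thesis CNNs)."""
        def __init__(self, dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, dim)

        def forward(self, x):
            # L2-normalized embeddings so cosine similarity is a plain dot product.
            return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

    class LabelTracker:
        """Keeps face labels consistent by matching new embeddings to stored ones."""
        def __init__(self, threshold=0.7):
            self.threshold = threshold
            self.embeddings = {}   # label -> reference embedding
            self.next_label = 0

        def update(self, feats):
            labels = []
            for f in feats:
                best, best_sim = None, self.threshold
                for label, ref in self.embeddings.items():
                    sim = float(f @ ref)
                    if sim > best_sim:
                        best, best_sim = label, sim
                if best is None:                 # unseen face: assign a new label
                    best, self.next_label = self.next_label, self.next_label + 1
                self.embeddings[best] = f        # refresh the stored embedding
                labels.append(best)
            return labels

    # Random "face crops" stand in for detector output in this sketch.
    embedder = TinyFaceEmbedder().eval()
    tracker = LabelTracker()
    with torch.no_grad():
        crops = torch.rand(2, 3, 64, 64)          # two detected faces in one frame
        print(tracker.update(embedder(crops)))    # labels assigned to the two crops

In a real pipeline the random crops would be replaced by the output of a face detector, and the similarity threshold would need tuning per data set.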
2

A Study of Real-Time Face Tracking with an Active Camera

Xie, Yao-Zhang 03 July 2005 (has links)
In this research we develop a real-time face tracking system using a single pan-tilt camera. The system includes face detection, deformable-template tracking, and motion control. We adopt a method that searches for facial features using a genetic algorithm, while the learning algorithm for the face detector is based on AdaBoost. For face tracking, we adopt an approach that combines detection with tracking. For pan-tilt camera control, two fuzzy logic controllers are designed to control the tracking and handling of a moving face. We achieve more robust tracking than a single-template approach by continuously renewing the face template. Finally, in our tests, the system can track a person's face at 30 frames per second in a complex environment using a personal computer.
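As a rough sketch of the detect-then-track idea described above (not the author's code; the cascade file, re-detection interval, and thresholds are assumptions), the following Python/OpenCV snippet combines an AdaBoost-trained Haar cascade detector with template matching, renewing the face template every frame; the pan-tilt and fuzzy control parts are omitted.

    import cv2

    # AdaBoost-trained Haar cascade shipped with OpenCV (assumed detector choice).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def track(gray, template, prev_box, frame_idx, redetect_every=15):
        """Re-detect periodically; otherwise match the continuously renewed
        face template against the new frame."""
        if template is None or frame_idx % redetect_every == 0:
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                return prev_box, template        # keep the previous state
            x, y, w, h = [int(v) for v in faces[0]]
        else:
            scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (x, y) = cv2.minMaxLoc(scores)
            h, w = template.shape
        return (x, y, w, h), gray[y:y + h, x:x + w].copy()   # renew the template

    cap = cv2.VideoCapture(0)                    # pan-tilt control is not modelled
    box, template, idx = None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        box, template = track(gray, template, box, idx)
        idx += 1
        if box is not None:
            x, y, w, h = box
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("face tracking sketch", frame)
        if cv2.waitKey(1) == 27:                 # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()

In the actual system, the two fuzzy logic controllers would additionally steer the pan-tilt camera to keep the face in view, which this sketch does not model.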
3

Designing and Constructing an Animatronic Head Capable of Human Motion Programmed using Face-Tracking Software

Fitzpatrick, Robert J 01 May 2012 (has links)
The focus of this project was to construct a humanoid animatronic head with sufficient degrees of freedom to mimic human facial expression as well as human head movement, and which could be animated using face-tracking software to eliminate the time spent on the trial-and-error programming intrinsic to animatronics. As such, eight degrees of freedom were assigned to the robot: five in the face and three in the neck. From these degrees of freedom, the mechanics of the animatronic head were designed so that the neck and facial features could move with the same range and speed as a human being. Once the head was realized, various face-tracking software packages were used to analyze a pre-recorded video of a human actor and map the actor's eye motion, eyebrow motion, mouth motion, and neck motion to the corresponding degrees of freedom on the robot. The values from the face-tracking software were then converted into the required servomotor angles using MATLAB and fed into Visual Show Automation to create a performance script that controls the motion and audio of the animatronic head during its performance.
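The conversion from face-tracking outputs to servomotor angles was done in MATLAB; as a purely illustrative sketch (the channel names, input range, and servo limits below are invented, and Python is used instead of MATLAB), a linear mapping with clamping might look like this:

    def to_servo_angle(value, in_min=-1.0, in_max=1.0, servo_min=0.0, servo_max=180.0):
        """Linearly map a normalized tracking value onto a servo angle in degrees,
        clamped to the servo's mechanical limits (ranges are illustrative)."""
        value = min(max(value, in_min), in_max)
        span = (value - in_min) / (in_max - in_min)
        return servo_min + span * (servo_max - servo_min)

    # Hypothetical tracker outputs for three degrees of freedom.
    pose = {"neck_pan": 0.25, "neck_tilt": -0.4, "jaw": 0.8}
    angles = {name: to_servo_angle(v) for name, v in pose.items()}
    print(angles)   # {'neck_pan': 112.5, 'neck_tilt': 54.0, 'jaw': 162.0}

A performance script would then schedule such angles over time, which is the role Visual Show Automation plays in the project.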
4

Consistent and Accurate Face Tracking and Recognition in Videos

Liu, Yiran 23 September 2020 (has links)
No description available.
5

Técnicas de processamento de imagens para localização e reconhecimento de faces / Image processing techniques for faces location and recognition

Almeida, Osvaldo Cesar Pinheiro de 01 December 2006 (has links)
Biometrics is the science that studies the measurement of living beings. Many works explore characteristics of human beings, such as fingerprints, the iris, and the face, in order to develop biometric systems used in a variety of applications (security monitoring, ubiquitous computing, robotics). Face recognition is one of the most investigated biometric techniques, as it is quite intuitive and less invasive than the others. Some works involving this technique are only concerned with locating the face of an individual (e.g., to count people), while others try to identify the person from an image. This work proposes an approach capable of locating faces in video frames and subsequently recognizing them using image analysis techniques. The work can be divided into two main modules: (1) locating and tracking faces in a sequence of images (frames) and separating the tracked region from the image; (2) recognizing faces, identifying to which person each belongs. For the first stage, a motion analysis system (based on frame subtraction) was implemented, which made it possible to locate, track, and capture images of an individual's face using a video camera. For the second stage, modules were implemented for data reduction (Principal Component Analysis - PCA), feature extraction (Gabor wavelet transform), and face classification and identification (Euclidean distance and Support Vector Machine - SVM). Using two face databases (FERET and a home-made one), tests were carried out to evaluate the implemented recognition system. The results were satisfactory, reaching success rates of 91.92% and 100% for the FERET and home-made databases, respectively.
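The recognition stage combines PCA for dimensionality reduction with an SVM classifier (Gabor feature extraction and the frame-subtraction tracker are omitted here). The following Python/scikit-learn sketch, using random stand-in data rather than the FERET or home-made databases, shows how such a pipeline fits together:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Stand-in data: 60 flattened 32x32 "face crops" for 6 identities.
    rng = np.random.default_rng(0)
    X = rng.random((60, 32 * 32))
    y = np.repeat(np.arange(6), 10)

    # PCA reduces the face vectors to a small subspace, then an SVM classifies
    # them, mirroring the recognition stage described in the abstract.
    model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=10.0))
    model.fit(X, y)
    print(model.predict(X[:3]))   # identity labels predicted for the first crops

In the thesis, the input vectors would be Gabor wavelet responses of the tracked face region rather than raw pixels.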
6

A Software Framework for Facial Modelling and Tracking

Strand, Mattias January 2010 (has links)
The WinCandide application, a platform for face tracking and model-based coding, had become out of date and needed to be upgraded. This report is based on the work of investigating open-source GUI and computer vision toolkits that could replace the old, unsupported ones. Multi-platform GUIs are of special interest.
7

Facial Features Tracking using Active Appearance Models

Fanelli, Gabriele January 2006 (has links)
This thesis aims at building a system capable of automatically extracting and parameterizing the position of a face and its features in images acquired from a low-end monocular camera. Such a challenging task is justified by the importance and variety of its possible applications, ranging from face and expression recognition to animation of virtual characters using video depicting real actors. The implementation includes the construction of Active Appearance Models of the human face from training images. The existing face model Candide-3 is used as a starting point, making the translation of the tracking parameters to standard MPEG-4 Facial Animation Parameters easy.

The Inverse Compositional Algorithm is employed to adapt the models to new images, working on a subspace where the appearance is "projected out" and thus focusing only on shape.

The algorithm is tested on a generic model, aiming at tracking different people's faces, and on a specific model, considering one person only. In the former case, the need for improvements in the robustness of the system is highlighted. By contrast, the latter case gives good results regarding both quality and speed, with real-time performance being a feasible goal for future developments.
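A full Active Appearance Model fit is beyond a short example, but the core of the inverse compositional idea (precompute the template's gradients and Hessian once, then only warp the image and compose the inverse of each incremental update) can be shown for the simplest possible warp, a pure 2D translation. The NumPy/SciPy sketch below illustrates that idea only; it is not the Candide-3/AAM tracker described in the thesis:

    import numpy as np
    from scipy import ndimage

    def ic_translation(image, template, iters=20):
        """Inverse compositional Lucas-Kanade for a pure translation warp."""
        p = np.zeros(2)                                  # parameters (px, py)
        gy, gx = np.gradient(template)                   # template gradients
        sd = np.stack([gx.ravel(), gy.ravel()], axis=1)  # steepest-descent images
        H_inv = np.linalg.inv(sd.T @ sd)                 # Hessian, computed once
        for _ in range(iters):
            warped = ndimage.shift(image, (-p[1], -p[0]), order=1)  # I(W(x; p))
            dp = H_inv @ (sd.T @ (warped - template).ravel())
            p = p - dp                                   # compose with inverse update
            if np.linalg.norm(dp) < 1e-4:
                break
        return p

    # Synthetic check: a smooth blob translated by (dx, dy) = (3, -2) pixels.
    yy, xx = np.mgrid[0:64, 0:64]
    template = np.exp(-((xx - 32) ** 2 + (yy - 30) ** 2) / 120.0)
    image = ndimage.shift(template, (-2, 3), order=1)    # rows shift -2, cols shift 3
    print(ic_translation(image, template))               # approximately [ 3. -2.]

In the AAM setting the warp is driven by shape parameters and the appearance variation is projected out, but the same precompute-once structure is what makes the fitting fast.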
8

Real-time Monocular Vision-based Tracking For Interactive Augmented Reality

Spencer, Lisa 01 January 2006 (has links)
The need for real-time video analysis is rapidly increasing in today's world. The decreasing cost of powerful processors and the proliferation of affordable cameras, combined with needs for security, methods for searching the growing collection of video data, and an appetite for high-tech entertainment, have produced an environment where video processing is utilized for a wide variety of applications. Tracking is an element in many of these applications, for purposes like detecting anomalous behavior, classifying video clips, and measuring athletic performance. In this dissertation we focus on augmented reality, but the methods and conclusions are applicable to a wide variety of other areas. In particular, our work deals with achieving real-time performance while tracking with augmented reality systems using a minimum set of commercial hardware. We have built prototypes that use both existing technologies and new algorithms we have developed. While performance improvements would be possible with additional hardware, such as multiple cameras or parallel processors, we have concentrated on getting the most performance with the least equipment. Tracking is a broad research area, but an essential component of an augmented reality system. Tracking of some sort is needed to determine the location of scene augmentation. First, we investigated the effects of illumination on the pixel values recorded by a color video camera. We used the results to track a simple solid-colored object in our first augmented reality application. Our second augmented reality application tracks complex non-rigid objects, namely human faces. In the color experiment, we studied the effects of illumination on the color values recorded by a real camera. Human perception is important for many applications, but our focus is on the RGB values available to tracking algorithms. Since the lighting in most environments where video monitoring is done is close to white, (e.g., fluorescent lights in an office, incandescent lights in a home, or direct and indirect sunlight outside,) we looked at the response to "white" light sources as the intensity varied. The red, green, and blue values recorded by the camera can be converted to a number of other color spaces which have been shown to be invariant to various lighting conditions, including view angle, light angle, light intensity, or light color, using models of the physical properties of reflection. Our experiments show how well these derived quantities actually remained constant with real materials, real lights, and real cameras, while still retaining the ability to discriminate between different colors. This color experiment enabled us to find color spaces that were more invariant to changes in illumination intensity than the ones traditionally used. The first augmented reality application tracks a solid colored rectangle and replaces the rectangle with an image, so it appears that the subject is holding a picture instead. Tracking this simple shape is both easy and hard; easy because of the single color and the shape that can be represented by four points or four lines, and hard because there are fewer features available and the color is affected by illumination changes. Many algorithms for tracking fixed shapes do not run in real time or require rich feature sets. We have created a tracking method for simple solid colored objects that uses color and edge information and is fast enough for real-time operation. 
We also demonstrate a fast deinterlacing method to avoid "tearing" of fast-moving edges when recorded by an interlaced camera, and optimization techniques that usually achieved a speedup of about 10 over an implementation that already used optimized image processing library routines. Human faces are complex objects that differ between individuals and undergo non-rigid transformations. Our second augmented reality application detects faces, determines their initial pose, and then tracks changes in real time. The results are displayed as virtual objects overlaid on the real video image. We used existing algorithms for motion detection and face detection. We present a novel method for determining the initial face pose in real time using symmetry. Our face tracking uses existing point tracking methods as well as extensions to Active Appearance Models (AAMs). We also give a new method for integrating detection and tracking data and leveraging the temporal coherence in video data to mitigate false positive detections. While many face tracking applications assume exactly one face is in the image, our techniques can handle any number of faces. The color experiment, along with the two augmented reality applications, provides improvements in understanding the effects of illumination intensity changes on recorded colors, as well as better real-time methods for detection and tracking of solid shapes and human faces for augmented reality. These techniques can be applied to other real-time video analysis tasks, such as surveillance.
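A standard example of an intensity-invariant color space of the kind the color experiment compares (the abstract does not name the specific spaces tested) is normalized rg chromaticity, where dividing each channel by the pixel's total intensity removes overall brightness. A small illustrative sketch, not the dissertation's code:

    import numpy as np

    def chromaticity(rgb):
        """Return (r, g) = (R, G) / (R + G + B), which is unchanged when the
        illumination intensity is scaled (values here are illustrative)."""
        rgb = np.asarray(rgb, dtype=float)
        total = rgb.sum(axis=-1, keepdims=True)
        return np.divide(rgb[..., :2], total,
                         out=np.zeros_like(rgb[..., :2]), where=total > 0)

    pixel = np.array([120.0, 80.0, 40.0])
    print(chromaticity(pixel))          # [0.5        0.33333333]
    print(chromaticity(0.5 * pixel))    # same values under a 50% dimmer light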
9

PhysiKart : A 2D racing game controlled by physical activity through face-tracking software

Perisic, Hanna, Strömqvist, Theodor January 2022 (has links)
Background: A sedentary lifestyle is becoming more common as our society is shifting away from physical labour. Many workplaces offer work-from-home arrangements, schools offer tutoring over the internet, and children's playgrounds, once full of life, are mostly empty. The availability of gaming devices, smartphones, and social media has made a big impact on the way we choose to live. With that come the challenges of negative health effects on a population level. Gaming has traditionally been thought of as a sedentary activity in front of a desk and screen, but the development of exergames and the rise of gamification have changed our perception of gaming and in some cases been shown to be directly beneficial for our health. Objectives: A gamification project led by Erik Berglund at the department of Computer and Information Sciences at Linköping University is investigating whether exergames can be enjoyed, give users a feeling of control, and lead to heightened exertion levels. With this in mind, we set out to develop a 2D racing exergame, PhysiKart, controlled by a face-tracking machine learning algorithm from Google's MediaPipe library. Method: 14 participants tested the game 3-5 times, with each session lasting 2 minutes. The participants filled out the Exergame Enjoyment Questionnaire and rated their exertion levels on the RPE scale. Results: Most participants found the game responsive to the control system and felt that the scoring system motivated them to continue playing. However, the users perceived the immersion of the game to be lower than desired, which is believed to be a consequence of the slow pacing and the short rounds of the game. Conclusions: The survey confirmed that using face-tracking software for an exergame is suitable for achieving low levels of exertion. By changing the control motions of the game, it would be possible to increase exertion while still utilizing face tracking.
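The abstract does not include the control mapping, but a minimal sketch of the idea (using MediaPipe's face detection solution; the steering function and ranges are invented for illustration, not taken from PhysiKart) could turn the horizontal position of the detected face into a left/right control value:

    import cv2
    import mediapipe as mp

    # MediaPipe's bundled face detector; the game would poll this every frame.
    face_detection = mp.solutions.face_detection.FaceDetection(
        model_selection=0, min_detection_confidence=0.5)

    def steering_from_frame(frame_bgr):
        """Map the detected face's horizontal position to a value in [-1, 1]."""
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        results = face_detection.process(rgb)
        if not results.detections:
            return 0.0                            # no face found: steer straight
        box = results.detections[0].location_data.relative_bounding_box
        center_x = box.xmin + box.width / 2       # relative coordinate in [0, 1]
        return max(-1.0, min(1.0, 2.0 * center_x - 1.0))

    cap = cv2.VideoCapture(0)
    for _ in range(300):                          # roughly ten seconds at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        print(f"steering: {steering_from_frame(frame):+.2f}")
    cap.release()

In the study, the control motions were chosen to keep exertion low; mapping larger head or body movements to the same kind of signal is the sort of change the conclusion suggests for raising exertion.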
