About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Utveckling av ett active vision system för demonstration av EDSDK++ i tillämpningar inom datorseende

Kargén, Rolf January 2014 (has links)
Datorseende är ett snabbt växande, tvärvetenskapligt forskningsområde vars tillämpningar tar en allt mer framskjutande roll i dagens samhälle. Med ett ökat intresse för datorseende ökar också behovet av att kunna kontrollera kameror kopplade till datorseende system. Vid Linköpings tekniska högskola, på avdelningen för datorseende, har ramverket EDSDK++ utvecklats för att fjärrstyra digitala kameror tillverkade av Canon Inc. Ramverket är mycket omfattande och innehåller en stor mängd funktioner och inställningsalternativ. Systemet är därför till stor del ännu relativt oprövat. Detta examensarbete syftar till att utveckla ett demonstratorsystem till EDSDK++ i form av ett enkelt active vision system, som med hjälp av ansiktsdetektion i realtid styr en kameratilt, samt en kamera monterad på tilten, till att följa, zooma in och fokusera på ett ansikte eller en grupp av ansikten. Ett krav var att programbiblioteket OpenCV skulle användas för ansiktsdetektionen och att EDSDK++ skulle användas för att kontrollera kameran. Dessutom skulle ett API för att kontrollera kameratilten utvecklas. Under utvecklingsarbetet undersöktes bl.a. olika metoder för ansiktsdetektion. För att förbättra prestandan användes multipla ansiktsdetektorer, som med hjälp av multitrådning avsöker en bild parallellt från olika vinklar. Såväl experimentella som teoretiska ansatser gjordes för att bestämma de parametrar som behövdes för att kunna reglera kamera och kameratilt. Resultatet av arbetet blev en demonstrator, som uppfyllde samtliga krav. / Computer vision is a rapidly growing, interdisciplinary field whose applications are taking an increasingly prominent role in today's society. With the increased interest in computer vision there is also a growing need to be able to control cameras connected to computer vision systems. At the division of computer vision at Linköping University, the EDSDK++ framework has been developed to remotely control digital cameras made by Canon Inc. The framework is very comprehensive and contains a large number of features and configuration options, and the system is therefore still largely untested. This thesis aims to develop a demonstrator for EDSDK++ in the form of a simple active vision system that uses real-time face detection to control a camera tilt unit, and a camera mounted on it, to follow, zoom in on, and focus on a face or a group of faces. A requirement was that the OpenCV library be used for face detection and that EDSDK++ be used to control the camera. Moreover, an API for controlling the camera tilt unit was to be developed. During development, different methods for face detection were investigated. To improve performance, multiple face detectors were run in parallel threads, each scanning the image from a different angle. Both experimental and theoretical approaches were used to determine the parameters needed to control the camera and the camera tilt unit. The project resulted in a fully functional demonstrator that fulfilled all requirements.
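The English abstract above describes running multiple face detectors in parallel threads, each scanning the image from a different angle. As a rough, hedged illustration of that one idea (the thesis itself drives a Canon camera and a tilt unit through EDSDK++, none of which is shown here, and the angles and helper names below are assumptions, not taken from the thesis), a sketch with OpenCV Haar cascades might look like this:

```python
# Minimal sketch (not the thesis code): several Haar-cascade face detectors
# scan rotated copies of one frame in parallel threads, as the abstract outlines.
import threading
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_at_angle(gray, angle, results):
    """Rotate the frame by `angle` degrees and run the detector on it."""
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(gray, rot, (w, h))
    faces = cascade.detectMultiScale(rotated, scaleFactor=1.1, minNeighbors=5)
    results[angle] = faces  # boxes are in the rotated frame's coordinates

def detect_faces_parallel(frame, angles=(-30, 0, 30)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results, threads = {}, []
    for a in angles:
        t = threading.Thread(target=detect_at_angle, args=(gray, a, results))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return results
```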
12

[en] COLLABORATIVE FACE TRACKING: A FRAMEWORK FOR THE LONG-TERM FACE TRACKING / [pt] RASTREAMENTO DE FACES COLABORATIVO: UMA METODOLOGIA PARA O RASTREAMENTO DE FACES AO LONGO PRAZO

VICTOR HUGO AYMA QUIRITA 22 March 2021 (has links)
[pt] O rastreamento visual é uma etapa essencial em diversas aplicações de visão computacional. Em particular, o rastreamento facial é considerado uma tarefa desafiadora devido às variações na aparência da face, devidas à etnia, gênero, presença de bigode ou barba e cosméticos, além de variações na aparência ao longo da sequência de vídeo, como deformações, variações em iluminação, movimentos abruptos e oclusões. Geralmente, os rastreadores são robustos a alguns destes fatores, porém não alcançam resultados satisfatórios ao lidar com múltiplos fatores ao mesmo tempo. Uma alternativa é combinar as respostas de diferentes rastreadores para alcançar resultados mais robustos. Este trabalho se insere neste contexto e propõe um novo método para a fusão de rastreadores escalável, robusto, preciso e capaz de manipular rastreadores independentemente de seus modelos. O método prevê ainda a integração de detectores de faces ao modelo de fusão de forma a aumentar a acurácia do rastreamento. O método proposto foi implementado para fins de validação, tendo sido testado em diversas configurações que combinaram até cinco rastreadores distintos e um detector de faces. Em testes realizados a partir de quatro sequências de vídeo que apresentam condições diversas de imageamento o método superou em acurácia os rastreadores utilizados individualmente. / [en] Visual tracking is fundamental in several computer vision applications. In particular, face tracking is challenging because of the variations in facial appearance, due to age, ethnicity, gender, facial hair, and cosmetics, as well as appearance variations in long video sequences caused by facial deformations, lighting conditions, abrupt movements, and occlusions. Generally, trackers are robust to some of these factors but do not achieve satisfactory results when dealing with combined occurrences. An alternative is to combine the results of different trackers to achieve more robust outcomes. This work fits into this context and proposes a new method for scalable, robust and accurate tracker fusion able to combine trackers regardless of their models. The method further provides the integration of face detectors into the fusion model to increase the tracking accuracy. The proposed method was implemented for validation purposes and was tested in different configurations that combined up to five different trackers and one face detector. In tests on four video sequences that present different imaging conditions the method outperformed the trackers used individually.
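The abstract does not spell out the fusion rule, so the following is only a generic sketch of the idea of combining per-frame bounding boxes from several independent trackers and a face detector; the coordinate-wise median and the OpenCV-style tracker interface are assumptions, not the method proposed in the thesis:

```python
# Illustrative sketch only: fuse per-frame bounding boxes from several
# independent trackers with a coordinate-wise median, and let a face detector
# re-anchor the fused estimate when it fires.
import numpy as np

def fuse_boxes(boxes):
    """boxes: list of (x, y, w, h) proposals from the individual trackers."""
    arr = np.asarray(boxes, dtype=float)
    return tuple(np.median(arr, axis=0))  # robust to one or two failed trackers

def track_frame(frame, trackers, detector=None):
    proposals = []
    for trk in trackers:
        ok, box = trk.update(frame)        # assumed OpenCV-style tracker interface
        if ok:
            proposals.append(box)
    if detector is not None:
        proposals.extend(detector(frame))  # assumed callback returning face boxes
    if not proposals:
        return None                        # all trackers lost the target
    return fuse_boxes(proposals)
```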
13

Model-Based Eye Detection and Animation

Trejo Guerrero, Sandra January 2006 (has links)
In this thesis we present a system that extracts the eye motion from a video stream containing a human face and applies this eye motion to a virtual character. By eye motion estimation we mean the information that describes the location of the eyes in each frame of the video stream. By applying this eye motion estimation to a virtual character, the virtual face moves its eyes in the same way as the human face, synthesizing eye motion in the virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face has been developed. Once an image containing a human face has been extracted from the current frame of the video stream, the eyes are detected and extracted. The detection and extraction of the eyes is based on edge detection. The iris center is then determined by applying different image preprocessing steps and region segmentation using edge features on the extracted eye image.

Once the eye motion has been extracted, it is translated into MPEG-4 Facial Animation Parameters (FAPs). In this way we can improve the quality and quantity of the facial animation expressions that we can synthesize in a virtual character.
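As a hedged illustration of the iris-localization step described above (preprocessing, edge features, then a region estimate), a minimal sketch on a cropped eye image could look like the following; the specific operators and thresholds are assumptions, not taken from the thesis:

```python
# Minimal sketch, not the thesis implementation: locate the iris center in a
# cropped eye image from edge evidence, roughly in the spirit of the abstract.
import cv2
import numpy as np

def iris_center(eye_bgr):
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                 # normalize contrast
    gray = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress noise before edges
    edges = cv2.Canny(gray, 50, 150)
    # The iris/pupil region is dark; combine edges with a dark-region mask.
    # The threshold value 60 is an arbitrary illustrative choice.
    _, dark = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.bitwise_and(edges, cv2.dilate(dark, np.ones((5, 5), np.uint8)))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                               # no reliable edge evidence
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid (x, y)
```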
14

Face recognition from video

Harguess, Joshua David 30 January 2012 (has links)
While the area of face recognition has been extensively studied in recent years, it remains a largely open problem, despite what movie and television studios would lead you to believe. Frontal, still face recognition research has seen a lot of success in recent years from many different researchers. However, the accuracy of such systems can be greatly diminished in cases such as increasing the variability of the database, occluding the face, and varying the illumination of the face. Further varying the pose of the face (yaw, pitch, and roll) and the facial expression (smile, frown, etc.) adds even more complexity to the face recognition task, such as in the case of face recognition from video. In a more realistic video surveillance setting, a face recognition system should be robust to scale, pose, resolution, and occlusion, and should successfully track the face between frames. A more advanced face recognition system should also be able to improve the recognition result by utilizing the information present in multiple video cameras. We approach the problem of face recognition from video in the following manner. We assume that the training data for the system consists of only still image data, such as passport photos or mugshots in a real-world system. We then transform the problem of face recognition from video into a still face recognition problem. Our research focuses on solutions to detecting, tracking, and extracting face information from video frames so that it may be utilized effectively in a still face recognition system. We have developed four novel methods that assist in face recognition from video and multiple cameras. The first uses a patch-based method to handle the face recognition task when only patches, or parts, of the face are seen in a video, such as when occlusion of the face happens often. The second fuses the recognition results of multiple cameras to improve the recognition accuracy. In the third solution, we utilize multiple overlapping video cameras to improve the face tracking result, which in turn improves the face recognition accuracy of the system. We additionally implement a methodology to detect and handle occlusion so that unwanted information is not used in the tracking algorithm. Finally, we introduce the average-half-face, which is shown to improve the results of still face recognition by utilizing the symmetry of the face. In an attempt to understand the use of the average-half-face in face recognition, an analysis of the effect of face symmetry on face recognition results is presented.
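The average-half-face mentioned at the end exploits facial symmetry. A plausible construction, shown here only as a sketch since the dissertation's exact definition may differ, is to average one half of an aligned face image with the mirror image of the other half:

```python
# Hedged sketch of an "average-half-face": average the left half of an aligned
# face image with the horizontally mirrored right half. The precise definition
# used in the dissertation may differ from this illustration.
import numpy as np

def average_half_face(face):
    """face: aligned grayscale face image as a 2-D numpy array (rows, cols)."""
    h, w = face.shape
    half = w // 2
    left = face[:, :half].astype(float)
    right_mirrored = face[:, w - half:][:, ::-1].astype(float)  # flip right half
    return ((left + right_mirrored) / 2.0).astype(face.dtype)
```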
16

A Software Framework for Facial Modelling and Tracking

Strand, Mattias January 2010 (has links)
The WinCandide application, a platform for face tracking and model-based coding, had become out of date and needed to be upgraded. This report is based on the work of investigating open-source GUI frameworks and computer vision toolkits that could replace the old, unsupported ones. Multi-platform GUI frameworks are of special interest.
17

The State of Live Facial Puppetry in Online Entertainment

Gren, Lisa, Lindberg, Denny January 2024 (has links)
Avatars are used more and more in online communication, in both games and social media. At the same time, technology for facial puppetry, where the expressions of the user are transferred to the avatar, has developed rapidly. Why is it that facial puppetry, despite this, is conspicuous by its absence? This thesis analyzes the available and upcoming solutions for facial puppetry, whether a common framework or library can exist, and what can be done to simplify the process for developers who want to implement facial puppetry. A survey was conducted to get a better understanding of the technology. It showed that there is no standard yet for how to describe facial expressions, but part of the market is converging towards a common format. It also showed that there is no existing interface that can handle communication with tracking devices or translation between different expression formats. Several prototypes for recording and streaming facial expression data from different sources were implemented as a practical test. This was done to evaluate the complexity of implementing real-time facial puppetry. It showed that it is not always possible to integrate the available tracking solutions into an existing project. When integration was possible it required a lot of work. The best way to get tracking right now seems to be to implement a standalone program for tracking that streams the tracked data to the main application. In summary, it is the poor integrability of the solutions, together with a wide variety of facial expression formats, that makes it problematic for developers. A piece of software that acts as a bridge between the tracking solutions and the game could allow for translation between different formats and simplify the implementation of support. In the future, instead of working towards making all tracking solutions output standardized tracking data, research should focus on how to build a framework that can handle different configurations. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
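The abstract's conclusion, that a standalone tracking program streaming data to the main application is currently the most practical route, can be pictured with a small hypothetical sketch; the transport (local UDP), the JSON schema, and the field names are assumptions and are not taken from the thesis prototypes:

```python
# Hypothetical sketch: a standalone tracker process streams per-frame facial
# expression weights to the main application over local UDP as JSON. The
# transport, message schema, and field names are assumptions for illustration.
import json
import socket
import time

ADDRESS = ("127.0.0.1", 9000)   # assumed local endpoint of the main application

def stream_expressions(get_weights, fps=30):
    """get_weights() should return a dict of expression name -> weight in [0, 1]."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame = 0
    while True:
        message = {"frame": frame, "timestamp": time.time(), "weights": get_weights()}
        sock.sendto(json.dumps(message).encode("utf-8"), ADDRESS)
        frame += 1
        time.sleep(1.0 / fps)

# Example with a dummy tracker that always reports a neutral face:
# stream_expressions(lambda: {"jawOpen": 0.0, "mouthSmileLeft": 0.0})
```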
18

Recalage d'images de visage / Facial image registration

Ni, Weiyuan 11 December 2012 (has links)
Etude bibliographique sur le recalage d'images de visage et sur le recalage d'images et travail en collaboration avec Son Vu, pour définir la précision nécessaire du recalage en fonction des exigences des méthodes de reconnaissance de visages. / Face alignment is an important step in a typical automatic face recognition system. This thesis addresses the alignment of faces for face recognition applications in a video surveillance context. The main challenging factors of this research include the low quality of images (e.g., low resolution, motion blur, and noise), uncontrolled illumination conditions, pose variations, expression changes, and occlusions. In order to deal with these problems, we propose several face alignment methods using different strategies. The first part of our work is a three-stage method for facial point localization which can be used for correcting mis-alignment errors. While existing algorithms mostly rely on a priori knowledge of facial structure and on a training phase, our approach works in an online mode without requiring pre-defined constraints on feature distributions. The proposed method works well on images under expression and lighting variations. The key contributions of this thesis are joint image alignment algorithms where a set of images is simultaneously aligned without a biased template selection. We propose two unsupervised joint alignment algorithms: "Lucas-Kanade entropy congealing" (LKC) and "gradient correlation congealing" (GCC). In LKC, an image ensemble is aligned by minimizing a sum-of-entropy function defined over all images. GCC uses the gradient correlation coefficient as similarity measure. The proposed algorithms perform well on images under different conditions. To further improve the robustness to mis-alignments and the computational speed, we apply a multi-resolution framework to the joint face alignment algorithms. Moreover, our work is not limited to the face alignment stage. Since face alignment and face acquisition are interrelated, we develop an adaptive appearance face tracking method with alignment feedback. This closed-loop framework shows its robustness to large variations in the target's state, and it significantly decreases the mis-alignment errors in tracked faces.
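As a hedged illustration of the gradient-correlation similarity named in the abstract (GCC), the sketch below computes a correlation coefficient between the image gradients of two same-size images; the exact formulation used in the thesis, and the congealing optimization built on top of it, are not shown:

```python
# Illustrative sketch only: a gradient-correlation similarity between two images,
# in the spirit of the GCC similarity measure named in the abstract.
import numpy as np

def gradient_correlation(img_a, img_b):
    """Correlation coefficient between the image gradients of two same-size images."""
    def grads(img):
        gy, gx = np.gradient(img.astype(float))   # row- and column-wise gradients
        return np.concatenate([gx.ravel(), gy.ravel()])
    ga, gb = grads(img_a), grads(img_b)
    ga -= ga.mean()
    gb -= gb.mean()
    denom = np.linalg.norm(ga) * np.linalg.norm(gb)
    return float(ga @ gb / denom) if denom > 0 else 0.0
```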
19

Automatická regulace velikosti písma podle vzdálenosti čtenáře / Font size adjustment based on distance detection

Brunclík, Robert January 2016 (has links)
The thesis deals with automatic control of the font size based on the reader's distance. It includes a theoretical introduction to face detection and the subsequent tracking of the detected region throughout the scene. Furthermore, the tracking algorithms are compared. The calculation of the distance is then described. It is based on a user calibration, and based on its outcome the font size is automatically adjusted. There is also a description of a separate application, Automatical controller of the text size, together with the recommended program settings.
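The distance estimate described above can be pictured with a simple pinhole-camera sketch: calibrate once at a known distance, estimate the current distance from the detected face width, and scale the font proportionally. The constants and the linear scaling rule below are assumptions for illustration, not the thesis implementation:

```python
# Hedged sketch: calibrate once at a known distance, estimate the reader's
# distance from the detected face width, and scale the font proportionally.
def calibrate(face_width_px, known_distance_cm, real_face_width_cm=15.0):
    """One-off calibration: recover an effective focal length in pixels."""
    return face_width_px * known_distance_cm / real_face_width_cm

def estimate_distance(face_width_px, focal_px, real_face_width_cm=15.0):
    """Pinhole-camera estimate: distance grows as the detected face shrinks."""
    return focal_px * real_face_width_cm / face_width_px

def adjusted_font_size(distance_cm, base_size_pt=12.0, base_distance_cm=50.0):
    """Keep the apparent letter size roughly constant by scaling with distance."""
    return base_size_pt * distance_cm / base_distance_cm

# Example: calibrated with a 200 px wide face at 50 cm; the face is now 120 px wide.
focal = calibrate(face_width_px=200, known_distance_cm=50)
d = estimate_distance(face_width_px=120, focal_px=focal)
print(round(d), "cm ->", round(adjusted_font_size(d), 1), "pt")
```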
20

Detekce obličejů ve videu / Face Detection in Video

Kolman, Aleš January 2012 (has links)
The project is focused on face detection in video. First, it contains a summary of basic color models. Second, it describes and compares the basic methods for human skin detection, with a practical example implementation of a parametric detector. Third, it provides a theoretical basis for face detection and face tracking in video, covering the basic concepts and methods of the field. Greater emphasis is placed on the AdaBoost machine learning algorithm and on the possible application of the Kalman filter for face tracking. The design, implementation, and testing of the library developed within the master's thesis are presented in the final part.
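As a hedged sketch of the Kalman-filter idea mentioned in the abstract, the snippet below smooths and predicts the tracked face center between detections using OpenCV's cv2.KalmanFilter; the constant-velocity state model and noise levels are illustrative choices, not the thesis code:

```python
# Minimal sketch: a constant-velocity Kalman filter on the face center, used to
# smooth detections and bridge frames where the detector misses the face.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                      # state: [x, y, vx, vy], measurement: [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(detection):
    """detection: (x, y) face center from the detector, or None if it missed."""
    prediction = kf.predict()                    # predicted center for this frame
    if detection is not None:
        measurement = np.array([[detection[0]], [detection[1]]], dtype=np.float32)
        kf.correct(measurement)                  # fuse the detector measurement
    return float(prediction[0, 0]), float(prediction[1, 0])
```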
