1

Aspects of facial biometrics for verification of personal identity

Ramos Sanchez, M. Ulises January 2000 (has links)
No description available.
2

Rastreamento labial: aplicação em leitura labial. / Lip tracking: an application in lip reading.

Negreiros, Tupã 07 November 2012 (has links)
New human-computer interfaces have been researched with the aim of making them more natural and flexible. Lip tracking, the focus of this work, is part of this context: it can be used to detect emotions and to aid speech recognition, and it may thus become an initial module for lip reading in the creation of interfaces aimed at the hearing impaired. The algorithms available in the literature were analyzed and compared, showing the pros and cons of each method. A technique based on the Active Appearance Model (AAM) was then chosen for development. The AAM builds a model from a set of training images, which can then be used to track lips in new images. The proposed technique uses genetic algorithms to fit the model, differing in this respect from the fitting procedure originally proposed for the AAM.
The convergence of the proposed technique was analyzed extensively under varying parameters, examining the residual error of the cost function and its relation to convergence time and position error.
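The abstract above describes fitting AAM parameters with a genetic algorithm instead of the AAM's original update scheme. A minimal, hypothetical sketch of such a fit, with a toy quadratic cost standing in for the appearance residual (the thesis's actual cost function, parameter space, and GA settings are not reproduced here):

```python
import random

random.seed(0)  # deterministic run for this sketch

def fit_parameters(cost, dim, pop_size=40, generations=60,
                   mutation_sigma=0.1, bounds=(-1.0, 1.0)):
    """Minimal genetic-algorithm fit: evolve a population of parameter
    vectors toward low cost (in an AAM, the cost would be the appearance
    residual between the model synthesis and the image)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 + random.gauss(0, mutation_sigma)
                     for x, y in zip(a, b)]   # crossover + mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Toy stand-in for the appearance residual: squared distance to a target.
target = [0.3, -0.5, 0.8]
cost = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))
best = fit_parameters(cost, dim=3)
```

The elitist step preserves the best candidates unchanged across generations, so the best cost found is monotonically non-increasing, which is the property the convergence analysis in the abstract studies against parameter variation.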
4

Expressing emotions through vibration for perception and control / Expressing emotions through vibration

ur Réhman, Shafiq January 2010 (has links)
This thesis addresses a challenging problem: how to let the visually impaired “see” others’ emotions. We, human beings, rely heavily on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, and so on. People use the emotional information in facial expressions to switch between conversation topics and to judge the attitudes of individuals. Missing the emotional information carried by facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social situations. To enhance the social interaction abilities of the visually impaired, this thesis works on the scientific topic of expressing human emotions through vibrotactile patterns. Delivering human emotions through touch is quite challenging, since the touch channel has very limited bandwidth. We first investigated how to render emotions through a single vibrator. We developed a real-time “lipless” tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive the primary emotion types. Later, we extended the system to render more general dynamic media signals, for example rendering live football games through vibration on a mobile phone to improve the communication and entertainment experience of mobile users. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed technology that enables the visually impaired to interpret human emotions directly. This was achieved by combining machine vision techniques with a vibrotactile display. The display comprises a matrix of vibration actuators mounted on the back of a chair; the actuators are activated sequentially to convey dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation of facial expressions to replace the state-of-the-art Facial Action Coding System (FACS) approach.
We proposed using the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions, with increasing intensity, become curves on the manifold extending from its center. Blends of emotions lie between those curves and can be defined analytically by the positions of the main curves. The manifold is, in effect, the “Braille code” of emotions. The developed methodology and technology have been extended to build assistive wheelchair systems for a specific group of disabled people, cerebral palsy or stroke patients who lack fine motor control and therefore cannot access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array that provides the wheelchair user with action information from gestures and with system status information, which is very important for the usability of such an assistive system. This research provides not only a foundation stone for vibrotactile rendering systems based on object localization but also a concrete step toward a new dimension of human-machine interaction. / Taktil Video
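The abstract above describes sequentially activating a 2D matrix of vibration actuators to render a gesture or emotion trajectory. A small illustrative sketch, assuming a normalized 2-D trajectory and a hypothetical 4 x 4 actuator grid (the thesis's actual hardware layout, grid size, and rendering scheme are not reproduced here):

```python
def render_on_grid(points, rows=4, cols=4):
    """Map a normalized 2-D trajectory (x, y in [0, 1]) onto a rows x cols
    actuator grid, returning the sequence of actuators to pulse in order."""
    seq = []
    for x, y in points:
        r = min(rows - 1, int(y * rows))   # quantize y to a grid row
        c = min(cols - 1, int(x * cols))   # quantize x to a grid column
        if not seq or seq[-1] != (r, c):   # skip repeats of the same actuator
            seq.append((r, c))
    return seq

# A hypothetical head-gesture trajectory sweeping left to right
# at constant height (x goes 0.0 -> 1.0, y stays at 0.5).
trajectory = [(i / 10, 0.5) for i in range(11)]
pulses = render_on_grid(trajectory)
```

In a real system, each (row, column) pair in the returned sequence would drive the corresponding actuator for a short pulse, so the user feels the trajectory sweep across the grid on the back of the chair.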
