  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Interactive Imaging via Hand Gesture Recognition.

Jia, Jia January 2009 (has links)
With the growth of computing power, digital image processing plays an increasingly important role in the modern world, in fields including industry, medicine, communications, and spaceflight technology. As a sub-field, interactive image processing focuses on communication between machine and human. The basic pipeline is: definition of the object, an analysis and training phase, then recognition and feedback. Generally speaking, the core issue is how to define the object of interest and track it accurately so that the interaction completes successfully. This thesis proposes a novel dynamic simulation scheme for interactive image processing. The work consists of two main parts: hand motion detection and hand gesture recognition. During hand motion detection, movement of the hand is identified and extracted. Within a given detection period, the current image is compared with the previous image to generate the difference between them; if the generated difference exceeds a predefined alarm threshold, a hand motion is detected. Furthermore, in some situations, changes of hand gesture must also be detected and classified. This task requires feature extraction and feature comparison among the gesture types. The essential features of a hand gesture include low-level features such as colour and shape. Another important feature is the orientation histogram: each type of hand gesture has a particular representation in the orientation-histogram domain.
Because a Gaussian Mixture Model can represent an object through its essential feature elements, and Expectation-Maximization is an efficient procedure for computing the maximum likelihood between test images and the predefined standard sample of each gesture, the similarity between a test image and the samples of each gesture type is estimated with the Expectation-Maximization algorithm under a Gaussian Mixture Model. Experiments show that the proposed method works well and accurately.
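The frame-differencing step this abstract describes can be sketched as follows. This is an illustrative reconstruction, not the thesis's implementation; the per-pixel threshold and alarm ratio are hypothetical values, and frames are represented as plain 2-D lists of grayscale intensities.

```python
# Sketch of detection-period frame differencing (illustrative thresholds;
# frames are 2-D lists of 0-255 grayscale intensities).

def motion_detected(prev_frame, curr_frame, pixel_thresh=30, alarm_ratio=0.05):
    """Return True if enough pixels changed between two frames."""
    total = 0
    changed = 0
    for row_prev, row_curr in zip(prev_frame, curr_frame):
        for p, c in zip(row_prev, row_curr):
            total += 1
            if abs(c - p) > pixel_thresh:  # per-pixel difference test
                changed += 1
    return changed / total > alarm_ratio  # alarm if the change ratio is exceeded

# Two tiny 2x3 "frames": the second differs strongly in half its pixels.
still = [[10, 10, 10], [10, 10, 10]]
moved = [[10, 200, 10], [200, 10, 200]]
print(motion_detected(still, still))  # → False
print(motion_detected(still, moved))  # → True
```

A real system would apply the same comparison to consecutive camera frames and smooth the decision over several detection periods.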
12

Hand Gesture Recognition System

Gingir, Emrah 01 September 2010 (has links) (PDF)
This thesis presents a hand gesture recognition system that replaces input devices such as the keyboard and mouse with static and dynamic hand gestures for interactive computer applications. Despite the increasing attention such systems receive, certain limitations remain in the literature. Most applications impose constraints such as controlled lighting conditions, a specific camera, a multi-coloured glove worn by the user, or large amounts of training data. The system described in this study removes all these restrictions and provides an adaptive, effort-free environment for the user. The study starts with an analysis of the performance of different colour spaces for skin colour extraction. This analysis is independent of the working system and is performed purely to obtain useful information about the colour spaces. The working system consists of two steps: hand detection and hand gesture recognition. In the hand detection step, a skin locus in normalized RGB colour space thresholds the coarse skin pixels in the image. An adaptive skin locus, whose varying boundaries are estimated from the coarse skin-region pixels, then segments the skin colour in the image under the current conditions. Since the face has a distinctive shape, it is detected among the connected groups of skin pixels by shape analysis; non-face connected groups of skin pixels are labelled as hands. The gesture of the hand is recognized by an improved centroidal profile method applied around the detected hand. A 3D flight war game, a boxing game and a media player, all controlled remotely using only static and dynamic hand gestures, were developed as human-machine interface applications on the theoretical background of this study. In the experiments, recorded videos were used to measure the performance of the system, and a correct recognition rate of ~90% was achieved with near real-time computation.
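The coarse skin-thresholding step in normalized RGB can be sketched like this. The locus bounds below are illustrative assumptions, not the thesis's fitted boundaries; in the actual system an adaptive locus would refine them per image.

```python
# Minimal sketch of coarse skin thresholding in normalized RGB.
# The locus bounds are hypothetical, for illustration only.

def normalized_rgb(r, g, b):
    """Map RGB to chromaticity coordinates, removing brightness."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0, 0.0
    return r / s, g / s, b / s

def is_coarse_skin(r, g, b):
    rn, gn, _ = normalized_rgb(r, g, b)
    # Hypothetical skin locus: a reddish chromaticity band.
    return 0.35 < rn < 0.60 and 0.25 < gn < 0.37

print(is_coarse_skin(180, 120, 90))   # skin-like tone → True
print(is_coarse_skin(60, 60, 200))    # bluish pixel → False
```

Because normalization divides out overall brightness, this kind of locus is less sensitive to illumination intensity than a raw-RGB threshold, which is why the thesis's pipeline starts from it.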
13

Commande sans contact d'éclairage opératoire par méthodes de vision informatique / Non-contact remote control of surgical lighting devices by computer vision means

Collumeau, Jean-François 31 January 2014 (has links)
De nos jours, le maintien de l'asepsie dans la salle d'opération est vital pour la limitation de la transmission d'infections nosocomiales au patient lors de l'opération. Des mesures d'asepsie drastiques ont pour but de préserver la zone stérile de tout agent infectieux. Elles interdisent actuellement au chirurgien d'interagir avec les équipements non-stériles du bloc. Le chirurgien opérant souhaiterait cependant disposer d'un contrôle direct sur certains équipements spécifiques du bloc dans des situations données sans enfreindre ces mesures. Les travaux présentés dans cette thèse concernent le développement d'une Interface Homme-Machine permettant la commande gestuelle sans contact, et donc sans transmission d'agents infectieux, de tels équipements. Dans la continuité des travaux existants dans la littérature, une chaîne de traitement basée sur des techniques de vision informatique et un prototype de caméra portée par l'utilisateur ont ainsi été développés pour atteindre ces objectifs. Ce document présente les études comparatives menées sur des algorithmes issus de la littérature afin de sélectionner les plus aptes à être employés dans la chaîne logicielle. Un descripteur géométrique dédié aux mains est introduit, et des approches coopératives sont investiguées sur les étapes de localisation de la main et de classification de la posture prise. Les performances de la chaîne de traitement ainsi créée sont évaluées dans différentes situations à l'aide de bases d'images et de vidéos extensives acquises dans des conditions proches de celles du bloc opératoire, ainsi que sur des images synthétiques réalisées sur un modèle virtuel de main créé ad hoc. Un démonstrateur composé de la chaîne de traitement développée et d'un prototype de caméra frontale permet, associé à une simulation de bras-support d'éclairage opératoire, d'illustrer les possibilités ouvertes par le système développé au cours de cette thèse.
/ Asepsis preservation in operating rooms is nowadays compulsory for avoiding the spread of hospital-acquired diseases to patients during surgeries. Drastic asepsis measures aim at preserving the sterile area of the operating room from infective agents. These measures forbid surgeons from interacting with non-sterile devices. Surgeons nonetheless wish to have direct control over some of these devices. The work presented in this thesis relates to the development of a Human-Computer Interface enabling remote, non-contact control over such devices, hence without transmission of infective agents. Following on from previous work in the literature, an image processing chain based on computer vision techniques and a wearable camera prototype have been developed to achieve these goals. This document presents the comparative studies conducted on algorithms from the literature with the aim of selecting those most suitable for use in the processing chain. A dedicated geometry-based hand descriptor is introduced, and cooperative approaches are investigated for the hand localization and posture classification steps. The performance achieved by the processing chain in various situations is quantified using extensive picture and video databases acquired in conditions close to those of the operating room. Synthetic pictures created using an ad hoc virtual model of the hand are used as well for this evaluation. A demonstrator composed of the developed processing chain, a wearable camera prototype and a surgical lighting arm simulator illustrates the possibilities offered by the system developed during this thesis.
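The thesis introduces a dedicated geometry-based hand descriptor but its exact features are not given in this abstract, so the sketch below computes two generic geometric features (bounding-box aspect ratio and fill ratio) from a binary hand mask, purely as an illustration of what a shape descriptor over a segmented hand region looks like.

```python
# Illustrative geometric descriptor over a binary hand mask.
# These two features are generic stand-ins, not the thesis's descriptor.

def geometric_descriptor(mask):
    """mask: 2-D list of 0/1; returns (aspect_ratio, fill_ratio)."""
    points = [(y, x) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    ys = [p[0] for p in points]
    xs = [p[1] for p in points]
    h = max(ys) - min(ys) + 1          # bounding-box height
    w = max(xs) - min(xs) + 1          # bounding-box width
    aspect = w / h                     # overall shape of the region
    fill = len(points) / (w * h)       # how much of the box is hand
    return aspect, fill

mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0]]
a, f = geometric_descriptor(mask)
print(a, f)  # aspect ≈ 0.67, fill = 1.0
```

A posture classifier would feed several such scalar features, computed on the localized hand region, to a trained model.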
14

Detecção de pele humana utilizando modelos estocásticos multi-escala de textura / Skin detection for hand gesture segmentation via multi-scale stochastic texture models

Medeiros, Rafael Sachett January 2013 (has links)
A detecção de gestos é uma etapa importante em aplicações de interação humano-computador. Se a mão do usuário é detectada com precisão, tanto a análise quanto o reconhecimento do gesto de mão se tornam mais simples e confiáveis. Neste trabalho, descrevemos um novo método para detecção de pele humana, destinada a ser empregada como uma etapa de pré-processamento para segmentação de gestos de mão em sistemas que visam o seu reconhecimento. Primeiramente, treinamos os modelos de cor e textura de pele (material a ser identificado) a partir de um conjunto de treinamento formado por imagens de pele. Nessa etapa, construímos um modelo de mistura de Gaussianas (GMM), para determinar os tons de cor da pele e um dicionário de textons, para textura de pele. Em seguida, introduzimos uma estratégia de fusão estocástica de regiões de texturas, para determinar todos os segmentos de diferentes materiais presentes na imagem (cada um associado a uma textura). Tendo obtido todas as regiões, cada segmento encontrado é classificado com base nos modelos de cor de pele (GMM) e textura de pele (dicionário de textons). Para testar o desempenho do algoritmo desenvolvido realizamos experimentos com o conjunto de imagens SDC, projetado especialmente para esse tipo de avaliação (detecção de pele humana). Comparado com outras técnicas do estado-da-arte em segmentação de pele humana disponíveis na literatura, os resultados obtidos em nossos experimentos mostram que a abordagem aqui proposta é resistente às variações de cor e iluminação decorrentes de diferentes tons de pele (etnia do usuário), assim como de mudanças de pose da mão, mantendo sua capacidade de discriminar pele humana de outros materiais altamente texturizados presentes na imagem. / Gesture detection is an important task in human-computer interaction applications. If the hand of the user is precisely detected, both analysis and recognition of hand gesture become more simple and reliable.
This work describes a new method for human skin detection, used as a pre-processing stage for hand gesture segmentation in recognition systems. First, we obtain the models of colour and texture of human skin (the material to be identified) from a training set consisting of skin images. At this stage, we build a Gaussian mixture model (GMM) for identifying skin colour tones and a dictionary of textons for skin texture. Then, we introduce a stochastic region merging strategy to determine all segments of different materials present in the image (each associated with a texture). Once the texture regions are obtained, each segment is classified based on the skin colour (GMM) and skin texture (dictionary of textons) models. To verify the performance of the developed algorithm, we perform experiments on the SDC database, specially designed for this kind of evaluation (human skin detection). Compared with other state-of-the-art skin segmentation techniques, the results obtained in our experiments show that the proposed approach is robust to colour and illumination variations arising from different skin tones (ethnicity of the user) as well as changes of hand pose, while keeping its ability to discriminate human skin from other highly textured background materials.
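The GMM-based skin colour test can be sketched as below: a pixel's chromaticity is scored under a Gaussian mixture for skin and accepted when the density clears a threshold. All parameters here are hand-set, hypothetical placeholders; the real model is fitted to the training skin images (and the full method also checks the texton-based texture model).

```python
# Illustrative GMM skin-colour test over one chromaticity channel.
# Mixture parameters and threshold are made-up placeholders.
import math

def gaussian(x, mean, var):
    """1-D Gaussian density."""
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

# Two components over the normalized-red channel: (weight, mean, variance).
SKIN_GMM = [(0.6, 0.45, 0.002), (0.4, 0.52, 0.004)]

def skin_likelihood(rn):
    """Mixture density: weighted sum of component densities."""
    return sum(w * gaussian(rn, m, v) for w, m, v in SKIN_GMM)

def is_skin(rn, thresh=1.0):
    return skin_likelihood(rn) > thresh

print(is_skin(0.46))  # near the skin modes → True
print(is_skin(0.20))  # far from both components → False
```

In the full pipeline this test is applied per region after stochastic region merging, rather than per pixel, which is what gives the method its robustness to textured backgrounds.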
17

Ovládání počítače gesty / Gesture Based Human-Computer Interface

Jaroň, Lukáš January 2012 (has links)
This master's thesis describes the possibilities and principles of a gesture-based computer interface. The work describes general approaches to gesture control. It also covers the implementation of the selected method for detecting hands and fingers using depth maps read from the Kinect sensor. The implementation performs gesture recognition using hidden Markov models. For demonstration purposes, the implementation of a simple photo viewer that uses the developed gesture-based interface is also described. The work also focuses on quality testing and accuracy evaluation of the selected gesture recognizer.
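Recognition with hidden Markov models, as mentioned above, typically scores an observation sequence under one model per gesture and picks the most likely. The sketch below does this with the standard forward algorithm over discrete observations; the two toy models and all their parameters are made-up placeholders, not the thesis's trained models.

```python
# Hedged sketch of HMM-based gesture recognition via the forward algorithm.
# All model parameters are illustrative placeholders.

def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) for a discrete-observation HMM."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

# Two toy 2-state models over observations {0, 1}: "swipe" settles into
# emitting 1s, "wave" alternates between the two symbols.
MODELS = {
    "swipe": ([0.9, 0.1], [[0.2, 0.8], [0.1, 0.9]], [[0.9, 0.1], [0.1, 0.9]]),
    "wave":  ([0.5, 0.5], [[0.1, 0.9], [0.9, 0.1]], [[0.9, 0.1], [0.1, 0.9]]),
}

def recognize(obs):
    """Pick the gesture whose HMM gives the sequence the highest likelihood."""
    return max(MODELS, key=lambda g: forward_likelihood(obs, *MODELS[g]))

print(recognize([0, 1, 1, 1]))  # → swipe
print(recognize([0, 1, 0, 1]))  # → wave
```

In a real system the observation symbols would come from quantized hand-trajectory features extracted from the Kinect depth maps, and the model parameters would be learned with Baum-Welch training.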
18

Hand Gesture Controlled Omnidirectional Vehicle / Handstyrd farkost med mecanumhjul

NORMELIUS, ANTON, BECKMAN, KARL January 2020 (has links)
The purpose of this project was to study how hand gesture control can be implemented on a vehicle that utilizes mecanum wheels in order to move in all directions. Furthermore, it was investigated how the steering of such a vehicle can be made wireless to increase mobility. A prototype vehicle consisting of four mecanum wheels was constructed. Mecanum wheels are wheels that enable translation in all directions: by varying the rotational direction of each wheel, the direction of the resulting force on the vehicle is altered, making it move in the desired direction. Hand gesture control was enabled by constructing another prototype, attached to the hand, consisting of an IMU (Inertial Measurement Unit) and a transceiver. With the IMU, the hand's angle against the horizontal plane can be calculated, and instructions can be sent to the vehicle using the transceiver. These instructions contain a short message specifying in which direction the vehicle should move; the vehicle rotates its wheels accordingly and moves in that direction. The results show that wireless hand-gesture-based control of an omnidirectional vehicle works without any noticeable delay in the transmission, and that the signals sent contain the correct information about movement directions. / Syftet med detta projekt var att studera hur handstyrning kan implementeras på ett fordon som utnyttjar mecanumhjul för att röra sig i alla riktningar. Vidare undersöktes också hur styrningen av ett sådant fordon kan genomföras trådlöst för ökad mobilitet. En prototypfarkost bestående av fyra mecanumhjul konstruerades. Mecanumhjul är sådana hjul som möjliggör translation i alla riktningar. Genom att variera rotationsriktningen på vardera motor ändras riktningen av den resulterande kraften på farkosten, vilket gör att den kan förflytta sig i önskad riktning.
Handstyrning möjliggjordes genom att konstruera en till prototyp, som fästs i anslutning till handen, bestående av en IMU och en transceiver. Med IMU:n kan handens vinkel gentemot horisontalplanet beräknas och instruktioner kan skickas över till farkosten med hjälp av transceivern. Dessa instruktioner innehåller ett kort meddelande som specificerar i vilken riktning farkosten ska röra sig. Resultaten visar att trådlös handstyrning av en farkost fungerar utan märkbar tidsfördröjning i signalöverföringen och att signalerna som skickas till farkosten innehåller korrekta instruktioner gällande rörelseriktningar.
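The wheel-mixing the abstract describes — mapping a desired movement direction onto the rotational directions of the four mecanum wheels — can be sketched with a common inverse-kinematics convention. The sign convention and the rotation gain `k` are illustrative assumptions, not the project's actual implementation.

```python
# Common mecanum inverse kinematics: body velocities (vx forward,
# vy sideways, wz rotation) map to four wheel speeds. Signs and the
# rotation gain k are illustrative assumptions.

def mecanum_wheel_speeds(vx, vy, wz, k=1.0):
    """Return (front_left, front_right, rear_left, rear_right) speeds."""
    fl = vx - vy - k * wz
    fr = vx + vy + k * wz
    rl = vx + vy - k * wz
    rr = vx - vy + k * wz
    return fl, fr, rl, rr

print(mecanum_wheel_speeds(1, 0, 0))  # forward: all wheels spin the same way
print(mecanum_wheel_speeds(0, 1, 0))  # sideways: wheel directions alternate
```

Pure sideways translation is what distinguishes mecanum wheels from conventional ones: the rollers cancel the forward components, leaving only the lateral resultant force.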
19

Toward Understanding Human Expression in Human-Robot Interaction

Miners, William Ben January 2006 (has links)
Intelligent devices are quickly becoming necessities to support our activities during both work and play. We are already bound in a symbiotic relationship with these devices. An unfortunate effect of the pervasiveness of intelligent devices is the substantial investment of our time and effort to communicate intent. Even though our increasing reliance on these intelligent devices is inevitable, the limits of conventional methods for devices to perceive human expression hinder communication efficiency. These constraints restrict the usefulness of intelligent devices to support our activities. Our communication time and effort must be minimized to leverage the benefits of intelligent devices and seamlessly integrate them into society. Minimizing the time and effort needed to communicate our intent will allow us to concentrate on tasks in which we excel, including creative thought and problem solving.

An intuitive way to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternative viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort in the less intuitive techniques required for traditional keyboard- and mouse-based interfaces.

Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be overcome before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.

This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication: it provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.

The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain-specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts, as compared to an open-loop approach.
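As a minimal stand-in for the multimodal fusion idea above — not the thesis's knowledge-based architecture — the sketch below combines per-modality confidence distributions over candidate intents by normalized product, so agreement between modalities sharpens the estimate while disagreement flattens it.

```python
# Toy multimodal fusion: normalized product of per-modality confidence
# distributions over the same set of intents. A stand-in illustration,
# not the thesis's knowledge-based method.

def fuse(dist_a, dist_b):
    """Combine two confidence distributions over the same intents."""
    combined = {k: dist_a[k] * dist_b[k] for k in dist_a}
    z = sum(combined.values())
    return {k: v / z for k, v in combined.items()}  # renormalize

gesture = {"greet": 0.6, "stop": 0.4}  # gesture channel, fairly uncertain
face = {"greet": 0.7, "stop": 0.3}     # expression channel agrees
fused = fuse(gesture, face)
print(max(fused, key=fused.get))  # → greet
```

The thesis goes further by reasoning over *conflicting* evidence in a closed loop, whereas a plain product like this simply lets disagreeing modalities cancel each other.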
