101

Human motion detection and gesture recognition using computer vision methods

Liu, X. (Xin) 21 February 2019 (has links)
Abstract
Gestures are present in most daily human activities, and automatic gesture analysis is a significant research topic whose goal is to make interaction between humans and computers as natural as communication between humans. From a computer vision perspective, a gesture analysis system is typically composed of two stages: a low-level stage for human motion detection and a high-level stage for understanding human gestures. This thesis therefore contributes to gesture analysis research from two aspects: 1) detection, i.e. human motion segmentation from video sequences, and 2) understanding, i.e. gesture cue extraction and recognition.

In the first part of the thesis, two human motion detection methods based on sparse signal recovery are presented. In real videos, the foreground (human motion) pixels are not randomly distributed but exhibit group properties in both the spatial and temporal domains. Based on this observation, a spatio-temporal group sparsity recovery model is proposed that explicitly considers the foreground pixels' group clustering priors of spatial coherence and temporal contiguity. Moreover, a pixel should be treated as a multi-channel signal: if a pixel matches its neighbours, all three RGB coefficients should match as well. Motivated by this observation, a multi-channel fused Lasso regularizer is developed to exploit the smoothness of multi-channel signals.

In the second part of the thesis, two human gesture recognition methods are presented to address temporal dynamics, which are crucial to the interpretation of observed gestures. In the first study, a gesture skeletal sequence is characterized as a trajectory on a Riemannian manifold, and a time-warping invariant metric on the manifold is proposed. Furthermore, a sparse coding scheme for skeletal trajectories is presented that explicitly considers the labelling information, with the aim of enforcing the discriminative power of the dictionary. In the second work, based on the observation that a gesture is a time series with distinctly defined phases, a low-rank matrix decomposition model is proposed to build temporal compositions of gestures. In this way, a more appropriate alignment of hidden states for a hidden Markov model can be achieved.
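The temporal-dynamics problem this abstract addresses (the same gesture performed at different speeds) is commonly illustrated with dynamic time warping. Below is a minimal sketch of standard DTW in plain Python; it is an illustrative baseline for speed-invariant sequence comparison, not the thesis's Riemannian time-warping invariant metric.

```python
# Minimal dynamic time warping (DTW): compare two gesture time series
# that differ in speed by finding the cheapest monotone alignment.
def dtw_distance(a, b):
    """DTW cost between two 1-D sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# The same "raise and lower" gesture, once fast and once at half speed:
fast = [0.0, 1.0, 2.0, 1.0, 0.0]
slow = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 0.0, 0.0]
```

Because DTW may repeat samples along the alignment path, the half-speed replay aligns with the fast version at zero cost, whereas a plain frame-by-frame distance would not.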
102

Reconhecimento de gestos usando segmentação de imagens dinâmicas de mãos baseada no modelo de mistura de gaussianas e cor de pele / Gesture recognizing using segmentation of dynamic hand image based on the mixture of Gaussians model and skin color

Hebert Luchetti Ribeiro 01 September 2006 (has links)
The purpose of this work is to develop a methodology able to recognize hand gestures from dynamic images in order to interact with systems. After image capture, segmentation takes place: pixels belonging to the hands are separated from the background by background subtraction and skin-color filtering. Image preprocessing can be applied before edge detection. The recognition algorithm uses edges only and is therefore fast enough for real-time operation. The largest blob in the segmented image is considered to be the hand region. The detected regions are analyzed to determine the position and orientation of the hand in each frame. The position and other attributes of the hands are tracked frame by frame to distinguish hand movement from the background and from other moving objects, and to extract motion information for the recognition of dynamic gestures. Based on the collected position, movement and posture cues are computed to recognize a meaningful gesture.
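The two segmentation cues this abstract combines, a skin-color filter and background subtraction, can be sketched in a few lines of NumPy. The RGB thresholds below are common heuristic values chosen for illustration, not the thesis's trained mixture-of-Gaussians model.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-colored pixels in an H x W x 3 uint8 image
    (simple rule-based RGB thresholds, illustrative only)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r - g > 15) & (r > b)

def foreground_mask(frame, background, thresh=30):
    """Pixels whose summed RGB difference from a static background
    model exceeds thresh."""
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=-1)
    return diff > thresh

def hand_mask(frame, background):
    """Combine both cues: a hand pixel is moving AND skin-colored."""
    return skin_mask(frame) & foreground_mask(frame, background)

# Synthetic example: gray background, a skin-colored patch appears.
bg = np.full((40, 40, 3), 120, dtype=np.uint8)
frame = bg.copy()
frame[10:20, 10:20] = (200, 140, 110)   # plausible skin tone
mask = hand_mask(frame, bg)
```

In the full pipeline described above, the largest connected blob of `mask` would then be taken as the hand region.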
104

Human computer interface based on hand gesture recognition

Bernard, Arnaud Jean Marc 24 August 2010 (has links)
With the improvement of multimedia technologies such as broadband-enabled HDTV, video on demand and internet TV, the computer and the TV are merging into a single device. Moreover, the previously cited technologies, as well as DVD and Blu-ray, can provide menu navigation and interactive content. The growing interest in video conferencing has led to the integration of webcams into different devices such as laptops, cell phones and even TV sets. Our approach is to use an embedded webcam directly to control a TV set remotely using hand gestures. Using specific gestures, a user is able to control the TV. A dedicated interface can then be used to select a TV channel, adjust the volume or browse videos from an online streaming server. This approach raises several challenges. The first is the use of a simple webcam, which leads to a vision-based system: from the single webcam, we need to detect the hand and identify its gesture or trajectory. A TV set is usually installed in a living room, which implies constraints such as a potentially moving background and luminance changes. These issues are discussed, as well as the methods developed to resolve them. Video browsing is one example of the use of gesture recognition; to illustrate another application, we developed a simple game controlled by hand gestures. The emergence of 3D TVs is enabling the development of 3D video conferencing, so we also consider the use of a stereo camera to recognize hand gestures.
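One simple way a tracked hand trajectory could be mapped to a remote-control command (say, channel up/down or volume left/right) is to classify the dominant direction of the hand's centroid track. The sketch below is a hypothetical illustration of that idea, not the thesis's actual recognizer.

```python
import numpy as np

def classify_swipe(track):
    """Classify a hand trajectory as a swipe direction.
    track: (N, 2) sequence of (x, y) hand centroids over time."""
    track = np.asarray(track, dtype=float)
    dx, dy = track[-1] - track[0]          # net displacement
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"      # image y grows downwards

# A hand moving mostly rightwards across the frame:
gesture = [(10, 50), (30, 52), (60, 49), (90, 51)]
```

A TV interface could then bind `"left"`/`"right"` to channel switching and `"up"`/`"down"` to volume, for example.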
105

Reconnaissance des actions humaines à partir d'une séquence vidéo / Recognition of Human Actions from a Video Sequence

Touati, Redha 12 1900 (has links)
The work done in this master's thesis presents a new system for the recognition of human actions from a video sequence. The system takes as input a video sequence captured by a static camera. A binary segmentation of the video sequence is first performed by a learning algorithm in order to detect and extract the people from the background. To recognize an action, the system then exploits a set of prototypes generated by an MDS-based dimensionality reduction technique from two different viewpoints on the video sequence. This dimensionality reduction, applied per viewpoint, allows us to model each human action in the training base with a set of prototypes (assumed to be similar within each class) represented in a low-dimensional non-linear space. The prototypes extracted from the two viewpoints are fed to a K-NN classifier, which identifies the human action taking place in the video sequence. Experiments with our model on the Weizmann human action dataset provide interesting results compared to other state-of-the-art (and often more complicated) methods. These experiments show the sensitivity of the model to each viewpoint and its effectiveness in recognizing the different actions, with a variable but satisfactory recognition rate; the fusion of the two viewpoints then achieves a high recognition rate.
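The pipeline this abstract describes, a distance-based embedding followed by nearest-neighbour classification, can be sketched with classical MDS and 1-NN. This is a generic illustration on toy data under invented labels ("walk", "wave"), not the thesis's exact algorithm or dataset.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed n points, known only via an (n, n) distance matrix D,
    into `dim` dimensions (classical/Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]         # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

def nn_classify(train_x, train_y, query):
    """Label of the training point closest to `query` (1-NN)."""
    d = np.linalg.norm(train_x - query, axis=1)
    return train_y[int(np.argmin(d))]

# Two well-separated clusters of "action prototypes", given as distances.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(D, dim=2)
labels = ["walk", "walk", "wave", "wave"]
```

On exact Euclidean distances, classical MDS recovers the configuration up to rotation, so pairwise distances in `emb` match `D`, and a query prototype lands next to its own class.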
106

Tactile and Touchless Sensors Printed on Flexible Textile Substrates for Gesture Recognition

Ferri Pascual, Josué 23 October 2020 (has links)
The main objective of this thesis is the development of new sensors and actuators using printed electronics technology. Conductive, semiconducting and dielectric polymeric materials are applied to flexible and/or elastic substrates. With suitable designs and application processes, it is possible to manufacture sensors capable of interacting with the environment, so that specific sensing functionalities can be incorporated into substrates such as textile fabrics. Additionally, electronic systems capable of processing and recording the acquired data must be included. In the development of these sensors and actuators, the physical properties of the different materials are combined precisely: multilayer structures are designed in which the properties of some materials interact with those of others. The result is a sensor capable of capturing physical variations in the environment and converting them into signals that can be processed and finally transformed into data.

First, a tactile sensor printed on a textile substrate for 2D gesture recognition was developed. This sensor consists of a matrix of small capacitive sensors based on a capacitor-type structure, designed so that if a finger or another object with capacitive properties gets close enough, the sensor's behaviour varies measurably. The small sensors are arranged in the matrix as in a grid, each with a position determined by a row and a column. The capacitance of each small sensor is measured periodically to assess whether significant variations have occurred; for this, the sensor capacitance is converted into a value that is subsequently processed digitally. To improve the effectiveness of the developed 2D touch sensors, a way of incorporating an actuator system was then studied, so that the user receives feedback that an order or action was recognized. To achieve this, the capacitive sensor grid was complemented with an electroluminescent display, also printed. The final prototype combines a 2D tactile sensor with an electroluminescent actuator on a printed textile substrate.

Next, a 3D gesture sensor was developed using a combination of sensors, likewise printed on a textile substrate. In this type of 3D sensor, a signal generating an electric field is emitted over the sensors by a transmission electrode located very close to them; the field is picked up by reception electrodes and converted into electrical signals. If a person places a hand within the emission area, the electric field lines are disturbed, because field lines are diverted to ground through the intrinsic conductivity of the human body. This disturbance affects the signals received by the electrodes, and the variations captured by all electrodes are processed together to determine the position and movement of the hand over the sensor surface. Finally, an improved 3D gesture sensor was developed which, like the previous one, allows contactless gesture detection but with an increased detection range. In addition to printed electronics, two other textile manufacturing technologies were evaluated. / Ferri Pascual, J. (2020). Tactile and Touchless Sensors Printed on Flexible Textile Substrates for Gesture Recognition [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/153075
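The periodic matrix scan this abstract describes (read every cell of a row-by-column capacitive grid, compare against its idle baseline, report a touch where the deviation is significant) can be sketched as follows. The values and threshold are invented for illustration; real readouts would come from a capacitance-to-digital converter.

```python
import numpy as np

def detect_touch(reading, baseline, thresh=5.0):
    """Return (row, col) of the strongest capacitance deviation in a
    grid scan, or None if no cell exceeds the threshold."""
    dev = np.abs(reading - baseline)
    if dev.max() < thresh:
        return None
    return tuple(int(i) for i in np.unravel_index(np.argmax(dev), dev.shape))

baseline = np.full((4, 4), 100.0)     # idle capacitance per cell (a.u.)
reading = baseline.copy()
reading[2, 1] += 12.0                 # finger approaching cell (2, 1)
```

Running `detect_touch` once per scan cycle gives the grid coordinate that a gesture recognizer, or the electroluminescent feedback layer, could react to.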
107

Počítačové vidění a detekce gest rukou a prstů / Computer vision and hand gestures detection and fingers tracking

Bravenec, Tomáš January 2019 (has links)
This master's thesis focuses on the detection and recognition of hand and finger gestures in both static images and video sequences. It summarizes several different approaches to the detection itself, together with their advantages and disadvantages. The thesis also includes the implementation of a cross-platform application, written in Python using the OpenCV and PyTorch libraries, which can display a selected image or play a video with the recognized gestures highlighted.
108

Serious Game para el aprendizaje de gestos estáticos del lenguaje de señas peruano mediante el uso de realidad virtual / Serious Game for Learning Static Gestures of Peruvian Sign Language Using Virtual Reality

Ramos Carrión, Cristopher Lizandro, Nureña Jara, Roberto Alonso 22 October 2021 (has links)
This project aims to develop a serious game for learning basic gestures of Peruvian sign language in virtual reality, aimed at non-deaf people and using the HTC Vive device. The game, which we name Sign Shooting, consists of 8 levels in which the player must learn 3 letter gestures each, followed by a final test of the knowledge learned. The game implements the Vive Hand Tracking SDK to detect the user's hands and their spatial features. With this information, we generated a dataset of the gestures to be learned, which we used to train a neural network model for sign recognition, validated with bias and variance metrics. To validate our proposal, users were asked to complete the game and then answer a survey divided into game experience and user learning. The results show an average user-experience score above 4 (out of a maximum of 5), and that over a full game session an average of 17 letter gestures (out of 24) are learned, meaning that users consider the game entertaining and immersive and that it meets its teaching objective. Finally, we conclude that the use of virtual reality encourages users to feel committed to the game and to pursue its goal.
109

Rozpoznání gest ruky v obrazu / Hand gesticulation recognition in image

Mráz, Stanislav January 2011 (has links)
This master's thesis deals with the recognition of simple static gestures for computer control. The first part of the work is devoted to a theoretical review of methods used for segmenting the hand from an image, followed by a description of methods for hand gesture classification. The second part is devoted to the choice of a suitable method for hand segmentation based on skin color and movement, and to methods for hand gesture classification. The last part describes the proposed system.
110

Analyse du geste dansé et retours visuels par modèles physiques : apport des qualités de mouvement à l'interaction avec le corps entier / Dance Gesture Analysis and Visual Feedback based on Physical Models : Contributions of Movement Qualities in Whole Body Interaction

Fdili Alaoui, Sarah 19 December 2012 (has links)
The thesis studies gesture in the context of human-computer interaction. It aims at creating new interaction paradigms that offer the user further expressive possibilities based on gesture. Dance theorists and practitioners use the term "movement qualities" (MQ) for a notion that conveys expressive content describing the way a gesture is performed; this notion has rarely been taken into consideration in the field of HCI. Our work draws on collaborations with the field of dance to explore the notion of movement qualities and to integrate it as an interaction modality.

The contributions of the thesis lie first in the formalization of the notion of movement qualities and the evaluation of its integration as an interaction modality in terms of user experience. We also provide computational tools for considering MQs in interactive systems, in terms of analysis, representation and gesture-control methods. On the representation level, our work has demonstrated that physical models based on mass-spring systems offer great opportunities for simulating dynamics related to MQs and for real-time gesture control. On the analysis level, we developed innovative approaches to automatic real-time recognition of movement qualities. Finally, we implemented a set of interaction techniques based on movement qualities, which we applied and evaluated in the context of dance pedagogy and performance.
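The kind of mass-spring model the abstract credits for real-time visual feedback can be sketched with a single damped mass on a spring, integrated with semi-implicit Euler. The parameters below are illustrative, not taken from the thesis; in the actual work, networks of such masses and springs drive the visualizations.

```python
def simulate(x0, v0, k=20.0, c=1.5, m=1.0, dt=0.01, steps=2000):
    """Final position of a damped mass-spring system started at (x0, v0).
    k: spring stiffness, c: viscous damping, m: mass, dt: time step."""
    x, v = x0, v0
    for _ in range(steps):
        a = (-k * x - c * v) / m      # Hooke's law + viscous damping
        v += a * dt                   # semi-implicit Euler: update v first...
        x += v * dt                   # ...then advance x with the new v
    return x

# Released from rest away from equilibrium, the damped spring settles to 0.
final = simulate(x0=1.0, v0=0.0)
```

Varying stiffness and damping in real time is what lets such a model render different movement qualities (e.g. sustained versus sudden) as visibly different dynamic behaviours.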
