101 |
Hand Gesture Recognition using mm-Wave RADAR Technology. Zhao, Yanhua. 24 July 2024 (has links)
Human-computer interaction has become part of our daily lives. RADAR stands out as a very promising sensor thanks to its small size, low power consumption, and affordability. Compared to other sensors such as cameras and LIDAR, RADAR works in a wide variety of environments and is not affected by lighting. Most importantly, there is no risk of infringing on users' privacy. Among the many types of RADAR, FMCW RADAR is used for gesture recognition because it can observe multiple targets and measure range, velocity and angle, while its hardware and signal processing remain relatively simple.
RADAR-based gesture recognition can be applied in a variety of domains. For example, for health and safety considerations, the use of RADAR-based gesture recognition systems can avoid physical contact and reduce the possibility of contamination. Similarly, in automotive applications, contactless control of certain functions, such as turning on the air conditioning, can improve the user experience and contribute to safer driving. There are still many challenges in implementing an artificial intelligence-based gesture recognition system using RADAR, such as interpreting data, collecting training data, optimising computational complexity and improving system robustness. This work will focus on addressing these challenges.
This thesis addresses key aspects of gesture recognition systems. From RADAR signal processing, machine learning models, data augmentation to multi-sensor systems, the challenges posed by real-world scenarios are tackled. This lays the foundation for a comprehensive deployment of gesture recognition systems for many practical applications.
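The range and velocity measurement mentioned in the abstract follows the classic FMCW processing chain: a frame of beat signals (chirps x samples) is turned into a range-Doppler map by an FFT over samples (range) and an FFT over chirps (velocity). The sketch below is illustrative only, with synthetic data; it is not code from the thesis.

```python
import numpy as np

def range_doppler_map(frame):
    """frame: complex array (n_chirps, n_samples) of FMCW beat signals."""
    rng_fft = np.fft.fft(frame, axis=1)                # range FFT per chirp
    rd = np.fft.fft(rng_fft, axis=0)                   # Doppler FFT per range bin
    return np.fft.fftshift(np.abs(rd), axes=0)         # centre zero velocity

# Synthetic single target: range bin 12, Doppler bin +5
n_chirps, n_samples = 64, 128
c = np.arange(n_chirps)[:, None]
s = np.arange(n_samples)[None, :]
frame = np.exp(2j * np.pi * (12 * s / n_samples + 5 * c / n_chirps))

rd = range_doppler_map(frame)
peak = np.unravel_index(np.argmax(rd), rd.shape)
print(peak)  # Doppler bin 37 (= 32 + 5 after the shift), range bin 12
```

A gesture then appears as a characteristic trace of such peaks over successive frames, which is what the learning models consume.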
|
102 |
Gestures in human-robot interaction / development of intuitive gesture vocabularies and robust gesture recognition. Bodiroža, Saša. 16 February 2017 (has links)
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. They can therefore be used effectively in human-robot interaction, and in human-machine interaction in general, as a way for a robot or a machine to infer meaning. For people to use gestures intuitively and to understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary lists which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on gesture recognition, i.e. the classification of body motion into discrete gesture classes using pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, with a focus on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. Following the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning: it can be trained from a small number of samples and deployed in real-life scenarios, lowering the effect of environmental constraints and gesture properties. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
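The DTW-based one-shot recognition described above can be sketched as a nearest-template classifier with a dynamic-time-warping distance. Function names and the toy gesture templates below are illustrative stand-ins, not the thesis code.

```python
# One template per class ("one-shot"); a query is labelled by the
# template with the smallest dynamic-time-warping distance.
def dtw(a, b):
    """DTW distance between two 1-D sequences of numbers."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(seq, templates):
    """templates: {label: one training sequence}."""
    return min(templates, key=lambda lbl: dtw(seq, templates[lbl]))

templates = {"wave": [0, 1, 0, 1, 0], "push": [0, 1, 2, 3, 4]}
# A time-warped 'push' (same shape, different speed) is still matched.
print(classify([0, 0, 1, 1, 2, 3, 3, 4], templates))  # push
```

Because the warping path absorbs speed differences, a single recorded sample per gesture can already cover a range of execution speeds, which is what makes the one-shot setting viable.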
|
103 |
Reconnaissance des actions humaines à partir d'une séquence vidéo / Recognition of human actions from a video sequence. Touati, Redha. 12 1900 (has links)
This master's thesis presents a new system for the recognition of human actions from a video sequence. The system takes as input a video sequence captured by a static camera. A binary segmentation of the video sequence is first performed by a learning algorithm in order to detect and extract the people from the background. To recognize an action, the system then exploits a set of prototypes generated by an MDS-based dimensionality reduction technique, applied from two different viewpoints on the video sequence. This reduction, carried out for each of the two viewpoints, models each human action of the training base with a set of prototypes (expected to be similar within each class) represented in a low-dimensional non-linear space. The prototypes extracted from the two viewpoints are fed to a $K$-NN classifier, which identifies the human action taking place in the video sequence. Experiments on the Weizmann human action dataset give interesting results compared with other state-of-the-art (and often more complicated) methods. They show, first, the sensitivity of the model to each viewpoint and its effectiveness in recognizing the different actions, with a variable but satisfactory recognition rate, and, second, that fusing the two viewpoints yields a high recognition rate.
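The prototype pipeline described in the abstract above (MDS-based dimensionality reduction followed by a K-NN vote) can be sketched with toy data. All names, data and dimensions below are illustrative stand-ins, not the thesis implementation.

```python
import numpy as np

def classical_mds(X, dim=2):
    """Embed rows of X into `dim` dimensions from pairwise distances."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    B = -0.5 * J @ D2 @ J                                # Gram matrix
    w, V = np.linalg.eigh(B)                             # ascending eigenvalues
    idx = np.argsort(w)[::-1][:dim]                      # keep the top `dim`
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

def knn_label(query_row, prototypes, labels, k=3):
    """Majority vote among the k nearest prototypes."""
    d = np.linalg.norm(prototypes - query_row, axis=1)
    vote = [labels[i] for i in np.argsort(d)[:k]]
    return max(set(vote), key=vote.count)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 10)),        # class "walk" descriptors
               rng.normal(3, 0.1, (5, 10))])       # class "jump" descriptors
labels = ["walk"] * 5 + ["jump"] * 5
Y = classical_mds(X, dim=2)                        # prototypes in 2-D
print(knn_label(Y[0], Y[1:], labels[1:], k=3))     # walk
```

In the thesis the vote is additionally fused across the two viewpoints; here a single viewpoint suffices to show the mechanics.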
|
104 |
Human motion detection and gesture recognition using computer vision methods. Liu, X. (Xin). 21 February 2019 (has links)
Abstract
Gestures are present in most daily human activities, and automatic gesture analysis is a significant research topic whose goal is to make interaction between humans and computers as natural as communication between humans. From a computer vision perspective, a gesture analysis system typically consists of two stages: a low-level stage for human motion detection and a high-level stage for understanding human gestures. This thesis therefore contributes to research on gesture analysis from two aspects: 1) detection: human motion segmentation from video sequences, and 2) understanding: gesture cue extraction and recognition.
In the first part of this thesis, two human motion detection methods based on sparse signal recovery are presented. In real videos, the foreground (human motion) pixels are usually not randomly distributed but grouped in both the spatial and temporal domains. Based on this observation, a spatio-temporal group sparsity recovery model is proposed which explicitly considers the foreground pixels' group clustering priors of spatial coherence and temporal contiguity. Moreover, a pixel should be treated as a multi-channel signal: if a pixel equals its neighbours, then all three of its RGB coefficients should be equal as well. Motivated by this observation, a multi-channel fused Lasso regularizer is developed to exploit the smoothness of multi-channel signals.
In the second part of this thesis, two human gesture recognition methods are presented that address the issue of temporal dynamics, which is crucial for interpreting the observed gestures. In the first study, a gesture skeletal sequence is characterized as a trajectory on a Riemannian manifold, and a time-warping-invariant metric on the manifold is proposed. Furthermore, a sparse coding scheme for skeletal trajectories is presented that explicitly uses the labelling information, with the aim of enforcing the discriminative power of the dictionary. The second study builds on the observation that a gesture is a time series with distinctly defined phases: a low-rank matrix decomposition model is proposed to build temporal compositions of gestures, yielding a more appropriate alignment of hidden states for a hidden Markov model.
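The "multi-channel" prior above, that a foreground pixel enters or leaves the solution with all three RGB coefficients jointly, is the behaviour of an l2,1 group-shrinkage step. The toy below is my own minimal illustration of that single step, not the thesis's full spatio-temporal model.

```python
import numpy as np

def group_soft_threshold(R, lam):
    """R: residual image (H, W, 3); shrink each pixel's RGB vector jointly."""
    norms = np.linalg.norm(R, axis=2, keepdims=True)      # per-pixel l2 norm
    scale = np.maximum(1 - lam / np.maximum(norms, 1e-12), 0)
    return R * scale                                      # joint keep or kill

background = np.full((4, 4, 3), 0.5)
frame = background.copy()
frame[1, 2] = [0.9, 0.1, 0.2]          # a "moving" foreground pixel
F = group_soft_threshold(frame - background, lam=0.3)
mask = np.linalg.norm(F, axis=2) > 0   # recovered foreground support
print(mask.sum())  # 1
```

Because the shrinkage acts on the pixel's channel vector as a group, no pixel can claim foreground status in one colour channel only, which is exactly the smoothness across channels the regularizer is after.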
|
105 |
Reconhecimento de gestos usando segmentação de imagens dinâmicas de mãos baseada no modelo de mistura de gaussianas e cor de pele / Gesture recognizing using segmentation of dynamic hand image based on the mixture of Gaussians model and skin color. Ribeiro, Hebert Luchetti. 01 September 2006 (has links)
The purpose of this work is to develop a methodology able to recognize hand gestures from dynamic images in order to interact with systems. After image capture, segmentation takes place: pixels belonging to the hands are separated from the background by background subtraction and skin-color filtering. Image preprocessing can be applied before edge detection. The recognition algorithm uses edges only and is therefore fast enough for real time. The largest blob in the segmented image is taken as the hand region. The detected regions are analyzed to determine the position and orientation of the hand in each frame. The position and other attributes of the hands are tracked frame by frame to distinguish a hand movement from the background and from other moving objects, and to extract motion information for the recognition of dynamic gestures. From the collected positions, motion and posture cues are computed to recognize a meaningful gesture.
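The segmentation front end described above can be sketched as follows: a background-subtraction mask ANDed with a simple RGB skin rule, then the largest connected component taken as the hand region. The thresholds and the tiny test image are illustrative, not the thesis's values.

```python
import numpy as np
from collections import deque

def skin_mask(img):
    """Crude RGB skin rule (illustrative thresholds)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def largest_blob(mask):
    """Pixels of the 4-connected component with maximum area (BFS)."""
    seen = np.zeros_like(mask, dtype=bool)
    best, (h, w) = [], mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        comp, q = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(comp) > len(best):
            best = comp
    return best

bg = np.zeros((6, 6, 3), np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = [200, 120, 80]   # 2x2 "hand" patch
frame[4, 4] = [200, 120, 80]       # 1-pixel noise speck
moving = np.abs(frame.astype(int) - bg.astype(int)).sum(-1) > 30
hand = largest_blob(skin_mask(frame) & moving)
print(len(hand))  # 4
```

Keeping only the largest blob is what discards skin-coloured specks and other small moving objects before the contour-based recognition step.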
|
107 |
Human computer interface based on hand gesture recognition. Bernard, Arnaud Jean Marc. 24 August 2010 (has links)
With the improvement of multimedia technologies such as broadband-enabled HDTV, video on demand and internet TV, the computer and the TV are merging into a single device. Moreover, the technologies cited above, as well as DVD and Blu-ray, can provide menu navigation and interactive content.
The growing interest in video conferencing has led to the integration of the webcam into devices such as laptops, cell phones and even TV sets. Our approach is to use an embedded webcam directly to control a TV set remotely with hand gestures. Using specific gestures, a user can control the TV; a dedicated interface can then be used to select a TV channel, adjust the volume or browse videos from an online streaming server.
This approach raises several challenges. The first is the use of a single webcam, which makes this a vision-based system: from that one webcam we must detect the hand and identify its gesture or trajectory. A TV set is usually installed in a living room, which implies constraints such as a potentially moving background and changing illumination. These issues, and the methods developed to resolve them, are discussed. Video browsing is one example of the use of gesture recognition; to illustrate another application, we developed a simple game controlled by hand gestures.
The emergence of 3D TVs is enabling the development of 3D video conferencing. We therefore also consider the use of a stereo camera to recognize hand gestures.
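Trajectory-based control of the kind described above can be sketched by classifying a track of hand centroids into a swipe command. The gesture set, command names and thresholds below are my own illustrative choices, not the thesis's.

```python
def classify_swipe(track, min_travel=50):
    """track: list of (x, y) hand centroids across frames."""
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if abs(dx) < min_travel or abs(dx) < 2 * abs(dy):
        return "none"            # too short, or too diagonal to be a swipe
    return "channel_up" if dx > 0 else "channel_down"

print(classify_swipe([(10, 100), (60, 102), (140, 98)]))  # channel_up
print(classify_swipe([(10, 100), (12, 101)]))             # none
```

The minimum-travel and direction-dominance tests are what make the control robust to the living-room constraints mentioned above: small jitter from a moving background never reaches a command.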
|
109 |
Počítačové vidění a detekce gest rukou a prstů / Computer vision and hand gesture detection and finger tracking. Bravenec, Tomáš. January 2019 (has links)
This master's thesis focuses on the detection and recognition of hand and finger gestures in both static images and video sequences. It summarizes several different approaches to the detection itself, together with their advantages and disadvantages. The thesis also includes a cross-platform application written in Python using the OpenCV and PyTorch libraries, which can display a selected image or play a video with the recognized gestures highlighted.
|
110 |
Serious Game para el aprendizaje de gestos estáticos del lenguaje de señas peruano mediante el uso de realidad virtual / Serious game for learning static gestures of Peruvian Sign Language using virtual reality. Ramos Carrión, Cristopher Lizandro; Nureña Jara, Roberto Alonso. 22 October 2021 (has links)
This project aims to develop a serious game for teaching basic gestures of Peruvian Sign Language in virtual reality to hearing people, using the HTC Vive device. The game, which we named Sign Shooting, consists of 8 levels in which the player must learn 3 letter gestures per level, followed by a final test of the learned material. The game implements the Vive Hand Tracking SDK to detect the user's hands and their spatial features.
With this information, we generated a dataset of the gestures to be learned, which we used to train a neural network model for sign recognition, validated with bias and variance metrics. To validate our proposal, users were asked to complete the game and then answer a survey covering game experience and learning. The results show an average user-experience score above 4 (out of a maximum of 5) and that, over a full game session, an average of 17 letter gestures (out of 24) are learned, meaning that users find the game entertaining and immersive and that it meets its teaching objective. Finally, we conclude that the use of virtual reality encourages users to feel committed to the game and to pursue its goal. / Tesis
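The recognition step described above, a classifier over hand-keypoint features with a train/validation split checked for bias and variance, can be sketched on synthetic data. Everything below (the letter set, the nearest-centroid classifier standing in for the neural network, the data) is an illustrative toy, not the project's model.

```python
import numpy as np

rng = np.random.default_rng(1)
letters = ["A", "B", "C"]
centers = rng.normal(0, 5, (3, 63))          # 21 keypoints x 3 coords per gesture
X = np.vstack([c + rng.normal(0, 0.5, (20, 63)) for c in centers])
y = np.repeat(np.arange(3), 20)
idx = rng.permutation(60)
tr, va = idx[:45], idx[45:]                  # train / validation split

# Nearest-centroid classifier fitted on the training split only.
centroids = np.vstack([X[tr][y[tr] == k].mean(0) for k in range(3)])

def predict(Z):
    d = ((Z[:, None, :] - centroids[None]) ** 2).sum(-1)
    return d.argmin(1)

train_acc = (predict(X[tr]) == y[tr]).mean()
val_acc = (predict(X[va]) == y[va]).mean()
# Low train accuracy suggests bias; a large train/validation gap suggests variance.
print(train_acc, val_acc)
```

With well-separated synthetic gestures both accuracies are near perfect; on real keypoint data the gap between the two numbers is what the bias/variance validation above inspects.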
|