About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Hand Tracking by Fusion of Color and a Range Sensor

Sen, Abhishek Unknown Date
No description available.
2

Hand tracking and hand gesture recognition in real time

Γονιδάκης, Παναγιώτης 07 May 2015 (has links)
With the rapid advance of technology in recent years, multimedia devices have become ever "smarter", and all of them require real-time interaction with the user. The field of human-computer interaction (HCI) has moved well beyond the era when the mouse and keyboard were the only tools for communicating with a computer. One of the most interesting and fastest-growing areas is the use of hand gestures to interact with a smart device. This thesis proposes an automatic system that lets a user communicate with a multimedia device, for example a television, through hand gestures in real time and under realistic conditions. Popular algorithms from computer vision and pattern recognition are presented and tested, and some of them are incorporated into the proposed system, which can track a user's hand and recognise the corresponding gesture. The current implementation was written in Matlab® (2014b) and constitutes a first step toward a real-time implementation.
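The abstract does not name the specific algorithms the system finally adopted, and the thesis implementation was in Matlab, so the fragment below is only a hedged illustration, in Python with OpenCV, of a typical first stage of such a real-time pipeline: skin-colour segmentation to propose a hand region. The YCrCb thresholds are common heuristics, not values taken from the thesis.

```python
import cv2
import numpy as np

def detect_hand_candidate(frame_bgr):
    """Return the bounding box of the largest skin-coloured blob as a rough hand hypothesis."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used skin range in the Cr/Cb channels (assumed, not from the thesis)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Remove speckle noise before extracting contours
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)  # largest blob as the hand candidate
    return cv2.boundingRect(hand)              # (x, y, w, h)
```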
3

Independent hand-tracking from a single two-dimensional view and its application to South African sign language recognition

Achmed, Imran January 2014 (has links)
Philosophiae Doctor - PhD / Hand motion provides a natural way of interaction that allows humans to interact not only with the environment but also with each other. The effectiveness and accuracy of hand tracking are fundamental to the recognition of sign language, and any inconsistency in hand tracking results in a breakdown of sign language communication. Hands are articulated objects, which complicates tracking them. In sign language communication, the tracking of a hand is often challenged by occlusion from the other hand, other body parts and the environment in which it is being tracked. The thesis investigates whether a single framework can be developed to track an individual's hands independently, from a single 2D camera, in constrained and unconstrained environments, without the need for any special device. The framework consists of a three-phase strategy: detection, tracking and learning. The detection phase validates whether the object being tracked is a hand, using extended local binary patterns and random forests. The tracking phase tracks the hands independently by extending a novel data-association technique. The learning phase exploits contextual features, using the scale-invariant feature transform (SIFT) algorithm and the fast library for approximate nearest neighbours (FLANN), to assist tracking and to recover the hands from any form of tracking failure. The framework was evaluated on South African Sign Language phrases that use a single hand, both hands without occlusion, and both hands with occlusion, performed by 20 individuals in constrained and unconstrained environments. The experiments revealed that integrating all three phases into a single framework is suitable for tracking hands in both constrained and unconstrained environments, achieving high average accuracies of 82.08% and 79.83% respectively.
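To make the learning phase concrete: the abstract names SIFT features matched with FLANN to recover the hand after tracking failure. The sketch below shows one plausible arrangement of those two components; the matcher parameters and the use of Lowe's ratio test are assumptions for illustration, not details from the thesis.

```python
import cv2

sift = cv2.SIFT_create()
# KD-tree index (algorithm=1) with assumed parameters, not values from the thesis
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})

def learn_hand(hand_patch_gray):
    """Store keypoints and descriptors of a verified hand region for later recovery."""
    return sift.detectAndCompute(hand_patch_gray, None)

def recover_hand(frame_gray, learned_desc, ratio=0.7):
    """After tracking failure, find frame locations matching the learned hand."""
    kps, desc = sift.detectAndCompute(frame_gray, None)
    if desc is None or learned_desc is None:
        return []
    matches = flann.knnMatch(learned_desc, desc, k=2)
    # Keep only confident matches (Lowe's ratio test, assumed here)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return [kps[m.trainIdx].pt for m in good]  # candidate hand locations
```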
4

Hand shape estimation for South African sign language

Li, Pei January 2012 (has links)
Magister Scientiae - MSc / Hand shape recognition is a pivotal part of any system that attempts to implement sign language recognition. This thesis presents a novel system that recognises hand shapes from a single camera view in 2D. By mapping the recognised hand shape from 2D to 3D, it is possible to obtain 3D coordinates for each of the joints within the hand, using the kinematics embedded in a 3D hand avatar, and to smooth the transition in 3D space between any given hand shapes. The novelty of the system is that it does not require a hand pose to be recognised at every frame; rather, hand shapes are detected at a given step size. This architecture allows for a more efficient system with better accuracy than related systems. Moreover, a real-time hand tracking strategy was developed that works efficiently for any skin tone and against complex backgrounds.
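A minimal sketch of the "step size" idea described above: the hand shape is recognised only at keyframes, and the avatar's joint angles are blended in between. Plain linear interpolation is an assumption for illustration; the thesis smooths transitions through the 3D avatar's kinematics, whose exact scheme the abstract does not give.

```python
import numpy as np

def interpolate_joint_angles(angles_a, angles_b, step_size):
    """Yield per-frame joint-angle vectors between two recognised key hand shapes.

    angles_a, angles_b: joint-angle vectors at consecutive recognised keyframes.
    step_size: number of frames between recognitions (the thesis's "step size").
    """
    a, b = np.asarray(angles_a, dtype=float), np.asarray(angles_b, dtype=float)
    for i in range(1, step_size + 1):
        t = i / step_size
        yield (1.0 - t) * a + t * b  # blend toward the next recognised shape
```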
5

Monocular hand motion tracking and gesture recognition

Ben Henia, Ouissem 12 April 2012 (has links)
Hand gestures play a fundamental role in everyday human communication, and their use has become an important part of human-computer interaction over the last two decades. Building a fast and effective vision-based hand motion tracker is challenging, owing to the high dimensionality of the pose space, ambiguities due to occlusion, the lack of visible surface texture and significant appearance variations due to shading. This thesis explores two approaches to monocular hand tracking. In the first, a parametric 3D hand model is used: tracking is formulated as an optimization task in which a dissimilarity function between the projection of the articulated hand model and the observed image features is minimized by a two-step iterative algorithm, and two such dissimilarity functions are proposed. The second is a data-driven method that tracks hand gestures and animates a 3D hand model by exploiting a database of hand gestures represented as 3D point clouds. To track a large number of hand poses with as small a database as possible, the hand gestures are classified using principal component analysis (PCA); applied to each point cloud, PCA produces a representation of the hand pose that is independent of its position and orientation in 3D space. To explore the database quickly and efficiently, a comparison function based on the 3D distance transform is used. Experimental results on synthetic and real data demonstrate the potential of both methods.
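The PCA normalisation the abstract describes, which makes each point-cloud pose representation independent of 3D position and orientation, can be sketched in a few lines of numpy; the implementation details below are assumed, not quoted from the thesis.

```python
import numpy as np

def pca_normalize(points):
    """Center an (N, 3) hand point cloud and rotate it into its principal axes.

    The result is invariant to the hand's 3D position (removed by centering)
    and orientation (removed by aligning with the PCA axes), so poses can be
    compared directly across the gesture database.
    """
    centered = points - points.mean(axis=0)            # remove translation
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T                             # remove orientation
```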
6

Automatic recognition of American sign language classifiers

Zafrulla, Zahoor 08 June 2015 (has links)
Automatically recognizing classifier-based grammatical structures of American Sign Language (ASL) is a challenging problem. Classifiers in ASL use surrogate hand shapes for people or "classes" of objects and provide information about their location, movement and appearance. In the past, researchers have focused on recognition of fingerspelling, isolated signs, facial expressions and interrogative words like WH-questions (e.g. Who, What, Where, and When). Challenging problems such as recognition of ASL sentences and classifier-based grammatical structures remain relatively unexplored in the field of ASL recognition. One application of classifier recognition is educational games that help young deaf children acquire language skills. Previous work developed CopyCat, an educational ASL game that requires children to engage in a progressively more difficult expressive signing task as they advance through the game. We have shown that by leveraging context we can use verification, in place of recognition, to boost machine performance in determining whether the signed responses in an expressive signing task, as in the CopyCat game, are correct or incorrect. We have demonstrated that a machine verifier's ability to identify the boundaries of the signs can be improved by a novel two-pass technique that combines the signed input in both forward and reverse directions. Additionally, we have shown that we can reduce CopyCat's dependency on custom-manufactured hardware by using an off-the-shelf Microsoft Kinect depth camera to achieve similar verification performance. Finally, we show how we can extend our ability to recognize sign language by leveraging depth maps to develop a method, using improved hand detection and hand-shape classification, to recognize selected classifier-based grammatical structures of ASL.
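The abstract does not say how the forward and reverse passes are combined, so the following is only a speculative sketch of the general idea: run a (hypothetical) single-pass boundary detector over the input and over its reversal, map the reverse-pass indices back into forward time, and merge the paired estimates. The averaging rule is an assumption.

```python
def two_pass_boundaries(frames, detect_boundaries):
    """Combine sign-boundary estimates from forward and reversed passes.

    frames: sequence of feature frames for one signed response.
    detect_boundaries: hypothetical single-pass detector returning sorted
    frame indices of sign boundaries.
    """
    n = len(frames)
    fwd = detect_boundaries(frames)                 # boundaries on the forward pass
    rev = detect_boundaries(frames[::-1])           # boundaries on the reversed input
    rev_mapped = sorted(n - 1 - b for b in rev)     # map reverse indices back to forward time
    # Merge paired estimates (assumed: simple averaging of aligned boundaries)
    return [round((f + r) / 2) for f, r in zip(fwd, rev_mapped)]
```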
7

Model-based 3D hand pose estimation from monocular video

La Gorce, Martin de 14 December 2009 (has links)
This thesis presents two methods for automatically recovering a full description of the 3D motion of a hand from a monocular video sequence. Using the information provided by the video, the aim is to determine the full set of kinematic parameters required to describe the pose of the hand's skeleton: the angle of each joint plus the global position and orientation of the wrist. The problem is extremely challenging: the hand has many degrees of freedom, and self-occlusions are ubiquitous, making it difficult to estimate the configuration of partially or fully hidden parts. The thesis introduces two novel methods, of increasing complexity, that improve on the state of the art in monocular hand tracking. Both are model-based: a hand model is fitted to the image, guided by an objective function that quantifies how well the model's projection, under the current parameters, aligns with the observed image. Fitting is performed by iterative quasi-Newton gradient-descent refinement that minimizes this objective. The two methods differ mainly in the choice of hand model and cost function. The first relies on a hand model made of ellipsoids and a discrepancy measure based on the global colour distributions of the hand and the background. The second uses a textured and shaded triangulated surface model and a robust pixel-wise distance between the synthetic and observed images as the discrepancy measure. When computing the gradient of the discrepancy measure, particular attention is paid to terms arising from changes in surface visibility near self-occlusion boundaries, terms that are neglected in existing formulations. Neither method runs in real time, which rules out interactive applications for now; increases in computing power combined with further improvements to the methods may eventually make real-time operation attainable.
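The fitting loop common to both methods can be sketched as follows. Here `render_model` and the squared-pixel-difference cost are hypothetical stand-ins, since the thesis's two actual cost functions (colour-distribution and robust pixel-wise measures) are not reproduced; BFGS is one concrete quasi-Newton choice.

```python
import numpy as np
from scipy.optimize import minimize

def fit_pose(initial_pose, observed_image, render_model):
    """Adjust pose parameters (joint angles + global wrist position/orientation)
    so the rendered hand model best matches the observed image."""

    def discrepancy(pose):
        synthetic = render_model(pose)                    # project the model into the image
        return np.sum((synthetic - observed_image) ** 2)  # placeholder cost, not the thesis's

    # Quasi-Newton refinement of the pose, as the abstract describes
    result = minimize(discrepancy, initial_pose, method="BFGS")
    return result.x
```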
8

Real-time 2D Static Hand Gesture Recognition and 2D Hand Tracking for Human-Computer Interaction

Popov, Pavel Alexandrovich 11 December 2020 (has links)
The topic of this thesis is hand gesture recognition and hand tracking for user-interface applications. Three systems were produced, as well as datasets for recognition and tracking, along with UI applications to prove the concept of the technology. These represent significant contributions to resolving the hand recognition and tracking problems for 2D systems. The systems were designed to work in video-only contexts, be computationally light, provide recognition and tracking of the user's hand, and operate without user-driven fine-tuning and calibration. Existing systems require user calibration, use depth sensors and do not work in video-only contexts, or are computationally heavy, requiring a GPU to run live. A two-step static hand gesture recognition system was created which can recognize three different gestures in real time: a detection step detects hand gestures using machine-learning models, and a validation step rejects false positives. The gesture recognition system was combined with hand tracking, so that it recognizes and then tracks a user's hand in video in an unconstrained setting. The tracking uses two collaborative strategies: a contour-tracking strategy guides a minimization-based template-tracking strategy and makes it real-time, robust and recoverable, while the template tracking provides stable input for UI applications. Lastly, an improved static gesture recognition system addresses the drawbacks of stratified colour sampling of the detection boxes in the detection step: it uses the entire presented colour range and clusters it into constituent colour modes which are then used for segmentation, improving the overall gesture recognition rates. One dataset was produced for static hand gesture recognition, allowing the comparison of multiple machine-learning strategies, including deep learning. Another was produced for hand tracking, providing a challenging series of user scenarios to test the gesture recognition and hand tracking system. Both datasets are significantly larger than other available datasets. The hand tracking algorithm was used to create a mouse-cursor control application, a paint application for Android mobile devices, and an FPS video game controller; the latter in particular demonstrates how the collaborating hand tracking can fulfill the demanding nature of responsive aiming and movement controls.
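A hedged sketch of the improved colour-handling step described above: rather than stratified sampling, cluster the colours sampled from the detection box into a few constituent modes and segment the hand with them. The use of k-means and the parameter values are assumptions; the abstract does not name the clustering method.

```python
import numpy as np
from sklearn.cluster import KMeans

def colour_modes(detection_box_pixels, k=4):
    """Cluster (N, 3) sampled colours from a detection box into k colour modes."""
    km = KMeans(n_clusters=k, n_init=10).fit(detection_box_pixels)
    return km.cluster_centers_

def segment_by_modes(pixels, modes, threshold=30.0):
    """Mark pixels whose distance to the nearest colour mode is small enough.

    pixels: (N, 3) colours; modes: (k, 3) cluster centres; threshold is an
    assumed tolerance in colour space.
    """
    d = np.linalg.norm(pixels[:, None, :] - modes[None, :, :], axis=2)  # (N, k)
    return d.min(axis=1) < threshold
```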
9

Moving Air: A Comparative Study of Physical Controllers and Hand Tracking in Virtual Reality

Konráð Albertsson, Ólafur January 2023 (has links)
Onboarding is a procedure used to familiarize new employees with the details of a new industry, and VR and AR training are increasingly being used for role-specific experiences in the medical field. This study explores the affordances and challenges of a task-specific shaped controller versus a regular controller in a VR training scenario. To explore this comparison, 26 participants performed a task, based on a real-life task from the metals industry, using an Oculus VR headset. The task entailed using a large ladle to move "liquid aluminum", represented by solid spheres, from the "lava" to a container referred to as a crucible. The task was performed twice, once using hand tracking to control the virtual ladle and once using a physical stick, with the order of the two conditions randomized. User experience was measured by an immersion questionnaire and a post-study interview, and participants' performance in VR was captured and analyzed against an optimal metric. Results showed the user experience of the physical controller and of hand tracking to be similar, but in task performance, measured by average completion time and distance traveled relative to expert-certified metrics, participants performed better using hand tracking alone.
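The two performance measures named above, completion time and distance traveled relative to an expert-certified optimum, are straightforward to compute from logged positions; the sketch below assumes a simple per-frame log format for illustration, since the study's capture format is not given in the abstract.

```python
import numpy as np

def task_metrics(timestamps, positions):
    """Compute completion time and path length from a per-frame log.

    timestamps: (N,) seconds per frame; positions: (N, 3) ladle positions.
    """
    completion_time = timestamps[-1] - timestamps[0]
    # Sum of frame-to-frame displacements = total distance traveled
    distance = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    return completion_time, distance

def relative_to_optimal(metric, optimal):
    """Ratio against the expert-certified optimum; 1.0 matches it exactly."""
    return metric / optimal
```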
10

Finger-based navigation in virtual 3D environments: An evaluation of finger control as an alternative to the keyboard

Grindebäck, Max January 2023 (has links)
Navigation in virtual 3D environments has been possible for many years and typically occurs in contexts such as games and 3D modeling. On a personal computer, a mouse and keyboard are almost always used. The mouse has proven easy to handle for rotating the view and is not the focus of this study. The keyboard, however, which controls the position of the view, could possibly be replaced with something better. Usually the keys W, A, S and D are used for movement, and two more keys are needed to make it possible to "fly" up and down. Six different keys for controlling movement in three dimensions can be hard to learn. Although experienced users handle this well, a more natural form of control could be easier for beginners, and perhaps also for the experienced. The limited number of keys also does not allow fine adjustment of direction. This study proposes an alternative form of 3D navigation in which the user steers with a finger. A Leap Motion camera lies on the desk below to measure the finger's position and translates it into a vector that controls the speed and direction of the view. This is intended to be a more natural way of steering, since humans have such good control over their own bodies. In addition, the speed can be adjusted by moving the finger over longer or shorter distances. With keyboard control, adjusting the speed is not possible; the exception is when the user can hold down a key to run, which gives a choice of two speeds. The finger control was tested and compared directly against the keyboard in a number of experiments. The tests show that the keyboard is faster and generally produces fewer mistakes. With finger control, the distances traveled are often shorter, especially when more precision is required, though this may be due to the lower speed participants used with the finger. Each input method was tested only seven times; during this period, finger control improved considerably more between attempts than the keyboard did, so there is reason to believe that finger control improves with more practice. To obtain reliable results, a longer study would be needed in which participants really have time to learn to steer with the finger. During development of the finger control, the author became faster with it than with the keyboard. This is a further indication that finger control has potential that the participants never had time to reach in this preliminary study, and that further experiments are needed.
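The control mapping described above, a finger position translated into a vector that sets the view's speed and direction, can be sketched as follows. The gain and dead-zone values are assumed for illustration; the thesis does not publish its parameters in the abstract.

```python
import numpy as np

def finger_to_velocity(finger_pos, anchor_pos, gain=2.0, dead_zone=0.01):
    """Map a Leap Motion finger position (metres) to a view velocity vector (m/s).

    The finger's offset from a neutral anchor point gives both direction and
    speed, so longer pulls mean faster movement, unlike the keyboard's fixed speeds.
    """
    offset = np.asarray(finger_pos) - np.asarray(anchor_pos)
    if np.linalg.norm(offset) < dead_zone:
        return np.zeros(3)   # ignore jitter around the neutral point
    return gain * offset     # continuous speed and direction from one input
```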
