1

The Multimodal Interaction through the Design of Data Glove

Han, Bote January 2015 (has links)
In this thesis, we propose and present a multimodal interaction system that provides a natural way for human-computer interaction. The core idea is to let users interact with the machine naturally by recognizing their gestures through a wearable device. To achieve this goal, we have implemented a system comprising both a hardware solution and gesture recognition approaches. For the hardware, we designed and implemented a data-glove-based interaction device with multiple kinds of sensors to detect finger formations, touch commands, and hand postures. We also adapted and implemented two gesture recognition approaches, one based on a support vector machine (SVM) and one on a lookup table. The detailed design is presented in this thesis. In the end, the system supports over 30 kinds of touch commands, 18 kinds of finger formations, and 10 kinds of hand postures, as well as combinations of finger formation and hand posture, with a recognition rate of 86.67% and accurate touch command detection. We also evaluated the system in terms of subjective user experience.
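As a rough illustration of the SVM branch of such a recognizer, here is a minimal sketch in Python; the sensor layout, feature dimensions, and class labels are assumptions for illustration, not the thesis's actual design:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature layout: five flex-sensor readings plus a 3-axis
# orientation estimate -> an 8-dimensional feature vector per sample.
X_train = np.random.rand(200, 8)         # placeholder recorded glove samples
y_train = np.random.randint(0, 18, 200)  # 18 finger-formation classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)

def recognize(sample):
    """Classify one glove reading into a finger-formation label."""
    return clf.predict(np.asarray(sample).reshape(1, -1))[0]
```

A lookup-table recognizer for the discrete touch commands would replace the classifier with a direct mapping from quantized sensor states to command labels.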
2

Model-Based Segmentation and Recognition of Continuous Gestures

LI, HONG 27 September 2010 (has links)
As one of the most active research topics in computer vision, automatic human gesture recognition is receiving increasing attention, driven by promising applications ranging from surveillance and human monitoring to human-computer interfaces (HCI) and motion analysis. Segmentation and recognition of human dynamic gestures from continuous video streams is a highly challenging task due to spatio-temporal variation and endpoint localization issues. In this thesis, we propose a Motion Signature, a 3D spatio-temporal surface based on the evolution of a contour over time, to reliably represent dynamic motion. A Gesture Model is then constructed from a set of mean and variance images of Motion Signatures in a multi-scale manner, which not only accommodates a wide range of spatio-temporal variation but also has the advantage of requiring only a small amount of training data. Three approaches have been proposed to simultaneously segment and recognize gestures from continuous streams; they differ mainly in the way the endpoints of gestures are located. While the first approach adopts an explicit multi-scale search strategy to find the endpoints of gestures, the other two employ Dynamic Programming (DP) to handle this issue. All three methods are rooted in the idea that segmentation and recognition are two aspects of the same problem, and that the solution to either one leads to the solution of the other. This contrasts with most methods in the literature, which separate segmentation and recognition into two phases, performing segmentation before recognition by looking for abrupt changes in motion features. The performance of the methods has been evaluated and compared on two types of gestures: two-arm movements and single-hand movements. Experimental results show that all three methods achieved high recognition rates, ranging from 88% to 96% for upper-body gestures, with the last one outperforming the other two. The single-hand experiment also suggests that the proposed method has the potential to be applied to continuous sign language recognition. / Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2010-09-24 19:27:43.316
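As a hedged sketch of the DP idea, the snippet below performs a generic subsequence match of a gesture template against a continuous stream of per-frame feature vectors. The thesis actually matches mean/variance Gesture Models built from Motion Signatures; the Euclidean frame distance here is a placeholder. The point it illustrates is that one DP pass both locates the gesture's endpoints (segmentation) and scores the match (recognition):

```python
import numpy as np

def subsequence_match(template, stream):
    """Return (cost, end) of the best-matching span of `stream` against
    `template`; the start frame would be recovered by backtracking.

    The free first row lets a match begin at any stream frame, so the DP
    solves endpoint localization and matching simultaneously.
    """
    m, n = len(template), len(stream)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, :] = 0.0                       # match may start at any frame
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = np.linalg.norm(template[i - 1] - stream[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[m, 1:])) + 1  # best gesture end point
    return float(D[m, end]), end
```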
3

Robust Upper Body Pose Recognition in Unconstrained Environments Using Haar-Disparity

Chu, Cheng-Tse January 2008 (has links)
In this research, an approach is proposed for the robust tracking of upper body movement in unconstrained environments using a Haar-Disparity algorithm together with a novel 2D silhouette projection algorithm. A cascade of boosted Haar classifiers is used to identify human faces in video images, and a disparity map is then used to establish the 3D locations of the detected faces. Based on this information, anthropometric constraints are used to define a semi-spherical interaction space for upper body poses. This constrained region serves to prune the search space as well as to validate user poses. Haar-Disparity improves on traditional skin-manifold tracking by relaxing constraints on clothing, background, and illumination. The 2D silhouette projection algorithm provides three orthogonal views of the 3D objects, allowing upper limbs to be tracked in 2D space instead of manipulating noisy 3D data directly. This thesis also proposes a complete optimal set of interactions for very large interactive displays. Experimental evaluation covers the performance of alternative camera positions and orientations, accuracy of pointing, direct manipulative gestures, flag semaphore emulation, and principal axes. As a minor part of this research, the usability of interacting using only arm gestures is also evaluated based on the ISO 9241-9 standard. The results suggest that the proposed algorithm and optimal set of interactions are useful for interacting with large displays.
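A minimal sketch of the Haar-plus-disparity detection step, assuming OpenCV's stock frontal-face cascade; the focal length and stereo baseline are illustrative placeholders, not values from the thesis:

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_faces_3d(gray_image, disparity, focal_px=700.0, baseline_m=0.12):
    """Detect faces with a boosted Haar cascade, then use the disparity
    map to estimate each face's depth via stereo geometry."""
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1,
                                     minNeighbors=5)
    positions = []
    for (x, y, w, h) in faces:
        d = float(np.median(disparity[y:y + h, x:x + w]))  # robust estimate
        if d > 0:
            z = focal_px * baseline_m / d   # depth = f * B / disparity
            positions.append((x + w / 2.0, y + h / 2.0, z))
    return positions
```

The returned 3D face locations would then anchor the anthropometrically constrained interaction space described above.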
4

Vision-based analysis, interpretation and segmentation of hand shape using six key marker points

Crawford, Gordon Finlay January 1997 (has links)
No description available.
5

Robotmanipulering med Leap Motion : För små och medelstora företag / Robot manipulation based on Leap Motion : For small and medium sized enterprises

Agell, Ulrica January 2016 (has links)
On-line programming of industrial robots is time consuming and requires experience in robot programming. As a result, small and medium-sized enterprises are hesitant to introduce robots into production. Ongoing research in the field focuses on finding more intuitive interfaces and programming methods to make interaction with robots more natural. This master thesis presents a method for manipulating industrial robots using an external device other than the traditional teach pendant. The core of the method is a PC application which handles the program logic and the communication between the external device and an ABB robot. The program logic is designed to be modular in order to allow customization of the method, both in terms of its functions and the type of external device used. Since gestures are one of the most common forms of human communication, it is interesting to investigate them as a means of making manipulation of industrial robots more intuitive. Therefore, a Leap Motion controller is presented as an example of an external device that could be used as an alternative to the teach pendant. The Leap Motion controller is specialised in hand and finger position tracking, with good absolute accuracy and precision. Further, its associated Software Development Kit (SDK) provides the capabilities required to implement a teach pendant's most fundamental functions. Results from a user test show that the developed application is both easy and fast to use but has poor robustness.
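As a hedged sketch of this idea, assuming the classic (v2-era) Leap Motion Python SDK; `send_jog_target` is a hypothetical stand-in for the PC application's communication layer to the ABB robot, and the scaling factor is a tuning assumption:

```python
import time
import Leap  # classic Leap Motion SDK (v2-era) Python bindings

SCALE = 0.5  # robot mm per hand mm -- a tuning assumption

def send_jog_target(dx, dy, dz):
    """Hypothetical stand-in for the application's ABB communication."""
    print("jog robot by (%.1f, %.1f, %.1f) mm" % (dx, dy, dz))

controller = Leap.Controller()
last = None
while True:
    frame = controller.frame()
    if not frame.hands.is_empty:
        pos = frame.hands[0].palm_position  # Leap.Vector, in millimetres
        if last is not None:
            send_jog_target(SCALE * (pos.x - last.x),
                            SCALE * (pos.y - last.y),
                            SCALE * (pos.z - last.z))
        last = pos
    time.sleep(0.05)  # poll at roughly 20 Hz
```

The modular design described in the abstract would let this polling loop be swapped for any other external device that can emit incremental jog targets.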
6

Implementación de una herramienta de integración de varios tipos de interacción humano-computadora para el desarrollo de nuevos sistemas multimodales / Implementation of an integration tool of several types of human-computer interaction for the development of new multimodal systems

Alzamora, Manuel I., Huamán, Andrés E., Barrientos, Alfredo, Villalta Riega, Rosario del Pilar January 2018 (has links)
People interact with their environment multimodally, that is, using several of their senses simultaneously. In recent years, multimodal human-computer interaction has been pursued through the development of new devices and the use of different communication channels, with the aim of providing a more natural interactive user experience. This work presents a tool that allows different types of human-computer interaction to be integrated, and tests it on a multimodal solution. / Peer reviewed
7

Principal Component Analysis on Fingertips for Gesture Recognition

Hsu, Hung-Chang 31 July 2003 (has links)
To have a voice link with other diving partners or surface personnel, divers need to put on a communication mask. The second-stage regulator or mouthpiece is equipped with a circuit to pick up the diver's voice. The voice is then frequency-modulated onto an ultrasonic signal and transmitted through the water. A receiver on the other side picks up the ultrasonic signal, demodulates it back to voice, and plays it back in the diver's earphone set. This technology is mature but not widely adopted because of its price. Most divers still use their favorite way of communicating with each other, i.e. DSL (divers' sign language). As more and more intelligent machines and robots are built to help divers with their underwater tasks, divers need to exchange messages not only with their human partners but also with machines. However, few input devices are available other than push buttons or joysticks. Divers' hands are always busy holding tools or gauges; additional input devices would further complicate their movements and distract their attention from safety measures. With this in mind, this thesis develops an algorithm to read DSL as input commands for a computer-aided diving system. To simplify the image-processing part of the problem, we attach an LED to the tip of each finger. The gesture or hand sign is then captured by a CCD camera. After thresholding, at most five bright spots remain in the image. The remaining task is to design a classifier that can identify whether an unknown sign is one from the pool. Furthermore, a constraint imposed is that the algorithm should work without knowing all of the signs in advance. This is analogous to the way a human can recognize whether a face belongs to someone seen before or to a stranger. We modify the concept of eigenfaces developed by Turk and Pentland into eigenhands. The idea is to choose geometrical properties of the bright spots (fingertips), such as the distances from the fingertips to their centroid or the total area of the polygon with the fingertips as its vertices, as the features of the corresponding hand sign. All these features are quantitative, so several features can be combined into a vector representing a specific hand sign. These vectors are treated as the raw data of the hand signs, and an essential subspace can be spanned by the eigenvectors corresponding to the first few largest eigenvalues, whose number is less than the total number of hand signs involved. The projections of a raw vector onto these eigenvectors are called the principal components of the hand sign. Principal components are abstract, but they can serve as keys to match a candidate from a larger pool. With these simple geometrical features, the success rate of cross-identification over 16 gestures from 30 different subjects reaches 91.04%.
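A minimal sketch of this eigenhands pipeline; the feature vectors, subspace dimension `k`, and match threshold are illustrative assumptions:

```python
import numpy as np

def fit_pca(features, k=4):
    """features: (n_signs, n_features) matrix of geometric measurements
    such as fingertip-to-centroid distances and polygon area."""
    mean = features.mean(axis=0)
    centered = features - mean
    # Right singular vectors = eigenvectors of the covariance matrix,
    # ordered by decreasing eigenvalue.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                        # top-k principal directions
    return mean, basis, centered @ basis.T

def match(sample, mean, basis, projections, threshold=1.0):
    """Project an unknown sign onto the subspace and find the closest
    known sign, or report it as unseen if nothing is close enough
    (cf. recognizing a stranger's face)."""
    p = (sample - mean) @ basis.T
    dists = np.linalg.norm(projections - p, axis=1)
    i = int(np.argmin(dists))
    return (i, dists[i]) if dists[i] < threshold else (None, dists[i])
```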
8

Feature Extraction of Gesture Recognition Based on Image Analysis by Using Matlab

Chaofan, Hao, Haisheng, Yu January 2014 (has links)
This thesis focuses on gesture extraction and finger segmentation for gesture recognition. We used image analysis techniques to build an application written in Matlab. With this application we segmented and extracted the finger from one specific gesture (the gesture "one") and ran it successfully. We explored the success rate of extracting the characteristics of the gesture "one" in different natural environments, divided into three conditions: glare and dark conditions, similar-object conditions, and varying-distance conditions, and collected the results to calculate the successful extraction rate. We also evaluated and analyzed the shortcomings of this application and discussed future work. / Technology
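The thesis implements its pipeline in Matlab; as a hedged equivalent, here is a short Python/OpenCV sketch of the same kind of segmentation: threshold skin-like pixels, keep the largest blob, and take its topmost point as the extended fingertip. The YCrCb skin-color range is an illustrative assumption:

```python
import cv2
import numpy as np

def extract_finger(bgr_image):
    """Segment the hand by skin color and return (contour, fingertip)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array((0, 133, 77), np.uint8),
                       np.array((255, 173, 127), np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))  # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 API
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)        # largest skin blob
    tip = min(hand.reshape(-1, 2), key=lambda p: p[1])  # topmost point
    return hand, tuple(tip)
```

The glare/dark, similar-object, and distance conditions studied in the thesis correspond to the situations where such a fixed color threshold tends to fail.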
9

Design of a Wearable Two-Dimensional Joystick as a Muscle-Machine Interface Using Mechanomyographic Signals

Saha, Deba Pratim 22 January 2014 (has links)
Finger gesture recognition using glove-like interfaces is very accurate for sensing individual finger positions, thanks to the gamut of sensors employed. However, for the same reason, such interfaces are also costly, cumbersome, and unaesthetic for use in artistic scenarios such as gesture-based music composition platforms like Virginia Tech's Linux Laptop Orchestra. Wearable computing has shown promising results in increasing portability as well as enhancing proprioceptive perception of the wearer's body. In this thesis, we present a proof of concept for a novel muscle-machine interface that interprets human thumb motion as a 2-dimensional joystick using mechanomyographic signals. Infrared-camera-based systems such as Microsoft Digits and ultrasound-based systems such as Chirp Microsystems' gesture recognizers are elegant solutions, but have line-of-sight sensing limitations. Here, we present a low-cost wearable joystick designed as a wristband which captures muscle sounds, also called mechanomyographic signals. The interface learns from the user's thumb gestures and interprets these motions as one of four kinds of thumb movement. We obtained an overall classification accuracy of 81.5% for all motions and 90.5% on a modified metric. Results from the user study indicate that a mechanomyography-based wearable thumb joystick is a feasible design idea worthy of further study. / Master of Science
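As a hedged sketch of the classification stage such a wristband might use; the time-domain features, window size, and k-NN classifier below are assumptions, and the random arrays stand in for recorded MMG windows:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mmg_features(window):
    """window: 1-D array of MMG samples from the wristband sensor."""
    return np.array([
        np.sqrt(np.mean(window ** 2)),         # RMS amplitude
        np.mean(np.abs(np.diff(window))),      # mean absolute slope
        np.sum(np.diff(np.sign(window)) != 0)  # zero-crossing count
    ])

# Placeholder training data: 4 thumb directions (up/down/left/right),
# 20 windows each; a real system would record labeled user gestures.
rng = np.random.default_rng(0)
X = np.array([mmg_features(rng.standard_normal(256)) for _ in range(80)])
y = np.repeat([0, 1, 2, 3], 20)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
direction = clf.predict(mmg_features(rng.standard_normal(256)).reshape(1, -1))
```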
