31

A Novel Accelerometer-based Gesture Recognition System

Akl, Ahmad 14 December 2010 (has links)
Gesture recognition provides an efficient means of human-computer interaction for interactive and intelligent computing. In this work, we address the problem of gesture recognition using the theory of random projection and by formulating the recognition problem as an $\ell_1$-minimization problem. The gesture recognition system uses a single 3-axis accelerometer for data acquisition and comprises two main stages: a training stage and a testing stage. For training, the system employs dynamic time warping as well as affinity propagation to create exemplars for each gesture, while for testing, the system projects all candidate traces and the unknown trace onto the same lower-dimensional subspace for recognition. A dictionary of 18 gestures is defined and a database of over 3,700 traces is collected from 7 subjects, on which the system is tested and evaluated. Simulation results reveal superior performance, in terms of accuracy and computational complexity, compared to other systems in the literature.
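The core matching step, dynamic time warping, aligns traces performed at different speeds. A minimal 1-D sketch (the thesis applies it to 3-axis accelerometer traces; the sample arrays are illustrative):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D traces."""
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative cost to align a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch b
                                 cost[i, j - 1],      # stretch a
                                 cost[i - 1, j - 1])  # match step
    return cost[n, m]

# Two renditions of the same gesture shape at different speeds align at
# zero cost, even though their lengths differ.
slow = np.array([0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0])
fast = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
print(dtw_distance(slow, fast))  # 0.0
```

The warping lets one sample of `fast` absorb repeated samples of `slow`, which is exactly what makes DTW suitable for comparing gesture traces of varying tempo.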
32

Robust gesture recognition

Cheng, You-Chi 08 June 2015 (has links)
It is a challenging problem to make a general hand gesture recognition system work in a practical operating environment. This study focuses mainly on recognizing English letters and digits performed near the steering wheel of a car and captured by a video camera. Like most human-computer interaction (HCI) scenarios, in-car gesture recognition suffers from various robustness issues, including multiple human factors and highly varying lighting conditions. It therefore raises quite a few research issues to be addressed. First, multiple gesturing alternatives may share the same meaning, which is not typical in most previous systems. Next, gestures may not be performed as expected because users cannot see what exactly has been written, which increases gesture diversity significantly. In addition, varying illumination conditions make hand detection non-trivial and thus result in noisy hand gestures. Most severely, users tend to perform letters at a fast pace, which may leave too few frames to describe the gestures well. Since users are allowed to perform gestures freestyle, multiple alternatives and variations must be considered when modeling gestures. The main contribution of this work is to analyze and address these challenging issues step by step so that the robustness of the whole system can be effectively improved. By choosing a suitable color-space representation and compensating for varying recording conditions, hand detection performance under multiple illumination conditions is first enhanced. The issues of low frame rate and differing gesturing tempo are then resolved separately via cubic B-spline interpolation and the i-vector method for feature extraction. Finally, remaining issues are handled by other modeling techniques such as sub-letter stroke modeling.
According to experimental results based on the above strategies, the proposed framework clearly improves system robustness, encouraging future research on more discriminative features and modeling techniques.
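The low-frame-rate compensation step can be sketched with SciPy's B-spline routines; the sample track and the 4x resampling factor below are illustrative assumptions, not the thesis's actual settings:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Hypothetical fingertip x-coordinates captured at a low frame rate (8 frames).
t = np.arange(8.0)
x = np.array([0.0, 0.5, 1.8, 3.0, 3.2, 2.1, 1.0, 0.2])

# Fit a cubic B-spline (k=3; s=0 makes the curve pass through every sample) ...
tck = splrep(t, x, k=3, s=0)

# ... then resample about 4x more densely to recover a smoother trajectory
# for the downstream gesture features.
t_dense = np.linspace(0.0, 7.0, 29)
x_dense = splev(t_dense, tck)
```

Because `s=0` forces exact interpolation, evaluating the spline at the original frame times reproduces the captured samples; only the in-between motion is estimated.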
33

Feature selection and hierarchical classifier design with applications to human motion recognition

Freeman, Cecille January 2014 (has links)
The performance of a classifier is affected by a number of factors including classifier type, the input features and the desired output. This thesis examines the impact of feature selection and classification problem division on classification accuracy and complexity. Proper feature selection can reduce classifier size and improve classifier performance by minimizing the impact of noisy, redundant and correlated features. Noisy features can cause false association between the features and the classifier output. Redundant and correlated features increase classifier complexity without adding additional information. Output selection or classification problem division describes the division of a large classification problem into a set of smaller problems. Problem division can improve accuracy by allocating more resources to more difficult class divisions and enabling the use of more specific feature sets for each sub-problem. The first part of this thesis presents two methods for creating feature-selected hierarchical classifiers. The feature-selected hierarchical classification method jointly optimizes the features and classification tree design using genetic algorithms. The multi-modal binary tree (MBT) method performs the class division and feature selection sequentially and tolerates misclassifications in the higher nodes of the tree. This yields a piecewise separation for classes that cannot be fully separated with a single classifier. Experiments show that the accuracy of MBT is comparable to other multi-class extensions, but with lower test time. Furthermore, the accuracy of MBT is significantly higher on multi-modal data sets. The second part of this thesis focuses on input feature selection measures. A number of filter-based feature subset evaluation measures are evaluated with the goal of assessing their performance with respect to specific classifiers.
Although there are many feature selection measures proposed in the literature, it is unclear which feature selection measures are appropriate for use with different classifiers. Sixteen common filter-based measures are tested on 20 real and 20 artificial data sets, which are designed to probe for specific feature selection challenges. The strengths and weaknesses of each measure are discussed with respect to the specific feature selection challenges in the artificial data sets, correlation with classifier accuracy and their ability to identify known informative features. The results indicate that the best filter measure is classifier-specific. K-nearest neighbours classifiers work well with subset-based RELIEF, correlation feature selection or conditional mutual information maximization, whereas Fisher's interclass separability criterion and conditional mutual information maximization work better for support vector machines. Based on the results of the feature selection experiments, two new filter-based measures are proposed based on conditional mutual information maximization, which performs well but cannot identify dependent features in a set and does not include a check for correlated features. Both new measures explicitly check for dependent features, and the second measure also includes a term to discount correlated features. Both measures correctly identify known informative features in the artificial data sets and correlate well with classifier accuracy. The final part of this thesis examines the use of feature selection for time-series data by using feature selection to determine important individual time windows or key frames in the series. Time-series feature selection is used with the MBT algorithm to create classification trees for time-series data. The feature-selected MBT algorithm is tested on two human motion recognition tasks: full-body human motion recognition from joint angle data and hand gesture recognition from electromyography data.
Results indicate that the feature selected MBT is able to achieve high classification accuracy on the time-series data while maintaining a short test time.
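Conditional mutual information maximization (CMIM), the measure the proposed filters build on, greedily keeps a feature only if it remains informative after conditioning on each already-selected feature. A minimal sketch for discrete features (the synthetic data is an illustration; the thesis's new measures add dependency and correlation checks beyond this baseline):

```python
import numpy as np
from collections import Counter

def mi(x, y):
    """Empirical mutual information I(X;Y) in nats, for discrete arrays."""
    n = len(x)
    pxy = Counter(zip(x.tolist(), y.tolist()))
    px, py = Counter(x.tolist()), Counter(y.tolist())
    return sum(c / n * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def cond_mi(x, y, z):
    """I(X;Y|Z) = sum over values zv of p(zv) * I(X;Y | Z=zv)."""
    n = len(z)
    return sum(cz / n * mi(x[z == zv], y[z == zv])
               for zv, cz in Counter(z.tolist()).items())

def cmim(X, y, k):
    """Greedy CMIM: each pick maximizes the worst-case conditional relevance."""
    selected = []
    for _ in range(k):
        best, best_score = None, -1.0
        for f in range(X.shape[1]):
            if f in selected:
                continue
            score = min([mi(X[:, f], y)] +
                        [cond_mi(X[:, f], y, X[:, s]) for s in selected])
            if score > best_score:
                best, best_score = f, score
        selected.append(best)
    return selected

# Synthetic check: f0 is informative, f1 duplicates it, f2 adds new evidence.
y = np.array([0, 1] * 50)
f0 = y.copy(); f0[::10] ^= 1   # 90% agreement with y
f1 = f0.copy()                 # exact duplicate of f0
f2 = y.copy(); f2[::5] ^= 1    # 80% agreement; errors differ from f0's
X = np.column_stack([f0, f1, f2])
print(cmim(X, y, 2))  # picks f0 first, then skips the duplicate: [0, 2]
```

The duplicate `f1` scores zero once `f0` is selected because `I(f1; y | f0) = 0`, which is exactly the redundancy behaviour the abstract describes.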
34

A Study of Boosting based Transfer Learning for Activity and Gesture Recognition

January 2011 (has links)
Real-world environments are characterized by non-stationary and continuously evolving data. Learning a classification model on this data requires a framework that is able to adapt itself to newer circumstances. Under such circumstances, transfer learning has come to be a dependable methodology for improving classification performance with reduced training costs and without the need for explicit relearning from scratch. In this thesis, a novel instance transfer technique that adapts a "cost-sensitive" variation of AdaBoost is presented. The method capitalizes on the theoretical and functional properties of AdaBoost to selectively reuse outdated training instances obtained from a "source" domain to effectively classify unseen instances occurring in a different, but related, "target" domain. The algorithm is evaluated on real-world classification problems, namely accelerometer-based 3D gesture recognition, smart home activity recognition and text categorization. The performance on these datasets is analyzed and evaluated against popular boosting-based instance transfer techniques. In addition, supporting empirical studies that investigate some of the less explored bottlenecks of boosting-based instance transfer methods are presented, to understand the suitability and effectiveness of this form of knowledge transfer. (M.S. thesis, Computer Science, 2011)
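The instance-transfer idea can be sketched as a per-round weight update: misclassified source instances are treated as outdated and down-weighted, while misclassified target instances are up-weighted as in ordinary AdaBoost. This is a generic sketch of the boosting-based transfer family the thesis builds on (the beta values and masks are illustrative), not the thesis's cost-sensitive variant itself:

```python
import numpy as np

def transfer_reweight(w, correct, is_source, beta_src=0.5, beta_tgt=2.0):
    """One round of boosting-based instance-transfer reweighting.

    Misclassified source instances shrink (their knowledge looks outdated
    for the target domain); misclassified target instances grow, focusing
    later rounds on the hard target examples."""
    w = w.copy()
    w[is_source & ~correct] *= beta_src
    w[~is_source & ~correct] *= beta_tgt
    return w / w.sum()  # renormalize to a distribution

# Four instances: two source, two target; one of each is misclassified.
w = np.full(4, 0.25)
is_source = np.array([True, True, False, False])
correct = np.array([True, False, True, False])
w_new = transfer_reweight(w, correct, is_source)
```

After one round the misclassified source instance carries the least weight and the misclassified target instance the most, so subsequent weak learners concentrate on the target domain's hard cases.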
35

An integrated sign language recognition system

Nel, Warren January 2014 (has links)
Doctor Educationis / Research has shown that five parameters are required to recognize any sign language gesture: hand shape, location, orientation and motion, as well as facial expressions. The South African Sign Language (SASL) research group at the University of the Western Cape has created systems to recognize sign language gestures using single parameters. Using a single parameter can cause ambiguities in the recognition of similarly signed gestures, restricting the possible vocabulary size. This research pioneers the group's work on combining multiple parameters to achieve a larger recognition vocabulary. The proposed methodology combines hand location and hand shape recognition into one combined recognition system. The system is shown to recognize a vocabulary of 50 signs at an average accuracy of 74.1%. This vocabulary is much larger than in existing SASL recognition systems, and the system achieves higher accuracy than those systems in spite of the larger vocabulary. It is also shown that the system is highly robust to variations in test subjects such as skin colour, gender and body dimension. Furthermore, this work pioneers the group's research on continuously recognizing signs from a video stream, whereas existing systems recognized a single sign at a time. To this end, a highly accurate continuous gesture segmentation strategy is proposed and shown to accurately recognize sentences consisting of five isolated SASL gestures.
36

Aktivní protéza ruky / Active Prosthetic Hand

Brenner, Maximilian January 2019 (has links)
BACKGROUND: Mainly as a result of vascular diseases and traumatic injuries, around 40,000 upper limb amputations are performed annually worldwide. Affected persons are strongly impaired in their physical abilities by such an intervention. Through myoelectric prostheses, they can recover some of these abilities. METHODS: To control such prostheses, a system is developed with which electromyographic (EMG) measurements can be carried out on the upper extremities. The data obtained in this way are then processed to recognize different gestures. The EMG measurements are performed by means of a suitable microcontroller and afterwards processed and classified by adequate software. Finally, a model or prototype of a hand is created, which is controlled by means of the acquired data. RESULTS: The signals from the upper extremities were picked up by four MyoWare sensors and transmitted to a computer via an Arduino Uno microcontroller. The signals were processed in quantized time windows using Matlab. By means of a neural network, the gestures were recognized and displayed both graphically and by a prosthesis. The achieved recognition rate was up to 87% across all gestures. CONCLUSION: With an increasing number of gestures to be detected, a neural network exceeds fuzzy logic in classification accuracy. The recognition rates fluctuated between the individual gestures, which indicates that further fine-tuning is needed to better train the classifier. The work nevertheless demonstrates that relatively cheap hardware can be used to build a control system for upper-extremity prostheses.
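The "quantized time windows" processing step can be sketched as fixed-length windowing with simple per-window EMG features; the 50-sample window and the RMS/MAV feature pair are common choices assumed here for illustration, not taken from the thesis:

```python
import numpy as np

def window_features(emg, win=50):
    """Cut one EMG channel into fixed windows; return RMS and MAV per window."""
    n = len(emg) // win
    frames = emg[:n * win].reshape(n, win)
    rms = np.sqrt((frames ** 2).mean(axis=1))   # root mean square
    mav = np.abs(frames).mean(axis=1)           # mean absolute value
    return np.column_stack([rms, mav])          # shape (n_windows, 2)

# A synthetic recording: quiet baseline, then stronger muscle activity.
emg = np.concatenate([0.1 * np.ones(100), 0.8 * np.ones(100)])
feats = window_features(emg)  # 4 windows x 2 features
```

With four sensor channels, as in the described setup, the per-channel feature rows would be concatenated into one vector per window before being fed to the classifier.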
37

Ovládání počítače pomocí gest / Human-Machine Interface Based on Gestures

Charvát, Jaroslav January 2011 (has links)
The master's thesis "Human-Machine Interface Based on Gestures" presents the theoretical background of computer vision and gesture recognition, and describes in more detail the methods that were used to create the application. The practical part of this thesis consists of a description of the developed program and its functionality. Using this application, the user is able to control the computer with gestures of both the right and left hand, as well as the head. The program is primarily based on skin detection, followed by recognition of palm and head gestures. Two essential methods were used for these tasks: AdaBoost and PCA.
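PCA, one of the two methods named, projects samples onto a small basis of principal directions before classification. A generic sketch via SVD (the 2-D synthetic data is illustrative; in a recognizer like this the rows would be flattened image patches):

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA by SVD of the mean-centred data matrix (rows are samples)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]          # principal directions as rows

def pca_project(X, mean, basis):
    """Project samples into the low-dimensional principal subspace."""
    return (X - mean) @ basis.T

# Synthetic samples lying exactly along the direction (1, 0.1).
t = np.arange(10.0)
X = np.column_stack([t, 0.1 * t])
mean, basis = pca_fit(X, 1)
proj = pca_project(X, mean, basis)  # 10 samples, 1 coordinate each
```

Because the sample data is exactly rank one, reconstructing from the single retained component (`mean + proj @ basis`) recovers the original points, which shows the projection keeps all the variance here.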
38

Rozpoznávání pohybu těla pomocí nositelných zařízení / Body Gesture Recognition Using Wearable Devices

Kajzar, Aleš January 2016 (has links)
The goal of this master's thesis is to describe the capabilities of devices running the Android Wear operating system; it includes a description of the Android Wear API and of components that are nowadays widely used in smart wearable devices. The thesis covers the recognition of dynamic gestures using machine learning methods applied to data provided by a smart device. The practical part describes an implemented library that makes it possible to train gestures, recognize them using the FastDTW algorithm, and inform a connected device about the recognized movement. Use of the library is demonstrated in a demo application.
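The train-then-recognize flow of such a library reduces to nearest-template matching under a DTW distance. FastDTW approximates full DTW in near-linear time by recursive coarsening; the sketch below conveys the same cost-saving idea with a simpler Sakoe-Chiba band constraint (template names, values and the radius are illustrative, and the thesis library itself targets Android, not Python):

```python
import numpy as np

def band_dtw(a, b, radius=2):
    """DTW restricted to a band around the diagonal, pruning far-off cells."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - radius), min(m, i + radius)
        for j in range(lo, hi + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def recognize(trace, templates):
    """Return the label of the stored template closest to the new trace."""
    return min(templates, key=lambda lbl: band_dtw(trace, templates[lbl]))

# "Training" stores one exemplar trace per gesture label.
templates = {
    "circle": np.array([0.0, 1.0, 2.0, 1.0, 0.0]),
    "swipe":  np.array([0.0, 1.0, 2.0, 3.0, 4.0]),
}
trace = np.array([0.0, 1.0, 1.0, 2.0, 1.0, 0.0])
print(recognize(trace, templates))  # circle
```

The band prunes most of the quadratic cost matrix; FastDTW instead finds the band adaptively by solving a downsampled copy of the problem first and refining around its path.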
39

Real-Time Gesture-Based Posture Control of a Manipulator

Plouffe, Guillaume 20 January 2020 (has links)
Reaching a target quickly and accurately with a robotic arm containing multiple joints, while avoiding moving and fixed obstacles, can be a daunting (and sometimes impossible) task for any user behind the remote control. Existing solutions are often hard to use and hard to scale across user body types and robotic arm configurations. In this work, we propose a vision-based gesture recognition approach to naturally control the overall posture of a robotic arm using human hand gestures, together with an inverse kinematic exploration approach using the FABRIK algorithm. Three different methods are investigated to intuitively control a robotic arm's posture in real time using depth data collected by a Kinect sensor. Each of the posture control methods is user-scalable and compatible with most existing robotic arm configurations. In the first method, the user's right index fingertip position is mapped to compute the inverse kinematics on the robot. The inverse kinematics solutions are displayed in a graphical interface; using this interface and the left hand, the user can intuitively browse and select a desired robotic arm posture. In the second method, the user's right index fingertip position and finger direction are used to determine, respectively, the end-effector position and an attraction point position; the latter enables control of the robotic arm posture. In the third method, the user's right index finger is mapped to compute the inverse kinematics on the robot. Using a static gesture of the same hand, the right index finger can be turned into a virtual pen that traces the form of the desired robotic arm posture. The trace can be visualized in real time on a graphical interface. A search is then performed using inverse kinematic exploration and the Dynamic Time Warping algorithm to select the closest matching feasible posture.
In the last two proposed methods, different search strategies are proposed to optimize the speed and coverage of the inverse kinematic exploration. Using a combination of Greedy Best First search and an efficient selection of input postures based on the FABRIK algorithm's characteristics, these optimizations allow smoother and more accurate posture control of the robotic arm. The performance of these real-time natural human control approaches is evaluated for precision and speed against static (i.e. fixed) and dynamic (i.e. moving) obstacles in a simulated experiment. An adaptation of the vision-based gesture recognition system to operate the AL5D robotic arm was also implemented to conduct further evaluation in a real-world environment. The results showed that the first and third methods were better suited for obstacle avoidance in static environments not requiring continuous posture changes. The second method gave excellent results in the dynamic-environment experiments and was able to complete a challenging pick-and-place task in a difficult real-world environment with static constraints.
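FABRIK itself is compact enough to sketch: it alternates a forward pass that pins the end effector to the target with a backward pass that re-anchors the base, rescaling every link back to its original length each time. A minimal 2-D version (the chain and target values are illustrative; the thesis applies it to full robotic-arm configurations):

```python
import numpy as np

def fabrik(joints, target, tol=1e-4, max_iter=100):
    """FABRIK inverse kinematics for a serial chain of joint positions."""
    joints = joints.astype(float)
    lengths = np.linalg.norm(np.diff(joints, axis=0), axis=1)
    base = joints[0].copy()
    if np.linalg.norm(target - base) > lengths.sum():
        # Unreachable target: stretch the chain straight toward it.
        for i in range(len(lengths)):
            d = target - joints[i]
            joints[i + 1] = joints[i] + lengths[i] * d / np.linalg.norm(d)
        return joints
    for _ in range(max_iter):
        # Forward pass: pin the end effector on the target, work back to base.
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + lengths[i] * d / np.linalg.norm(d)
        # Backward pass: re-anchor the base, work out to the end effector.
        joints[0] = base
        for i in range(len(joints) - 1):
            d = joints[i + 1] - joints[i]
            joints[i + 1] = joints[i] + lengths[i] * d / np.linalg.norm(d)
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints

# A 3-link arm lying along the x-axis, reaching for a point above it.
chain = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
target = np.array([2.0, 1.0])
pose = fabrik(chain, target)
```

Each pass moves points along straight lines and rescales links, so no Jacobians are needed, which is what makes FABRIK fast enough for the real-time posture exploration described above.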
40

Real-time 2D Static Hand Gesture Recognition and 2D Hand Tracking for Human-Computer Interaction

Popov, Pavel Alexandrovich 11 December 2020 (has links)
The topic of this thesis is hand gesture recognition and hand tracking for user interface applications. Three systems were produced, as well as datasets for recognition and tracking, along with UI applications to prove the concept of the technology. These represent significant contributions to resolving the hand recognition and tracking problems for 2D systems. The systems were designed to work in video-only contexts, be computationally light, provide recognition and tracking of the user's hand, and operate without user-driven fine-tuning and calibration. Existing systems require user calibration, use depth sensors and do not work in video-only contexts, or are computationally heavy, requiring a GPU to run in live situations. A two-step static hand gesture recognition system was created which can recognize 3 different gestures in real time: a detection step detects hand gestures using machine learning models, and a validation step rejects false positives. The gesture recognition system was combined with hand tracking: it recognizes and then tracks a user's hand in video in an unconstrained setting. The tracking uses two collaborative strategies. A contour tracking strategy guides a minimization-based template tracking strategy and makes it real-time, robust, and recoverable, while the template tracking provides stable input for UI applications. Lastly, an improved static gesture recognition system addresses the drawbacks of stratified colour sampling of the detection boxes in the detection step: it uses the entire presented colour range and clusters it into constituent colour modes, which are then used for segmentation, improving the overall gesture recognition rates. One dataset was produced for static hand gesture recognition, which allowed the comparison of multiple machine learning strategies, including deep learning.
Another dataset was produced for hand tracking, providing a challenging series of user scenarios to test the gesture recognition and hand tracking system. Both datasets are significantly larger than other available datasets. The hand tracking algorithm was used to create a mouse cursor control application, a paint application for Android mobile devices, and an FPS video game controller. The latter in particular demonstrates how the collaborating hand tracking can fulfill the demanding nature of responsive aiming and movement controls.
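The colour-mode clustering step in the improved recognizer can be sketched as a tiny k-means over pixel colours. The cluster count, the deterministic initialization, and the synthetic "skin" and "background" colours below are assumptions for illustration, not details from the thesis:

```python
import numpy as np

def colour_modes(pixels, k=2, iters=20):
    """Cluster pixel colours into k modes with a minimal k-means.

    Initializes deterministically from evenly spaced pixels; a real
    system would use k-means++ or a similar seeding scheme."""
    centres = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign each pixel to the nearest colour mode ...
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # ... then move each mode to the mean colour of its pixels.
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return centres, labels

# Synthetic patch: 50 skin-toned pixels and 50 dark background pixels (RGB).
skin = np.array([200, 150, 120]) + np.arange(50)[:, None] % 5
bg = np.array([30, 40, 50]) + np.arange(50)[:, None] % 5
pixels = np.vstack([skin, bg])
centres, labels = colour_modes(pixels)
```

The resulting `labels` array is the segmentation mask: pixels assigned to the mode matching the hand's colour are kept, the rest discarded, which is the role colour-mode clustering plays in the improved detection step.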
