41

Evaluating the Feasibility of Accelerometers in Hand Gestures Recognition

Karlaputi, Sarada 12 1900 (has links)
Gesture recognition plays an important role in human-computer interaction for intelligent computing. Major applications such as gaming, robotics, and automated homes use gesture recognition techniques, which diminish the need for mechanical input devices. The main goal of my thesis is to interpret SWAT team gestures using different types of sensors. Accelerometers and flex sensors were explored extensively to build a prototype that lets soldiers communicate in the absence of line of sight. Arm movements were recognized by flex sensors and motion gestures by accelerometers. Accelerometers measure the acceleration of the sensor as it moves in 3D; flex sensors change their resistance according to the amount of bend in the sensor. SVM is the classification algorithm used to classify the samples. LIBSVM (Library for Support Vector Machines) is integrated software for support vector classification, regression, and distribution estimation that supports multi-class classification. The sensor data is fed to the Wi-microDig, which digitizes the signal and transmits it wirelessly to the computing device. Feature extraction and signal windowing were the two major factors contributing to the accuracy of the system. Mean and standard deviation are the two features used to classify the accelerometer data, while standard deviation alone gave optimal results for the flex sensor analysis. The signal is filtered by identifying the different states of the continuously sampled signals.
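The two accelerometer features named above (mean and standard deviation per signal window) can be sketched as follows; the fixed, non-overlapping window layout is an illustrative assumption, not a detail taken from the thesis:

```python
import statistics

def window_features(samples, window_size):
    """Compute the two classification features described above -- mean and
    (population) standard deviation -- over fixed, non-overlapping windows
    of a 1-D accelerometer stream."""
    feats = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        window = samples[start:start + window_size]
        feats.append((statistics.fmean(window), statistics.pstdev(window)))
    return feats
```

Each (mean, std) pair would then be one training sample for the SVM, e.g. via LIBSVM's multi-class interface.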
42

Machine Learning Techniques for Gesture Recognition

Caceres, Carlos Antonio 13 October 2014 (has links)
Classification of human movement is a large field of interest to human-machine interface researchers, because of the large emphasis humans place on gestures while communicating with each other and while interacting with machines. Such gestures can be digitized in a number of ways, including both passive methods, such as cameras, and active methods, such as wearable sensors. While passive methods might be ideal, they are not always feasible, especially in unstructured environments. Instead, wearable sensors have gained interest as a method of gesture classification, especially in the upper limbs. Lower-arm movements are made up of a combination of multiple electrical signals known as Motor Unit Action Potentials (MUAPs). These signals can be recorded from electrodes placed on the surface of the skin and used for prosthetic control, sign language recognition, human-machine interfaces, and a myriad of other applications. To move a step closer to these goal applications, this thesis compares three different machine learning tools, Hidden Markov Models (HMMs), Support Vector Machines (SVMs), and Dynamic Time Warping (DTW), for recognizing a number of different gesture classes. It further contrasts the applicability of these tools to noisy data in the form of the Ninapro dataset, a benchmarking tool put forth by a conglomerate of universities. Using this dataset as a basis, this work paves a path for the analysis required to optimize each of the three classifiers. Ultimately, the three classifiers are compared for their utility on noisy data, and the results are compared against classification results put forth by other researchers in the field. The outcome of this work is over 90% recognition of individual gestures from the Ninapro dataset with two of the three classifiers; comparison against previous works shows these results to outperform all others reported thus far.
Through further work with these tools, an end user might control a robotic or prosthetic arm, or translate sign language, or perhaps simply interact with a computer. / Master of Science
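Of the three classifiers compared above, Dynamic Time Warping is the simplest to sketch. A minimal pure-Python version of the classic dynamic-programming recurrence follows; the absolute-difference local cost is an illustrative choice, not the thesis's exact configuration:

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW: accumulate the cheapest warping
    path through the pairwise |x - y| cost matrix."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of the three admissible predecessor cells
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A nearest-template classifier would label a test sequence with the class of the training template minimizing this distance; note how a repeated sample ([1, 2, 2, 3] vs. [1, 2, 3]) still warps to zero cost.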
43

Distance-Scaled Human-Robot Interaction with Hybrid Cameras

Pai, Abhishek 24 October 2019 (has links)
No description available.
44

Gesture recognition with application in music arrangement

Pun, James Chi-Him 05 November 2007 (has links)
This thesis studies interaction with music synthesis systems using hand gestures. Traditionally, users of such systems were limited to input devices such as buttons, pedals, faders, and joysticks. The use of gestures allows the user to interact with the system in a more intuitive way. Without the constraint of input devices, the user can simultaneously control more elements within the music composition, thus increasing the system's responsiveness to the musician's creative thoughts. A working system of this concept is implemented, employing computer vision and machine intelligence techniques to recognise the user's gestures. / Dissertation (MSc)--University of Pretoria, 2006. / Computer Science / MSc / unrestricted
45

R-CNN and Wavelet Feature Extraction for Hand Gesture Recognition with EMG Signals

Shanmuganathan, Vimal, Yesudhas, Harold Robinson, Khan, Mohammad S., Khari, Manju, Gandomi, Amir H. 01 November 2020 (has links)
This paper demonstrates an implementation of R-CNN on electromyography (EMG) signals to recognize hand gestures. The signals are acquired from electrodes placed on the forearm, and the biomedical signals are preprocessed with the wavelet packet transform to perform feature extraction. The R-CNN maps the features acquired from the wavelet power spectrum, and the architecture is trained and validated on them. Additionally, a real-time test reaches an accuracy of 96.48%, the highest among the compared methods in recognizing the gestures.
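The abstract does not give the wavelet details, but the flavor of wavelet preprocessing can be illustrated with one level of the simplest (Haar) transform; this is a stand-in sketch, not the paper's actual wavelet packet decomposition:

```python
def haar_step(x):
    """One level of the Haar wavelet transform: pairwise averages give
    the low-frequency approximation, pairwise half-differences give the
    high-frequency detail. A wavelet *packet* transform would recurse
    on both outputs rather than only the approximation."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return approx, detail
```

Statistics of the resulting sub-band coefficients (e.g. their energies) are what typically feed the downstream classifier.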
46

Hand Gesture Recognition Using Ultrasonic Waves

AlSharif, Mohammed H. 04 1900 (has links)
Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications in a wide variety of fields, such as mobile phone applications, smart TVs, and video gaming. With the advances in human-computer interaction technology, gesture recognition is becoming an active research area. There are two types of devices for detecting gestures: contact-based devices and contactless devices. Using ultrasonic waves is one approach employed in contactless devices, and hand gesture recognition utilizing ultrasonic waves is the focus of this thesis. The thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. The method uses a linear frequency-modulated ultrasonic signal, designed to meet the project requirements (update rate, range of detection, etc.) while overcoming hardware limitations such as limited output power and limited transmitter and receiver bandwidth; the method can be adapted to other hardware setups. Gestures are identified from two main features: the range estimate of the moving hand and the received signal strength (RSS). These two features are estimated with two simple methods, the channel impulse response (CIR) and the cross-correlation (CC) of the ultrasonic signal reflected from the gesturing hand. A simple customized hardware setup was used to classify a set of hand gestures with high accuracy. Detection and classification use methods of low computational cost, which gives the proposed method great potential for implementation in many devices, including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
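The cross-correlation (CC) step above can be sketched as a sliding dot-product whose peak gives the delay of the echo; the discrete, unnormalized form below is an illustrative simplification of what the thesis describes:

```python
def echo_lag(signal, template):
    """Return the lag (in samples) at which `template` best matches
    `signal`, i.e. the peak of the sliding dot-product -- a bare-bones
    stand-in for cross-correlation-based echo delay estimation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        score = sum(s * t for s, t in zip(signal[lag:], template))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

The hand's range would then follow from the round trip as roughly `lag / sample_rate * speed_of_sound / 2`.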
47

Passive gesture recognition on unmodified smartphones using Wi-Fi RSSI

Abdulaziz Ali Haseeb, Mohamed January 2017 (has links)
The smartphone is carried by hundreds of millions of people worldwide and is used to accomplish a multitude of different tasks, such as basic communication, internet browsing, online shopping, and fitness tracking. Limited by its small size and limited energy storage, the human-smartphone interface is largely bound to the smartphone's small screen and simple keypad, which prohibits introducing new, rich ways of interacting with smartphones.   Industry and the research community are working extensively to enrich the human-smartphone interface, either by leveraging existing smartphone resources such as microphones, cameras, and inertial sensors, or by introducing new specialized sensing capabilities, such as compact gesture-sensing radar devices, into the smartphones.   The prevalence of radio frequency (RF) signals and their limited power needs led us to investigate using RF signals received by smartphones to recognize gestures and activities around smartphones. This thesis introduces a solution for recognizing touch-less dynamic hand gestures from the Wi-Fi Received Signal Strength (RSS) measured by the smartphone, using a recurrent neural network (RNN) based probabilistic model. Unlike other Wi-Fi based gesture recognition solutions, the one introduced in this thesis requires no change to the smartphone hardware or operating system, and performs hand gesture recognition without interfering with the normal operation of other smartphone applications.   The developed solution achieved a mean accuracy of 78% in detecting and classifying three hand gestures in an online setting involving different spatial and traffic scenarios between the smartphone and Wi-Fi access points (AP). Furthermore, the characteristics of the solution were studied, and a set of improvements has been suggested for future work.
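The thesis's RNN classifier is far larger than fits here, but the core recurrence over an RSS sequence, one hidden state updated per sample, can be sketched with a single tanh unit; the weights below are placeholders, not trained values from the thesis:

```python
import math

def rnn_forward(rss_sequence, w_x, w_h, b):
    """Minimal single-unit Elman-style RNN forward pass: fold each RSS
    sample into a recurrent hidden state via a tanh nonlinearity. A real
    model would use a vector state and a trained output layer on top."""
    h = 0.0
    for x in rss_sequence:
        h = math.tanh(w_x * x + w_h * h + b)
    return h
```

The final (or per-step) hidden state would feed a softmax layer producing per-gesture probabilities in the full probabilistic model.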
48

Personalized Dynamic Hand Gesture Recognition

Wang, Lei January 2018 (has links)
Human gestures, with their spatial-temporal variability, are difficult to recognize with a generic model or classifier applicable to everyone. To address this problem, this thesis proposes personalized dynamic gesture recognition approaches. Specifically, based on Dynamic Time Warping (DTW), a novel concept of a Subject Relation Network is introduced to describe the similarity of subjects in performing dynamic gestures, which offers a brand-new view of gesture recognition. By clustering or arranging training subjects based on the network, two personalization algorithms are proposed, for generative models and discriminative models respectively. Moreover, three basic recognition methods, DTW-based template matching, Hidden Markov Models (HMM), and Fisher Vector classification, are compared and integrated into the proposed personalized gesture recognition. The proposed approaches are evaluated on DHG14/28, a challenging dynamic hand gesture recognition dataset containing the depth images and skeleton coordinates returned by the Intel RealSense depth camera. Experimental results show that the proposed personalized algorithms can significantly improve the performance of basic generative and discriminative models and achieve a state-of-the-art accuracy of 86.2%.
49

Interactive Imaging via Hand Gesture Recognition

Jia, Jia January 2009 (has links)
With the growth of computing power, digital image processing plays an increasingly important role in the modern world, in fields including industry, medicine, communications, and spaceflight technology. As a sub-field, interactive image processing emphasizes the communication between machine and human. The basic flow is: definition of the object, analysis and training, recognition, and feedback. Generally speaking, the core issue is how to define the object of interest and track it accurately enough to complete the interaction successfully. This thesis proposes a novel dynamic simulation scheme for interactive image processing. The work consists of two main parts: hand motion detection and hand gesture recognition. During hand motion detection, movement of the hand is identified and extracted. In each detection period, the current image is compared with the previous image to generate the difference between them; if the difference exceeds a predefined threshold, a hand motion is detected. Furthermore, in some situations, changes of hand gesture must also be detected and classified. This task requires feature extraction and feature comparison across gesture types. The essential features of a hand gesture include low-level features such as colour and shape, plus the orientation histogram: each type of hand gesture has a particular representation in the orientation-histogram domain.
Because a Gaussian Mixture Model can represent an object by its essential feature elements, and Expectation-Maximization is an efficient procedure for computing the likelihood of test images under the predefined standard sample of each gesture, the similarity between a test image and the samples of each gesture type is estimated with the Expectation-Maximization algorithm on a Gaussian Mixture Model. Experiments show that the proposed method works well and accurately.
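The orientation histogram described above can be sketched as simple binning of gradient orientations; the bin count of 8 is an illustrative assumption, not a parameter from the thesis:

```python
def orientation_histogram(angles_deg, bins=8):
    """Bin gradient orientations (degrees, any real value, wrapped to
    [0, 360)) into a coarse histogram -- the per-gesture signature the
    GMM/EM comparison then operates on."""
    hist = [0] * bins
    bin_width = 360 / bins
    for a in angles_deg:
        hist[int((a % 360) // bin_width)] += 1
    return hist
```

In practice the angles come from image gradients (e.g. finite differences per pixel), and histograms of different gestures are compared via the fitted mixture model.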
50

Real-time Hand Gesture Detection and Recognition for Human Computer Interaction

Dardas, Nasser Hasan Abdel-Qader 08 November 2012 (has links)
This thesis focuses on bare-hand gesture recognition, proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interacting with an application via hand gestures. The first stage of our system detects and tracks a bare hand against a cluttered background using face subtraction, skin detection, and contour comparison. The second stage recognizes hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints of every training image with the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of each training image into a unified-dimension histogram vector (bag-of-words) after k-means clustering. This histogram is treated as the input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using my algorithm. The keypoints are then extracted from the small image containing the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture. Another hand gesture recognition system was proposed using Principal Component Analysis (PCA): the most significant eigenvectors and the weights of the training images are determined. In the testing stage, the hand posture is detected for every frame using my algorithm. The small image containing the detected hand is then projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance between the test weights and the training weights of each training image is used to recognize the hand gesture.
Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame uses a stationary bicycle as one of the main inputs for gameplay; the user controls left-right movement and shooting actions in the game with a set of hand gesture commands. In the second game, the user controls and directs a helicopter over the city with a set of hand gesture commands.
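The vector-quantization step of the first system, turning per-keypoint cluster assignments into the histogram fed to the SVM, can be sketched as follows; the L1 normalization is an illustrative choice, not a detail stated in the abstract:

```python
def bag_of_words(assignments, k):
    """Turn per-keypoint cluster IDs (the output of k-means over SIFT
    descriptors) into a k-bin histogram, L1-normalized so images with
    different keypoint counts are comparable."""
    hist = [0.0] * k
    for cluster_id in assignments:
        hist[cluster_id] += 1.0
    total = sum(hist) or 1.0  # avoid division by zero for empty input
    return [v / total for v in hist]
```

Each training image yields one such fixed-length vector regardless of how many keypoints SIFT found, which is what makes it usable as an SVM input.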
