1

Towards Full-Body Gesture Analysis and Recognition

Puranam, Muthukumar B 01 January 2005 (has links)
With computers being embedded in every walk of our life, there is an increasing demand for intuitive devices for human-computer interaction. As human beings use gestures as an important means of communication, devices based on gesture recognition will be effective for human interaction with computers. However, it is important to keep such a system as non-intrusive as possible, to reduce the limitations on interaction. Designing such a non-intrusive, intuitive, camera-based, real-time gesture recognition system has been an active area of research in the field of computer vision.

Gesture recognition invariably involves tracking body parts. Much research exists on tracking body parts such as the eyes, lips, and face; however, relatively little work has been done on full-body tracking. Full-body tracking is difficult because it is expensive to model the full body as either a 2D or 3D model and to track its movements.

In this work, we propose a monocular gesture recognition system that focuses on recognizing a set of arm movements commonly used for directing traffic, guiding aircraft landings, and communicating over long distances. This is an attempt towards implementing gesture recognition systems that require full-body tracking, e.g. an automated semaphore flag-signaling recognition system.

We have implemented a robust full-body tracking system, which forms the backbone of our gesture analyzer. The tracker uses a two-dimensional link-joint (LJ) model to represent the human body. Currently, we track the movements of the arms in a video sequence; we plan to make the system real-time in the future. We use distance transform techniques to track the movements by fitting the parameters of the LJ model in every frame of the captured video. The tracker's output is fed to a state machine that identifies the gestures made. The system comprises four sub-systems:
1. Background subtraction, using Gaussian models and median filters.
2. Full-body tracker, using L-J model APIs.
3. Quantizer, which converts the tracker's output into defined alphabets.
4. Gesture analyzer, which maps the alphabets to the action performed.
Currently, our gesture vocabulary contains gestures involving arms moving up and down, which can be used to detect semaphore flag signaling. We can also detect gestures such as clapping and waving of the arms.
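The abstract gives no implementation details for sub-system 1; as an illustrative sketch only, a per-pixel running-Gaussian background model of the kind it names might look like the following (the function names, the update rate `alpha`, and the threshold `k` are assumptions, not details from the thesis):

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05):
    """Update a per-pixel running Gaussian background model."""
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    return mean, var

def foreground_mask(mean, var, frame, k=2.5):
    """Mark pixels further than k standard deviations from the background mean."""
    return np.abs(frame - mean) > k * np.sqrt(var)

# Toy example: a static dark background with one bright region entering the scene.
mean = np.zeros((8, 8))
var = np.full((8, 8), 25.0)
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 255.0  # the "arm" pixels
mask = foreground_mask(mean, var, frame)          # classify against current model
mean, var = update_background(mean, var, frame)   # then adapt the model
```

A median filter (sub-system 1 also mentions one) would typically be applied to `mask` afterwards to suppress isolated noise pixels.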
2

Gestures in human-robot interaction

Bodiroža, Saša 16 February 2017 (has links)
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be used effectively in human-robot interaction, and in human-machine interaction generally, as a way for a robot or a machine to infer meaning. For people to use gestures intuitively and to understand robot gestures, it is necessary to define mappings between gestures and their associated meanings: a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary defines which robot gestures are deemed fitting for a particular meaning. Effective use of these vocabularies depends on gesture recognition, i.e. the classification of body motion into discrete gesture classes using pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. Building on the robot gesture vocabulary experiment, an evolutionary approach for refining robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning: it can be trained with a small number of samples and deployed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning the relation between self-motion and pointing gestures is presented.
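The abstract names dynamic time warping with one-shot learning but gives no code; the following is a rough sketch of that general technique, not the author's actual algorithm. The gesture names and joint-angle sequences are invented for illustration:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance for 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(sample, templates):
    """One-shot classification: a single stored template per gesture class."""
    return min(templates, key=lambda name: dtw_distance(sample, templates[name]))

# Hypothetical one-shot vocabulary: one reference track per gesture.
templates = {"raise": [0, 1, 2, 3], "lower": [3, 2, 1, 0]}
label = classify([0, 1, 1, 2, 3], templates)  # sample varies in speed, not shape
```

Because DTW warps the time axis, the slowed-down sample still matches its template, which is what makes a single training example per class workable.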
3

Proposta de uma metodologia para a obtenção de vocabulários de gestos intuitivos para a interação homem-robô (Proposal of a methodology for obtaining intuitive gesture vocabularies for human-robot interaction)

Santos, Clebeson Canuto dos 29 February 2016 (has links)
Development in robotics has accelerated in recent decades, driven mainly by advances in technology, especially computers. However, even with technology sufficient to create robots that can participate in people's daily lives, robotics has not become popular: most robots purchased by people still fall into the category of toys, monitoring systems, and the like. The demand for such robots stems from the fact that their repertoire of tasks is limited and predetermined, which ultimately eases the interaction between users and robots. Meanwhile, more sophisticated robots can, in most cases, only be used by specialists, because their larger task repertoire requires more complex interaction mechanisms. In other areas, such as computing, the communication interface was of fundamental importance to popularization. Likewise, friendly communication interfaces between people and robots may be the key to robotics becoming widespread in today's society. However, not every interface provides easy and efficient communication. An effective interface should be as intuitive as possible, which, according to psycholinguistic studies, can be achieved through the use of spontaneous gestures. Therefore, given the difficulty of finding a procedure for obtaining intuitive gesture vocabularies, this master's thesis proposes a methodology that, based on psycholinguistics and HCI (Human-Computer Interaction) studies, is suitable for obtaining intuitive gesture vocabularies to be used in HRI (Human-Robot Interaction). After applying this methodology, we observed that it led to results as good as those obtained by another methodology already used and accepted in HCI.
Moreover, the proposed methodology has some distinct characteristics, such as the possibility of obtaining more complex vocabularies, which can lead to more intuitive and likely more robust gesture vocabularies. In addition, submitting the obtained gestures to a recognizer yielded an average hit rate of 77.5%, which, even if not high, can be considered good, since some of the gestures are performed with both arms, considerably increasing the complexity of the recognition task. Finally, this master's thesis proposes several complementary works to be carried out in order to advance further towards the development of intuitive interfaces for human-robot interaction.
