21

3d Hand Tracking In Video Sequences

Tokatli, Aykut 01 September 2005 (has links) (PDF)
The use of hand gestures provides an attractive alternative to cumbersome interface devices such as the keyboard, mouse, and joystick. Hand tracking has great potential as a tool for better human-computer interaction, enabling communication in a more natural and articulate way. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures and hand tracking. In this study, a real-time hand tracking system is developed. It is primarily image-based tracking that works on 2D image information, with coloured markers used to separate and identify the finger parts. To obtain 3D tracking, a stereo vision approach is used in which the third dimension is recovered from depth information. To visualise the results in 3D, a 3D hand model is developed with Java 3D as the rendering environment. Tracking is tested on two different cameras: an inexpensive USB web camera and a Sony FCB-IX47AP camera connected to a Matrox Meteor frame grabber on a standard Intel Pentium-based personal computer. The code is written in Borland C++ Builder 6.0, using the Intel Image Processing and Open Source Computer Vision (OpenCV) libraries. For both cameras, tracking is found to be robust and efficient, achieving hand tracking at roughly 8 fps. Although the current progress is encouraging, further theoretical and computational advances are needed for the highly complex task of hand tracking.
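A minimal sketch of the pipeline described above, written in Python/OpenCV rather than the Borland C++ Builder implementation used in the thesis; the marker colour range, focal length, and stereo baseline below are placeholder assumptions, not values from the original system.

```python
# Illustrative sketch: colour-marker tracking with stereo depth (OpenCV).
# HSV range, focal length and baseline are placeholder assumptions.
import cv2
import numpy as np

LOWER_MARKER = np.array([100, 120, 80])   # hypothetical HSV range for a blue marker
UPPER_MARKER = np.array([130, 255, 255])
FOCAL_PX = 700.0                          # assumed focal length in pixels
BASELINE_M = 0.06                         # assumed stereo baseline in metres

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def marker_centroid(bgr):
    """Return the (x, y) centroid of the largest marker-coloured blob, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_MARKER, UPPER_MARKER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def track_3d(left_bgr, right_bgr):
    """Estimate a rough 3D marker position from a rectified stereo pair."""
    pt = marker_centroid(left_bgr)
    if pt is None:
        return None
    gray_l = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0
    d = disparity[pt[1], pt[0]]
    if d <= 0:
        return None
    z = FOCAL_PX * BASELINE_M / d         # depth from disparity
    return pt[0], pt[1], z
```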
22

Optické metody rozeznání gest / Optical methods of gesture recognition

Netopil, Jan January 2016 (has links)
This thesis deals with optical devices and image-processing methods for recognizing hand gestures. The types of gestures, possible applications, contact-based devices, and vision-based devices are described first. Next, a review of hand detection, feature extraction, and gesture classification is provided. The proposed gesture recognition system consists of a FLIR A655sc infrared camera, a FLIR Lepton infrared module, a Logitech S7500 webcam, a method for hand gesture analysis, and a database of gestures for classification. For each device, gesture recognition is evaluated in terms of speed and accuracy in different environments. The proposed method was implemented in MATLAB.
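The detect-extract-classify pipeline outlined above could look roughly as follows. This sketch is in Python/OpenCV rather than the thesis's MATLAB implementation, and the choice of Hu moments with a nearest-neighbour match against a gesture database is an illustrative assumption, not necessarily the method used.

```python
# Sketch of the detect -> extract features -> classify stages in Python/OpenCV.
# Hu moments and nearest-neighbour matching are illustrative choices only.
import cv2
import numpy as np

def hand_features(gray):
    """Threshold a grayscale (e.g. thermal) image and return log-scaled Hu moments of the largest blob."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # log scale keeps magnitudes comparable

def classify(features, database):
    """Nearest-neighbour match against a {label: feature_vector} gesture database."""
    return min(database, key=lambda label: np.linalg.norm(features - database[label]))
```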
23

Using Leap Motion for the Interactive Analysis of Multivariate Networks

Vendruscolo, Marcello Pietro, Lif, Andreas January 2020 (has links)
This work is an interdisciplinary study involving mainly the fields of information visualisation and human-computer interaction. The advancement of technology has expanded the ways in which humans interact with machines, which has benefited both industry and several fields of science. However, scientists and practitioners in the information visualisation domain still work mostly with classical setups consisting of a keyboard and a standard computer mouse. This project investigates how a shift in the human-computer interaction aspect of visualisation software can affect the accomplishment of tasks and the overall user experience when analysing two-dimensionally displayed multivariate networks. Such an investigation is relevant because complex network structures are increasingly used as essential tools to address challenges that directly affect individuals and societies, for example in medicine or the social sciences; improving the usability of visualisation software could allow more of these challenges to be answered in less time or with greater precision. To answer this question, a web application was developed that enables users to analyse multivariate networks through interfaces based on both hand gesture recognition and the mouse. In addition, a set of gesture designs was developed for several tasks performed when visually analysing networks. An expert in the field of human-computer interaction was then invited to review the proposed hand gestures and report his overall user experience of using them. The results show that the expert reported, overall, a similar user experience with the hand gestures and with the mouse. Moreover, the interpretation of the results indicates that the accuracy offered by gestures has to be taken carefully into account when designing gestures for selection tasks, particularly when the selection targets are small objects. Finally, our analysis points out that the manner in which the software's graphical user interface is presented also affects the usability of gestures, and that both factors have to be designed accordingly.
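As a rough illustration of how recognized gestures might drive network-analysis tasks, and of compensating for the lower pointing accuracy of gestures during selection of small targets, consider the sketch below; the gesture names and the `view`/node API are hypothetical, not the application's actual interface.

```python
# Hypothetical sketch: dispatching recognized gestures to network-view actions,
# with an enlarged hit radius for selection to offset gesture pointing inaccuracy.
import math

def nearest_node(nodes, pos, radius=30.0):
    """Return the node closest to pos within `radius` pixels, or None.
    A generous radius compensates for the coarser pointing accuracy of hand gestures."""
    best, best_d = None, radius
    for node in nodes:
        d = math.hypot(node["x"] - pos[0], node["y"] - pos[1])
        if d < best_d:
            best, best_d = node, d
    return best

def handle_gesture(view, gesture, pos):
    # Gesture names are illustrative, not the study's actual gesture set.
    if gesture == "pinch":
        node = nearest_node(view.nodes, pos)
        if node is not None:
            view.select(node)
    elif gesture == "open_palm":
        view.pan_to(pos)
    elif gesture == "circle":
        view.zoom(1.2)
```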
24

Combining Eye Tracking and Gestures to Interact with a Computer System

Rådell, Dennis January 2016 (has links)
Eye tracking and gestures are relatively new input methods that are changing the way humans interact with computers. Gestures can be used for games or for controlling a computer through an interface. Eye tracking is another way of interacting with computers, often in combination with other inputs such as a mouse or touch pad. Gestures and eye tracking have been used in commercially available products, but they are seldom combined to create a multimodal interaction. This thesis presents a prototype that combines eye tracking with gestures to interact with a computer. To accomplish this, the report investigates different methods of recognizing hand gestures. The aim is to combine the technologies in such a way that the gestures can be simple, with the location of the user's gaze deciding what the gesture does. The report concludes by presenting a final prototype in which gestures are combined with eye tracking to interact with a computer. The final prototype uses an IR camera together with an eye tracker and is evaluated with regard to learnability, usefulness, and intuitiveness. The evaluation shows that usefulness is low, but learnability and intuitiveness are quite high.
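A minimal sketch of the gaze-plus-gesture combination described above, where the gaze point chooses the on-screen target and a simple gesture decides the action taken on it; the widget lookup and gesture names are hypothetical, not the prototype's actual API.

```python
# Minimal sketch of gaze + gesture fusion: the gaze point selects the target,
# a simple gesture decides the action. Widget API and gesture names are hypothetical.
def on_gesture(screen, gaze_point, gesture):
    widget = screen.widget_at(gaze_point)   # hypothetical hit-test on the UI element under the gaze
    if widget is None:
        return
    if gesture == "close_fist":
        widget.activate()                   # the same gesture acts differently per gaze target
    elif gesture == "swipe_up":
        widget.scroll(-1)
    elif gesture == "swipe_down":
        widget.scroll(+1)
```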
25

Reconhecimento de gestos usando segmentação de imagens dinâmicas de mãos baseada no modelo de mistura de gaussianas e cor de pele / Gesture recognizing using segmentation of dynamic hand image based on the mixture of Gaussians model and skin color

Ribeiro, Hebert Luchetti 01 September 2006 (has links)
The purpose of this work is to develop a methodology capable of recognizing hand gestures from dynamic images in order to interact with systems. After image capture, segmentation takes place: pixels belonging to the hands are separated from the background by background subtraction and skin-colour filtering. Image preprocessing can be applied before edge detection. The recognition algorithm uses contours only, so it is fast enough for real-time operation. The largest blob in the segmented image is considered the hand region. The detected regions are analysed to determine the position and orientation of the hand in each frame. The position and other attributes of the hands are tracked frame by frame to distinguish hand motion from the background and from other moving objects, and to extract motion information for the recognition of dynamic gestures. Based on the collected positions, motion and posture cues are computed to recognize a meaningful gesture.
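The segmentation stage described above, combining a mixture-of-Gaussians background model with a skin-colour filter and keeping the largest blob as the hand, might be sketched as follows in Python/OpenCV; the YCrCb skin thresholds and background-model parameters are common heuristic values, not those of the thesis.

```python
# Sketch: Gaussian-mixture background subtraction combined with a skin-colour
# mask, keeping the largest foreground blob as the hand region.
import cv2
import numpy as np

SKIN_LOW = np.array([0, 133, 77], dtype=np.uint8)     # heuristic YCrCb skin range
SKIN_HIGH = np.array([255, 173, 127], dtype=np.uint8)
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def hand_contour(frame_bgr):
    """Return the contour of the largest skin-coloured foreground blob, or None."""
    fg = bg_model.apply(frame_bgr)                            # mixture-of-Gaussians foreground mask
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)            # skin-colour filter
    mask = cv2.bitwise_and(fg, skin)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)                 # largest blob = hand region
```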
27

Human computer interface based on hand gesture recognition

Bernard, Arnaud Jean Marc 24 August 2010 (has links)
With the improvement of multimedia technologies such as broadband-enabled HDTV, video on demand, and internet TV, the computer and the TV are merging into a single device. Moreover, these technologies, as well as DVD and Blu-ray, can provide menu navigation and interactive content. The growing interest in video conferencing has led to the integration of the webcam into devices such as laptops, cell phones, and even TV sets. Our approach is to use an embedded webcam directly to remotely control a TV set with hand gestures: using specific gestures, a user can select a TV channel, adjust the volume, or browse videos from an online streaming server through a dedicated interface. This approach raises several challenges. The first is the use of a single, simple webcam, which leads to a vision-based system in which we must detect the hand and identify its gesture or trajectory. A TV set is usually installed in a living room, which implies constraints such as a potentially moving background and changing luminance. These issues are discussed along with the methods developed to resolve them. Video browsing is one example of the use of gesture recognition; to illustrate another application, we developed a simple game controlled by hand gestures. Finally, since the emergence of 3D TVs is enabling 3D video conferencing, we also consider the use of a stereo camera to recognize hand gestures.
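One way trajectory-based commands such as channel switching could be derived from the tracked hand position is sketched below; the displacement thresholds and command names are placeholders, not the thesis's actual gesture vocabulary.

```python
# Hypothetical sketch: turning a tracked hand trajectory into TV commands.
# Displacement thresholds and command names are placeholders.
from collections import deque

HISTORY = deque(maxlen=15)   # recent hand centroids (x, y), newest last

def update(centroid):
    """Feed the latest hand centroid; return a command string or None."""
    HISTORY.append(centroid)
    if len(HISTORY) < HISTORY.maxlen:
        return None
    dx = HISTORY[-1][0] - HISTORY[0][0]
    dy = HISTORY[-1][1] - HISTORY[0][1]
    if abs(dx) > 120 and abs(dx) > 2 * abs(dy):      # mostly horizontal sweep
        HISTORY.clear()
        return "next_channel" if dx > 0 else "previous_channel"
    if abs(dy) > 120 and abs(dy) > 2 * abs(dx):      # mostly vertical sweep
        HISTORY.clear()
        return "volume_down" if dy > 0 else "volume_up"
    return None
```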
