61 |
Usability Analysis in Locomotion Interface for Human Computer Interaction System Design. Farhadi-Niaki, Farzin, 09 January 2019
In the past decade, more than ever before, new technologies have been broadly applied in various fields of interaction between human and machine. Despite many functionality studies, how such technologies should be evaluated within the context of human-computer interaction research remains unclear. This research proposes a mechanism to evaluate and predict the design of user interfaces and their interacting components. At the first level of analysis, an original concept extracts the usability results of components, such as effectiveness, efficiency, adjusted satisfaction, and overall acceptability, for comparison in the fields of interest. At the second level of analysis, another original concept defines new metrics based on the level of complexity in interactions between the input modality and the feedback of performing a task, drawing on classical solid mechanics. From these results, a set of hypotheses tests whether common satisfaction criteria can be predicted from their correlations with the components of performance, complexity, and overall acceptability. In the context of this research, three multimodal applications are implemented and experimentally tested to study the quality of interactions through the proposed hypotheses: a) full-body gestures vs. mouse/keyboard, in a Box game; b) arm/hand gestures vs. a three-dimensional haptic controller, in a Slingshot game; and c) hand/finger gestures vs. mouse/keyboard, in a Race game. Their graphical user interfaces are designed to cover a range of static/dynamic gestures, pulse/continuous touch-based controls, and discrete/analog measured tasks. These are quantified with a new metric, termed the index of complexity, which represents a concept of effort in the domain of locomotion interaction. Single and compound devices are also defined and studied to evaluate the effect of the user's attention in multi-tasking interactions. The proposed method of investigating usability is meant to help human-computer interface developers carry out acceptability-, performance-, and effort-based analyses prior to final user interface design.
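The hypothesis testing described here amounts to checking whether a satisfaction criterion correlates with a measured performance or complexity component. A minimal sketch of that kind of analysis follows; all variable names and numbers are invented for illustration, and the thesis's actual metrics and index of complexity are not reproduced.

```python
# Sketch: test whether a satisfaction score correlates with a performance
# component, as in the hypotheses described above. Data is illustrative.
import numpy as np
from scipy import stats

# One value per participant: task completion time (s) and satisfaction (1-7).
completion_time = np.array([12.1, 9.8, 15.3, 11.0, 13.7, 10.2, 14.9, 12.5])
satisfaction = np.array([4, 6, 3, 5, 4, 6, 3, 5])

r, p = stats.pearsonr(completion_time, satisfaction)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# A significant negative r would support the hypothesis that slower
# performance predicts lower satisfaction for that modality.
```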
|
62 |
Técnica para interação com mãos em superfícies planares utilizando uma câmera RGB-D / A technique for hand interaction with planar surfaces using an RGB-D camera. Weber, Henrique, January 2016
Touch-based Human-Computer Interfaces (HCIs) are a widespread technology present in tablets, smartphones, and notebooks. They are a breakthrough that increases the ease of communication while reducing the need for interfaces such as mouse and keyboard. However, the interaction surface used by these systems is usually equipped with sensors to capture the movements made by the user, making it impossible to substitute any other planar surface, such as a table, for it. On the other hand, the progress of commercial 3D depth-sensing technologies in the past five years, with Microsoft's Kinect sensor as a keystone, has increased interest in 3D hand gesture recognition using depth data. In this dissertation, we present a natural HCI for interaction with planar surfaces using a top-down RGB-D camera. Initially, the interaction plane is located in the 3D point cloud using a variation of RANSAC with temporal coherence. Off-plane objects are segmented using the watershed transform based on an energy function that combines color, depth, and confidence information. Skin color information is used to isolate the hand(s), and a novel 2D skeletonization process identifies the interacting fingers. Finally, the fingertips are tracked using the Hungarian algorithm, and a Kalman filter is applied to produce smoother trajectories. To demonstrate the usefulness of the technique, we also developed a prototype in which the user can draw on the surface using lines and sprays in a natural way.
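The first stage of this pipeline, locating the interaction plane in the point cloud, can be sketched as a plain RANSAC plane fit. The dissertation's temporal-coherence variation would additionally reuse the previous frame's plane, which is only hinted at here; this is a minimal NumPy sketch, not the author's implementation.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane to an (N, 3) point cloud with RANSAC.
    Returns (normal, d) with the plane defined by normal . p + d = 0."""
    rng = rng or np.random.default_rng()
    best_count, best_plane = 0, None
    for _ in range(n_iters):
        # Sample 3 points and form the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within `threshold` of the candidate plane.
        count = int((np.abs(points @ normal + d) < threshold).sum())
        if count > best_count:
            best_count, best_plane = count, (normal, d)
    return best_plane

# Temporal coherence (simplified): keep the previous frame's plane if it
# still explains enough points, re-running RANSAC only when it fails.
```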
|
64 |
Um estudo sobre o mapeamento de gestos do Leap Motion para a Língua Brasileira de Sinais (Libras) / A study on mapping Leap Motion gestures to the Brazilian Sign Language (Libras). MELO, Alain Rosemberg Lívio Linhares de, 06 August 2015
Interaction with computers through gestures has been explored by researchers for some time. Gestures, with regard to the comprehension process, do not suffer interference from noise in the environment and use a channel distinct from that of verbal communication. However, the main disadvantages lie in the difficulty of manipulating and interpreting the information involved in processing and mapping gestures. Moreover, the existing solutions aimed at sign languages are either proprietary or academic, and they remain limited to a small, restricted set of gestures. The gestures in these solutions are either only static or only dynamic, which can directly affect how difficult they are to use. The rapid evolution of technology has improved access to more robust image capture and processing devices. These advances allow free use of the hands, without equipment directly attached to users. In order to help narrow the communication barrier through technology, the main objective of this work is to identify and study the feasibility of a specific gesture-recognition technology, the Leap Motion, and, through a solution developed specifically for it, to propose an approach for implementing, mapping, and testing gestures, enabling validation of and interaction with the commercial solution ProDeaf. This is aimed at deaf people, specifically in the context of the Brazilian Sign Language (Libras).
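As an illustration of the kind of gesture mapping studied here, a static hand posture read from the sensor can be reduced to a feature vector and matched against stored sign templates by nearest neighbour. The sketch below is generic and hypothetical: the feature layout stands in for whatever the Leap Motion SDK actually provides, and the templates are invented, not actual Libras sign encodings.

```python
import numpy as np

# Hypothetical feature vector per posture: five finger-extension values
# (0 = curled, 1 = extended) plus a palm-orientation value. The template
# values below are illustrative only.
TEMPLATES = {
    "sign_A": np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.2]),
    "sign_B": np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.1]),
}

def classify_posture(features, max_dist=0.5):
    """Return the closest template label, or None if nothing is close enough."""
    best_label, best_dist = None, np.inf
    for label, template in TEMPLATES.items():
        dist = np.linalg.norm(features - template)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_dist else None
```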
|
65 |
Multistage neural networks for pattern recognition. Zieba, Maciej, January 2009
In this work the concept of multistage neural networks is presented. The possibility of using this type of structure for pattern recognition is discussed and examined on a problem chosen from the field. The results of the experiment are compared with other methods applicable to the problem.
|
66 |
Kinect in retro games. Hillman, Joel, January 2013
Have you ever asked yourself whether mixing new technology with older technology might have a great outcome? How about using a modern kind of input control, such as a camera with gesture recognition, in an older game? This study is meant to find out whether camera-based input is suitable in older retro games and how it differs from standard gamepad input. A modified version of Infinite Super Mario is used to receive input from a Kinect application and control the character inside the game. Questionnaires and logging are added to the application to collect research data that helps answer the research questions. The evaluation consists of a pre-survey to collect background information about the player, empirical analysis of statistics collected during game play to measure performance, and a post-survey to find out the reactions of the subjects. T-tests are used to find significant differences in the test results, and the participants are grouped based on their preference and former gamepad and Kinect experience. The results indicate that players with less gaming experience are more satisfied using the Kinect as an input method. Additionally, gesture recognition appears to add another fun factor to the game.
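The statistical comparison described here, grouping players and testing for a difference in satisfaction, can be sketched with an independent-samples t-test. The data below is invented for illustration, not the study's results.

```python
from scipy import stats

# Illustrative satisfaction ratings (1-10) for two experience groups.
low_experience = [8, 7, 9, 8, 7, 9]    # players with little gaming experience
high_experience = [6, 5, 7, 6, 5, 6]   # experienced players

t, p = stats.ttest_ind(low_experience, high_experience)
print(f"t = {t:.2f}, p = {p:.3f}")
# p < 0.05 would indicate a significant difference in Kinect satisfaction
# between the two experience groups.
```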
|
67 |
A Framework for Mobile Paper-based Computing. Sylverberg, Tomas, January 2007
Military work practice is a difficult area of research where paper-based approaches are still widespread. This thesis proposes a solution that permits the digitalization of information while work practice remains unaltered for soldiers working with maps in the field. For this purpose, a mobile interactive paper-based platform has been developed that lets users keep their current workflow. The solution is based on a system consisting of a prepared paper map, a cellular phone, a desktop computer, and a digital pen with a Bluetooth connection. The underlying idea is to let soldiers take advantage of the information a computerized system can offer while minimizing the overhead it incurs. On the one hand this implies that the solution must be lightweight; on the other, it must retain current working procedures as far as possible. The desktop computer is used to develop new paper-driven applications through the application provided in the development framework, thus allowing applications to be tailored to the changing needs of military operations. One major component in the application suite is a symbol recognizer capable of recognizing symbols based on a template that can be created in one of the applications. This component permits the digitalization of information on the battlefield by drawing on the paper map. The proposed solution has been found to be viable, but further development is needed. Furthermore, the existing hardware needs to be adapted to the military's requirements to make it usable in a real-world situation.
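A common way to build a template-based symbol recognizer of the kind described (a sketch of the general technique, not necessarily the thesis's method) is to resample each pen stroke to a fixed number of points, normalize position and scale, and pick the template with the smallest average point-to-point distance:

```python
import numpy as np

def normalize(stroke, n=32):
    """Resample a stroke (sequence of (x, y)) to n points, then centre and scale."""
    pts = np.asarray(stroke, dtype=float)
    # Resample uniformly by arc length.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, t[-1], n)
    resampled = np.column_stack([np.interp(u, t, pts[:, i]) for i in range(2)])
    resampled -= resampled.mean(axis=0)        # translate to origin
    scale = np.abs(resampled).max() or 1.0
    return resampled / scale                   # normalize size

def recognize(stroke, templates):
    """Return the name of the template symbol closest to the drawn stroke."""
    s = normalize(stroke)
    return min(templates, key=lambda name: np.mean(
        np.linalg.norm(s - normalize(templates[name]), axis=1)))
```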
|
68 |
Augmented reality based user interface for mobile applications and services. Antoniac, P. (Peter), 07 June 2005
Abstract
Traditional design of user interfaces for mobile phones is limited to the small set of interactions needed to place phone calls or write short messages. The narrow range of activities supported by current terminals keeps users from moving towards the mobile and ubiquitous computing environments of the future. Unfortunately, the next generation of user interfaces for mobile terminals seems to apply the same design patterns as commonly used for desktop computers. Whereas the desktop environment has enough resources to implement such designs, the capabilities of mobile terminals fall under constraints dictated by mobility, like size and weight. Additionally, to make mobile terminals available to everyone, users should be able to operate them with minimal or no preparation, while users of desktop computers require a certain degree of training.
This research looks into how to improve the user interface of future mobile devices by using a more human-centred design. One possible solution is to combine the Augmented Reality technique with image recognition in such a way that the user can access a "virtualized interface". Such an interface is feasible since the user of an Augmented Reality system is able to see synthetic objects overlaying the real world. With synthetic objects overlaid on the user's sight and an image recognition process, the user interacts with the system through a combination of virtual buttons and hand gestures.
The major contribution of this work is the definition of the user gestures that make human-computer interaction with such Augmented Reality based user interfaces possible. Another important contribution is the evaluation of how mobile applications and services work with this kind of user interface and whether the technology is available to support it.
|
69 |
Real-time Hand Gesture Detection and Recognition for Human Computer Interaction. Dardas, Nasser Hasan Abdel-Qader, January 2012
This thesis focuses on bare hand gesture recognition by proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system allows detecting and tracking a bare hand in a cluttered background using face subtraction, skin detection and contour comparison. The second stage allows recognizing hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control.
Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints of every training image using the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of every training image into a unified-dimension histogram vector (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using our algorithm. Then, keypoints are extracted from every small image that contains the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture.
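A condensed sketch of this training/testing pipeline using OpenCV and scikit-learn follows; the thesis's parameter choices, such as vocabulary size, are not reproduced, and the values shown are assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def bow_histogram(img_gray, kmeans, k):
    """Map an image's SIFT descriptors to a normalized k-bin bag-of-words histogram."""
    _, desc = sift.detectAndCompute(img_gray, None)
    if desc is None:
        return np.zeros(k)
    words = kmeans.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

def train(images, labels, k=100):
    """images: grayscale hand-posture crops; labels: gesture ids.
    Assumes every training image yields SIFT descriptors."""
    all_desc = np.vstack([sift.detectAndCompute(im, None)[1] for im in images])
    kmeans = KMeans(n_clusters=k, n_init=4).fit(all_desc)
    X = np.array([bow_histogram(im, kmeans, k) for im in images])
    clf = SVC(kernel="rbf").fit(X, labels)   # multi-class SVM (one-vs-one)
    return kmeans, clf

# At test time: detect and crop the hand in the frame, then
# clf.predict([bow_histogram(crop, kmeans, k)]) yields the gesture label.
```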
Another hand gesture recognition system was proposed using Principal Component Analysis (PCA). The most significant eigenvectors and the weights of the training images are determined. In the testing stage, the hand posture is detected in every frame using our algorithm. Then, the small image that contains the detected hand is projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance between the test weights and the training weights of each training image determines the recognized hand gesture.
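The PCA variant can be sketched in eigenfaces style: project each training image onto the leading eigenvectors of the training set and classify a test crop by the nearest weight vector. This is a sketch of the standard technique, not the author's exact code.

```python
import numpy as np

def train_pca(images, n_components=20):
    """images: (N, H*W) matrix of flattened, equally sized hand crops."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of Vt are the principal axes (eigenvectors of the covariance).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]             # most significant eigenvectors
    weights = centered @ basis.T          # training weights, one row per image
    return mean, basis, weights

def recognize_pca(crop, mean, basis, weights, labels):
    """Project a flattened test crop and return the nearest training label."""
    w = (crop - mean) @ basis.T           # test weights
    dists = np.linalg.norm(weights - w, axis=1)   # Euclidean distances
    return labels[int(np.argmin(dists))]
```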
Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame uses a stationary bicycle as one of the main inputs for game playing. The user can control and direct left-right movement and shooting actions in the game with a set of hand gesture commands, while in the second game the user can control and direct a helicopter over the city with a set of hand gesture commands.
|
70 |
Segmentação e reconhecimento de gestos em tempo real com câmeras e aceleração gráfica / Real-time segmentation and gesture recognition with cameras and graphical acceleration. Daniel Oliveira Dantas, 15 March 2010
Our aim in this work is to recognize gestures in real time with cameras alone, without markers, special clothes, or any other kind of sensor. The capture environment setup is simple, using just two cameras and a computer. The background must be static and must contrast with the user. The absence of markers or special clothes makes locating the limbs harder. The motivation of this thesis is to create a virtual reality environment for goalkeeper training that makes it possible to correct errors of movement, positioning, and choice of defensive technique, but the technique can be applied to any activity that involves gestures or body movements. Gesture recognition starts with background subtraction. In the foreground, we locate the most prominent regions as candidates for body extremities, that is, hands, feet, and head. Each extremity found receives a label indicating the body part it may represent. To classify the user's pose, the vector with the coordinates of the extremities is compared to keyposes and the best match is selected. The final step is temporal classification, that is, gesture recognition. The developed technique is robust, working well even when the system is trained with one user and applied to another user's data.
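Once the extremities have been located and labelled, the pose-matching step reduces to comparing the extremity-coordinate vector against stored keyposes, and the gesture is the sequence of recognized poses over time. A schematic version of the matching step follows; the keypose values are invented and normalization details are omitted.

```python
import numpy as np

# Each pose is a vector of labelled extremity coordinates, e.g.
# [head_x, head_y, lhand_x, lhand_y, rhand_x, rhand_y], normalized for
# body position and scale. The keyposes below are illustrative only.
KEYPOSES = {
    "arms_up":   np.array([0.0, 1.0, -0.4, 0.9, 0.4, 0.9]),
    "dive_left": np.array([-0.5, 0.6, -0.9, 0.5, -0.1, 0.7]),
}

def classify_pose(extremities):
    """Return the keypose whose extremity vector is closest (Euclidean)."""
    return min(KEYPOSES, key=lambda k: np.linalg.norm(KEYPOSES[k] - extremities))

# Temporal classification then matches the resulting stream of pose labels
# against gesture patterns, e.g. "arms_up" followed by "dive_left".
```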
|