  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Evaluating Appropriateness of EMG and Flex Sensors for Classifying Hand Gestures

Akumalla, Sarath Chandra 05 1900 (has links)
Hand and arm gestures are a good way to communicate when you do not want to be heard: quieter and often more reliable than whispering into a radio mic. In recent years, hand gesture identification has become a major active area of research due to its use in various applications. The objective of this work is to develop an integrated sensor system that will enable tactical squads and SWAT teams to communicate in the absence of a line of sight or in the presence of obstacles. The gesture set involved in this work is the standardized set of hand signals for close-range engagement operations used by military and SWAT teams, broadly divided into finger movements and arm movements. The core components of the integrated sensor system are surface EMG sensors, flex sensors, and accelerometers. Surface EMG is the electrical activity produced by muscle contractions, measured by sensors attached directly to the skin. Bend sensors use a piezoresistive material to detect bending; the sensor output is determined both by the angle between the ends of the sensor and by the flex radius. Accelerometers sense dynamic acceleration and inclination in three directions simultaneously. The EMG sensors are placed on the upper and lower forearm and assist in the classification of finger and wrist movements. The bend sensors are mounted on a glove worn on the hand; they are located over the first knuckle of each finger and can determine whether the finger is bent. An accelerometer attached to the glove at the base of the wrist determines the speed and direction of arm movement. A support vector machine (SVM) is used to classify the gestures.
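The pipeline this abstract describes, fusing EMG, flex-sensor, and accelerometer readings into a feature vector and classifying it with an SVM, can be sketched as follows. The feature layout (4 EMG channels, 5 flex values, 3 accelerometer axes), the gesture labels, and the synthetic data are illustrative assumptions, not the thesis's actual sensor configuration.

```python
# Hedged sketch: SVM classification of hand gestures from fused sensor features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synth_samples(center, n=40):
    """Generate noisy 12-D feature vectors (4 EMG + 5 flex + 3 accel) around a center."""
    return center + 0.1 * rng.standard_normal((n, 12))

# Two illustrative gestures with distinct sensor signatures.
fist = synth_samples(np.array([0.9] * 4 + [1.0] * 5 + [0.0] * 3))   # high EMG, all fingers bent
point = synth_samples(np.array([0.5] * 4 + [1.0, 0.0, 1.0, 1.0, 1.0] + [0.0] * 3))  # index extended

X = np.vstack([fist, point])
y = np.array([0] * len(fist) + [1] * len(point))  # 0 = fist, 1 = point

clf = SVC(kernel="rbf").fit(X, y)
preds = clf.predict(X)  # classify the (training) samples as a sanity check
```

In practice the features would come from windowed EMG signal statistics rather than raw samples, but the classifier interface is the same.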
2

Hand Gesture based Telemedicine enabled by Mobile VR

Vulgari, Sofia Kiriaki January 2019 (has links)
Virtual Reality (VR) is a highly evolving domain and is used in an increasing number of areas in today's society. Among the technologies associated with VR, and especially mobile VR, are hand tracking and hand gesture recognition. Telemedicine is one of the fields where VR is starting to thrive, and so the concept of adding the use of hand gestures came to be, in order to explore the possibilities that can come from it. This research is conducted with the development of a prototype application that uses some of the most emerging technologies. Manomotion's hand tracking and hand gesture recognition algorithms, and Photon's servers and developer kit, which makes multi-user applications achievable, allowed the conceptual idea of the prototype to become reality. In order to test its usability and how potential users perceive it, a user study with 24 participants was made, 8 of whom were either studying or working in the medical field. Additional expert meetings and observations from the user study also contributed to findings that helped show how hand gestures can affect a doctor consultation in Telemedicine. Findings showed that the participants thought of the proposed system as a less costly and time-saving solution, and that they felt immersed in the VR. The hand gestures were accepted and understood. The participants did not have difficulties learning or executing them, and had control of the prototype environment. In addition, the data showed that participants considered it to be usable in the medical field in the future.
3

Detecção de gestos manuais utilizando câmeras de profundidade / Detection of hand gestures using depth cameras

Prado Neto, Elias Ximenes do 28 May 2014 (has links)
This work describes a computer-vision-based system for recognizing distinct hand poses and for discriminating and tracking the parts of the hand. The software's priority requirements were effectiveness and efficiency at these tasks, so as to enable real-time control of computer systems through hand gestures. Portability to other devices and computational platforms, and the possibility of extending the initial set of poses, were also important conditions for its functionality. These characteristics tend to promote the adoption of the proposed interface, making it applicable to diverse purposes and situations, thereby contributing to the spread of this kind of technology and to the development of the fields of gestural interfaces and computer vision. Several methods were developed and researched based on a feature-extraction methodology, using image processing, video analysis, and computer vision algorithms, in addition to machine learning software for image classification. A depth camera was selected as the capture device in order to obtain auxiliary information for the associated processes, thus reducing the computational costs involved and enabling the manipulation of electronic systems in three-dimensional virtual spaces. With this device, volunteers were recorded performing the proposed hand poses, in order to validate the developed algorithms and to allow training of the classifiers used. This recording was necessary because no available databases were found containing images with information suitable for the methods researched.
Finally, a set of methods was developed that achieves these goals; combined and adapted to different devices and tasks, they cover all the requirements identified initially. Besides the implemented system, the publication of the hand-pose image database produced is also a contribution to the fields of knowledge associated with this work. Since the research carried out indicates that this database is the first available dataset compatible with several computer-vision methods for hand-gesture detection, it is expected to assist the development of software with similar purposes and to enable proper performance comparisons among them.
5

VR Gaming - Hands On : The use and effects of bare hand gestures as an interaction method in multiplayer Virtual Reality Games

Georgiadis, Abraham January 2017 (has links)
The field of virtual reality (VR) is receiving increasing attention from the scientific community, and advertisements portray it as the user interface (UI) of the future. This is a fair statement, since uses of VR that once existed only in fiction films and books are now widely available to the public in many forms and settings. One of the most interesting outcomes of this technological evolution is that VR can now be experienced through a mobile phone and some inexpensive extras, typically in the form of a headset. Attaching the phone's screen to the headset creates a form of head-mounted display (HMD) that can immerse the user in a virtual environment (VE). The argument here is that even if the means of accessing VR are cheap, the experience need not be. On the contrary, low entry requirements combined with a high-quality experience are the basis for the medium's success and further adoption by users. More specifically, the capability of utilizing a three-dimensional (3D) space should not limit the medium to just that; this space should be used to offer immersive environments that make users feel as if they are there.

There are many factors that contribute to that result, and significant progress has been made on some, such as the quality of the screen and other hardware that immerses the user in the virtual scenery. However, little progress has been made on the conceptual means that let the user experience this VE more fully. Most VR applications so far are designed for single-user sessions. This isolates the user from any kind of community, which reinforces the stigma of VR as a solitary experience. Another issue is the interaction method available to users for interacting with the VE.
The use of buttons, as in most available headsets, is a counterintuitive way for a person to interact with an environment that aims to feel real. Technological advances in image processing have produced many new methods of interaction and multimodal manipulation within VEs, and their effects on the user experience (UX) when used as an interaction method are worth exploring.

For these reasons, this thesis used VR games as a setting to study how UX can be enhanced from its current state by introducing a bare-hand gesture interaction method and by expanding the VR setting to host two users in a shared VE. Two individual studies were conducted in which user feedback was collected to describe the effects of this approach both qualitatively and quantitatively. As the results indicate, by applying gesture analysis on a headset equipped with a smartphone, it is possible to offer a natural and engaging solution for VR interaction capable of rich UXs while maintaining a low entry barrier for end users. Finally, the addition of another player significantly affected the experience, influencing the emotional state of the participants in the game and reinforcing their feeling of presence within the VE.
6

Gaining the Upper Hand : An Investigation into Real-time Communication of the Vamp and Lead-In through Non-Expressive Gestures and Preparatory Beats with a focus on Opera and Musical Theatre

Hermon, Andrew Neil January 2021 (has links)
This thesis discusses conducting technique in relation to real-time communication of vamps, safety bars, and lead-ins through left-hand gestures within the context of opera and musical theatre. The research aims to develop a codified set of gestures suitable for the left hand. It explores and analyses left-hand gestures that are commonly used but not yet codified, and the role the preparatory beat plays in communicating the vamp and lead-in. This research also aims to establish a framework for conductors to create their own left-hand gestures and to better understand the musical structures used in opera and musical theatre. The new gestures were developed through research into visual and body languages (such as sign languages) as well as body movement (soundpainting). The gestures were tested through one artistic project, with three sections, then analysed using methods of qualitative inquiry. The paper is narrative in structure, with each topic guiding the reader to the next. The introduction sets up the main idea of the thesis, and each section follows from these elements. The research questions and aims were formed in response to the available literature; thus, they appear after the theory chapter.
7

Artificial Intelligence Based Real-Time Processing of Sterile Preparations Compounding

Rehman Faridi, Shah Mohammad Hamoodur January 2020 (has links)
No description available.
8

Identifying Similarities and Differences in a Human – Human Interaction versus a Human – Robot Interaction to Support Modelling Service Robots

Sam, Farrah January 2009 (has links)
With the ongoing progress of research in robotics, computer vision, and artificial intelligence, robots are becoming more complex, their functionality is increasing, and their ability to solve particular problems is becoming more efficient. For these robots to share our lives and environment, they should be able to move autonomously and be operated easily by users. The main focus of this thesis is on the differences and similarities between human-human and human-robot interaction in an office environment. Experimental methods are used to identify these differences and similarities and to arrive at an understanding of how users perceive robots and their abilities, in order to support the development of interactive service robots that can navigate and perform various tasks in a real-life environment. A user study was conducted in which 14 subjects were observed while presenting an office environment first to a mobile robot and then to a person. The results of this study were that users used the same verbal phrases, hand gestures, gaze, etc. to present the environment to the robot as to a person, but they were more emphatic when identifying the different items to the robot. The subjects took less time to show a person around than the robot.
9

[en] A COMPUTER VISION APPLICATION FOR HAND-GESTURE HUMAN-COMPUTER INTERACTION / [pt] UMA APLICAÇÃO DE VISÃO COMPUTACIONAL QUE UTILIZA GESTOS DA MÃO PARA INTERAGIR COM O COMPUTADOR

MICHEL ALAIN QUINTANA TRUYENQUE 15 June 2005 (has links)
Computer vision can be used to capture gestures and to create more intuitive and faster devices for interacting with computers. Current commercial gesture-based interaction devices use expensive equipment (tracking devices, gloves, special cameras, etc.) and special environments, which makes their dissemination to the general public difficult. This work presents a study on the feasibility of using web cameras as interaction devices based on hand gestures. In our study, we assume that the hand is bare, that is, it wears no mechanical, magnetic, or optical device. We also assume that the environment where the interaction takes place has the characteristics of a normal workplace, that is, without special lights or backgrounds.
In order to evaluate the feasibility of such an interaction mechanism, we developed some prototypes of interaction devices. In these prototypes, hand gestures and the positions of the fingers were used to simulate some mouse and keyboard functions, such as selecting states and objects and defining directions and positions. Based on these prototypes, we present some conclusions and suggestions for future work.
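One piece of the mouse simulation the abstract mentions, turning a hand position detected in the webcam frame into a cursor position on screen, can be sketched as below. The frame and screen dimensions and the horizontal mirroring are illustrative assumptions; the actual prototypes may map coordinates differently.

```python
# Hedged sketch: mapping a detected hand position to screen cursor coordinates.
def hand_to_cursor(hx, hy, frame_w=640, frame_h=480, screen_w=1920, screen_h=1080):
    """Map pixel coordinates in the camera frame to screen coordinates,
    mirroring horizontally so that moving the hand right moves the cursor right."""
    x = (1.0 - hx / frame_w) * screen_w   # mirror: the webcam sees the hand flipped
    y = (hy / frame_h) * screen_h
    # Clamp to the screen in case the detector reports out-of-frame values.
    return (min(max(x, 0), screen_w - 1), min(max(y, 0), screen_h - 1))

# A hand at the centre of the frame lands at the centre of the screen.
cursor = hand_to_cursor(320, 240)
```

A real prototype would smooth successive positions (e.g. with an exponential moving average) to avoid cursor jitter from detection noise.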
10

Rozpoznání gest ruky v obrazu / Hand gesticulation recognition in image

Mráz, Stanislav January 2011 (has links)
This master's thesis deals with the recognition of simple static gestures for computer control. The first part of the work is devoted to a theoretical review of methods used for hand segmentation in images, followed by a description of methods for hand gesture classification. The second part of the work is devoted to the choice of a suitable method for hand segmentation based on skin colour and movement. Methods for hand gesture classification are then described, and the last part of the work describes the proposed system.
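The skin-colour segmentation the abstract selects is commonly done by thresholding in the YCbCr colour space, where skin tones cluster in a compact chrominance range. A minimal sketch follows; the threshold values are common literature defaults, not necessarily those chosen in the thesis, and the sample pixels are invented for illustration.

```python
# Hedged sketch: skin-colour segmentation by Cb/Cr thresholding.
import numpy as np

def rgb_to_cbcr(rgb):
    """Convert an (..., 3) RGB array with values in [0, 255] to (Cb, Cr) channels
    using the BT.601 conversion coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_mask(rgb):
    """Boolean mask of pixels whose chrominance falls in a typical skin range."""
    cb, cr = rgb_to_cbcr(rgb.astype(float))
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# One skin-like pixel and one saturated blue pixel.
pixels = np.array([[[224, 172, 138], [0, 0, 255]]])
mask = skin_mask(pixels)
```

In a full system this mask would be combined with motion cues, as the thesis proposes, and cleaned with morphological operations before classification.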
