  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Evaluating Appropriateness of EMG and Flex Sensors for Classifying Hand Gestures

Akumalla, Sarath Chandra 05 1900 (has links)
Hand and arm gestures are a great way of communicating when you don't want to be heard, quieter and often more reliable than whispering into a radio mike. In recent years, hand gesture identification has become a major active area of research due to its use in various applications. The objective of my work is to develop an integrated sensor system which will enable tactical squads and SWAT teams to communicate in the absence of a line of sight or in the presence of obstacles. The gesture set involved in this work is the standardized set of hand signals for close range engagement operations used by military and SWAT teams, broadly divided into finger movements and arm movements. The core components of the integrated sensor system are surface EMG sensors, flex sensors, and accelerometers. Surface EMG is the electrical activity produced by muscle contractions, measured by sensors attached directly to the skin. Bend sensors use a piezoresistive material to detect bending; the sensor output is determined by both the angle between the ends of the sensor and the flex radius. Accelerometers sense dynamic acceleration and inclination in three directions simultaneously. EMG sensors are placed on the upper and lower forearm and assist in the classification of finger and wrist movements. Bend sensors are mounted on a glove worn on the hand, located over the first knuckle of each finger, and can determine whether the finger is bent or not. An accelerometer is attached to the glove at the base of the wrist and determines the speed and direction of arm movement. A support vector machine (SVM) is used to classify the gestures.
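The classification stage described above can be sketched as an SVM over a feature vector that concatenates the three sensor modalities. The feature layout (4 EMG channels, 5 flex values, 3 accelerometer axes), the synthetic data, and the three gesture classes below are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_sample(gesture):
    # Hypothetical layout: 4 EMG channels, 5 flex sensors (one per finger),
    # 3 accelerometer axes, concatenated into one feature vector.
    emg = rng.normal(loc=gesture, scale=0.3, size=4)
    flex = rng.normal(loc=gesture * 0.5, scale=0.2, size=5)
    acc = rng.normal(loc=0.0, scale=0.1, size=3)
    return np.concatenate([emg, flex, acc])

labels = np.repeat([0, 1, 2], 50)           # three synthetic gesture classes
X = np.array([make_sample(g) for g in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)     # SVM classifier, as in the thesis
print(clf.score(X_te, y_te))                # accuracy on held-out samples
```

Because each gesture class shifts the EMG and flex readings by a consistent offset here, the classes are well separated and the SVM scores highly; real sensor data would be far noisier.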
2

Gestures and Groups : An interaction analysis of hand gestures during in-group and out-group speech / Gester och grupper : En interaktionsanalys av handgester under tal om in-grupper och ut-grupper

Lindblad, Patricia January 2019 (has links)
The purpose of this study is to examine how body language, specifically hand gestures, correlates to in-group and out-group notions. To approach the issue, the hand gestures of two politicians are compared with a focus on how their gestures relate to in-group and out-group notions in their speech. Interaction analysis is applied, and the gestures of each politician are categorised and summarised to be analysed. The analysis reveals that there is a distinct difference between the two politicians in what gestures they use overall, and consequently also differences in their gestures when discussing in-groups versus out-groups. However, the main takeaway from the discussion is that one of the politicians directs their gestures towards the camera, whereas the other politician mostly directs their gestures at the live audience.
3

Hand Gesture based Telemedicine enabled by Mobile VR

Vulgari, Sofia Kiriaki January 2019 (has links)
Virtual Reality (VR) is a highly evolving domain and is used in an increasing number of areas in today's society. Among the technologies associated with VR, and especially mobile VR, are hand tracking and hand gesture recognition. Telemedicine is one of the fields where VR is starting to thrive, and so the concept of adding the use of hand gestures came to be, in order to explore the possibilities that can come from it. This research is conducted with the development of a prototype application that uses some of the most emerging technologies. Manomotion's hand tracking and hand gesture recognition algorithms, and Photon's servers and developer kit, which make multi-user applications achievable, allowed the conceptual idea of the prototype to become reality. In order to test its usability and how potential users perceive it, a user study with 24 participants was conducted, 8 of whom were either studying or working in the medical field. Additional expert meetings and observations from the user study also contributed to findings that helped show how hand gestures can affect a doctor consultation in Telemedicine. Findings showed that the participants thought of the proposed system as a less costly and time-saving solution, and that they felt immersed in the VR. The hand gestures were accepted and understood. The participants did not have difficulties in learning or executing them, and had control of the prototype environment. In addition, the data showed that participants considered it to be usable in the medical field in the future.
4

Detecção de gestos manuais utilizando câmeras de profundidade / Detection of hand gestures using depth cameras

Prado Neto, Elias Ximenes do 28 May 2014 (has links)
This work describes the design of a computer-vision-based system for recognizing distinct hand poses and for discriminating and tracking the parts of the hand. Priority requirements for this software were effectiveness and efficiency at these tasks, so as to enable real-time control of computer systems through hand gestures. Portability to other devices and computing platforms, and the possibility of extending the initial set of poses, were also important conditions for its functionality. These characteristics tend to promote the popularization of the proposed interface, enabling its application to diverse purposes and situations, and thereby contributing to the spread of this kind of technology and to the development of the fields of gestural interfaces and computer vision. Several methods were developed and investigated based on a feature-extraction methodology, using image processing, video analysis, and computer vision algorithms, in addition to machine learning software for image classification. A depth camera was selected as the capture device, in order to obtain auxiliary information for the various associated processes, thus reducing the inherent computational costs and enabling the manipulation of electronic systems in three-dimensional virtual spaces. Volunteers were recorded with this device performing the proposed hand poses, in order to validate the developed algorithms and to allow training of the classifiers used. This recording was necessary because no available databases were found containing images with information suitable for the methods investigated.
Finally, a set of methods capable of achieving these goals was developed; by combining them to suit different devices and tasks, all the requirements identified initially were covered. Besides the implemented system, the publication of the hand-pose image database produced is also a contribution to the fields of knowledge associated with this work. Since the research carried out indicates that this database is the first available dataset compatible with several computer-vision methods for hand gesture detection, it is expected to assist the development of software with similar purposes and to enable proper comparison of their performance.
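The depth-camera pipeline this abstract describes can be sketched in two steps: segment the hand by keeping pixels in an expected depth range, then extract simple shape features for a pose classifier. The depth thresholds and the toy feature vector below are hypothetical choices, not the thesis's actual method.

```python
import numpy as np

def segment_hand(depth, near=400, far=800):
    """Keep pixels whose depth (in mm) falls in the expected hand range."""
    return (depth > near) & (depth < far)

def shape_features(mask):
    """Toy feature vector: pixel area plus bounding-box width and height."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(3)
    return np.array([mask.sum(),
                     xs.max() - xs.min() + 1,
                     ys.max() - ys.min() + 1], dtype=float)

# Synthetic 64x64 depth frame: background at 2000 mm,
# a hand-like patch at 600 mm.
depth = np.full((64, 64), 2000.0)
depth[20:40, 25:35] = 600.0

mask = segment_hand(depth)
feats = shape_features(mask)
print(feats)   # [area, bbox width, bbox height] -> [200. 10. 20.]
```

The appeal of the depth channel, as the abstract notes, is exactly this: segmentation becomes a cheap threshold rather than an expensive color- or contour-based search, which keeps the per-frame cost low enough for real-time control.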
6

VR Gaming - Hands On : The use and effects of bare hand gestures as an interaction method in multiplayer Virtual Reality Games

Georgiadis, Abraham January 2017 (has links)
The field of virtual reality (VR) is getting increasing attention from the scientific community and it is being portrayed by advertisements as the user interface (UI) of the future. This is a fair statement, since the prior uses of VR that used to exist only in fiction movies and books are now widely available in many forms and settings to the public. One of the most interesting outcomes of this technological evolution is that VR can now be experienced through the use of a mobile phone and the addition of some inexpensive means, typically in the form of a headset. The phone's screen attached to the headset creates a form of Head Mounted Display (HMD) which can be utilized for the user to be immersed within a virtual environment (VE). The argument here is that even if the means to get access to VR are cheap, this should not be the case with the experience as well. On the contrary, the low entry requirements in combination with a high-quality experience are the basis for the medium's success and further adoption by users. More specifically, the capability of utilizing a three-dimensional (3D) space should not limit the medium's use to just that; instead, this space should be used to offer immersive environments which make the user feel as if they are there.

There are many factors that contribute to that result, and significant progress has been made on some, such as the quality of the screen or other hardware parts that allow the user to get immersed into the virtual scenery. However, little progress has been made towards the conceptual means that allow the user to better experience this VE. Most VR applications so far are specifically designed for a single-user session. This creates an isolation of the user from any other type of community, which further increases the stigma of VR being a solitary experience. Another issue is the interaction method that is available to users in order to interact with the VE.
The use of buttons in most of the available headsets is a counter-intuitive method for a person to interact with an environment that wants to be called real. The technological advancements in the field of image processing have resulted in many new methods of interaction and multimodal manipulation within VEs, and it is worth exploring their effects on the user experience (UX) when used as an interaction method.

For these reasons, this thesis used the case of VR games as a setting to study how UX can be enhanced from its current state by introducing a bare-hand gesture interaction method and expanding the VR setting to host two users in a shared VE. Two individual studies were conducted in which user feedback was collected in order to describe the effects of this approach in both a qualitative and quantitative manner. As the results indicate, by utilizing gesture analysis on a headset equipped with a smartphone, it is possible to offer a natural and engaging solution for VR interaction capable of rich UXs while maintaining a low entry level for the end users. Finally, the addition of another player significantly affected the experience by influencing the emotional state of the participants in the game and further enforcing their feeling of presence within the VE.
7

Gaining the Upper Hand : An Investigation into Real-time Communication of the Vamp and Lead-In through Non-Expressive Gestures and Preparatory Beats with a focus on Opera and Musical Theatre

Hermon, Andrew Neil January 2021 (has links)
This thesis seeks to discuss conducting technique in relation to real-time communication of Vamps, Safety-Bars, and Lead-Ins through left-hand gestures within the context of opera and musical theatre. The research aims to develop a codified set of gestures suitable for the left hand. It explores and analyses left-hand gestures which are commonly used but not yet codified, and the role the preparatory beat plays in communicating the Vamp and Lead-In. This research also aims to establish a framework for conductors to create their own left-hand gestures and better understand the musical structures used in opera and musical theatre. The new gestures were developed through research into visual and body languages (such as sign languages) as well as body movement (sound painting). The gestures were tested through one artistic project with three sections, then analysed using methods of qualitative inquiry. The paper is narrative-based in structure, with the reader guided from each topic to the next: the introduction sets up the main idea of the thesis, and each section is guided by these elements. The research questions and aims were formed in light of the available literature; thus, they appear after the theory chapter.
8

Interacting with Hand Gestures in Augmented Reality : A Typing Study

Moberg, William, Pettersson, Joachim January 2017 (has links)
Smartphones are used today to accomplish a variety of different tasks, but they have some issues that might be solved with new technology. Augmented Reality (AR) is a developing technology that in the future can be used in our daily lives to solve some of the problems that smartphones have. Before people will adopt the new augmented technology, it is important to have an intuitive method to interact with it. Hand gesturing has always been a vital part of human interaction. Using hand gestures to interact with devices has the potential to be a more natural and familiar method than traditional methods such as keyboards, controllers, and computer mice. The aim of this thesis is to explore whether hand gesture recognition in an Augmented Reality head-mounted display can provide the same interaction possibilities as a smartphone touchscreen. This was done by implementing an application in Unity that mimics the interface of a smartphone but uses hand gestures as input in AR. The Leap Motion Controller was the device used to perform hand gesture recognition. To test how practical hand gestures are as an interaction method, text typing was chosen as the measurement task, as it is used in many applications on smartphones; thus, the results can be better generalized to real-world usage. Five different keyboards were designed and tested in a pilot study. A controlled experiment was conducted in which 12 participants tried two hand-gesturing keyboards and a touchscreen keyboard, to compare how hand gestures fare against touchscreen interaction. In the experiment, participants wrote words using the keyboards while their completion time and accuracy were recorded. After using a keyboard, a questionnaire was completed by the participants to measure usability. The results consist of an implementation of five different keyboards and data collected from the experiment.
The data gathered from the experiment consists of completion time, accuracy, and usability derived from questionnaire responses. Statistical tests were used to determine statistical significance between the keyboards used in the experiment. The results are presented in graphs and tables. They show that typing with pinch gestures in augmented reality is a slow and tiresome way of typing and affects the user's completion time and accuracy negatively, relative to using a touchscreen. The lower completion time, and higher usability, of the touchscreen keyboard could be established with statistical significance. Prediction and auto-completion might help with fatigue, as fewer key presses are needed to create a word. The research concludes that hand gestures are reasonable to use as an input technique to accomplish certain tasks that a smartphone performs. These include simple tasks such as scrolling through a website or opening an email. However, tasks that involve typing long sentences, e.g. composing an email, are arduous using pinch gestures. When it comes to typing, the authors advise developers to employ a continuous gesture-typing approach such as Swype for Android and iOS.
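The significance testing described above can be sketched as a paired test on per-participant completion times, since each participant used every keyboard. The numbers below are synthetic stand-ins for the 12 participants' measurements, and the thesis's actual test choice may differ.

```python
from scipy import stats

# Hypothetical completion times (seconds) for 12 participants, matched pairs:
# each participant typed the same words on both keyboards.
touchscreen = [21.3, 19.8, 23.1, 20.4, 22.0, 18.9,
               24.2, 21.7, 20.1, 19.5, 22.8, 21.0]
pinch       = [38.6, 35.2, 41.0, 36.8, 39.9, 33.7,
               44.1, 40.3, 37.5, 34.9, 42.2, 38.0]

# Paired t-test: the samples are matched by participant, so testing the
# per-participant differences is more powerful than an independent test.
t_stat, p_value = stats.ttest_rel(pinch, touchscreen)
print(t_stat > 0, p_value < 0.05)  # pinch slower; difference significant here
```

A positive t statistic with p below the usual 0.05 threshold would support the thesis's conclusion that the touchscreen keyboard's lower completion time is statistically significant; with noisier real data a non-parametric alternative such as the Wilcoxon signed-rank test is a common fallback.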
9

Artificial Intelligence Based Real-Time Processing of Sterile Preparations Compounding

Rehman Faridi, Shah Mohammad Hamoodur January 2020 (has links)
No description available.
10

Identifying Similarities and Differences in a Human – Human Interaction versus a Human – Robot Interaction to Support Modelling Service Robots

Sam, Farrah January 2009 (has links)
With the ongoing progress of research in robotics, computer vision, and artificial intelligence, robots are becoming more complex, their functionalities are increasing, and their abilities to solve particular problems are becoming more efficient. For these robots to share our lives and environment with us, they should be able to move autonomously and be operated easily by users. The main focus of this thesis is on the differences and similarities between human–human and human–robot interaction in an office environment. Experimental methods are used to identify these differences and similarities and to arrive at an understanding of how users perceive robots and robots' abilities, in order to help in the development of interactive service robots that are able to navigate and perform various tasks in a real-life environment. A user study was conducted in which 14 subjects were observed while presenting an office environment to a mobile robot and then to a person. The results of this study were that users used the same verbal phrases, hand gestures, gaze, etc. to present the environment to the robot as to a person, but they put more emphasis on identifying the different items to the robot. The subjects took less time to show a person around than the robot.
