1.
ProGes: A User Interface for Multimedia Devices over the Internet of Things
Ahmadi Danesh Ashtiani, Ali January 2014 (has links)
With the rapid growth of online devices, a new concept, the Internet of Things (IoT), is emerging, in which everyday devices are connected to the Internet. As the number of devices in the IoT increases, so does the complexity of the interactions between users and devices, creating a need for intelligent user interfaces that assist users in these interactions. Many studies have investigated interaction techniques such as proxemic and gesture interaction in order to propose an intuitive and intelligent system for controlling multimedia devices over the IoT, though most could not offer a universal solution. The present study proposes a proximity-based, gesture-enabled user interface for multimedia devices over the IoT. The proposed method employs a cloud-based decision engine to help the user choose and interact with the most appropriate device, relieving the user of the burden of enumerating available devices manually. The decision engine observes the multimedia content and device properties, learns user preferences adaptively, and automatically recommends the most appropriate device to interact with. In addition, the proposed system uses proximity information to identify the user among surrounding people and provides him/her with gesture control services. Furthermore, a new hand gesture vocabulary for controlling multimedia devices is proposed, derived from a multiphase elicitation study. The main advantage of this vocabulary is that it can be used for all multimedia devices. Both the device recommendation system and the gesture vocabulary were evaluated. The device recommendation evaluation shows that users agree with the proposed interaction 70% of the time. Moreover, the average agreement score of the proposed gesture vocabulary (0.56) exceeds the scores of similar studies. An external user evaluation shows an average good-match score of 4.08 out of 5 and an average ease-of-performance score of 4.21 out of 5. A memory test reveals that the vocabulary is easy to remember: participants recalled and performed gestures in 3.13 seconds on average, with an average recall accuracy of 91.54%.
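The agreement score of 0.56 quoted above is the standard summary statistic of gesture elicitation studies. The abstract does not spell out the formula, but such scores are conventionally computed as Wobbrock et al.'s agreement rate: for each referent (command), group identical gesture proposals and sum the squared proportion of each group. A minimal sketch with illustrative data (the referent and gesture names below are hypothetical, not taken from the thesis):

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement for one referent: sum over groups of identical
    gestures of (group size / total proposals) squared."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# Hypothetical data: gestures proposed by 8 participants per referent.
elicited = {
    "volume up":  ["raise palm"] * 5 + ["thumb up"] * 2 + ["circle"],
    "next track": ["swipe right"] * 6 + ["point right"] * 2,
}

per_referent = {r: agreement_score(p) for r, p in elicited.items()}
overall = sum(per_referent.values()) / len(per_referent)
print(per_referent)       # {'volume up': 0.46875, 'next track': 0.625}
print(round(overall, 3))  # 0.547, i.e. in the range the study reports
```

Higher values mean stronger consensus: 1.0 would mean every participant proposed the same gesture for a referent.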
2.
A comparative study about cognitive load of air gestures and screen gestures for performing in-car music selection task
Wu, Xiaolong 07 January 2016 (has links)
With the development of technology, people's view of the automobile has shifted: instead of merely a means of transportation, the automobile has become a space in which a driver can still perform daily activities besides driving, such as communicating with other people, interacting with electronic devices, and receiving information. In the meantime, different modes of interaction have been explored. Among these modalities, gestures have been considered a feasible way of performing in-car secondary tasks because of their intuitiveness. However, little research has been conducted on subjects' cognitive load. This thesis examined four gesture interfaces (air swipe, air tap, screen swipe, and screen tap) in terms of their effects on drivers' driving performance, secondary task performance, perceived cognitive load, and eye glance behavior. The results showed that air gestures are generally slower than screen gestures with regard to secondary task performance. The screen swipe gesture required the lowest cognitive load, while the air swipe and screen tap gestures were comparable. Subjects in this study preferred the screen swipe gesture the most and the air tap gesture the least, with no significant difference between air swipe and screen tap. Although the air tap and screen tap gestures produced the longest dwell times, no difference in driving performance was found among the four gesture interfaces. The results indicate that even though air gestures are not limited by screen space, screen swipe still appeared to be the most suitable way of performing the in-car secondary task of music selection in this study.
3.
Increase Driving Situation Awareness and In-vehicle Gesture-based Menu Navigation Accuracy with Heads-Up Display
Cao, Yusheng 04 1900 (has links)
More and more novel functions are being integrated into vehicle infotainment systems to let individuals perform secondary tasks with high accuracy and low accident risk; mid-air gesture interaction is one of them. This thesis designed and tested a novel interface to address a specific issue caused by this method of interaction: visual distraction within the car. In this study, a Heads-Up Display (HUD) was integrated with a gesture-based menu navigation system to allow drivers to see menu selections without looking away from the road. An experiment with 24 recruited participants investigated the potential of this system to improve drivers' driving performance, situation awareness, and gesture interactions. Participants provided subjective feedback about using the system as well as objective performance data. The thesis found that the HUD significantly outperformed the Heads-Down Display (HDD) in participants' preference, perceived workload, level 1 situation awareness, and secondary-task performance. However, these gains came at the cost of poorer driving performance and relatively longer visual distraction. The thesis provides directions for future research and for improving the overall user experience while the driver interacts with an in-vehicle gesture interaction system. / M.S. / Driving is one of the most common daily activities, and until fully autonomous vehicles arrive it will remain the primary task when operating a vehicle. To improve the overall experience while traveling, however, drivers also perform secondary tasks such as adjusting the air conditioning, switching music, and navigating a map. Car accidents may happen while drivers perform secondary tasks, because those tasks distract from the primary task of driving safely. Many novel interaction methods have been implemented in modern cars, such as touch-screen and voice interaction. This thesis introduces a gesture interaction system that lets the user navigate secondary-task menus with mid-air gestures. To reduce the visual distraction the system itself causes, it integrates a Heads-Up Display (HUD) that presents visual feedback on the front windshield, letting the driver use the system without looking in other directions and keeping peripheral vision on the road. The experiment recruited 24 participants to test the system. Each participant provided subjective feedback about workload, experience, and preference; a driving simulator collected driving performance, eye-tracking glasses collected gaze data, and the gesture menu system logged gesture performance. The experiment varied the factors expected to affect user experience: visual feedback type (HUD vs. Heads-Down Display) and sound feedback (with vs. without). Results showed that the HUD helped drivers perform the secondary task faster, understand the current situation better, and experience a lower workload, and most participants preferred the HUD over the HDD. However, using the HUD involved trade-offs: drivers focused on the HUD for more time while performing secondary tasks and showed poorer driving performance.
By analyzing the resulting data, this thesis provides directions for conducting HUD and in-vehicle gesture interaction research and for improving users' performance and overall experience.
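The abstracts above describe the gesture menu at the system level; the thesis does not reproduce its menu logic, and the gesture set below (swipe to move the highlight, pinch to confirm) is purely illustrative. A minimal sketch of how a mid-air gesture menu with HUD feedback can be driven by discrete recognizer events:

```python
MENU = ["Music", "Climate", "Navigation", "Phone"]

class GestureMenu:
    """Minimal menu navigation driven by discrete gesture events.
    The gesture names are hypothetical; a recognizer would emit them."""
    def __init__(self, items, render):
        self.items = items
        self.index = 0
        self.render = render      # e.g., draws onto the HUD
        self.render(self.items, self.index)

    def on_gesture(self, gesture):
        if gesture == "swipe_left":
            self.index = (self.index - 1) % len(self.items)
        elif gesture == "swipe_right":
            self.index = (self.index + 1) % len(self.items)
        elif gesture == "pinch":
            return self.items[self.index]   # confirm selection
        self.render(self.items, self.index)
        return None

def hud_render(items, index):
    # Stand-in for HUD drawing: highlight the current item.
    print("  ".join(("[%s]" if i == index else "%s") % it
                    for i, it in enumerate(items)))

menu = GestureMenu(MENU, hud_render)
for g in ["swipe_right", "swipe_right", "pinch"]:
    selected = menu.on_gesture(g)
print("Selected:", selected)   # -> Navigation
```

Rendering the same state to a HUD or an HDD is then only a matter of swapping the render callback, which mirrors the comparison the experiment makes.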
4.
An investigation into alternative human-computer interaction in relation to ergonomics for gesture interface design
Chen, Tin Kai January 2009 (has links)
Recent, innovative developments in the field of gesture interfaces as input techniques have the potential to provide a basic, lower-cost point-and-click function for graphical user interfaces (GUIs). Since these gesture interfaces are not yet widely used, and indeed no tilt-based gesture interface is currently on the market, there is neither an international standard for the testing procedure nor a guideline for their ergonomic design and development. Hence, the research area demands more design case studies on a practical basis. The purpose of the research is to investigate the design factors of gesture interfaces for the point-and-click task in the desktop computer environment. The key function of a gesture interface is to transfer a specific body movement, based in particular on arm movement, into cursor movement on the two-dimensional graphical user interface (2D GUI) in real time. The initial literature review identified limitations in cursor movement behaviour with gesture interfaces. Since cursor movement is the machine output of the gesture interface being designed, a new accuracy measure based on the calculation of cursor movement distance, together with an associated model, was proposed in order to validate continuous cursor movement. Furthermore, a design guideline with detailed design requirements and specifications for tilt-based gesture interfaces was suggested. In order to collect human performance data and cursor movement distances, a graphical measurement platform was designed and validated with an ordinary mouse. Since there are typically two types of gesture interface, sweep-based and tilt-based, and no commercial tilt-based gesture interface has yet been developed, a commercial sweep-based gesture interface, the P5 Glove, was studied, and the causes and effects of its discrete cursor movement on usability were investigated. According to the proposed design guideline, two versions of the tilt-based gesture interface were designed and validated through an iterative design process. Most of the phenomena and results from the trials undertaken, which are inter-related, were analyzed and discussed. The research has contributed new knowledge through design improvement of tilt-based gesture interfaces and through improvement of discrete cursor movement by eliminating manual error compensation. The research reveals a relation between cursor movement behaviour and the adjusted R² for the prediction of movement time across models expanded from Fitts' Law. In such situations, the actual working area and joint ranges are extensive and appreciably different from those that had been planned. Further studies are suggested. The research was associated with the University Alliance Scheme, technically supported by Freescale Semiconductor Co., U.S.
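The adjusted R² mentioned above comes from regression fits of movement-time models expanded from Fitts' Law. The thesis' own expanded models and trial data are not reproduced here, but the underlying calculation can be illustrated by fitting the common Shannon formulation, MT = a + b * log2(D/W + 1), to hypothetical point-and-click trials:

```python
import numpy as np

# Hypothetical point-and-click trials: target distance D and width W
# (pixels) and measured movement time MT (seconds). Values illustrative.
D  = np.array([100, 200, 400, 800, 100, 400])
W  = np.array([ 20,  20,  40,  40,  80,  80])
MT = np.array([0.55, 0.72, 0.74, 0.92, 0.38, 0.60])

ID = np.log2(D / W + 1)                  # index of difficulty (Shannon form)
X  = np.column_stack([np.ones_like(ID), ID])
(a, b), *_ = np.linalg.lstsq(X, MT, rcond=None)

pred   = a + b * ID
ss_res = np.sum((MT - pred) ** 2)
ss_tot = np.sum((MT - MT.mean()) ** 2)
r2     = 1 - ss_res / ss_tot
n, k   = len(MT), 1                      # k = number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print("MT = %.3f + %.3f * ID, adjusted R^2 = %.3f" % (a, b, adj_r2))
```

A continuous cursor (as from a well-tuned tilt interface) tends to fit such models closely; the discrete cursor movement described for the P5 Glove is exactly the kind of behaviour that depresses the adjusted R².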
5.
Exploring gesture based interaction and visualizations for supporting collaboration
Simonsson Huck, Andreas January 2011 (has links)
This thesis introduces the concept of collaboratively using freehand gestures to interact with visualizations. Working with data and visualizations together with others can be problematic in the traditional desktop setting because of the limited screen size and the single user input device. This thesis therefore suggests a solution that integrates computer vision and gestures with interactive visualizations. The integration resulted in a prototype in which multiple users can interact with the same visualizations simultaneously. The prototype was evaluated and tested with ten potential users. The test results show that gestures have the potential to support collaboration while working with interactive visualizations, and they identify which components are needed to enable gestural interaction with visualizations.
6.
Designing an interactive handlebar infotainment system for light vehicles / Design av ett interaktivt styr-monterat infotainment system för lätta fordon
Bratt, Jesper January 2016 (has links)
This thesis studies the crucial aspects of designing and developing an in-vehicle infotainment system for light vehicles that should both extend functionality and improve safety. To ground the research, innovations in automotive infotainment systems are examined, and a design for a light-vehicle infotainment system that uses optical gesture-based touch interaction is proposed. The goal is to provide drivers of light vehicles with the same safety and usability improvements that drivers of cars already enjoy. A research-through-design approach, together with heuristic evaluation and cognitive walkthrough, enabled rapid design iterations to produce a testable prototype. In the end, a design proposal was presented showing that several ways of thinking from automotive infotainment design carry over to light vehicles. During the design process, a simple menu, animations that convey spatial connections, and notifications that lower the overall visual clutter were identified as key aspects of a safe and usable infotainment system.
7.
Make people move : Utilizing smartphone motion sensors to capture physical activity within audiences during lectures / Rör på er! : Användning av rörelsesensorer i smartphones för att skapa fysisk aktivitet i en föreläsningspublik
Eklund, Frida January 2018 (has links)
Only about 10-30 minutes into a sedentary lecture, audience attention begins to decline. There are different ways to avoid this. One is to use a web-based audience response system (ARS), in which the audience interacts with the lecturer through their smartphones; another is to take short breaks that include physical movement to re-energize both body and brain. This study combines and explores these two methods. Using the motion sensors integrated in almost every smartphone, a physical activity for a lecture audience was created and implemented in the ARS platform Mentimeter. The proof of concept was evaluated in two lectures, based on O'Brien and Toms' model of engagement. The aim was to explore the prerequisites, in terms of both design and implementation, for creating an engaging physical activity within a lecture audience, using smartphone motion sensors to capture movements and a web-based ARS to present the data. The results showed that the proof of concept was perceived as fun and engaging, and that important factors for creating engagement were competition and a balanced level of task difficulty. The study also showed that feedback is complicated in motion gesture interaction, and that there are limitations to what can be done with smartphone motion sensors using web technologies. There is great potential for further research on designing energizing lecture activities with smartphones, as well as on feedback in motion gesture interaction.
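The thesis does not publish its movement-capture algorithm; browsers expose raw accelerometer samples through the DeviceMotion API, and one plausible way to turn them into an activity measure is to count "shakes" as spikes of the acceleration magnitude above a threshold. A minimal sketch with all numbers illustrative (shown in Python for readability, although in Mentimeter's setting the equivalent would run as JavaScript in the browser):

```python
import math

def count_shakes(samples, threshold=15.0, refractory=5):
    """Count shake events in a stream of (ax, ay, az) accelerometer
    samples (m/s^2, gravity included). A shake is a magnitude spike
    above `threshold`; `refractory` samples are skipped after each
    event to avoid double-counting one motion. Values are illustrative."""
    shakes, cooldown = 0, 0
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if cooldown > 0:
            cooldown -= 1
        elif magnitude > threshold:
            shakes += 1
            cooldown = refractory
    return shakes

# Hypothetical trace: rest (~9.8 m/s^2 gravity) plus two vigorous shakes.
rest  = [(0.1, 9.8, 0.2)] * 10
shake = [(12.0, 14.0, 6.0)] + [(0.1, 9.8, 0.2)] * 6
print(count_shakes(rest + shake + shake))   # -> 2
```

Per-participant counts like this can then be submitted to the ARS and displayed as a leaderboard, which matches the competition factor the study found important for engagement.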
8.
Bringing the user experience to early product conception : From idea generation to idea evaluation
Bongard, Kerstin 19 December 2013 (has links) (PDF)
User Experience (UX) has become a major concern in the design of consumer products. Various tools exist today for evaluating the User Experience value of the static properties of final products. However, very few tools and methods are available for anticipating the future User Experience during the first stages of product conception. This thesis explores the wide range of design dimensions that potentially shape the experience of the user; dynamic product properties emerge as an important factor for User Experience. In the studies, a software tool based on inspiration words and links, as well as the bodystorming technique, is tested as a new means of generating User Experience. The resulting early concepts and interaction gestures are then evaluated through a combination of questionnaires and behavioural and physiological measurements. The results show, firstly, that a wide range of design dimensions needs to be considered when designing for User Experience; secondly, that UX evaluations can be applied to early concepts; and thirdly, that UX evaluations can also be performed on dynamic properties such as interaction gestures. The thesis furthermore contributes to design research and practice a new model of the mechanism of User Experience and a list of design dimensions for early product conception.
9.
Aplicação do sensor leap motion como instrumento didático no ensino de crianças surdas / Application of the leap motion sensor as didactic device to the teaching of deaf children
Felippsen, Eduardo Alberto 04 February 2017 (has links)
Technologies are developing rapidly and are increasingly present in students' daily lives, a reality that schools can appropriate to create new teaching strategies. In teaching deaf children, the use of technology may be even more effective, since the visual aspect should be privileged. In this context, this dissertation presents, through qualitative research with an action-research methodology, the application of the Leap Motion gesture interaction interface in classes for deaf children. The planning and selection of the software, culminating in lesson plans, was carried out jointly by the researcher and the teachers of the Training Center of the Professionals of the Education and Assistance to Persons with Deafness (Centro de Capacitação dos Profissionais da Educação e Atendimento às Pessoas com Surdez) in the city of Cascavel/PR. Teacher communication and mediation were identified as key factors in enabling students to interpret and understand the interaction environment and the task they are expected to perform, because the students are still learning Brazilian Sign Language (Língua Brasileira de Sinais, Libras) and do not know written Portuguese. In other words, students depend strongly on the teacher to interpret the software interface and translate it into Libras, adding to that translation elements that make it understandable to the child. The results showed that, in well-planned use scenarios, the Leap Motion sensor contributed to learning and contributed significantly to social interaction and collaboration among students in carrying out the tasks. These aspects of interaction among students can also be considered when devising new teaching strategies supported by gestural interfaces.
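For context on the device: the Leap Motion sensor tracks hands above it and exposes per-frame hand and finger data through its SDK. A minimal sketch of polling that data with the legacy v2 Python bindings (which target Python 2.7; the 0.8 grab-strength threshold and loop timing are illustrative, not from the dissertation):

```python
import time
import Leap  # legacy Leap Motion SDK v2 Python bindings (assumed installed)

def poll_hands(duration_s=10.0):
    """Poll the controller and report a simple 'fist' event per hand.
    grab_strength runs from 0.0 (open palm) to 1.0 (closed fist)."""
    controller = Leap.Controller()
    end = time.time() + duration_s
    while time.time() < end:
        frame = controller.frame()        # most recent tracking frame
        for hand in frame.hands:
            side = "left" if hand.is_left else "right"
            pos = hand.palm_position      # millimetres above the sensor
            if hand.grab_strength > 0.8:  # illustrative threshold
                print("%s fist at (%.0f, %.0f, %.0f)" %
                      (side, pos.x, pos.y, pos.z))
        time.sleep(0.05)                  # ~20 polls per second

if __name__ == "__main__":
    poll_hands()
```

Events like these can be mapped onto the actions of an educational game, which is the kind of mediated interaction scenario the lesson plans above were built around.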
10.
Game Accessibility for Children with Cognitive Disabilities : Comparing Gesture-based and Touch Interaction
Gauci, Francesca January 2021 (has links)
Interest in video games has grown substantially over the years, transforming them from a means of recreation into one of the most dominant fields of entertainment. However, a significant number of individuals face obstacles when playing games due to disabilities. While efforts toward more accessible game experiences have increased, cognitive disabilities have often been neglected, partly because games targeting cognitive disabilities are among the most difficult to design: cognitive accessibility barriers can be present in any part of a game. In recent years, research in human-computer interaction has explored gesture-based technologies and interaction, especially in the context of games and virtual reality. Research on gesture-based interaction has concentrated on providing a new form of interaction for people with cognitive disabilities, and several studies have shown that gesture interaction may offer benefits such as increased cognitive, motor, and social aptitude. This study explores the impact of gesture-based interaction on the accessibility of video games for children with cognitive disabilities. The accessibility of gesture interaction is evaluated against touch interaction as the baseline, a comparison founded on previous studies that have argued for the high accessibility and universal availability of touchscreen devices. A game prototype was custom-designed and developed to support both types of interaction, gesture and touch. The game was presented to several users in an interaction study in which every user played the game with both methods of interaction; the game and the outcome of the study were further discussed with field experts. The study contributes to a better understanding of how gesture interaction affects the accessibility of games for children with cognitive disabilities. It concludes that gesture-based games have certain drawbacks, especially with regard to precision, accuracy, and ergonomics; as a result, the majority of users preferred the touch interaction method, although some users also found the gesture game a fun experience. Discussion with the experts produced several suggestions for making gesture interaction more accessible. The findings are a departure point for deeper analysis of gestures and how they can be integrated into the gaming world.
|