161
La relation gestes-parole dans la planification de la résolution du problème de la Tour de Hanoï chez des enfants, adolescents et adultes colombiens / The gesture-speech relationship in planning the solving of the Tower of Hanoi problem by Colombian children, adolescents and adults. Moreno Torres, Mayilin, 09 December 2014.
When we speak, we move our hands; we gesture. Gestures help us communicate with others, but they also help us express our thoughts more clearly. Gestures and speech are thus two dimensions of a unified communication system based on shared cognitive representations: when a speaker produces a message, most of the information he wants to share is conveyed by speech, but some is also conveyed by gestures (McNeill, 1992). Sometimes the information carried by gestures and the information carried by speech do not match. Garber and Goldin-Meadow (2002) showed that when participants explain their solving of the Tower of Hanoi task, these gesture-speech mismatches can occur either at moments of uncertainty, showing that participants are torn between several solving strategies, or at key moments, indicating the ability to plan two solving strategies, one expressed in gesture and the other in speech. The development of planning as revealed by gesture-speech mismatches remains understudied, despite the extensive research Goldin-Meadow has devoted to mismatches themselves. We conducted a study in Colombia with 144 participants from two contrasting socio-economic backgrounds, trying to fill this gap by examining the effects of age, socio-economic background and Tower of Hanoi task complexity on the gesture-speech mismatches produced in explanations that anticipate solving the task. Our results suggest an effect of task complexity and limited effects of age and socio-economic background.
162
Human-Machine Interface Using Facial Gesture Recognition. Toure, Zikra, 12 1900.
This master's thesis proposes a human-computer interface for individuals with limited hand movement that uses facial gestures as a means of communication. The system recognizes faces and extracts facial gestures, mapping them to Morse code that is translated into English in real time. The system is implemented on a MacBook using Python, the OpenCV library, and the Dlib library. It was tested by six students. Five of the testers were not familiar with Morse code; they completed the experiment in an average of 90 seconds. One tester was familiar with Morse code and completed the experiment in 53 seconds. Errors occurred due to variation in the testers' facial features, lighting conditions, and unfamiliarity with the system. Implementing auto-correction and auto-prediction would decrease typing time considerably and make the system more robust.
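The abstract names the pipeline (face detection, facial-gesture extraction, mapping to Morse code) but not its internals. The sketch below is a rough illustration of one such pipeline in Python with OpenCV and Dlib, not the thesis's code: the choice of eye blinks as the facial gesture, the 0.21 eye-aspect-ratio threshold, the 0.4 s dot/dash cut-off and the truncated Morse table are all assumptions of this sketch.

```python
# Hedged sketch: blink-to-Morse typing with OpenCV + Dlib (not the thesis's code).
# Assumes the standard dlib 68-point landmark model file is available locally.
import time
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
RIGHT_EYE = range(36, 42)  # right-eye landmark indices in the 68-point scheme

def eye_aspect_ratio(shape, idx):
    """Eye height/width ratio; it drops sharply when the eye closes."""
    p = [(shape.part(i).x, shape.part(i).y) for i in idx]
    d = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2.0 * d(p[0], p[3]))

MORSE = {".-": "A", "-...": "B", "...": "S", "---": "O"}  # truncated for brevity

cap = cv2.VideoCapture(0)
symbol, closed_at, last_change = "", None, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        ear = eye_aspect_ratio(predictor(gray, face), RIGHT_EYE)
        if closed_at is None and ear < 0.21:           # eye just closed
            closed_at = last_change = time.time()
        elif closed_at is not None and ear >= 0.21:    # eye reopened
            symbol += "." if time.time() - closed_at < 0.4 else "-"
            closed_at, last_change = None, time.time()
    if symbol and closed_at is None and time.time() - last_change > 1.0:
        print(MORSE.get(symbol, "?"), end="", flush=True)  # commit after a pause
        symbol = ""
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:                           # Esc quits
        break
```

An auto-correction layer of the kind the author proposes would sit after the commit step, rewriting the accumulated letters before they are displayed.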
163
Gaining the Upper Hand: An Investigation into Real-time Communication of the Vamp and Lead-In through Non-Expressive Gestures and Preparatory Beats with a focus on Opera and Musical Theatre. Hermon, Andrew Neil, January 2021.
This thesis discusses conducting technique in relation to the real-time communication of the Vamp, Safety-Bars and Lead-Ins through left-hand gestures within the context of opera and musical theatre. The research aims to develop a codified set of gestures suitable for the left hand. It explores and analyses left-hand gestures that are commonly used but not yet codified, and the role the preparatory beat plays in communicating the Vamp and Lead-In. The research also aims to establish a framework with which conductors can create their own left-hand gestures and better understand the musical structures used in opera and musical theatre. The new gestures were developed through research into visual and body languages (such as sign languages) as well as body movement (soundpainting), and were tested through one artistic project, comprising three sections, then analysed using methods of qualitative inquiry. The paper is narrative in structure, with each topic guiding the reader into the next: the introduction sets up the main idea of the thesis, and each subsequent section builds on it. The research questions and aims were formed in response to the available literature; thus, they appear after the theory chapter.
164
Interacting with Hand Gestures in Augmented Reality: A Typing Study. Moberg, William; Pettersson, Joachim, January 2017.
Smartphones are used today to accomplish a variety of tasks, but they have some issues that might be solved by new technology. Augmented Reality (AR) is a developing technology that could one day be used in our daily lives to solve some of the problems that smartphones have. Before people adopt the new augmented technology, it is important to have an intuitive method of interacting with it. Hand gesturing has always been a vital part of human interaction, and using hand gestures to interact with devices has the potential to be more natural and familiar than traditional input methods such as keyboards, controllers and computer mice. The aim of this thesis is to explore whether hand gesture recognition in an AR head-mounted display can provide the same interaction possibilities as a smartphone touchscreen. This was done by implementing an application in Unity that mimics a smartphone interface but uses hand gestures as input in AR, with the Leap Motion Controller performing the hand gesture recognition. To test how practical hand gestures are as an interaction method, text typing was chosen as the measurement task, as it is used in many smartphone applications; the results can therefore be better generalized to real-world usage. Five different keyboards were designed and tested in a pilot study. A controlled experiment was then conducted in which 12 participants tried two hand-gesture keyboards and a touchscreen keyboard, to compare hand-gesture and touchscreen interaction. In the experiment, participants wrote words using the keyboards while their completion time and accuracy were recorded; after using a keyboard, participants completed a questionnaire to measure its usability. The results consist of an implementation of five different keyboards and the data collected from the experiment: completion time, accuracy, and usability derived from the questionnaire responses. Statistical tests were used to determine statistical significance between the keyboards, and the results are presented in graphs and tables. The results show that typing with pinch gestures in augmented reality is a slow and tiresome way of typing and affects users' completion time and accuracy negatively relative to a touchscreen; the lower completion time and higher usability of the touchscreen keyboard were established with statistical significance. Prediction and auto-completion might help with fatigue, as fewer key presses are needed to create a word. The research concludes that hand gestures are reasonable as an input technique for certain tasks that a smartphone performs, such as scrolling through a website or opening an email; however, tasks that involve typing long sentences, e.g. composing an email, are arduous using pinch gestures. When it comes to typing, the authors advise developers to employ a continuous gesture-typing approach such as Swype for Android and iOS.
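The pinch-typing bottleneck the thesis identifies comes down to turning a continuous pinch signal into discrete key presses. The sketch below illustrates that core step only, under stated assumptions: trackers such as the Leap Motion report a per-hand pinch strength in [0, 1], and the thresholds, the hysteresis scheme and the PinchTyper name are inventions of this sketch, not the thesis's Unity implementation.

```python
# Hedged sketch: debouncing a continuous pinch-strength signal into key presses.
class PinchTyper:
    def __init__(self, press=0.8, release=0.5):
        # press > release gives hysteresis, so jitter around a single
        # threshold cannot fire a burst of spurious key presses
        self.press, self.release = press, release
        self.down = False

    def update(self, strength, hovered_key):
        """Feed one tracker frame; returns the typed key or None."""
        if not self.down and strength >= self.press:
            self.down = True
            return hovered_key           # commit on the press edge
        if self.down and strength <= self.release:
            self.down = False            # re-arm on release
        return None

# Toy frame stream: (pinch strength, key the hand currently hovers over).
typer = PinchTyper()
frames = [(0.1, "h"), (0.9, "h"), (0.95, "h"), (0.3, "i"), (0.85, "i")]
typed = [k for s, key in frames if (k := typer.update(s, key))]
print("".join(typed))  # -> "hi"
```

Committing on the press edge rather than on release is itself a design choice; either way, the hysteresis gap is what keeps a trembling hand from double-typing.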
165
Introducing Gestures: Exploring Feedforward in Touch-Gesture Interfaces. Lindberg, Martin, January 2019.
This interaction design thesis aimed to explore how users could be introduced to the different functionalities of a gesture-based touchscreen interface. This was done through a user-centred design research process in which the designer was taught different artefacts by experienced users. Insights from this process laid the foundation for an interactive, digital gesture-introduction prototype. Testing this prototype with users yielded the study's results: while it left several areas for improvement in implementation and behaviour, the prototype's basic methods and qualities were well received. Further development would be needed to fully assess its viability. The user-centred research methods used in this project proved valuable for the later ideation and prototyping stages. The activities and results of this project indicate a potential for designers to further explore ways of ensuring the discoverability of touch-gesture interactions. For future projects the author suggests more extensive research and testing, using a greater sample size and a wider demographic.
166
Gestengesteuerte Visualisierung digitaler Bestandsdaten in Bibliotheken [Gesture-controlled visualization of digital collection data in libraries]. Sonnefeld, Philipp, 11 November 2013.
Libraries face the challenge of integrating the rapidly growing, pervasive spread of digital media into new usage concepts, for example by improving the quality of visitors' stay (cf. Bonte, 2011). Terms such as digital library, hybrid library and blended library attempt to redefine the role of libraries in the digital age (cf. Gläser, 2008). The Saxon State and University Library (SLUB) meets this challenge in part by intensively building up digital holdings. It therefore possesses extensive digital collections, which a planned public interactive system is intended to make accessible to library visitors in an appealing way. The focus is on exploration of the digital collections: existing catalogue functionality is to be complemented, not replaced. To this end, the thesis investigates how public interaction can increase interest in digital content, using the collection "Deutsche Fotothek" as an example. Particular attention is paid to how embodied interaction (Dourish, 2004), combined with a visualization of the complex information space of the digital collections, can achieve an easily learnable gesture-based control. The main contribution of this work is the design and implementation of an application prototype installed in the public area of the SLUB.

Table of contents:
1 Introduction
1.1 Motivation
1.2 Objective
1.3 Outline
2 Fundamentals
2.1 Cognition and spatial perception
2.2 Characteristics of public space in libraries
2.2.1 The public in the library
2.2.2 The stage metaphor
2.2.3 Summary
2.3 Interaction in space
2.3.1 Definition of terms
2.3.2 Spatial interaction
2.3.3 Summary
2.4 Information visualization
2.4.1 Clarification of terms
2.4.2 Data view and navigation view
2.4.3 Representing relations in trees and networks
2.4.4 Information visualization framework
2.4.5 Summary
2.5 Exploration of complex data spaces
2.5.1 Exploration
2.5.2 Immersion
2.5.3 Emersion
2.5.4 Summary
2.6 Person tracking with the Microsoft Kinect®
2.7 Gesture recognition
3 Related work
3.1 Ambient displays
3.2 Gestural interaction
3.3 Exploration techniques for information spaces
4 Synthesis and concept
4.1 Target vision
4.2 Information concept
4.2.1 Initial selection and analysis of the collection data
4.2.2 Refinement of the selected collections
4.2.3 Organization of the relevant data
4.3 Presentation concept
4.3.1 Interface
4.3.2 Graph component
4.3.3 Gallery component
4.4 Spatial situation and interaction concept
4.4.1 Spatial situation
4.4.2 Attention and user behaviour
4.4.3 Interaction concept
5 Implementation
5.1 Backend
5.1.1 Processing the collection data
5.1.2 Tracking and gesture recognition
5.2 Middleware
5.2.1 Access to collection data
5.2.2 Access to tracking data
5.3 Frontend
5.3.1 Processing user input
5.3.2 Gallery component
5.3.3 Graph component
6 Summary
6.1 Conclusion
6.2 Outlook
6.2.1 Suggestions for improvement
6.2.2 Conceptual limitations
6.2.3 Extended target vision
7 Appendix
A Bibliography
B List of figures
C List of tables
D List of source code
E List of collection titles in the digital collections
F Metadata for records of the Deutsche Fotothek
G Overview of the selected node hierarchy and subnodes
H Deployment guide
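Chapter 5.1.2 of the outline covers tracking and gesture recognition on Kinect data. As a hedged illustration of what such a recognizer can look like, independent of the SLUB prototype's actual code, the Python sketch below classifies a horizontal swipe from a short history of tracked hand positions; the window size and the distance thresholds are illustrative assumptions.

```python
# Hedged sketch: swipe detection over tracked hand positions (e.g. a Kinect
# skeleton's hand joint), not the prototype's actual recognizer.
from collections import deque

class SwipeDetector:
    def __init__(self, window=10, min_dx=0.35, max_dy=0.12):
        self.history = deque(maxlen=window)   # recent (x, y) positions, metres
        self.min_dx, self.max_dy = min_dx, max_dy

    def update(self, x, y):
        """Feed one tracking frame; returns 'left', 'right' or None."""
        self.history.append((x, y))
        if len(self.history) < self.history.maxlen:
            return None
        xs = [p[0] for p in self.history]
        ys = [p[1] for p in self.history]
        dx = xs[-1] - xs[0]
        # enough horizontal travel, little vertical drift -> a swipe
        if abs(dx) >= self.min_dx and (max(ys) - min(ys)) <= self.max_dy:
            self.history.clear()              # do not re-trigger on the same motion
            return "right" if dx > 0 else "left"
        return None

det = SwipeDetector()
gesture = None
for i in range(10):                           # simulated rightward hand motion
    gesture = det.update(x=0.05 * i, y=1.20 + 0.01 * (i % 2)) or gesture
print(gesture)  # -> "right"
```

In a public installation such as the one described here, gating the recognizer on a tracked user standing inside the interaction zone is what keeps passers-by from triggering it.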
167
Examining the Effects of Interactive Dynamic Multimedia and Direct Touch Input on Performance of a Procedural Motor Task. Marraffino, Matthew, 01 January 2014.
Ownership of mobile devices such as tablets and smartphones has risen quickly in the last decade. Unsurprisingly, they are now being integrated into training and classroom settings; specifically, the U.S. Army has mapped out a plan in the Army Learning Model of 2015 (ALM 2015) to use mobile devices for training. Before these tools can be used effectively, however, it is important to identify how tablets' unique properties can be leveraged. For this dissertation, the touch interface and the interactivity that tablets afford were investigated using a procedural-motor task: the disassembly procedures of an M4 carbine. The research was motivated by theories from cognitive psychology, including Cognitive Load Theory and Embodied Cognition. In two experiments, novices learned the rifle disassembly procedures from a narrated multimedia presentation on a tablet involving a virtual rifle, and were then tested on what they learned by disassembling a physical rifle, reassembling it, and taking a written recall test about the disassembly procedures. Spatial ability was also considered as a subject variable.

Experiment 1 examined two research questions. The primary question was whether including multiple forms of interactivity in a multimedia presentation resulted in higher learning outcomes; the secondary question was whether dynamic multimedia fostered better learning outcomes than equivalent static multimedia. To examine the effects of dynamism and interactivity on learning, four multimedia conditions of varying interactivity and dynamism were used. One condition was a 2D phase diagram depicting the before and after of each step, with no animation or interactivity. A second condition was a non-interactive animation in which participants passively watched an animated presentation of the disassembly procedures. A third condition was an interactive animation in which participants controlled the pace of the presentation by tapping a button. The last condition was a rifle disassembly simulation in which participants interacted with a virtual rifle to learn the procedures. A comparison of the conditions by spatial ability yielded the following results: interactivity, overall, improved outcomes on the performance measures, but high spatials outperformed low spatials in the simulation condition and the 2D phase diagram condition. High spatials seemed able to compensate for the low interactivity and dynamism of the 2D phase diagram condition while enhancing their performance in the simulation condition.

In Experiment 2, the touchscreen interface was examined by investigating how gestures and input modality affected learning the disassembly procedures. Experiment 2 had two primary research questions: whether gestures facilitate learning a procedural-motor task through embodied learning, and whether direct touch input resulted in higher learning outcomes than indirect mouse input. To examine these questions, three variations of the rifle disassembly simulation were used. One was identical to that of Experiment 1. Another incorporated gestures: to initiate each animation, participants traced a gesture arrow representing the motion of the component, thereby learning the procedures. The third condition used the same interface as the initial simulation but included "dummy" gesture arrows that displayed only visual information and did not respond to gesture; it was included to isolate the effect (if any) of the gesture arrows from the gesture component itself. Furthermore, direct touch input was compared with indirect mouse input. Once again, spatial ability was considered. Results from Experiment 2 were inconclusive, as no significant effects were found; this may have been due to a ceiling effect on performance. However, spatial ability was a significant predictor of performance across all conditions. Overall, the results of the two experiments support the use of multimedia on a tablet to train a procedural-motor task. In line with the vision of ALM 2015, the research supports incorporating tablets into the U.S. Army training curriculum.
168
Attention following and nonverbal referential communication in bonobos (Pan paniscus), chimpanzees (Pan troglodytes) and orangutans (Pongo pygmaeus). Madsen, Elainie Alenkær, January 2011.
A central issue in the study of primate communication is the extent to which individuals adjust their behaviour to the attention and signals of others, and manipulate others' attention to communicate about external events. I investigated whether 13 chimpanzees (Pan troglodytes spp.), 11 bonobos (Pan paniscus) and 7 orangutans (Pongo pygmaeus pygmaeus) followed conspecific attention and led others to distal locations. Individuals were presented with a novel stimulus, to test whether they would lead a conspecific to detect it, in two experimental conditions: in one, the conspecific faced the communicator; the other required the communicator first to attract the attention of the conspecific. All species followed conspecific attention, but only the bonobos did so in the conditions that required geometric attention following and that required the communicator first to attract the conspecific's attention. There was a clear trend for the chimpanzees to selectively produce a stimulus-directional 'hunching' posture when viewing the stimulus in the presence of a conspecific rather than alone (the comparison was statistically non-significant but very closely approached significance, p = 0.056), and the behaviour consistently led conspecifics to look towards the stimulus. An observational study showed that 'hunching' occurred only in the context of attention following. Some chimpanzees and bonobos consistently and selectively combined functionally different behaviours (sequential auditory and stimulus-directional behaviours) when viewing the stimulus in the presence of a non-attentive conspecific, although at the species level this did not yield significant effects. While the design did not rule out a social-referencing motive ("look and help me decide how to respond"), the coupling of auditory cues followed by directional cues towards a novel object is consistent with a declarative and socially referential interpretation of non-verbal deixis. An exploratory study, which applied the 'Social Attention Hypothesis' (that individuals accord and receive attention as a function of dominance) to attention following, showed that chimpanzees were more likely to follow the attention of the dominant individual. Overall, the results suggest that the paucity of observed referential behaviours in apes may be due to the inconspicuousness and multi-faceted nature of the behaviours.
169
Communication chez les primates non humains : étude des asymétries dans la production d'expressions oro-faciales / Communication in non-human primates: studying asymmetries in the production of oro-facial expressions. Wallez, Catherine, 11 October 2012.
The study of oro-facial asymmetries offers an indirect and reliable index for determining the hemispheric specialization of the processes associated with socio-emotional communication in non-human primates. To date, however, few studies have been carried out, and the theories formulated for humans enjoy little consensus. To contribute to this question of the cerebral lateralization of cognitive-emotional processing in primates, four experimental studies were conducted during this doctorate. First, two methods were used to measure oro-facial asymmetries in a population of adult baboons (a morphometric method and a free-viewing method using chimeric faces); a dominant right-hemisphere specialization for the processing of negative emotions was found. A third study demonstrated, for the first time, a population-level oro-facial asymmetry in the production of emotions in infant macaques and baboons. Finally, a last study in chimpanzees tested the robustness of earlier work that had shown different asymmetries depending on whether the communicative function of the vocalization was intentional (left hemisphere) or emotional (right hemisphere). The results confirmed those of the earlier study and allow hypotheses about the origin of the evolution of language to be discussed. This work is discussed in light of recent research on many animal species; it contributes new knowledge for understanding the phylogeny of the hemispheric specialization of the processes associated with verbal and non-verbal communication in humans.
170
Design and Control of a Dexterous Anthropomorphic Robotic Hand / Conception et Contrôle d'une Main Robotique anthropomorphique et dextre. Cerruti, Giulio, 17 October 2016.
This thesis presents the design and control of a low-cost, lightweight robotic hand for a social humanoid robot. The hand is designed to perform expressive gestures and to grasp small, light objects. Its geometry follows anthropometric data. Its kinematics simplifies the structure of the human hand to reduce the number of actuators while meeting functional requirements. Anthropomorphism is preserved by the number and placement of the five fingers on the palm and by a well-balanced thumb opposability. The mechanical design is a compromise between fully coupled phalanges, which give known finger postures while gesturing, and self-adaptive fingers, which conform to object shapes while grasping, combined in a unique hybrid design. This compromise is made possible by two distinct actuation systems placed in parallel within the palm and fingers; their coexistence is ensured by a compliant transmission based on elastomer bars. The proposed solution significantly reduces the weight and size of the hand by using seven low-power actuators for gesturing and a single high-power motor for grasping. The overall system is designed to be embedded in Romeo, a humanoid robot 1.4 m tall produced by Aldebaran. The actuation systems are dimensioned to open and close the fingers in less than 1 s and to grasp a full soda can. The hand is built and controlled to ensure safe human-robot interaction and to preserve its mechanical integrity. A prototype (ALPHA) was realized to validate the design and its functional capabilities.
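Because the fully coupled phalanges tie every joint of a finger to a single actuator, the entire finger posture during a gesture follows from one angle. A minimal planar forward-kinematics sketch of that idea is given below; the link lengths, the coupling ratios and the function name are illustrative assumptions, not the ALPHA hand's parameters.

```python
# Hedged sketch: posture of a fully coupled 3-phalanx finger from one actuator angle.
import math

LINKS = [0.045, 0.025, 0.018]   # proximal/middle/distal phalanx lengths in m (assumed)
RATIOS = [1.0, 0.8, 0.6]        # joint i flexes by RATIOS[i] * q (assumed coupling)

def fingertip(q):
    """Planar forward kinematics: fingertip (x, y) for actuator angle q in radians."""
    x = y = theta = 0.0
    for length, ratio in zip(LINKS, RATIOS):
        theta += ratio * q                   # coupled joint angles accumulate
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

for deg in (0, 30, 60):
    x, y = fingertip(math.radians(deg))
    print(f"q = {deg:2d} deg -> fingertip at ({x:.3f}, {y:.3f}) m")
```

The self-adaptive grasping mode, by contrast, deliberately breaks this one-to-one mapping so the phalanges can settle onto an object's surface, which is why the hand needs the two parallel actuation systems the abstract describes.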