41

A manual alphabet for touchless gesture-controlled writing input with a myoelectric device : Design, evaluation and user experience

Bieber Bardt, Raphaela January 2015
The research community around gesture-based interaction has so far paid little attention to the possibility of replacing the keyboard with natural gestures for writing. Likewise, little insight is available into the actual user experience of such an interaction style. This work presents a novel approach to text input based on a manual alphabet, MATImyo. The hand alphabet was developed in a user-centered design process that involved potential users in pre-studies, the design process and the evaluation procedure. In a Wizard-of-Oz style experiment with accompanying interviews, the alphabet's quality as an input language for composing electronic texts was evaluated and the user experience of such an interaction style assessed. MATImyo was found to be well suited as a gestural input language, with a positive user experience. The whole process of designing MATImyo and evaluating its suitability and user experience was grounded in the principles of Embodied Interaction, which was chosen as the theoretical framework. This work contributes to understanding the bigger picture of the user experience of gesture-based interaction and presents a novel, more natural text input method.
42

A comparative study about cognitive load of air gestures and screen gestures for performing in-car music selection task

Wu, Xiaolong 07 January 2016
With the development of technology, people's view of the automobile has shifted: rather than merely a means of transportation, it has become a space in which a driver can perform daily activities besides driving, such as communicating with other people, interacting with electronic devices, and receiving information. In the meantime, different modes of interaction have been explored. Among these modalities, gestures have been considered a feasible way of performing in-car secondary tasks because of their intuitiveness. However, little research has examined them in terms of subjects' cognitive load. This thesis examined four gesture interfaces (air swipe, air tap, screen swipe, and screen tap) in terms of their effects on drivers' driving performance, secondary task performance, perceived cognitive load, and eye glance behavior. The results showed that air gestures are generally slower than screen gestures with regard to secondary task performance. The screen swipe gesture required the lowest cognitive load, while the air swipe and screen tap gestures imposed comparable load. Subjects in this study preferred the screen swipe gesture most and the air tap gesture least, with no significant preference difference between air swipe and screen tap. Although the air tap and screen tap gestures produced the longest dwell times, no difference in driving performance was found among the four gesture interfaces. The results indicate that even though air gestures are not limited by screen space, the screen swipe gesture still appeared to be the most suitable way of performing the in-car secondary task of music selection.
43

Head gestures as a means of human-computer communication in rehabilitation applications

Perricos, Constantine January 1995
No description available.
44

Evolution of symbolic communication : an embodied perspective

Brown, Jessica Erin January 2012
This thesis investigates the emergence in human evolution of communication through symbols, or conventional, arbitrary signs. Previous work has argued that symbolic speech was preceded by communication through nonarbitrary signs, but how vocal symbolic communication arose out of this has not been extensively studied. Thus far, past research has emphasized the advantages of vocal symbols and pointed to communicative and evolutionary pressures that would have spurred their development. Based on semiotic principles, I examine emergence in terms of two factors underlying symbols: interpretation and conventionalization. I address the question with a consideration of embodied human experience – that is, accounting for the particular features that characterize human communication. This involves simultaneous expression through vocal and gestural modalities, each of which has distinct semiotic properties and serves distinct functions in language today. I examine research on emerging sign systems together with research on properties of human communication to address the question of symbol emergence in terms of the specific context of human evolution. I argue that, instead of in response to pressures for improved communication, symbolic vocalizations could have emerged through blind cultural processes out of the conditions of multimodal nonarbitrary communication in place prior to modern language. Vocalizations would have been interpreted as arbitrary by virtue of their semiotic profile relative to that of gesture, and arbitrary vocalizations could have become conventionalized via the communicative support of nonarbitrary gestures. This scenario avoids appealing to improbable evolutionary and psychological processes and provides a comprehensive and evolutionarily sound explanation for symbol emergence. I present experiments that test hypotheses stemming from this claim. I show that novel arbitrary vocal forms are interpreted and adopted as symbols even when these are uninformative and gesture is the primary mode of communication. I also present computational models that simulate multi-channel, heterosemiotic communication like that of arbitrary speech and nonarbitrary gesture. These demonstrate that information like that provided by gesture can enable the conventionalization of symbols across a population. The results from experiments and simulations together support the claim that symbolic communication could arise naturally from multimodal nonarbitrary communication, offering an explanation for symbol emergence more consistent with evolutionary principles than existing proposals.
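The computational models are only sketched in the abstract above. As a rough, hypothetical illustration of the population-level dynamic they test (arbitrary labels becoming conventional when a nonarbitrary channel occasionally disambiguates the referent), the following minimal naming-game-style simulation can be run as-is; all agent counts and probabilities, including the GESTURE_SUPPORT parameter, are invented for illustration and are not taken from the thesis.

```python
import random

N_AGENTS = 20
N_MEANINGS = 5
GESTURE_SUPPORT = 0.75  # invented: probability the "gesture" channel reveals the referent
ROUNDS = 20000

# Each agent maps meanings to a preferred arbitrary label (initially random).
labels = [f"w{i}" for i in range(N_MEANINGS * 3)]
agents = [{m: random.choice(labels) for m in range(N_MEANINGS)}
          for _ in range(N_AGENTS)]

def round_of_play():
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    meaning = random.randrange(N_MEANINGS)
    word = agents[speaker][meaning]
    # The nonarbitrary gesture channel sometimes makes the referent transparent,
    # giving the hearer grounds to adopt the speaker's arbitrary label.
    if random.random() < GESTURE_SUPPORT:
        agents[hearer][meaning] = word

for _ in range(ROUNDS):
    round_of_play()

# Measure convergence: fraction of agents agreeing on the modal label per meaning.
for m in range(N_MEANINGS):
    counts = {}
    for a in agents:
        counts[a[m]] = counts.get(a[m], 0) + 1
    print(f"meaning {m}: {max(counts.values()) / N_AGENTS:.0%} agreement")
```

In this toy model, agreement per meaning typically approaches 100% after enough rounds, while setting GESTURE_SUPPORT to 0 removes the adoption signal and leaves the population unconverged, loosely mirroring the abstract's claim that gesture-like disambiguating information can enable conventionalization.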
45

Robust Upper Body Pose Recognition in Unconstrained Environments Using Haar-Disparity

Chu, Cheng-Tse January 2008
In this research, an approach is proposed for the robust tracking of upper body movement in unconstrained environments using a Haar-Disparity algorithm together with a novel 2D silhouette projection algorithm. A cascade of boosted Haar classifiers is used to identify human faces in video images, and a disparity map is then used to establish the 3D locations of detected faces. Based on this information, anthropometric constraints define a semi-spherical interaction space for upper body poses. This constrained region serves to prune the search space as well as to validate user poses. Haar-Disparity improves on traditional skin manifold tracking by relaxing constraints on clothing, background and illumination. The 2D silhouette projection algorithm provides three orthogonal views of the 3D objects, allowing tracking of upper limbs to be performed in 2D space rather than by manipulating noisy 3D data directly. This thesis also proposes a complete optimal set of interactions for very large interactive displays. Experimental evaluation covers the performance of alternative camera positions and orientations, accuracy of pointing, direct manipulative gestures, flag semaphore emulation, and principal axes. As a minor part of this research, the usability of interacting using only arm gestures is also evaluated, based on the ISO 9241-9 standard. The results suggest that the proposed algorithm and optimal set of interactions are useful for interacting with large displays.
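As a concrete illustration of the detection stage described above (Haar cascade face detection combined with a stereo disparity map to recover depth), the following is a minimal sketch using OpenCV. The cascade file, the stereo matcher settings, and the focal length and baseline values are placeholder assumptions, not parameters reported in the thesis.

```python
import cv2
import numpy as np

# Hypothetical stereo parameters: the thesis does not report its rig's
# calibration, so these values are placeholders for illustration only.
FOCAL_LENGTH_PX = 700.0  # focal length in pixels
BASELINE_M = 0.12        # distance between the two cameras, in metres

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def locate_faces_3d(left_gray, right_gray):
    """Detect faces in the left image and estimate their depth from disparity."""
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    faces_3d = []
    for (x, y, w, h) in face_cascade.detectMultiScale(left_gray, 1.1, 5):
        # Median disparity over the face region is more robust than a single pixel.
        d = np.median(disparity[y:y + h, x:x + w])
        if d > 0:
            z = FOCAL_LENGTH_PX * BASELINE_M / d       # depth in metres
            faces_3d.append((x + w / 2, y + h / 2, z))  # image centre + depth
    return faces_3d
```

The anthropometric pruning step described in the abstract would then discard any limb hypothesis falling outside a semi-spherical region anchored at each returned face position.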
46

Vision-based analysis, interpretation and segmentation of hand shape using six key marker points

Crawford, Gordon Finlay January 1997
No description available.
47

The role of gesture in British ELT in a university setting

Hague, Elizabeth January 2000
No description available.
48

The Time Is at Hand: The Development of Spatial Representations of Time in Children’s Speech and Gesture

Stites, Lauren 15 December 2016
Children achieve increasingly complex language milestones in gesture before they do so in speech. In this study, we ask whether gesture continues to be part of the language-learning process as children develop more abstract language skills, namely metaphors. More specifically, we focus on spatial metaphors for time and ask whether developmental changes in children's production of such metaphors in speech also become evident in gesture, and what cognitive and linguistic factors contribute to these changes. To answer these questions, we analyzed the speech and gestures produced by three groups of children (ages 3-4, 5-6, and 7-8), all learning English as a first language, as they talked about past and future events, along with adult native speakers of English. We asked how early we see change in the orientation (sagittal vs. lateral), directionality (left-to-right, right-to-left, backward, or forward) and congruency with speech (lateral gestures with Time-RP language and sagittal gestures with Ego-RP language) of these gestures, and how comprehension of metaphors for time and literacy level influence these changes. We found developmental changes in the orientation, directionality, and congruency of children's gestures about time. Children's use of lateral gestures increased with age, and this increase was influenced by their literacy level. The directionality of children's gestures also changed with age: children who understood metaphors for time were more likely to produce sagittal gestures that placed the past behind and the future ahead, while children with higher levels of literacy were more likely to use lateral gestures that placed the past to the left and the future to the right. Finally, the congruency of children's gestures with their speech changed: older children were more likely to pair lateral gestures with Time-RP language than with Ego-RP language.
49

Robotmanipulering med Leap Motion : För små och medelstora företag / Robot manipulation based on Leap Motion : For small and medium sized enterprises

Agell, Ulrica January 2016
On-line programming of industrial robots is time consuming and requires experience in robot programming. As a result, small and medium sized enterprises are hesitant to introduce robots into production. Ongoing research in the field focuses on finding more intuitive interfaces and programming methods that make interaction with robots more natural. This master thesis presents a method for manipulating industrial robots with an external device other than the traditional teach pendant. The core of the method is a PC application that handles the program logic and the communication between an external device and an ABB robot. The program logic is designed to be modular in order to allow customization of the method, both in terms of its functions and the type of external device used. Since gestures are one of the most common forms of human communication, it is worth investigating them as a means of making manipulation of industrial robots more intuitive. A Leap Motion controller is therefore presented as an example of an external device that could be used as an alternative to the teach pendant. The Leap Motion controller specialises in hand and finger position tracking with good absolute accuracy and precision, and its associated Software Development Kit (SDK) provides the capabilities required to implement a teach pendant's most fundamental functionality. Results from a user test show that the developed application is both easy and fast to use but has poor robustness.
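As a hedged sketch of the kind of bridge application the thesis describes (a PC program that reads hand poses from the Leap Motion controller and forwards motion commands to an ABB robot), the snippet below uses the legacy Leap Motion SDK v2 Python bindings, which target Python 2.7. The robot-side address and the line-based "MOVE" message format are invented for illustration; a real cell would define its own protocol, for example in a RAPID socket server running on the controller.

```python
import socket
import time

import Leap  # legacy Leap Motion SDK v2 bindings (assumed installed)

ROBOT_ADDR = ("192.168.125.1", 1025)  # hypothetical ABB controller socket server

def stream_palm_positions():
    """Forward the first tracked hand's palm position to the robot at roughly 20 Hz."""
    controller = Leap.Controller()
    sock = socket.create_connection(ROBOT_ADDR)
    try:
        while True:
            frame = controller.frame()
            if not frame.hands.is_empty:
                pos = frame.hands[0].palm_position  # millimetres, Leap coordinate frame
                # Invented line protocol; the mapping from Leap coordinates to
                # the robot work object is omitted for brevity.
                msg = "MOVE %.1f %.1f %.1f\n" % (pos.x, pos.y, pos.z)
                sock.sendall(msg.encode("ascii"))
            time.sleep(0.05)
    finally:
        sock.close()

if __name__ == "__main__":
    stream_palm_positions()
```

The modularity the abstract describes would correspond to swapping this Leap-specific reading loop for another device driver while keeping the robot-side communication unchanged.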
50

Implementación de una herramienta de integración de varios tipos de interacción humano-computadora para el desarrollo de nuevos sistemas multimodales / Implementation of an integration tool of several types of human-computer interaction for the development of new multimodal systems

Alzamora, Manuel I., Huamán, Andrés E., Barrientos, Alfredo, Villalta Riega, Rosario del Pilar January 2018
People interact with their environment multimodally, that is, using several of their senses simultaneously. In recent years, multimodal human-computer interaction has been pursued by developing new devices and using different communication channels in order to provide a more natural interactive user experience. This work presents a tool that enables the integration of different types of human-computer interaction, and tests it on a multimodal solution. / Peer reviewed
