11

Consistencies in body-focused hand movements /

Kenner, Andrew N. January 1988
Thesis (Ph. D.)--University of Adelaide, Dept. of Psychology, 1990. / Includes bibliographical references (leaves 288-308).
12

Gesture and attitude /

Piper, Stephen. January 1984
Thesis (M.F.A.)--Rochester Institute of Technology, 1984. / Typescript. Includes bibliographical references (leaf 26).
13

A preliminary study of the interpretation of bodily expression

Blake, William Harold, January 1933
Issued also as Thesis (Ph. D.)--Columbia University. / "Bibliographical references to chapter I": p. 52-54.
14

System der Gebärden dargestellt auf Grund der mittelalterlichen Literatur Frankreichs. (Vorrede. Kap. I) [A system of gestures, presented on the basis of the medieval literature of France (Preface, Chapter I)] /

Lommatzsch, Erhard, January 1910
Thesis--Berlin. / Cover title. Vita. "Die ganze Arbeit wird binnen Jahresfrist in Buchform erscheinen" ("The complete work will appear in book form within a year"). Includes bibliographical references (p. 17-24).
15

Effects of task complexity on manual gesture /

Peabody, Amy, May 2009
Thesis (M.S.)--Missouri State University, 2009. / "May 2009." Includes bibliographical references (leaves 21-23). Also available online.
16

ProGes: A User Interface for Multimedia Devices over the Internet of Things

Ahmadi Danesh Ashtiani, Ali January 2014
With the rapid growth of online devices, the Internet of Things (IoT) is emerging as a paradigm in which everyday devices are connected to the Internet. As the number of devices in the IoT increases, so does the complexity of the interactions between users and devices, creating a need for intelligent user interfaces that assist users in those interactions. Many studies have examined interaction techniques such as proxemic and gesture interaction for controlling multimedia devices over the IoT, but most stop short of a universal solution. The present study proposes a proximity-based, gesture-enabled user interface for multimedia devices over the IoT. The proposed method employs a cloud-based decision engine that helps the user choose and interact with the most appropriate device, relieving the user of the burden of enumerating available devices manually. The decision engine observes the multimedia content and device properties, adaptively learns user preferences, and automatically recommends the most appropriate device to interact with. The system also uses proximity information to identify the user in a group of people and provide him or her with gesture control services. Furthermore, a new hand gesture vocabulary for controlling multimedia devices is derived through a multiphase elicitation study; its main advantage is that it can be used with all multimedia devices. Both the device recommendation system and the gesture vocabulary were evaluated. Users agreed with the recommendation system's proposed interaction 70% of the time, and the average agreement score of the gesture vocabulary (0.56) exceeds the scores reported in similar studies. In an external user evaluation, the average good-match rating was 4.08 out of 5 and the average ease-of-performance rating was 4.21 out of 5. A memory test indicates the vocabulary is easy to remember: participants recalled and performed gestures in 3.13 seconds on average, with an average recall accuracy of 91.54%.
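The 0.56 agreement score cited in this abstract is the standard summary statistic of gesture elicitation studies. The abstract does not spell out the formula used, but such scores are conventionally computed as in Wobbrock et al.'s method: a referent's score is the sum of squared proportions of identical gesture proposals, averaged across all referents to yield the vocabulary-level figure. A minimal sketch of that computation, with an invented set of proposals for illustration (the function name and example data are hypothetical, not from the thesis):

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one referent, following the convention of
    Wobbrock et al. (2005): the sum over each group of identical
    proposals Pi of (|Pi| / |P|)**2, where P is the full proposal set."""
    n = len(proposals)
    return sum((k / n) ** 2 for k in Counter(proposals).values())

# Hypothetical elicitation data for one referent ("volume up"):
# 12 of 20 participants swipe up, 5 rotate clockwise, 3 point upward.
proposals = ["swipe up"] * 12 + ["rotate cw"] * 5 + ["point up"] * 3
print(agreement_score(proposals))  # (12/20)**2 + (5/20)**2 + (3/20)**2 = 0.445
```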
17

The Multimodal Interaction through the Design of Data Glove

Han, Bote January 2015
In this thesis, we propose and present a multimodal interaction system that provides a natural way of human-computer interaction. The core idea is to help users interact with the machine naturally by recognizing their gestures through a wearable device. To achieve this goal, we implemented a system comprising both a hardware solution and gesture recognition approaches. For the hardware, we designed and implemented a data-glove-based interaction device with multiple kinds of sensors to detect finger formations, touch commands, and hand postures. We also adapted and implemented two gesture recognition approaches, one based on a support vector machine (SVM) and one based on a lookup table; the detailed design is presented in this thesis. The resulting system supports over 30 kinds of touch commands, 18 kinds of finger formations, and 10 kinds of hand postures, as well as combinations of finger formation and hand posture, with a recognition rate of 86.67% and accurate touch command detection. We also evaluated the system in terms of subjective user experience.
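The abstract names an SVM as one of the two recognizers but gives no implementation details, so the following is only a sketch of what a posture classifier over glove sensor frames might look like. The feature layout, class count mapping, and synthetic training data are assumptions standing in for real glove readings (Python with scikit-learn):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for glove data: 8 hypothetical features per frame
# (e.g., five finger-flex readings plus a 3-axis orientation estimate).
rng = np.random.default_rng(seed=0)
X_train = rng.normal(size=(200, 8))       # 200 labeled sensor frames
y_train = rng.integers(0, 10, size=200)   # 10 hand-posture classes, as reported

# Standardize the sensor ranges, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)

# Classify a few incoming frames into posture classes.
print(clf.predict(X_train[:3]))
```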
18

Coordinating Joint Action in a Real-Life Activity: The Interplay of Explicit and Implicit Coordination

Zheng, Chen January 2022
Humans engage in joint actions on a daily basis. Some of these joint actions are explicitly coordinated using, for example, speech and gesture, while others are implicitly coordinated through the actions themselves. The first chapter of this dissertation reviews the use of speech, gesture, and intentional behavioral signals in the explicit coordination of joint action and identifies three cognitive mechanisms that enable implicit coordination: motor resonance, joint intentionality, and environmental and social affordance. The second chapter reports an empirical study exploring explicit and implicit coordination in a complex real-life joint activity, assembling a TV cart from its parts. We coded the content of the utterances and gestures that pairs of participants used throughout the assembly and the major and subordinate joint actions they performed. We then coded how each joint action was coordinated, that is, through speech, gesture, or the action itself. The results showed that speech and gesture served primarily to establish and sustain a shared mental model of the environmental affordances between the co-actors, which occurred mainly at the beginning of the task and as the participants began to attach two major parts. For major and subordinate joint actions alike, the specifics of the joint actions, such as the goal and the division of labor, were coordinated primarily implicitly. We argue that the shared mental model scaffolded the participants' implicit coordination of the actions. These findings provide evidence that action itself is a communicative device and part of the conversation between co-actors in a joint activity. They also lend support to the argument that joint action cannot be fully understood at the individual level but must be interpreted as a collective of which each individual is a part.
19

The voice as gesture in Meredith Monk's ATLAS /

Pym, Rebekah January 2002
No description available.
20

Parental Translation of Child Gesture Helps the Vocabulary Development of Bilingual Children

Mateo, Valery Denisse 08 August 2017
Monolingual children identify referents uniquely in gesture before they do so with words, and parents translate these gestures into words. Children benefit from these translations, acquiring the words their parents translated earlier than the ones that are not translated. Are bilingual children as likely as monolingual children to identify referents uniquely in gesture, and, if so, do parental translations have the same positive impact on the vocabulary development of bilingual children? Our results showed that bilingual children, whether dominant in English or in Spanish, were as likely as monolingual children to identify referents uniquely in gesture. More important, the unique gestures translated into words by the parents were as likely to enter bilingual children's speech as they are to enter monolinguals', independent of language dominance. Our results suggest that parental response to child gesture plays as crucial a role in the vocabulary development of bilingual children as it does in that of monolinguals.
