121

Through Boundaries

Dempsey, Lydia 01 May 2019
Through Boundaries is a string quartet that attempts to recalibrate the way one listens by focusing on two spectra: time and pitch. I explore what falls between the boundaries, in reference to the space between sound and silence and to the frequencies between the conventional twelve pitches. The two movements are contrasting interpretations of the same graphical sketch material, each viewing it from a different perspective. I. In-Between is structured around short gestures juxtaposed with longer stretches of time that lack notated sound. Each gesture represents an unnamed event: one gesture may represent the toss of a ball, for example, and the next its bounce against the concrete. Each of these is a discrete point. At first, the space between these points appears empty, but on closer investigation it overflows with movement and energy. Visually, the ball floats up and falls to the ground. The electronics represent this energy, slowly fading into perception and forming waves of sound that weave in and out, nearly overpowering the quartet. II. Glide explores the interaction of pitches on a micro-level, including quartertones and glissandi that occupy a small pitch range. The movement begins in a still, seemingly static state. Deviations appear, and harmonies blur and crystallize. When a larger melodic interval finally arrives, it is overwhelming.
122

The role of pointing gestures in facilitating word learning

Wu, Zhen 01 May 2015
Previous naturalistic observations have found a robust correlation between infants' spontaneous gesture production and vocabulary development: the onset and frequency of infants' pointing gestures are significantly correlated with their subsequent vocabulary size (Colonnesi, Stams, Koster, & Noom, 2010). The present study first examined the correlation between pointing and vocabulary size in an experimental setting, and then experimentally manipulated responses to pointing to investigate the role of pointing in infants' formation of word-object associations. In the first experiment, we elicited 12- to 24-month-old infants' pointing gestures to 8 familiar and 8 novel objects. Their vocabulary was assessed with the MacArthur Communicative Development Inventory (MCDI): Words and Gestures. Results showed that 12- to 16-month-old infants' receptive vocabulary was positively correlated with their spontaneous pointing. This correlation, however, was not significant in 19- to 24-month-old infants. This experiment thus generalizes previous naturalistic observation findings to an experimental setting and shows a developmental change in the relation between pointing and receptive vocabulary. Together with prior studies, it suggests a possible positive social feedback loop between pointing and language skills in infants younger than 18 months: the larger an infant's vocabulary, the more likely the infant is to point, the more words the infant hears, and the faster the vocabulary grows. In the second experiment, we tested whether 16-month-old infants' pointing gestures facilitate word learning in the moment. Infants were randomly assigned to one of three conditions: the experimenter labeled an unfamiliar object with a novel name 1) immediately after the infant pointed to it (the point-contingent condition); 2) when the infant looked at it; or 3) on a schedule predetermined by a vocabulary-matched infant in the point-contingent condition. After hearing the objects' names, infants were given a word-learning test. Results showed that infants selected the correct referent above chance level only in the point-contingent condition, and their performance was significantly better in the point-contingent condition than in the other two conditions. Therefore, only words that were provided contingently on pointing were learned. Taken together, these two studies further our understanding of the correlation between early gesture and vocabulary development and suggest that pointing plays a role in early word learning.
123

A Study on Handedness in CiTonga Multimodal Interactions

Unknown Date
CiTonga speakers in Malawi describe dominant use of the left hand as distasteful and offensive in face-to-face multimodal interactions, communicative exchanges involving both oral-auditory and visual-gestural actions. They observe a left-hand taboo on religious and social grounds, linking the right hand to "good" and the left hand to "bad". Despite this widespread perception, ciTonga speakers were often observed using their left hand and flouting the taboo even in serious situations where politeness is a social imperative. In this study, I aim to resolve this paradox by arguing that the significance of the left-hand taboo is domain specific. To do this, I collected 101 multimodal interactions (over 50 hours of recording) through participant observation in Cifila and Kavuzi, where ciTonga is spoken as a native language. I analyzed the gestures in two domains of interaction: everyday rituals and ordinary talk. In both domains, flexibility of handedness is determined by a ranking of four contextual constraints. I propose a decision matrix to describe how the type and scale of a constraint explain the permissiveness of left-hand use. CiTonga kinesic signs can rise to taboo status when they violate the handedness convention for interlocutors with distant social relationships, yet over-producing deferential signs can create a social imbalance between close affiliates. Selecting an interaction-appropriate hand preference is therefore an integral part of ciTonga communicative competence. This study of taboo in multimodality shows how the structure and purpose of a domain shape the application of broad sociocultural ideologies to spontaneous interactions in daily life.
124

Co-speech gesture integration in hippocampal amnesia

Clough, Sharice 01 May 2018
Co-speech gesture is ubiquitous in everyday conversation, facilitating comprehension, learning, and memory. Information is often provided uniquely in the gesture modality, and this information is integrated with speech, affecting the listener's comprehension and memory of a message. Despite robust evidence that gesture supports learning, the memory mechanisms that support this learning are unclear. The current study investigates the ability of patients with hippocampal damage to integrate and retain information from co-speech gesture. Four patients with bilateral hippocampal lesions, four patients with damage to the ventral medial prefrontal cortex, and 17 healthy comparison participants watched videos of a storyteller narrating four stories with gestures. Some of the gestures provided information redundant with the speech signal, and some provided unique, supplementary information. The participants retold each story immediately afterward, thirty minutes later, and four weeks later. Co-speech gesture integration was measured as the proportion of words changed as a result of seeing a supplementary gesture. Memory retention for the stories was measured as the number of story features mentioned during each retelling. The patients with hippocampal amnesia successfully integrated speech and gesture information immediately after hearing a story but did not show a memory benefit for gestured features after delays. Though the hippocampus has previously been thought to be critical for relational memory, this finding suggests that the integration of speech and gesture may be mediated by other cognitive mechanisms.
125

Infant cross-fostered chimpanzees develop indexical pointing

Nugent, Susie P. January 2006
Thesis (M.A.)--University of Nevada, Reno, 2006. / "May 15, 2006." Includes bibliographical references (leaves 24-28). Online version available on the World Wide Web.
126

Naturalistic skeletal gesture movement and rendered gesture decoding

Smith, Jason Alan. January 2006
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Sciences, Computer Science Department, 2006. / Includes bibliographical references.
127

Modality-independent and modality-specific aspects of grammaticalization in sign languages

Pfau, Roland; Steinbach, Markus January 2006
One type of internal diachronic change that has been extensively studied for spoken languages is grammaticalization, whereby lexical elements develop into free or bound grammatical elements. Based on data from a wealth of spoken languages, a large number of prototypical grammaticalization pathways have been identified. Moreover, it has been shown that desemanticization, decategorialization, and phonetic erosion are typical characteristics of grammaticalization processes. Not surprisingly, grammaticalization is also responsible for diachronic change in sign languages. Drawing on data from a fair number of sign languages, we show that grammaticalization in visual-gestural languages, as far as the development from lexical to grammatical element is concerned, follows the same developmental pathways as in spoken languages. That is, the proposed pathways are modality-independent. Beyond these intriguing parallels, however, sign languages can also develop grammatical markers from manual and non-manual co-speech gestures. We discuss various instances of grammaticalized gestures and briefly address the modality-specificity of this phenomenon.
128

Design and Evaluation of a Presentation Maestro: Controlling Electronic Presentations Through Gesture

Fourney, Adam January 2009
Gesture-based interaction has long been seen as a natural means of input for electronic presentation systems; however, gesture-based presentation systems have not been evaluated in real-world contexts, and the implications of this interaction modality are not known. This thesis describes the design and evaluation of Maestro, a gesture-based presentation system developed to explore these issues. The work is presented in two parts. The first part describes Maestro's design, which was informed by a small observational study of people giving talks, and Maestro's evaluation, which involved a two-week field study in which Maestro was used for lecturing to a class of approximately 100 students. The observational study revealed that presenters regularly gesture towards the content of their slides. As such, Maestro supports several gestures which operate directly on slide content (e.g., pointing to a bullet causes it to be highlighted). The field study confirmed that audience members value these content-centric gestures. Conversely, the use of gestures for navigating slides is perceived to be less efficient than the use of a remote. Additionally, gestural input was found to result in a number of unexpected side effects which may hamper the presenter's ability to fully engage the audience. The second part of the thesis presents a gesture recognizer based on discrete hidden Markov models (DHMMs). Here, the contributions lie in presenting a feature set and a factorization of the standard DHMM observation distribution, which together allow modeling of a wide range of gestures (e.g., both one-handed and bimanual gestures) with few modeling parameters. To establish the overall robustness and accuracy of the recognition system, five new users and one expert were asked to perform ten instances of each gesture. The system accurately recognized 85% of gestures for new users, rising to 96% for the expert user. In both cases, false positives accounted for fewer than 4% of all detections. These error rates compare favourably to those of similar systems.
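To make the recognition approach concrete, here is a minimal Python sketch of a discrete-HMM gesture classifier of the general kind the abstract describes: one HMM per gesture, with classification by maximum likelihood under the scaled forward algorithm. This is not Fourney's implementation: it uses the standard (unfactored) observation distribution, and the symbol codebook and toy parameters below are illustrative assumptions.

```python
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log P(obs | model) for a discrete HMM via the scaled forward pass."""
    # alpha[i] tracks P(observations so far, current state = i), renormalized
    # at each step for numerical stability; the log of each scale accumulates
    # into the total log-likelihood.
    alpha = start_p * emit_p[:, obs[0]]
    log_lik = 0.0
    for t in range(1, len(obs)):
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ trans_p * emit_p[:, obs[t]]
    return log_lik + np.log(alpha.sum())

def classify(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))

# Two toy gesture models with hand-picked parameters (illustrative only):
# (initial distribution, transition matrix, emission matrix), where emission
# rows give P(symbol | hidden state) over a 3-symbol feature codebook.
models = {
    "point": (np.array([0.9, 0.1]),
              np.array([[0.8, 0.2],
                        [0.3, 0.7]]),
              np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.3, 0.6]])),
    "swipe": (np.array([0.5, 0.5]),
              np.array([[0.5, 0.5],
                        [0.5, 0.5]]),
              np.array([[0.1, 0.6, 0.3],
                        [0.4, 0.4, 0.2]])),
}

# A quantized feature sequence, as a hand tracker's codebook might emit it.
sequence = np.array([0, 0, 1, 2, 2])
print(classify(sequence, models))  # prints the higher-likelihood gesture
```

In a real system the observation symbols would come from quantizing tracked hand features frame by frame, and the per-gesture parameters would be fit with Baum-Welch training rather than specified by hand.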
130

Virtual Mouse: Vision-Based Gesture Recognition

Chen, Chih-Yu 01 July 2003
The thesis describes a method for human-computer interaction through vision-based gesture recognition and hand tracking, consisting of five phases: image grabbing, image segmentation, feature extraction, gesture recognition, and system mouse control. Unlike most previous work, our method recognizes the hand with just one camera and requires no color markers or mechanical gloves. The primary work of the thesis is improving the accuracy and speed of the gesture recognition. The gesture commands are then used to replace the mouse interface on a standard personal computer, controlling application software in a more intuitive manner.
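As a rough illustration of such a five-phase pipeline, the sketch below strings together standard OpenCV calls: skin-color segmentation, contour-based feature extraction, and a crude finger-count rule standing in for the recognition step. It is a hedged approximation, not the method from the thesis: the HSV skin range, area threshold, defect-depth cutoff, and gesture-to-command mapping are all illustrative assumptions, and the final phase prints the command rather than driving the OS cursor.

```python
import cv2
import numpy as np

# Rough HSV skin-color range; a real system needs per-user, per-lighting
# calibration (these bounds are assumptions, not values from the thesis).
SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)

def process_frame(frame):
    """One pass of the pipeline: segment -> features -> gesture label."""
    # Phase 2, segmentation: isolate skin-colored pixels, then clean the
    # mask with a morphological opening to drop speckle noise.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Phase 3, feature extraction: take the largest contour as the hand
    # and its centroid as the cursor position (OpenCV 4 return signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 2000:  # too small to be a hand
        return None
    m = cv2.moments(hand)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    # Deep convexity defects approximate the gaps between extended fingers,
    # giving a crude finger count (defect depth is in 1/256-pixel units).
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    fingers = 1
    if defects is not None:
        fingers += int(np.sum(defects[:, 0, 3] > 10000))

    # Phase 4, recognition: map finger count to a command (illustrative rule).
    gesture = {1: "move", 2: "left-click"}.get(fingers, "idle")
    return gesture, (cx, cy)

# Phases 1 and 5: grab frames from the single camera; a real system would
# translate each (gesture, position) result into OS mouse events here.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = process_frame(frame)
    if result:
        print(result)
    cv2.imshow("virtual mouse", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

In practice, per-user calibration of the skin model and temporal smoothing of the centroid track matter a great deal for the accuracy and speed the abstract emphasizes.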
