About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Parental Translation of Child Gesture Helps the Vocabulary Development of Bilingual Children

Mateo, Valery Denisse 08 August 2017 (has links)
Monolingual children identify referents uniquely in gesture before they do so with words, and parents translate these gestures into words. Children benefit from these translations, acquiring the translated words earlier than the ones that were not translated. Are bilingual children as likely as monolingual children to identify referents uniquely in gesture, and, if so, do parental translations have the same positive impact on the vocabulary development of bilingual children? Our results showed that bilingual children, whether dominant in English or in Spanish, were as likely as monolingual children to identify referents uniquely in gesture. More importantly, the unique gestures that parents translated into words were as likely to enter bilingual children’s speech as monolingual children’s, independent of language dominance. Our results suggest that parental response to child gesture plays as crucial a role in the vocabulary development of bilingual children as it does in that of monolingual children.
22

How Do Gestures Reflect Thought and When Do They Affect Thought?

Zrada, Melissa January 2018 (has links)
People perform gestures both while communicating with others and while thinking to themselves. Gestures that people perform for themselves when they are alone can reveal a great deal about what they are thinking, and are also believed to improve comprehension and memory. Previous research has demonstrated that people gesture when information can be mapped directly to a spatial representation, for example on tests of spatial thinking. Less widely researched is whether people gesture for information that is not inherently spatial, or even for information that is neither spatial nor relational. And if individuals do gesture for these other types of stimuli, what types of gestures do they perform, and does gesturing improve memory? This work provides evidence that people do gesture even when the information is not inherently spatial. For information that is not spatial but relational, people perform representational gestures, for example creating an ordered list with their hands to represent preferences among movie genres. For non-relational information, people use considerably fewer representational gestures, but can be observed using beat gestures, which are believed to help in keeping track of information. These studies did not provide strong evidence for the claim that gestures help people understand and remember information, as gesturing was beneficial for only one type of stimulus (mechanical systems). However, future research with more sensitive measures has the potential to reveal such effects.
23

Can participants extract subtle information from gesturelike visual stimuli that are coordinated with speech without using any other cues?

Abdalla, Marwa 01 May 2012 (has links)
Embodied cognition is the reflection of an organism's interaction with its environment on its cognitive processes. We explored whether participants are able to pick up on subtle cues from gestures, using the Tower of Hanoi task. Previous research has shown that listeners are sensitive to the height of the gestures they observe, and reflect this knowledge in their mouse movements (Cook & Tanenhaus, 2009). Participants in our study watched a modified video of someone explaining the Tower of Hanoi puzzle solution, in which they saw only a black background with two moving dots representing the hand positions from the original explanation in space and time. We parametrically manipulated the location of the dots to examine whether listeners were sensitive to this subtle variation. We selected the transfer gestures from the original explanation and tracked the hand positions with dots at varying heights relative to the original gesture height; the experimental gesture heights reflected 0%, 25%, 50%, 75%, and 100% of this original height. Based on previous research (Cook, in prep), we predicted that participants would extract the difference in gesture height and reflect it in their mouse movements when solving the problem. Using a linear model for our analysis, we found that the starting trajectory confirmed our hypothesis. However, when looking at the averaged first 15 moves (the minimum needed to solve the puzzle) across the five conditions, the ordered effect of the gesture heights was lost, although there were still apparent differences between the gesture heights. This is an important finding because it shows that participants are able to glean subtle height information from gestures: listeners truly interpret iconic gestures iconically.
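The height manipulation the abstract describes can be pictured as a simple rescaling of tracked hand positions around a baseline. The sketch below is a hypothetical reconstruction for clarity, not the study's actual stimulus-generation code; all names and the toy path are illustrative.

```typescript
// Hypothetical sketch of the stimulus manipulation: the vertical excursion of
// a tracked hand path is rescaled around a baseline so that factor = 1.0
// reproduces the original gesture height and factor = 0.0 flattens it.

type Point = { x: number; y: number };

function scaleGestureHeight(path: Point[], baselineY: number, factor: number): Point[] {
  return path.map((p) => ({ x: p.x, y: baselineY + (p.y - baselineY) * factor }));
}

// A toy hand path (a simple arc) and the study's five height conditions.
const baselineY = 0;
const originalPath: Point[] = [{ x: 0, y: 0 }, { x: 1, y: 8 }, { x: 2, y: 0 }];
const conditions = [0, 0.25, 0.5, 0.75, 1.0].map((f) =>
  scaleGestureHeight(originalPath, baselineY, f)
);
```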
24

The Gesture and the Drip

Breton, Nicholas 14 May 2013 (has links)
The Gesture and the Drip investigates our increasing reliance on digital media as a means of encountering and viewing artworks online as photographic documentation. This body of work attempts to place significance on the human gesture in relation to the loss of human presence that often accompanies digital documentation. The gesture is a recurring element that can be traced throughout my thesis body of work. Some gestures are tactile marks made by my hand; others are the result of photographic reproduction, silk-screened onto the surface. A paradox forms between the real and the illusory, which become interchangeable on the canvas. My paintings encompass authentic and mediated gestures to challenge the visual experience and disrupt a logical reading.
25

Application of single and multi-touch gestures in a WebGL molecule viewer

Slininger, Andrew David 07 November 2011 (has links)
The number of devices with touch input, such as smartphones, computers, and tablets, has grown extensively in recent years. Native applications on these devices have access to this touch and gesture information and can provide a rich, interactive experience. Web applications, however, lack a consistent and uniform way to retrieve touch and gesture input. With the quality and robustness of web applications continually growing and replacing native applications in many areas, a way to access and harness touch input is critical. This paper proposes two JavaScript libraries that provide a reliable and easy way for web applications to use touch efficiently and effectively. First, getTjs abstracts the gathering of touch events for most mobile and desktop touch devices. GenGesjs, the second library, receives this information and identifies gestures based on the touch input. Web applications can have this gesture information pushed to them as it is received, or instead request the most recent gestures when desired. An example of interfacing with both libraries is provided in the form of WebMol, a web application that allows for three-dimensional viewing of molecules using WebGL. Gestures from GenGesjs are translated into interactions with the molecules, providing an intuitive interface for users. Using both of these libraries, web applications can easily tap into touch input, resulting in an improved user experience regardless of the device.
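The two-layer pattern the abstract describes, one layer normalizing raw touch events and another recognizing gestures and pushing them to subscribers, can be sketched as follows. This is a minimal hypothetical sketch of that architecture, not the actual getTjs/GenGesjs API, whose names and signatures the abstract does not specify.

```typescript
// Minimal sketch of the two-layer pattern described above. All names are
// hypothetical; this is not the actual getTjs/GenGesjs interface.

type TouchPoint = { id: number; x: number; y: number; t: number };
type Gesture =
  | { kind: "drag"; dx: number; dy: number }
  | { kind: "pinch"; scale: number };

// Layer 1 (getTjs's role): normalize browser touch events into a uniform stream.
function watchTouches(el: HTMLElement, onTouches: (pts: TouchPoint[]) => void): void {
  const handler = (e: TouchEvent) => {
    e.preventDefault();
    onTouches(Array.from(e.touches, (t) => ({
      id: t.identifier, x: t.clientX, y: t.clientY, t: performance.now(),
    })));
  };
  for (const type of ["touchstart", "touchmove", "touchend"] as const) {
    el.addEventListener(type, handler as EventListener);
  }
}

// Layer 2 (GenGesjs's role): identify gestures from the touch stream and
// push them to subscribers as they are recognized.
class GestureRecognizer {
  private listeners: Array<(g: Gesture) => void> = [];
  private last: TouchPoint[] = [];

  subscribe(fn: (g: Gesture) => void): void { this.listeners.push(fn); }

  feed(pts: TouchPoint[]): void {
    if (pts.length === 1 && this.last.length === 1) {
      // One finger moving: report a drag delta.
      this.emit({ kind: "drag", dx: pts[0].x - this.last[0].x, dy: pts[0].y - this.last[0].y });
    } else if (pts.length === 2 && this.last.length === 2) {
      // Two fingers: report the change in finger spread as a pinch scale.
      const spread = (p: TouchPoint[]) => Math.hypot(p[0].x - p[1].x, p[0].y - p[1].y);
      this.emit({ kind: "pinch", scale: spread(pts) / spread(this.last) });
    }
    this.last = pts;
  }

  private emit(g: Gesture): void { for (const fn of this.listeners) fn(g); }
}
```

In a viewer like WebMol, a subscriber would presumably map drag deltas to molecule rotation and pinch scale to camera zoom; the push model keeps application code free of raw event handling.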
26

When gestures are perceived through sounds: a framework for sonification of musicians' ancillary gestures

Savard, Alexandre. January 2008 (has links)
This thesis presents a multimodal sonification system that combines video with sound synthesis generated from motion capture data. Such a system allows for fast and efficient exploration of musicians' ancillary gestural data, for which sonification complements conventional video by stressing details that could escape one's attention if not displayed using an appropriate representation. The main objective of this project is to provide a research tool for people who are not necessarily familiar with signal processing or computer science, capable of easily generating meaningful sonifications thanks to dedicated mapping strategies. On the one hand, dimensionality reduction is fundamental for data obtained from motion capture systems such as the Vicon, which may produce more than 350 signals describing gesture. For that reason, a Principal Component Analysis is used to objectively reduce the signals to a subset that conveys the most significant gesture information in terms of signal variance. On the other hand, movement data varies considerably across subjects: additional control parameters for sound synthesis are offered to restrict the sonification to the significant gestures, those easily perceivable visually in terms of speed and path distance. Signal conditioning techniques are then proposed to adapt the control signals to the requirements of the sound synthesis parameters, or to emphasize gesture characteristics that one finds important. All of these data treatments are performed in realtime within a single environment, minimizing data manipulation and facilitating efficient sonification design. Realtime processing also allows the system to respond instantaneously to parameter changes and process selection, so that the user can easily and interactively manipulate data and design and adjust sonification strategies.
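As a rough illustration of the PCA step, here is a minimal sketch that extracts the top principal component of a set of motion-capture frames via power iteration on their covariance matrix. It is illustrative only, assuming dense frames of numeric channels; the thesis's realtime pipeline and mapping strategies are not reproduced here.

```typescript
// Minimal PCA sketch: find the direction of greatest variance across
// n-channel motion-capture frames using power iteration on the covariance
// matrix. Projecting frames onto it yields one control signal for synthesis.

function topPrincipalComponent(frames: number[][]): { mean: number[]; pc: number[] } {
  const m = frames.length;
  const n = frames[0].length; // number of channels

  // Channel-wise mean, used to center the data.
  const mean = Array(n).fill(0);
  for (const f of frames) for (let i = 0; i < n; i++) mean[i] += f[i] / m;

  // Covariance matrix of the centered frames.
  const cov = Array.from({ length: n }, () => Array(n).fill(0));
  for (const f of frames)
    for (let i = 0; i < n; i++)
      for (let j = 0; j < n; j++)
        cov[i][j] += ((f[i] - mean[i]) * (f[j] - mean[j])) / (m - 1);

  // Power iteration converges to the eigenvector with the largest eigenvalue.
  let pc = Array(n).fill(1 / Math.sqrt(n));
  for (let iter = 0; iter < 100; iter++) {
    const w = cov.map((row) => row.reduce((s, c, j) => s + c * pc[j], 0));
    const norm = Math.hypot(...w);
    pc = w.map((x) => x / norm);
  }
  return { mean, pc };
}

// Project one frame onto the component to get a scalar control value.
const project = (frame: number[], mean: number[], pc: number[]): number =>
  frame.reduce((s, x, i) => s + (x - mean[i]) * pc[i], 0);
```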
27

Alignment of speech and co-speech gesture in a constraint-based grammar

Saint-Amand, Katya January 2013 (has links)
This thesis concerns the form-meaning mapping of multimodal communicative actions consisting of speech signals and improvised co-speech gestures produced spontaneously with the hand. The interaction between speech and speech-accompanying gestures has standardly been addressed from a cognitive perspective, to establish the cognitive mechanisms underlying synchronous speech and gesture production, and from a computational perspective, to build computer systems that communicate through multiple modalities. Building on this previous research, we advance a new theory in which the mapping from the form of the combined speech-and-gesture signal to its meaning is analysed in a constraint-based multimodal grammar. We propose several construction rules for multimodal well-formedness, motivated empirically by an extensive and detailed corpus study. In particular, the construction rules use the prosody, syntax, and semantics of speech, the form and meaning of the gesture signal, and the temporal performance of the speech relative to that of the gesture to constrain the derivation of a single multimodal syntax tree, which in turn determines a meaning representation via standard mechanisms for semantic composition. Gestural form often underspecifies its meaning, and so the output of our grammar consists of underspecified logical formulae that support the range of possible interpretations of the multimodal act in its final context-of-use, given current models of the semantics/pragmatics interface. It is standardly held in the gesture community that the co-expressivity of speech and gesture is determined by their temporal co-occurrence: that is, a gesture signal is semantically related to the speech signal that happened at the same time as the gesture. Whereas this is usually taken for granted, we propose a methodology for establishing, in a systematic and domain-independent way, which spoken element(s) a gesture can be semantically related to, based on their form, so as to yield a meaning representation that supports the intended interpretation(s) in context. The ‘semantic’ alignment of speech and gesture is thus driven not by temporal co-occurrence alone, but also by the linguistic properties of the speech signal the gesture overlaps with. In so doing, we contribute a fine-grained system for articulating the form-meaning mapping of multimodal actions using standard methods from linguistics. We show that just as language exhibits ambiguity in both form and meaning, so do multimodal actions: for instance, the integration of gesture is not restricted to a unique speech phrase; rather, speech and gesture can be aligned in multiple multimodal syntax trees, yielding distinct meaning representations. These multiple mappings stem from the fact that the meaning derived from gesture form is highly incomplete even in context. An overall challenge is thus to account for the range of possible interpretations of the multimodal action in context using standard methods from linguistics for syntactic derivation and semantic composition.
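To make the alignment idea concrete, the following toy sketch filters a gesture's candidate speech attachments by temporal overlap plus one simplified linguistic condition. It is a hypothetical illustration of the general idea only, not the thesis's constraint-based grammar formalism; all types and the prosodic condition are assumptions.

```typescript
// Toy illustration: a gesture may attach only to speech phrases that it
// temporally overlaps AND whose linguistic properties license attachment
// (here crudely reduced to prosodic prominence). Several surviving
// candidates correspond to several multimodal syntax trees, and hence to
// distinct meaning representations.

type Span = { start: number; end: number }; // time in seconds
type Phrase = Span & { text: string; prosodicallyProminent: boolean };
type GestureSignal = Span & { form: string };

const overlaps = (a: Span, b: Span): boolean => a.start < b.end && b.start < a.end;

function attachmentSites(g: GestureSignal, phrases: Phrase[]): Phrase[] {
  return phrases.filter((p) => overlaps(g, p) && p.prosodicallyProminent);
}
```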
28

Typologies of Movement in Western Percussion Performance: A Study of Marimbists' Gestures

Colton, Michelle 02 August 2013 (has links)
Musicians are on stage not only to be heard but also to be seen; the visual aspects of music are a crucial part of the experience. Whether performers move too much or too little for a particular audience member, their gestures are often noticed: some audiences may enjoy certain gestures while others find them distracting. To study this topic in greater detail, I view my research through the lens of marimba performance. The marimba is a large instrument on which producing a sound can involve many movements, so the way marimbists move while playing is especially noticeable. When I interviewed ten professional marimbists in a 2011/2012 study, most participants discussed distracting gestures as a negative part of performance and said that they try to avoid extra gestures unless they relate to the music. The same participants were video-recorded performing four excerpts from standard marimba repertoire by Gordon Stout, J.S. Bach, and Keiko Abe. The results of the analysis include: 1) gesture repetition across multiple takes of the same excerpt; 2) the areas of the body that I observed moving most in each participant; 3) a comparison of each participant to the others; and 4) overall results, patterns, and trends. This research also includes a discussion of the literature on visual aspects of music performance, insight into why performers move the way they do, an explanation of “sound-producing” versus “ancillary” gestures, and a detailed discussion of my research study. Although this study will not lead to conclusions that can be applied to all marimbists, it nonetheless provides an important contribution to physical-gesture research in music performance by presenting patterns and trends from a comparative study of ten professional musicians.
