11

Practicing phonomimetic (conducting-like) gestures facilitates vocal performance of typically developing children and children with autism: an experimental study

Bingham, Emelyne Marie 21 December 2020 (has links)
Every music teacher is likely to teach one or more children with autism, given that roughly one in 54 persons in the United States receives a diagnosis of Autism Spectrum Disorder (ASD). Persons with ASD often show tremendous interest in music, and some even become masterful performers; however, the combination of deficits and abilities associated with ASD can pose unique challenges for music teachers. This experimental study shows that phonomimetic (conducting-like) gestures can be used to teach the expressive qualities of music. Children were asked to watch video recordings of conducting-like gestures and to produce vocal sounds matching the gestures. The empirical findings indicate that motor training can strengthen visual-to-vocomotor couplings in both populations, suggesting that phonomimetic gesture may be a suitable approach for teaching musical expression in inclusive classrooms.
12

The impact of an auditory task on visual processing:implications for cellular phone usage while driving

Cross, Ginger Wigington 03 May 2008 (has links)
Previous research suggests that cellular phone conversations or similar auditory/conversational tasks degrade visual processing. Three contemporary theories make different claims about the nature of the degradation that occurs when we talk on a cellular phone. We are either: (a) disproportionately more likely to miss objects located in the most peripheral areas of the visual environment, due to a reduction in the size of the attentional window or functional field of view (Atchley & Dressel, 2004); (b) more likely to miss objects from all areas of the visual environment (even at the center of fixation) because attention is withdrawn from the roadway, leading to inattention blindness or general interference (Strayer & Drews, 2006; Crundall, Underwood, & Chapman, 1999; 2002); or (c) more likely to miss objects located on the side of the visual environment contralateral to the cellular phone message, due to crossmodal links in spatial attention (Driver & Spence, 2004). These three theories were compared by asking participants to complete central and peripheral visual tasks (i.e., a measure of the functional field of view) in isolation and in combination with an auditory task. During the combined visual/auditory task, peripheral visual targets could appear on the same side as auditory targets or on the opposite side. When the congruency between auditory and visual target locations was not considered (as is typical in previous research), the results were consistent with the general interference/inattention blindness theory, but not the reduced functional field of view theory. Yet, when congruency effects were considered, the results supported the theory that crossmodal links affect the spatial allocation of attention: participants were better at detecting and localizing peripheral visual targets and at generating words for the auditory task when attention was directed to the same location in both modalities.
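To make the contrast between the three accounts concrete, here is a toy sketch of the distinct detection-cost patterns each theory predicts. The parameter values are illustrative stand-ins, not data or a model from the dissertation:

```python
# Toy formalization of the three competing accounts -- illustrative
# numbers only, not results from the thesis.

def detection_cost(theory: str, eccentricity_deg: float, congruent_side: bool) -> float:
    """Predicted drop in detection probability for a visual target
    while a concurrent auditory task is performed.

    eccentricity_deg: distance of the target from fixation.
    congruent_side: True if the target is on the same side as the audio.
    """
    if theory == "reduced_field":         # Atchley & Dressel: cost grows with eccentricity
        return 0.02 * eccentricity_deg
    if theory == "general_interference":  # Strayer & Drews: uniform cost at all locations
        return 0.15
    if theory == "crossmodal_links":      # Driver & Spence: cost depends on side congruency
        return 0.05 if congruent_side else 0.25
    raise ValueError(f"unknown theory: {theory}")

# Only the crossmodal-links account predicts a congruency effect, which is
# why analyzing same-side vs. opposite-side trials separately can
# distinguish it from the other two.
```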
13

Delineating the Neural Circuitry Underlying Crossmodal Object Recognition in Rats

Reid, James 15 September 2011 (has links)
Previous research has indicated that the perirhinal cortex (PRh) and posterior parietal cortex (PPC) functionally interact to mediate crossmodal object representations in rats; however, it remains to be seen whether other cortical regions contribute to this cognitive function. The prefrontal cortex (PFC) has been widely implicated in crossmodal tasks and might underlie either a unified multimodal or amodal representation, or a comparison mechanism that allows object information to be integrated across sensory modalities. The hippocampus (HPC) is also a strong candidate, with extensive polymodal inputs, and has been implicated in some aspects of object recognition. A series of lesion-based experiments assessed the roles of the HPC, the PFC, and PFC subregions [medial prefrontal cortex (mPFC) and orbitofrontal cortex (OFC)], revealing functional dissociations between these brain regions using two versions of crossmodal object recognition: (1) spontaneous crossmodal matching (CMM), which requires rats to compare a stored tactile object representation with visually presented objects to discriminate novel from familiar stimuli; and (2) crossmodal object association (CMA), in which simultaneous pre-exposure to the tactile and visual elements of an object enhances CMM performance across long retention delays. Notably, while inclusive PFC lesions impaired both CMM and CMA, selective OFC lesions disrupted only CMM, whereas selective mPFC damage did not impair performance on either task. Furthermore, HPC lesions had no impact on either CMM or CMA. Thus, the PFC and the OFC play a selective role in crossmodal object recognition, but the exact contributions and interactions of these regions will require further research to elucidate. / Natural Sciences and Engineering Research Council of Canada (NSERC)
14

Signal compatibility as a modulatory factor for audiovisual multisensory integration

Parise, Cesare Valerio January 2013 (has links)
The physical properties of the distal stimuli activating our senses are often correlated in nature; it would therefore be advantageous to exploit such correlations to better process sensory information. Stimulus correlations can be contingent and readily available to the senses (like the temporal correlation between mouth movements and vocal sounds in speech), or can be the result of the statistical co-occurrence of certain stimulus properties that can be learnt over time (like the relation between the frequency of acoustic resonance and the size of the resonator). Over the last century, a large body of research on multisensory processing has demonstrated the existence of compatibility effects between individual features of stimuli from different sensory modalities. Such compatibility effects, termed crossmodal correspondences, possibly reflect the internalization of the natural correlations between stimulus properties. The present dissertation assesses the effects of crossmodal correspondences on multisensory processing and reports a series of experiments demonstrating that crossmodal correspondences influence the processing rate of sensory information, distort perceptual experiences, and lead to stronger multisensory integration. Moreover, a final experiment investigating the effects of contingent signal correlations on multisensory processing demonstrates the key role of temporal correlation in inferring whether or not two signals have a common physical cause (i.e., the correspondence problem). A Bayesian framework is proposed to interpret the present results, whereby stimulus correlations, represented in the prior distribution of expected crossmodal co-occurrence, operate as cues to solve the correspondence problem.
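The Bayesian framing of the correspondence problem can be illustrated with a minimal sketch. The generative assumptions below (Gaussian likelihoods for the observed correlation under common-cause vs. independent-cause hypotheses, and the specific spreads) are hypothetical stand-ins, not the dissertation's actual model:

```python
import numpy as np

# Minimal sketch: given an observed temporal correlation r between an
# auditory and a visual signal, how probable is a common physical cause?
# The likelihood shapes and spreads below are assumed for illustration.

def posterior_common_cause(r, prior_common=0.5,
                           sigma_common=0.15, sigma_indep=0.45):
    """P(common cause | observed correlation r)."""
    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    like_common = gauss(r, 1.0, sigma_common)  # under a common cause, r is near 1
    like_indep = gauss(r, 0.0, sigma_indep)    # under independent causes, r is near 0
    evidence = prior_common * like_common + (1 - prior_common) * like_indep
    return prior_common * like_common / evidence

# Strongly correlated signals are attributed to a single source...
print(posterior_common_cause(0.9))  # high posterior -> integrate
# ...while weakly correlated signals are kept separate.
print(posterior_common_cause(0.1))  # low posterior -> segregate
```

On this view, the prior over crossmodal co-occurrence is where learnt correspondences would enter: a strong correspondence raises `prior_common`, biasing the observer toward integration.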
15

THE EFFECTS OF LONG-TERM DEAFNESS ON DENSITY AND DIAMETER OF DENDRITIC SPINES ON PYRAMIDAL NEURONS IN THE DORSAL ZONE OF THE FELINE AUDITORY CORTEX

Bauer, Rachel J 01 January 2019 (has links)
Neuroplasticity has been researched in many different ways, from the growing neonatal brain to neural responses to trauma and injury. According to recent research, neuroplasticity is also evident in the brain's ability to repurpose areas that are no longer in use, as in the case of the loss of a sense. Specifically, behavioral studies have shown that deaf humans (Bavelier and Neville, 2002) and cats have increased visual ability, and that different areas of the auditory cortex enhance specific kinds of sight. One such behavioral test demonstrated that the dorsal zone (DZ) of the auditory cortex enhances sensitivity to visual motion through cross-modal plasticity (Lomber et al., 2010). Current research seeks to identify the anatomical structures responsible for these changes through analysis of excitatory neuron dendritic spine density and spine head diameter. The present study focuses on the examination of DZ neuron spine density, distribution, and size in deaf and hearing cats to corroborate the visual changes seen in behavioral studies. Using Golgi-stained tissue and light microscopy, our results showed a decrease in overall spine density but a slight increase in spine head diameter in deaf cats compared to hearing cats. These results, along with several other studies, support multiple theories of how cross-modal reorganization of the auditory cortex occurs after deafening.
16

The Role of Concepts in Perception

Connolly, Kevin L. 19 January 2012 (has links)
The claim of my dissertation is that some basic concepts are required for perception. Non-basic concepts we acquire, and I give an account of how that process changes our perception. Suppose you are looking at the Mona Lisa. It might seem that you can perceive many more shades of color and many more shapes than you possess precise concepts for. I argue against this. For every color or shape in appearance, you have the ability to categorize it as that color or shape; it is just that this is done by your sensory system prior to appearance. I argue that empirical studies show this. Blindsighted patients, for instance, are blind in part of their visual field, but they can use color and shape information received through the blind portion. I take this, along with other studies, to show that once you perceive a color or shape, it has already been categorized. I then argue that we perceive only low-level properties like colors and shapes. For instance, we don't perceive high-level kind properties like being a table or being a wren. I do think that wrens or tables might look different to you after you become disposed to recognize them. Some take this to show that being a wren or being a table can be represented in your perception. I argue that this inference does not follow. If you are not disposed to recognize wrens, but we track the attention of someone who is, and we get you to attend to wrens in that same way, your visual phenomenology might be exactly the same as theirs. But there is no reason to think that it represents a wren; after all, you lack a recognitional disposition for wrens. I take this and other arguments to show that we perceive only low-level properties like colors and shapes.
17

Crossmodal Modulation as a Basis for Visual Enhancement of Auditory Performance

Qian, Cheng 15 February 2010 (has links)
The human sensory system processes many modalities simultaneously. It was long believed that each modality is processed individually first, with their combination deferred to higher-level cortical areas. Recent neurophysiological investigations indicate interconnections between early visual and auditory cortices, areas putatively considered unimodal, but their function remains unclear. The present work explores how this cross-modality might contribute to a visual enhancement of auditory performance, using a combined theoretical and experimental approach. The enhancement of sensory performance was studied through a signal detection framework. A model was constructed using principles from signal detection theory and neurophysiology, demonstrating enhancements of roughly 1.8 dB both analytically and through simulation. Several experiments were conducted to observe the effects of visual cues on a 2-alternative-forced-choice detection task of an auditory tone in noise. Results of the main experiment showed an enhancement of 1.6 dB. Better enhancement also tended to occur for more realistic relationships between auditory and visual stimuli.
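For a sense of what a ~1.8 dB enhancement means behaviorally, the back-of-envelope sketch below converts a decibel gain into a change in 2AFC accuracy. It assumes a textbook equal-variance signal detection model in which d′ scales with amplitude SNR; the baseline d′ is an arbitrary illustrative value, not a figure from the thesis:

```python
import math

# Back-of-envelope sketch of the signal-detection framing above -- an
# assumed textbook relation, not the model actually used in the thesis.

def d_prime_with_gain(d_prime_baseline, gain_db):
    """d' after an effective SNR improvement of gain_db decibels,
    assuming d' scales with amplitude SNR (20 dB per decade)."""
    return d_prime_baseline * 10 ** (gain_db / 20)

def pc_2afc(d_prime):
    """Proportion correct in 2AFC: Phi(d'/sqrt(2)) for the
    equal-variance Gaussian model, written via erf."""
    return 0.5 * (1 + math.erf(d_prime / 2))

baseline = 1.0                               # assumed baseline sensitivity
enhanced = d_prime_with_gain(baseline, 1.8)  # the ~1.8 dB model prediction
print(pc_2afc(baseline), pc_2afc(enhanced))  # roughly 0.76 -> 0.81
```

Under these assumptions, a 1.8 dB gain corresponds to a modest but measurable rise in detection accuracy, which is the scale of effect the experiments were designed to pick up.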
20

Applications of Crossmodal Relationships in Interfaces for Complex Systems: A Study of Temporal Synchrony

Giang, Wayne Chi Wei January 2011 (has links)
Current multimodal interfaces for complex systems, such as those designed using the Ecological Interface Design (EID) methodology, have largely focused on designing interfaces that treat each sensory modality as either an independent channel of information or as a way to provide redundant information. However, operationally related information is often presented in different sensory modalities, and very little research has examined how such information can be linked at a perceptual level. When related information is presented through multiple sensory modalities, interface designers will require perceptual methods for linking relevant information together across modalities. This thesis examines one possible crossmodal perceptual relationship, temporal synchrony, and evaluates whether the relationship is useful in the design of multimodal interfaces for complex systems. Two possible metrics for the evaluation of crossmodal perceptual relationships were proposed: resistance to changes in workload, and stream monitoring awareness. Two experiments were used to evaluate these metrics. The results of the first experiment showed that temporal rate synchrony was not resistant to changes in workload, manipulated through a secondary visual task. The results of the second experiment showed that participants who used crossmodal temporal rate synchrony to link information in a multimodal interface did not monitor the two information streams any better than participants using equivalent unimodal interfaces. Taken together, these findings suggest that temporal rate synchrony may not be an effective method for linking information across modalities, and that crossmodal perceptual relationships may be very different from intra-modal perceptual relationships. Nevertheless, methods for linking information across sensory modalities remain an important goal for interface designers and a key feature of future multimodal interface design for complex systems.
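As a rough illustration of the relationship under study, the sketch below shows one plausible way to quantify temporal rate synchrony between two event streams by correlating their binned event rates. The binning scheme, parameters, and simulated streams are assumptions for illustration, not the thesis's method or data:

```python
import numpy as np

# Illustrative sketch (not the thesis's implementation): quantify
# temporal *rate* synchrony between a visual and an auditory event
# stream by correlating their event counts in fixed time bins.

def rate_synchrony(events_a, events_b, window=1.0, duration=60.0):
    """Pearson correlation between the two streams' binned event rates."""
    bins = np.arange(0.0, duration + window, window)
    rate_a, _ = np.histogram(events_a, bins=bins)
    rate_b, _ = np.histogram(events_b, bins=bins)
    return np.corrcoef(rate_a, rate_b)[0, 1]

rng = np.random.default_rng(0)
shared = np.cumsum(rng.exponential(0.5, 120))       # one underlying rhythm, ~60 s
jittered = shared + rng.normal(0, 0.05, 120)        # same rhythm, slightly jittered
independent = np.cumsum(rng.exponential(0.5, 120))  # unrelated stream, same mean rate

print(rate_synchrony(shared, jittered))     # near 1: streams share a rate
print(rate_synchrony(shared, independent))  # near 0: streams are unrelated
```

A metric of this kind makes the design question testable: streams an interface intends operators to treat as related should score high, and unrelated streams should score low.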
