11. Practicing phonomimetic (conducting-like) gestures facilitates vocal performance of typically developing children and children with autism: an experimental study
Bingham, Emelyne Marie, 21 December 2020
Every music teacher is likely to teach one or more children with autism, given that an average of one in 54 persons in the United States receives a diagnosis of Autism Spectrum Disorder (ASD). Persons with ASD often show tremendous interest in music, and some even become masterful performers; however, the combination of deficits and abilities associated with ASD can pose unique challenges for music teachers. This experimental study shows that phonomimetic (conducting-like) gestures can be used to teach the expressive qualities of music. Children were asked to watch video recordings of conducting-like gestures and to produce vocal sounds matching the gestures. The empirical findings indicate that motor training can strengthen visual-to-vocomotor couplings in both populations, suggesting that phonomimetic gesture may be a suitable approach for teaching musical expression in inclusive classrooms.

12. The impact of an auditory task on visual processing: implications for cellular phone usage while driving
Cross, Ginger Wigington, 3 May 2008
Previous research suggests that cellular phone conversations and similar auditory/conversational tasks lead to degradations in visual processing. Three contemporary theories make different claims about the nature of the degradation that occurs when we talk on a cellular phone. We are either: (a) disproportionately more likely to miss objects located in the most peripheral areas of the visual environment, due to a reduction in the size of the attentional window or functional field of view (Atchley & Dressel, 2004); (b) more likely to miss objects from all areas of the visual environment (even at the center of fixation) because attention is withdrawn from the roadway, leading to inattention blindness or general interference (Strayer & Drews, 2006; Crundall, Underwood, & Chapman, 1999, 2002); or (c) more likely to miss objects located on the side of the visual environment contralateral to the cellular phone message, due to crossmodal links in spatial attention (Driver & Spence, 2004). These three theories were compared by asking participants to complete central and peripheral visual tasks (i.e., a measure of the functional field of view) in isolation and in combination with an auditory task. During the combined visual/auditory task, peripheral visual targets could appear on the same side as auditory targets or on the opposite side. When the congruency between auditory and visual target locations was not considered (as is typical in previous research), the results were consistent with the general interference/inattention blindness theory, but not with the reduced functional field of view theory. Yet, when congruency effects were considered, the results supported the theory that crossmodal links affect the spatial allocation of attention: participants were better at detecting and localizing peripheral visual targets and at generating words for the auditory task when attention was directed to the same location in both modalities.

13. Delineating the Neural Circuitry Underlying Crossmodal Object Recognition in Rats
Reid, James, 15 September 2011
Previous research has indicated that the perirhinal cortex (PRh) and posterior parietal cortex (PPC) functionally interact to mediate crossmodal object representations in rats; however, it remains to be seen whether other cortical regions contribute to this cognitive function. The prefrontal cortex (PFC) has been widely implicated in crossmodal tasks and might underlie either a unified multimodal or amodal representation, or a comparison mechanism that allows for integration of object information across sensory modalities. The hippocampus (HPC) is also a strong candidate, with extensive polymodal inputs, and has been implicated in some aspects of object recognition. A series of lesion-based experiments assessed the roles of the HPC, the PFC, and PFC subregions [medial prefrontal cortex (mPFC) and orbitofrontal cortex (OFC)], revealing functional dissociations between these brain regions using two versions of crossmodal object recognition: (1) spontaneous crossmodal matching (CMM), which requires rats to compare a stored tactile object representation with visually presented objects to discriminate between novel and familiar stimuli; and (2) crossmodal object association (CMA), in which simultaneous pre-exposure to the tactile and visual elements of an object enhances CMM performance across long retention delays. Notably, while inclusive PFC lesions impaired performance on both CMM and CMA tasks, selective OFC lesions disrupted only CMM, and selective mPFC damage did not impair performance on either task. Furthermore, HPC lesions had no impact on either task. Thus, the PFC and the OFC play a selective role in crossmodal object recognition, but their exact contributions and interactions require further research to elucidate.

14. Crossmodal Interference During Selective Attention to Spatial Stimuli: Evidence for a Stimulus-Driven Mechanism Underlying the Modality-Congruence Visual Dominance Effect
Tomko, Linda, 25 July 2024
Many tasks require processing, filtering, and responding to information from multiple sensory modalities. Crossmodal interactions are common, and visual dominance often arises with incongruent sensory information. Past studies have shown that visual dominance tends to be strong in spatial tasks. Experiments in a crossmodal attention-switching paradigm with physical-spatial stimuli (e.g., stimuli in left and right locations) have demonstrated a robust visual-dominance congruence pattern, with conflicting visual-spatial information impairing responses to auditory-spatial stimuli, but conflicting auditory-spatial information having less impact on visual-spatial processing. Strikingly, this pattern does not occur with verbal-spatial stimuli (e.g., the words LEFT and RIGHT as stimuli). In the present study, experiments were conducted to systematically examine the occurrence and underlying basis of this distinction. Participants were presented with either verbal-spatial or physical-spatial stimuli, simultaneously in the visual and auditory modalities, and were instructed to selectively attend and respond to the location of the cued modality. An initial experiment replicated previously reported effects, with similar patterns of crossmodal congruence effects for visual and auditory verbal-spatial stimuli. Three further experiments directly compared crossmodal congruence patterns for physical-spatial and verbal-spatial stimuli across varying attentional conditions. Intermixing verbal and physical spatial stimulus sets did not meaningfully alter the distinct congruence patterns compared to when the sets were blocked, and biasing attention to verbal-spatial processing amplified the modality-congruence interaction for physical-spatial stimuli. Together, the consistent findings of a modality-congruence interaction showing visual dominance for physical-spatial stimuli but not for verbal-spatial stimuli suggest that the effect is driven by the sensory properties of the particular spatial stimulus sets rather than by endogenous attentional mechanisms.

15. Signal compatibility as a modulatory factor for audiovisual multisensory integration
Parise, Cesare Valerio, January 2013
The physical properties of the distal stimuli activating our senses are often correlated in nature; it would therefore be advantageous to exploit such correlations to better process sensory information. Stimulus correlations can be contingent and readily available to the senses (like the temporal correlation between mouth movements and vocal sounds in speech), or can be the result of the statistical co-occurrence of certain stimulus properties that can be learnt over time (like the relation between the frequency of acoustic resonance and the size of the resonator). Over the last century, a large body of research on multisensory processing has demonstrated the existence of compatibility effects between individual features of stimuli from different sensory modalities. Such compatibility effects, termed crossmodal correspondences, possibly reflect the internalization of the natural correlations between stimulus properties. The present dissertation assesses the effects of crossmodal correspondences on multisensory processing and reports a series of experiments demonstrating that crossmodal correspondences influence the processing rate of sensory information, distort perceptual experiences, and lead to stronger multisensory integration. Moreover, a final experiment investigating the effects of contingent signal correlations on multisensory processing demonstrates the key role of temporal correlation in inferring whether two signals have a common physical cause (i.e., the correspondence problem). A Bayesian framework is proposed to interpret the present results, whereby stimulus correlations, represented in the prior distribution of expected crossmodal co-occurrence, operate as cues to solve the correspondence problem.
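
As a rough sketch of how such a Bayesian framework can be written down (the notation and the two-hypothesis formulation below are editorial assumptions in the spirit of standard causal-inference models, not necessarily the dissertation's exact equations), the correspondence problem for a visual signal x_V and an auditory signal x_A amounts to comparing a common-cause hypothesis (C = 1) against an independent-causes hypothesis (C = 2):

```latex
% Posterior probability that the two signals share a common physical cause.
% The prior P(C=1) is where learnt crossmodal co-occurrence statistics
% (i.e., crossmodal correspondences) would enter the computation.
P(C=1 \mid x_V, x_A) =
  \frac{p(x_V, x_A \mid C=1)\, P(C=1)}
       {p(x_V, x_A \mid C=1)\, P(C=1) + p(x_V, x_A \mid C=2)\,\bigl(1 - P(C=1)\bigr)}
```

On this reading, a strong crossmodal correspondence amounts to a higher prior P(C=1), biasing the observer toward inferring a common cause and hence toward stronger integration of the two signals.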

16. The Effects of Long-Term Deafness on Density and Diameter of Dendritic Spines on Pyramidal Neurons in the Dorsal Zone of the Feline Auditory Cortex
Bauer, Rachel J, 1 January 2019
Neuroplasticity has been studied in many contexts, from the growing neonatal brain to neural responses to trauma and injury. According to recent research, neuroplasticity also manifests in the brain's ability to repurpose areas that are no longer in use, as in the case of the loss of a sense. Specifically, behavioral studies have shown that deaf humans (Bavelier and Neville, 2002) and cats have increased visual abilities, and that different areas of the auditory cortex enhance specific kinds of vision. One such behavioral study demonstrated that the dorsal zone (DZ) of the auditory cortex enhances sensitivity to visual motion through cross-modal plasticity (Lomber et al., 2010). Current research seeks to identify the anatomical structures responsible for these changes through analysis of excitatory-neuron dendritic spine density and spine head diameter. The present study examines DZ neuron spine density, distribution, and size in deaf and hearing cats to corroborate the visual changes seen in behavioral studies. Using Golgi-stained tissue and light microscopy, we found a decrease in overall spine density but a slight increase in spine head diameter in deaf cats compared to hearing cats. These results, along with several other studies, support multiple theories of how cross-modal reorganization of the auditory cortex occurs after deafening.

17. The Role of Concepts in Perception
Connolly, Kevin L., 19 January 2012
The central claim of my dissertation is that some basic concepts are required for perception. Non-basic concepts, by contrast, are acquired, and I give an account of how that acquisition changes our perception.
Suppose you are looking at the Mona Lisa. It might seem that you can perceive a lot more shades of color and a lot more shapes than for which you possess precise concepts. I argue against this. For every color or shape in appearance you have the ability to categorize it as that color or shape. It’s just that this is done by your sensory system prior to appearance. I argue that empirical studies show this. Blindsighted patients, for instance, are blind in part of their visual field. But they can use color and shape information received through the blind portion. I take this, along with other studies, to show that once you perceive a color or shape, it has already been categorized.
I then argue that we perceive only low-level properties like colors and shapes. For instance, we don’t perceive high-level kind properties like being a table or being a wren. I do think that wrens or tables might look different to you after you become disposed to recognize them. Some take this to show that being a wren or being a table can be represented in your perception. I argue that this inference does not follow. If you are not disposed to recognize wrens, but we track the attention of someone who is, and we get you to attend to wrens in that same way, your visual phenomenology might be exactly the same as theirs. But there is no reason to think that it represents a wren. After all, you lack a recognitional disposition for wrens. I take this and other arguments to show that we perceive only low-level properties like colors and shapes.

18. Crossmodal Modulation as a Basis for Visual Enhancement of Auditory Performance
Qian, Cheng, 15 February 2010
The human sensory system processes many modalities simultaneously. It was long believed that each modality is processed individually first, with their combination deferred to higher-level cortical areas. Recent neurophysiological investigations indicate interconnections between early visual and auditory cortices, areas putatively considered unimodal, but the function of these connections remains unclear. The present work explores how this cross-modality might contribute to a visual enhancement of auditory performance, using a combined theoretical and experimental approach. The enhancement of sensory performance was studied within a signal detection framework. A model was constructed using principles from signal detection theory and neurophysiology, demonstrating enhancements of roughly 1.8 dB both analytically and through simulation. Several experiments were conducted to observe the effects of visual cues on a 2-alternative forced-choice (2AFC) detection task of an auditory tone in noise. The main experiment showed an enhancement of 1.6 dB. Greater enhancement also tended to occur for more realistic relationships between auditory and visual stimuli.
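
As a rough, hedged sketch of the signal-detection reasoning summarized above (the noise parameters and the assumption that a visual cue reduces effective internal noise are editorial illustrations, not the dissertation's actual model), a small simulation shows how such a cue can lower the 2AFC detection threshold by a couple of decibels:

```python
import numpy as np

rng = np.random.default_rng(0)

def pc_2afc(signal_amp, noise_sd, n_trials=20000):
    """Percent correct in a 2AFC tone-in-noise task: the observer picks
    the interval with the larger noisy observation."""
    signal_interval = signal_amp + rng.normal(0.0, noise_sd, n_trials)
    noise_interval = rng.normal(0.0, noise_sd, n_trials)
    return np.mean(signal_interval > noise_interval)

def threshold_db(noise_sd, target=0.75):
    """Tone amplitude (in dB re: amplitude 1.0) needed to reach the target
    percent correct, found by a coarse grid search over amplitudes."""
    for amp in np.linspace(0.05, 3.0, 200):
        if pc_2afc(amp, noise_sd) >= target:
            return 20.0 * np.log10(amp)
    return 20.0 * np.log10(3.0)

# Assumption for the sketch: the visual cue reduces uncertainty about the
# tone, modeled as a lower effective internal-noise standard deviation.
threshold_no_cue = threshold_db(noise_sd=1.0)
threshold_with_cue = threshold_db(noise_sd=0.8)
print(f"visual-cue enhancement ~ {threshold_no_cue - threshold_with_cue:.1f} dB")
```

With these made-up parameters the threshold drops by about 1.9 dB, in the same ballpark as the roughly 1.8 dB the abstract reports; the point of the sketch is only that a modest reduction in effective noise translates directly into a dB-scale detection benefit.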