1. Developmental and cultural factors of audiovisual speech perception in noise. Reetzke, Rachel Denise, 16 September 2014.
The aim of this project is two-fold: 1) to investigate developmental differences in intelligibility gains from visual cues in speech perception in noise, and 2) to examine how different types of maskers modulate visual enhancement across age groups. A secondary aim is to investigate whether bilingualism differentially modulates audiovisual integration during speech-in-noise tasks. To that end, child and adult, monolingual and bilingual participants completed speech-perception-in-noise tasks across three within-subject variables: (1) masker type: pink noise or two-talker babble; (2) modality: audio-only (AO) or audiovisual (AV); and (3) signal-to-noise ratio (SNR): 0 dB, -4 dB, -8 dB, -12 dB, and -16 dB. The findings revealed that, although both children and adults benefited from visual cues in speech-in-noise tasks, adults showed greater benefit at lower SNRs. Moreover, although monolingual and bilingual children performed comparably across all conditions, monolingual adults outperformed simultaneous bilingual adults. These results may indicate that divergent use of visual cues in speech perception between bilingual and monolingual speakers emerges later in development.
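The SNR manipulation described above can be illustrated with a short sketch (an assumption on my part; the abstract does not describe its stimulus-generation code). It scales a masker waveform so that the speech-to-masker power ratio hits a target SNR in dB:

```python
import numpy as np

def mix_at_snr(speech, masker, snr_db):
    """Scale a masker so the speech is presented at the requested SNR.

    SNR (dB) = 10 * log10(P_speech / P_masker), where P is mean power.
    A negative SNR (e.g. -16 dB) means the masker is more intense than
    the speech, as in the hardest condition of the study.
    """
    p_speech = np.mean(speech ** 2)
    p_masker = np.mean(masker ** 2)
    # Gain that brings the masker to the power implied by snr_db.
    gain = np.sqrt(p_speech / (p_masker * 10 ** (snr_db / 10)))
    return speech + gain * masker

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a speech waveform
masker = rng.standard_normal(16000)   # stand-in for pink noise / babble

# The study's SNR conditions, from easiest to hardest.
mixtures = {snr: mix_at_snr(speech, masker, snr)
            for snr in (0, -4, -8, -12, -16)}
```

The same routine would be applied to either masker type (pink noise or two-talker babble); only the `masker` waveform changes.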

2. Evaluating the influence of audiovisual unity in cross-modal temporal binding of musical stimuli. Chuen, Lorraine, 11 1900.
An observer’s inference that multimodal signals come from a common underlying source can facilitate cross-modal binding in the temporal domain. This ‘unity assumption’ can cause asynchronous audiovisual speech streams to seem simultaneous (Vatakis & Spence, 2007), but follow-up work has been unable to replicate this effect for non-speech, musical events (Vatakis & Spence, 2008). Given that amplitude envelope (the changes in energy of a sound over time) has been shown to affect audiovisual integration, the current study investigates whether previous null findings with musical stimuli can be explained by the similarity in amplitude envelope between audiovisual conditions. To test whether amplitude envelope affects temporal cross-modal binding, Experiment 1 contrasted events with clearly differentiated envelopes: cello and marimba audiovisual stimuli. Participants performed an unspeeded temporal order judgment task; they viewed audio-visually matched (e.g. marimba audio with marimba video) and mismatched (e.g. cello audio with marimba video) versions of stimuli at various stimulus onset asynchronies and indicated which modality was presented first. As predicted, participants were less sensitive to temporal order (greater JNDs) in matched conditions, suggesting that the unity assumption facilitates synchrony perception for non-speech stimuli as well. Results from Experiments 2 and 3 revealed that when spectral information was removed, amplitude envelope alone could not facilitate the influence of audiovisual unity on temporal binding. We propose that both amplitude and spectral cues affect the percept of audiovisual ‘unity’, likely working in concert to help an observer determine the causal source of an auditory event. (MSc thesis)
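The JND reported above is conventionally derived by fitting a psychometric function to the proportion of one response type across stimulus onset asynchronies (SOAs). A minimal sketch of that analysis, under the common cumulative-Gaussian assumption (the data values and fitting choices here are hypothetical, not taken from the thesis):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(soa, pss, sigma):
    """P('video first') as a function of SOA in ms; pss is the point of
    subjective simultaneity, sigma sets the slope of the function."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical TOJ data: negative SOA = audio led, positive = video led.
soas = np.array([-200, -100, -50, 0, 50, 100, 200], dtype=float)
p_video_first = np.array([0.05, 0.15, 0.35, 0.50, 0.68, 0.85, 0.96])

(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_video_first,
                            p0=(0.0, 50.0))

# JND: half the SOA interval between the 25% and 75% response points,
# which for a cumulative Gaussian is sigma * z(0.75).
jnd = sigma * norm.ppf(0.75)
```

Greater JNDs in the matched conditions, as the thesis reports, correspond to a shallower psychometric function (larger fitted `sigma`).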

3. Noise reduction limits the McGurk Effect. Deonarine, Justin, January 2011.
In the McGurk Effect (McGurk & MacDonald, 1976), a visual depiction of a speaker silently mouthing the syllable [ga]/[ka] is presented concurrently with the auditory input [ba]/[pa], resulting in “fused” [da]/[ta] being heard. Deonarine (2010) found that increasing the intensity (volume) of the auditory input changes the perception of the auditory input from [ga] (at quiet volume levels) to [da], and then to [ba] (at loud volume levels). The present experiments show that reducing both ambient noise (additional frequencies in the environment) and stimulus noise (excess frequencies in the sound wave which accompany the intended auditory signal) prevents the illusory percept. This suggests that noise is crucial to audiovisual integration and that the McGurk effect depends on the existence of auditory ambiguity.

5. Ability of Children with Autism Spectrum Disorders to Identify Emotional Facial Expressions. Lorenzi, Jill Elizabeth, 05 June 2012.
Previous research on emotion identification in Autism Spectrum Disorders (ASD) has demonstrated inconsistent results. While some studies have cited a deficit in emotion identification for individuals with ASD compared to controls, others have failed to find a difference. Many studies have used static photographs that do not capture subtle details of dynamic, real-life facial expressions that characterize authentic social interactions, and therefore have not been able to provide complete information regarding emotion identification. The current study aimed to build upon prior research by using dynamic, talking videos in which the speaker expresses happiness, sadness, fear, anger, and excitement, both with and without a voice track. Participants included 10 children with ASD between the ages of four and 12, and 10 gender- and mental age-matched children with typical development between the ages of six and 12. Overall, the ASD and typically developing groups performed similarly in their accuracy, though the group with typical development benefited more from the addition of voice. Eye tracking analyses considered the eye region and mouth as areas of interest (AOIs). Eye tracking data from accurately identified trials resulted in significant main effects for group (longer and more fixations for participants with typical development) and condition (longer and more fixations on voiced emotions), and a significant condition by AOI interaction, where participants fixated longer and more on the eye region in the voiced condition compared to the silent condition, but fixated on the mouth approximately the same in both conditions. Treatment implications and directions for future research are discussed. (Master of Science thesis)
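The AOI analysis described above reduces to aggregating fixation counts and dwell time per group, condition, and region. A sketch with a hypothetical record format (the thesis does not specify its eye-tracking pipeline):

```python
from collections import defaultdict

# Hypothetical fixation records: (group, condition, aoi, duration_ms).
fixations = [
    ("typical", "voiced", "eyes",  310),
    ("typical", "voiced", "mouth", 190),
    ("typical", "silent", "eyes",  220),
    ("asd",     "voiced", "mouth", 200),
    ("asd",     "silent", "eyes",  150),
]

# Total dwell time and fixation count per (group, condition, AOI) cell;
# these per-cell summaries would feed the group x condition x AOI ANOVA.
totals = defaultdict(lambda: {"count": 0, "dwell_ms": 0})
for group, condition, aoi, duration in fixations:
    cell = totals[(group, condition, aoi)]
    cell["count"] += 1
    cell["dwell_ms"] += duration
```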

6. Behavioral and neurophysiological investigations of short-term memory in primates. Bigelow, James, 01 May 2015.
Detecting and interpreting sensory events, and remembering those events in the service of future actions, forms the foundation of all behavior. Each of these pillars of the so-called "perception-action cycle" has been a topic of extensive inquiry throughout recorded history, with philosophical foundations laid in the early BCE and CE periods (especially during the Classical and Renaissance eras) leading to intensive empirical study in the twentieth and twenty-first centuries. Such experiments have described detailed (but incomplete) behavioral functions reflecting perception and memory, and have begun to unravel the extraordinarily complex substrates of these functions in the nervous system. The current dissertation was motivated by these findings, with the goal of meaningfully extending our understanding of such processes through a multi-experiment approach spanning the behavioral and neurophysiological levels. The focus of these experiments is on short-term memory (STM), though as we shall see, STM is ultimately inseparable from sensory perception and is directly or indirectly associated with guidance of motor responses. It thus provides a nexus between the sensory inputs and motor outputs that describe interactions between the organism and environment.
In Chapter 2, previous findings from nonhuman primate literature describing relatively poor performance for auditory compared to visual or tactile STM inspired similar comparisons among modalities in humans. In both STM and recognition memory paradigms, accuracy is shown to be lowest for the auditory modality, suggesting commonalities among primate species. Chapters 3-5 examined STM processing in nonhuman primates at the behavioral and neurophysiological levels. In Chapter 3, a systematic investigation of memory errors produced by recycling memoranda across trials (proactive interference) is provided for the understudied auditory modality in monkeys. Such errors were ameliorated (but not completely eliminated) by increasing the proportions of unique memoranda presented within a session, and by separating successive trials by greater time intervals. In Chapter 4, previous results revealing a human memory advantage for audiovisual events (compared to unimodal auditory or visual events) inspired a similar comparison in monkeys using a concurrent auditory, visual, and audiovisual STM task. Here, the primary results conformed to a priori expectations, with superior performance observed on audiovisual trials compared to either unimodal trial type. Surprisingly, two of three subjects exhibited superior unimodal performance on auditory trials. This result contrasts with previous results in nonhuman primates, but can be interpreted in light of these subjects' extensive prior experience with unimodal auditory STM tasks. In Chapter 5, the same subjects performed the concurrent audiovisual STM task while activity of single cells and local cell populations was recorded within prefrontal cortex (PFC), a region known to exhibit multisensory integrative and memory functions. 
The results indicate that both of these functions converge within PFC, down to the level of individual cells, as evidenced by audiovisual integrative responses within mnemonic processes such as delay-related changes in activity and detection of repeated versus different sensory cues. Further, a disproportionate number of the recorded units exhibited such mnemonic processes on audiovisual trials, a finding that corresponds to the superior behavioral performance on these trials. Taken together, these findings reinforce the important role of PFC in STM and multisensory integration. They further strengthen the evidence that "memory" is not a unitary phenomenon, but can be seen as the outcome of processing within and among multiple subsystems, with substantial areas of overlap and separation across modalities. Finally, cross-species comparisons reveal substantial similarities in memory processing between humans and nonhuman primates, suggesting shared evolutionary heritage of systems underlying the perception-action cycle.

7. The role of synesthetic correspondence in intersensory binding: investigating an unrecognized confound in multimodal perception research. Olsheski, Julia DeBlasio, 13 January 2014.
The current program of research tests the following main hypotheses: 1) Synesthetic correspondence is an amodal property that serves to bind intersensory signals and manipulating this correspondence between pairs of audiovisual signals will affect performance on a temporal order judgment (TOJ) task; 2) Manipulating emphasis during a TOJ task from spatial to temporal aspects will strengthen the influence of task-irrelevant auditory signals; 3) The degree of dimensional overlap between audiovisual pairs will moderate the effect of synesthetic correspondence on the TOJ task; and 4) There are gaps in current perceptual theory due to the fact that synesthetic correspondence is a potential confound that has not been sufficiently considered in the design of perception research. The results support these main hypotheses. Finally, potential applications for the findings presented here are discussed.

8. Mapping symbols to sounds: electrophysiological correlates of the impaired reading process in dyslexia. Widmann, Andreas; Schröger, Erich; Tervaniemi, Mari; Pakarinen, Satu; Kujala, Teija, 29 July 2022.
Dyslexic and control first-grade school children were compared in a symbol-to-sound matching test based on a non-linguistic audiovisual training which is known to have a remediating effect on dyslexia. Visual symbol patterns had to be matched with predicted sound patterns. Sounds incongruent with the corresponding visual symbol (thus not matching the prediction) elicited the N2b and P3a event-related potential (ERP) components relative to congruent sounds in control children. Their ERPs resembled the ERP effects previously reported for healthy adults with this paradigm. In dyslexic children, N2b onset latency was delayed and its amplitude significantly reduced over the left hemisphere, whereas P3a was absent. Moreover, N2b amplitudes correlated significantly with reading skills. ERPs to sound changes in a control condition were unaffected. In addition, correctly predicted sounds, that is, sounds congruent with the visual symbol, elicited an early induced auditory gamma band response (GBR) reflecting synchronization of brain activity in normal-reading children, as previously observed in healthy adults. However, dyslexic children showed no GBR. This indicates that visual symbolic and auditory sensory information are not integrated into a unitary audiovisual object representation in these children. Finally, incongruent sounds were followed by a later desynchronization of brain activity in the gamma band in both groups. This desynchronization was significantly larger in dyslexic children. Although both groups accomplished the task successfully, remarkable group differences in brain responses suggest that normal-reading children and dyslexic children recruit (partly) different brain mechanisms when solving the task. We propose that the abnormal ERPs and GBRs in dyslexic readers indicate a deficit resulting in a widespread impairment in processing and integrating auditory and visual information, contributing to the reading impairment in dyslexia.

9. Audiovisual integration for perception of speech produced by nonnative speakers. Yi, Han-Gyol, 12 September 2014.
Speech often occurs in challenging listening environments, such as masking noise. Visual cues have been found to enhance speech intelligibility in noise. Although the facilitatory role of audiovisual integration in speech perception has been established for native speech, it is relatively unclear whether it also holds for speech produced by nonnative speakers. Native listeners were presented with English sentences produced by native English and native Korean speakers, in either audio-only or audiovisual conditions. Korean speakers were rated as more accented in the audiovisual than in the audio-only condition. Visual cues enhanced speech intelligibility in noise for native English speech but less so for nonnative speech. Reduced intelligibility of audiovisual nonnative speech was associated with implicit Asian-Foreign association, suggesting that listener-related factors partially influence the efficiency of audiovisual integration for perception of speech produced by nonnative speakers.

10. Use of electrophysiology to study the development of audiovisual integration abilities from infancy to adulthood [Utilisation de l’électrophysiologie dans l’étude du développement des capacités d’intégration audiovisuelle du nourrisson à l’âge adulte]. Dionne-Dostie, Emmanuelle, 09 1900.
No description available.