1. Perceptual lateralisation of audio-visual stimuli. Holt, Nigel James, January 1997.
No description available.
2. Interaction Between Auditory and Visual Discriminations Attempted Simultaneously. Mills, Linda Barbara, 09 1900.
The study contains two discrimination tasks which are attempted separately and simultaneously. The subject is asked to judge the relative positions of successively presented points of light and/or to decide whether a test tone is added to continuous white noise during the interval between the two lights. It is noted that this design is similar to a retroactive interference paradigm. Analysis of the data shows that there is little interaction between decisions on each of the psychophysical tasks when they are attempted simultaneously. There also appears to be no significant change in sensitivity whether the tasks are attempted alone or together. It is suggested that further experiments, involving different forms of visual memory, are needed. / Thesis / Master of Arts (MA)
3. Refinement and Normalisation of the University of Canterbury Auditory-Visual Matrix Sentence Test. McClelland, Amber, January 2015.
Developed by O'Beirne and Trounson (Trounson, 2012), the UC Auditory-Visual Matrix Sentence Test (UCAMST) is an auditory-visual speech test in NZ English where sentences are assembled from 50 words arranged into 5 columns (name, verb, quantity, adjective, object). Generation of sentence materials involved cutting and re-assembling 100 naturally spoken "original" sentences to create a large repertoire of 100,000 unique "synthesised" sentences.
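As a purely illustrative aside, the following minimal Python sketch shows how a matrix sentence of this kind can be assembled by drawing one word from each column; the word lists below are hypothetical placeholders, not the actual UCAMST vocabulary.

    import random

    # Hypothetical placeholder words; the real UCAMST uses 50 NZ English words,
    # 10 in each of its 5 columns (not reproduced here).
    MATRIX = {
        "name":      ["name%d" % i for i in range(10)],
        "verb":      ["verb%d" % i for i in range(10)],
        "quantity":  ["quantity%d" % i for i in range(10)],
        "adjective": ["adjective%d" % i for i in range(10)],
        "object":    ["object%d" % i for i in range(10)],
    }
    COLUMN_ORDER = ["name", "verb", "quantity", "adjective", "object"]

    def assemble_sentence(rng=random):
        # One word drawn from each column in fixed column order:
        # 10 choices x 5 columns = 10**5 = 100,000 possible sentences.
        return " ".join(rng.choice(MATRIX[column]) for column in COLUMN_ORDER)

    print(assemble_sentence())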
The process of synthesising sentences from video fragments resulted in occasional artifactual image jerks ("judders"), quantified by an unusually large change in the "pixel difference value" between consecutive frames, at the edited transitions between video fragments. To preserve the naturalness of the materials, Study 1 aimed to select the transitions with the least noticeable judders.
Normal-hearing participants (n = 18) assigned a 10-point noticeability rating to 100 sentences comprising unedited "no judder" sentences (n = 28) and "synthesised" sentences (n = 72) that varied in the severity (i.e. pixel difference value), number, and position of judders. The judders were rated as significantly more noticeable than the no-judder controls, and, based on mean rating score, 2,494 sentences with minimal noticeable judder were included in the auditory-visual UCAMST. Follow-on work should establish equivalent lists using these sentences. The average pixel difference value was a significant predictor of rating score and may therefore be used as a guide in future development of auditory-visual speech tests assembled from video fragments.
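For illustration only, a minimal sketch of the kind of frame-to-frame "pixel difference value" metric described above, assuming greyscale frames held as NumPy arrays; the function names and threshold are assumptions rather than details taken from the UCAMST work.

    import numpy as np

    def pixel_difference_values(frames):
        # Mean absolute change in pixel intensity between consecutive frames;
        # one value is returned per frame transition.
        frames = [frame.astype(np.int16) for frame in frames]
        return [float(np.mean(np.abs(after - before)))
                for before, after in zip(frames[:-1], frames[1:])]

    def flag_judders(frames, threshold):
        # Indices of transitions whose pixel difference value exceeds an
        # (illustrative) threshold and might therefore be noticeable judders.
        return [i for i, value in enumerate(pixel_difference_values(frames))
                if value > threshold]

    # Example with made-up 2x2 greyscale frames:
    frames = [np.array([[10, 10], [10, 10]], dtype=np.uint8),
              np.array([[12, 9], [11, 10]], dtype=np.uint8),
              np.array([[200, 180], [190, 210]], dtype=np.uint8)]
    print(pixel_difference_values(frames))    # [1.0, 184.5]
    print(flag_judders(frames, threshold=50))  # [1]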
The aim of Study 2 was to normalise the auditory-alone UCAMST to make each audio fragment equally intelligible in noise. In Part I, individuals with normal hearing (n = 17) assessed 400 sentences containing each file fragment, presented at four SNRs (-18.5, -15, -11.5, and -8 dB) in both constant speech-shaped noise (n = 9) and six-talker babble (n = 8). An intelligibility function was fitted to the word-specific data, and the midpoint of each function (Lmid, the SNR at 50% intelligibility) was adjusted to equal the mean pre-normalisation midpoint across fragments. In Part II, 30 lists of 20 sentences were generated with relatively homogeneous frequency of matrix word use. The predicted parameters in constant noise (Lmid = -14.0 dB SNR; slope = 13.9 ± 0.0 %/dB) are comparable with published equivalents. The babble noise condition was, conversely, less sensitive (Lmid = -14.9 dB SNR; slope = 10.3 ± 0.1 %/dB), possibly due to the smaller sample size (n = 8). Overall, this research constituted an important first step in establishing the UCAMST as a reliable measure of speech recognition; follow-on work will validate the normalisation procedure carried out in this project.
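A minimal sketch of the kind of per-fragment normalisation described above, assuming a logistic intelligibility function and using scipy's curve_fit for illustration; the exact function form and fitting routine used in the thesis are not specified here.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(snr, l_mid, slope):
        # Word intelligibility (proportion correct) as a function of SNR in dB.
        # l_mid: SNR at 50% intelligibility; slope: slope at the midpoint in
        # proportion per dB (e.g. 0.139 for 13.9 %/dB).
        return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - l_mid)))

    def fit_fragment(snrs, proportions_correct):
        # Fit the midpoint and slope to word-specific intelligibility data.
        (l_mid, slope), _ = curve_fit(logistic, snrs, proportions_correct,
                                      p0=[-14.0, 0.14])
        return l_mid, slope

    def level_adjustment_db(l_mid, mean_l_mid):
        # Gain (in dB) applied to a fragment so that its midpoint equals the
        # mean pre-normalisation midpoint; harder fragments (higher l_mid)
        # receive a positive boost.
        return l_mid - mean_l_mid

    # Example with made-up data for a single fragment:
    snrs = np.array([-18.5, -15.0, -11.5, -8.0])
    correct = np.array([0.10, 0.35, 0.70, 0.90])
    l_mid, slope = fit_fragment(snrs, correct)
    print(l_mid, slope, level_adjustment_db(l_mid, mean_l_mid=-14.0))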
4. Examining the role of auditory-visual interaction in the characterization of perceived wildness and tranquillity in valued open spaces. Pheasant, Robert J.; Watts, Gregory R.; Horoshenkov, Kirill V., January 2013.
No description available.
5. Effects of auditory versus visual presentation and pronounced versus silent reading on frequency estimates. Pearlman, Ilissa Bloch, January 1992.
No description available.
6. Differences in Performance Between Minimally Brain-Injured and Normal Children as Measured by the "Birch-Belmont Auditory-Visual Integration Test". Glass, Daniel J., 12 1900.
This study was concerned with the identification of minimally brain-injured children. The performance of twenty-five minimally brain-injured students on the "Birch-Belmont Auditory-Visual Integration Test" was compared with that of twenty-five non-brain-injured children. It was found that when ages and I.Q. scores were not significantly different, and when the sexes were approximately proportionate, the minimally brain-injured children scored significantly lower than the non-brain-injured children. While this indicated that minimally brain-injured children perform less adequately on auditory-visual integration, no comparison of intrasensory and intersensory functioning was made. It was suggested that the test not be employed as the sole determinant of minimal brain injury, but that it may appropriately be used as a screening device.
7. Cued Visual Search and Multisensory Enhancement. Haggit, Jordan, January 2014.
No description available.
8. Development of auditory-visual speech perception in young children. Erdener, Vahit Dogu (University of Western Sydney, College of Arts, School of Psychology), January 2007.
In contrast to auditory-only speech perception, little is known about the development of auditory-visual speech perception. Recent studies show that pre-linguistic infants perceive auditory-visual speech phonetically in the absence of any phonological experience. In addition, while an increase in visual speech influence over age is observed in English speakers, particularly between six and eight years, this is not the case in Japanese speakers. This thesis aims to investigate the factors that lead to an increase in visual speech influence in English-speaking children aged between 3 and 8 years. The general hypothesis of this thesis is that age-related, language-specific factors will be related to auditory-visual speech perception. Three experiments were conducted. Results show that in linguistically challenging periods, such as school onset and reading acquisition, there is a strong link between auditory-visual and language-specific speech perception, and that this link appears to help children cope with new linguistic challenges. However, this link does not seem to be present in adults or preschool children, for whom auditory-visual speech perception is predictable from auditory speech perception ability alone. Implications of these results for existing models of auditory-visual speech perception and directions for future studies are discussed. / Doctor of Philosophy (PhD)
9. The Active Ingredients of Integral Stimulation Treatment: The Efficacy of Auditory, Visual, and Auditory-Visual Cues for Treatment of Childhood Apraxia of Speech. Condoluci, Lauren (ORCID 0000-0001-8760-0145), January 2020.
The purpose of this study was to determine the relative efficacy of cueing modalities employed in Integral Stimulation (IS) treatment for childhood apraxia of speech (CAS). Previous literature has supported the use of IS for children with CAS, though there are no studies that evaluate the active ingredients of IS. This study aimed to examine the efficacy of single- and multi-modality cues in IS treatment.
The experiment used a single-case alternating-treatments design with three conditions (auditory-only, visual-only, and simultaneous auditory and visual). Two participants with CAS received IS treatment in every condition during each session. Probes consisting of practiced and control targets, balanced for complexity and functionality, were administered prior to every other session (once per week). Perceptual accuracy of productions was rated on a 3-point scale, and standardized effect sizes were calculated for each condition.
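For illustration only, a minimal sketch of one common standardized effect size for single-case data (a standardized mean difference between treated and control probe scores); the abstract does not name the specific statistic, so this formulation is an assumption.

    from statistics import mean, stdev

    def standardized_effect_size(control_scores, treated_scores):
        # Standardized mean difference: the gain of treated over control probe
        # scores, scaled by the standard deviation of the control scores.
        return (mean(treated_scores) - mean(control_scores)) / stdev(control_scores)

    # Example with made-up mean probe accuracies (3-point perceptual scale, 0-2):
    print(standardized_effect_size([0.40, 0.55, 0.45], [1.20, 1.40, 1.30]))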
The two participants showed different patterns of modality and treatment effects. The visual-only condition yielded the greatest effect for one participant, followed by the auditory-only condition. The other participant showed no significant effect in any condition and no overall treatment effect.
The results of this study suggest that single-modality cues may be more beneficial for some children with CAS than the clinically used simultaneous auditory-visual multi-modality cue. The significant effect of the visual-only condition in one participant indicates that visual-only cues may bypass an impaired auditory feedback system and support speech motor learning, though more research is required. / Public Health
10. Weighting of Visual and Auditory Stimuli in Children with Autism Spectrum Disorders. Rybarczyk, Aubrey Rachel, 29 August 2016.
No description available.