191. Progressive hearing loss in mouse mutants. Hilton, Jennifer Maglona. January 2012.
No description available.
192. The effect of stimulus interval and foreperiod duration on temporal synchronization. Best, Paul Raymond, 1945-. January 1971.
No description available.
193. Sound synthesis in a teaching machine for tactile recognition. Presser, Karl David, 1943-. January 1968.
No description available.
194. Early Maternal Word-Learning Cues to Children with and without Cochlear Implants. Lund, Emily Ann. 21 November 2013.
Despite improvements in amplification technology, the vocabulary growth of children with cochlear implants lags behind that of typically developing children. Maternal input may influence opportunities for children with cochlear implants to learn new vocabulary words. This pair of studies compared mothers' multimodal cues about word referents available to and used by children with cochlear implants and children with normal hearing. The first study quantified the proportion of converging and diverging auditory-visual cues present in maternal speech to children with cochlear implants as compared to children with normal hearing matched for chronological age and matched for vocabulary size. Mothers provided input to children with cochlear implants in a way that differed from the way they provided input to children matched for vocabulary size. The second study evaluated the effects of synchronous and asynchronous auditory-visual cues on the word-learning performance of children with cochlear implants and children with normal hearing matched for chronological age. Children with cochlear implants did not learn words in either condition, whereas children with normal hearing made use of synchronous cues to learn words. These findings represent a first step toward determining how environment-level factors influence the lexical outcomes of children with cochlear implants.
195. The Attentive Hearing Aid: visual selection of auditory sources. Hart, Jamie Lauren. 1 October 2007.
We present the Attentive Hearing Aid, a system that uses eye input to amplify the audio of tagged sound sources in the environment. In this multidisciplinary project, we draw on eye-tracking technology and the social phenomenon of turn-taking in human-human communication to create a new kind of assistive hearing device. With hearing-impaired participants, we evaluated the use of eye input for switching between sound sources on a screen in terms of switch time and recall of audiovisual material. We compared eye input to a control condition and two manual selection techniques: using a remote to point at the target on the screen, and using buttons to select the target. In terms of switch time, Eyes were 73% faster than Pointing and 58% faster than Buttons. In terms of recall, Eyes performed 80% better than Control, 54% better than Buttons, and 37% better than Pointing. In a post-evaluation user experience survey, participants rated Eyes highest in the “easiest”, “most natural”, and “best overall” categories. We discuss the implications of this work for a new type of assistive hearing device, and how the system could also benefit non-hearing-impaired individuals. / Thesis (Master, Computing), Queen's University, 26 September 2007.
196. Validation of an iPod-Based Hearing Screening for a Pediatric Population. Greidanus, Krista R. Date unknown.
No description available.
197. Contextual effects in pitch processing: investigating neural correlates using complementary methodologies. Warrier, Catherine M. January 2000.
This thesis includes four studies investigating neural correlates underlying pitch perception, and effects of tonal context on this percept. Each study addressed the issue from a unique methodological perspective. The first study confirmed that tonal context can affect the way a tone's pitch is perceived. In this study, normal listeners made pitch discriminations between tones varying in pitch and/or timbre, a difficult task when presented in isolation. Increasing tonal context increased performance, with melodic context providing the most facilitation.

A similar task was presented to patients with unilateral focal excisions in the temporal lobe. Patients with right but not left temporal lobe lesions were impaired at using melodic cues to facilitate performance. Posterior extent of the lesions did not affect results, implying that right anterior temporal regions can process pitch information relative to tones heard previously. A functional magnetic resonance imaging (fMRI) study using a similar task with normal listeners found converging evidence. Melodic context produced the most activity in right anterior superior temporal gyrus (STG), as well as the most facilitation behaviorally.

A positron emission tomography study investigating neural processing of song stimuli broadened the investigation to include a comparison between musical and linguistic processing. Left frontal and temporal structures known to be involved in language processing were active when subjects attended to song lyrics, and right temporal-lobe structures were again implicated in melodic processing, suggesting that a song's lyrics and melodies are processed separately.

These studies find pitch processing in tonal contexts to involve right temporal-lobe structures. The right anterior STG in particular appears to be involved in processing pitch relative to previously heard tones. This suggests that the right anterior STG processes tones with respect to their tonal context, which entails holding contextual tones in memory while processing subsequent tones. This region has connections to right dorsolateral frontal areas previously implicated in tonal working memory, possibly providing a mechanism for holding contextual tones in memory. Supporting this theory, all contextual conditions in the fMRI study produced activity in right dorsolateral frontal cortex.
198. A theoretical and experimental investigation of the acoustic transmission properties in the external ear. Sinyor, Albert. January 1972.
No description available.
199. Modelling of auditory processing mechanisms related to backward masking. Hultz, Paul B. 12 1900.
No description available.
200. Language, perception and production in profoundly deaf children. Hind, Sarah E. January 1993.
Prelingually profoundly deaf children usually experience problems with language learning (Webster, 1986; Campbell, Burden & Wright, 1992). The acquisition of written language would be no problem for them if normal development of reading and writing were not dependent on spoken language (Pattison, 1986). However, such children cannot be viewed as a homogeneous group, since some, the minority, do develop good linguistic skills. Group studies have identified several factors relating to language skills: hearing loss and level of loss, IQ, intelligibility, lip-reading, use of phonology and memory capacity (Furth, 1966; Conrad, 1979; Trybus & Karchmer, 1977; Jensema, 1975; Baddeley, Papagno & Vallar, 1988; Baddeley & Wilson, 1988; Hanson, 1989; Lake, 1980; Daneman & Carpenter, 1980). These various factors appear to be interrelated, with phonological awareness being implicated in most. Thus, to understand behaviour, measures of all these factors must be obtained. The present study aimed to achieve this whilst investigating the prediction that performance success may be due to better use of phonological information. Because linguistic success for the deaf child is exceptional, a case-study approach was taken to avoid obscuring subtle differences in performance. Subjects were screened to meet six research criteria: profound prelingual deafness, no other known handicap, English as the first language in the home, at least average non-verbal IQ, a reading age of 7-9 years, and inter-subject differences in the discrepancy between chronological age and reading age. Case histories were obtained from school records and home interviews. Six subjects with diverse linguistic skills were selected, four of whom undertook all tests. Phonological awareness and development were assessed across several variables: immediate memory span, intelligibility, spelling, rhyme judgement, speech discrimination and production. There were considerable inter-subject performance differences. One boy's speech production was singled out for a more detailed analysis. Useful aided hearing and consistent contrastive speech appear to be implicated in other English language skills. It was concluded that for phonological awareness to develop, the deaf child must receive useful inputs from as many media as possible (e.g., vision, audition, articulation, sign and orthography). When input is biased toward the more reliable modalities of audition and articulation, there is a greater possibility of a robust and useful phonology being derived, and thus better access to the English language.