1

Effects of speaker age on speech understanding and listening effort in older adults.

Spencer, Geraldine Antionette January 2011 (has links)
Purpose: Hearing loss is a prevalent condition among older adults. Structural changes at the auditory periphery, changes in central audition, and cognitive function are all known to influence speech understanding in older adults. Biological aging also alters speech and voice characteristics from the age of 50 years. These changes are likely to reduce the clarity of speech signals received by older adults with age-related hearing loss. Recent findings suggest that older adults with hearing loss subjectively find listening to the speech of other older adults more effortful than listening to the speech of younger adults. However, those observations of listener effort were subjective, and follow-up using an objective measure was recommended. Therefore, the purpose of this study was to determine the influence of speaker age (young versus older) on speech understanding and listener effort in older adults with hearing loss. In addition, the relationships between these parameters and age and working memory were investigated. It was hypothesised that older adults with hearing loss would recognise less speech, and expend more effort, while listening to the speech of an older adult relative to a younger adult. Method: A dual-task paradigm was used to measure speech understanding and listening effort in 18 older adult listeners with hearing loss. The primary task involved recognition of target words in sentences containing either high or low contextual cues. The secondary task required listeners to memorise the target words for later recall following a set number of sentences. Listeners performed the speech understanding (primary) task under six experimental conditions: for each speaker (i.e., older adult and younger adult) there were three listening backgrounds: quiet, noise at 0 dB SNR, and noise at +5 dB SNR. Results: Speech understanding in older adults with hearing loss was significantly improved when the speaker was an older adult, especially in noise. The ability to recall words from memory was also significantly better when the speaker was an older adult. Age was strongly correlated with speech understanding, with additional contributions from hearing loss. Age and working memory had moderate correlations with word recall. Conclusion: The findings provide further evidence that peripheral hearing loss is not the only contributor to speech understanding and word recall ability in older adults. The naturally occurring speech signal also has the potential to influence speech understanding and listening effort in older adults.
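As a minimal illustration of the dual-task design described above, the six experimental conditions come from crossing the two speaker ages with the three listening backgrounds; the sketch below uses hypothetical condition labels and placeholders, not materials from the thesis.

```python
from itertools import product

# Hypothetical labels for the 2 (speaker age) x 3 (listening background) design;
# names are illustrative only.
speakers = ["younger_adult_speaker", "older_adult_speaker"]
backgrounds = ["quiet", "noise_0_dB_SNR", "noise_+5_dB_SNR"]

conditions = list(product(speakers, backgrounds))
assert len(conditions) == 6  # the six experimental conditions

# Per-condition results for the primary (recognition) and secondary (recall) tasks.
results = {cond: {"recognition_pct": None, "recall_pct": None} for cond in conditions}
```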
2

Speech Understanding Abilities of Older Adults with Sensorineural Hearing Loss

Wilding, Phillipa Jane January 2010 (has links)
Older adults with sensorineural hearing loss have greater difficulty understanding speech than younger adults with equivalent hearing (Gates & Mills, 2005). This increased difficulty may be related to the influence of peripheral, central auditory processing, or cognitive deficits; although this has been extensively debated, their relative contributions to speech understanding remain equivocal (Working Group on Speech Understanding and Aging, 1988). Furthermore, changes to the speech mechanism that occur as a result of age lead to natural degradations of signal quality. Studies involving hearing-impaired listeners have not examined the influence of such naturally degraded speech signals. The purpose of this study was to determine: (1) whether older hearing-impaired listeners demonstrate differences in speech understanding ability or perceived effort of listening on the basis of the age of the speaker and the predictability of the stimulus, and (2) whether any individual differences in speech understanding were related to central auditory processing ability. The participants included nineteen native speakers of New Zealand English ranging in age from 60 to 87 years (mean = 71.4 years) with age-related sensorineural hearing loss. Each participant underwent a full audiological assessment and three measures of central auditory processing (the Dichotic Digits Test, the Random Gap Detection Test and the Staggered Spondaic Words Test), and completed a computer-based listening experiment containing phrases of high and low predictability spoken by two groups: (1) young adults (18–30 years) and (2) older adults (70 years and above). Participants were required to repeat stimulus phrases as heard, with the researcher entering orthographic transcriptions into the custom-designed computer programme. An analysis of covariance (ANCOVA) was used to determine whether significant differences existed in percentage words correct scores as a function of speaker group (young versus older speakers) and stimulus predictability (high- versus low-predictability phrases), with level of presentation (dB) as a covariate. Results demonstrated that, although there were no significant differences in percentage words correct as a function of speaker group, lower scores were achieved for low-predictability phrases, as expected. In addition, increased listener effort was required when listening to the speech from the older adult group and during the low-predictability phrase condition. Positive correlations were found between word understanding scores and tests of dichotic separation, which suggests that central auditory processing deficits contribute to the speech understanding difficulties of older adults. The implications of these findings for audiological assessment and rehabilitation are explored.
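A hedged sketch of the analysis named above: an ANCOVA on percent-words-correct with speaker group and stimulus predictability as factors and presentation level as covariate. The column and file names are assumptions for illustration, not the study's actual data, and a full analysis would also need to account for repeated measures within participants.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x condition.
# Assumed columns: pct_correct, speaker_group ("young"/"older"),
# predictability ("high"/"low"), level_dB (presentation level covariate).
df = pd.read_csv("listening_experiment.csv")  # placeholder file name

# Two crossed factors plus the presentation-level covariate.
model = smf.ols(
    "pct_correct ~ C(speaker_group) * C(predictability) + level_dB", data=df
).fit()
print(sm.stats.anova_lm(model, typ=2))
```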
3

Towards robust conversational speech recognition and understanding

Weng, Chao 12 January 2015 (has links)
While significant progress has been made in automatic speech recognition (ASR) during the last few decades, recognizing and understanding unconstrained conversational speech remains a challenging problem. In this dissertation, five methods/systems are proposed towards a robust conversational speech recognition and understanding system. I. A non-uniform minimum classification error (MCE) approach is proposed which achieves consistent and significant keyword spotting performance gains on both English and Mandarin large-scale spontaneous conversational speech tasks (Switchboard and HKUST Mandarin CTS). II. A hybrid recurrent DNN-HMM system is proposed for robust acoustic modeling and a new way of performing backpropagation through time (BPTT) is introduced. The proposed system achieves state-of-the-art performance on two benchmark datasets, the 2nd CHiME challenge (track 2) and Aurora-4, without front-end preprocessing, speaker adaptive training or multiple decoding passes. III. To study the specific case of conversational speech recognition in the presence of competing talkers, several multi-style training setups of DNNs are investigated and a joint decoder operating on multi-talker speech is introduced. The proposed combined system improves upon the previous state-of-the-art IBM superhuman system by 2.8% absolute on the 2006 speech separation challenge dataset. IV. Latent semantic rational kernels (LSRKs) are proposed for spotting semantic notions in conversational speech. The proposed framework is generalized using tf-idf weighting, latent semantic analysis, WordNet, probabilistic topic models and neural-network-learned representations, and is shown to achieve substantial topic spotting performance gains on two conversational speech tasks, Switchboard and the AT&T HMIHY initial collection. V. Non-uniform sequential discriminative training (DT) of DNNs with LSRKs is proposed, which directly links the information of the proposed LSRK framework to the objective function of the DT. Experimental results on a subset of Switchboard show that the proposed method leads to acoustic models that are more robust with respect to the semantic decoder.
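As one concrete illustration of the tf-idf weighting that the LSRK framework generalizes (this is not the author's implementation), a transcript can be scored against topic descriptions via tf-idf vectors and cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy transcripts and topic descriptions, purely for illustration.
transcripts = ["well I usually recycle cans and newspapers at home",
               "my car needs new brakes and an oil change"]
topics = {"recycling": "recycling waste paper cans bottles environment",
          "auto repair": "car repair brakes engine oil mechanic"}

vectorizer = TfidfVectorizer()
vecs = vectorizer.fit_transform(transcripts + list(topics.values()))
transcript_vecs, topic_vecs = vecs[: len(transcripts)], vecs[len(transcripts):]

# Each row gives one transcript's similarity to each topic description.
print(cosine_similarity(transcript_vecs, topic_vecs))
```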
4

Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology

Smith, Sherri L., Pichora-Fuller, M. Kathleen, Alexander, Genevieve 01 November 2016 (has links)
Objectives: The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. Design: The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Results: Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. Conclusions: These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.
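The arithmetic of the WARRM's structure can be checked directly: five set sizes (2-6 items) with five trials each account for the 100 test words. Below is a small sketch under an assumed scoring convention (both scores expressed as percent of all presented words), not the published scoring rules.

```python
# WARRM structure: recall set sizes of 2-6 items, 5 trials per set size.
set_sizes = [2, 3, 4, 5, 6]
trials_per_set_size = 5
total_words = sum(size * trials_per_set_size for size in set_sizes)
assert total_words == 100  # matches the 100 monosyllabic test words

def warrm_scores(trials):
    """trials: list of dicts with assumed keys 'n_words', 'n_recognized', 'n_recalled'.
    The percent-of-all-words scoring convention is an assumption."""
    n_words = sum(t["n_words"] for t in trials)
    recognition_pct = 100 * sum(t["n_recognized"] for t in trials) / n_words
    recall_pct = 100 * sum(t["n_recalled"] for t in trials) / n_words
    return recognition_pct, recall_pct

print(warrm_scores([{"n_words": 2, "n_recognized": 2, "n_recalled": 1}]))
```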
5

The influence of working memory on the quality of linguistic predictions during speech understanding in adverse listening conditions: Comparing cortical responses using MEG

Allander, Karin January 2022 (has links)
Speech understanding is a fundamental human ability that enables flexible communication among individuals. Understanding natural speech in normal conditions is a fast and automatic process. It is facilitated through integration between prior knowledge about a speech signal and multimodal speech inputs. In situations where listening conditions are adverse, for example due to hearing impairment or environmental noise, speech understanding is challenged and reliance on prior knowledge increases. Prior knowledge about phonology and semantics is involved in predictive mechanisms that generate more successful speech understanding. Working memory processing seems to influence the quality of such predictions. To evaluate the role of working memory in the quality of linguistic predictions, a comparison of cortical responses using MEG was performed. MEG data from a previous experiment, in which participants performed an auditory sentence-completion task in background noise, were analyzed. Results from statistical analysis, time-domain analysis and time-frequency analysis suggest that differences in working memory processing do not influence the quality of linguistic predictions. Further research is required to assess which factors are involved in the quality of linguistic predictions and could lead to unsuccessful speech understanding, in order to improve communication in everyday situations.
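For readers unfamiliar with the time-frequency analysis mentioned above, here is a generic sketch on a simulated single-channel signal; it is not the study's MEG pipeline or data, and the sampling rate and windowing parameters are assumptions.

```python
import numpy as np
from scipy import signal

fs = 1000  # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
# Simulated sensor trace: a 10 Hz oscillation in noise.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Time-frequency decomposition: power as a function of time and frequency.
freqs, times, power = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=128)
print(power.shape)  # (n_frequencies, n_time_bins)
```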
6

Evaluation of Speech Perception and Psychoacoustic Abilities Following Chemotherapy

Kappes, Melissa Skarl 24 September 2018 (has links)
No description available.
7

Effects of Degree and Configuration of Hearing Loss on the Contribution of High- and Low-Frequency Speech Information to Bilateral Speech Understanding

Hornsby, Benjamin W. Y., Johnson, Earl E., Picou, Erin 01 October 2011 (has links)
Objectives: The purpose of this study was to examine the effects of degree and configuration of hearing loss on the use of, and benefit from, information in amplified high- and low-frequency speech presented in background noise. Design: Sixty-two adults with a wide range of high- and low-frequency sensorineural hearing loss (5 to 115+ dB HL) participated in the study. To examine the contribution of speech information in different frequency regions, speech understanding in noise was assessed in multiple low- and high-pass filter conditions, as well as a band-pass (713 to 3534 Hz) and wideband (143 to 8976 Hz) condition. To increase audibility over a wide frequency range, speech and noise were amplified based on each individual's hearing loss. A stepwise multiple linear regression approach was used to examine the contribution of several factors to (1) absolute performance in each filter condition and (2) the change in performance with the addition of amplified high- and low-frequency speech components. Results: Results from the regression analysis showed that degree of hearing loss was the strongest predictor of absolute performance for low- and high-pass filtered speech materials. In addition, configuration of hearing loss affected both absolute performance for severely low-pass filtered speech and benefit from extending high-frequency (3534 to 8976 Hz) bandwidth. Specifically, individuals with steeply sloping high-frequency losses made better use of low-pass filtered speech information than individuals with similar low-frequency thresholds but less high-frequency loss. In contrast, given similar high-frequency thresholds, individuals with flat hearing losses received more benefit from extending high-frequency bandwidth than individuals with more sloping losses. Conclusions: Consistent with previous work, benefit from speech information in a given frequency region generally decreases as degree of hearing loss in that frequency region increases. However, given a similar degree of loss, the configuration of hearing loss also affects the ability to use speech information in different frequency regions. Except for individuals with steeply sloping high-frequency losses, providing high-frequency amplification (3534 to 8976 Hz) had either a beneficial effect on, or did not significantly degrade, speech understanding. These findings highlight the importance of extended high-frequency amplification for listeners with a wide range of high-frequency hearing losses, when seeking to maximize intelligibility.
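A hedged sketch of constructing filtered-speech conditions like those described; the cutoff frequencies come from the abstract, while the sampling rate, filter order and filter type are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 22050  # sampling rate in Hz (assumed)

def bandpass(x, low_hz, high_hz, order=4):
    """Zero-phase Butterworth band-pass filter (design choices assumed)."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

speech = np.random.randn(2 * fs)  # placeholder for an amplified speech-in-noise signal

band_limited = bandpass(speech, 713, 3534)  # band-pass condition
wideband = bandpass(speech, 143, 8976)      # wideband condition
```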
8

Measuring the ability to understand everyday speech in children with middle ear dysfunction

Keogh, Tegan Michelle March 2009 (has links)
Thus far, the literature assessing the ability of children with conductive hearing impairment to understand everyday speech is scant. This assessment is important in determining the functional ability of children with conductive hearing impairment. In order to identify the hearing ability of children with conductive hearing impairment, many assessments to date have used speech stimuli, such as syllables, words and sentences, to measure how well children perform. In general, these tests are useful in measuring speech recognition ability, but are not adequate for measuring the functional ability of children to understand the conversations they encounter in their daily lives. In addition, many of these tests are not designed to be interesting or to engage the children whom they are assessing. The University of Queensland Understanding of Everyday Speech (UQUEST) Test was developed to address the above issues by providing a stimulating speech perception assessment for children aged 5 to 10 years. The overall objectives of this thesis were to: (1) determine the applicability of a computer-based, self-driven assessment of speech comprehension, the UQUEST, (2) establish normative UQUEST data for school children, (3) compare the UQUEST results of children with and without histories of otitis media in understanding everyday speech, and (4) measure speech understanding in noise by children with minimal conductive hearing impairment. A total of 1094 children were assessed using the UQUEST. All children were native speakers of English and attended schools in the Brisbane Metropolitan and Sunshine Coast regions within the state of Queensland, Australia. All children were first assessed using otoscopic examination, pure tone audiometry testing and tympanometry. Children with sensorineural hearing impairment were excluded from the study. Following the initial audiological assessments, the UQUEST was administered to all participants. Three experiments were performed on three cohorts of children selected from the pool of 1094 children. Experiment 1 aimed to assess whether the UQUEST is a feasible speech perception assessment tool for school children and to establish normative data in a sample of normally hearing children. In this experiment, participants were a total of 99 children (55 boys / 44 girls) attending Grade 3 and Grade 4 (41/58, mean age = 8.3 yr, range = 7–10 yr, SD = 0.7). The results showed that the UQUEST is a feasible test of speech understanding in children aged 7 to 10 years. In general, UQUEST scores decreased as the signal-to-noise ratio (SNR) decreased from 10 to 0 dB. Normative data based on the scores of six passages of equal difficulty were established for the 0 dB and 5 dB SNR conditions. In addition, the children appeared to be captivated by the UQUEST task and the attention of all the children was sustained throughout the duration of the test. Experiment 2 determined whether children with histories of otitis media (OM; experimental group) performed worse on the UQUEST than children without histories of OM. A total of 484 children (246 boys / 238 girls), attending Grade 3 (272, mean age = 8.25 yr, SD = 0.43) and Grade 4 (212, mean age = 9.28 yr, SD = 0.41), were assessed. Children were grouped according to the number of episodes of otitis media as per parental report (control: < 4 episodes; mild history group: 4–9 episodes; and moderate history group: > 9 episodes of OM).
All children had normal hearing as determined by otoscopy, pure tone audiometry screening and tympanometry results. Results showed no significant difference in UQUEST scores between the control group and the experimental groups. However, children with a history of OM demonstrated varying speech comprehension abilities. Some children had severe difficulty with the speech comprehension task, suggesting that in cases with extensively reported episodes of OM, performance on the UQUEST was compromised. Experiment 3 determined the prevalence of conductive hearing loss in the Australian primary school population and investigated the ability of school children with minimal conductive hearing loss to understand everyday speech under noisy conditions. Based on a sample of 1071 children (mean age = 7.7 yr; range = 5.3–11.7 yr), 10.2% of children were found to have conductive hearing loss in one or both ears. To evaluate the binaural speech comprehension ability of children, a sample of 542 children were divided into four groups according to their audiological assessment results: Group 1: 63 children (34 boys, 29 girls, mean age = 7.7 yr, SD = 1.5) who failed the pure tone audiometry and tympanometry tests in both ears; Group 2: 38 children (17 boys, 21 girls, mean age = 7.5 yr, SD = 1.2) who passed pure tone audiometry and tympanometry in one ear but failed both tests in the other ear; Group 3 (control group): 357 children (187 boys, 170 girls, mean age = 7.8 yr, SD = 1.3) who passed pure tone audiometry and tympanometry in both ears; Group 4: 84 children (41 boys, 43 girls, mean age = 7.2 yr, SD = 1.3) who passed pure tone audiometry in both ears, but failed tympanometry in one or both ears. The results showed that Group 1 had the lowest mean scores, of 60.8%–69.3%, obtained under noise conditions. Their scores were significantly lower than the corresponding scores of 69.3%–75.3% obtained by children in Group 4; 70.5%–76.5% obtained by children with unilateral conductive hearing loss (Group 2); and 72.0%–80.3% obtained by their normally hearing peers (Group 3). This study confirmed that young children, who are known to have poorer speech understanding in noise than adults, show further disadvantage when a bilateral conductive hearing loss is present. In summary, the UQUEST has been found to be a useful tool to measure children's understanding of everyday speech. This test could be successfully used as a measure of speech comprehension in background noise in children. The UQUEST met expectations of being an interesting and engaging test for children aged 5–10 years. In addition, the UQUEST scores showed that children performed worse when challenged by the more difficult noise conditions incorporated in the test design. The findings from this thesis demonstrated that, at the group level, children with histories of OM did not perform any differently from those without significant histories of OM. However, at the individual level, children with significant OM histories had degraded functional performance, with low UQUEST scores. Lastly, this thesis provided much-needed speech comprehension data obtained from children with minimal conductive hearing impairment and provided evidence that young children were more affected by the co-occurrence of environmental noise and bilateral conductive hearing loss than their normally hearing peers in understanding everyday speech.
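A minimal sketch of how speech and background noise can be mixed at the SNRs used in the UQUEST conditions; the RMS-based SNR definition and the placeholder signals are assumptions, not the thesis's stimuli.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise ratio equals snr_db (RMS definition assumed)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise

# Placeholder signals standing in for a recorded passage and background noise.
speech = np.random.randn(48000)
noise = np.random.randn(48000)

mixtures = {snr: mix_at_snr(speech, noise, snr) for snr in (0, 5, 10)}  # dB SNR conditions
```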
9

Associations Between Speech Understanding and Auditory and Visual Tests of Verbal Working Memory: Effects of Linguistic Complexity, Task, Age, and Hearing Loss

Smith, Sherri L., Pichora-Fuller, M. K. 01 January 2015 (has links)
Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners' auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between both working memory measures only for the oldest listeners with hearing loss. Notably, there were only a few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding.
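A small sketch of the kind of correlation reported above, relating a working memory span score to a speech understanding score; the data are simulated for illustration only, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 24  # listeners per group in the study

# Simulated per-listener scores (illustration only).
lwms = rng.normal(50, 10, n)                     # listening working memory span
speech_score = 0.3 * lwms + rng.normal(0, 8, n)  # speech understanding measure

r, p = pearsonr(lwms, speech_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```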
10

Development of an Example-Based Spoken Dialogue System Using Log Information from a WOZ (Wizard of Oz) System

INAGAKI, Yasuyoshi, YAMAGUCHI, Yukiko, MATSUBARA, Shigeki, KAWAGUCHI, Nobuo, MURAO, Hiroya 19 December 2002 (has links)
IPSJ SIG Technical Report, Spoken Language Processing; 2002-SLP-44-23
