11

The Effects of Distracting Background Audio on Speech Production

Cowley, Camille Margaret 17 June 2020 (has links)
This study examined changes in speech production when distracting background audio is present. Forty typically speaking adults completed a repetitive sentence reading task in the presence of 5 different audio conditions (pink noise, movie dialogue, heated debate, classical music, and contemporary music) and a silent condition. Acoustic parameters measured during the study included vowel space area (VSA), vowel articulation index (VAI), formant transition extent, formant transition rate, and diphthong duration for /ɑɪ/ and /ɑʊ/. It was hypothesized that there would be significant increases in vowel space area and vowel articulation index as well as an increase in formant transition measures in the presence of background noise. There were statistically significant decreases in vowel space area and vowel articulation index in the presence of all noise conditions compared to the silent baseline condition. Results also demonstrated a significant decrease in F2 transition extent for both /ɑɪ/ and /ɑʊ/ diphthongs in all noise conditions except the pink noise condition when compared to the silent condition. These findings were contrary to what was originally hypothesized. It is possible that VAI and VSA decreased in the presence of background noise due to an increase in speaking rate. Formant transition measurements were consistent with the VAI and VSA results. More research is needed to accurately determine the acoustic changes a speaker makes in response to distracting background audio.
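Both vowel-centralization metrics in this abstract are simple functions of corner-vowel formant frequencies. As a minimal illustrative sketch (not taken from the thesis; the formant values below are hypothetical placeholders), VSA can be computed as the area of the polygon spanned by the corner vowels in F1-F2 space, and VAI as a commonly used ratio of peripheral to centralizing formants:

    # Illustrative sketch: vowel space area (VSA) and vowel articulation index (VAI)
    # from corner-vowel formants. All formant values are hypothetical, in Hz.
    import numpy as np

    def vowel_space_area(corner_formants):
        """Polygon area (shoelace formula) in F1-F2 space; corners given in order."""
        f1 = np.array([f[0] for f in corner_formants], dtype=float)
        f2 = np.array([f[1] for f in corner_formants], dtype=float)
        return 0.5 * abs(np.dot(f1, np.roll(f2, -1)) - np.dot(f2, np.roll(f1, -1)))

    def vowel_articulation_index(i, u, a):
        """VAI = (F2/i/ + F1/a/) / (F1/i/ + F1/u/ + F2/u/ + F2/a/)."""
        (f1_i, f2_i), (f1_u, f2_u), (f1_a, f2_a) = i, u, a
        return (f2_i + f1_a) / (f1_i + f1_u + f2_u + f2_a)

    # Hypothetical (F1, F2) pairs for /i/, /a/, /u/
    i, a, u = (300, 2300), (750, 1300), (320, 900)
    print(vowel_space_area([i, a, u]))       # triangular VSA in Hz^2
    print(vowel_articulation_index(i, u, a))

Higher VSA and VAI values indicate less centralized (and presumably clearer) vowel articulation, which is why decreases in both measures under noise ran contrary to the study's hypothesis.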
12

Habilidades cognitivas e de percepção de fala no ruído em idosos com perda auditiva / Cognitive abilities on the speech-in-noise perception test in the elderly with sensorineural hearing loss

Cardoso, Maria Julia Ferreira 26 February 2019 (has links)
INTRODUCTION: Age-related hearing loss causes several changes, such as difficulty perceiving sounds and understanding speech, especially in unfavorable environments. Aging can also cause changes in the central nervous system, reducing intellectual and/or cognitive capacity and impairing other sensory functions. In addition, scientific evidence points to an association between hearing loss and altered cognition, and it is extremely important that professionals are attentive to this relationship so that auditory rehabilitation can succeed. OBJECTIVE: To verify the influence of verbal cognitive abilities on a speech-in-noise perception test in elderly listeners with sensorineural hearing loss, and to relate socioeconomic classification, schooling, degree of hearing loss, and intellectual-cognitive level to speech perception in competing noise. METHODS: This was an observational, cross-sectional study. Thirty-six elderly subjects aged 60 to 89 years with diagnosed bilateral sensorineural hearing loss participated and were divided into (GI) 24 subjects with no cognitive alteration and (GII) 12 subjects with cognitive alteration. They underwent otorhinolaryngological evaluation, psychological evaluation with the Wechsler Adult Intelligence Scale (WAIS-III), an initial audiological interview, pure-tone audiometry, speech audiometry, immittance testing, evaluation of speech perception in noise with the Brazilian Hearing in Noise Test (HINT-Brazil), and evaluation of binaural integration with the dichotic digits test. Statistical analysis used the Mann-Whitney U test for comparisons between groups, and Spearman correlation and Kruskal-Wallis tests to check the influence of age, degree of hearing loss, educational level, and audiometric configuration, and the relation between HINT-Brazil and WAIS-III results. RESULTS: There was a difference between groups in HINT-Brazil performance only in the noise-to-the-left condition, showing a right-ear advantage in speech-in-noise perception. Age, degree of hearing loss, and level of schooling influenced speech-in-noise perception. Age, level of schooling, and socioeconomic classification influenced WAIS-III performance. No correlation was found between the dichotic digits test, the speech-in-noise perception test, and cognitive performance, or between the speech-in-noise perception test and verbal cognitive abilities. CONCLUSION: According to the statistical analysis, verbal cognitive abilities did not influence speech-in-noise perception in elderly listeners with mild to moderate sensorineural hearing loss. Age, degree of hearing loss, and level of schooling influenced speech-in-noise perception, while verbal cognitive abilities were influenced by age, level of schooling, and socioeconomic classification.
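For readers less familiar with this analysis pipeline, the sketch below shows, with entirely synthetic data, how the nonparametric comparisons named in the abstract are typically run in Python with scipy; the variable names, group means, and sample values are hypothetical, not values from the study.

    # Illustrative sketch only: between-group comparison of speech-in-noise thresholds
    # and a correlation with cognitive scores, using synthetic stand-in data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    hint_gi = rng.normal(-2.0, 1.5, 24)   # GI: no cognitive alteration (hypothetical SRTs, dB SNR)
    hint_gii = rng.normal(-1.2, 1.5, 12)  # GII: cognitive alteration (hypothetical SRTs, dB SNR)

    # Mann-Whitney U test for the between-group comparison
    u_stat, p_groups = stats.mannwhitneyu(hint_gi, hint_gii, alternative="two-sided")

    # Spearman correlation between speech-in-noise thresholds and cognitive scores
    wais = rng.normal(100, 15, 36)                    # hypothetical WAIS-III-style scores
    hint_all = np.concatenate([hint_gi, hint_gii])
    rho, p_corr = stats.spearmanr(hint_all, wais)

    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_groups:.3f}")
    print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")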
13

Effects of Bilingualism on Speech Recognition Performance in Noise

Carlo, Mitzarie A 11 April 2008 (has links)
This study examined the effects of bilingualism on the speech recognition in noise performance of young normal-hearing Spanish-English bilinguals across several signal-to-noise ratios (SNR). The estimated SNR needed for 50% correct recognition obtained for the bilingual listeners was compared with that of young normal-hearing monolingual listeners of English and of Spanish. The estimated mean SNR needed for 50% correct recognition was significantly higher (i.e., poorer) for the bilingual than for the monolingual English listeners. The Spanish-language performance of the bilingual listeners did not significantly differ from that of the monolingual Spanish listeners. The bilinguals were then divided into subgroups based on age of acquisition of the second language, early and late learners of English, and further comparisons were made. The average estimated SNR needed for 50% correct recognition for the early bilinguals did not differ statistically from that of monolingual listeners in either the English or the Spanish language testing. The SNR needed for 50% correct recognition of English words was significantly higher for the late bilinguals than for the monolingual English listeners. For Spanish words, the mean SNRs needed for 50% correct recognition for the late bilinguals and the monolingual Spanish speakers did not differ statistically from one another. These results suggest that caution should be used when assessing speech-in-noise performance in the second language of bilingual patients, because separate norms may be needed for this population. Age of acquisition of the second language should be considered as a confounding factor in the speech-in-noise performance of bilingual listeners.
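The "SNR needed for 50% correct recognition" is the midpoint of a psychometric function. As a minimal sketch of how such a threshold can be estimated, assuming percent-correct scores measured at a handful of fixed SNRs (the data points and starting values below are hypothetical, not from this study):

    # Hedged sketch: estimate the SNR giving 50% correct by fitting a logistic
    # psychometric function to hypothetical percent-correct scores.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(snr, srt50, slope):
        """Proportion correct as a logistic function of SNR; srt50 is the 50% point."""
        return 1.0 / (1.0 + np.exp(-slope * (snr - srt50)))

    snr = np.array([-12, -8, -4, 0, 4, 8], dtype=float)         # dB SNR (hypothetical)
    p_correct = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.98])  # hypothetical scores

    params, _ = curve_fit(logistic, snr, p_correct, p0=[-2.0, 0.5])
    srt50, slope = params
    print(f"Estimated SNR for 50% correct: {srt50:.1f} dB (slope {slope:.2f} per dB)")

A higher fitted srt50 means the listener needs a more favorable SNR to reach 50% correct, which is the sense in which the late bilinguals performed more poorly in English.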
14

Normalisation, Evaluation and Verification of the New Zealand Hearing Screening Test.

Bowden, Alice Therese January 2013 (has links)
Presbycusis, or age-related hearing loss, is one of the most common chronic conditions to affect adults. On average, individuals wait seven years from the time they notice a hearing impairment to the time they seek help from a hearing professional. This delay may have wide-reaching implications for public health in the coming decades, as populations age and as further research assesses the relationship between hearing loss and mental health conditions such as depression and dementia. The New Zealand Hearing Screening Test (NZHST) aims to fulfil the need for a robust hearing screening test that individuals can access from home. This digit triplet test (DTT) will be particularly valuable for those in rural areas where audiological services are sparse and for those whose mobility restricts attendance at clinical appointments. To accommodate as many New Zealanders as possible, the NZHST will have two versions, an internet version and a landline telephone version, both of which can be delivered into the home in either New Zealand English or Te Reo Māori. This research is the third instalment in the development of the NZHST and is divided into three parts: the verification of the New Zealand English DTT for the internet version, a pilot study of the Te Reo Māori DTT for the internet version, and the normalisation of the New Zealand English DTT for the telephone version. In the verification process, 50 individuals with various audiometric thresholds listened to 3 lists of 27 New Zealand English digit triplets, presented in three conditions (binaurally and to each ear separately) via an internet interface. In the pilot study, 27 participants with various audiometric thresholds listened to 3 lists of 27 Te Reo Māori digit triplets via a software interface on a laptop computer. The normalisation process involved 10 individuals with normal hearing (average air-conduction pure-tone thresholds of ≤ 20 dB HL) listening to 168 New Zealand English digit triplets under two noise conditions, continuous speech noise and a noise with spectral and temporal gaps (STG noise), presented via a software interface on a laptop computer. The 168 triplets were presented in four conditions: once to each ear in the continuous noise, and once to each ear in the STG noise. In the verification, significant correlations were found between the binaural DTT and PTA (R = 0.66) and between the monaural DTT and PTA (R = 0.73). The binaural DTT had a test sensitivity of 94% and a specificity of 88%. In the pilot study, the correlation between the binaural DTT and PTA was R = 0.61 and between the monaural DTT and PTA was R = 0.63, while the binaural sensitivity (100%) and specificity (100%) of the Te Reo Māori DTT were affected by the very small number of participants with hearing loss (n = 4). The normalisation revealed that detection of the digit triplets was easier when STG noise (Lmid = -11.5 dB SNR, SD = 1.6 dB) was used as a masker rather than continuous noise (Lmid = -8.9 dB SNR, SD = 1.4 dB).
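As a minimal sketch of the screening metrics reported above (synthetic outcomes, not the NZHST data): sensitivity is the proportion of listeners with hearing loss that the screening test refers, and specificity the proportion without hearing loss that it passes, judged against a pure-tone-average (PTA) gold standard.

    # Illustrative sketch: sensitivity and specificity of a screening test against
    # a PTA-based gold standard. The ten outcomes below are hypothetical.
    import numpy as np

    def sensitivity_specificity(screen_positive, has_hearing_loss):
        screen_positive = np.asarray(screen_positive, dtype=bool)
        has_hearing_loss = np.asarray(has_hearing_loss, dtype=bool)
        tp = np.sum(screen_positive & has_hearing_loss)   # correctly referred
        fn = np.sum(~screen_positive & has_hearing_loss)  # missed losses
        tn = np.sum(~screen_positive & ~has_hearing_loss) # correctly passed
        fp = np.sum(screen_positive & ~has_hearing_loss)  # false referrals
        return tp / (tp + fn), tn / (tn + fp)

    dtt_refer = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]   # DTT refer/pass (hypothetical)
    pta_loss  = [1, 1, 0, 1, 0, 0, 0, 0, 1, 0]   # PTA > 20 dB HL (hypothetical)
    sens, spec = sensitivity_specificity(dtt_refer, pta_loss)
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")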
15

AUDITORY TRAINING AT HOME FOR ADULT HEARING AID USERS

Olson, Anne D. 01 January 2010 (has links)
Research has shown that re-learning to understand speech in noise can be a difficult task for adults with hearing aids (HA). If HA users want to improve their speech understanding, specific training may be needed. Auditory training is one type of intervention that may enhance listening abilities for adult HA users. The purpose of this study was to examine the behavioral effects of an auditory training program called Listening and Communication Enhancement (LACE™) in its Digital Video Disc (DVD) format in new and experienced HA users. No research to date has been conducted on the efficacy of this training program. An experimental, repeated-measures group design was used. Twenty-six adults with hearing loss participated and were assigned to one of three groups: new HA users with training, experienced HA users with training, or new HA users without training (control). Participants in the training groups completed twenty 30-minute training lessons from the LACE™ DVD program at home over a period of 4 weeks. Trained participants were evaluated at baseline, after 2 weeks of training, and again after 4 weeks of training. Participants in the control group were evaluated at baseline and after 4 weeks of HA use. Findings indicate that both new and experienced users improved their understanding of speech in noise and their perceived communication function after training. Effect size calculations suggested a larger training effect for new HA users than for experienced HA users. New HA users also reported greater benefit from training than experienced users. Auditory training with the LACE™ DVD format should be encouraged, particularly among new HA users, to improve understanding of speech in noise.
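The abstract does not say which effect-size statistic was used; as one hedged illustration of how such a comparison could be made, the sketch below computes Cohen's d for the pre/post change in each group on entirely hypothetical speech-in-noise scores.

    # Illustrative sketch (hypothetical numbers, assumed effect-size measure):
    # comparing the size of the training effect in new vs. experienced HA users.
    import numpy as np

    def cohens_d(pre, post):
        """Cohen's d for the pre/post change, using a pooled standard deviation."""
        pre, post = np.asarray(pre, float), np.asarray(post, float)
        pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
        return (post.mean() - pre.mean()) / pooled_sd

    # Hypothetical speech-in-noise scores (higher = better) before and after training
    new_pre, new_post = [55, 60, 58, 62, 57], [68, 72, 70, 74, 69]
    exp_pre, exp_post = [63, 65, 64, 66, 62], [68, 70, 69, 71, 67]
    print(f"new HA users:         d = {cohens_d(new_pre, new_post):.2f}")
    print(f"experienced HA users: d = {cohens_d(exp_pre, exp_post):.2f}")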
16

Assessing cognitive spare capacity as a measure of listening effort using the Auditory Inference Span Test

Rönnberg, Niklas January 2014 (has links)
Hearing loss has a negative effect on the daily life of 10-15% of the world's population. One of the most common ways to treat a hearing loss is to fit hearing aids, which increase audibility by providing amplification. Hearing aids thus improve speech reception in quiet, but listening in noise is nevertheless often difficult and stressful. Individual differences in cognitive capacity have been shown to be linked to differences in speech recognition performance in noise. An individual's cognitive capacity is limited and is gradually consumed by the increasing demands of listening in noise, leaving fewer cognitive resources to interpret and process the information conveyed by the speech. Listening effort can therefore be explained by the amount of cognitive resources occupied with speech recognition. A well-fitted hearing aid improves speech reception and leads to less listening effort, so an objective measure of listening effort would be a useful tool in the hearing aid fitting process. In this thesis the Auditory Inference Span Test (AIST) was developed to assess listening effort by measuring an individual's cognitive spare capacity: the cognitive resources that remain available to interpret and encode the linguistic content of incoming speech while speech understanding takes place. The AIST is a dual-task hearing-in-noise test, combining auditory and memory processing, and requires executive processing of speech at different memory load levels (MLLs). The AIST was administered to young adults with normal hearing and older adults with hearing impairment. The aims were 1) to develop the AIST; 2) to investigate how different signal-to-noise ratios (SNRs) affect memory performance for perceived speech; 3) to explore whether this performance would interact with cognitive capacity; 4) to test whether different background noise types would interact differently with memory performance for young adults with normal hearing; and 5) to examine whether these relationships would generalize to older adults with hearing impairment. The AIST is a new test of cognitive spare capacity that uses existing speech material available in several countries and manipulates cognitive load and SNR simultaneously; its design thus pinpoints potential interactions between auditory and cognitive factors. The main finding of this thesis was the interaction between noise type and SNR, showing that a decreased SNR reduced cognitive spare capacity more in speech-like noise than in speech-shaped noise, even though speech intelligibility levels were similar between noise types. This finding applied to young adults with normal hearing; there was a similar effect for older adults with hearing impairment for the addition of background noise compared to no background noise. Task demands (MLLs) interacted with cognitive capacity: individuals with less cognitive capacity were more sensitive to increased cognitive load. However, MLLs did not interact with noise type or with SNR, which shows that the different memory load levels were not affected differently in different noise types or at different SNRs. This suggests that different cognitive mechanisms come into play for the storage and processing of speech information in the AIST and for listening to speech in noise. The results thus suggest that a test of cognitive spare capacity is a useful way to assess listening effort, even though the AIST, in the design used in this thesis, might be too cognitively demanding to provide reliable results for all individuals.
17

Characterization of audiovisual binding and fusion in the framework of audiovisual speech scene analysis / Caractérisation du liage et de la fusion audiovisuels dans le cadre de l'analyse de la scène audiovisuelle

Attigodu Chandrashekara, Ganesh 29 February 2016 (has links)
The present doctoral work is focused on a tentative fusion between two separate concepts: Auditory Scene Analysis (ASA) and audiovisual (AV) fusion in speech perception. We introduce "Audio Visual Speech Scene Analysis" (AVSSA) as an extension of the two-stage ASA model towards AV scenes, and we propose that a coherence index between the auditory and the visual input is computed prior to AV fusion, enabling the system to determine whether the sensory inputs should be bound together. This is the "two-stage model of AV fusion". Previous experiments on the modulation of the McGurk effect by AV coherent vs. incoherent contexts presented before the McGurk target have provided experimental evidence supporting the two-stage model. In this doctoral work, we further evaluate the AVSSA process within the two-stage architecture along various dimensions: introducing noise, considering multiple sources, assessing neurophysiological correlates, and testing different populations. A first set of experiments in younger adults focused on the behavioral characterization of the AV binding process by introducing noise; the results showed that participants were able to evaluate both the level of acoustic noise and the AV coherence, and to monitor AV fusion accordingly. In a second set of behavioral experiments involving competing AV sources, we showed that the AVSSA process enables the evaluation of coherence between auditory and visual features within a complex scene, in order to properly associate the components of a given AV speech source and to provide the fusion process with an assessment of the AV coherence of the extracted source. It also appears that the modulation of fusion depends on the attentional focus on one source or the other. An EEG experiment then aimed to reveal a neurophysiological marker of the binding and unbinding process, and showed that an incoherent AV context can modulate the effect of the visual input on the N1/P2 component. The last set of experiments focused on measuring AV binding and its dynamics in an older population, and provided results similar to those of younger adults, though with a greater amount of unbinding. Taken together, the results allow a better characterization of the AVSSA process and were embedded in the proposal of an improved neurocognitive architecture for AV fusion in speech perception.
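The abstract does not specify how the coherence index is computed; as a hedged illustration of the general idea only, one simple index correlates the acoustic amplitude envelope with a visual articulation signal (e.g., lip aperture) over short windows. Everything below (signal names, frame rate, window sizes, toy signals) is an assumption for the sketch, not the authors' model.

    # Illustrative sketch of an audiovisual coherence index: windowed correlation
    # between an acoustic envelope and a (synthetic) lip-aperture track.
    import numpy as np

    def av_coherence(audio_envelope, lip_aperture, win=50, hop=10):
        """Mean-able windowed Pearson correlation between the two streams."""
        scores = []
        for start in range(0, len(audio_envelope) - win + 1, hop):
            a = audio_envelope[start:start + win]
            v = lip_aperture[start:start + win]
            scores.append(np.corrcoef(a, v)[0, 1])
        return np.array(scores)

    rng = np.random.default_rng(1)
    t = np.linspace(0, 5, 500)                  # 5 s at 100 frames/s (assumed)
    envelope = np.abs(np.sin(2 * np.pi * 4 * t)) + 0.1 * rng.standard_normal(t.size)
    lips_coherent = envelope + 0.2 * rng.standard_normal(t.size)   # matched source
    lips_incoherent = rng.standard_normal(t.size)                  # competing source

    print(av_coherence(envelope, lips_coherent).mean())    # high -> bind and fuse
    print(av_coherence(envelope, lips_incoherent).mean())  # low  -> unbind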
18

Correlação entre os métodos avaliativos Sensory Processing Measure (SPM) e Pediatric Speech Intelligibility (PSI) em escolares / Correlation between the Sensory Processing Measure (SPM) and Pediatric Speech Intelligibility (PSI) assessment methods in schoolchildren

BERENGUER, Jersyca Jamyll da Costa 30 August 2016 (has links)
Introduction: In speech therapy, one of the tests used to assess auditory processing in children is the Pediatric Speech Intelligibility (PSI) test. In occupational therapy, questionnaires such as the Sensory Processing Measure (SPM) are used to identify sensory processing disorders, including auditory ones. A correlation between these instruments could facilitate early identification of children with such alterations. Objective: To correlate the performance of children on the PSI with the responses obtained on the SPM completed by those children's parents and teachers. Method: The study comprised 16 participants aged 5 to 9 years, of both sexes, students at public schools in Pernambuco. The parents and teachers of the children first answered 8 questions of the SPM, and the children were then given the PSI test in the contralateral and ipsilateral conditions. Results: Statistical analysis showed that the percentage of errors on the ipsilateral PSI was significantly higher than on the contralateral PSI. No effect of age was observed on the PSI results or the SPM questionnaire. Although the difference was not significant (p > 0.05), there was disagreement between the responses to the two questionnaires (SPMS, answered by parents, and SPMH, answered by teachers). There was no correlation between the PSI and the SPM questionnaire answered by teachers. Conclusion: According to this study, the SPM questionnaire cannot be recommended for identifying children with auditory processing difficulties.
19

Clear Speech Modifications in Children Aged 6-10

Taylor, Griffin Lijding January 2017 (has links)
Modifications to speech production made by adult talkers in response to instructions to speak clearly have been well documented in the literature. Targeting adult populations has been motivated by efforts to improve speech production for the benefit of communication partners; however, many adults also have communication partners who are children. Surprisingly, there is limited literature on whether children can change their speech production when cued to speak clearly. Pettinato, Tuomainen, Granlund, and Hazan (2016) showed that by age 12, children exhibited enlarged vowel space areas and a reduced articulation rate when prompted to speak clearly, but did not produce any other adult-like clear speech modifications in connected speech. Moreover, Syrett and Kawahara (2013) suggested that preschoolers produced longer and more intense vowels when prompted to speak clearly at the word level. These findings contrast with adult talkers, who show significant temporal and spectral differences between speech produced in control and clear speech conditions. The purpose of this study was therefore to analyze the changes in temporal and spectral characteristics of speech production that children aged 6-10 made in these experimental conditions. It is important to elucidate the clear speech profile of this population to better understand which adult-like clear speech modifications they make spontaneously and which are still developing. Understanding these baselines will advance future studies that measure the impact of more explicit instructions and children's abilities to better accommodate their interlocutors, a critical component of children's pragmatic and speech-motor development.
20

Acoustic modelling of cochlear implants

Conning, Mariette 18 August 2008 (has links)
High levels of speech recognition have been obtained by cochlear implant users in quiet conditions. In noisy environments, speech recognition deteriorates considerably, especially in speech-like noise. The aim of this study was to determine what underlies measured speech recognition in cochlear implantees and, furthermore, what underlies perception of speech in noise. Vowel and consonant recognition was determined in ten normal-hearing listeners using acoustic simulations. An acoustic model was developed to process vowels and consonants in quiet and noisy conditions; multi-talker babble and speech-like noise were added to the speech segments for the noisy conditions. A total of seven conditions were simulated acoustically: recognition in quiet and as a function of signal-to-noise ratio (0 dB, 20 dB and 40 dB speech-like noise, and 0 dB, 20 dB and 40 dB multi-talker babble). An eight-channel SPEAK processor was modelled and used to process the speech segments. A number of biophysical interactions between simulated nerve fibres and the cochlear implant were simulated by including models of these interactions in the acoustic model; the biophysical characteristics modelled included dynamic range compression and current spread in the cochlea. Recognition scores deteriorated with increasing noise levels, as expected. Vowel recognition was generally better than consonant recognition. In quiet conditions, the features transmitted most efficiently for recognition of speech segments were duration and F2 for vowels, and burst and affrication for consonants. In noisy conditions, listeners depended mainly on the duration of vowels and the burst of consonants for recognition. As the SNR decreased, the number of features used to recognise speech segments also became fewer, suggesting that the addition of noise reduces the number of acoustic features available for recognition. Efforts to improve the transmission of important speech features in cochlear implants should improve recognition of speech in noisy conditions. / Dissertation (MEng (Bio-Engineering))--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / unrestricted
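For readers unfamiliar with acoustic models of cochlear implants, the sketch below is a heavily simplified noise-vocoder simulation in the same spirit: split the signal into a small number of bands, extract each band's envelope, and use it to modulate band-limited noise that normal-hearing listeners can be asked to recognise. It is not the author's model; the channel count, filter orders, cutoff frequencies, and toy input are assumptions, and SPEAK's maxima selection, dynamic range compression, and current-spread modelling are omitted.

    # Simplified sketch of an acoustic (noise-vocoder) simulation of CI processing.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def vocode(signal, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced channel edges
        rng = np.random.default_rng(0)
        out = np.zeros_like(signal, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)
            envelope = np.abs(hilbert(band))                # channel envelope
            sos_env = butter(2, 160.0, btype="lowpass", fs=fs, output="sos")
            envelope = sosfiltfilt(sos_env, envelope)       # keep slow envelope cues
            carrier = rng.standard_normal(len(signal))      # noise carrier
            carrier = sosfiltfilt(sos, carrier)             # band-limit the carrier
            out += envelope * carrier
        return out / np.max(np.abs(out))

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    toy_input = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
    simulated = vocode(toy_input, fs)   # signal presented to normal-hearing listeners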
