511

A Treatise on the Thresholds of Interoctave Frequencies: 1500, 3000, and 6000 Hz

Wilson, Richard H., McArdle, Rachel 01 January 2014 (has links)
Background: For the past 50+ years, audiologists have been taught to measure the pure-tone thresholds at the interoctave frequencies when the thresholds at adjacent octave frequencies differ by 20 dB or more. Although this so-called 20 dB rule is logical when enhanced audiometric resolution is required, the origin of the rule is elusive, and a thorough literature search failed to find supporting scientific data. Purpose: This study examined whether a 20 dB difference between thresholds at adjacent octave frequencies is the critical value for deciding whether the threshold at the interoctave frequency should be measured. A related question is whether interoctave thresholds can be predicted from the thresholds at the adjacent, or bounding, octave frequencies instead of being measured, thereby saving valuable time. Research Design: Retrospective, descriptive, correlational, and cross-sectional. Study Sample: Audiograms from over a million veterans, archived at the Department of Veterans Affairs, Denver Acquisition and Logistics Center, provided the data. Data Collection and Analysis: Data from the left and right ears were evaluated independently. For each ear, three interoctave frequencies (1500, 3000, and 6000 Hz) were studied. For inclusion, thresholds at the interoctave frequency and the two bounding octave frequencies had to be measurable, which produced unequal numbers of participants in each of the six conditions (2 ears by 3 interoctave frequencies). Age tags were maintained for each of the six conditions. Results: Three areas of analysis were considered. First, relations among the octave-frequency thresholds were examined. About 62% of the 1000-2000 Hz threshold differences were ≥20 dB, whereas about 74% of the 4000-8000 Hz threshold differences were <20 dB. About half of the threshold differences between 2000 and 4000 Hz were <20 dB and half were >20 dB. 
There was an inverse relation between frequency and the percent of negative slopes between octave-frequency thresholds, ranging from 89% at 1500 Hz to 54% at 6000 Hz. The majority of octave-frequency pairs demonstrated poorer thresholds at the higher frequency of the pair. Second, interoctave-frequency thresholds were evaluated using the median metric. As the interoctave frequency increased from 1500 to 6000 Hz, the percent of interoctave thresholds that were not equal to the median threshold increased from ∼9.5% (1500 Hz) to 15.6% (3000 Hz) to 28.2% (6000 Hz). Bivariate plots of the interoctave thresholds against the mean octave-frequency thresholds produced R² values of 0.85-0.91 and slopes of 0.79-0.92 dB/dB. Third, the predictability of the interoctave thresholds from the mean thresholds of the bounding octave frequencies was evaluated. As expected, as the disparity between octave-frequency thresholds increased, the predictability of the interoctave threshold decreased; for example, at 1500 Hz, 53% of the predicted thresholds were within ±5 dB of the measured thresholds when the octave thresholds differed by ≥20 dB, whereas 77% were within ±5 dB when the octave thresholds differed by <20 dB. Conclusions: The current findings support the 20 dB rule for testing interoctave-frequency thresholds and suggest the criterion could be increased to 25 dB or more with little adverse effect.
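The decision rule and mean-based predictor evaluated in the abstract above can be sketched in a few lines. This is an illustrative sketch, not the study's code; the function names and example thresholds are invented.

```python
# Hypothetical sketch of the 20 dB interoctave rule and the mean-based
# predictor described in the abstract. Function names and the example
# thresholds are illustrative assumptions, not the study's software.

def needs_interoctave_test(lower_octave_db, upper_octave_db, criterion_db=20):
    """Apply the 20 dB rule: measure the interoctave frequency only when
    the bounding octave thresholds differ by the criterion or more."""
    return abs(upper_octave_db - lower_octave_db) >= criterion_db

def predict_interoctave_db(lower_octave_db, upper_octave_db):
    """Predict the interoctave threshold as the mean of the bounding
    octave thresholds (the predictor the study evaluated)."""
    return (lower_octave_db + upper_octave_db) / 2

# Example: thresholds (dB HL) at 1000 and 2000 Hz bounding 1500 Hz.
print(needs_interoctave_test(20, 45))  # 25 dB difference -> measure 1500 Hz
print(predict_interoctave_db(20, 30))  # <20 dB difference -> predict 25.0
```

Under this sketch, raising the criterion to 25 dB (as the conclusions suggest) is a one-argument change to `criterion_db`.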
512

Use of 35 Words for Evaluation of Hearing Loss in Signal-to-Babble Ratio: A Clinic Protocol

Wilson, Richard H., Burks, Christopher A. 01 November 2005 (has links)
Data from earlier studies that presented 70 words at 24 to 0 dB signal-to-babble (S/B) ratios indicated that most young listeners with normal hearing required 0 to 6 dB S/B ratios to attain 50% correct word recognition. Older listeners with hearing loss often required a >12 dB S/B ratio to attain 50% correct word recognition. In our study, we converted the Words in Noise test from one 70-word list into two 35-word lists for quicker administration by clinicians. Using baseline data from previous studies, we used two strategies to randomize the 35-word lists: based on recognition performance at each S/B ratio and based on recognition performance only. With the first randomization strategy, the 50% correct word-recognition points on the two lists differed by 0.5 dB for 72 listeners with hearing loss. With the second randomization strategy, 48 listeners with hearing loss performed identically on the two lists.
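The 50% correct word-recognition point referred to above can be estimated from a performance-intensity function. As a hedged illustration (the abstract does not specify the scoring method; linear interpolation between bracketing S/B ratios is one simple approach, and the data below are invented):

```python
# Illustrative sketch, not the WIN scoring procedure: estimate the S/B
# ratio giving 50% correct word recognition by linear interpolation
# between the two ratios that bracket 50%.

def fifty_percent_point(snr_db, pct_correct):
    """Interpolate the S/B ratio at which performance crosses 50%."""
    pairs = sorted(zip(snr_db, pct_correct))  # ascending S/B ratio
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if y0 <= 50 <= y1:
            return x0 + (50 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("50% point not bracketed by the data")

# Hypothetical performance-intensity data over the 24 to 0 dB S/B range:
snrs = [0, 4, 8, 12, 16, 20, 24]
scores = [10, 30, 50, 70, 85, 95, 100]
print(fifty_percent_point(snrs, scores))  # -> 8.0 dB S/B
```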
513

Normative data on the auditory memory performance of three- and four-year-old children as measured by the Auditory Memory Test Package (AMTP)

Davis, Patricia R. 01 January 1984 (has links)
The purpose of this study was to collect normative data on the auditory memory performance of three- and four-year-old children as measured by the Auditory Memory Test Package (AMTP). Specifically, this investigation sought to answer one question: is the AMTP sensitive to age differences when administered to young children ages 3.0-4.11?
514

The neural basis for auditory-motor interactions during musical rhythm processing

Chen, Joyce Lynn January 2008 (has links)
No description available.
515

Neural mechanisms of attention and speech perception in complex, spatial acoustic environment

Patel, Prachi January 2023 (has links)
We can hold conversations in environments where there are typically additional simultaneous talkers or background noise, such as vehicles on the street or music playing at a sidewalk café. This seemingly trivial everyday task is difficult for people with hearing deficits and extremely hard to model in machines. This dissertation explores the neural mechanisms by which the human brain encodes such complex acoustic environments and how cognitive processes like attention shape the processing of attended speech. My initial experiments explore the representation of acoustic features that help us localize single sound sources in the environment, features like the direction and spectrotemporal content of sounds, and the interaction of these representations with each other. I play natural American English sentences from five azimuthal directions in space. Using intracranial electrocorticography (ECoG) recordings from the human auditory cortex of the listener, I show that the direction of a sound and its spectrotemporal content are encoded in two distinct aspects of the neural response: direction modulates the mean of the response, while spectrotemporal features contribute to the modulation of the neural response around its mean. Furthermore, I show that these features are orthogonal to each other and do not interact. This representation enables successful decoding of both spatial and phonetic information. These findings contribute to defining the functional organization of responses in the human auditory cortex, with implications for more accurate neurophysiological models of spatial speech processing. I then take a step further to investigate the role of attention in encoding the direction and phonetic features of speech. I play a mixture of spatialized male and female talkers, e.g., a male talker on the listener's left and a female talker on the right, with the talkers' locations switching randomly after each sentence. 
I ask the listener to follow a given talker, e.g., the male talker, as the talkers switch locations after each uttered sentence. While the listener performs this task, I record intracranial EEG data from their auditory cortex. I investigate the bottom-up, stimulus-dependent and attention-independent encoding of such cocktail-party speech, as well as the top-down, attention-driven encoding of location and speech features. I find a bottom-up, stimulus-driven contralateral preference in the encoding of the mixed speech: the left hemisphere automatically and predominantly encodes speech coming from the right, and vice versa. On top of this bottom-up representation, I find that the attended talker's direction modulates the baseline of the neural response and the attended talker's voice modulates the spectrotemporal tuning of the neural response. Moreover, the modulation by the attended talker's location is present throughout the auditory cortex, but the modulation by the attended talker's voice is present only in higher-order auditory cortical areas. My findings provide crucially needed evidence on how bottom-up and top-down signals interact in the auditory cortex in crowded and complex acoustic scenes to enable robust speech perception. They also shed light on the hierarchical encoding of attended speech, with implications for improving auditory attention decoding models. Finally, I present a clinical case study showing that electrical stimulation of specific sites in the planum temporale (PT) of an epilepsy patient implanted with intracranial electrodes enhances speech-in-noise perception. When noisy speech is played during such stimulation, the patient perceives that the noise disappears and that the speech sounds similar to clean speech heard without any noise. 
We performed a series of analyses to determine the functional organization of three main subregions of the human auditory cortex: the planum temporale (PT), Heschl's gyrus (HG), and the superior temporal gyrus (STG). Using cortico-cortical evoked potentials (CCEPs), we modeled the PT sites as located between the sites in HG and STG. Furthermore, we find that the discriminability of speech from nonspeech sounds in population neural responses increases from HG to PT to STG. These findings causally implicate the PT in background-noise suppression and may point to a novel neuroprosthetic approach to assist with the challenging task of speech perception in noise. Together, this dissertation presents new evidence for the neural encoding of spatial speech, for the interaction of stimulus-driven and attention-driven neural processes in spatial multi-talker speech perception, and for the enhancement of speech-in-noise perception by electrical brain stimulation.
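The mean-versus-modulation decomposition described in this abstract can be illustrated with a toy computation. This is a minimal sketch under invented assumptions: the simulated arrays stand in for per-trial ECoG responses at one recording site, where sound direction shifts the response baseline.

```python
# Minimal sketch of the decomposition described above: sound direction
# shifts the mean (baseline) of a site's response, while spectrotemporal
# content drives the modulation around that mean. Data are simulated
# stand-ins for ECoG responses, not recordings from the dissertation.
import random

random.seed(0)

def decompose(responses_by_direction):
    """Split each trial response into a direction-dependent mean and a
    zero-mean residual carrying the modulation around that mean."""
    means, residuals = {}, {}
    for direction, trials in responses_by_direction.items():
        flat = [v for trial in trials for v in trial]
        mu = sum(flat) / len(flat)
        means[direction] = mu
        residuals[direction] = [[v - mu for v in trial] for trial in trials]
    return means, residuals

# Simulated site: left-side sources evoke a higher baseline response.
data = {
    "left":  [[2.0 + random.gauss(0, 0.1) for _ in range(50)] for _ in range(5)],
    "right": [[0.5 + random.gauss(0, 0.1) for _ in range(50)] for _ in range(5)],
}
means, residuals = decompose(data)
print(means["left"] > means["right"])  # direction is read out from the mean
```

Because the residuals are mean-subtracted, direction (carried by `means`) and the remaining modulation (carried by `residuals`) can be analyzed independently, mirroring the orthogonality the abstract reports.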
516

The Relationship between Literacy Readiness and Auditory and Visual Perception in Kindergarteners

Schnobrich, Kathleen Marie 30 April 2009 (has links)
No description available.
517

Attention Modulates ERP Indices of the Precedence Effect

Zobel, Benjamin H. 07 November 2014 (has links) (PDF)
When presented with two identical sounds from different locations separated by a short onset asynchrony, listeners report hearing a single source at the location of the lead sound, a phenomenon called the precedence effect (Wallach et al., 1949; Haas, 1951). When the onset asynchrony is above echo threshold, listeners report hearing the lead and lag sounds as separate sources with distinct locations. Event-related potential (ERP) studies have shown that perception of separate sound sources is accompanied by an object-related negativity (ORN) 100-250 ms after onset and a late posterior positivity (LP) 300-500 ms after onset (Sanders et al., 2008; Sanders et al., 2011). The current study tested whether these ERP effects are modulated by attention. Clicks were presented in lead/lag pairs at and around listeners’ echo thresholds while in separate blocks they 1) attended to the sounds and reported if they heard the lag sound as a separate source, and 2) performed a difficult 2-back visual task. Replicating previous results, when attention was directed to the sounds, an ORN and LP were observed for click pairs 1 ms above compared to 1 ms below echo threshold. In contrast, when attention was directed away from the sounds to the visual task, neither the ORN nor the LP was evident. Instead, click pairs 1 ms above echo threshold elicited an anterior positivity 250-450 ms after onset. In addition, an effect resembling an ORN was found in comparing ERPs elicited by unattended click pairs with SOAs below attended echo threshold. These results indicate that attention modulates early perceptual processes in the precedence effect and may be critical for auditory object formation under these conditions.
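The ORN and LP measures above are defined over fixed latency windows of a difference wave. As a hedged sketch (the waveform, sampling resolution, and windows below are invented for illustration; real ERP analysis averages many baseline-corrected trials):

```python
# Hedged sketch of windowed ERP measures like those reported above: mean
# amplitude of a difference wave in the ORN window (100-250 ms) and the
# LP window (300-500 ms). The fake waveform is for illustration only.

def mean_amplitude(waveform, times_ms, window_ms):
    """Average amplitude of the samples falling inside a latency window."""
    lo, hi = window_ms
    vals = [v for t, v in zip(times_ms, waveform) if lo <= t <= hi]
    return sum(vals) / len(vals)

# Fake 1 ms-resolution difference wave (above- minus below-threshold):
times = list(range(0, 600))
diff_wave = [-1.0 if 100 <= t <= 250 else (0.8 if 300 <= t <= 500 else 0.0)
             for t in times]

orn = mean_amplitude(diff_wave, times, (100, 250))  # negative -> ORN-like
lp = mean_amplitude(diff_wave, times, (300, 500))   # positive -> LP-like
print(orn, lp)
```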
518

Aplicação de uma rede neural artificial para a avaliação da rugosidade e soprosidade vocal / The use of an artificial neural network for evaluation of vocal roughness and breathiness

Baravieira, Paula Belini 28 March 2016 (has links)
Auditory-perceptual evaluation plays a fundamental role in the study and assessment of voice; however, because it is subjective, it is prone to imprecision and variation. Acoustic analysis, on the other hand, offers reproducible results but needs refinement, since it does not accurately analyze voices with more intense dysphonia or chaotic waveforms. Developing measures that yield reliable knowledge about vocal function is therefore a long-standing need in this line of research and clinical practice. In this context, artificial intelligence approaches such as artificial neural networks appear promising. Objective: To validate an automatic system using artificial neural networks for the evaluation of vocal roughness and breathiness. Methods: One hundred fifty voices, ranging from neutral to intensely rough and/or breathy, were selected from the database of the Clínica de Fonoaudiologia da Faculdade de Odontologia de Bauru (FOB/USP). Twenty-three were excluded for not meeting the inclusion criteria, leaving 123 voices for analysis. Procedures included auditory-perceptual evaluation on a 100 mm visual analog scale and a four-point numerical scale; extraction of voice-signal features with the Wavelet Packet Transform and the acoustic parameters jitter, shimmer, derivative amplitude, and pitch amplitude; and validation of the classifier through parameterization, training, testing, and evaluation of the artificial neural networks. Results: In the auditory-perceptual evaluation, Intraclass Correlation Coefficient (ICC) testing showed excellent agreement: 0.85 between raters and 0.87 to 0.93 within raters. For the artificial neural network, the best performance in discriminating breathiness and its degrees came from the subset composed of jitter, pitch amplitude, and fundamental frequency, with 74% accuracy, excellent agreement with the visual-analog auditory-perceptual evaluation (ICC of 0.80), and a mean error of 9 mm. For roughness, the best subset combined the Wavelet Packet Transform at one decomposition level with jitter, shimmer, pitch amplitude, and fundamental frequency, yielding 73% accuracy, excellent agreement (ICC of 0.84), and a mean error of 10 mm. Conclusion: Artificial neural networks for identifying and grading roughness and breathiness showed excellent reliability (ICC > 0.80), with results comparable to interrater agreement. The artificial neural network is thus a promising methodology for vocal evaluation, its greatest advantage being the objectivity of the assessment.
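Two of the acoustic features used as classifier inputs above, jitter and shimmer, have standard local definitions: the mean absolute cycle-to-cycle difference relative to the mean, computed over periods and peak amplitudes respectively. A minimal illustrative computation (this is not the study's feature extractor, and the cycle data are invented):

```python
# Illustrative local jitter/shimmer computation over cycle-to-cycle
# periods and peak amplitudes; the input data are hypothetical.

def jitter_percent(periods_ms):
    """Local jitter: mean absolute difference of consecutive periods,
    as a percentage of the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods_ms, periods_ms[1:])]
    return 100 * (sum(diffs) / len(diffs)) / (sum(periods_ms) / len(periods_ms))

def shimmer_percent(amplitudes):
    """Local shimmer: the same measure applied to peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Hypothetical cycle data from a sustained vowel (~200 Hz voice):
periods = [5.0, 5.1, 4.9, 5.0, 5.2]   # ms
amps = [0.80, 0.78, 0.82, 0.79, 0.81]
print(jitter_percent(periods), shimmer_percent(amps))
```

Rougher or breathier voices tend to show larger cycle-to-cycle perturbations, which is why these measures are informative inputs for the classifier.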
519

Indicadores para o transtorno do processamento auditivo em pré-escolares / Indicators for auditory processing disorder in preschool children

Vilela, Nadia 01 September 2016 (has links)
Introduction: The auditory system is organized as a network and interacts with other systems, such as language. Central Auditory Processing (CAP) comprises the auditory skills needed to interpret the sounds we hear. Currently, a CAP disorder can only be detected from the age of 7. At that age, however, children are already in the literacy process, and a CAP disorder can hinder their learning. Objectives: To investigate whether the performance of five-year-old children on auditory tests corresponds to the performance achieved at age seven. Method: Hearing and CAP evaluations were carried out on 36 children at two different times. The assessment included pure-tone audiometry at octave frequencies from 0.25 to 8.0 kHz, immittance testing, electroacoustic evaluation with transient-evoked otoacoustic emissions, and evaluation of the inhibitory effect of the efferent pathway. The CAP tests were: Sound Localization, Verbal and Nonverbal Sequential Memory, the Pediatric Speech Intelligibility test, Figure Identification with background noise, the Dichotic Digits test, and the Random Gap Detection test. The children also completed the USP Picture Vocabulary Test. At the first evaluation, ages ranged from 5:2 to 6:1 (years:months) and at the second from 7:1 to 7:8. The interval between evaluations I and II ranged from 18 to 23 months. Based on the CAP results at the second evaluation, the children were classified into three groups: G I, 10 children with CAP disorder and speech complaints; G II, 18 children with CAP disorder; and G III, 8 children with normal CAP. This classification was applied retrospectively to evaluation I. A significance level of 0.05 was adopted for the hypothesis tests. Results: Comparison of the two evaluations shows that risk for CAP disorder can already be identified at the first evaluation. A discriminant function was established that correctly classified children with respect to CAP disorder at the first evaluation in 77.8% of G I, 66.7% of G II, and 87.5% of G III. Conclusion: Children with a CAP disorder at age 7 already showed indicators of the disorder at age 5.
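The abstract reports per-group correct-classification rates from a discriminant function, but does not give the function itself. As a heavily hedged stand-in, the sketch below classifies by nearest group centroid on invented feature vectors and reports the overall correct-classification rate; the groups, centroids, and samples are all hypothetical.

```python
# Hypothetical stand-in for a discriminant classifier: assign each child
# to the group whose centroid (in a made-up 2-D feature space of
# first-evaluation scores) is closest, then score the classification.

def nearest_centroid(x, centroids):
    """Assign feature vector x to the group with the closest centroid."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(centroids, key=lambda g: dist(x, centroids[g]))

# Invented centroids and labeled samples (not the study's data):
centroids = {"GI": (2.0, 1.0), "GII": (4.0, 3.0), "GIII": (7.0, 6.0)}
samples = [((2.1, 1.2), "GI"), ((3.9, 2.8), "GII"), ((6.8, 6.1), "GIII")]

correct = sum(nearest_centroid(x, centroids) == label for x, label in samples)
print(f"{100 * correct / len(samples):.1f}% correctly classified")
```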
520

Características audiológicas pré e pós adaptação de aparelhos auditivos em pacientes com zumbido / Audiologic characteristics before and after hearing aids use in patients with tinnitus

Silva, Eleonora Csipai da 28 May 2018 (has links)
Introduction: Tinnitus is a symptom of alterations in the auditory pathways, with varied etiology. Many patients with hearing loss have tinnitus, and one treatment is the use of hearing aids. Hearing aids amplify external sounds, so patients perceive environmental sounds better, which reduces the perception of tinnitus and improves sound input through acoustic enrichment. The literature shows, however, that some patients do not notice a decrease in tinnitus annoyance or perception when using hearing aids. Studying peripheral hearing in these patients could shed light on factors that hinder the reduction of tinnitus perception with hearing aid use. Objective: To evaluate the audiological characteristics of patients with tinnitus and hearing loss, and to verify whether there are differences between the group whose tinnitus perception decreased with hearing aid use and the group that did not obtain the same benefit. Methods: Twenty-nine subjects were evaluated, divided into two groups: Group I (GI), 20 subjects who noticed improvement in tinnitus perception after two months of hearing aid use, and Group II (GII), nine subjects who did not. The Visual Analogue Scale (VAS) score determined the group division. 
The following were administered before hearing aid fitting and after two months of use: questionnaires assessing tinnitus annoyance (Tinnitus Handicap Inventory, THI) and hearing handicap (Hearing Handicap Inventory for the Elderly, Screening Version, HHIE-S); audiological evaluations (air- and bone-conduction audiometry, speech recognition index, Short Increment Sensitivity Index, otoacoustic emissions); temporal auditory processing tests (Gaps-in-Noise, GIN; Random Gap Detection Test, RGDT; Pitch Pattern and Duration Pattern tests, PPS and DPT); and psychoacoustic measures of tinnitus (pitch, loudness, and Minimum Masking Level, MML). Subjects were between 28 and 68 years old (mean 55), of both sexes, and used hearing aids of the same brand, fitted appropriately for each subject. Results: There was no significant difference between the groups on the audiological tests. On the temporal auditory tests, Group I's percentage of correct responses was higher than Group II's, with a trend toward statistical significance on the PPS. THI and HHIE-S scores decreased significantly in both GI and GII after hearing aid use. Regarding the psychoacoustic measures, there was a statistically significant difference between initial and final loudness and MML in GI, and a between-group difference in final loudness and MML. Conclusion: The audiological characteristics evaluated were not sufficient to indicate whether a patient with hearing loss would benefit from a decrease in tinnitus perception after two months of hearing aid use. Individuals with poor PPS performance tend not to show reduced tinnitus perception with hearing aid use. The present study points to the need to investigate other characteristics that may be associated with difficulty in reducing tinnitus perception with hearing aids.
