21

Is music listening associated with our cognitive abilities? : A study about how auditory working memory, speech-in-noise perception and listening habits are connected

Savander, Alma January 2020 (has links)
This study explores whether the number of hours young adults with self-reported normal hearing spend listening to music is associated with auditory working memory, and whether hours of music listening and auditory working memory can predict speech-in-noise perception. Thirty native Swedish-speaking university students with self-reported normal hearing, aged 21 to 29 years (M = 23.2), completed a self-report questionnaire on their listening habits, a listening span test, and a speech-in-noise test. A hierarchical multiple linear regression analysis was performed. The results showed no significant correlation between hours of music listening and auditory working memory, nor did hours of music listening and auditory working memory significantly predict speech-in-noise perception. These non-significant findings may be due to several factors, including methodological issues such as the sample size, communication difficulties caused by a poor internet connection, and/or the use of self-reported answers. These results, and the arguments presented in the discussion, indicate that further research is needed to better answer the research questions of the current study.
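To make the analysis concrete, here is a minimal sketch of a two-step hierarchical multiple linear regression of the kind described, in Python with statsmodels. The column names, scales, synthetic data, and predictor entry order are all assumptions; the abstract does not specify them. The quantity of interest is the step-wise change in R².

```python
# A hedged sketch of a two-step hierarchical regression; variable names,
# scales, and entry order are assumptions, not taken from the thesis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30  # the study's sample size
df = pd.DataFrame({
    "music_hours": rng.uniform(0, 40, n),             # listening hours (hypothetical scale)
    "awm_span": rng.integers(2, 8, n).astype(float),  # listening-span score (hypothetical)
    "sin_score": rng.normal(0, 1, n),                 # speech-in-noise performance (hypothetical)
})

# Step 1: hours of music listening only
m1 = sm.OLS(df["sin_score"], sm.add_constant(df[["music_hours"]])).fit()
# Step 2: add auditory working memory
m2 = sm.OLS(df["sin_score"], sm.add_constant(df[["music_hours", "awm_span"]])).fit()

# The increment in R^2 from step 1 to step 2 tests the added predictor
print(f"R2 step 1 = {m1.rsquared:.3f}, R2 step 2 = {m2.rsquared:.3f}, "
      f"delta R2 = {m2.rsquared - m1.rsquared:.3f}")
```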
22

Development and validation of a South African English smartphone-based speech-in-noise hearing test

Engelbrecht, Jenni-Mari January 2017 (has links)
Approximately 80% of adults aged 65 years and older have not been assessed or treated for hearing loss, despite the effect hearing loss has on communication and quality of life (World Health Organization [WHO], 2013a). In South Africa, the health care system faces many challenges, among which access to ear and hearing health care is a major problem. This study aimed to develop and validate a smartphone-based digits-in-noise hearing test for South African English towards improved access to hearing screening. The study also considered the effect of hearing loss and English-speaking competency on the South African English digits-in-noise hearing test to evaluate its suitability for use across native (N) and non-native (NN) speakers. Lastly, the study evaluated the digits-in-noise test's applicability as part of the diagnostic audiometric test battery, as a clinical test to measure speech recognition ability in noise. During the development and validation phase of this study, the sample consisted of 40 normal-hearing subjects with thresholds ≤15 dB across the frequency spectrum (250 – 8000 Hertz [Hz]) and 186 subjects with normal hearing in both ears, or normal hearing in the better ear. Single digits (0 – 9) spoken by a N English female speaker were recorded. Level corrections were applied to create a set of homogeneous digits with steep speech recognition functions. A smartphone application (app) was created to utilize 120 digit-triplets in noise as test material. An adaptive test procedure determined the speech reception threshold (SRT). Experiments were performed to determine headphone effects on the SRT and to establish normative data. The results showed steep speech recognition functions with a slope of 20%/dB for digit-triplets presented in noise using the smartphone app. The results for five headphone types indicate that the smartphone-based hearing test is reliable and can be conducted using standard Android smartphone headphones or clinical headphones. A prospective cross-sectional cohort study of N and NN English adults with and without sensorineural hearing loss compared pure-tone air conduction thresholds to the SRT recorded with the smartphone digits-in-noise hearing test. A rating scale was used for NN English listeners' self-reported competence in speaking English. This study consisted of 454 adult listeners (164 male, 290 female; range 16 – 90 years), of whom 337 had a best-ear four-frequency pure-tone average (4FPTA; 0.5, 1, 2 and 4 kHz) of ≤25 dB hearing level (HL). A linear regression model identified three predictors of the digits-in-noise SRT, namely 4FPTA, age and self-reported English-speaking competence. The NN group with poor self-reported English-speaking competence (≤5/10) performed significantly (p<0.01) poorer than the N & NN (≥6/10) group on the digits-in-noise test. Screening characteristics of the test improved with separate cut-off values depending on self-reported English-speaking competence for the N & NN (≥6/10) group and the NN (≤5/10) group. Logistic regression models that included age in the analysis showed a further improvement in sensitivity and specificity for both groups (area under the receiver operator characteristic curve [AUROC] .962 and .903, respectively). A descriptive study evaluated 109 adult subjects (43 male, 66 female) with and without sensorineural hearing loss by comparing pure-tone air conduction thresholds, speech recognition monaural performance score intensity (SRS dB) and the digits-in-noise SRT.
An additional nine adult hearing aid users (4 male, 5 female) were assessed in a subset to determine aided and unaided digits-in-noise SRTs. The digits-in-noise SRT was strongly associated with the best-ear 4FPTA (r=0.81) and maximum SRS dB (r=0.72). The digits-in-noise test had high sensitivity and specificity to identify abnormal pure-tone (0.88 and 0.88, respectively) and SRS dB (0.76 and 0.88, respectively) results. There was a mean signal-to-noise ratio (SNR) improvement in the aided condition, demonstrating an overall benefit of 0.84 dB SNR. Considerable individual variability between subjects was observed in the aided condition (-3.2 to -9.4 dB SNR) and the unaided condition (-2 to -9.4 dB SNR). This study demonstrated that a smartphone app provides the opportunity to use the English digits-in-noise hearing test as a national test for South Africans. The smartphone app can accommodate NN listeners by adjusting reference scores based on self-reported English-speaking competence. The inclusion of age when determining the screening test result increases the accuracy of the screening test in normal-hearing listeners. Providing these adjustments can ensure adequate test performance across N English and NN English listeners. Furthermore, the digits-in-noise SRT is strongly associated with the best-ear 4FPTA and maximum SRS dB and could therefore provide complementary information on speech recognition impairment in noise in a clinical audiometric setting. The digits-in-noise SRT can also demonstrate the benefit of hearing aid fittings. The test is quick to administer and provides information on the SNR loss. The digits-in-noise SRT could therefore serve as a valuable tool in counselling and management of expectations for persons with hearing loss who receive amplification. / Thesis (PhD)--University of Pretoria, 2017. / National Research Foundation (NRF) / Speech-Language Pathology and Audiology / PhD / Unrestricted
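The abstract states that "an adaptive test procedure determined the speech reception threshold (SRT)" without giving details. A common digits-in-noise implementation is a one-up/one-down staircase on all-or-nothing triplet scoring; the sketch below illustrates that general idea under assumed parameters (2 dB steps, 24 trials, SRT as the mean SNR of the later trials), which may differ from the published procedure.

```python
# A one-up/one-down staircase over digit triplets; step size, trial count,
# and scoring rule are assumptions and may differ from the published test.
import random

def adaptive_din_srt(present_triplet, n_trials=24, start_snr=0.0, step=2.0):
    """present_triplet(snr) must play a digit triplet at the given SNR and
    return True if all three digits are repeated correctly."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = present_triplet(snr)
        track.append(snr)
        snr += -step if correct else step  # harder after a correct response
    return sum(track[4:]) / len(track[4:])  # mean SNR once the track settles

# Toy simulated listener whose true SRT is -8 dB SNR
simulated = lambda snr: random.random() < 1 / (1 + 10 ** (-(snr + 8) / 2))
print(f"estimated SRT: {adaptive_din_srt(simulated):.1f} dB SNR")
```

A one-up/one-down rule converges on the 50% correct point, which is why the mean SNR of the settled track estimates the SRT.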
23

The Effects of Aging on Temporal Masking

Fulton, Susan E 30 June 2010 (has links)
The ability to resolve rapid intensity and frequency fluctuations in sound is important for understanding speech, especially in real-world environments that include background noise and reverberation. Older listeners often complain of difficulties understanding speech in such real-world environments. One factor thought to influence speech understanding in noisy and reverberant environments is temporal resolution, the ability to follow rapid acoustic changes over time. Temporal resolution is thought to help listeners resolve rapid acoustic changes in speech as well as use small glimpses of speech available in the dips or gaps in the background sounds. Temporal resolution is known to deteriorate with age and hearing loss, negatively affecting the ability to understand speech in noisy real-world environments. Measures of temporal resolution, including temporal masking, gap detection, and speech in interrupted noise, use a silent gap as the cue of interest. Temporal masking and speech-in-interrupted-noise tasks measure how well a listener resolves a stimulus before, after, or between sounds (i.e., within a silent gap), while gap detection tasks measure how well the listener resolves the timing of the silent gap itself. A listener needs to resolve both the information within the gap and the timing of the gap when listening to speech in background sounds. This study examined the effects of aging and hearing loss on three measures of temporal resolution: temporal masking, gap detection, and speech understanding in interrupted noise. For all three measures, participants were young listeners with normal hearing (n = 8, mean age = 25.4 years) and older listeners with hearing loss (n = 9, mean age = 72.1 years). Results showed significant differences between listener groups on all three temporal measures. Specifically, older listeners with hearing loss had higher temporal masked thresholds, larger gap detection thresholds, and required a higher signal-to-noise ratio for speech understanding in interrupted noise. Relations among the temporal tasks were also observed: temporal masked thresholds and gap detection thresholds accounted for a significant amount of the variance in speech-in-noise scores. Findings suggest that deficits in temporal resolution abilities may contribute to the speech-in-noise difficulties reported by older listeners.
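As an illustration of the gap-detection paradigm described above, the following sketch generates a noise-marker stimulus containing a silent gap; the marker duration, bandwidth, and ramping are illustrative assumptions, not this study's actual parameters.

```python
# A silent gap between two broadband noise markers, the stimulus shape used
# in gap-detection tasks; all parameters here are illustrative assumptions.
import numpy as np

def gap_stimulus(gap_ms, fs=44100, marker_ms=250.0, ramp_ms=1.0, seed=0):
    rng = np.random.default_rng(seed)
    marker = rng.uniform(-1.0, 1.0, int(fs * marker_ms / 1000))
    ramp = np.linspace(0.0, 1.0, int(fs * ramp_ms / 1000))
    marker[:ramp.size] *= ramp          # onset ramp to limit spectral splatter
    marker[-ramp.size:] *= ramp[::-1]   # offset ramp
    gap = np.zeros(int(fs * gap_ms / 1000))
    return np.concatenate([marker, gap, marker])

stim = gap_stimulus(5.0)  # a 5 ms gap between two 250 ms noise bursts
```

The listener's gap detection threshold is the shortest gap_ms that can still be reliably detected, typically tracked adaptively.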
24

Vowel perception in severe noise

Swanepoel, Rikus 05 March 2013 (has links)
A model that can accurately predict speech recognition for cochlear implant (CI) listeners is essential for the optimal fitting of cochlear implants. By implementing a CI acoustic model that mimics CI speech processing, the challenge of predicting speech perception in cochlear implants can be simplified. As a first step toward predicting the recognition of speech processed through an acoustic model, the current study investigated vowel perception in severe speech-shaped noise. The aim was to determine the acoustic cues that listeners use to recognize vowels in severe noise and to make suggestions regarding a vowel perception predictor. Formants are known to play an important role in quiet, while their role in severe noise is still unknown. The relative importance of F1 and F2 is also of interest, since masking by noise is not always evenly distributed over the vowel spectrum. The problem was addressed by synthesizing vowels consisting of either detailed spectral shape or formant information only. F1 and F2 were also suppressed to examine the effect in severe noise. The synthetic stimuli were presented to listeners in quiet and at signal-to-noise ratios of 0 dB, -5 dB and -10 dB. Results showed that in severe noise, vowels synthesized according to the whole spectrum were recognized significantly better than vowels containing only formants. Multidimensional scaling and FITA analysis indicated that formants were still perceived and extracted by the human auditory system in severe noise, especially when the vowel spectrum consisted of the whole spectral shape. Although F1 and F2 vary in importance in quiet and in moderately noisy conditions, the roles of the two cues appear to be similar in severe noise. It was suggested that not only the availability of formants, but also details of the vowel spectral shape, can help predict vowel recognition in severe noise to a certain degree. / Dissertation (MEng)--University of Pretoria, 2010. / Electrical, Electronic and Computer Engineering / unrestricted
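Presenting stimuli at signal-to-noise ratios of 0, -5 and -10 dB implies scaling the noise relative to the vowel. Below is a minimal RMS-based sketch of that mixing step; it is not the thesis's actual code, and a pure tone stands in for a synthesized vowel.

```python
# RMS-based mixing of a target and noise at a prescribed SNR (a sketch,
# not the thesis's code); a pure tone stands in for a synthesized vowel.
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # Gain such that 20*log10(rms(signal) / (gain * rms(noise))) == snr_db
    gain = rms(signal) / (rms(noise) * 10 ** (snr_db / 20.0))
    return signal + gain * noise

fs = 16000
t = np.arange(fs) / fs
vowel = np.sin(2 * np.pi * 250.0 * t)                  # placeholder "vowel"
noise = np.random.default_rng(1).normal(0.0, 1.0, fs)  # stand-in for speech-shaped noise
mixed = mix_at_snr(vowel, noise, snr_db=-10.0)         # the study's severest condition
```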
25

Clinical Experience With the Words-in-Noise Test on 3430 Veterans: Comparisons With Pure-Tone Thresholds and Word Recognition in Quiet

Wilson, Richard H. 01 July 2011 (has links)
Background: Since the 1940s, measures of pure-tone sensitivity and speech recognition in quiet have been vital components of the audiologic evaluation. Although early investigators urged that speech recognition in noise also should be a component of the audiologic evaluation, only recently has this suggestion started to become a reality. This report focuses on the Words-in-Noise (WIN) Test, which evaluates word recognition in multitalker babble at seven signal-to-noise ratios and uses the 50% correct point (in dB SNR) calculated with the Spearman-Kärber equation as the primary metric. The WIN was developed and validated in a series of 12 laboratory studies. The current study examined the effectiveness of the WIN materials for measuring the word-recognition performance of patients in a typical clinical setting. Purpose: To examine the relations among three audiometric measures (pure-tone thresholds, word-recognition performance in quiet, and word-recognition performance in multitalker babble) for veterans seeking remediation for their hearing loss. Research Design: Retrospective, descriptive. Study Sample: The participants were 3430 veterans who for the most part were evaluated consecutively in the Audiology Clinic at the VA Medical Center, Mountain Home, Tennessee. The mean age was 62.3 yr (SD = 12.8 yr). Data Collection and Analysis: The data were collected in the course of a 60-min routine audiologic evaluation. A history, otoscopy, and aural-acoustic immittance measures also were included in the clinic protocol but were not evaluated in this report. Results: Overall, the 1000-8000 Hz thresholds were significantly lower (better) in the right ear (RE) than in the left ear (LE). There was a direct relation between age and the pure-tone thresholds, with greater change across age in the high frequencies than in the low frequencies. Notched audiograms at 4000 Hz were observed in at least one ear in 41% of the participants, with more unilateral than bilateral notches. Normal pure-tone thresholds (≤20 dB HL) were obtained from 6% of the participants. Maximum performance on the Northwestern University Auditory Test No. 6 (NU-6) in quiet was ≥90% correct for 50% of the participants, with an additional 20% performing at ≥80% correct; the RE performed 1-3% better than the LE. Of the 3291 participants who completed the WIN in both ears, only 7% exhibited normal performance (50% correct point of ≤6 dB SNR). Overall, WIN performance was significantly better in the RE (mean = 13.3 dB SNR) than in the LE (mean = 13.8 dB SNR). Recognition performance on both the NU-6 and the WIN decreased as a function of both pure-tone hearing loss and age. There was a stronger relation between the high-frequency pure-tone average (1000, 2000, and 4000 Hz) and the WIN than between the pure-tone average (500, 1000, and 2000 Hz) and the WIN. Conclusions: The results on the WIN from both the previous laboratory studies and the current clinical study indicate that the WIN is an appropriate clinic instrument to assess word-recognition performance in background noise. Recognition performance on a speech-in-quiet task does not predict performance on a speech-in-noise task, as the two tasks reflect different domains of auditory function. Experience with the WIN indicates that words-in-noise tasks should be considered the "stress test" for auditory function.
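The WIN's primary metric, the 50% correct point computed with the Spearman-Kärber equation, reduces to a one-line formula. The sketch below uses the commonly published WIN parameters (seven SNRs from 24 down to 0 dB in 4 dB steps, 5 words per SNR in the 35-word list form); these are assumptions drawn from the WIN literature rather than from this report.

```python
# Spearman-Karber 50% point for the WIN; parameters assumed from the WIN
# literature (35-word list form: 5 words at each of seven SNRs, 24 to 0 dB).
def win_srt_spearman_karber(correct_per_snr, highest_snr=24.0, step=4.0,
                            words_per_snr=5):
    # 50% point = highest SNR + step/2 - step * (total correct / words per SNR)
    total_correct = sum(correct_per_snr)
    return highest_snr + step / 2.0 - step * total_correct / words_per_snr

# e.g. 5, 5, 4, 3, 1, 0, 0 words correct from 24 down to 0 dB SNR
print(win_srt_spearman_karber([5, 5, 4, 3, 1, 0, 0]))  # -> 11.6 dB SNR
```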
26

The Words-in-Noise (WIN) Test With Multitalker Babble and Speech-Spectrum Noise Maskers

Wilson, Richard H., Carnell, Crystal S., Cleghorn, Amber L. 01 January 2007 (has links)
The Words-in-Noise (WIN) test uses monosyllabic words at seven signal-to-noise ratios in multitalker babble (MTB) to evaluate the ability of individuals to understand speech in background noise. The purpose of this study was to evaluate the criterion validity of the WIN by comparing recognition performances in MTB and in speech-spectrum noise (SSN) for listeners with normal hearing and listeners with hearing loss. The MTB and SSN had identical rms levels and similar spectra but different amplitude-modulation characteristics. Listeners with normal hearing performed about 2 dB better in MTB than in SSN, and about 10 dB better overall than the listeners with hearing loss; the listeners with hearing loss performed about 0.5 dB better in MTB, with 56% of them performing better in MTB and 40% better in SSN. The slopes of the psychometric functions for the normal-hearing listeners (8-9%/dB) were steeper than those for the listeners with hearing loss (5-6%/dB). The data indicate that the WIN has good criterion validity.
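The slopes quoted above (8-9%/dB versus 5-6%/dB) are slopes of fitted psychometric functions. Here is a hedged sketch of how such a slope at the 50% point can be estimated from percent-correct data with a logistic fit; the data points and starting values are hypothetical, not taken from the study.

```python
# Fitting a logistic psychometric function to percent-correct data; the
# data points and starting values here are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt, slope50):
    # slope50 is the slope at the 50% point, in proportion correct per dB
    return 1.0 / (1.0 + np.exp(-4.0 * slope50 * (snr - srt)))

snrs = np.array([24.0, 20.0, 16.0, 12.0, 8.0, 4.0, 0.0])
p_correct = np.array([0.98, 0.95, 0.85, 0.60, 0.30, 0.10, 0.02])

(srt, slope50), _ = curve_fit(logistic, snrs, p_correct, p0=(12.0, 0.05))
print(f"SRT = {srt:.1f} dB SNR, slope = {100 * slope50:.1f} %/dB at the 50% point")
```

The factor of 4 in the exponent makes slope50 the true derivative of the logistic at its midpoint, so the fitted value reads directly in %/dB after scaling by 100.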
27

Insights into the neural bases of speech perception in noise

Vander Ghinst, Marc 23 February 2021 (has links) (PDF)
To communicate effectively in a natural social environment, a human listener must be able to isolate the speech of an interlocutor from the other voices that make up the ambient noise. This situation, known as the "cocktail party effect," engages the brain in various auditory and attentional processes that allow it to analyze specifically the acoustic signal of the attended speaker. How the brain extracts the acoustic attributes of the voice of interest from the auditory scene as a whole nevertheless remains poorly understood. This gap is all the more important because some populations can show difficulty understanding speech in noise even though their peripheral auditory system shows no deficit (such as children or some young adults). The objective of this doctoral thesis was to identify the cortical mechanisms that support comprehension in noise and to evaluate whether they are deficient in two populations suffering from speech-in-noise comprehension difficulties without peripheral hearing impairment. To do so, we used magnetoencephalography to study the coupling between a listener's cortical activity and the different voices composing a cocktail-party auditory scene. These investigations were conducted in healthy adults (Study I), in children (Study II), and in young adults with isolated speech-in-noise comprehension difficulties (Study III). Our studies revealed that the auditory cortex selectively tracks the voice of interest rather than the global auditory scene. This "brain-speech" coupling occurs at frequencies corresponding to the rhythmic fluctuations of prosody (<1 Hz), words (1–4 Hz) and syllables (4–8 Hz), and decreases as the noise level increases. Moreover, brain-speech coupling at <1 Hz is lateralized to the left hemisphere in the presence of cocktail-party noise. Finally, reduced coupling at the syllabic rate is associated with speech-in-noise comprehension difficulties, both in children and in young adults. Our work thus demonstrated that this selective coupling to the voice of interest in cocktail-party situations is essential for comprehension in noisy environments. Reduced coupling at the syllabic level is associated with a speech-in-noise comprehension deficit, supporting the hypothesis of a central origin for speech-in-noise comprehension difficulties without peripheral hearing impairment. / Doctorat en Sciences médicales (Médecine) / info:eu-repo/semantics/nonPublished
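The central measure in this work is the coupling between cortical activity and the attended speech stream in the prosodic, word, and syllabic bands. Below is a schematic sketch of one way such band-wise speech-brain coupling can be quantified with magnitude-squared coherence; the real MEG pipeline (source reconstruction, artifact rejection, statistics) is far more involved, and the surrogate data here are purely illustrative.

```python
# Schematic speech-brain coherence in three frequency bands; surrogate data.
import numpy as np
from scipy.signal import coherence

def speech_brain_coupling(brain, envelope, fs, nperseg=4096):
    """Magnitude-squared coherence between a cortical signal and the speech
    envelope, averaged within the prosodic, word, and syllabic bands."""
    f, coh = coherence(brain, envelope, fs=fs, nperseg=nperseg)
    bands = {"prosody (<1 Hz)": (0.1, 1.0),
             "words (1-4 Hz)": (1.0, 4.0),
             "syllables (4-8 Hz)": (4.0, 8.0)}
    return {name: coh[(f >= lo) & (f < hi)].mean() for name, (lo, hi) in bands.items()}

# Surrogate signals: a "cortical" trace that partly tracks the envelope
fs = 1000
rng = np.random.default_rng(0)
env = rng.normal(size=60 * fs)                 # stand-in for a speech envelope
brain = 0.3 * env + rng.normal(size=60 * fs)   # tracking plus unrelated activity
print(speech_brain_coupling(brain, env, fs))
```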
28

Is there a correlation between the ability to recognise speech-in-noise and sensory memory?

Svedberg, Stella January 2023 (has links)
Recently, research has begun to pay more attention to the cognitive functions associated with auditory perception. In this study, two tests were administered to investigate the correlation between speech-in-noise recognition and sensory memory performance, and to investigate whether performance improved over the course of the sensory memory test. Speech-in-noise recognition was measured with the Hagerman test; sensory memory was measured with a random noise test requiring the detection of deviant noises. In total, 16 participants took part in the study (mean age = 24.81, SD = 3.14); half of the group began with the Hagerman test and the other half with the random noise test. Two statistical analyses were performed on the data. A Pearson correlation examined the relation between performance on the Hagerman test and the random noise test, yielding r = -0.184, p = 0.496: a slight negative correlation that did not reach significance. The second analysis was a dependent t-test examining whether performance improved across the four blocks of the random noise test, yielding t = 1.027, df = 28.943, p = 0.313. This result was also non-significant, although the block graph might suggest a tendency toward improvement. Future studies should perhaps use an easier version of the random noise test, as many participants scored barely above, or even below, chance. Future studies should also recruit more participants to increase statistical power.
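For reference, the two analyses described (a Pearson correlation between the test scores and a dependent t-test across blocks) can be reproduced in a few lines; the data below are synthetic stand-ins on hypothetical scales, not the study's data.

```python
# Synthetic stand-ins for the study's measures; scales are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 16  # the study's sample size

hagerman_srt = rng.normal(-6.0, 1.5, n)     # speech-in-noise SRTs (hypothetical)
noise_test_acc = rng.uniform(0.4, 0.8, n)   # random-noise-test accuracy (hypothetical)

# Correlation between speech-in-noise recognition and sensory memory
r, p = stats.pearsonr(hagerman_srt, noise_test_acc)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")

# Improvement across blocks: paired t-test on first vs. last block accuracy
block1 = rng.uniform(0.4, 0.8, n)
block4 = block1 + rng.normal(0.02, 0.05, n)
t, p = stats.ttest_rel(block4, block1)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```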
29

Towards Development of Intelligibility Assessment for Dysphonic Speech

Ishikawa, Keiko 16 June 2017 (has links)
No description available.
30

SPEECH IN NOISE: EFFECTS OF NOISE ON SPEECH PERCEPTION AND SPOKEN WORD COMPREHENSION

Eranović, Jovan January 2022 (has links)
The study investigated the effects of noise, one of the major environmental stressors, on speech perception and spoken word comprehension. Three tasks were employed (listening span, listening comprehension, and shadowing) to determine the extent to which different types of background noise affected speech perception and encoding into verbal working memory, as well as spoken word comprehension. Six types of maskers were used: (1) a single-talker babble masker in English, (2) a single-talker babble masker in Mandarin, (3) a multi-talker babble masker in Greek, (4) construction-site noise, (5) a narrow-band speech signal emulating a telephone effect, and (6) a reverberated speech signal. These can be categorized as energetic (2, 3, and 4), informational (1), and signal-degradation (5 and 6) maskers. The study found that general speech perception and specific word comprehension are not equally affected by the different noise maskers, provided that shadowing is considered primarily a task relying on speech perception, while the other two tasks are considered to rely on working memory, word comprehension and semantic inference. The results indicate that informational masking is most detrimental to speech perception, while energetic masking and signal degradation are most detrimental to spoken word comprehension. The results also imply that masking categories must be used with caution, since not all maskers belonging to one category had the same effect on performance. Finally, introducing a noise component to any memory task, particularly to speech perception and spoken word recognition tasks, adds another cognitively stimulating real-life dimension to them. This could benefit students training to become interpreters by helping them become accustomed to working in a noisy environment, an inevitable part of that profession. A final study explored the effects of noise on automatic speech recognition. The same types of noise as in the human studies were tested on two automatic speech recognition programs: Otter and Ava. This technology was originally developed as an aid for the deaf and hard of hearing, but its application has since been extended to a broad range of fields, including education, healthcare and finance. The analysis of the transcripts created by the two programs found speech-to-text technology to be fairly resilient to degradation of the speech signal, while mechanical background noise still presented a serious challenge. / Dissertation / Doctor of Philosophy (PhD) / The study investigated the effects of noise, one of the major environmental stressors, on speech perception and spoken word comprehension. Across three different tasks (a listening span task, in which participants were asked to recall a certain number of items from a list; a listening comprehension task, in which listeners needed to demonstrate understanding of the incoming speech; and shadowing, in which listeners were required to listen to and simultaneously repeat aloud the incoming speech), various types of background noise were presented in order to find out which ones caused more disruption to the two cognitive processes.
The study found that general speech perception and specific word comprehension are not equally affected by the different noise maskers, provided that shadowing is considered primarily a task relying on speech perception, while the other two tasks are considered to rely on working memory, word comprehension and semantic inference, that is, the way the listener combines and synthesizes information from different parts of a text (or speech) to establish its meaning. The results indicate that understandable background speech is most detrimental to speech perception, while any type of noise, if loud enough, as well as a degraded target speech signal, is most detrimental to spoken word comprehension. Finally, introducing a noise component to these tasks adds another cognitively stimulating real-life dimension, which could benefit students of interpreting by getting them accustomed to working in a noisy environment, an inevitable part of that profession. Another field of application is the optimization of speech recognition software. In the last study, the same types of noise as used in the first studies were tested on two automatic speech recognition programs. This technology was originally developed as an aid for the deaf and hard of hearing, but its application has since been extended to a broad range of fields, including education, healthcare and finance. The analysis of the transcripts created by the two programs found speech-to-text technology to be fairly resilient to a degraded speech signal, while mechanical background noise still presented a serious challenge to this technology.
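The abstract does not state how the Otter and Ava transcripts were scored; a standard metric for this kind of comparison is the word error rate, sketched below as one plausible approach rather than the dissertation's actual method.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over words: a standard way to
    score ASR transcripts against the spoken text (the dissertation's exact
    scoring method is not stated in the abstract)."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j]: edit distance between the first i ref words and first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

# e.g. scoring a transcript produced under construction-site noise
print(word_error_rate("the quick brown fox", "the brown fax"))  # -> 0.5
```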
