31.
Gap Discrimination and Speech Perception in Noise. Fagelson, Marc A. 03 November 1999
The relation between discrimination of silent gaps and speech‐in‐noise perception was measured in 20 normal‐hearing listeners using speech‐shaped noise as both the gap markers and the noise source for speech testing. In the gap discrimination experiment, subjects compared silent gaps marked by 60 dB SPL 250‐ms noise bursts to standards of either 5, 10, 20, 50, 100, or 200 ms. The gap results were most similar to those reported by Abel [S. M. Abel, J. Acoust. Soc. Am. 52, 519–524 (1972)] as ΔT/T decreased non‐monotonically with increased gap length. In a second experiment, the California Consonant Test (CCT) was administered at 50 dB HL via CD in three conditions: quiet, +10 S/N, and 0 S/N. Results from both experiments were correlated and the association between ΔT/T and CCT scores was generally negative. Listeners who discriminated the gaps with greater acuity typically had higher speech scores. The relation was strongest for the smaller gap standards at each S/N, or when performance for any gap duration was compared to the CCT results obtained in quiet.
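The ΔT/T metric above is a Weber fraction: the just-discriminable increment in gap duration divided by the standard gap duration. A minimal sketch of the computation (the threshold values below are hypothetical, not Fagelson's data):

```python
def weber_fraction(delta_t_ms: float, standard_ms: float) -> float:
    """Weber fraction for gap discrimination: the just-noticeable
    increment (delta-T) relative to the standard gap duration (T)."""
    return delta_t_ms / standard_ms

# Hypothetical thresholds: a 2 ms increment on a 5 ms standard gives a
# far larger Weber fraction than a 10 ms increment on a 200 ms standard,
# consistent with delta-T/T falling as the gap lengthens.
short_gap = weber_fraction(2.0, 5.0)    # 0.4
long_gap = weber_fraction(10.0, 200.0)  # 0.05
```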
32.
The Effect of Linked Bilateral Noise Reduction Processing on Speech in Noise Performance. Meija, J., Keidser, G., Dillon, Harvey, Nguyen, Cong-Van, Johnson, Earl E. 01 August 2011
No description available.
33.
Perception of prosody by cochlear implant recipients. Van Zyl, Marianne. January 2014
Recipients of present-day cochlear implants (CIs) display remarkable success with speech recognition in quiet, but not with speech recognition in noise. Normal-hearing (NH) listeners, in contrast, perform relatively well with speech recognition in noise. Understanding which speech features support successful perception in noise in NH listeners could provide insight into the difficulty that CI listeners experience in background noise. One set of speech features that has not been thoroughly investigated with regard to its noise immunity is prosody. Existing reports show that CI users have difficulty with prosody perception. The present study endeavoured to determine if prosody is particularly noise-immune in NH listeners and whether the difficulty that CI users experience in noise can be partly explained by poor prosody perception. This was done through the use of three listening experiments.
The first listening experiment examined the noise immunity of prosody in NH listeners by comparing perception of a prosodic pattern to word recognition in speech-weighted noise (SWN). Prosody perception was tested in a two-alternatives forced-choice (2AFC) test paradigm using sentences conveying either conditional or unconditional permission, agreement or approval. Word recognition was measured in an open set test paradigm using meaningful sentences. Results indicated that the deterioration slope of prosody recognition (corrected for guessing) was significantly shallower than that of word recognition. At the lowest signal-to-noise ratio (SNR) tested, prosody recognition was significantly better than word recognition.
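The correction for guessing mentioned above is conventionally the chance-level rescaling p_c = (p - g)/(1 - g), which maps chance performance to 0 and perfect performance to 1. A sketch, assuming that standard formula is the one applied:

```python
def correct_for_guessing(p_observed: float, chance: float) -> float:
    """Rescale proportion correct so that chance performance maps to 0
    and perfect performance maps to 1: (p - g) / (1 - g)."""
    return (p_observed - chance) / (1.0 - chance)

# In a 2AFC task chance is 0.5, so 75% observed correct corresponds to
# a guessing-corrected score of 50%.
corrected = correct_for_guessing(0.75, chance=0.5)  # 0.5
```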
The second experiment compared recognition of prosody and phonemes in SWN by testing perception of both in a 2AFC test paradigm. NH and CI listeners were tested using single words as stimuli. Two prosody recognition tasks were used; the first task required discrimination between questions and statements, while the second task required discrimination between a certain and a hesitant attitude. Phoneme recognition was measured with three vowel pairs selected according to specific acoustic cues. Contrary to the first experiment, the results of this experiment indicated that vowel recognition was significantly better than prosody recognition in noise in both listener groups.
The difference between the results of the first and second experiments was thought to have been due to either the test paradigm difference in the first experiment (closed set versus open set), or a difference in stimuli between the experiments (single words versus sentences). The third experiment tested emotional prosody and phoneme perception of NH and CI listeners in SWN using sentence stimuli and a 4AFC test paradigm for both tasks. In NH listeners, deterioration slopes of prosody and phonemes (vowels and consonants) did not differ significantly, and at the lowest SNR tested there was no significant difference in recognition of the different types of speech material. In the CI group, prosody and vowel perception deteriorated with a similar slope, while consonant recognition showed a steeper slope than prosody recognition. It is concluded that while prosody might support speech recognition in noise in NH listeners, explicit recognition of prosodic patterns is not particularly noise-immune and does not account for the difficulty that CI users experience in noise.
Thesis (PhD), University of Pretoria, Department of Electrical, Electronic and Computer Engineering, 2014.
34.
The association between cognition and speech-in-noise perception: Investigating the link between speech-in-noise perception and fluid intelligence in people with and without hearing loss. Dahlgren, Simon. January 2020
The link between speech-in-noise recognition and cognition has been researched extensively over the years, and the purpose of this thesis was to add to this field. Data from a sample of 394 participants in the n200 database (Rönnberg et al., 2016) were used to calculate the correlation between performance on a speech-in-noise test and scores on a test of fluid intelligence. The speech-in-noise test consisted of matrix sentences presented in 4-talker babble, and fluid intelligence was represented by the score on Raven's Progressive Matrices. Around half of the participants (n = 199) had documented hearing loss and were hearing aid users, while the rest had normal hearing. The overall correlation between speech-in-noise recognition and fluid intelligence was -.317; because a lower speech-in-noise threshold indicates better performance, the negative sign means that better speech-in-noise recognition was associated with a higher Raven's score. Within the two groups, the correlation was -.338 for the group without hearing loss and -.303 for the group with hearing loss. The results indicate a weak to moderate correlation between speech-in-noise recognition and fluid intelligence, and they support the theory that cognitive processing is related to speech perception in all people, regardless of hearing status.
35.
The Effects of Energetic and Informational Masking on the Words-in-Noise Test (WIN). Wilson, Richard H., Trivette, Cristine P., Williams, Daniel A., Watts, Kelly L. 01 July 2012
Background: In certain masking paradigms, the masker can have two components, energetic and informational. Energetic masking is the traditional peripheral masking, whereas informational masking involves confusions (uncertainty) between the signal and masker that originate more centrally in the auditory system. Sperry et al (1997) used Northwestern University Auditory Test No. 6 (NU-6) words in multitalker babble to study the differential effects of energetic and informational masking using babble played temporally forward (FB) and backward (BB). The FB and BB are the same except BB is void of the contextual and semantic content cues that are available in FB. It is these informational cues that are thought to fuel informational masking. Sperry et al found 15% better recognition performance (∼3 dB) on BB than on FB, which can be interpreted as the presence of informational masking in the FB condition and not in the BB condition (Dirks and Bower, 1969). The Words-in-Noise Test (WIN) (Wilson, 2003; Wilson and McArdle, 2007) uses NU-6 words as the signal and multitalker babble as the masker, which is a combination of stimuli that potentially could produce informational masking. The WIN presents 5 or 10 words at each of seven signal-to-noise ratios (S/N, SNR) from 24 to 0 dB in 4 dB decrements with the 50% correct point being the metric of interest. The same recordings of the NU-6 words and multitalker babble used by Sperry et al are used in the WIN. Purpose: To determine whether informational masking was involved with the WIN. Research Design: Descriptive, quasi-experimental designs were conducted in three experiments using FB and BB in various paradigms in which FB and BB varied from 4.3 sec concatenated segments to essentially continuous. Study Sample: Eighty young adults with normal hearing and 64 older adults with sensorineural hearing losses participated in a series of three experiments. 
Data Collection and Analysis: Experiment 1 compared performance on the normal WIN (FB) with performance on a WIN in which the babble segment accompanying each word was reversed temporally (BB). Experiment 2 examined the effects of continuous FB and BB segments on WIN performance. Experiment 3 replicated the Sperry et al (1997) experiment at 4 and 0 dB S/N using NU-6 words in the FB and BB conditions. Results: Experiment 1: with the WIN paradigm, recognition performances on FB and BB were the same for listeners with normal hearing and listeners with hearing loss, except at 0 dB S/N for the listeners with normal hearing, at which performance was significantly better on BB than FB. Experiment 2: recognition performances on FB and BB were the same at all SNRs for listeners with normal hearing using a slightly modified WIN paradigm. Experiment 3: there was no difference in performance between the FB and BB conditions at either of the two SNRs. Conclusions: Informational masking was not involved in the WIN paradigm. The Sperry et al results were not replicated, which is thought to be related to the way in which the Sperry et al BB condition was produced.
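One common way to turn the word counts from a descending-SNR paradigm like the WIN into a 50% correct point is the Spearman-Kärber equation. A sketch assuming the 24-to-0 dB, 4 dB-step layout described above (the helper name and the example scores are invented for illustration):

```python
def win_50_percent_point(correct_per_snr, words_per_snr=5,
                         start_snr=24.0, step=4.0):
    """Spearman-Karber estimate of the 50% correct point for a
    descending-SNR paradigm: start + step/2 - step * (sum of the
    proportions correct at each tested SNR)."""
    total_proportion = sum(c / words_per_snr for c in correct_per_snr)
    return start_snr + step / 2.0 - step * total_proportion

# Hypothetical listener, 5 words per SNR at 24, 20, 16, 12, 8, 4, 0 dB:
# all correct down to 12 dB, none below, so the 50% point falls midway
# between 12 and 8 dB.
snr_50 = win_50_percent_point([5, 5, 5, 5, 0, 0, 0])  # 10.0 dB S/N
```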
36.
Speech Signals Used to Evaluate Functional Status of the Auditory System. Wilson, Richard H., McArdle, Rachel. 01 July 2005
This review presents a brief history of the evolution of speech audiometry from the 1800s to present day. The two-component aspect of hearing loss (audibility and distortion), which was formalized into a framework in past literature, is presented in the context of speech recognition. The differences between speech recognition in quiet and in background noise are discussed as they relate to listeners with normal hearing and listeners with hearing loss. A discussion of the use of sentence materials versus word materials for clinical use is included as is a discussion of the effects of presentation level on recognition performance in quiet and noise. Finally, the effects of age and hearing loss on speech recognition are considered.
37.
Use of 35 Words for Evaluation of Hearing Loss in Signal-to-Babble Ratio: A Clinic Protocol. Wilson, Richard H., Burks, Christopher A. 01 November 2005
Data from earlier studies that presented 70 words at 24 to 0 dB signal-to-babble (S/B) ratios indicated that most young listeners with normal hearing required 0 to 6 dB S/B ratios to attain 50% correct word recognition. Older listeners with hearing loss often required a >12 dB S/B ratio to attain 50% correct word recognition. In our study, we converted the Words in Noise test from one 70-word list into two 35-word lists for quicker administration by clinicians. Using baseline data from previous studies, we used two strategies to randomize the 35-word lists: based on recognition performance at each S/B ratio and based on recognition performance only. With the first randomization strategy, the 50% correct word-recognition points on the two lists differed by 0.5 dB for 72 listeners with hearing loss. With the second randomization strategy, 48 listeners with hearing loss performed identically on the two lists.
38.
Developing a digits in noise screening test with higher sensitivity to high-frequency hearing loss. Motlagh Zadeh, Lina. 02 August 2019
No description available.
39.
Examining the Interaction Effects of Fluid Intelligence, Visual Cue Reliance, and Hearing Aid Usage on Speech-in-Noise Recognition Abilities: An investigative study in hearing aid users. Ghebregziabiher, Tnbit Isayas. January 2023
Research within cognitive hearing science has over the years examined the relationship between speech recognition and cognitive functioning. The purpose of this study was to examine the effects of hearing aid experience, fluid intelligence, and visual cues on speech-in-noise recognition. Data from the n200 database by Rönnberg et al. (2016) were analyzed to address two research questions: (1) whether the number of years of hearing aid use influences reliance on visual cues in speech-in-noise recognition, and (2) how the relationship between fluid intelligence and reliance on visual cues changes with hearing aid use experience. Data from 214 participants with hearing impairment were analyzed using linear mixed effects models. No statistically significant interactions were observed for either research question. However, the results indicated that greater hearing aid experience, as well as the presence of visual cues, was associated with better speech-in-noise recognition.
40.
Associations Between Speech Understanding and Auditory and Visual Tests of Verbal Working Memory: Effects of Linguistic Complexity, Task, Age, and Hearing Loss. Smith, Sherri L., Pichora-Fuller, M. K. 01 January 2015
Listeners with hearing loss commonly report difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners' auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure) and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference, and a wider range in performance, on LWMS than on RWMS. There was a significant correlation between the two working memory measures only for the oldest listeners with hearing loss. Notably, there were only a few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding.