51

Cognitive Abilities and their Influence on Speech-In-Noise Information Processing: a Study on Different Kinds of Speech Support and Their Relation to the Human Cognition

Sjöström, Elin January 2017
In this paper, top-down and bottom-up processing were studied with regard to their effect on speech-in-noise perception. Three cognitive functions were also studied (divided attention, executive functioning, and semantic comprehension), together with the effects they have on speech processing and on each other. The research questions were whether a difference in speech-in-noise perception can be observed for different levels of top-down and bottom-up support, whether speech-in-noise perception is related to any of the researched cognitive abilities, and whether these abilities correlate with each other. The method is a within-subject experimental design consisting of four tests: PASAT to measure attention, LIT to measure semantic comprehension, TMT to measure executive functioning, and SIN to measure speech-in-noise perception. The results showed a significant difference between top-down and bottom-up processing; a significant difference between top-down processing in decreasing and increasing conditions was also found, as was a negative correlation between the benefit of top-down support and the semantic comprehension task. Among the cognitive abilities, a few correlations emerged: the semantic comprehension task correlated positively with both the central executive task and the attentional task, the attentional task correlated negatively with the central executive task, and the two central executive subtasks correlated positively with each other. Most of the findings were expected, building on earlier cognitive hearing theories and studies.
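The correlational part of the analysis described above can be illustrated with a minimal Pearson-correlation sketch. The scores below are fabricated for illustration only; the real PASAT, LIT, TMT, and SIN data are not given in the abstract.

```python
# Toy sketch of correlating scores across cognitive tests.
# All numbers are fabricated; only the analysis pattern is illustrated.
import numpy as np

scores = {
    "PASAT": np.array([42, 51, 38, 60, 47, 55, 49, 44]),  # attention
    "LIT":   np.array([18, 22, 15, 27, 20, 25, 21, 17]),  # semantic comprehension
    "TMT":   np.array([95, 80, 110, 65, 90, 70, 85, 100]),  # executive functioning
    "SIN":   np.array([-4.0, -5.5, -3.0, -7.0, -5.0, -6.0, -5.2, -4.1]),  # speech-in-noise
}

def pearson(x, y):
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal.
    return float(np.corrcoef(x, y)[0, 1])

# The fabricated PASAT and LIT scores were constructed to covary strongly.
r = pearson(scores["PASAT"], scores["LIT"])
print(round(r, 2))
```

In a real analysis each pairwise r would be tested for significance; here the point is only the shape of the computation.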
52

Identification of acoustic cues involved in degraded speech comprehension

Varnet, Léo 18 November 2015
There is today a broad consensus in the scientific community regarding the involvement of acoustic cues in speech perception. Up to now, however, the precise mechanisms underlying the transformation of a continuous acoustic stream into discrete linguistic units remain largely undetermined. This is partly due to the lack of an effective method for identifying and characterizing the auditory primitives of speech. Since the earliest studies on the acoustic-phonetic interface by the Haskins Laboratories in the 1950s, a number of approaches have been proposed; they are nevertheless inherently limited by the non-naturalness of the stimuli used, the constraints of the experimental apparatus, and the a priori knowledge needed. The present thesis aimed at introducing a new method that capitalizes on the speech-in-noise situation to reveal the acoustic cues used by listeners. As a first step, we adapted the Classification Image technique, developed in the visual domain, to a phoneme categorization task in noise. The technique relies on a Generalized Linear Model to link each participant's response to the specific configuration of noise on a trial-by-trial basis, thereby estimating the perceptual weight of the different time-frequency regions in the decision. We illustrated the effectiveness of our Auditory Classification Image method through two examples: an /aba/-/ada/ categorization and a /da/-/ga/ categorization in the contexts /al/ and /aʁ/. Our analysis confirmed that the F2 and F3 onsets were crucial for the tasks, as suggested by previous studies, but also revealed unexpected cues. As a second step, we relied on this new method to compare the results of expert musicians (N=19) and dyslexic participants (N=18) with those of controls, which enabled us to explore the specificities of each group's listening strategies. Taken together, the results suggest that the Auditory Classification Image method may be a more precise and more straightforward approach to investigating the mechanisms at work at the acoustic-phonetic interface.
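The Generalized Linear Model at the heart of the classification-image approach can be sketched in a few lines: regress the listener's binary responses on the per-trial noise configuration, and read the fitted weights as a map of decision-relevant time-frequency regions. The sketch below uses simulated data; the observer model, the number of time-frequency bins, and the fitting parameters are all assumptions for illustration, not details from the thesis.

```python
# Hedged sketch of a classification image via logistic regression (a GLM).
# A simulated listener weighs only a few time-frequency (T-F) noise bins;
# the fit should recover those bins from trial-by-trial responses.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_tf_bins = 2000, 20          # trials x T-F bins (illustrative sizes)

# Hidden perceptual template: only bins 3, 4, and 12 drive decisions.
true_w = np.zeros(n_tf_bins)
true_w[[3, 4, 12]] = [1.5, 1.0, -1.2]

noise = rng.standard_normal((n_trials, n_tf_bins))   # per-trial noise field
logit = noise @ true_w
resp = (rng.random(n_trials) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit logistic-regression weights by gradient ascent on the log-likelihood.
w = np.zeros(n_tf_bins)
for _ in range(500):
    p = 1 / (1 + np.exp(-(noise @ w)))
    w += 0.01 * noise.T @ (resp - p) / n_trials

# The fitted weights are the (unsmoothed) classification image: large |w|
# marks T-F regions that drove the simulated listener's decisions.
top_bins = np.argsort(-np.abs(w))[:3]
print(sorted(top_bins.tolist()))  # → [3, 4, 12]
```

Real applications add regularization or smoothness priors over the time-frequency plane, since genuine perceptual data are far noisier than this toy observer.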
53

Acclimatization to hearing aids by older adults with hearing loss

Wright, Dominique 08 1900
Hearing aids (HAs) are the primary rehabilitation intervention recommended for older adults with hearing loss, as they provide a wide range of benefits. However, a large proportion of individuals who own HAs do not use or underuse them. The most recurrent reason reported by non-users is persistent difficulty understanding conversations in noisy environments, even when wearing HAs. It is unclear whether these individuals tried to use their HAs for an extended period of time before abandoning them. If they gave up soon after being fitted, they may not have benefited from the adaptation to the new auditory stimulation referred to as auditory acclimatization. The main objective of this thesis is to evaluate the contribution of HA experience to auditory acclimatization. The first study aimed to determine, by means of a systematic review, whether an acclimatization effect occurs after HA use and, if so, to establish the magnitude and time course of this effect. Fourteen articles that assessed acclimatization through behavioural, self-reported, and electrophysiological outcomes met the inclusion and exclusion criteria. Although their general scientific quality was low or very low, the results of the systematic review support the existence of an acclimatization effect, as documented by all three types of outcome measures. For speech-recognition-in-noise performance, the improvement ranged from 2 to 3 dB in signal-to-noise ratio (SNR) over a minimum period of one month. This study highlights the importance of using HAs regularly after fitting in order to optimize the benefits they can provide. The goal of the second study was to conduct a longitudinal investigation to determine whether acclimatization to HAs by older adults can be assessed with a speech-recognition-in-noise task and with measures of listening effort. Thirty-two new HA users and 15 experienced HA users were tested over a 38-week period using a dual-task paradigm. For new HA users, the results showed a significant improvement of 2 dB SNR on a speech-recognition-in-noise task after four weeks of HA use, and no reduction in listening effort as measured by the proportional dual-task cost and by response times on the secondary task. No improvement in speech-recognition-in-noise performance was observed for the experienced HA users. Overall, the findings of this thesis support the presence of an acclimatization effect, as measured by behavioural, self-reported, and electrophysiological measures, following regular HA use. Specifically, new HA users showed a clinically significant improvement of 2 to 3 dB SNR on speech-recognition-in-noise tasks following their initial fitting. Therefore, new HA users should be informed of this possible improvement over time, as it could encourage them to persist with their HAs for longer before deciding whether to abandon them.
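The proportional dual-task cost used here as a listening-effort index can be illustrated with a small sketch. The formula below is the standard (single − dual)/single form commonly used in dual-task studies; the function name and the toy scores are assumptions for illustration, not values from the thesis.

```python
# Illustrative computation of the proportional dual-task cost (pDTC),
# a listening-effort index: the percent decline in secondary-task
# performance when it is performed concurrently with a primary task.
# The toy accuracies below are fabricated.

def proportional_dual_task_cost(single_task, dual_task):
    """Percent decline in secondary-task performance under dual-task load.

    Positive values mean the secondary task suffered when performed
    together with the primary (speech-in-noise) task, i.e. more effort.
    """
    return 100.0 * (single_task - dual_task) / single_task

# Secondary-task accuracy alone vs. during concurrent speech recognition.
cost = proportional_dual_task_cost(single_task=0.95, dual_task=0.76)
print(round(cost, 1))  # → 20.0
```

Tracking this cost (together with secondary-task response times) across sessions is one way a longitudinal design can separate genuine acclimatization from simple practice effects.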
54

The Importance of Glimpsed Audibility for Speech-In-Speech Recognition

Wasiuk, Peter Anthony 23 May 2022 (has links)
No description available.
55

Altered processing of communication signals in the subcortical auditory sensory pathway in autism

Schelinski, Stefanie, Tabas, Alejandro, Kriegstein, Katharina von 04 June 2024
Autism spectrum disorder (ASD) is characterised by social communication difficulties. These difficulties have mainly been explained by cognitive, motivational, and emotional alterations in ASD. The communication difficulties could, however, also be associated with altered sensory processing of communication signals. Here, we assessed the functional integrity of auditory sensory pathway nuclei in ASD in three independent functional magnetic resonance imaging experiments. We focused on two aspects of auditory communication that are impaired in ASD: voice identity perception and recognising speech in noise. We found reduced processing in adults with ASD as compared to typically developed control groups (pairwise matched on sex, age, and full-scale IQ) in the central midbrain structure of the auditory pathway, the inferior colliculus (IC). The right IC responded less in the ASD group than in the control group for voice identity, in contrast to speech recognition. The right IC also responded less in the ASD group than in the control group when passively listening to vocal as compared to non-vocal sounds. Within the control group, the left and right IC responded more when recognising speech in noise than when recognising speech without additional noise; in the ASD group, this was the case only in the left, but not the right, IC. The results show that communication signal processing in ASD is associated with reduced subcortical sensory functioning in the midbrain. They highlight the importance of considering sensory processing alterations when explaining communication difficulties, which are at the core of ASD.
56

Neurophysiological Mechanisms of Speech Intelligibility under Masking and Distortion

Vibha Viswanathan 29 July 2021
Difficulty understanding speech in background noise is the most common hearing complaint. Elucidating the neurophysiological mechanisms underlying speech intelligibility in everyday environments with multiple sound sources and distortions is hence important for any technology that aims to improve real-world listening. Using a combination of behavioral, electroencephalography (EEG), and computational modeling experiments, this dissertation provides insight into how the brain analyzes such complex scenes, and what roles different acoustic cues play in facilitating this process and in conveying phonetic content. Experiment #1 showed that brain oscillations selectively track the temporal envelopes (i.e., modulations) of attended speech in a mixture of competing talkers, and that the strength and pattern of this attention effect differs between individuals. Experiment #2 showed that the fidelity of neural tracking of attended-speech envelopes is strongly shaped by the modulations in interfering sounds as well as the temporal fine structure (TFS) conveyed by the cochlea, and predicts speech intelligibility in diverse listening environments. Results from Experiments #1 and #2 support the theory that temporal coherence of sound elements across envelopes and/or TFS shapes scene analysis and speech intelligibility. Experiment #3 tested this theory further by measuring and computationally modeling consonant categorization behavior in a range of background noises and distortions. We found that a physiologically plausible model that incorporated temporal-coherence effects predicted consonant confusions better than conventional speech-intelligibility models, providing independent evidence that temporal coherence influences scene analysis. Finally, results from Experiment #3 also showed that TFS is used to extract speech content (voicing) for consonant categorization even when intact envelope cues are available. Together, the novel insights provided by our results can guide future models of speech intelligibility and scene analysis, clinical diagnostics, improved assistive listening devices, and other audio technologies.
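The temporal envelope that the EEG experiments describe the brain as tracking can be illustrated with a minimal sketch: rectify a modulated carrier and low-pass it to recover the slow modulation. The rectify-and-smooth scheme is a common stand-in for the Hilbert envelope; the synthetic signal and all parameters below are assumptions for illustration, not the dissertation's stimuli.

```python
# Minimal sketch of temporal-envelope extraction from a speech-like signal:
# a fast carrier amplitude-modulated at a syllabic rate, then
# rectified and smoothed to recover the modulation. Synthetic toy data.
import numpy as np

fs = 1000                                  # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 200 * t)      # fast "fine structure" carrier
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))  # 4 Hz syllabic rhythm
speechlike = modulator * carrier

# Rectify, then low-pass with a moving average to recover the envelope.
rectified = np.abs(speechlike)
win = np.ones(50) / 50                     # 50 ms smoothing window
envelope = np.convolve(rectified, win, mode="same")

# The recovered envelope should correlate strongly with the true modulator;
# envelope-tracking analyses correlate EEG with this kind of signal.
r = float(np.corrcoef(envelope, modulator)[0, 1])
print(r > 0.95)
```

Neural-tracking analyses of the kind the abstract describes then relate a band-limited EEG signal to such an envelope, for example via cross-correlation or a temporal response function.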
57

Improving Speech Intelligibility Without Sacrificing Environmental Sound Recognition

Johnson, Eric Martin 27 September 2022
No description available.
