1. An examination of the effect of talker familiarity on the sentence recognition skills of cochlear implant users. Barker, Brittan Ann. 01 January 2006.
Three experiments examined normal-hearing and cochlear-implant listeners' abilities to perceive and use talker-specific information in the speech signal. In Experiment 1, voice similarity judgments were gathered from normal-hearing listeners in order to maximize variability across the talkers used in Experiment 2. These judgments were submitted to a multidimensional scaling (MDS) analysis, and the resulting solution was used to select the talkers for Experiment 2.
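For illustration, here is a minimal Python sketch of the kind of analysis described above: metric MDS applied to a precomputed talker dissimilarity matrix, using scikit-learn. The matrix, talker count, and all values are hypothetical stand-ins, not the dissertation's actual data.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity matrix for 6 candidate talkers, derived as
# 1 - mean pairwise similarity rating (ratings rescaled to [0, 1]).
rng = np.random.default_rng(0)
sim = rng.uniform(0.2, 0.9, size=(6, 6))
sim = (sim + sim.T) / 2        # averaged ratings are symmetric
np.fill_diagonal(sim, 1.0)     # a voice is maximally similar to itself
dissim = 1.0 - sim

# Metric MDS on the precomputed dissimilarities yields a 2-D "voice map".
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

# Talkers far apart in the map sound distinct; choosing mutually distant
# points maximizes talker variability for the training experiment.
print(coords)
```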
Experiment 2 was an approximate replication of Nygaard and Pisoni's (1998) work. In this study, cochlear-implant and normal-hearing listeners were trained to recognize 6 different voices. The cochlear-implant users recognized the voices with 59.31% accuracy, and the normal-hearing listeners achieved 92.64% accuracy. After training, the listeners completed a sentence recognition task in noise, in which 6 familiar talkers spoke half of the sentences and 6 novel talkers spoke the other half. It was predicted that sentences spoken by the familiar talkers would be perceived more accurately than those spoken by the novel talkers. However, there was no difference in accuracy, nor was there a difference in performance across the two groups of listeners. The factors contributing to these null results are discussed at length.
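A null result like this one is typically established with a paired comparison of per-listener accuracy. The sketch below, with hypothetical accuracy values, shows one way such a comparison might look in Python; the dissertation's actual analysis may well have differed (e.g., an ANOVA across listener groups).

```python
import numpy as np
from scipy import stats

# Hypothetical proportion-correct scores per listener for sentences
# spoken by familiar vs. novel talkers (values are illustrative only).
familiar = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.63, 0.52])
novel    = np.array([0.60, 0.57, 0.69, 0.50, 0.64, 0.58, 0.61, 0.53])

# Each listener hears both talker sets, so the comparison is paired.
t, p = stats.ttest_rel(familiar, novel)
print(f"t = {t:.2f}, p = {p:.3f}")  # p > .05 would mirror the null result
```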
Experiment 3 gathered voice similarity judgments from the normal-hearing and cochlear-implant listeners of Experiment 2. These data were submitted to both classical and weighted MDS analyses. The resulting voice maps showed notable differences in the perceptual spaces of the two groups of listeners. The participant space yielded by the weighted MDS showed great variation across all of the participants' judgments, but no clear pattern corresponding to the listeners' group membership.
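One common way to quantify differences between two groups' perceptual spaces is Procrustes superimposition of their MDS solutions. The sketch below, with hypothetical coordinates, illustrates that kind of comparison; it is not the analysis reported in the dissertation.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical 2-D MDS voice maps for the same 6 talkers, one per
# listener group (illustrative coordinates, not the reported solutions).
rng = np.random.default_rng(1)
map_nh = rng.normal(size=(6, 2))                       # normal-hearing map
map_ci = map_nh + rng.normal(scale=0.8, size=(6, 2))   # cochlear-implant map

# Procrustes analysis removes translation, scaling, and rotation; the
# residual disparity indexes how differently the two groups arrange the
# same voices in perceptual space (0 = identical configurations).
_, _, disparity = procrustes(map_nh, map_ci)
print(f"Procrustes disparity: {disparity:.3f}")
```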
In conclusion, despite listening via a constrained, electric signal, the cochlear-implant users were able to learn to recognize voices with notable accuracy (as were the normal-hearing listeners). Nevertheless, Experiment 2 failed to provide insight into the effect of talker familiarity on the sentence recognition skills of cochlear-implant and normal-hearing listeners. This outcome is contrary to research with normal-hearing listeners suggesting that talker familiarity facilitates speech processing in noise. The present studies did show, though, that cochlear-implant users appear to perceive and use talker-specific information differently than normal-hearing listeners.
2. Neural and behavioral interactions in the processing of speech and speaker information. Kreitewolf, Jens. 10 July 2015.
During natural conversation, we send rich acoustic signals that not only determine the content of conversation but also provide a wealth of information about the person speaking. Traditionally, the question of how we understand speech has been studied separately from the question of how we recognize the person speaking, either implicitly or explicitly assuming that speech and speaker recognition are two independent processes. Recent studies, however, suggest integration in the processing of speech and speaker information. In this thesis, I provide further empirical evidence that processes involved in the analysis of speech and speaker information interact on the neural and behavioral levels. In Study 1, I present data from an experiment which used functional magnetic resonance imaging (fMRI) to investigate the neural basis of speech recognition under varying speaker conditions. The results of this study suggest a neural mechanism that exploits functional interactions between speech- and speaker-sensitive areas in the left and right hemispheres to allow for robust speech recognition in the context of speaker variations. This mechanism assumes that speech recognition, including the recognition of linguistic prosody, predominantly involves areas in the left hemisphere. In Study 2, I present two fMRI experiments that investigated the hemispheric lateralization of linguistic prosody recognition in comparison to the recognition of the speech message and speaker identity, respectively. The results showed a clear left-lateralization when recognition of linguistic prosody was compared to speaker recognition. Study 3 investigated under which conditions listeners benefit from prior exposure to a speaker's voice in speech recognition. The results suggest that listeners implicitly learn acoustic speaker information during a speech task and use such information to improve comprehension of speech in noise.
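Speech-in-noise tasks like the one in Study 3 require mixing sentences with a masker at a controlled signal-to-noise ratio. Below is a minimal sketch of that mixing step, assuming mono float waveforms at a shared sampling rate; the function name and synthetic signals are illustrative, not taken from the study.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so the speech-to-noise ratio equals snr_db, then mix."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)            # mean power of the speech
    p_noise = np.mean(noise ** 2)              # mean power of the masker
    target = p_speech / (10 ** (snr_db / 10))  # required noise power
    return speech + noise * np.sqrt(target / p_noise)

# Illustrative usage with synthetic stand-ins for a recorded sentence
# and a speech-shaped noise masker.
rng = np.random.default_rng(2)
speech = rng.normal(scale=0.1, size=16000)
noise = rng.normal(scale=0.1, size=16000)
mixed = mix_at_snr(speech, noise, snr_db=0.0)  # 0 dB SNR condition
```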