1 |
The importance of consensus assessment in speech act comprehension / Yeung, Wai-lan, Victoria. January 1999 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2000. / Includes bibliographical references.
|
2 |
Acquisition and transfer of language function / Goodman, Julie Marianne. January 1990 (has links)
No description available.
|
3 |
Facilitating listening in second language classrooms through the manipulation of temporal variables / Higgins, Janet M. D. January 1995 (has links)
No description available.
|
4 |
Word learning in infancy / Schafer, Graham. January 1998 (has links)
No description available.
|
5 |
Multi-modal imaging of brain networks subserving speech comprehension / Halai, Ajay Devshi. January 2013 (has links)
Neurocognitive models of speech comprehension generally outline either the spatial or temporal organisation of speech processing and rarely consider combining the two to provide a more complete model. Simultaneous EEG-fMRI recordings have the potential to link these domains, owing to their complementary high spatial (fMRI) and temporal (EEG) sensitivities. Although the neural basis of speech comprehension has been investigated intensively during the past few decades, some important questions remain outstanding. For instance, there is considerable evidence from neuropsychology and other convergent sources that the anterior temporal lobe (ATL) should play an important role in accessing meaning. However, fMRI studies do not usually highlight this area, possibly because magnetic susceptibility artefacts cause severe signal loss within the ventral ATL (vATL). In this thesis, EEG and fMRI were used to refine the spatial and temporal components of neurocognitive models of speech comprehension, and to attempt to provide a combined spatial and temporal model. Chapter 2 describes an EEG study that was conducted while participants listened to intelligible and unintelligible single words. A two-pass processing framework best explained the results, which showed comprehension to proceed in a somewhat hierarchical manner; however, top-down processes were involved during the early stages. These early processes were found to originate from the mid-superior temporal gyrus (STG) and inferior frontal gyrus (IFG), while the late processes were found within ATL and IFG regions. Chapter 3 compared two novel fMRI methods known to overcome signal loss within the vATL: dual-echo and spin-echo fMRI. The results showed that dual-echo fMRI outperformed spin-echo fMRI in vATL regions, as well as in extratemporal regions. Chapter 4 harnessed the dual-echo method to investigate a speech comprehension task (sentences). Intelligibility-related activation was found in bilateral STG, left vATL, and left IFG. This is consistent with converging evidence implicating the vATL in semantic processing. Chapter 5 describes how simultaneous EEG-fMRI was used to investigate word comprehension. The results showed activity in the superior temporal sulcus (STS), vATL and IFG. The temporal profile showed that these nodes were most active around 400 ms (specifically the anterior STS and vATL), while the vATL was consistently active across the whole epoch. Overall, these studies suggest that models of speech comprehension need to be updated to include the vATL region as a means of accessing semantic meaning. Furthermore, the temporal evolution is best explained within a two-pass framework. The early top-down influence of vATL regions attempts to map speech-like sounds onto semantic representations. Successful mapping, and therefore comprehension, is achieved around 400 ms in the vATL and anterior STS.
|
6 |
Sound and Meaning Components during Speech Comprehension of Mandarin Compounds / Ji, Sunjing. January 2016 (has links)
Under the framework of the dual-route theory of speech comprehension, two neurological routes are simultaneously active during speech decoding: the dorsal stream and the ventral stream. The dorsal stream is argued to be a sound processor, whereas the ventral stream is a meaning processor; hence, in cognitive terms, they are called the sound component and the meaning component, respectively. Hypotheses concerning the processing speed and response accuracy of these two cognitive components were tested on compound words in Modern Mandarin Chinese. Four experiments were run contrasting a sound-based task and a meaning-based task, corresponding to the two cognitive components. In Experiments 1 and 2, the Task effect was tested on one set of words in which the word-level and word-initial-syllable frequencies were controlled. In Experiments 3 and 4, the Task effect was tested on a different set of words in which semantic transparency was controlled. Multiple regression analyses integrating the data collected in Experiments 1-4 were conducted to test which language theory was preferred: the probability-based theory, the rule-based theory, or the integrative theory. The probability-based theory suggests that speech comprehension of compound words relies only on the probability distribution of linguistic units. The rule-based theory suggests that it relies only on phrase-structural rules. The integrative theory suggests that it relies on both the probabilities of linguistic units and phrase-structural rules. The integrative theory was suggested to explain the data best, but further testing is needed to confirm this hypothesis. The results of the present study provide evidence for a functional trade-off between the sound and meaning components, for garden-path effects during the parsing of opaque words, and for a possible role of a mirror system in human speech comprehension.
|
7 |
Discourse Comprehension and Informational Masking: The Effect of Age, Semantic Content, and Acoustic Similarity / Lu, Zihui. 10 January 2014 (has links)
It is often difficult for people to understand speech when there are other ongoing conversations in the background. This dissertation investigates how different background maskers interfere with our ability to comprehend speech, and why older listeners have more difficulty than younger listeners in these tasks. An ecologically valid approach was applied: instead of words or short sentences, participants were presented with two fairly lengthy lectures simultaneously, and their task was to listen to the target lecture and ignore the competing one. Afterwards, they answered questions about the target lecture. Experiment 1 found that both normal-hearing and hearing-impaired older adults performed more poorly than younger adults when everyone was tested in identical listening situations. However, when the listening situation was individually adjusted to compensate for age-related differences in the ability to recognize individual words in noise, the age-related difference in comprehension disappeared. Experiment 2 compared the masking effects of a single-talker competing lecture with those of a babble of 12 voices, and the signal-to-noise ratio (SNR) was manipulated so that the masker was either similar in level to the target or much louder. The results showed that the competing speech was much more distracting than the babble. Moreover, increasing the masker level negatively affected speech comprehension only when the masker was babble; when it was a single-talker lecture, performance plateaued as the SNR decreased from -2 to -12 dB. Experiment 3 compared the effects of semantic content and acoustic similarity on speech comprehension by comparing a normal speech masker with a time-reversed one (to examine the effect of semantic content) and a normal speech masker with 8-band vocoded speech (to examine the effect of acoustic similarity). The results showed that both semantic content and acoustic similarity contributed to informational masking, but the latter seemed to play a bigger role than the former. Together, the results indicate that older adults' speech comprehension difficulties with maskers were mainly due to declines in their hearing capacities rather than in their cognitive functions. The acoustic similarity between the target and competing speech may be the main source of informational masking, with semantic interference playing a secondary role.
|
8 |
De la mesure de l'intelligibilité à l'évaluation de la compréhension de la parole pathologique en situation de communication / From intelligibility measures to the assessment of disordered speech comprehension in a communication task / Fontan, Lionel. 08 November 2012 (has links)
This research responds to a need expressed by physicians and speech-language pathologists working with patients suffering from pathological speech production disorders. Monitoring and managing these patients requires reliable and valid assessment methods for quantifying their verbal communication performance. Today, the methods generally used for this purpose are orthographic transcription tests (speech intelligibility tests). Although this use is widespread in clinical practice, few authors have studied the relationship between orthographic transcription scores and a patient's ability to be understood by others (Beukelman, 1979; Hustad, 2008). Our work directly builds on these studies. We developed a method for assessing speech comprehension by observing listeners' behavioural reactions to verbal utterances. This method, implemented in a software program called 'EloKanz', allowed us to examine more closely the relationship between speech intelligibility and speech comprehension in a communication setting, and to propose a new assessment tool. Our results show that orthographic transcription scores are not reliable indicators of verbal communication performance. The clinical implications are substantial, since intelligibility scores are used not only to track patients' progress over time, and thus to judge the effectiveness of therapies and interventions, but also to make decisions as important as enrolling a person in treatment or, conversely, discontinuing it.
|
9 |
The involvement of the speech production system in prediction during comprehension: an articulatory imaging investigation / Drake, Eleanor Katherine Elizabeth. January 2017 (has links)
This thesis investigates the effects in speech production of prediction during speech comprehension. The topic is raised by recent theoretical models of speech comprehension, which suggest a more integrated role for speech production and comprehension mechanisms than has previously been posited. The thesis is specifically concerned with the suggestion that during speech comprehension upcoming input is simulated with reference to the listener's own speech production system by way of efference copy. Throughout this thesis the approach taken is to investigate whether representations elicited during comprehension impact speech production. The representations of interest are those generated endogenously by the listener during prediction of upcoming input. We investigate whether predictions are represented at a form level within the listener's speech production system. We first present an overview of the relevant literature. We then present details of a picture-word interference study undertaken to confirm that the item set employed elicits typical phonological effects within a conventional paradigm in which the competing representation is perceptually available. The main body of the thesis presents evidence concerning the nature of representations arising during prediction, specifically their effect on speech output. We first present evidence from picture-naming vocal response latencies. We then complement and extend this with evidence from articulatory imaging, allowing an examination of pre-acoustic aspects of speech production. To investigate effects on speech production as a dynamic motor activity, we employ the Delta method, developed to quantify articulatory variability from EPG and ultrasound recordings. We apply this technique to ultrasound data acquired during mid-sagittal imaging of the tongue and extend the approach to allow us to explore the time course of articulation during the acoustic response latency period. We investigate whether prediction of another's speech evokes articulatorily specified activation within the listener's speech production system. The findings presented in this thesis suggest that representations evoked as predictions during speech comprehension do affect speech motor output. However, we found no evidence to suggest that predictions are represented in an articulatorily specified manner. We discuss this conclusion with reference to models of speech production-perception that implicate efference copies in the generation of predictions during speech comprehension.
|