1 |
Auditory comprehension: from the voice up to the single word level. Jones, Anna Barbara. January 2016.
Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content.

In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’-/tæ/ to ‘female’-/pæ/, healthy participants (n=18) showed that phonemes are categorised faster than voice, in contradistinction to the common hypothesis that voice information is stripped away (or normalised) to access phonemic content. Furthermore, reverse correlation analysis suggests that gender and phoneme are processed on the basis of different perceptual representations. A follow-up study (same paradigm) in stroke patients (n=25, right or left hemispheric brain lesions, both with and without aphasia) showed that lesions of the right frontal cortex (likely ventral inferior frontal gyrus) lead to systematic voice perception deficits, while left hemispheric lesions can elicit both voice and phoneme deficits. Together these results show that phoneme processing is left-lateralized while voice information processing requires both hemispheres. This further suggests that commencing Speech and Language Therapy at a low level of acoustic processing (voice perception) may be an appropriate approach to the treatment of phoneme perception impairments.

A longitudinal case study (CF) of crossed aphasia (a rare acquired communication impairment secondary to a lesion ipsilateral to the dominant hand) is then presented alongside a mini-review of the literature. Extensive clinical investigation showed that CF presented with word-finding difficulties related to impaired auditory phonological analysis, while functional Magnetic Resonance Imaging (fMRI) analyses showed right hemispheric lateralization of language functions (reading, repetition and verb generation). These results, together with the co-morbidity analysis from the mini-review, suggest that crossed aphasia can be explained by developmental disorders which cause a partial rightward shift in the lateralization of language processes. Interestingly, in CF this shift did not affect voice lateralization and information processing, suggesting partial segregation of voice and speech processing.

In the last two studies, auditory comprehension was examined at the single word level using a word-picture matching task with congruent (correct target) and incongruent (semantic, phonological and unrelated foils) conditions. fMRI in healthy participants (n=16) revealed a key role in this task for the pars triangularis (phonological processing), the left angular gyrus (semantic incongruency) and the left precuneus (semantic relatedness); these regions are typically connected via the arcuate fasciculus and are often impaired in aphasia. Further investigation of stroke patients on the same task (n=15) suggested that the connections between the angular gyrus and the pars triangularis serve a fundamental role in semantic processing. The quality of a published word-picture matching task was also investigated, with results questioning the clinical relevance of this task as an assessment tool.

Finally, a pilot study examining the effect of a computer-assisted auditory comprehension therapy (React2©) in 6 stroke patients (vs. 6 healthy controls and 6 stroke patients without therapy) is presented. Results show that the more therapy patients carry out, the more improvement is seen in the semantic processing of single nouns. However, these results need to be replicated on a larger scale before any outcomes can be generalised.

Overall, the findings from these studies provide new insight into, and extend, current cognitive and neuroanatomical models of voice perception, speech perception and single word auditory comprehension. A combined cognitive and neuroanatomical modelling approach is proposed to further research into impaired auditory comprehension and thus improve clinical care.
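The first study above reports that a reverse correlation analysis compared the perceptual representations underlying voice-gender and phoneme decisions, without detailing the procedure. The Python sketch below is only a generic, hypothetical illustration of the reverse-correlation ("classification image") idea, not the author's actual stimuli or pipeline: trial-by-trial stimulus noise is averaged separately for each binary response and the two means are subtracted, revealing which stimulus features drove the decision. All array sizes, variable names and the simulated responses are assumptions invented for the example.

```python
# Hypothetical reverse-correlation sketch (simulated data, not the study's).
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_features = 500, 64          # e.g. 64 spectral bins of added noise
noise = rng.normal(size=(n_trials, n_features))

# Simulated binary decisions ("male" vs "female"), arbitrarily driven by feature 10.
responses = (noise[:, 10] + 0.5 * rng.normal(size=n_trials)) > 0

# Classification image: mean noise on one response class minus the other.
kernel = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
print("Feature weighted most heavily:", int(kernel.argmax()))  # ~10 by construction
```

In a real analysis, kernels estimated separately for the gender task and the phoneme task would be compared; it is the reported difference between such representations that suggests distinct perceptual bases for voice and phoneme decisions.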
|
2 |
A Moderately Intensive Functional Treatment For Severe Auditory Comprehension Deficits Associated with Aphasia. Grant, Meredith Kathleen. 25 April 2013.
No description available.
|
3 |
Using Eye Tracking to Examine the Relationship between Working Memory and Auditory Comprehension in Persons with Aphasia. Sullivan, Penny. 16 June 2011.
No description available.
|
4 |
Severe, Chronic Auditory Comprehension Deficits: An Intensive Treatment and Cueing Protocol. Groh, Ellen Louise. 08 May 2012.
No description available.
|
5 |
Validation and standardization of instruments to assess receptive and expressive vocabularies in children from 18 months to 6 years of age. Damazio, Miriam. 14 September 2015.
One of the most classic instruments for assessing auditory vocabulary is the Peabody Picture Vocabulary Test (PPVT), standardized in Brazil by Capovilla for children from 2 years 6 months of age. To fill this gap, and to circumvent problems due to the low commercial availability of the PPVT in Brazil, this dissertation presents the USP Auditory Picture Vocabulary Test (TVAud33), validated and standardized with 1,279 children from São Paulo, recruited from 12 public and private schools, aged from 1 year 6 months to 6 years. Expressive vocabulary is usually assessed indirectly by means of inventories (such as Rescorla's Language Development Survey, standardized in Brazil by Capovilla for children from 2 to 6 years of age), whose validity and precision are compromised by informant mediation and bias. To mitigate this problem, the dissertation also presents the USP Expressive Vocabulary Test (TVExpr100), validated and standardized with 1,279 children from 18 months to 6 years of age. The purpose is to reduce the shortage of properly validated and standardized Brazilian instruments for the early assessment of receptive and expressive vocabularies in children from 18 months to 6 years of age. This dissertation is part of the Language Assessment and Intervention Research & Development Program of the Experimental Neuropsychology Laboratory at the University of Sao Paulo, headed by Capovilla, which aims to generate, validate and standardize genuinely Brazilian tests for the Brazilian population and to make them freely available to professionals without royalty costs. Part 1 presents three instruments validated and standardized in their original versions: an oral picture-naming test, the Expressive Vocabulary Test with 100 items (TVExp-100o), and a spoken-word comprehension test using picture choice (Auditory Vocabulary Test) in Forms A and B (TVAud-A33o and TVAud-B33o), with 33 items each, both derived by item analysis from the 107-item USP Auditory Vocabulary Test Forms A and B (TVAud-A107o and TVAud-B107o). Part 2 presents the three instruments with items reordered by increasing difficulty (TVExp-100r, TVAud-A33r and TVAud-B33r) following item analysis, and provides normative data for children from 1 to 6 years of age as well as developmental and cross-validation data for these instruments. Results showed that the incidence of naming of the 100 pictures by the 1,279 children aged 1 to 6 years was a positive function of picture characteristics (such as naming agreement/univocity and the familiarity of the corresponding names) documented in Capovilla and collaborators' Speech-Invoking Pictography software. For 6-year-old children, the greater the ability to understand spoken picture names (TVAud-B33r) and to name pictures orally (TVExp-100r), the greater the ability to name pictures in writing (TNF-Escrita). The present study established the validity of the instruments developed within this programme (from which all pictures and their respective iconicity, univocity and familiarity data were drawn), and offered normative data tables that allow the development of receptive auditory and expressive oral language to be monitored, so that children in need of preventive and remedial intervention can be identified early.
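The abstract above mentions deriving the 33-item forms by item analysis and reordering test items by increasing difficulty. As a purely illustrative, hypothetical sketch of that generic psychometric step (not the authors' actual procedure, data or software), the Python code below computes each item's facility, the proportion of children answering it correctly, from a simulated binary response matrix and ranks the items from easiest to hardest. The matrix dimensions, the made-up error rates and all variable names are assumptions for the example.

```python
# Hypothetical item-difficulty ranking sketch (simulated data, not the TVAud/TVExp data).
import numpy as np

rng = np.random.default_rng(0)
n_children, n_items = 1279, 33
true_error_rate = np.linspace(0.1, 0.9, n_items)  # invented per-item error rates

# Binary response matrix: rows = children, columns = items; 1 = correct answer.
responses = (rng.random(size=(n_children, n_items)) > true_error_rate).astype(int)

# Item facility = proportion of children answering each item correctly.
facility = responses.mean(axis=0)

# Reorder items from easiest (highest facility) to hardest, i.e. by increasing difficulty.
order = np.argsort(-facility)
print("Items, easiest to hardest:", order.tolist())
print("Facility range:", facility[order[0]], "to", facility[order[-1]])
```

In practice such a ranking would be combined with item discrimination indices and with the picture-level iconicity, univocity and familiarity data the abstract mentions before fixing the final item order.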
|
6 |
The effect of frequency of augmented input on the auditory comprehension of narratives for persons with Wernicke’s aphasia. Leuvennink, Jacqueline Lisinda. January 2019.
Augmented input refers to the use of any form of linguistic or visual strategy to enhance understanding during intervention. Previous research has predominantly focused on the various types of augmented input that can be used, especially to support reading comprehension. The purpose of this study was to determine and compare the effect of varying amounts of augmented input using partner-pointing on the accuracy of auditory comprehension for persons with Wernicke’s aphasia specifically. The research was conducted with seven participants with Wernicke’s aphasia. The participants listened to three narratives in three conditions, namely 0%, 50% and 100% augmented input with partner-pointing, and then responded to comprehension items based on the narratives. Most participants had more accurate scores in the 50% augmented input condition. In addition, participants did significantly better in the 50% condition than in the 100% augmented input condition. The main clinical implication is that augmented input, used as pre-task and during-task stimulation, seems to improve the auditory comprehension of narratives for some persons with Wernicke’s aphasia. However, providing augmented input for all the content units of a narrative seems to have a negative effect on the auditory comprehension of some persons with Wernicke’s aphasia. Continued research is necessary to determine what types and frequency of augmented input will lead to improved auditory comprehension for persons with aphasia, specifically Wernicke’s aphasia. / Dissertation (MA)--University of Pretoria, 2019. / Centre for Augmentative and Alternative Communication (CAAC) / MA / Unrestricted
|
7 |
The effect of augmented input on the auditory comprehension of narratives for persons with chronic aphasia. Stockley, Nicola. January 2017.
Background: Augmented input (AI) refers to any visual or linguistic strategy used by communication partners to increase the message comprehension of a person with aphasia. Previous research has focused on the type of AI, such as high- versus low-context images and linguistic versus visual supports, that can be used to facilitate improved auditory and reading comprehension. The results of these studies have been varied. To date, researchers have not evaluated the frequency of AI required to improve auditory comprehension of persons with chronic aphasia.

Aims: The purpose of this study was to determine the effect of AI using no-context Picture Communication Symbols™ (PCS) images, presented at a frequency of 70%, versus no AI on the accuracy of auditory comprehension of narratives for persons with chronic aphasia.

Methods and procedures: A total of 12 participants with chronic aphasia listened to two narratives, one in each of the conditions. Auditory comprehension was measured by assessing participants’ accuracy in responding to 15 multiple-choice cloze-type statements related to the narratives.

Results: Of the 12 participants, 7 participants (58.33%) gave more accurate responses to comprehension items in the AI condition, 4 participants (33.33%) gave more accurate responses in the no-AI condition and 1 participant scored the same in both conditions.

Conclusion: No-context Picture Communication Symbols™ (PCS) images used as AI improved the accuracy of responses to comprehension items based on narratives for some persons with chronic aphasia. Continued research is necessary in order to determine what forms and frequency of AI will lead to improved auditory comprehension for persons with aphasia. / Mini Dissertation (M(AAC))--University of Pretoria, 2017. / National Research Foundation (NRF) / Centre for Augmentative and Alternative Communication (CAAC) / M(AAC) / Unrestricted
|
8 |
An Intensive Treatment Protocol For Severe Chronic Auditory Comprehension Deficits In Aphasia: A Feasibility Study. Lundeen, Kelly Anne. 05 May 2011.
No description available.
|
9 |
Intensive Auditory Comprehension Treatment for People with Severe Aphasia: Outcomes and Use of Self-Directed Strategies. Knollman-Porter, Kelly. 05 October 2012.
No description available.
|
10 |
A Novel Pupillometric Method for the Assessment of Auditory Comprehension in Individuals with Neurological Disorders. Roche, Laura. 03 October 2011.
No description available.
|