1

A framework for speechreading acquisition tools

Gorman, Benjamin Millar. January 2018.
At least 360 million people worldwide have disabling hearing loss that frequently causes difficulties in day-to-day conversations. Hearing aids often fail to offer enough benefit and have low adoption rates. However, people with hearing loss find that speechreading can improve their understanding during conversation. Speechreading (often called lipreading) refers to using visual information about the movements of a speaker's lips, teeth, and tongue to help understand what they are saying. Speechreading is commonly used by people with all severities of hearing loss to understand speech, and people with typical hearing also speechread (albeit subconsciously) to help them understand others. However, speechreading is a skill that takes considerable practice to acquire. Publicly funded speechreading classes are sometimes provided, and have been shown to improve speechreading acquisition. However, classes are only provided in a handful of countries around the world, and students can only practice effectively when attending class. Existing tools have been designed to help improve speechreading acquisition, but they are often not effective because they have not been designed within the context of contemporary speechreading lessons or practice. To address this, in this thesis I present a novel speechreading acquisition framework that can be used to design Speechreading Acquisition Tools (SATs), a new type of technology for improving speechreading acquisition. I interviewed seven speechreading tutors and used thematic analysis to identify and organise the key elements of the framework. I evaluated the framework by using it to: 1) categorise every tutor-identified speechreading teaching technique, 2) critically evaluate existing Conversation Aids and SATs, and 3) design three new SATs. I then conducted a postal survey with 59 speechreading students to understand students' perspectives on speechreading and how their thoughts could influence future SATs. To further evaluate the framework's effectiveness, I then developed and evaluated two new SATs (PhonemeViz and MirrorMirror) designed using the framework. The findings from the evaluation of these two new SATs demonstrate that using the framework can help design effective tools to improve speechreading acquisition.
2

Beyond lips: components of speechreading skill

Lyxell, Björn. January 1989.
[1] p.; pp. 4-70: summary; pp. 73-153: 4 papers.
3

Leitura alfabética, escrita sob ditado, e leitura orofacial: interrelações com vocabulário, consciência fonológica e memória / Alphabetical reading, spelling under dictation, and speech reading: inter-relationships and the role of vocabulary, phonological awareness and memory

Santos, Luiz Eduardo Graton. 30 June 2017.
The dissertation is divided into two parts. Part 1 assessed the relationships among alphabetical reading, speech reading, auditory vocabulary, phonological awareness, visual recognition memory, and sentence reading comprehension in a sample of 157 children aged 6 to 8 years from a very high socioeconomic background. Part 2 assessed writing under dictation, with error analyses based on ciphering-precision measures and on the phonetics underlying speech reading (the articulation points of the phonemes corresponding to the writing errors) in 154 college and elementary-school students. Part 1 used the following tests: the Speech Reading Skill Test (SRST), the Computerized Speech Reading Vocabulary Test (CSRVT), the Alphabetic Reading Skill (Word Decoding-Recognition) Test (ARST), the Auditory Vocabulary Test (AVT), the Phonological Awareness Test (PAT), the Computerized Picture Recognition Memory Test (CPRMT-112), the Computerized Image Recognition Memory Test (CIRMT-180), and the Reading Comprehension Skill Test. Results showed that: 1) the orthographic lexicon (measured by a subtest requiring rejection of homophone pseudowords) increased systematically and monotonically from chance at age 6 through ages 7 and 8; 2) speech-reading skill was most strongly associated with the reading of written items; 3) performance on subtests demanding phonological and lexical reading strategies was directly proportional to meta-phonological skills, especially those at the phonemic level; 4) subtests demanding a phonological (decoding) reading strategy were the most strongly associated with speech reading, so the greater the skill in converting graphemes into phonemes, the greater the skill in converting optolalemes into phonemes to understand speech through speech reading; 5) speech-reading development correlated most strongly with the meta-phonological skills of Phonemic Transposition and Phonemic Addition, suggesting that phonemic awareness is a precursor of speech reading. Phonemic Transposition correlated most strongly with the phonological and lexical reading routes, whereas correlations with the logographic (pre-reading) subtests were non-significant or weaker than those with the phonemic subtests. Logographic subtests therefore do not reliably predict either good meta-phonological skill or good speech reading. The data support the interpretation that speech reading (converting optolalemes into phonemes) depends on alphabetical reading, which in turn depends on meta-phonological skills. Eight tests, five reading subtests, and nine phonological awareness subtests were normed.
In Part 2, 154 participants (61 college students and 93 elementary-school students) took a spelling-under-dictation test of 560 low-frequency words. ANOVAs revealed that ciphering precision for the dictated words was a positive function of the arithmetic mean of the cipherability indices of the phoneme-grapheme relationships composing those words, in accordance with Capovilla's model. Analyses of spelling errors (paragraphias) suggest an effect of visual speech reading on spelling under auditory dictation: the unconventional spelling units produced in the paragraphias mapped predominantly onto phonemes sharing the articulation points of those in the dictated words.
4

Cognitive deafness: The deterioration of phonological representations in adults with an acquired severe hearing loss and its implications for speech understanding

Andersson, Ulf. January 2001.
The aim of the present thesis was to examine possible cognitive consequences of acquired hearing loss and the possible impact of these cognitive consequences on the ability to process spoken language presented through visual speechreading or through a cochlear implant. The main findings can be summarised in the following conclusions: (a) The phonological processing capabilities of individuals who have acquired a severe hearing loss or deafness deteriorate progressively as a function of the number of years of complete or partial auditory deprivation. (b) The observed phonological deterioration is restricted to certain aspects of the phonological system. Specifically, the phonological representations of words in the mental lexicon are of poorer quality, whereas the phonological system in verbal working memory is preserved. (c) The deterioration of the phonological representations has a negative effect on the individual's ability to process speech, whether presented visually (i.e., speechreading) or through a cochlear implant, as it may impair word recognition processes that involve activation of and discrimination between the phonological representations in the lexicon. (d) Thus, the present research describes an acquired cognitive disability not previously documented in the literature, and contributes to the context of other populations with phonological disabilities by showing that complete or partial deprivation of auditory speech stimulation in adulthood can give rise to a phonological disability. (e) From a clinical point of view, the results suggest that early cochlear implantation after the onset of an acquired severe hearing loss is an important objective in order to reach a high level of speech understanding with the implant.
5

Exploration of Lip Shape Measures and their Association with Tongue Contact Patterns

Wagner, Jessica Lynn. 05 August 2005.
A variety of tools and techniques have been developed to measure the movements of the vocal tract, specifically of the tongue and lips. In recent years, computer technology has allowed for extensive exploration of these precise movements and for the development of speech recognition systems. However, there has been relatively little work on the combination of visible facial movements and internal articulatory activity. In this study, two different technologies were used to explore the internal and external movements of speech production in eight speakers: palatometry quantified tongue contact patterns and computerized video image analysis was used to derive lip shape parameters. Results showed that the lip measures used here cannot predict the identity of phonemes in all speakers as well as the tongue contact patterns can. Results also indicated that the data from lip measures were strongly influenced by who the speaker was, whereas the palatometric data were not. These results suggest that more variation exists in lip shape than in tongue contact patterns during speech production. Understanding more about lip measures and vocal tract movement during speech production may potentially benefit the area of speechreading; however, more research is needed to refine the procedures used.
6

Perception auditive, visuelle et audiovisuelle des voyelles nasales par les adultes devenus sourds. Lecture labiale, implant cochléaire, implant du tronc cérébral. / Auditory, visual and auditory-visual perception of nasal vowels by deafened adults: Speechreading, Cochlear Implant, Auditory Brainstem Implant

Borel, Stéphanie. 14 January 2015.
This thesis focuses on the visual, auditory, and auditory-visual perception of the French nasal vowels [ɑ̃] (« lent »), [ɔ̃] (« long »), and [ɛ̃] (« lin ») by deafened adult users of cochlear implants (CI) and auditory brainstem implants (ABI). A study of the visual perception of vowels in 22 deafened adults redefines the lip configurations of the French nasal vowels and updates the classification of vocalic visemes. Three studies on the auditory identification of nasal vowels, with 82, 15, and 10 CI users, highlight their difficulty in recognizing the three nasal vowels, which they perceive as oral vowels. Acoustic and perceptual analyses suggest that adults with CIs rely on the frequency information of the first two spectral peaks but miss the relative-intensity information of these peaks. A study with 13 ABI users shows that some linguistic acoustic cues are transmitted by the ABI, but that the fusion of auditory and visual features could be optimized for vowel identification. Finally, a survey of 179 speech-language and hearing therapists shows the need for an updated phonetic-articulatory description of the French vowels [ɑ̃] and [ɛ̃].
7

Semantic Framing of Speech: Emotional and Topical Cues in Perception of Poorly Specified Speech

Lidestam, Björn. January 2003.
The general aim of this thesis was to test the effects of paralinguistic (emotional) and prior contextual (topical) cues on perception of poorly specified visual, auditory, and audiovisual speech. The specific purposes were (1) to examine whether facially displayed emotions can facilitate speechreading performance; (2) to study the mechanism for such facilitation; (3) to map the information-processing factors involved in processing poorly specified speech; and (4) to present a comprehensive conceptual framework for speech perception that takes the specification of the signal into account. Experimental and correlational designs were used, and 399 normal-hearing adults participated in seven experiments. The main conclusions are summarised as follows. (a) Speechreading can be facilitated by paralinguistic information as constituted by facially displayed emotions. (b) The facilitatory effect of emitted emotional cues is mediated by their degree of specification in transmission and their ambiguity as percepts, and by how distinct the perceived emotions, combined with topical cues, are as cues for lexical access. (c) Facially displayed emotions affect speech perception by conveying semantic cues; neither enhanced articulatory distinctiveness nor an emotion-related state in the perceiver is needed for facilitation. (d) The combined findings suggest that emotional and topical cues provide constraints for activation spreading in the lexicon. (e) Both bottom-up and top-down factors are associated with perception of poorly specified speech, indicating that variation in information-processing abilities is a crucial factor for perception when sensory input is sparse. A conceptual framework for speech perception, comprising the specification of linguistic and paralinguistic information as well as the distinctiveness of primes, is presented. Generalisations of the findings to other forms of paralanguage and language processing are discussed.
