1 |
Gender of Speaker Influences Infants' Discrimination of Non-Native Phonemes in a Multimodal Context
Bhullar, Naureen 15 February 2005 (has links)
Previous research has shown that infants can discriminate both native and non-native speech contrasts before the age of 10-12 months. After this age, infants' phoneme discrimination starts to resemble adults': they continue to discriminate native contrasts but lose their sensitivity to non-native ones. However, the majority of these studies have been carried out in a testing context that is dissimilar to the natural language-learning context experienced by infants. This study was designed to examine the influence of speaker gender and visual speech information on the ability of 11-month-old infants to discriminate non-native contrasts. Previous research in our laboratory revealed that 11-month-old infants were able to discriminate retroflex and dental Hindi contrasts when the speech was infant-directed, the speaker was female, and visual speech information was available (i.e., infants watched digital movies of female speakers). A follow-up study showed that with an adult-directed male voice and no visual speech information, 11-month-old infants did not discriminate the same non-native contrasts. The aim of the present study was therefore to address the questions posed by these two studies. Does the gender of the speaker matter on its own? To what extent does visual speech information support the discriminatory abilities of infants? Would the manner of speech help infants discriminate the non-native contrasts? The results of the current study show that 11-month-old infants were unable to discriminate the phonemic Hindi contrasts. Gender therefore appears to matter, as the presence of a male face and voice did not aid discrimination. / Master of Science
|
2 |
Gaelic dialect of Colonsay
Scouller, Alastair MacNeill January 2018 (has links)
This thesis provides a description of the Scottish Gaelic dialect spoken on the Inner Hebridean island of Colonsay. This dialect has not previously been the subject of any serious academic research. Gaelic was the dominant language on Colonsay until the 1970s, but the local dialect is now in terminal decline, with only a handful of fluent speakers still living on the island. The study focusses mainly on the phonology of the dialect, but other aspects such as morphology, syntax and lexis are also covered. Following a brief introduction, Chapter 1 seeks to situate the dialect in its wider geographical, historical and sociolinguistic context, highlighting the major changes that have taken place in the past forty years, and have led to its present endangered situation. Chapter 2, which comprises approximately half the thesis, examines the phonological structure of the dialect in detail, based on the results of the Survey of the Gaelic Dialects of Scotland (SGDS). Issues of phonetic and phonemic transcription are discussed. The phonemes identified are then listed, with their respective allophones and non-allophonic variants. Chapter 3 deals with prosodic and other non-segmental features which are of significance for the phonology of the dialect. Chapter 4 highlights those aspects of morphology and syntax where Colonsay usage differs from other varieties of Gaelic. Chapter 5 discusses lexical features which are particular to this dialect, or shared with neighbouring dialects in Argyll. An annotated Glossary lists words which are of particular interest in the study of this dialect, some of which are discussed in more detail in Chapter 5. This thesis will provide future students of Gaelic dialectology with an account of the Colonsay dialect, to complement the numerous monographs that have been written about other varieties of Gaelic. 
Because of the precarious position of this dialect, the timing of this study is critical: it represents the last opportunity to 'preserve by record' a distinctive variety of Gaelic which, sadly, is on the verge of extinction.
|
3 |
Segmentace řeči / Speech segmentation
Kašpar, Ladislav January 2015 (has links)
My diploma thesis is devoted to the problem of speech segmentation. It includes the basic theory on this topic, focusing on the calculation of the parameters for speech segmentation that are used in the practical part. An application for speech segmentation has been written in Matlab. It uses techniques such as segmentation of the signal into frames, short-time energy of the signal, and the zero-crossing function. These parameters are used as input for the k-means algorithm.
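As a rough sketch of the pipeline this abstract describes (the thesis implementation itself is in Matlab), the following Python computes short-time energy and zero-crossing rate per frame and clusters the frames with a minimal k-means. The frame length, cluster count, and toy signal are arbitrary illustrative choices, not taken from the thesis:

```python
import numpy as np

def frame_features(signal, frame_len=256):
    """Split the signal into non-overlapping frames and compute
    short-time energy and zero-crossing rate per frame."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sum(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.column_stack([energy, zcr])

def kmeans(points, k=2, iters=50, seed=0):
    """Minimal k-means: repeatedly assign frames to the nearest
    centroid and move each centroid to the mean of its frames."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

# Toy signal: 8 frames of near-silence followed by 8 frames of a tone.
rng = np.random.default_rng(1)
sig = np.concatenate([0.01 * rng.standard_normal(2048),
                      np.sin(2 * np.pi * 440 * np.arange(2048) / 8000)])
labels = kmeans(frame_features(sig))
```

On this toy input the two clusters fall out along the energy axis, separating the quiet frames from the voiced ones.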
|
4 |
Recognition of phonemes using shapes of speech waveforms in WAL
Carandang, Alfonso B., n/a January 1994 (has links)
Generating a phonetic transcription of the speech waveform is one method
which can be applied to continuous speech recognition. Current methods of labelling a
speech wave involve the use of techniques based on spectrographic analysis. This paper
presents a computationally simple method by which some phonemes can be identified
primarily by their shapes.
Three shapes which are regularly manifested by three phonemes were examined
in utterances made by a number of speakers. Features were then devised to recognise
their patterns using finite state automata combined with a checking mechanism. These
were implemented in the Wave Analysis Language (WAL) system developed at the
University of Canberra and the results showed that the phonemes can be recognised
with high accuracy. The resulting shape features have also demonstrated a degree of
speaker independence and context dependency.
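The shape-recognition idea can be illustrated with a toy finite state automaton. The actual WAL features and automata are not given in the abstract, so the pattern below (a single rise-and-fall "peak" shape) and its threshold are hypothetical:

```python
def direction(x0, x1, eps=0.01):
    """Quantise a sample-to-sample move into 'up', 'down' or 'flat'."""
    d = x1 - x0
    return 'up' if d > eps else ('down' if d < -eps else 'flat')

def matches_rise_fall(samples, eps=0.01):
    """Finite state automaton accepting waveforms that rise once and
    then fall (a single peak); anything else is rejected."""
    transitions = {
        ('start', 'flat'): 'start',
        ('start', 'up'): 'rising',
        ('rising', 'up'): 'rising',
        ('rising', 'flat'): 'rising',
        ('rising', 'down'): 'falling',
        ('falling', 'down'): 'falling',
        ('falling', 'flat'): 'falling',
    }
    state = 'start'
    for a, b in zip(samples, samples[1:]):
        state = transitions.get((state, direction(a, b, eps)))
        if state is None:          # e.g. a second rise after falling
            return False
    return state == 'falling'      # the single accepting state
```

Here `[0, 1, 2, 1, 0]` (a single peak) is accepted, while `[0, 1, 0, 1, 0]` (two peaks) is rejected; a per-shape checking mechanism like the one the thesis mentions could then be layered on top.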
|
5 |
Real Time Speech Driven Face Animation / Talstyrd Ansiktsanimering i Realtid
Axelsson, Andreas, Björhäll, Erik January 2003 (has links)
The goal of this project is to implement a system to analyse an audio signal containing speech and produce a classification of lip-shape categories (visemes) in order to synchronize the lips of a computer-generated face with the speech.
The thesis describes the work to derive a method that maps speech to lip movements, on an animated face model, in real time. The method is implemented in C++ on the PC/Windows platform. The program reads speech from pre-recorded audio files and continuously performs spectral analysis of the speech. Neural networks are used to classify the speech into a sequence of phonemes, and the corresponding visemes are shown on the screen.
Some time delay between input speech and the visualization could not be avoided, but the overall visual impression is that sound and animation are synchronized.
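The processing chain outlined above (frame the audio, take a spectrum, classify, look up a viseme) can be sketched as follows. The trained neural network is replaced here by a crude hand-written rule, and the phoneme-to-viseme table is invented for illustration, so none of this reflects the thesis' actual C++ classifier:

```python
import numpy as np

# Invented phoneme-to-viseme table; the thesis derives its mapping
# (and its classifier) from training data not shown in the abstract.
PHONEME_TO_VISEME = {'a': 'open', 'f': 'lip-teeth'}

def spectra(signal, frame_len=256):
    """Short-time magnitude spectra: one FFT per Hann-windowed frame."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len) * np.hanning(frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))

def classify_phoneme(spectrum):
    """Stand-in for the trained neural network: a crude rule that
    keys on whether spectral energy sits in the low frequencies."""
    low = spectrum[:len(spectrum) // 4].sum()
    return 'a' if low > spectrum.sum() / 2 else 'f'

def viseme_sequence(signal):
    """One viseme label per frame of the input speech signal."""
    return [PHONEME_TO_VISEME[classify_phoneme(s)] for s in spectra(signal)]

# A 440 Hz tone (assuming an 8 kHz sampling rate) keeps its energy in
# the low bins, so every frame maps to the 'open' viseme here.
lips = viseme_sequence(np.sin(2 * np.pi * 440 * np.arange(1024) / 8000))
```

A real-time system would run the same per-frame loop on a live audio stream and drive the face model with the resulting viseme stream.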
|
7 |
The Impact of Visual Input on the Ability of Bilateral and Bimodal Cochlear Implant Users to Accurately Perceive Words and Phonemes in Experimental Phrases
January 2015 (has links)
abstract: A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and to interact more fully with their social environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal listeners, vision plays a role; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases previously generated by Liss et al. (1998) in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. These results suggest that vision benefits bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. Vision did not, on the other hand, provide the bimodal participants with significantly increased access to place and stress cues.
The exact mechanism by which bimodal implant users improved speech perception with the addition of vision is therefore unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research on the benefit vision provides to bilateral and bimodal cochlear implant users. / Dissertation/Thesis / Masters Thesis Speech and Hearing Science 2015
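As an illustration of the "percent words correct" measure used in the analysis above, here is a minimal scoring function. The real scoring rules of Liss et al. (1998) are more elaborate (they also feed the lexical boundary error analysis), so this is only a simplified stand-in:

```python
def percent_words_correct(target, response):
    """Score a listener transcript against the target phrase. A target
    word counts as correctly perceived if it appears anywhere in the
    response; repeated words and word order are ignored, unlike in
    real scoring protocols."""
    response_words = set(response.lower().split())
    target_words = target.lower().split()
    hits = sum(1 for w in target_words if w in response_words)
    return 100.0 * hits / len(target_words)
```

For example, scoring the hypothetical response "a men estate while" against the target "amend estate for a while" credits three of five target words, i.e. 60 percent words correct, and the miss "amend" → "a men" is exactly the kind of lexical boundary error the study tabulates.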
|
8 |
Learning non-Swedish speech sounds: A study of Swedish students’ pronunciation and ability to learn English phonemes
Reinholdsson, Tommy January 2014 (has links)
Previous research has shown that L2 students have difficulties producing, and even recognising, sounds that do not exist in their mother tongue. It has also been concluded that accented speech not only compromises intelligibility but also biases the listener negatively towards the speaker. The present study explores how proficient Swedish students are at producing the speech sounds /dʒ/, /j/, /v/, /w/, /ʃ/ and /tʃ/, of which /dʒ/, /w/ and /tʃ/ do not exist in Swedish. In addition, it explores whether their pronunciation of these sounds improves after a brief pronunciation lesson, whether this improvement is lasting, and whether they tend to learn the pronunciation of words as separate units or are able to generalise the rules of pronunciation and apply them appropriately. It also investigates whether a difference in the structure of the pronunciation lesson affects the students’ results. The study revealed that the students do have difficulties correctly producing /tʃ/, /dʒ/ and /j/ in particular. More specifically, they tended to confuse /dʒ/ and /j/, whereas many students appeared to be unaware that /tʃ/ exists and instead used the /ʃ/ sound, which does exist in Swedish. After the pronunciation lesson, however, the students’ pronunciation improved significantly. This improvement was shown to be lasting, and the students were generalising rules rather than learning words as separate units. What the study failed to show was a significant difference in results caused by a difference in the structure of the pronunciation lesson.
|
9 |
The role of phonemes and syllables in child phonology: Evidence from psycholinguistic experiments in Mandarin
Liao, Chao-yuan (廖昭媛) Unknown Date (has links)
National Chengchi University, master's thesis abstract
Graduate institute: Graduate Institute of Linguistics
Thesis title: The role of phonemes and syllables in child Mandarin phonology: evidence from psycholinguistic experiments
Advisor: Wan I-Ping (萬依萍)
Student: Liao Chao-yuan (廖昭媛) / Abstract
This paper investigated the role of syllables and phonemes for Mandarin-speaking children in terms of their sensitivity to syllable correspondences and phoneme correspondences. In the first experiment, a nonword classification task, 21 kindergarteners were recruited. Each child first learned two disyllabic standard sounds and was then presented with a test nonword, which he or she had to classify with whichever of the two standard sounds it sounded more alike at the beginning. The results of the nonword classification task showed no significant difference between the children's reliance on syllable correspondences and on phoneme correspondences. In two further experiments, word-pair judgment tasks 1 and 2, another 20 kindergarteners were recruited. In task 1, the children judged whether two test words sounded alike at the beginning. The results indicate that syllables are treated as a structural unit and that phonemes belong to a lower level of the syllabic construct: the children's responses tended to reflect the effect of syllable structure in language processing rather than the effect of the size of the shared units. In task 2, the children judged whether two test words sounded alike at the end, and the results again indicate that children's performance should be viewed in terms of syllable structure. In addition, the location of the shared unit did not play a role when the children judged the similarity between speech sounds: they performed equally well whether the shared units were at the beginnings or at the ends of the words.
|
10 |
O efeito do ensino do emparelhamento auditivo-visual de fonemas e grafemas e do ditado de sílabas na aquisição de leitura recombinativa / The effect of teaching the auditory-visual pairing of phonemes and graphemes and the dictation of syllables in the acquisition of recombinative reading
Teixeira, Nataly Santos do Nascimento 29 August 2018 (has links)
Previous issue date: 2018-08-29 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Studies have evaluated which conditions may facilitate the emergence of recombinative reading. Among these investigations, the effect of teaching different units (phonemes or syllables) has been examined. The present study aimed to verify the effect of teaching an auditory-visual relation involving phonemes and graphemes (AfCl), and of dictation of syllables, preceding the teaching of the spoken word – printed word relation (AC), on the acquisition of recombinative reading. The participants were six pre-literacy children, who were exposed to the Teaching Phase. A multiple-baseline design was adopted, in which three participants started the Teaching Phase concurrently (P1, P2 and P3) and the others started at later points, contingent on P3's performance. Both the teaching words and the test words were built from a matrix designed to use four syllables in different positions and to repeat each an equal number of times. Eight words were taught, divided into four teaching sets, each of which consisted of: (a) pre-test of the set; (b) teaching the AfCl (phoneme – grapheme) relation; (c) constructed-response matching-to-sample (dictation); (d) intermediate test; (e) teaching the AC relation; and (f) post-test of the set. The tests included recombination words composed of the units of the taught words. A further evaluation with words from all sets, together with the Phonological Awareness Test by Oral Production (Prova de Consciência Fonológica por Produção Oral, PCFO), was applied at the beginning and at the end of the procedure. The results showed that most participants improved their performance on both the taught and the recombination words in the AC and A'C' relations. Only two participants (P2 and P3) did not present textual behavior in the CD relation, and all of them named at least one of the recombination words (C'D').
Tests of textual behavior indicated that even in incorrect responses there was partial control by some unit of the word. The PCFO results showed improvement on the tests involving syllabic manipulation, rhyme and alliteration.
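The recombination logic behind the word matrix can be sketched as follows. The four syllables and the choice of the eight taught words are hypothetical, since the abstract does not list the actual Portuguese stimuli:

```python
from itertools import product

# Hypothetical CV syllables; the actual Portuguese stimuli are not
# listed in the abstract.
syllables = ['bo', 'la', 'fi', 'me']

# All 16 disyllabic combinations: each syllable appears four times in
# first position and four times in second position, mirroring the
# "equal number of times in different positions" constraint.
all_words = [s1 + s2 for s1, s2 in product(syllables, repeat=2)]

# Suppose eight of them are the taught words; the remainder then serve
# as recombination (generalisation) test words, each built entirely
# from units of the taught words.
taught = all_words[:8]
recombination = [w for w in all_words if w not in taught]
```

Because every recombination word reuses only taught syllables, correct reading of these words is evidence of unit-level (rather than whole-word) stimulus control.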
|