  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Corpus design for Setswana lexicography

Otlogetswe, Thapelo Joseph 01 July 2008 (has links)
This PhD thesis is about the design of a Setswana corpus for lexicography. While various corpora have been compiled and a variety of corpus-based studies attempted in African languages, little effort has been devoted to corpus design. Additionally, although extensive analysis of the Setswana language has been carried out by missionaries, grammarians and linguists since the 1800s, none of that work concerns corpus design; most of it has been grammatical study of the language. Recent corpus research in African languages has largely concerned the use of corpora for the compilation of dictionaries, with little attention to corpus design. Pioneers of this kind of corpus research in African languages are Prinsloo and De Schryver (1999), De Schryver and Prinsloo (2000 and 2001) and Gouws and Prinsloo (2005). Because of this lack of research on corpus design, particularly in African languages, this thesis attempts to fill that gap, especially for Setswana. It is hoped that the findings of this study will inspire similar designs in other languages comparable to Setswana. We explore corpus design by measuring a variety of text types for lexical richness at comparable token points. The study explores whether a corpus compiled for lexicography must comprise a variety of texts drawn from different text types, or whether information of equal lexicographic quality could be extracted from a corpus consisting of a single text type. The study therefore determines whether linguistic variability is crucial in corpus design for lexicography. / Thesis (PhD (African Languages))--University of Pretoria, 2008. / African Languages / unrestricted
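The central measurement this abstract describes, comparing text types for lexical richness at comparable token points, can be sketched as a type count taken at equal token cutoffs. This is an illustrative reconstruction only, not the thesis's actual procedure, and the toy "text types" below are invented:

```python
def types_at(tokens, cutoff):
    """Number of distinct word types among the first `cutoff` tokens."""
    return len({t.lower() for t in tokens[:cutoff]})

# two toy "text types": repetitive news-like text vs. varied fiction-like text
news = "the court said the case was heard by the court on monday".split()
fiction = "a pale wind stirred dust across forgotten ochre rooftops at dusk".split()

# comparing richness at the same token point keeps sample sizes comparable
point = 10
richness = {"news": types_at(news, point), "fiction": types_at(fiction, point)}
```

Fixing the token point matters because raw type counts grow with corpus size; comparing counts at different sizes would confound variety with volume.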
2

Data sufficiency analysis for automatic speech recognition / by J.A.C. Badenhorst

Badenhorst, Jacob Andreas Cornelius January 2009 (has links)
The languages spoken in developing countries are diverse and most are currently under-resourced from an automatic speech recognition (ASR) perspective. In South Africa alone, 10 of the 11 official languages belong to this category. Given the potential for future applications of speech-based information systems such as spoken dialog systems (SDSs) in these countries, the design of minimal ASR audio corpora is an important research area. Specifically, current ASR systems utilise acoustic models to represent acoustic variability, and effective ASR corpus design aims to optimise the amount of relevant variation within training data while minimising the size of the corpus. An investigation of the effect that different amounts and types of training data have on these models is therefore needed. In this dissertation, specific consideration is given to the data sufficiency principles that apply to the training of acoustic models. The investigation of this task led to the following main achievements: 1) We define a new stability measurement protocol that provides the capability to view the variability of ASR training data. 2) This protocol allows for the investigation of the effect that various acoustic model complexities and ASR normalisation techniques have on ASR training data requirements. Specific trends with regard to the data requirements for different phone categories, and how these are affected by various modelling strategies, are observed. 3) Based on this analysis, acoustic distances between phones are estimated across language borders, paving the way for further research in cross-language data sharing. Finally, the knowledge obtained from these experiments is applied to perform a data sufficiency analysis of a new speech recognition corpus of South African languages: the Lwazi ASR corpus.
The findings correlate well with initial phone recognition results and yield insight into the sufficient number of speakers required for the development of minimal telephone ASR corpora. / Thesis (M. Ing. (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2009.
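The stability idea behind this abstract can be illustrated, purely as a sketch (the dissertation's actual protocol, features and distance measure differ), by fitting Gaussians to two disjoint halves of the data and checking how their divergence shrinks as the amount of training data grows:

```python
import numpy as np

def gaussian_kl(m1, s1, m2, s2):
    """KL divergence between univariate Gaussians N(m1, s1^2) || N(m2, s2^2)."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def instability(half_a, half_b):
    """Symmetric KL between Gaussians estimated from two disjoint samples:
    large values mean the data amount is too small for a stable model."""
    m1, s1 = half_a.mean(), half_a.std()
    m2, s2 = half_b.mean(), half_b.std()
    return gaussian_kl(m1, s1, m2, s2) + gaussian_kl(m2, s2, m1, s1)

rng = np.random.default_rng(0)
pool = rng.normal(5.0, 2.0, 20000)  # stand-in for one phone's acoustic features

small = instability(pool[:50], pool[50:100])        # little data: models disagree
large = instability(pool[:5000], pool[5000:10000])  # much data: models converge
```

When the divergence between independently trained models stops shrinking appreciably, adding more data for that phone buys little, which is the sense in which such a measure can signal data sufficiency.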
4

Automatic speech recognition for resource-scarce environments / N.T. Kleynhans.

Kleynhans, Neil Taylor January 2013 (has links)
Automatic speech recognition (ASR) technology has matured over the past few decades and has made significant impacts in a variety of fields, from assistive technologies to commercial products. However, ASR system development is a resource-intensive activity and requires language resources in the form of text-annotated audio recordings and pronunciation dictionaries. Unfortunately, many languages found in the developing world fall into the resource-scarce category, and this resource scarcity severely inhibits the deployment of ASR systems in the developing world. In this thesis we present research into developing techniques and tools to (1) harvest audio data, (2) rapidly adapt ASR systems and (3) select “useful” training samples, in order to assist with resource-scarce ASR system development. We demonstrate an automatic audio harvesting approach which efficiently creates a speech recognition corpus from an easily available audio resource. We show that by starting with bootstrapped acoustic models, trained with language data obtained from a dialect, and then running a few iterations of an alignment-filter-retrain phase, it is possible to create an accurate speech recognition corpus. As a demonstration we create a South African English speech recognition corpus by harvesting an internet website which provides audio and approximate transcriptions. The acoustic models developed from harvested data are evaluated on independent corpora and show that the proposed harvesting approach provides a robust means to create ASR resources. As there are many acoustic model adaptation techniques available to an ASR system developer, selecting the best one is a costly endeavour. We investigate how various adaptation techniques depend on the amount of adaptation data by systematically varying that amount and comparing the performance of the techniques.
We establish a guideline which an ASR developer can use to choose the best adaptation technique given a size constraint on the adaptation data, for the scenario where adaptation between narrow- and wide-band corpora must be performed. In addition, we investigate the effectiveness of a novel channel normalisation technique and compare its performance with standard normalisation and adaptation techniques. Lastly, we propose a new data selection framework which can be used to design a speech recognition corpus. We show that for limited data sets, independent of language and bandwidth, the most effective data selection strategy is frequency-matched selection, and that the widely-used maximum entropy methods generally produce the least promising results. In our model, the frequency-matched selection method corresponds to a logarithmic relationship between accuracy and corpus size; we also investigated other model relationships, and found that a hyperbolic relationship (as suggested by simple asymptotic arguments in learning theory) may lead to somewhat better performance under certain conditions. / Thesis (PhD (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2013.
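The closing claim of this abstract, that one selection strategy corresponds to a logarithmic accuracy/corpus-size relationship while a hyperbolic form is a competing model, amounts to comparing two regression forms. A sketch with invented accuracy figures (not the thesis's data):

```python
import numpy as np

# hypothetical corpus sizes (utterances) and recognition accuracies
n = np.array([1000.0, 2000.0, 4000.0, 8000.0, 16000.0])
acc = np.array([0.61, 0.68, 0.74, 0.79, 0.83])

# logarithmic model: acc ~ a + b*log(n), linear least squares in log(n)
A_log = np.column_stack([np.ones_like(n), np.log(n)])
coef_log, *_ = np.linalg.lstsq(A_log, acc, rcond=None)

# hyperbolic model: acc ~ a - b/n, linear least squares in 1/n
A_hyp = np.column_stack([np.ones_like(n), 1.0 / n])
coef_hyp, *_ = np.linalg.lstsq(A_hyp, acc, rcond=None)

pred_log = A_log @ coef_log
pred_hyp = A_hyp @ coef_hyp
err_log = np.sum((acc - pred_log) ** 2)
err_hyp = np.sum((acc - pred_hyp) ** 2)
```

With these made-up numbers, where each doubling of the corpus yields a roughly constant accuracy gain, the logarithmic form fits far better; data that saturates quickly would instead favour the hyperbolic form, which is the kind of condition-dependence the abstract alludes to.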
6

CORPORA PARALLELI E LINGUISTICA CONTRASTIVA: AMPLIAMENTO E APPLICAZIONI DEL CORPUS ITALIANO - RUSSO NEL NACIONAL'NYJ KORPUS RUSSKOGO JAZYKA / Parallel corpora and contrastive linguistics: enlargement and applications of the Italian-Russian corpus in the Nacional'nyj Korpus Russkogo Jazyka

NOSEDA, VALENTINA 19 September 2017 (has links)
Corpus Linguistics - which exploits annotated electronic corpora in the study of languages - is a widespread and consolidated approach.
In particular, parallel corpora, in which texts in one language are aligned with their translation in a second language, are an extremely useful tool in contrastive analysis. The lack of good parallel corpora for the languages of our interest - Russian and Italian - led us to work on improving the Italian-Russian parallel corpus available as a pilot corpus in the Russian National Corpus. This work therefore had a twofold aim, practical and theoretical. On the one hand, after studying the essential issues in designing a high-quality corpus, we established criteria for expansion and added new texts, allowing the Italian-Russian parallel corpus to grow from 700,000 to more than 4 million words, a size at which scientifically valid research is now possible. On the other hand, three corpus-based analyses were carried out to highlight the potential of the enlarged corpus: a study of prefixed Russian memory verbs and their translation into Italian; a comparison between the Italian analytic causative "fare + infinitive" and Russian causative verbs; and a comparative analysis of fifteen Italian versions of The Overcoat by N. Gogol'. These analyses first of all allowed us to advance some methodological remarks with a view to a further enlargement and improvement of the Italian-Russian parallel corpus. Secondly, the corpus-based approach proved useful in deepening the study of these topics from a theoretical point of view.
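Querying an aligned corpus for a construction such as the Italian causative "fare + infinitive" can be sketched as below; the aligned pairs are invented examples and the regex is a crude stand-in for the real corpus's morphosyntactic annotation and query interface:

```python
import re

# toy aligned (Italian, Russian) pairs standing in for corpus records
pairs = [
    ("Mi ha fatto ridere.", "Он меня рассмешил."),
    ("Ha letto il libro.", "Он прочитал книгу."),
    ("La fece aspettare a lungo.", "Он заставил её долго ждать."),
]

# rough pattern for "fare + infinitive": a form of fare followed by a word
# ending in -are/-ere/-ire (clitics and many tense forms are ignored here)
CAUSATIVE = re.compile(
    r"\b(?:fa|fai|facciamo|fate|fanno|fece|fatto|fare)\b\s+\w+(?:are|ere|ire)\b",
    re.IGNORECASE,
)

# alignment is what makes this contrastive: each Italian hit is returned
# together with its Russian counterpart
hits = [(it, ru) for it, ru in pairs if CAUSATIVE.search(it)]
```

The payoff of the parallel design is visible even in this toy: the Russian sides of the matched pairs show the two typical renderings the abstract's second analysis would compare (a lexical causative versus an analytic one with заставить).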
7

A critical investigation of Deaf comprehension of signed TV news interpretation

Wehrmeyer, Jennifer Ella January 2013 (has links)
This study investigates factors hampering comprehension of sign language interpretations rendered on South African TV news bulletins in terms of Deaf viewers’ expectancy norms and corpus analysis of authentic interpretations. The research fills a gap in the emerging discipline of Sign Language Interpreting Studies, specifically with reference to corpus studies. The study presents a new model for translation/interpretation evaluation based on the introduction of Grounded Theory (GT) into a reception-oriented model. The research question is addressed holistically in terms of target audience competencies and expectations, aspects of the physical setting, interpreters’ use of language and interpreting choices. The South African Deaf community is incorporated as experts into the assessment process, thereby empirically grounding the research within the socio-dynamic context of the target audience. Triangulation in data collection and analysis was provided by applying multiple mixed data collection methods, namely questionnaires, interviews, eye-tracking and corpus tools. The primary variables identified by the study are the small picture size and use of dialect. Secondary variables identified include inconsistent or inadequate use of non-manual features, incoherent or non-simultaneous mouthing, careless or incorrect sign execution, overly fast signing, loss of visibility against skin or clothing, omission of vital elements of sentence structure, adherence to source language structures, meaningless additions, incorrect referencing, oversimplification, and violations of Deaf norms of restructuring, information transfer, gatekeeping and third-person interpreting. The identification of these factors allows the construction of a series of testable hypotheses, thereby providing a broad platform for further research. 
Apart from pioneering corpus-driven sign language interpreting research, the study makes significant contributions to present knowledge of evaluative models, interpreting strategies and norms, and systems of transcription and annotation. / Linguistics / Thesis (D. Litt. et Phil. (Linguistics))
