About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Text-based language identification for the South African languages

Botha, Gerrit Reinier, 04 September 2008
We investigate the factors that determine the performance of text-based language identification, with a particular focus on the 11 official languages of South Africa. Our study uses n-gram statistics as features for classification. In particular, we compare support vector machine, Naïve Bayesian and difference-in-frequency classifiers on different amounts of input text and various values of n, for different amounts of training data. For a fixed value of n the support vector machine generally outperforms the other classifiers, but the simpler classifiers are able to handle larger values of n. The additional computational complexity of training the support vector machine classifier may not be justified in light of the importance of using a large value of n, except possibly for small input windows when limited training data is available. We find that it is more difficult to discriminate between languages within the same language family than between languages from different families. For this reason the accuracy on short input strings is low, but for input strings of 100 characters or more there is only slight confusion within families and accuracies as high as 99.4% are achieved. For the shortest input strings studied here, which consist of 15 characters, the best accuracy achieved is only 83%, but when languages are grouped by family this corresponds to a usable 95.1% accuracy. The relationship between the amount of training data and the accuracy achieved is found to depend on the window size: for the largest window (300 characters) about 400 000 characters are sufficient to achieve close-to-optimal accuracy, whereas improvements in accuracy are found even beyond 1.6 million characters of training data. Finally, we show that the confusions between the different languages in our set can be used to derive informative graphical representations of the relationships between the languages.
Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / unrestricted
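As a minimal, self-contained sketch of the general technique the abstract describes (character n-gram statistics fed to a Naïve Bayesian classifier), the snippet below trains on a hypothetical two-sentence corpus with made-up language labels; it is not the author's implementation and does not reflect the much larger South African data set or the other classifiers compared in the thesis.

```python
from collections import Counter
from math import log


def char_ngrams(text, n=3):
    """Return all overlapping character n-grams of the input string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]


class NgramNaiveBayes:
    """Multinomial Naive Bayes over character n-gram counts, with add-one smoothing."""

    def __init__(self, n=3):
        self.n = n
        self.counts = {}   # language -> Counter of n-gram counts
        self.totals = {}   # language -> total number of n-grams seen
        self.vocab = set()  # all n-grams seen across languages

    def fit(self, samples):
        """samples: iterable of (text, language) pairs."""
        for text, lang in samples:
            grams = char_ngrams(text, self.n)
            self.counts.setdefault(lang, Counter()).update(grams)
            self.totals[lang] = self.totals.get(lang, 0) + len(grams)
            self.vocab.update(grams)

    def predict(self, text):
        """Return the language with the highest smoothed log-likelihood."""
        grams = char_ngrams(text, self.n)
        v = len(self.vocab)
        best_lang, best_score = None, float("-inf")
        for lang, counts in self.counts.items():
            total = self.totals[lang]
            score = sum(log((counts[g] + 1) / (total + v)) for g in grams)
            if score > best_score:
                best_lang, best_score = lang, score
        return best_lang


# Hypothetical toy training data; real systems use large per-language corpora.
train = [
    ("die kat sit op die mat", "Afrikaans"),
    ("the cat sits on the mat", "English"),
]
clf = NgramNaiveBayes(n=3)
clf.fit(train)
print(clf.predict("die hond en die kat"))  # expected: Afrikaans
```

The same n-gram features could instead be fed to a support vector machine or a difference-in-frequency classifier, which is the comparison the dissertation actually carries out.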
2

Modelování jazyka v rozpoznávání češtiny / Language Modeling for Speech Recognition in Czech

Mikolov, Tomáš, date unknown
This work concerns language modeling for automatic speech recognition. The first part describes currently widely used techniques for advanced statistical language modeling: class-based language models, factored language models and neural network based language models. The next part describes the implementation of a neural network based language model. Results obtained on the "Pražský mluvený korpus" and "Brněnský mluvený korpus" corpora (1 170 000 words) are reported, with a perplexity reduction of around 20%. Results obtained after rescoring N-best lists of spontaneous speech are also reported, with an absolute improvement in accuracy of more than 1%. The conclusion discusses possible uses of the work and possible future extensions, and describes the main weaknesses of current statistical language modeling techniques.
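A minimal sketch of the two measurements the abstract reports: perplexity of a language model on held-out text, and rescoring an N-best list by adding a weighted language-model score to the recogniser's acoustic score. The add-one-smoothed bigram model, the toy English sentences and the lm_weight value are hypothetical placeholders, not the thesis's neural network model or the Czech corpora.

```python
from collections import Counter
from math import log, exp


def bigram_model(sentences):
    """Build an add-one-smoothed bigram model from tokenised training sentences."""
    unigrams, bigrams = Counter(), Counter()
    for words in sentences:
        padded = ["<s>"] + words + ["</s>"]
        unigrams.update(padded[:-1])
        bigrams.update(zip(padded[:-1], padded[1:]))
    vocab = len(set(unigrams) | {"</s>"})

    def logprob(words):
        """Log-probability of a sentence under the smoothed bigram model."""
        padded = ["<s>"] + words + ["</s>"]
        return sum(
            log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
            for a, b in zip(padded[:-1], padded[1:])
        )

    return logprob


def perplexity(logprob, sentences):
    """exp(-average log-probability per predicted token, including </s>)."""
    total_lp = sum(logprob(s) for s in sentences)
    n_tokens = sum(len(s) + 1 for s in sentences)
    return exp(-total_lp / n_tokens)


def rescore(nbest, logprob, lm_weight=0.5):
    """Pick the hypothesis maximising acoustic score + weighted LM score."""
    return max(nbest, key=lambda h: h[1] + lm_weight * logprob(h[0]))


# Hypothetical toy data (English stand-ins for the Czech corpora).
train = [["we", "recognise", "speech"], ["we", "model", "language"]]
held_out = [["we", "recognise", "language"]]
lp = bigram_model(train)
print("perplexity:", perplexity(lp, held_out))

# Each hypothesis: (token list, acoustic log-score from the recogniser).
nbest = [(["we", "wreck", "a", "nice", "beach"], -11.2),
         (["we", "recognise", "speech"], -11.5)]
print("best:", " ".join(rescore(nbest, lp)[0]))
```

In this toy N-best list the language model overturns the acoustic ranking; rescoring of this kind is the mechanism behind the absolute accuracy improvement the abstract reports, with the thesis's neural network model supplying the LM scores instead of a smoothed bigram model.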
