1

Model adaptation techniques in machine translation

Shah, Kashif 29 June 2012 (has links) (PDF)
Nowadays several indicators suggest that the statistical approach to machine translation is the most promising. It allows fast development of systems for any language pair provided that sufficient training data is available. Statistical Machine Translation (SMT) systems use parallel texts, also called bitexts, as training material for creation of the translation model, and monolingual corpora for target language modeling. The performance of an SMT system heavily depends upon the quality and quantity of available data. In order to train the translation model, parallel texts are collected from various sources and domains. These corpora are usually concatenated, word alignments are calculated and phrases are extracted. However, parallel data is quite inhomogeneous in many practical applications with respect to several factors like data source, alignment quality, appropriateness to the task, etc. This means that the corpora are not weighted according to their importance to the domain of the translation task. Therefore, it is the domain of the training resources that influences the translations that are selected among several choices. This is in contrast to the training of the language model, for which well-known techniques are used to weight the various sources of texts. We have proposed novel methods to automatically weight the heterogeneous data to adapt the translation model. In a first approach, this is achieved with a resampling technique. A weight is assigned to each bitext to select the proportion of data from that corpus. The alignments coming from each bitext are resampled based on these weights. The weights of the corpora are directly optimized on the development data using a numerical method. Moreover, an alignment score of each aligned sentence pair is used as a confidence measure. In an extended work, we obtain such a weighting by resampling alignments using weights that decrease with the temporal distance of bitexts to the test set. By these means, we can use all the available bitexts and still put an emphasis on the most recent ones. The main idea of our approach is to use a parametric form, or meta-weights, for the weighting of the different parts of the bitexts. This ensures that our approach has only a few parameters to optimize. In another work, we have proposed a generic framework which takes into account corpus- and sentence-level "goodness scores" during the calculation of the phrase table, which results in a better distribution of probability mass over the individual phrase pairs.
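A minimal sketch of the corpus-weighting-by-resampling idea described in this abstract (the data structures, corpus names and weights below are illustrative assumptions, not the thesis implementation; the weights would normally be tuned on development data):

```python
import random

def resample_alignments(bitexts, weights, sample_size, seed=0):
    """Resample aligned sentence pairs from several bitexts.

    bitexts : dict mapping corpus name -> list of (src, tgt, align_score)
    weights : dict mapping corpus name -> relative weight (e.g. tuned on dev data)
    Returns a list of sentence pairs in which each corpus contributes a share
    proportional to its weight, and pairs with higher alignment scores are
    more likely to be kept (the score acts as a confidence measure).
    """
    rng = random.Random(seed)
    total = sum(weights.values())
    sample = []
    for name, pairs in bitexts.items():
        n = round(sample_size * weights[name] / total)   # corpus-level share
        scores = [s for _, _, s in pairs]
        # sentence-level resampling, biased by alignment score
        sample += rng.choices(pairs, weights=scores, k=n)
    return sample

# toy usage: two corpora, the in-domain one weighted higher
bitexts = {
    "news":    [("src1", "tgt1", 0.9), ("src2", "tgt2", 0.4)],
    "weblogs": [("src3", "tgt3", 0.7)],
}
weights = {"news": 0.8, "weblogs": 0.2}
print(len(resample_alignments(bitexts, weights, sample_size=10)))
```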
2

Using Latin Square Design To Evaluate Model Interpolation And Adaptation Based Emotional Speech Synthesis

Hsu, Chih-Yu 19 July 2012 (has links)
In this thesis, we use hidden Markov models, which can synthesize speech of reasonable quality from a small amount of corpus data, to implement a speech synthesis system for Chinese. Moreover, emotional speech is synthesized by exploiting the flexibility of the parametric speech representation in this model. We conduct model interpolation and model adaptation to synthesize speech ranging from neutral to a particular emotion without the target speaker's emotional speech. In model interpolation, we use a monophone-based Mahalanobis distance to select emotional models that are close to the target speaker from a pool of speakers, and estimate the interpolation weights to synthesize emotional speech. In model adaptation, we collect abundant data to train average voice models for each individual emotion. These models are adapted to specific emotional models of the target speaker with the CMLLR method. In addition, we design a Latin-square evaluation to reduce systematic offsets in the subjective tests, making the results more credible and fair. We synthesize emotional speech including happiness, anger, and sadness, and use the Latin square design to evaluate performance in three parts: similarity, naturalness, and emotional expression. Based on the results, we make a comprehensive comparison and draw conclusions about the two methods for emotional speech synthesis.
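A small sketch of the speaker-selection step described above, assuming each candidate speaker is summarized by a vector of monophone model means and a shared covariance, with interpolation weights derived from inverse distances; names and dimensions are illustrative, not the thesis implementation:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

def select_and_weight(target_means, speaker_means, cov, k=3):
    """Pick the k emotional speaker models closest to the target speaker
    (monophone-mean vectors, Mahalanobis distance) and turn the distances
    into normalized interpolation weights (closer model -> larger weight)."""
    inv_cov = np.linalg.inv(cov)
    dists = {name: mahalanobis(target_means, m, inv_cov)
             for name, m in speaker_means.items()}
    nearest = sorted(dists, key=dists.get)[:k]
    inv = np.array([1.0 / (dists[n] + 1e-8) for n in nearest])
    weights = inv / inv.sum()
    return dict(zip(nearest, weights))

# toy example with 4 candidate speakers and 5-dimensional monophone means
rng = np.random.default_rng(0)
target = rng.normal(size=5)
pool = {f"spk{i}": rng.normal(size=5) for i in range(4)}
cov = np.eye(5)
print(select_and_weight(target, pool, cov, k=2))
```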
3

Robust gesture recognition

Cheng, You-Chi 08 June 2015 (has links)
It is a challenging problem to make a general hand gesture recognition system work in a practical operating environment. This study focuses mainly on recognizing English letters and digits performed near the steering wheel of a car and captured by a video camera. Like most human computer interaction (HCI) scenarios, in-car gesture recognition suffers from various robustness issues, including multiple human factors and highly varying lighting conditions. It therefore raises quite a few research issues to be addressed. First, multiple gesturing alternatives may share the same meaning, which is not typical in most previous systems. Next, gestures may not be performed as expected because users cannot see what exactly has been written, which increases gesture diversity significantly. In addition, varying illumination conditions make hand detection difficult and thus result in noisy hand gestures. Most severely, users tend to perform letters at a fast pace, which may leave too few frames to describe the gestures well. Since users are allowed to perform gestures freely, multiple alternatives and variations should be considered while modeling gestures. The main contribution of this work is to analyze and address these challenging issues step by step so that the robustness of the whole system can be effectively improved. By choosing a suitable color-space representation and applying compensation techniques for varying recording conditions, hand detection performance under multiple illumination conditions is first enhanced. Furthermore, the issues of low frame rate and differing gesturing tempo are resolved separately via cubic B-spline interpolation and the i-vector method for feature extraction. Finally, the remaining issues are handled by other modeling techniques such as sub-letter stroke modeling. Experimental results based on the above strategies show that the proposed framework clearly improves system robustness, encouraging future research on exploring more discriminative features and modeling techniques.
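As an illustration of the frame-rate remedy mentioned above, a minimal sketch that up-samples a sparse hand trajectory with a cubic B-spline (using SciPy; the trajectory and up-sampling factor are placeholders, not the thesis data):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def upsample_trajectory(points, factor=4):
    """Up-sample a 2-D hand trajectory with a cubic B-spline so that
    gestures captured at a low frame rate still yield enough frames
    for feature extraction."""
    points = np.asarray(points, dtype=float)        # shape (n_frames, 2)
    t = np.arange(len(points))
    spline = make_interp_spline(t, points, k=3)     # cubic B-spline
    t_dense = np.linspace(0, len(points) - 1, factor * len(points))
    return spline(t_dense)

# a short, sparsely sampled "letter" trajectory
traj = [(0, 0), (1, 2), (2, 3), (4, 3), (5, 1)]
print(upsample_trajectory(traj).shape)   # (20, 2)
```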
4

Dynamické Softwarové Architektury pro Resilientní Distribuované Systémy / Dynamic Software Architectures for Resilient Distributed Systems

Keznikl, Jaroslav January 2014 (has links)
Resilient Distributed Systems (RDS) are large-scale distributed systems that remain dependable despite their very dynamic, open-ended, and inherently unpredictable environments. This combination of system and environment properties makes development of software architectures for RDS using contemporary architecture models and abstractions very challenging. Therefore, the thesis proposes: (1) new architecture abstractions that are tailored for building dynamic software architectures for RDS, (2) design models and processes that endorse these abstractions at design time, and (3) means for efficient implementation, execution, and analysis of architectures based on these abstractions. Specifically, the thesis delivers (1) by introducing the DEECo component model, based on the concept of component ensembles. Contributing to (2), the thesis presents the Invariant Refinement Method, governing dependable, formally-grounded design of DEECo-based architectures, and the ARCAS method, focusing on dependable realization of open-ended dynamic component bindings typical for DEECo. Furthermore, it pursues (3) by presenting a formal operational semantics of DEECo and its mapping to Java in terms of an execution environment prototype - jDEECo. Additionally, the semantics is used as a basis for formal analysis via model...
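A toy, hedged sketch of the component-ensemble abstraction mentioned above: autonomous components expose "knowledge", and an ensemble dynamically groups components whose knowledge satisfies a membership condition and exchanges knowledge among them. The thesis realizes this in Java as jDEECo; the membership condition and knowledge fields here are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    position: float
    known_peers: set = field(default_factory=set)   # the component's "knowledge"

def membership(a: Component, b: Component, radius: float = 10.0) -> bool:
    """Two components form an ensemble when they are close enough."""
    return a is not b and abs(a.position - b.position) <= radius

def knowledge_exchange(a: Component, b: Component) -> None:
    """Within an ensemble, components learn about each other."""
    a.known_peers.add(b.name)
    b.known_peers.add(a.name)

components = [Component("c1", 0.0), Component("c2", 5.0), Component("c3", 50.0)]
for a in components:
    for b in components:
        if membership(a, b):
            knowledge_exchange(a, b)
print({c.name: sorted(c.known_peers) for c in components})
```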
5

Adaptace rozpoznávače řeči na datech bez přepisu / Unsupervised Adaptation of Speech Recognizer

Švec, Ján January 2015 (has links)
The goal of this thesis is to design and test techniques for unsupervised adaptation of speech recognizers on audio data without any textual transcripts. A training set is prepared first, and a baseline speech recognition system is trained. This system is used to transcribe unseen data. We experiment with an adaptation data selection process based on a measure of speech transcript quality. The system is then re-trained on this new set and its accuracy is evaluated. Finally, we experiment with the amount of adaptation data.
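A minimal sketch of the confidence-based selection loop described above (the recognizer interface and threshold are illustrative assumptions, not the thesis implementation):

```python
def select_adaptation_data(utterances, recognizer, threshold=0.8):
    """Transcribe untranscribed audio with the baseline recognizer and keep
    only utterances whose transcript confidence exceeds a threshold; the
    selected (audio, hypothesis) pairs are then used to re-train the system.

    `recognizer` is any callable returning (hypothesis, confidence) for an
    utterance -- a stand-in for the baseline ASR system."""
    selected = []
    for utt in utterances:
        hyp, conf = recognizer(utt)
        if conf >= threshold:
            selected.append((utt, hyp))
    return selected

# toy usage with a fake recognizer
fake_recognizer = lambda utt: (utt.upper(), 0.9 if len(utt) > 3 else 0.5)
print(select_adaptation_data(["hello", "hi", "good morning"], fake_recognizer))
```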
6

Automated Defect Recognition in Digital Radiography

Xiao, Xinhua 19 October 2015 (has links)
No description available.
7

Probabilistic space maps for speech with applications

Kalgaonkar, Kaustubh 22 August 2011 (has links)
The objective of the proposed research is to develop a probabilistic model of speech production that exploits the multiplicity of mapping between the vocal tract area functions (VTAF) and speech spectra. Two thrusts are developed. In the first, a latent variable model that captures uncertainty in estimating the VTAF from speech data is investigated. The latent variable model uses this uncertainty to generate many-to-one mapping between observations of the VTAF and speech spectra. The second uses the probabilistic model of speech production to improve the performance of traditional speech algorithms, such as enhancement, acoustic model adaptation, etc. In this thesis, we propose to model the process of speech production with a probability map. This proposed model treats speech production as a probabilistic process with many-to-one mapping between VTAF and speech spectra. The thesis not only outlines a statistical framework to generate and train these probabilistic models from speech, but also demonstrates its power and flexibility with such applications as enhancing speech from both perceptual and recognition perspectives.
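A small synthetic sketch of the "probability map" idea: one observed spectral value yields a weighted set of plausible VTAF candidates rather than a single estimate. The joint mixture model, feature dimensions, and data below are illustrative placeholders, not the thesis model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# synthetic joint data: column 0 is a "spectral" feature, column 1 a "VTAF"
# feature, built so one spectral value is consistent with two VTAF configurations
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.3, size=400)
y = np.where(rng.random(400) < 0.5, x + 2.0, -x - 2.0) + rng.normal(0, 0.1, 400)
joint = np.column_stack([x, y])

gmm = GaussianMixture(n_components=2, random_state=0).fit(joint)

def vtaf_candidates(spec_value, gmm):
    """Weight each mixture component by how well it explains the observed
    spectral value and return (weight, conditional VTAF mean) pairs."""
    out = []
    for w, mean, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        sxx, sxy = cov[0, 0], cov[0, 1]
        like = np.exp(-0.5 * (spec_value - mean[0]) ** 2 / sxx) / np.sqrt(sxx)
        cond_mean = mean[1] + sxy / sxx * (spec_value - mean[0])
        out.append((w * like, cond_mean))
    total = sum(p for p, _ in out)
    return [(p / total, m) for p, m in out]

print(vtaf_candidates(0.0, gmm))   # two candidates with roughly equal weight
```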
8

Automatic speech recognition for resource-scarce environments / N.T. Kleynhans.

Kleynhans, Neil Taylor January 2013 (has links)
Automatic speech recognition (ASR) technology has matured over the past few decades and has made significant impacts in a variety of fields, from assistive technologies to commercial products. However, ASR system development is a resource-intensive activity and requires language resources in the form of text-annotated audio recordings and pronunciation dictionaries. Unfortunately, many languages found in the developing world fall into the resource-scarce category, and due to this resource scarcity the deployment of ASR systems in the developing world is severely inhibited. In this thesis we present research into developing techniques and tools to (1) harvest audio data, (2) rapidly adapt ASR systems and (3) select “useful” training samples in order to assist with resource-scarce ASR system development. We demonstrate an automatic audio harvesting approach which efficiently creates a speech recognition corpus by harvesting an easily available audio resource. We show that by starting with bootstrapped acoustic models, trained with language data obtained from a dialect, and then running through a few iterations of an alignment-filter-retrain phase, it is possible to create an accurate speech recognition corpus. As a demonstration we create a South African English speech recognition corpus by using our approach to harvest an internet website which provides audio and approximate transcriptions. The acoustic models developed from harvested data are evaluated on independent corpora and show that the proposed harvesting approach provides a robust means to create ASR resources. As there are many acoustic model adaptation techniques which can be implemented by an ASR system developer, it becomes a costly endeavour to select the best adaptation technique. We investigate the dependence between the amount of adaptation data and the various adaptation techniques by systematically varying the adaptation data amount and comparing the performance of the techniques. We establish a guideline which can be used by an ASR developer to choose the best adaptation technique given a size constraint on the adaptation data, for the scenario where adaptation between narrow- and wide-band corpora must be performed. In addition, we investigate the effectiveness of a novel channel normalisation technique and compare its performance with standard normalisation and adaptation techniques. Lastly, we propose a new data selection framework which can be used to design a speech recognition corpus. We show that for limited data sets, independent of language and bandwidth, the most effective strategy for data selection is frequency-matched selection, and that the widely-used maximum entropy methods generally produced the least promising results. In our model, the frequency-matched selection method corresponds to a logarithmic relationship between accuracy and corpus size; we also investigated other model relationships, and found that a hyperbolic relationship (as suggested by simple asymptotic arguments in learning theory) may lead to somewhat better performance under certain conditions. / Thesis (PhD (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2013.
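A greedy, illustrative sketch of frequency-matched selection as described above (the unit inventory, target frequencies, and mismatch measure are assumptions made for the example, not the thesis implementation):

```python
from collections import Counter

def frequency_matched_selection(utterances, target_freq, budget):
    """Greedy sketch: repeatedly add the utterance whose units (e.g. words or
    phones) bring the selected set's unit distribution closest to a target
    distribution.  `utterances` is a list of token lists; `target_freq` maps
    unit -> desired relative frequency."""
    selected, counts, remaining = [], Counter(), list(utterances)

    def mismatch(c):
        total = sum(c.values()) or 1
        return sum(abs(c[u] / total - f) for u, f in target_freq.items())

    for _ in range(min(budget, len(remaining))):
        best = min(remaining, key=lambda utt: mismatch(counts + Counter(utt)))
        remaining.remove(best)
        counts += Counter(best)
        selected.append(best)
    return selected

corpus = [["a", "b"], ["a", "a"], ["b", "c"], ["c", "c", "c"]]
target = {"a": 0.5, "b": 0.3, "c": 0.2}
print(frequency_matched_selection(corpus, target, budget=2))
```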
9

Model adaptation techniques in machine translation / Techniques d'adaptation en traduction automatique

Shah, Kashif 29 June 2012 (has links)
Nowadays several indicators suggest that the statistical approach to machine translation is the most promising. It allows fast development of systems for any language pair provided that sufficient training data is available. Statistical Machine Translation (SMT) systems use parallel texts, also called bitexts, as training material for creation of the translation model, and monolingual corpora for target language modeling. The performance of an SMT system heavily depends upon the quality and quantity of available data. In order to train the translation model, parallel texts are collected from various sources and domains. These corpora are usually concatenated, word alignments are calculated and phrases are extracted. However, parallel data is quite inhomogeneous in many practical applications with respect to several factors like data source, alignment quality, appropriateness to the task, etc. This means that the corpora are not weighted according to their importance to the domain of the translation task. Therefore, it is the domain of the training resources that influences the translations that are selected among several choices. This is in contrast to the training of the language model, for which well-known techniques are used to weight the various sources of texts. We have proposed novel methods to automatically weight the heterogeneous data to adapt the translation model. In a first approach, this is achieved with a resampling technique. A weight is assigned to each bitext to select the proportion of data from that corpus. The alignments coming from each bitext are resampled based on these weights. The weights of the corpora are directly optimized on the development data using a numerical method. Moreover, an alignment score of each aligned sentence pair is used as a confidence measure. In an extended work, we obtain such a weighting by resampling alignments using weights that decrease with the temporal distance of bitexts to the test set. By these means, we can use all the available bitexts and still put an emphasis on the most recent ones. The main idea of our approach is to use a parametric form, or meta-weights, for the weighting of the different parts of the bitexts. This ensures that our approach has only a few parameters to optimize. In another work, we have proposed a generic framework which takes into account corpus- and sentence-level "goodness scores" during the calculation of the phrase table, which results in a better distribution of probability mass over the individual phrase pairs. We presented the results of our experiments in several international evaluation campaigns, such as IWSLT, NIST, OpenMT and WMT, on the English/Arabic and French/Arabic language pairs, and showed a significant improvement in the quality of the produced translations.
