About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Použití rekurentních neuronových sítí pro automatické rozpoznávání řečníka, jazyka a pohlaví / Neural networks for automatic speaker, language, and sex identification

Do, Ngoc January 2016 (has links)
Title: Neural networks for automatic speaker, language, and sex identification Author: Bich-Ngoc Do Department: Institute of Formal and Applied Linguistics Supervisors: Ing. Mgr. Filip Jurčíček, Ph.D., Institute of Formal and Applied Linguistics, and Dr. Marco Wiering, Faculty of Mathematics and Natural Sciences, University of Groningen Abstract: Speaker recognition is a challenging task with applications in many areas, such as access control and forensic science. In recent years, the deep learning paradigm and its branch, deep neural networks, have emerged as powerful machine learning techniques and achieved state-of-the-art results in many fields of natural language processing and speech technology. The aim of this work is therefore to explore the capability of a deep neural network model, recurrent neural networks, in speaker recognition. Our proposed systems are evaluated on the TIMIT corpus using a speaker identification task. In comparison with other systems under the same test conditions, our systems could not surpass the reference ones, owing to the sparsity of validation data. In general, our experiments show that the best system configuration is a combination of MFCCs with their dynamic features and a recurrent neural network model. We also experiment with recurrent neural networks and convolutional neural...
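The best configuration reported above (MFCCs with their dynamic features feeding a recurrent network) can be illustrated with a minimal sketch. This is not the thesis implementation: librosa and PyTorch are assumed as tooling, and the layer sizes are invented for the example; only the speaker count (630) comes from the TIMIT corpus itself.

    # Minimal sketch: MFCCs + delta features feeding a recurrent speaker classifier.
    # Tooling (librosa, PyTorch) and hyperparameters are illustrative assumptions.
    import librosa
    import numpy as np
    import torch
    import torch.nn as nn

    def mfcc_with_deltas(wav_path, n_mfcc=13):
        """Return a (frames, 3*n_mfcc) matrix: MFCCs + deltas + delta-deltas."""
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        d1 = librosa.feature.delta(mfcc)
        d2 = librosa.feature.delta(mfcc, order=2)
        return np.vstack([mfcc, d1, d2]).T  # time-major: (frames, 39)

    class SpeakerRNN(nn.Module):
        """A single recurrent layer; the final hidden state scores each speaker."""
        def __init__(self, n_feats=39, n_hidden=128, n_speakers=630):  # 630 = TIMIT speakers
            super().__init__()
            self.rnn = nn.RNN(n_feats, n_hidden, batch_first=True)
            self.out = nn.Linear(n_hidden, n_speakers)

        def forward(self, x):       # x: (batch, frames, n_feats)
            _, h = self.rnn(x)      # h: (1, batch, n_hidden)
            return self.out(h[-1])  # unnormalized per-speaker scores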
472

Continuous space models with neural networks in natural language processing / Modèles neuronaux pour la modélisation statistique de la langue

Le, Hai Son 20 December 2012 (has links)
Les modèles de langage ont pour but de caractériser et d'évaluer la qualité des énoncés en langue naturelle. Leur rôle est fondamental dans de nombreux cadres d'application comme la reconnaissance automatique de la parole, la traduction automatique, l'extraction et la recherche d'information. La modélisation actuellement à l'état de l'art est la modélisation « historique » dite n-gramme, associée à des techniques de lissage. Ce type de modèle prédit un mot uniquement en fonction des n-1 mots précédents. Pourtant, cette approche est loin d'être satisfaisante puisque chaque mot est traité comme un symbole discret qui n'a pas de relation avec les autres. Ainsi les spécificités du langage ne sont pas prises en compte explicitement et les propriétés morphologiques, sémantiques et syntaxiques des mots sont ignorées. De plus, à cause du caractère épars des langues naturelles, l'ordre est limité à n=4 ou 5. Sa construction repose sur le dénombrement de successions de mots, effectué sur des données d'entraînement. Ce sont donc uniquement les textes d'apprentissage qui conditionnent la pertinence de la modélisation n-gramme, par leur quantité (plusieurs milliards de mots sont utilisés) et leur représentativité du contenu en termes de thématique, d'époque ou de genre. L'usage des modèles neuronaux a récemment ouvert de nombreuses perspectives. Le principe de projection des mots dans un espace de représentation continu permet d'exploiter la notion de similarité entre les mots : les mots du contexte sont projetés dans un espace continu et l'estimation de la probabilité du mot suivant exploite alors la similarité entre ces vecteurs. Cette représentation continue confère aux modèles neuronaux une meilleure capacité de généralisation et leur utilisation a donné lieu à des améliorations significatives en reconnaissance automatique de la parole et en traduction automatique. Pourtant, l'apprentissage et l'inférence des modèles de langue neuronaux à grand vocabulaire restent très coûteux. Ainsi, par le passé, les modèles neuronaux ont été utilisés soit pour des tâches avec peu de données d'apprentissage, soit avec un vocabulaire de mots à prédire limité en taille. La première contribution de cette thèse est donc de proposer une solution qui s'appuie sur la structuration de la couche de sortie sous forme d'un arbre de classification pour résoudre ce problème de complexité. Le modèle se nomme Structured OUtput Layer (SOUL) et allie une architecture neuronale avec les modèles de classes. Dans le cadre de la reconnaissance automatique de la parole et de la traduction automatique, ce nouveau type de modèle a permis d'obtenir des améliorations significatives des performances pour des systèmes à grande échelle à l'état de l'art. La deuxième contribution de cette thèse est d'analyser les représentations continues induites et de comparer ces modèles avec d'autres architectures comme les modèles récurrents. Enfin, la troisième contribution est d'explorer la capacité de la structure SOUL à modéliser le processus de traduction. Les résultats obtenus montrent que les modèles continus comme SOUL ouvrent des perspectives importantes de recherche en traduction automatique. / The purpose of language models is, in general, to capture and model the regularities of a language, thereby capturing the morphological, syntactic and distributional properties of word sequences in that language.
They play an important role in many successful applications of Natural Language Processing, such as Automatic Speech Recognition, Machine Translation and Information Extraction. The most successful approaches to date are based on the n-gram assumption and on adjusting statistics from the training data with smoothing and back-off techniques, notably the Kneser-Ney technique, introduced twenty years ago. In this way, language models predict a word based on its n-1 previous words. In spite of their prevalence, conventional n-gram language models still suffer from several limitations that could intuitively be overcome by consulting human expert knowledge. One critical limitation is that, ignoring all linguistic properties, they treat each word as one discrete symbol with no relation to the others. Another is that, even with a huge amount of data, the data sparsity issue still has an important impact, so the optimal value of n in the n-gram assumption is often 4 or 5, which is insufficient in practice. This kind of model is constructed from the counts of n-grams in training data. Therefore, the pertinence of these models is conditioned only on the characteristics of the training text (its quantity, and how well it represents the content in terms of theme and date). Recently, one of the most successful attempts to directly learn word similarities is the use of distributed word representations in language modeling, where words with semantic and syntactic similarities are expected to be represented as neighbors in a continuous space. These representations and the associated objective function (the likelihood of the training data) are jointly learned using a multi-layer neural network architecture. In this way, word similarities are learned automatically. This approach has shown significant and consistent improvements when applied to automatic speech recognition and statistical machine translation tasks. A major difficulty with the continuous-space neural network approach remains the computational burden, which does not scale well to the massive corpora that are available nowadays. For this reason, the first contribution of this dissertation is the definition of a neural architecture based on a tree representation of the output vocabulary, namely the Structured OUtput Layer (SOUL), which makes it well suited for large-scale frameworks. The SOUL model combines the neural network approach with the class-based approach. It achieves significant improvements on both state-of-the-art large-scale automatic speech recognition and statistical machine translation tasks. The second contribution is to provide several insightful analyses of these models: their performance, their pros and cons, and the word-space representations they induce. Finally, the third contribution is the successful adoption of the continuous-space neural network into a machine translation framework. New translation models are proposed and reported to achieve significant improvements over state-of-the-art baseline systems.
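The central idea of SOUL, factorizing the output distribution over a class tree so that no softmax ever spans the full vocabulary, can be illustrated with a two-level class decomposition: P(w | h) = P(c(w) | h) · P(w | c(w), h). A minimal sketch, assuming the word-to-class clustering is given; it is not the dissertation's implementation, which uses a deeper tree.

    # Sketch of a class-factorized (tree-structured) output layer in the spirit
    # of SOUL: P(w | h) = P(class(w) | h) * P(w | class(w), h).
    # The word-to-class mapping is assumed given; all sizes are illustrative.
    import torch
    import torch.nn as nn

    class ClassFactorizedOutput(nn.Module):
        def __init__(self, hidden_dim, word2class, class_sizes):
            super().__init__()
            self.word2class = word2class  # word id -> (class id, index within class)
            self.class_scores = nn.Linear(hidden_dim, len(class_sizes))
            self.word_scores = nn.ModuleList(
                [nn.Linear(hidden_dim, size) for size in class_sizes])

        def log_prob(self, h, word_id):
            """log P(word | h), never computing a full-vocabulary softmax."""
            c, i = self.word2class[word_id]
            log_p_class = torch.log_softmax(self.class_scores(h), dim=-1)[c]
            log_p_word = torch.log_softmax(self.word_scores[c](h), dim=-1)[i]
            return log_p_class + log_p_word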
473

Aceitação de tecnologia por estudantes surdos na perspectiva da educação inclusiva / Technology Acceptance for deaf students in the perspective of inclusive education

Prietch, Soraia Silva 04 September 2014 (has links)
Com a Política Nacional de Educação Especial na perspectiva da Educação Inclusiva (2008), as escolas regulares vêm recebendo um número maior de estudantes surdos ou com deficiência auditiva (S/DA), que antes frequentavam escolas especializadas. No entanto, dados apontam a diminuição do número de estudantes S/DA matriculados do ensino fundamental para o ensino médio, e do ensino médio para o ensino superior; ou seja, existem razões para se acreditar que barreiras educacionais se impõem no caminho desses estudantes para que conquistem uma formação educacional completa. Neste contexto, o objetivo deste trabalho é propor um modelo de aceitação de tecnologias levando em consideração fatores que envolvam aspectos do contexto da educação inclusiva, bem como efetuar experimento da interação de usuários S/DA com uma tecnologia para avaliar o modelo. Dentre os fatores mencionados, um deles se refere às potenciais barreiras educacionais vivenciadas pelos estudantes S/DA em salas de aula inclusivas. Com relação à metodologia de pesquisa, o estudo desenvolveu-se em ciclos. Na medida em que as investigações avançavam, um novo estudo iniciava, se desenvolvia e se fechava. Isso permitiu que a proposta inicial tivesse sucessivos refinamentos ao longo do tempo, até o ponto em que os questionamentos iniciais foram respondidos e o objetivo foi atingido. O modelo proposto mostrou resultados positivos, no sentido de conseguir capturar os fatores que podem influenciar a aceitação de tecnologias considerando o contexto de aplicação específico, uma vez que estes incorporam os aspectos da qualidade pragmática e da qualidade hedônica, questões relacionadas à utilidade percebida da minimização de potenciais barreiras educacionais, expectativas futuras e condições facilitadoras. Conclui-se que o modelo engloba tanto a investigação sobre questões motivacionais pessoais dos usuários quanto a investigação de aspectos do contexto de uso, e que o modelo pode ser utilizado para a finalidade para a qual foi proposto: a avaliação de aceitação de tecnologias em ambientes de educação inclusiva. / With the National Policy on Special Education from the Perspective of Inclusive Education (2008), mainstream schools have been receiving a greater number of deaf or hard-of-hearing (D/HH) students who previously attended specialized schools. However, data point to a declining number of D/HH students enrolled from primary school to high school, and from high school to higher education; i.e., there are reasons to believe that educational barriers stand in the way of these students completing their education. In this context, the goal of this work is to propose a technology acceptance model that takes into account factors involving aspects of the inclusive education context, and to perform an experiment on the interaction of D/HH users with a technology in order to evaluate the model. One of these factors refers to the potential educational barriers experienced by D/HH students in inclusive classrooms. With regard to research methodology, the study was developed in cycles: as the investigations progressed, a new study began, unfolded and closed. This allowed successive refinements of the initial proposal over time, to the point where the initial questions were answered and the goal was reached. The proposed model showed positive results in capturing the factors that can influence technology acceptance in this specific application context, since these factors incorporate aspects of pragmatic quality and hedonic quality, issues related to the perceived usefulness of minimizing potential educational barriers, future expectations, and facilitating conditions. We conclude that the model encompasses both the users' personal motivations and aspects of the context of use, and that it can be used for the purpose for which it was proposed: evaluating technology acceptance in inclusive education settings.
475

Joint Evaluation Of Multiple Speech Patterns For Speech Recognition And Training

Nair, Nishanth Ulhas 01 1900 (has links)
Improving speech recognition performance in the presence of noise and interference continues to be a challenging problem. Automatic Speech Recognition (ASR) systems work well when the test and training conditions match; in real-world environments there is often a mismatch between them. Various factors, such as additive noise, acoustic echo, and speaker accent, affect speech recognition performance. Since ASR is a statistical pattern recognition problem, if the test patterns are unlike anything used to train the models, errors are bound to occur due to feature-vector mismatch. Various approaches to robustness have been proposed in the ASR literature, contributing mainly to two topics: (i) reducing the variability in the feature vectors, or (ii) modifying the statistical model parameters to suit the noisy condition. While some of those techniques are quite effective, we would like to examine robustness from a different perspective. Consider the analogy of human communication over the telephone: it is quite common to ask the person speaking to us to repeat certain portions of their speech, because we do not understand them. This happens more often in the presence of background noise, where the intelligibility of speech is affected significantly. Although the exact way humans decode multiple repetitions of speech is not known, it is quite possible that we use the combined knowledge of the multiple utterances to decode the unclear parts. The majority of ASR algorithms do not address this issue, except in very specific settings such as pronunciation modeling. We recognize that under very high noise conditions, or over bursty error channels such as packet communication where packets get dropped, it would be beneficial to use repeated utterances for robust ASR. In this thesis, we formulate a set of algorithms for joint evaluation/decoding to recognize noisy test utterances, and we use the same formulation for selective training of Hidden Markov Models (HMMs), again for robust performance. We first address joint recognition of multiple speech patterns given that they belong to the same class, formulating the problem with the patterns as isolated words. If there are K test patterns (K ≥ 2) of a word by a speaker, we show that it is possible to improve speech recognition accuracy over independent single-pattern evaluation, for both clean and noisy speech. We also find the state sequence that best represents the K patterns. This formulation can be extended to connected-word or continuous speech recognition as well. Next, we consider the benefits of the joint multi-pattern likelihood for HMM training. In standard HMM training, all the training data is used to arrive at the best possible parametric model. But the training data may not all be genuine: it may contain labeling errors, noise corruption, or plain outlier exemplars. Such outliers result in poorer models and degrade speech recognition performance, so it is important to train selectively, giving the outliers less weight. Giving less weight to an entire outlier pattern has been addressed before in the speech recognition literature. However, it is possible that only some portions of a training pattern are corrupted, so it is important that only the corrupted portions of speech, and not the entire pattern, receive less weight during HMM training.
Since HMM training uses multiple patterns of speech from each class, we show that joint evaluation methods can be used to selectively train HMMs such that only the corrupted portions of speech are given less weight, and not the entire speech pattern. Thus, we have addressed all three main tasks of an HMM so as to jointly exploit the availability of multiple patterns belonging to the same class. We evaluated the new algorithms on isolated word recognition for both clean and noisy speech. Significant improvement in speech recognition performance is obtained, especially for speech affected by transient/burst noise.
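The joint-evaluation idea has a direct dynamic-programming reading: one state sequence must explain all K repetitions, so Viterbi runs with the per-state emission score combined across patterns. The sketch below simplifies to frame-aligned patterns of equal length (the thesis handles the general case of unequal lengths); all names are illustrative.

    # Joint Viterbi over K repetitions of the same word: a single shared state
    # path is decoded, with emission scores summed over the K patterns.
    # Simplification (ours, not the thesis's): patterns are frame-aligned.
    import numpy as np

    def joint_viterbi(log_A, log_pi, log_b):
        """
        log_A  : (S, S) log transition matrix
        log_pi : (S,)   log initial-state probabilities
        log_b  : (K, T, S) per-pattern emission log-likelihoods
        Returns the best shared state path and its joint log-likelihood.
        """
        joint_b = log_b.sum(axis=0)          # (T, S): combine the K patterns
        T, S = joint_b.shape
        delta = log_pi + joint_b[0]
        psi = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A  # (from-state, to-state)
            psi[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + joint_b[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t, path[-1]]))
        return path[::-1], float(delta.max())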
476

Signal processing methods for enhancing speech and music signals in reverberant environments / Μέθοδοι ανάλυσης και ψηφιακής επεξεργασίας για την βελτίωση σημάτων ομιλίας και μουσικής σε χώρους με αντήχηση

Τσιλφίδης, Αλέξανδρος 06 October 2011 (has links)
This thesis presents novel signal processing algorithms for speech and music dereverberation. The proposed algorithms focus on blind single-channel suppression of late reverberation; however, binaural and semi-blind methods are also introduced. Late reverberation is a particularly harmful distortion: it significantly decreases the perceived quality of reverberant signals and also degrades the performance of Automatic Speech Recognition (ASR) systems and of other speech and music processing algorithms. Hence, the proposed dereverberation methods can be used either as standalone enhancement techniques or as preprocessing stages prior to ASR or other applied systems. The main method proposed here is a blind dereverberation technique based on perceptual reverberation modeling. This technique employs a computational auditory masking model and locates the signal regions where late reverberation is audible, i.e. where it is unmasked by the clean signal components. Following a selective signal processing approach, only such signal regions are further processed, through sub-band gain filtering. The technique has been evaluated for both speech and music signals and for a wide range of reverberation conditions. In all cases it was found to minimize processing artifacts and to produce perceptually superior clean-signal estimates compared with every other tested technique. Moreover, extensive ASR tests have shown that it significantly improves recognition performance, especially in highly reverberant environments. / Η διατριβή αποτελείται από εννιά κεφάλαια, δύο παραρτήματα καθώς και την σχετική βιβλιογραφία. Είναι γραμμένη στα αγγλικά ενώ περιλαμβάνει και ελληνική περίληψη. Στην παρούσα διατριβή, αναπτύσσονται μεθόδοι ψηφιακής επεξεργασίας σήματος για την αφαίρεση αντήχησης από σήματα ομιλίας και μουσικής. Οι προτεινόμενοι αλγόριθμοι καλύπτουν ένα μεγάλο εύρος εφαρμογών αρχικά εστιάζοντας στην τυφλή (“blind”) αφαίρεση για μονοκαναλικά σήματα. Στοχεύοντας σε πιο ειδικά σενάρια χρήσης προτείνονται επίσης αμφιωτικοί αλγόριθμοι αλλά και τεχνικές που προϋποθέτουν την πραγματοποίηση κάποιας ακουστικής μέτρησης. Οι αλγόριθμοι επικεντρώνουν στην αφαίρεση της καθυστερημένης αντήχησης που είναι ιδιαίτερα επιβλαβής για την ποιότητα σημάτων ομιλίας και μουσικής και μειώνει την καταληπτότητα της ομιλίας. Επίσης, επειδή αλλοιώνει σημαντικά τα στατιστικά των σημάτων, μειώνει σημαντικά την απόδοση συστημάτων αυτόματης αναγνώρισης ομιλίας καθώς και άλλων αλγορίθμων ψηφιακής επεξεργασίας ομιλίας και μουσικής. Έτσι οι προτεινόμενοι αλγόριθμοι μπορούν είτε να χρησιμοποιηθούν σαν αυτόνομες τεχνικές βελτίωσης της ποιότητας των ακουστικών σημάτων είτε να ενσωματωθούν σαν στάδια προ-επεξεργασίας σε άλλες εφαρμογές. Η κύρια μέθοδος αφαίρεσης αντήχησης που προτείνεται στην διατριβή, είναι βασισμένη στην αντιληπτική μοντελοποίηση και χρησιμοποιεί ένα σύγχρονο ψυχοακουστικό μοντέλο. Με βάση αυτό το μοντέλο γίνεται μία εκτίμηση των σημείων του σήματος που η αντήχηση είναι ακουστή δηλαδή που δεν επικαλύπτεται από το ισχυρότερο σε ένταση καθαρό από αντήχηση σήμα. Η συγκεκριμένη εκτίμηση οδηγεί σε μία επιλεκτική επεξεργασία σήματος όπου η αφαίρεση πραγματοποιείται σε αυτά και μόνο τα σημεία, μέσω πρωτότυπων υβριδικών συναρτήσεων κέρδους που βασίζονται σε δείκτες αντικειμενικής και υποκειμενικής αλλοίωσης. Εκτεταμένα αντικειμενικά και υποκειμενικά πειράματα δείχνουν ότι η προτεινόμενη τεχνική δίνει βέλτιστες ποιοτικά ανηχωικές εκτιμήσεις ανεξάρτητα από το μέγεθος του χώρου.
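The selective-processing idea, attenuating only the time-frequency regions where late reverberation is audible, can be sketched as follows. This is a crude stand-in, not the thesis method: the thesis uses a full computational auditory masking model, whereas here the masking threshold is reduced to a fixed offset below the observed energy, late reverberation is estimated with a simple scaled-and-delayed spectral model, and all parameter values are invented.

    # Sketch of selective sub-band gain filtering for late-reverberation
    # suppression. Late reverberant energy is crudely modeled as a scaled,
    # delayed copy of the magnitude spectrogram; gains are applied only where
    # that estimate is "audible" (within masking_offset_db of the observation).
    import numpy as np

    def suppress_late_reverb(stft_mag, delay_frames=40, alpha=0.4,
                             masking_offset_db=10.0, gain_floor=0.1):
        late = np.zeros_like(stft_mag)
        late[:, delay_frames:] = alpha * stft_mag[:, :-delay_frames]
        obs_db = 20 * np.log10(stft_mag + 1e-12)
        late_db = 20 * np.log10(late + 1e-12)
        audible = late_db > (obs_db - masking_offset_db)  # unmasked reverberation
        gain = np.ones_like(stft_mag)
        gain[audible] = np.maximum(
            1.0 - late[audible] / (stft_mag[audible] + 1e-12), gain_floor)
        return stft_mag * gain  # enhanced magnitude spectrogram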
477

Acoustic gesture modeling. Application to a Vietnamese speech recognition system / Modélisation des gestes acoustiques. Application à un système de reconnaissance de la parole Vietnamienne

Tran, Thi-Anh-Xuan 30 March 2016 (has links)
La sélection de caractéristiques acoustiques appropriées est essentielle dans tout système de traitement de la parole. Pendant près de 40 ans, la parole a été généralement considérée comme une séquence de signaux quasi-stables (voyelles) séparés par des transitions (consonnes). Bien qu'un grand nombre d'études documentent clairement l'importance de la coarticulation, et révèlent que les cibles articulatoires et acoustiques ne sont pas indépendantes du contexte, l'hypothèse que chaque voyelle présente une cible acoustique qui peut être spécifiée d'une manière indépendante du contexte reste très répandue. Ce point de vue implique des limitations fortes. Il est bien connu que les fréquences de formants sont des caractéristiques acoustiques qui présentent un lien évident avec la production de la parole, et qui peuvent participer à la distinction perceptive entre les voyelles. Par conséquent, les voyelles sont généralement décrites avec des configurations articulatoires statiques représentées par des cibles dans l'espace acoustique, généralement par les fréquences des formants correspondants, représentées dans les plans F1-F2 et F2-F3. Les consonnes occlusives peuvent être décrites en termes de point d'articulation, représenté par un locus (ou des équations de locus) dans le plan acoustique. Mais les trajectoires des fréquences de formants dans la parole fluide présentent rarement un état d'équilibre pour chaque voyelle. Elles varient avec le locuteur, l'environnement consonantique (co-articulation) et le débit de parole (relatif à un continuum entre hypo- et hyper-articulation). Au vu des limites inhérentes aux approches statiques, la démarche adoptée ici consiste à étudier les transitions entre les voyelles et les consonnes (V1V2 et V1CV2) d'un point de vue dynamique. / Speech plays a vital role in human communication. The selection of relevant acoustic features is key in the design of any system using speech processing. For some 40 years, speech was typically considered a sequence of quasi-stable portions of signal (vowels) separated by transitions (consonants). Despite a wealth of studies that clearly document the importance of coarticulation, and that reveal that articulatory and acoustic targets are not context-independent, the view that each vowel has an acoustic target that can be specified in a context-independent manner remains widespread. This point of view entails strong limitations. It is well known that formant frequencies are acoustic characteristics that bear a clear relationship with speech production, and that can distinguish among vowels. Therefore, vowels are generally described with static articulatory configurations represented by targets in the acoustic space, typically by formant frequencies in the F1-F2 and F2-F3 planes. Plosive consonants can be described in terms of places of articulation, represented by a locus or locus equations in an acoustic plane. But formant frequency trajectories in fluent speech rarely display a steady state for each vowel. They vary with the speaker, the consonantal environment (co-articulation) and the speaking rate (along a continuum between hypo- and hyper-articulation). In view of the inherent limitations of static approaches, the approach adopted here consists in studying both vowels and consonants from a dynamic point of view. First, we studied the effects of the impulse response at the beginning, at the end, and during transitions of the signal, both in the speech signal itself and at the perceptual level.
Variations of the phases of the components were then examined. The results show that the effects of these parameters can be observed in spectrograms. Crucially, the amplitudes of the spectral components distinguished under the approach advocated here are sufficient for perceptual discrimination. On the basis of this result, all our speech analyses focus on the amplitude domain only, deliberately leaving aside phase information. Next, we extend this work to vowel-consonant-vowel perception from a dynamic point of view. These perceptual results, together with those obtained earlier by Carré (2009a), show that vowel-to-vowel and vowel-consonant-vowel stimuli can be characterized and separated by the direction and rate of the transitions in the formant plane, even when the absolute frequency values are outside the vowel triangle (i.e. the vowel acoustic space in absolute values). Because of the limitations of formant measurement, the dynamic approach requires new tools based on parameters that can replace formant frequency estimation. Spectral Subband Centroid Frequency (SSCF) features were studied. Comparison with vowel formant frequencies shows that SSCFs can replace formant frequencies and act as "pseudo-formants", even during consonant production. On this basis, SSCF is used as a tool to compute dynamic characteristics, and we propose a new way to model dynamic speech features, which we call SSCF Angles. Our analyses of SSCF Angles were performed on transitions of vowel-to-vowel (V1V2) sequences in both Vietnamese and French. SSCF Angles prove to be reliable and robust parameters. For each language, the results show that: (i) SSCF Angles can distinguish V1V2 transitions; (ii) V1V2 and V2V1 have symmetrical properties in the acoustic domain, as measured by SSCF Angles; (iii) SSCF Angles for male and female speakers are fairly similar for the same V1V2 transition context; and (iv) they are more or less invariant to speech rate (normal and fast). Finally, these dynamic acoustic features are used in a Vietnamese automatic speech recognition system, with several interesting results.
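A spectral subband centroid frequency is simply the energy-weighted mean frequency within one subband, which is why SSCFs track formant-like resonances without explicit formant tracking. A minimal sketch; the band edges below are illustrative assumptions, not the thesis configuration.

    # Spectral Subband Centroid Frequencies (SSCFs): one energy-weighted mean
    # frequency per subband, usable as "pseudo-formants". Band edges are
    # illustrative, not the configuration used in the thesis.
    import numpy as np

    def sscf(power_spec, freqs, band_edges=(200, 1000, 2500, 4000, 6000)):
        """power_spec, freqs: (bins,) arrays for one frame; returns Hz centroids."""
        centroids = []
        for lo, hi in zip(band_edges[:-1], band_edges[1:]):
            band = (freqs >= lo) & (freqs < hi)
            energy = power_spec[band].sum() + 1e-12
            centroids.append(float((freqs[band] * power_spec[band]).sum() / energy))
        return centroids

An SSCF angle can then be read as the direction of the trajectory between two frames in a plane of two such centroids, e.g. atan2(ΔSSCF2, ΔSSCF1).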
478

Collecter, Transcrire, Analyser : quand la machine assiste le linguiste dans son travail de terrain / Collecting, Transcribing, Analyzing : Machine-Assisted Linguistic Fieldwork

Gauthier, Elodie 30 March 2018 (has links)
Depuis quelques décennies, de nombreux scientifiques alertent au sujet de la disparition des langues, qui ne cesse de s'accélérer. Face au déclin alarmant du patrimoine linguistique mondial, il est urgent d'agir afin de permettre aux linguistes de terrain, a minima, de documenter les langues en leur fournissant des outils de collecte innovants et, si possible, de leur permettre de décrire ces langues grâce au traitement des données assisté par ordinateur. C'est ce que propose ce travail, en se concentrant sur trois axes majeurs du métier de linguiste de terrain : la collecte, la transcription et l'analyse. Les enregistrements audio sont primordiaux, puisqu'ils constituent le matériau source, le point de départ du travail de description. De plus, tel un instantané, ils représentent un objet précieux pour la documentation de la langue. Cependant, les outils actuels d'enregistrement n'offrent pas au linguiste la possibilité d'être efficace dans son travail, et l'ensemble des appareils qu'il doit utiliser (enregistreur, ordinateur, microphone, etc.) peut devenir encombrant. Ainsi, nous avons développé LIG-AIKUMA, une application mobile de collecte de parole innovante, qui permet d'effectuer des enregistrements directement exploitables par les moteurs de reconnaissance automatique de la parole (RAP). Les fonctionnalités implémentées permettent d'enregistrer différents types de discours (parole spontanée, parole élicitée, parole lue) et de partager les enregistrements avec les locuteurs. L'application permet, en outre, la construction de corpus alignés « parole source (peu dotée) - parole cible (bien dotée) », « parole-image », « parole-vidéo », qui présentent un intérêt fort pour les technologies de la parole, notamment pour l'apprentissage non supervisé. Bien que la collecte ait été menée de façon efficace, l'exploitation (de la transcription jusqu'à la glose, en passant par la traduction) de la totalité de ces enregistrements est impossible, tant la tâche est fastidieuse et chronophage. Afin de compléter l'aide apportée aux linguistes, nous proposons d'utiliser des techniques de traitement automatique de la langue pour leur permettre de tirer parti de la totalité des données collectées. Parmi celles-ci, la RAP peut être utilisée pour produire des transcriptions, d'une qualité satisfaisante, de leurs enregistrements. Une fois les transcriptions obtenues, le linguiste peut s'adonner à l'analyse de ses données. Afin qu'il puisse procéder à l'étude de l'ensemble de ses corpus, nous considérons l'usage des méthodes d'alignement forcé. Nous démontrons que de telles techniques peuvent conduire à des analyses linguistiques fines. En retour, nous montrons que la modélisation de ces observations peut mener à des améliorations des systèmes de RAP. / In the last few decades, many scientists have raised concerns about the accelerating extinction of languages. Faced with this alarming decline of the world's linguistic heritage, action is urgently needed to enable fieldwork linguists, at the very least, to document languages by providing them with innovative collection tools, and to enable them to describe these languages. Machine assistance might be of interest for such a task. This is what we propose in this work, focusing on three pillars of linguistic fieldwork: collection, transcription and analysis. Recordings are essential, since they are the source material, the starting point of the descriptive work.
Speech recordings are also valuable objects for the documentation of the language. The growing proliferation of smartphones and other interactive voice mobile devices offers new opportunities for fieldwork linguists and researchers in language documentation. Field recordings should also include ethnolinguistic material, which is particularly valuable for documenting traditions and ways of living. However, large data collections require well-organized repositories to access the content, with efficient file-naming and metadata conventions. Thus, we have developed LIG-AIKUMA, a free Android app running on various mobile phones and tablets. The app aims to record speech for language documentation in an innovative way. It includes smart generation and handling of speaker metadata, as well as respeaking and parallel audio data mapping. LIG-AIKUMA proposes a range of different speech collection modes (recording, respeaking, translation and elicitation) and offers the possibility to share recordings between users. Through these modes, parallel corpora are built, such as "under-resourced speech - well-resourced speech", "speech - image" and "speech - video", which are also of great interest for speech technologies, especially for unsupervised learning. After the data collection step, the fieldwork linguist transcribes these data. Nonetheless, this cannot currently be done on the whole collection, since the task is tedious and time-consuming. We propose to use automatic techniques to help the fieldwork linguist take advantage of the whole speech collection. Along these lines, automatic speech recognition (ASR) is a way to produce transcripts of the recordings with decent quality. Once the transcripts are obtained (and corrected), the linguist can analyze the data. In order to analyze the whole collection, we consider the use of forced alignment methods. We demonstrate that such techniques can lead to fine-grained evaluation of linguistic features. In return, we show that modeling specific features may lead to improvements of the ASR systems.
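As an example of the kind of measurement forced alignment enables, per-phone durations can be pulled straight out of alignment output. The sketch below assumes alignments in the common CTM format ("utterance channel start duration label" per line); the format choice and the file name are ours, not the thesis's.

    # Mean phone durations from forced-alignment output in CTM format.
    # The CTM layout ("utt chan start dur label") is an assumption for the demo.
    from collections import defaultdict

    def mean_phone_durations(ctm_lines):
        totals, counts = defaultdict(float), defaultdict(int)
        for line in ctm_lines:
            _utt, _chan, _start, dur, label = line.split()[:5]
            totals[label] += float(dur)
            counts[label] += 1
        return {ph: totals[ph] / counts[ph] for ph in totals}

    # e.g. mean_phone_durations(open("align.ctm"))  # hypothetical file name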
479

Automatic speech recognition, with large vocabulary, robustness, independence of speaker and multilingual processing

Caon, Daniel Régis Sarmento 27 August 2010 (has links)
This work aims to provide automatic cognitive assistance, via a speech interface, to elderly people who live alone and are in at-risk situations. Distress expressions and voice commands are part of the target vocabulary for speech recognition. Throughout the work, the large-vocabulary continuous speech recognition system Julius is used in conjunction with the Hidden Markov Model Toolkit (HTK). The main features of Julius are described, including the modifications made to it; these modifications are part of this work's contribution, as is the detection of distress expressions (speech that suggests an emergency). Four languages were targeted for recognition: French, Dutch, Spanish and English. In this same sequence of languages (determined by data availability and by the locations of the system-integration scenarios), theoretical studies and experiments were conducted to meet the needs of each new configuration. This work includes studies of French and Dutch. Initial experiments (in French) were carried out with adaptation of hidden Markov models and analyzed by cross-validation. To carry out a new demonstration in Dutch, acoustic and language models were built and the system was integrated with other auxiliary modules (such as a voice activity detector and the dialogue system). Speech recognition results after acoustic adaptation to a specific speaker (and after the creation of language models for a specific system-demonstration scenario) showed a sentence accuracy rate of 86.39% for the Dutch acoustic models; on the same data, the semantic sentence accuracy rate was 94.44%. / Este trabalho visa prover assistência cognitiva automática, via interface de fala, a idosos que moram sozinhos, em situação de risco. Expressões de angústia e comandos vocais fazem parte do vocabulário alvo de reconhecimento de fala. Durante todo o trabalho, o sistema de reconhecimento de fala contínua de grande vocabulário Julius é utilizado em conjunto com o Hidden Markov Model Toolkit (HTK). O sistema Julius tem suas principais características descritas, tendo inclusive sido modificado. Tal modificação é parte da contribuição deste estudo, assim como a detecção de expressões de angústia (situações de fala que caracterizam emergência). Quatro diferentes línguas foram previstas como alvo de reconhecimento: francês, holandês, espanhol e inglês. Nessa mesma ordem de línguas (determinada pela disponibilidade de dados e pelos locais dos cenários de integração de sistemas), os estudos teóricos e experimentos foram conduzidos para suprir a necessidade de trabalhar com cada nova configuração. Este trabalho inclui estudos feitos com as línguas francês e holandês. Experimentos iniciais (em francês) foram feitos com adaptação de modelos ocultos de Markov e analisados por validação cruzada. Para realizar uma nova demonstração em holandês, modelos acústicos e de linguagem foram construídos e o sistema foi integrado a outros módulos auxiliares (como o detector de atividades vocais e o sistema de diálogo). Resultados de reconhecimento de fala após adaptação dos modelos acústicos a um locutor específico (e da criação de modelos de linguagem específicos para um cenário de demonstração do sistema) demonstraram 86,39% de taxa de acerto de sentença para os modelos acústicos holandeses. Os mesmos dados demonstram 94,44% de taxa de acerto semântico de sentença.
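The two figures reported above differ only in what counts as a correct sentence: sentence accuracy requires an exact transcript match, while semantic accuracy only requires that the extracted meaning (e.g. the recognized command) matches. A minimal sketch of both metrics; extract_semantics is a hypothetical stand-in for the demonstrator's command grammar.

    # Sentence accuracy vs. semantic sentence accuracy over paired reference
    # and hypothesis transcripts. `extract_semantics` is a hypothetical
    # stand-in for the system's command/meaning extraction.
    def sentence_accuracy(refs, hyps):
        return sum(r == h for r, h in zip(refs, hyps)) / len(refs)

    def semantic_accuracy(refs, hyps, extract_semantics):
        return sum(extract_semantics(r) == extract_semantics(h)
                   for r, h in zip(refs, hyps)) / len(refs)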
480

Language Identification Through Acoustic Sub-Word Units

Sai Jayram, A K V 05 1900 (has links) (PDF)
No description available.
