151

Grapheme-to-phoneme conversion and its application to transliteration

Jiampojamarn, Sittichai 06 1900
Grapheme-to-phoneme conversion (G2P) is the task of converting a word, represented by a sequence of graphemes, to its pronunciation, represented by a sequence of phonemes. The G2P task plays a crucial role in speech synthesis systems, and is an important part of other applications, including spelling correction and speech-to-speech machine translation. G2P conversion is a complex task, for which a number of diverse solutions have been proposed. In general, the problem is challenging because the source string does not unambiguously specify the target representation. In addition, the training data include only example word pairs without the structural information of subword alignments. In this thesis, I introduce several novel approaches for G2P conversion. My contributions can be categorized into (1) new alignment models and (2) new output generation models. With respect to alignment models, I present techniques including many-to-many alignment, phonetic-based alignment, alignment by integer linear programming, and alignment-by-aggregation. Many-to-many alignment is designed to replace the one-to-one alignment that has been used almost exclusively in the past. The new many-to-many alignments are more precise and accurate in expressing grapheme-phoneme relationships. The other proposed alignment approaches attempt to advance the training method beyond the use of Expectation-Maximization (EM). With respect to generation models, I first describe a framework for integrating many-to-many alignments and language models for grapheme classification. I then propose joint processing for G2P using online discriminative training. I integrate a generative joint n-gram model into the discriminative framework. Finally, I apply the proposed G2P systems to name transliteration generation and mining tasks. Experiments show that the proposed system achieves state-of-the-art performance in both the G2P and name transliteration tasks.
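The abstract describes many-to-many alignment only in prose. As a rough, hypothetical illustration of the idea (not the thesis's EM-trained aligner), the following Python sketch finds the best segmentation of a word into grapheme chunks of up to two letters, each paired with up to two phonemes; a hand-written score table stands in for the learned chunk probabilities:

```python
# Toy many-to-many grapheme-phoneme alignment by dynamic programming.
# The thesis learns chunk scores with EM; the score table here is invented.

def m2m_align(word, phonemes, score, max_g=2, max_p=2, penalty=-5.0):
    """Best segmentation of `word` into grapheme chunks (<= max_g letters),
    each paired with a chunk of <= max_p phonemes."""
    G, P = len(word), len(phonemes)
    NEG = float("-inf")
    best = [[NEG] * (P + 1) for _ in range(G + 1)]
    back = [[None] * (P + 1) for _ in range(G + 1)]
    best[0][0] = 0.0
    for i in range(G + 1):
        for j in range(P + 1):
            if best[i][j] == NEG:
                continue
            for di in range(1, max_g + 1):
                for dj in range(1, max_p + 1):
                    if i + di > G or j + dj > P:
                        continue
                    g, p = word[i:i + di], tuple(phonemes[j:j + dj])
                    s = best[i][j] + score.get((g, p), penalty)
                    if s > best[i + di][j + dj]:
                        best[i + di][j + dj] = s
                        back[i + di][j + dj] = (i, j, g, p)
    pairs, i, j = [], G, P
    while (i, j) != (0, 0):          # trace the best path back to the origin
        i, j, g, p = back[i][j]
        pairs.append((g, p))
    return pairs[::-1]

# "ph" -> /f/ is the classic case that one-to-one alignment cannot express.
score = {("g", ("g",)): 0.0, ("r", ("r",)): 0.0,
         ("a", ("ae",)): 0.0, ("ph", ("f",)): 0.0}
print(m2m_align("graph", ["g", "r", "ae", "f"], score))
# -> [('g', ('g',)), ('r', ('r',)), ('a', ('ae',)), ('ph', ('f',))]
```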
152

Υλοποίηση βαθμίδας ΨΕΣ (Ψηφιακής Επεξεργασίας Σήματος) συστήματος σύνθεσης ομιλίας με βάση τον αλγόριθμο ΗΝΜ. / HNM-based DSP (Digital Signal Processing) module implementation of a TTS system

Βασιλόπουλος, Ιωάννης 16 May 2007
A TTS (Text-To-Speech) system converts any given text into its corresponding speech with natural characteristics. A TTS system consists of two modules: the Natural Language Processing (NLP) module and the Digital Signal Processing (DSP) module. The NLP module analyses the input text into phonemes and supplies the DSP module with the desired prosodic characteristics, namely the pitch, duration and volume of each phoneme. The DSP module then synthesizes speech with the target prosody, using speech analysis-synthesis algorithms such as HNM (Harmonic plus Noise Model). HNM models the speech signal as the sum of two parts, a part with harmonic characteristics and a part with noise characteristics. Using this model, the speech signal is analysed and resynthesized with or without prosodic modifications.
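As a loose illustration of the harmonic-plus-noise decomposition described above (not this thesis's implementation), the following Python sketch builds one voiced frame as harmonics of f0 below a voicing cutoff plus crudely high-passed noise; the amplitudes, phases and the 4 kHz cutoff are invented for the example:

```python
# Minimal HNM-style frame: harmonic part + noise part. All values are toys;
# a real system estimates amplitudes, phases and the cutoff per frame.
import numpy as np

fs = 16000                       # sample rate (Hz)
f0 = 120.0                       # fundamental frequency of the frame
t = np.arange(int(fs * 0.03)) / fs   # one 30 ms frame

# Harmonic part: harmonics of f0 up to the maximum voiced frequency.
fmax_voiced = 4000.0
harmonic = np.zeros_like(t)
k = 1
while k * f0 < fmax_voiced:
    amp = 1.0 / k                # toy 1/k spectral slope, not an estimate
    harmonic += amp * np.cos(2 * np.pi * k * f0 * t + 0.1 * k)
    k += 1

# Noise part: white noise, crudely high-passed with a first difference so
# its energy sits mostly above the voiced band.
rng = np.random.default_rng(0)
noise = np.diff(rng.standard_normal(t.size + 1)) * 0.05

frame = harmonic + noise         # the HNM frame: harmonic part + noise part
```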
153

Melizmų sintezė dirbtinių neuronų tinklais / Melisma synthesis using artificial neural networks

Leonavičius, Romas January 2006
Modern speech synthesis methods are not suitable for restoring song signals, because the resulting sounds lack vitality and intonation. The aim of the presented work is to synthesize the melismas found in Lithuanian folk songs by applying Artificial Neural Networks. An analytical survey of the extensive literature is presented, followed by a first classification and comprehensive discussion of melismas. The theory of dynamic systems, which forms the basis for studying melismas, is presented, and the relationship for modelling a melisma with nonlinear dynamic systems is outlined. The most widely used Linear Predictive Coding (LPC) method is investigated together with possibilities for its improvement, and a modification of the original Linear Prediction method based on dynamic LPC frame positioning is proposed. On this basis, a new melisma synthesis technique is presented. A flexible generalized melisma model is developed, based on two Artificial Neural Networks – a Multilayer Perceptron and an Adaline – and on two network training algorithms – Levenberg-Marquardt and Least Squares error minimization. Moreover, original mathematical models of the Fortis, Gruppett, Mordent and Trill melisma types are created, fit for synthesizing melismas, and their minimal sizes are proposed. The last chapter concerns an experimental investigation using over 500 melisma records and corroborates the application of the new mathematical models to melisma synthesis of one [ ...].
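The abstract builds on frame-based LPC analysis and resynthesis. A minimal sketch of that baseline follows, assuming the standard autocorrelation method; the frame, order and excitation are toy values, and the thesis's dynamic frame positioning is not shown:

```python
# Frame-based LPC analysis/synthesis sketch (autocorrelation method).
import numpy as np

def lpc(frame, order=10):
    """LPC coefficients a[1..order] from the autocorrelation normal equations."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    R += 1e-6 * r[0] * np.eye(order)          # tiny loading for numerical safety
    return np.linalg.solve(R, r[1:order + 1])

def synth(a, excitation):
    """All-pole resynthesis: s[n] = e[n] + sum_k a[k] * s[n-1-k]."""
    s = np.zeros(len(excitation))
    for n in range(len(excitation)):
        s[n] = excitation[n] + sum(a[k] * s[n - 1 - k]
                                   for k in range(len(a)) if n - 1 - k >= 0)
    return s

fs, f0, N = 16000, 100, 480
n = np.arange(N)
frame = np.sin(2 * np.pi * 400 * n / fs) * np.hanning(N)   # toy "voiced" frame
a = lpc(frame)
excitation = np.zeros(N)
excitation[::fs // f0] = 1.0                  # pitch-synchronous impulse train
resynth = synth(a, excitation)                # frame resynthesized from pulses
```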
154

Lietuviškų fonemų dinaminių modelių analizė ir sintezė / Analysis and synthesis of Lithuanian phoneme dynamic sound models

Pyž, Gražina 25 November 2013
Speech is the most natural way of human communication. The text-to-speech (TTS) problem arises in various applications: reading email aloud, reading text from e-books aloud, and services for people with speech disorders. Constructing a speech synthesizer is a very complex task, and researchers are trying to automate speech synthesis. In order to solve the problem of Lithuanian speech synthesis, it is necessary to develop mathematical models for Lithuanian speech sounds. The research object of the dissertation is dynamic models of Lithuanian vowel and semivowel phonemes. The proposed vowel and semivowel phoneme models can be used for developing a TTS formant synthesizer. A modelling framework for Lithuanian vowels and semivowels is proposed, based on a mathematical phoneme model and an automatic procedure for estimating the phoneme's fundamental frequency and determining its inputs. Within this framework, the phoneme signal is described as the output of a linear multiple-input single-output (MISO) system; the MISO system is a parallel connection of single-input single-output (SISO) systems whose input impulse amplitudes vary in time. Two synthesis methods are proposed within the framework: harmonic and formant. Simulation has revealed that the proposed framework gives sufficiently good vowel and semivowel synthesis quality.
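As a hedged sketch of the MISO structure described above (parallel SISO branches driven by inputs whose amplitudes vary in time), the following Python example sums second-order resonators, one per formant, excited by an amplitude-modulated impulse train; the formant frequencies and bandwidths are generic vowel-like values, not the dissertation's estimates:

```python
# MISO as a parallel connection of SISO systems: each branch is a
# second-order IIR resonator (one formant) with unit gain at DC.
import numpy as np

def resonator(x, fc, bw, fs):
    """One SISO branch: second-order resonator at centre fc, bandwidth bw."""
    r = np.exp(-np.pi * bw / fs)
    b = 2 * r * np.cos(2 * np.pi * fc / fs)
    c = -r * r
    g = 1.0 - b - c                            # normalises gain at DC to 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = g * x[n]
        if n >= 1:
            y[n] += b * y[n - 1]
        if n >= 2:
            y[n] += c * y[n - 2]
    return y

fs, f0 = 16000, 110
N = int(0.3 * fs)
x = np.zeros(N)
x[::fs // f0] = 1.0                            # pitch-synchronous impulses
x *= np.linspace(0.2, 1.0, N)                  # input amplitude varying in time

formants = [(660, 80), (1700, 100), (2400, 120)]   # rough /a/-like F1-F3
y = sum(resonator(x, fc, bw, fs) for fc, bw in formants)  # MISO output
```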
157

Automatic speech segmentation with limited data / by D.R. van Niekerk

Van Niekerk, Daniel Rudolph January 2009
The rapid development of corpus-based speech systems, such as concatenative synthesis systems, for under-resourced languages requires an efficient, consistent and accurate solution to phonetic speech segmentation. Manual development of phonetically annotated corpora is a time-consuming and expensive process which suffers from challenges regarding consistency and reproducibility, while automation of this process has only been satisfactorily demonstrated on large corpora of a select few languages, employing techniques that require extensive and specialised resources. In this work we considered the problem of phonetic segmentation in the context of developing small prototypical speech synthesis corpora for new under-resourced languages. This was done through an empirical evaluation of existing segmentation techniques on typical speech corpora in three South African languages. In this process, the performance of these techniques was characterised under different data conditions, and the efficient application of these techniques was investigated in order to improve the accuracy of the resulting phonetic alignments. We found that the application of baseline speaker-specific Hidden Markov Models results in relatively robust and accurate alignments even under extremely limited data conditions, and we demonstrated how such models can be developed and applied efficiently in this context. The result is segmentation of sufficient quality for synthesis applications, with the quality of alignments comparable to manual segmentation efforts in this context. Finally, possibilities for further automated refinement of phonetic alignments were investigated and an efficient corpus development strategy was proposed, with suggestions for further work in this direction. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2009.
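The dynamic-programming core of HMM forced alignment, which underlies the segmentation approach evaluated above, can be sketched as follows; the per-frame phone log-likelihoods are random stand-ins for the scores a trained acoustic model would provide:

```python
# Forced segmentation reduced to its DP core: given the known phone sequence
# and loglik[p, t] (log-likelihood of phone p at frame t), find the best
# monotonic assignment of frames to phones.
import numpy as np

def force_align(loglik):
    """Returns the frame index at which each phone starts."""
    P, T = loglik.shape
    NEG = -np.inf
    score = np.full((P, T), NEG)
    prev = np.zeros((P, T), dtype=int)     # 0 = stayed in phone, 1 = entered
    score[0, 0] = loglik[0, 0]
    for t in range(1, T):
        for p in range(P):
            stay = score[p, t - 1]
            enter = score[p - 1, t - 1] if p > 0 else NEG
            if enter > stay:
                score[p, t], prev[p, t] = enter + loglik[p, t], 1
            else:
                score[p, t], prev[p, t] = stay + loglik[p, t], 0
    starts, p = [0] * P, P - 1             # trace phone boundaries back
    for t in range(T - 1, 0, -1):
        if prev[p, t] == 1:
            starts[p] = t
            p -= 1
    return starts

rng = np.random.default_rng(1)
ll = rng.normal(size=(3, 40))              # 3 phones, 40 frames (toy scores)
print(force_align(ll))                     # start frame of each phone
```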
158

Probabilistic modeling of neural data for analysis and synthesis of speech

Matthews, Brett Alexander 13 August 2012
This research consists of probabilistic modeling of speech audio signals and deep-brain neurological signals in brain-computer interfaces. A significant portion of this research consists of a collaborative effort with Neural Signals Inc., Duluth, GA, and Boston University to develop an intracortical neural prosthetic system for speech restoration in a human subject living with Locked-In Syndrome, i.e., he is paralyzed and unable to speak. The work is carried out in three major phases. We first use kernel-based classifiers to detect evidence of articulation gestures and phonological attributes in speech audio signals. We demonstrate that articulatory information can be used to decode speech content in speech audio signals. In the second phase of the research, we use neurological signals collected from a human subject with Locked-In Syndrome to predict intended speech content. The neural data were collected with a microwire electrode surgically implanted in the speech motor cortex of the subject's brain, with the implant location chosen to capture extracellular electric potentials related to speech motor activity. The data include extracellular traces and firing occurrence times for neural clusters in the vicinity of the electrode identified by an expert. We compute continuous firing rate estimates for the ensemble of neural clusters using several rate estimation methods and apply statistical classifiers to the rate estimates to predict intended speech content. We use Gaussian mixture models to classify short frames of data into 5 vowel classes and to discriminate intended speech activity in the data from non-speech. We then perform a series of data collection experiments with the subject designed to test explicitly for several speech articulation gestures, and decode the data offline. Finally, in the third phase of the research we develop an original probabilistic method for the task of spike-sorting in intracortical brain-computer interfaces, i.e., identifying and distinguishing action potential waveforms in extracellular traces. Our method uses both action potential waveforms and their occurrence times to cluster the data. We apply the method to semi-artificial data and partially labeled real data. We then classify neural spike waveforms, modeled with single multivariate Gaussians, using the method of minimum classification error for parameter estimation. Finally, we apply our joint waveforms and occurrence times spike-sorting method to neurological data in the context of a neural prosthesis for speech.
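Two of the steps described above lend themselves to a short sketch: kernel-based firing-rate estimation from spike occurrence times, and classification with one multivariate Gaussian per class (the abstract's mixture models reduced to a single component). All data and parameters below are invented:

```python
import numpy as np

def firing_rate(spike_times, t_grid, sigma=0.05):
    """Continuous rate estimate (spikes/s): Gaussian kernel on spike times."""
    d = np.asarray(t_grid)[:, None] - np.asarray(spike_times)[None, :]
    k = np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return k.sum(axis=1)

class GaussianClassifier:
    """One full-covariance Gaussian per class, maximum-likelihood decision."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.params = {c: (X[y == c].mean(axis=0),
                           np.cov(X[y == c].T) + 1e-6 * np.eye(X.shape[1]))
                       for c in self.classes}
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            mu, S = self.params[c]
            d = X - mu
            maha = np.einsum("ij,jk,ik->i", d, np.linalg.inv(S), d)
            scores.append(-0.5 * (maha + np.linalg.slogdet(S)[1]))
        return self.classes[np.argmax(scores, axis=0)]

t = np.linspace(0.0, 1.0, 200)
rate = firing_rate([0.10, 0.12, 0.50, 0.52, 0.55], t)   # toy spike train

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.repeat([0, 1], 30)                                # two toy "classes"
pred = GaussianClassifier().fit(X, y).predict(X)
```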
160

Phonetische Transkription für ein multilinguales Sprachsynthesesystem / Phonetic transcription for a multilingual speech synthesis system

Hain, Horst-Udo 06 February 2012
This thesis deals with a data-driven method for grapheme-to-phoneme conversion in a speech synthesis system. The task is to determine the pronunciation of arbitrary words, including words not contained in the system's lexicon. The architecture itself is language-independent; only the knowledge bases loaded at run time depend on the language. Creating knowledge bases for further languages should be possible largely automatically and without expert knowledge; expert knowledge may be used to improve the results, but must not be a prerequisite. Two neural networks determine the transcription. The first network generates the phonemes to be realised, including syllable boundaries, from the word's letter sequence; the second then determines the position of the word stress. This separation has the advantage that knowledge of the complete phoneme sequence can be used when determining the word accent. Methods that determine the transcription in a single step face the problem of having to decide on the accent at the beginning of the word, although the word's pronunciation is not yet fixed. The separation also makes it possible to train two networks tailored to their respective tasks. A particular feature of the neural networks used here is a scaling layer introduced between the actual input and the hidden layer. Input and scaling layer are connected by a diagonal matrix, and weight decay is applied to the weights of this connection. This yields a weighting of the input information during training: input nodes with high information content are amplified, while less informative nodes are attenuated, possibly to the point of being cut off entirely. The purpose of this connection is to reduce the influence of noise in the training data; by suppressing unimportant input values, the network can concentrate on the important data, which speeds up training and improves the results. Combined with stepwise pruning of the weights, disturbing or unimportant connections within the network architecture are also removed, which further increases the generalisation ability.
The preparation of the lexica for generating the training patterns for the neural networks is likewise carried out automatically. Dynamic time warping (DTW) is used to find the optimal path in a plane spanned on one axis by the letters of the word and on the other by the phoneme sequence, yielding an assignment of phonemes to letters; the training patterns for the networks are generated from these assignments. To improve the transcription results further, a hybrid method using both the lexica and the networks was developed. Unknown words are first decomposed into constituents from the lexicon, and the phoneme sequences of these subwords are assembled into the overall transcription, with gaps between the subwords filled in by the neural networks. This is not straightforward, however, because errors can occur at the joins between the partial transcriptions. This problem is solved with the help of the lexicon prepared for the generation of the training patterns, which contains an unambiguous assignment of each phoneme to the letters that generate it; the phonemes at the joins can thus be re-evaluated and transcription errors avoided. The print edition of this dissertation was published in 2005 by w.e.b.-Universitätsverlag Dresden (ISBN 3-937672-76-1).
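The lexicon pre-processing step, aligning each word's letters to its phonemes so that training patterns can be generated, can be approximated with a small dynamic-programming aligner. The thesis uses DTW with a weighting derived from the data; the edit-distance-style costs below (vowel letters prefer vowel phonemes, fixed penalties for silent letters and extra phonemes) are crude stand-ins:

```python
# Toy letter-to-phoneme alignment. Costs are invented; the thesis learns them.
def is_vowel_letter(ch):
    return ch.lower() in "aeiouyäöü"

def is_vowel_phone(p):
    return p[0].lower() in "aeiouy@"

def cost(letter, phone):
    return 0.0 if is_vowel_letter(letter) == is_vowel_phone(phone) else 1.0

def align(word, phones):
    """Assign each phoneme to the letter that produced it: a letter maps to
    zero or one phone; extra phones join the previous letter."""
    n, m = len(word), len(phones)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    move = [[None] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if D[i][j] == INF:
                continue
            if i < n and j < m and D[i][j] + cost(word[i], phones[j]) < D[i + 1][j + 1]:
                D[i + 1][j + 1] = D[i][j] + cost(word[i], phones[j])
                move[i + 1][j + 1] = "sub"
            if i < n and D[i][j] + 0.6 < D[i + 1][j]:            # silent letter
                D[i + 1][j] = D[i][j] + 0.6
                move[i + 1][j] = "del"
            if j < m and i > 0 and D[i][j] + 0.8 < D[i][j + 1]:  # extra phone
                D[i][j + 1] = D[i][j] + 0.8
                move[i][j + 1] = "ins"
    pairs = [[] for _ in word]                                   # trace back
    i, j = n, m
    while i > 0 or j > 0:
        if move[i][j] == "sub":
            pairs[i - 1].insert(0, phones[j - 1]); i -= 1; j -= 1
        elif move[i][j] == "del":
            i -= 1
        else:                                                    # "ins"
            pairs[i - 1].insert(0, phones[j - 1]); j -= 1
    return list(zip(word, pairs))

print(align("schule", ["S", "u:", "l", "@"]))
# -> [('s', ['S']), ('c', []), ('h', []), ('u', ['u:']), ('l', ['l']), ('e', ['@'])]
```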
