31

Untersuchungen zur Proteinsekretion in Bradyrhizobium japonicum unter besonderer Berücksichtigung des Typ III- Sekretionssystems und Charakterisierung der „metal ion-inducible autocleavage“ Effektordomäne

Zehner, Susanne 09 September 2020 (has links)
This thesis centres on the investigation of the type III secretion system (T3SS) in Bradyrhizobium japonicum USDA110. The first part describes the regulation of the T3SS genes in B. japonicum. The tts-box was characterized as a novel promoter sequence for the genes of the T3SS and of secreted proteins. Expression analyses demonstrated flavonoid-dependent activity of 34 gene regions downstream of tts-boxes. Expression of selected genes was also detected in symbiosis, both at early infection stages and in mature nodules of different host plants. The second part is devoted to the analysis of the proteins secreted by B. japonicum. More than 100 proteins were detected in culture supernatants, 68 of which could be identified by mass spectrometry. In addition, 12 proteins were identified that are secreted in a T3SS-dependent manner. In the third part, the secreted proteins NopE1 and NopE2 were examined in detail and shown to be bona fide effectors. The recombinantly produced effectors NopE1 and NopE2 were characterized biochemically. A specific self-cleavage activity in the presence of calcium was demonstrated for the domain of unknown function (DUF1521), and this activity was shown to be relevant for symbiosis. A biochemical function could thus be assigned to a protein domain of previously unknown function. In the final part, the investigations of this domain were extended to further proteins. Calcium-induced self-cleavage activity in vitro was also demonstrated for the putative effector protein VIC_001052 from V. coralliilyticus, the causative agent of coral bleaching in Pocillopora damicornis. Owing to the conserved self-cleavage function, the DUF1521 domain was renamed the metal ion-inducible autocleavage (MIIA) domain. Contents: I. Introduction: rhizobia-legume interaction; protein secretion systems. II. Summary of research results: 1. Investigation of the expression of the type III secretion system in nodules and analysis of the tts-box promoter; 2. Identification of secreted proteins of Bradyrhizobium japonicum; 3. Investigation of the secreted proteins NopE1 and NopE2; 4. Investigation of conserved MIIA domains. III. Discussion. IV. Summary. V. Bibliography. VI. Appendix. VII. Publications.
32

Evaluating Multi-UAV System with Text to Speech for Situational Awareness and Workload

Lindgren, Viktor January 2021 (has links)
With improvements to miniaturization technologies, the number of operators required per UAV has become increasingly small, at the cost of increased operator workload. Workload is an important factor to consider when designing the multi-UAV systems of tomorrow, as too much workload may decrease an operator's performance. This study proposes the use of text to speech combined with an emphasis on a single-screen design as a way of improving situational awareness and reducing perceived workload. A controlled experiment with 18 participants was conducted in a simulator, and their situational awareness and perceived workload were measured using SAGAT and NASA-TLX, respectively. The results show that the use of text to speech led to a decrease in situational awareness for all elements of the graphical user interface that were not directly handled by a text to speech event. All NASA-TLX subscales except physical demand showed an improvement in perceived workload, and overall an improvement in perceived workload was observed when text to speech was in use.
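As a rough illustration of the workload measure used above (not taken from the thesis), the raw, unweighted NASA-TLX score is simply the mean of the six subscale ratings. The sketch below computes it for hypothetical ratings from one participant with and without text to speech; the numbers are invented for illustration only.

from statistics import mean

SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings: dict[str, float]) -> float:
    """Raw (unweighted) NASA-TLX: mean of the six subscale ratings (0-100 each)."""
    missing = set(SUBSCALES) - ratings.keys()
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return mean(ratings[s] for s in SUBSCALES)

# Hypothetical ratings for one participant with and without text to speech.
with_tts = {"mental": 55, "physical": 20, "temporal": 45,
            "performance": 30, "effort": 50, "frustration": 35}
without_tts = {"mental": 70, "physical": 15, "temporal": 60,
               "performance": 45, "effort": 65, "frustration": 55}

print(raw_tlx(with_tts), raw_tlx(without_tts))  # lower score = lower perceived workload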
33

Grapheme-to-phoneme transcription of English words in Icelandic text

Ármannsson, Bjarki January 2021 (has links)
Foreign words, such as names, locations or sometimes entire phrases, are a problem for any system that is meant to convert graphemes to phonemes (g2p, i.e. converting written text into phonetic transcription). In this thesis, we investigate both rule-based and neural methods of phonetically transcribing English words found in Icelandic text, taking into account the rules and constraints of how foreign phonemes can be mapped into Icelandic phonology. We implement a rule-based system by compiling grammars into finite-state transducers. In deciding which rules to include, and in evaluating their coverage, we use a list of the most frequently found English words in a corpus of Icelandic text. The output of the rule-based system is then manually evaluated and corrected (where needed) and subsequently used as data to train a simple bidirectional LSTM g2p model. We train models both with and without length and stress labels included in the gold-annotated data. Although the scores for neither model are close to the state of the art for either Icelandic or English, both our rule-based system and our LSTM model show promising initial results and improve on the baseline of simply applying an Icelandic g2p model, rule-based or neural, to English words. We find that the greater flexibility of the LSTM model seems to give it an advantage over our rule-based system when it comes to modeling certain phenomena, most notably the relations between graphemes and phonemes for English vowel sounds. Given that little previous work exists on g2p transcription specifically handling English words within Icelandic phonological constraints, and that it remains an unsolved task, our findings provide a foundation for further research and contribute to improving g2p systems for Icelandic as a whole.
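To make the rule-based idea concrete, the following minimal sketch applies greedy longest-match grapheme-to-phoneme rules; it is a simplification of the compiled finite-state grammars described in the abstract, and the rule table and phoneme symbols are illustrative only, not the thesis's Icelandic-adapted inventory.

RULES = {          # grapheme sequence -> phoneme string (hypothetical toy rules)
    "th": "θ",
    "sh": "s j",   # illustrative mapping for a phoneme with no native counterpart
    "ee": "i",
    "a": "a",
    "e": "ɛ",
    "t": "t",
    "s": "s",
    "r": "r",
}

def g2p(word: str) -> list[str]:
    """Greedy longest-match transcription; unknown graphemes pass through unchanged."""
    phones, i = [], 0
    while i < len(word):
        for length in range(min(3, len(word) - i), 0, -1):  # try the longest rule first
            chunk = word[i:i + length]
            if chunk in RULES:
                phones.extend(RULES[chunk].split())
                i += length
                break
        else:
            phones.append(word[i])  # fall back to the grapheme itself
            i += 1
    return phones

print(g2p("street"))  # ['s', 't', 'r', 'i', 't'] under these toy rules

A real system would add stress and length labels and enforce Icelandic phonotactics, which is where the finite-state and LSTM approaches compared in the thesis come in.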
34

Homograph Disambiguation and Diacritization for Arabic Text-to-Speech Using Neural Networks / Homografdisambiguering och diakritisering för arabiska text-till-talsystem med hjälp av neurala nätverk

Lameris, Harm January 2021 (has links)
Pre-processing Arabic text for Text-to-Speech (TTS) systems poses major challenges, as Arabic omits short vowels in writing. This omission leads to a large number of homographs, and means that Arabic text needs to be diacritized to disambiguate these homographs, in order to be matched up with the intended pronunciation. Diacritizing Arabic has generally been achieved by using rule-based, statistical, or hybrid methods that combine rule-based and statistical methods. Recently, diacritization methods involving deep learning have shown promise in reducing error rates. These deep-learning methods are not yet commonly used in TTS engines, however. To examine neural diacritization methods for use in TTS engines, we normalized and pre-processed a version of the Tashkeela corpus, a large diacritized corpus containing largely Classical Arabic texts, for TTS purposes. We then trained and tested three state-of-the-art Recurrent-Neural-Network-based models on this data set. Additionally, we tested these models on the Wiki News corpus, a test set that contains Modern Standard Arabic (MSA) news articles and thus more closely resembles most TTS queries. The models were evaluated by comparing the Diacritic Error Rate (DER) and Word Error Rate (WER) achieved for each data set to one another and to the DER and WER reported in the original papers. Moreover, the per-diacritic accuracy was examined, and a manual evaluation was performed. For the Tashkeela corpus, all models achieved a lower DER and WER than reported in the original papers. This was largely the result of using more training data in addition to the TTS pre-processing steps that were performed on the data. For the Wiki News corpus, the error rates were higher, largely due to the domain gap between the data sets. We found that for both data sets the models overfit on common patterns and the most common diacritic. For the Wiki News corpus, the models struggled with named entities and loanwords. Purely neural models generally outperformed the model that combined deep learning with rule-based and statistical corrections. These findings highlight the usability of deep-learning methods for Arabic diacritization in TTS engines as well as the need for diacritized corpora that are more representative of Modern Standard Arabic.
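For readers unfamiliar with the two metrics, the sketch below computes a Diacritic Error Rate (fraction of characters whose predicted diacritic differs from the reference) and a Word Error Rate (fraction of words with at least one diacritic error) over aligned reference and hypothesis diacritizations. It is an independent illustration under a one-to-one alignment assumption, not the evaluation code used in the thesis.

def der_wer(ref: list[list[str]], hyp: list[list[str]]) -> tuple[float, float]:
    """Each word is a list of per-character diacritic labels ("" means no diacritic)."""
    char_errors = char_total = word_errors = 0
    for ref_word, hyp_word in zip(ref, hyp, strict=True):
        wrong = sum(r != h for r, h in zip(ref_word, hyp_word, strict=True))
        char_errors += wrong
        char_total += len(ref_word)
        word_errors += wrong > 0
    return char_errors / char_total, word_errors / len(ref)

# Hypothetical 2-word example: the second word has one wrong diacritic.
ref = [["a", "", "u"], ["i", "a"]]
hyp = [["a", "", "u"], ["i", "u"]]
print(der_wer(ref, hyp))  # (0.2, 0.5) -> DER 20%, WER 50%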
35

Efficacy of a 6-week Neuromuscular Training Program for Improving Postural Control in Figure Skaters

Saunders, Nathan William 20 July 2011 (has links)
No description available.
36

Towards a unified model for speech and language processing

Ploujnikov, Artem 12 1900 (has links)
This work explores deep learning methods for speech and language processing, including speech recognition, grapheme-to-phoneme conversion, speech synthesis and generative models for speech, with the aim of reframing these individual tasks as a more general problem: finding a universal representation of the information encoded in each modality, seamlessly transferring a signal from one modality to another via such universal representations, and generating samples in multiple modalities. It consists of two main research projects: 1) SoundChoice, a context-aware, sentence-level grapheme-to-phoneme model that achieves solid performance on the task and a significant improvement in phoneme disambiguation over baseline models, and 2) MAdmixture, a novel approach to learning a variety of speech representations in a common latent space.
37

Explicit Segmentation Of Speech For Indian Languages

Ranjani, H G 03 1900 (has links)
Speech segmentation is the process of identifying the boundaries between words, syllables or phones in the recorded waveforms of spoken natural languages. The lowest level of speech segmentation is the breakup and classification of the sound signal into a string of phones. The difficulty of this problem is compounded by the phenomenon of co-articulation of speech sounds. The classical solution to this problem is to manually label and segment spectrograms. In the first step of this two-step process, a trained person listens to a speech signal, recognizes the word and phone sequence, and roughly determines the position of each phonetic boundary. The second step involves examining several features of the speech signal to place a boundary mark at the point where these features best satisfy a certain set of conditions specific to that kind of phonetic boundary. Manual segmentation of speech into phones is a highly time-consuming and painstaking process. Segmented speech is required for a variety of applications, such as acoustic analysis or building speech synthesis databases for high-quality speech output systems, and the time required to carry out this process for even relatively small speech databases can rapidly accumulate to prohibitive levels. This calls for automating the segmentation process. The state-of-the-art segmentation techniques use Hidden Markov Models (HMM) for phone states. They give an average accuracy of over 95% within 20 ms of manually obtained boundaries. However, HMM-based methods require large training data for good performance. Another major disadvantage of such speech-recognition-based segmentation techniques is that they cannot handle very long utterances, which are necessary for prosody modeling in speech synthesis applications. Development of Text to Speech (TTS) systems in Indian languages has been difficult to date owing to the non-availability of sizeable segmented speech databases of good quality. Further, no prosody models exist for most of the Indian languages. Therefore, long utterances (at the paragraph level and monologues) have been recorded, as part of this work, for creating the databases. This thesis aims at automating the segmentation of very long speech sentences recorded for the application of corpus-based TTS synthesis for multiple Indian languages. In this explicit segmentation problem, we need to force-align boundaries in any utterance from its known phonetic transcription. The major disadvantage of forcing boundary alignments on the entire speech waveform of a long utterance is the accumulation of boundary errors. To overcome this, we force boundaries between 2 known phones (here, 2 successive stop consonants are chosen) at a time. The approach used is silence detection as a marker for stop consonants. This method gives around 89% accuracy (for the Hindi database) and is language-independent and training-free. These stop consonants act as anchor points for the next stage. Two methods for explicit segmentation have been proposed. Both methods rely on the accuracy of the above stop consonant detection stage. Another common stage is the recently proposed implicit method which uses a Bach scale filter bank to obtain the feature vectors. The Euclidean Distance of the Mean of the Logarithm (EDML) of these feature vectors shows peaks at the points where the spectrum changes. The method performs with an accuracy of 87% within 20 ms of manually obtained boundaries and achieves deletion and insertion rates of 3.2% and 21.4% respectively, for 100 sentences of the Hindi database. The first method is a three-stage approach. The first is the stop consonant detection stage, followed by a stage which uses Quatieri's sinusoidal model to classify sounds as voiced/unvoiced between 2 successive stop consonants. The final stage uses the EDML function of the Bach scale feature vectors to further obtain boundaries within the voiced and unvoiced regions. It gives a Frame Error Rate (FER) of 26.1% for the Hindi database. The second method uses duration statistics of the phones of the language. It again uses the EDML function of the Bach scale filter bank to obtain the peaks at the phone transitions and uses the duration statistics to assign a probability to each peak being a boundary. In this method, the FER improves to 22.8% for the Hindi database. Both methods are promising in that they give low frame error rates. Results show that the second method outperforms the first, because it incorporates knowledge of durations. For the proposed approaches to be useful, manual intervention is required at the output of each stage. However, this intervention is less tedious and reduces the time taken to segment each sentence by around 60% compared to the time taken for manual segmentation. The approaches have been successfully tested on 3 different languages, 100 sentences each: Kannada, Tamil and English (the TIMIT database was used for validating the algorithms). In conclusion, a practical solution to the segmentation problem is proposed. Being training-free, language-independent (the ES-SABSF method) and speaker-independent, the algorithm is useful in developing TTS systems for multiple languages while reducing the segmentation overhead. This method is currently being used in the lab for segmenting long Kannada utterances, spoken by reading a set of 1115 phonetically rich sentences.
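A minimal sketch of the EDML idea (assumptions only, not the thesis code): the mean log filter-bank vector is computed over a left and a right context window at each frame, and the Euclidean distance between the two means peaks where the spectrum changes. A generic log filter-bank matrix stands in for the Bach scale features, and a simple local-maximum picker stands in for the thesis's boundary-selection stages.

import numpy as np

def edml_curve(log_feats: np.ndarray, context: int = 5) -> np.ndarray:
    """log_feats: (num_frames, num_bands) matrix of log filter-bank energies."""
    num_frames = log_feats.shape[0]
    curve = np.zeros(num_frames)
    for t in range(context, num_frames - context):
        left_mean = log_feats[t - context:t].mean(axis=0)    # mean log features before t
        right_mean = log_feats[t:t + context].mean(axis=0)   # mean log features after t
        curve[t] = np.linalg.norm(right_mean - left_mean)    # peaks at spectral changes
    return curve

def pick_boundaries(curve: np.ndarray, min_gap: int = 5) -> list[int]:
    """Simple peak picking: local maxima separated by at least min_gap frames."""
    peaks, last = [], -min_gap
    for t in range(1, len(curve) - 1):
        if curve[t] > curve[t - 1] and curve[t] >= curve[t + 1] and t - last >= min_gap:
            peaks.append(t)
            last = t
    return peaks

# Hypothetical usage with random features standing in for a real utterance.
rng = np.random.default_rng(0)
feats = np.log(rng.random((200, 24)) + 1e-6)
print(pick_boundaries(edml_curve(feats))[:10])

In the thesis, such peaks are further filtered using voiced/unvoiced classification or phone duration statistics before being accepted as phone boundaries.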
38

Deep Neural Networks for Automatic Speech-To-Speech Translation of Open Educational Resources

Pérez González de Martos, Alejandro Manuel 12 July 2022 (has links)
[EN] In recent years, deep learning has fundamentally changed the landscape of a number of areas in artificial intelligence, including computer vision, natural language processing, robotics and game theory. In particular, the striking success of deep learning in a large variety of natural language processing (NLP) applications, including automatic speech recognition (ASR), machine translation (MT) and text-to-speech (TTS), has resulted in major accuracy improvements, thus widening the applicability of these technologies in real-life settings. At this point, it is clear that ASR and MT technologies can be utilized to produce cost-effective, high-quality multilingual subtitles for video contents of different kinds. This is particularly true for the transcription and translation of video lectures and other kinds of educational materials, in which the audio recording conditions are usually favorable for the ASR task and the speech is grammatically well formed. However, although state-of-the-art neural approaches to TTS have been shown to drastically improve the naturalness and quality of synthetic speech over conventional concatenative and parametric systems, it is still unclear whether this technology is mature enough to improve accessibility and engagement in online learning, particularly in the context of higher education. Furthermore, advanced topics in TTS such as cross-lingual voice cloning, incremental TTS and zero-shot speaker adaptation remain open challenges in the field. This thesis is about enhancing the performance and widening the applicability of modern neural TTS technologies in real-life settings, both in offline and streaming conditions, in the context of improving accessibility and engagement in online learning. Particular emphasis is placed on speaker adaptation and cross-lingual voice cloning, as the input text corresponds to a translated utterance in this context. / Pérez González De Martos, AM. (2022). Deep Neural Networks for Automatic Speech-To-Speech Translation of Open Educational Resources [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/184019
