11

実走行車内音声対話コーパスの設計と特徴 / Design and Characteristics of an In-Car Spoken Dialogue Corpus Recorded During Real Driving

Kawaguchi, Nobuo, Matsubara, Shigeki, Wakamatsu, Yoshihiro, Kajita, Masashi, Takeda, Kazuya, Itakura, Fumitada, Inagaki, Yasuyoshi (has links)
No description available.
12

Automatic speech segmentation with limited data / by D.R. van Niekerk

Van Niekerk, Daniel Rudolph January 2009 (has links)
The rapid development of corpus-based speech systems, such as concatenative synthesis systems for under-resourced languages, requires an efficient, consistent and accurate solution to phonetic speech segmentation. Manual development of phonetically annotated corpora is a time-consuming and expensive process that suffers from challenges regarding consistency and reproducibility, while automation of this process has only been satisfactorily demonstrated on large corpora in a select few languages, using techniques that require extensive and specialised resources. In this work we considered the problem of phonetic segmentation in the context of developing small prototypical speech synthesis corpora for new under-resourced languages, through an empirical evaluation of existing segmentation techniques on typical speech corpora in three South African languages. In this process, the performance of these techniques was characterised under different data conditions, and their efficient application was investigated in order to improve the accuracy of the resulting phonetic alignments. We found that baseline speaker-specific Hidden Markov Models produce relatively robust and accurate alignments even under extremely limited data conditions, and demonstrated how such models can be developed and applied efficiently in this context. The result is segmentation of sufficient quality for synthesis applications, with alignments comparable to manual segmentation efforts. Finally, possibilities for further automated refinement of phonetic alignments were investigated, and an efficient corpus development strategy was proposed, with suggestions for further work in this direction. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2009.
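As a rough illustration of the alignment technique the abstract describes, the sketch below force-aligns a known phone sequence to acoustic feature frames with a Viterbi pass over a strict left-to-right chain of single-Gaussian phone models. The function name, model format and transition probability are illustrative assumptions, a deliberate simplification of the 3-state speaker-specific HMMs typically trained for this task, not code from the thesis.

```python
import numpy as np
from scipy.stats import multivariate_normal

def forced_align(frames, phone_seq, models, self_loop_logp=np.log(0.9)):
    """Viterbi forced alignment of a known phone sequence to feature frames.

    frames:    (T, D) array of acoustic features (e.g. MFCCs).
    phone_seq: list of phone labels, e.g. ["sil", "h", "e", "l", "ou", "sil"].
    models:    dict mapping phone label -> (mean, cov) of a single-Gaussian
               emission density (a 1-state-per-phone simplification of the
               3-state models typically trained for this task).
    Returns a list of (phone, start_frame, end_frame) segments.
    """
    T, S = len(frames), len(phone_seq)
    assert T >= S, "need at least one frame per phone"
    adv_logp = np.log1p(-np.exp(self_loop_logp))  # log(1 - P(self-loop))

    # Emission log-likelihood of every frame under every phone's Gaussian.
    loglik = np.stack([multivariate_normal.logpdf(frames, *models[p])
                       for p in phone_seq], axis=1)            # (T, S)

    # Forward pass over a left-to-right chain: stay in a phone or advance by one.
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0, 0] = loglik[0, 0]                                 # start in phone 0
    for t in range(1, T):
        for s in range(S):
            stay = delta[t - 1, s] + self_loop_logp
            move = delta[t - 1, s - 1] + adv_logp if s > 0 else -np.inf
            delta[t, s] = max(stay, move) + loglik[t, s]
            back[t, s] = s if stay >= move else s - 1

    # Backtrack from the final phone to recover the state sequence.
    states = np.empty(T, dtype=int)
    states[-1] = S - 1                                         # end in last phone
    for t in range(T - 1, 0, -1):
        states[t - 1] = back[t, states[t]]

    # Collapse runs of identical states into (phone, start, end) segments.
    segments, start = [], 0
    for t in range(1, T + 1):
        if t == T or states[t] != states[t - 1]:
            segments.append((phone_seq[states[t - 1]], start, t))
            start = t
    return segments
```

Dividing the returned frame indices by the frame rate yields time stamps; in practice the Gaussian parameters would be estimated from the target speaker's own recordings, which is what makes the speaker-specific approach viable with very little audio.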
13

Synthèse de parole expressive au delà du niveau de la phrase : le cas du conte pour enfant : conception et analyse de corpus de contes pour la synthèse de parole expressive / Expressive speech synthesis beyond the level of the sentence : the children tale usecase : tale corpora design and analysis for expressive speech synthesis

Doukhan, David 20 September 2013 (has links)
The aim of this thesis is to propose methods for improving the expressiveness of speech synthesis systems. One of the central proposals of this work is to define, use and measure the impact of linguistic structures operating beyond the sentence level, as opposed to approaches that operate on sentences isolated from their context. The scope of the study is restricted to the reading of children's tales. Tales are distinctive in that they have been the subject of a number of studies aimed at identifying their narrative structure, and in that they involve a number of stereotypical characters (hero, villain, fairy) whose speech is often reported. These particular characteristics are exploited to model the prosodic properties of tales beyond the sentence level. The oral transmission of tales has often been associated with musical practice (song, instruments), and their reading remains associated with very rich melodic properties whose reproduction is still a challenge for modern speech synthesizers. To address these issues, a first corpus of written tales was collected and annotated with information on the narrative structure of the tales, the identification and attribution of direct quotations, and references to character mentions, named entities and extended enumerations. The annotated corpus is described in terms of coverage and inter-annotator agreement, and is used to build systems for segmenting tales into episodes and for detecting direct quotations, dialogue acts and communication modes. A second corpus, of tales read by a professional speaker, is then presented. The speech is aligned with the lexical and phonetic transcriptions, with the annotations of the text corpus, and with meta-information describing the characteristics of the characters appearing in the tale. The relations between the linguistic annotations and the prosodic properties observed in the speech corpus are described and modelled. Finally, a prototype controlling the expressive parameters of the Acapela unit-selection synthesizer was built. The prototype generates prosodic instructions operating beyond the sentence level, in particular by using information on the structure of the tale and on the distinction between direct and reported speech. The prototype was validated in a perceptual experiment, which showed a significant improvement in the quality of the synthesis.
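As a rough illustration of one of the annotation subtasks described above, the toy sketch below detects direct quotations and attributes each to a nearby character mention with simple rules. The verb list, context window and attribution heuristic are illustrative assumptions, far simpler than the statistical models developed in the thesis.

```python
import re

# Illustrative assumptions: a tiny speech-verb list and a fixed context window.
SPEECH_VERBS = r"(said|cried|asked|replied|whispered|shouted)"
QUOTE = re.compile(r'["“]([^"”]+)["”]')

def detect_quotes(text, characters):
    """Return (quotation, speaker_or_None) pairs found in a tale paragraph."""
    results = []
    for m in QUOTE.finditer(text):
        # Inspect a small context window for "<verb> <character>" or
        # "<character> <verb>" patterns near the quotation.
        window = text[max(0, m.start() - 60):m.end() + 60]
        speaker = None
        for ch in characters:
            if re.search(rf"{SPEECH_VERBS}\s+(the\s+)?{ch}", window, re.I) or \
               re.search(rf"{ch}\s+{SPEECH_VERBS}", window, re.I):
                speaker = ch
                break
        results.append((m.group(1), speaker))
    return results

print(detect_quotes('"Where are you going?" asked the wolf.', ["wolf", "girl"]))
# -> [('Where are you going?', 'wolf')]
```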
14

Effective Speech Features for Cognitive Load Assessment: Classification and Regression

Herms, Robert 03 June 2019 (has links)
This thesis is about the effectiveness of speech features for cognitive load assessment, with particular attention paid to new perspectives in this research area. A new cognitive load database, called CoLoSS, is introduced, containing speech recordings of users who performed a learning task. Various acoustic features from different categories, including prosody, voice quality and spectrum, are investigated in terms of their relevance. Moreover, Teager energy parameters, which have proven highly successful in stress detection, are introduced for cognitive load assessment, and it is demonstrated how automatic speech recognition technology can be used to extract potential indicators. The suitability of the extracted features is systematically evaluated in recognition experiments with speaker-independent systems designed to discriminate between three levels of load. Additionally, a novel approach to speech-based cognitive load modelling is introduced, whereby the load is represented as a continuous quantity and its prediction can thus be regarded as a regression problem.
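The regression view described in the abstract can be illustrated with a short sketch: utterance-level prosodic and spectral features feeding a support-vector regressor. The feature set here is a small stand-in for the thesis's prosody, voice quality, spectral and Teager energy features, and the file names and load scores are placeholders, not CoLoSS data.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def prosodic_spectral_features(wav_path):
    """Illustrative utterance-level features: pitch and energy statistics
    (prosody) plus MFCC means (spectrum)."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]                    # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    pitch_stats = [f0.mean(), f0.std()] if f0.size else [0.0, 0.0]
    return np.hstack([pitch_stats, rms.mean(), rms.std(), mfcc.mean(axis=1)])

# Cognitive load as a continuous target rather than three discrete levels
# (file names and scores below are hypothetical placeholders).
train_files = ["utt01.wav", "utt02.wav", "utt03.wav"]
train_load = np.array([0.2, 0.5, 0.9])

X = np.vstack([prosodic_spectral_features(f) for f in train_files])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X, train_load)
print(model.predict(prosodic_spectral_features("utt04.wav").reshape(1, -1)))
```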
15

Automatic Speech Recognition and Machine Translation with Deep Neural Networks for Open Educational Resources, Parliamentary Contents and Broadcast Media

Garcés Díaz-Munío, Gonzalo Vicente 25 November 2024 (has links)
In the last decade, automatic speech recognition (ASR) and machine translation (MT) have improved enormously through the use of constantly evolving deep neural network (DNN) models. If at the beginning of the 2010s the pre-DNN ASR and MT systems of the time could already tackle some real-life applications successfully, such as offline video lecture transcription and translation, now in the 2020s much more challenging applications are within grasp, such as live broadcast media subtitling.
At the same time in this period, media accessibility for everyone, including deaf and hard-of-hearing people, is being given more and more importance. ASR and MT, in their current state, are powerful tools to increase the coverage of accessibility measures such as subtitles, transcriptions and translations, also as a way of providing multilingual access to all types of content. In this PhD thesis, we present research results on automatic speech recognition and machine translation based on deep neural networks in three very active domains: open educational resources, parliamentary contents and broadcast media. Regarding open educational resources (OER), we first present work on the evaluation and post-editing of ASR and MT with intelligent interaction approaches, as carried out in the framework of EU project transLectures: Transcription and Translation of Video Lectures. The results obtained confirm that the intelligent interaction approach can make post-editing automatic transcriptions and translations even more cost-effective. Then, in the context of subsequent EU project X5gon, we present research on developing DNN-based neural MT systems, and making the most of larger MT corpora through automatic data filtering. This work resulted in a first-rank classification in an international evaluation campaign on MT, and we show how these new NMT systems improved the quality of multilingual subtitles in real OER scenarios. In the also growing domain of language technologies for parliamentary contents, we describe research on speech data curation techniques for streaming ASR in the context of European Parliament debates. This research resulted in the release of Europarl-ASR, a new, large speech corpus for streaming ASR system training and evaluation, as well as for the benchmarking of speech data curation techniques. Finally, we present work in a domain on the edge of the state of the art for ASR and MT: the live subtitling of broadcast media, in the context of the 2020-2023 R&D collaboration agreement between the Valencian public broadcaster À Punt and the Universitat Politècnica de València for real-time computer assisted subtitling of media contents. This research has resulted in the deployment of high-quality, low-latency, real-time streaming ASR systems for a less-spoken language (Catalan) and a widely spoken language (Spanish) in a real broadcast use case. / The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755 (transLectures), Competitiveness and Innovation Framework Programme (CIP) under grant agreement no. 621030 (EMMA), Horizon 2020 research and innovation programme under grant agreements no. 761758 (X5gon) and no. 952215 (TAILOR), and EU4Health Programme 2021–2027 as part of Europe’s Beating Cancer Plan under grant agreements no. 101056995 (INTERACT-EUROPE) and no. 101129375 (INTERACT-EUROPE 100); from the Government of Spain’s research projects iTrans2 (ref. TIN2009-14511, MICINN/ERDF EU), MORE (ref. TIN2015-68326-R,MINECO/ERDF EU), Multisub (ref. RTI2018-094879-B-I00, MCIN/AEI/10.13039/501100011033 ERDF “A way of making Europe”), and XLinDub (ref. 
PID2021-122443OB-I00, MCIN/AEI/10.13039/501100011033 ERDF “A way of making Europe”); from the Generalitat Valenciana’s “R&D collaboration agreement between the Corporació Valenciana de Mitjans de Comunicació (À Punt Mèdia) and the Universitat Politècnica de València (UPV) for real-time computer assisted subtitling of audiovisual contents based on artificial intelligence”, and research project Classroom Activity Recognition (PROMETEO/2019/111); and from the Universitat Politècnica de València’s PAID-01-17 R&D support programme. This work uses data from the RTVE 2018 and 2020 Databases. This set of data has been provided by RTVE Corporation to help develop Spanish-language speech technologies. / Garcés Díaz-Munío, GV. (2024). Automatic Speech Recognition and Machine Translation with Deep Neural Networks for Open Educational Resources, Parliamentary Contents and Broadcast Media [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/212454
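The automatic data filtering mentioned for the X5gon work can be illustrated with a minimal rule-based sketch over (source, target) sentence pairs. The thresholds and rules below are illustrative assumptions; real filtering pipelines typically add language identification and model-based sentence-pair scoring on top of such heuristics.

```python
# Minimal sketch of rule-based parallel-corpus filtering for MT training.
def clean_parallel_corpus(pairs, max_len=100, max_ratio=2.5):
    """Yield (source, target) sentence pairs that pass simple quality rules."""
    seen = set()
    for src, tgt in pairs:
        s, t = src.strip(), tgt.strip()
        ns, nt = len(s.split()), len(t.split())
        if not s or not t:                 # drop pairs with an empty side
            continue
        if s == t:                         # drop untranslated copies
            continue
        if ns > max_len or nt > max_len:   # drop overlong sentences
            continue
        if max(ns, nt) / max(min(ns, nt), 1) > max_ratio:  # length mismatch
            continue
        if (s, t) in seen:                 # drop exact duplicate pairs
            continue
        seen.add((s, t))
        yield s, t

pairs = [("Hello world .", "Hola mundo ."),
         ("Hello world .", "Hello world ."),  # copy: removed
         ("Hi", "Una frase mucho más larga de lo esperado aquí mismo")]  # ratio: removed
print(list(clean_parallel_corpus(pairs)))  # [('Hello world .', 'Hola mundo .')]
```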
