1. Synthesis and evaluation of conversational characteristics in speech synthesis. Andersson, Johan Sebastian, January 2013
Conventional synthetic voices can synthesise neutral read-aloud speech well, but to make synthetic speech suitable for a wider range of applications, the voices need to express more than just the word identity. We need to develop voices that can take part in a conversation and express, for example, agreement, disagreement, and hesitation in a natural and believable manner. Two frameworks currently dominate speech synthesis: unit selection and HMM-based synthesis. Both utilise recordings of human speech to build synthetic voices, yet, even though the content of the recordings determines the segmental and prosodic phenomena that can be synthesised, surprisingly little research has been done on utilising the corpus to extend the limited behaviour of conventional synthetic voices. In this thesis we show how natural-sounding conversational characteristics can be added to both unit selection and HMM-based synthetic voices by adding speech from a spontaneous conversation to the voices. We recorded a spontaneous conversation and, by manually transcribing and selecting utterances, obtained approximately two thousand utterances from it. These conversational utterances were rich in conversational speech phenomena, but they lacked the general coverage that allows unit selection and HMM-based synthesis techniques to produce high-quality speech. We therefore investigated a number of blending approaches, in which the conversational utterances were augmented with conventional read-aloud speech. The synthetic voices that contained conversational speech were contrasted with conventional voices without it. Perceptual evaluations showed that the conversational voices were generally perceived by listeners as having a more conversational style than the conventional voices, largely because of their ability to synthesise utterances containing conversational speech phenomena in a more natural manner. Additionally, we conducted an experiment showing that natural-sounding conversational characteristics in synthetic speech can convey pragmatic information to a listener, in our case an impression of certainty or uncertainty about a topic. The conclusion is that the limited behaviour of conventional synthetic voices can be enriched by utilising conversational speech in both unit selection and HMM-based speech synthesis.
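As a rough illustration of the blending idea (not Andersson's actual implementation), the sketch below shows one way a unit selection target cost could prefer units from the conversational sub-corpus when a conversational rendering is requested, while still allowing read-aloud units for coverage; the Unit record, the style penalty, and its value are invented for the example.

# One possible way a blended voice could bias unit selection towards
# conversational units while keeping read-aloud units available for coverage.
# The Unit record, feature names and penalty value are illustrative only.
from dataclasses import dataclass

@dataclass
class Unit:
    phone: str              # phone identity of the candidate unit
    style: str              # "conversational" or "read" (source sub-corpus)
    target_distance: float  # mismatch against the linguistic/prosodic target

def blended_target_cost(unit: Unit, want_conversational: bool,
                        style_penalty: float = 0.5) -> float:
    """Target cost = feature mismatch plus a penalty when the unit's source
    sub-corpus does not match the requested speaking style."""
    wanted = "conversational" if want_conversational else "read"
    return unit.target_distance + (style_penalty if unit.style != wanted else 0.0)

# A read-aloud unit competing against a conversational one for a
# conversational rendering of the same phone.
print(blended_target_cost(Unit("ax", "read", 0.2), True))            # 0.7
print(blended_target_cost(Unit("ax", "conversational", 0.3), True))  # 0.3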

2. Overcoming the limitations of statistical parametric speech synthesis. Merritt, Thomas, January 2017
At the time this thesis was begun, statistical parametric speech synthesis (SPSS) using hidden Markov models (HMMs) was the dominant synthesis paradigm within the research community. SPSS systems are effective at generalising across the linguistic contexts present in the training data to account for the inevitable unseen linguistic contexts at synthesis time, making these systems flexible and their performance stable. However, HMM synthesis suffers from a 'ceiling effect' in the naturalness achieved, meaning that, despite great progress, the speech output is rarely confused with natural speech. The literature offers many hypotheses about the causes of reduced synthesis quality in HMM speech synthesis, and about the improvements required, but until this thesis these hypothesised causes were rarely tested. This thesis makes two types of contribution to the field of speech synthesis, each in a separate part of the thesis. Part I introduces a methodology for testing hypothesised causes of limited quality within HMM speech synthesis systems, with the aim of identifying what makes these systems fall short of natural speech. Part II uses the findings from Part I to make informed improvements to speech synthesis. The usual approach to improving synthesis systems is to attribute reduced quality to a hypothesised cause and then build a new system that removes it; this is typically done without first verifying the hypothesised cause, so even if quality improves, it is unknown whether a real underlying issue has been fixed or merely a minor one. In contrast, Part I performs a wide range of perceptual tests to discover the real underlying causes of reduced quality in HMM synthesis and the extent to which each contributes. Using this knowledge, Part II investigates two well-motivated improvements to standard HMM synthesis. The first follows from the identification of averaging across differing linguistic contexts, a practice typically performed during decision-tree regression in HMM synthesis, as a major contributor to reduced quality; a system is therefore investigated that removes averaging across differing linguistic contexts and instead averages only across matching linguistic contexts (rich-context synthesis). The second follows from the finding that the parametrisation (i.e., vocoding) of speech, standard practice in SPSS, introduces a noticeable drop in quality before any modelling is performed; the hybrid synthesis paradigm is therefore investigated, in which SPSS informs the selection of units in a unit selection system, removing the effect of vocoding. Both improvements are found to yield significant gains in synthesis quality, demonstrating the benefit of the style of perceptual testing conducted in this thesis.
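To make the rich-context idea concrete, here is a small, hypothetical numpy sketch (not Merritt's system) contrasting the conventional pooling of parameters across all contexts that share a decision-tree leaf with averaging restricted to exactly matching full-context labels; the context labels and parameter vectors are invented.

# Contrast between conventional leaf-level averaging and averaging only over
# exactly matching linguistic contexts (the rich-context idea). The context
# labels and parameter vectors are invented for the example.
import numpy as np

# Spectral parameter vectors for training units, keyed by full-context label.
frames = {
    "sil-a+b/stressed":   np.array([1.0, 2.0]),
    "sil-a+b/unstressed": np.array([3.0, 6.0]),
    "t-a+b/stressed":     np.array([1.2, 2.1]),
}

# Conventional clustering: one decision-tree leaf may pool all of these
# contexts, so synthesis draws on a single averaged distribution.
leaf_mean = np.mean(list(frames.values()), axis=0)

# Rich-context synthesis: keep a model per full context and average only over
# training material whose context matches the target exactly.
def rich_context_mean(target_context: str) -> np.ndarray:
    matching = [v for k, v in frames.items() if k == target_context]
    return np.mean(matching, axis=0)

print("pooled leaf mean: ", leaf_mean)                               # ~[1.73 3.37]
print("rich-context mean:", rich_context_mean("sil-a+b/stressed"))   # [1. 2.]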

3. Join cost for unit selection speech synthesis. Vepa, Jithendra, January 2004
Undoubtedly, state-of-the-art unit-selection-based concatenative speech systems produce very high-quality synthetic speech. This is due to a large speech database containing many instances of each speech unit, with a varied and natural distribution of prosodic and spectral characteristics. The join cost, which measures how well two units can be joined together, is one of the main criteria for selecting appropriate units from this large database. The ideal join cost measures perceived discontinuity, based on easily measurable spectral properties of the units being joined, in order to ensure smooth and natural-sounding synthetic speech. In the first part of my research, I investigated various spectrally based distance measures for use in computing the join cost by designing a perceptual listening experiment. A variation on the usual perceptual test paradigm is proposed in this thesis by deliberately including a wide range of join qualities in polysyllabic words. The test stimuli are obtained using a state-of-the-art unit selection text-to-speech system, rVoice from Rhetorical Systems Ltd. Three spectral features (Mel-frequency cepstral coefficients (MFCCs), line spectral frequencies (LSFs) and multiple centroid analysis (MCA) parameters) and various statistical distances (Euclidean, Kullback-Leibler, Mahalanobis) are used to obtain distance measures. Based on the correlations between perceptual scores and these spectral distances, I proposed new spectral distance measures that correlate well with human perception of concatenation discontinuities. The second part of my research concentrates on combining the join cost computation with the smoothing operation required to disguise joins, by learning an underlying representation from the acoustic signal. For this task I chose linear dynamic models (LDMs), sometimes known as Kalman filters. Three different initialisation schemes are used prior to expectation-maximisation (EM) in LDM training. Once the models are trained, the join cost is computed from the error between model predictions and actual observations, and analytical measures are derived from the shape of this error plot. These measures and initialisation schemes are compared by computing correlations against the perceptual data. The LDMs can also smooth the observations, which are then used to synthesise speech. To evaluate the LDM smoothing operation, another listening test is performed in which it is compared with a standard method (simple linear interpolation). In the third part of my research, I compared the best three join cost functions from the first two parts subjectively, using a listening test; in this test I also evaluated different smoothing methods: no smoothing, linear smoothing, and smoothing achieved using LDMs.
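As an illustration of the kind of spectral join cost discussed above, the following sketch computes Euclidean and Mahalanobis distances between the MFCC frames on either side of a candidate join; the toy 4-dimensional vectors and identity covariance are placeholders, not values from the thesis.

# Minimal sketch of a spectral join cost: distance between the MFCC frame at
# the end of the left candidate unit and the frame at the start of the right
# one. Frame values and covariance are illustrative only.
import numpy as np

def euclidean_join_cost(left_last_mfcc: np.ndarray,
                        right_first_mfcc: np.ndarray) -> float:
    return float(np.linalg.norm(left_last_mfcc - right_first_mfcc))

def mahalanobis_join_cost(left_last_mfcc: np.ndarray,
                          right_first_mfcc: np.ndarray,
                          inv_cov: np.ndarray) -> float:
    d = left_last_mfcc - right_first_mfcc
    return float(np.sqrt(d @ inv_cov @ d))

# Toy 4-dimensional "MFCC" vectors; real systems use 12-13 coefficients
# (plus deltas) taken from the frames adjacent to the join.
a = np.array([1.0, 0.5, -0.2, 0.1])
b = np.array([1.1, 0.4, -0.1, 0.3])
inv_cov = np.eye(4)  # identity => Mahalanobis reduces to Euclidean here
print(euclidean_join_cost(a, b), mahalanobis_join_cost(a, b, inv_cov))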

4. Acoustic-Visual Speech Synthesis by Bimodal Unit Selection. Musti, Utpala, 21 February 2013
This work deals with audio-visual speech synthesis. Most approaches in the literature divide the task into two synthesis problems: acoustic speech synthesis on the one hand, and the generation of the corresponding facial animation on the other. This does not guarantee perfectly synchronous and coherent audio-visual speech. To overcome this drawback implicitly, we propose an approach to acoustic-visual speech synthesis based on the selection of naturally synchronous bimodal units. The synthesis follows the classical unit selection paradigm, and the main idea behind it is to keep the natural association between the acoustic and visual modalities intact. We describe the audio-visual corpus acquisition technique and the database preparation for our system. We present an overview of the system and detail the various aspects of bimodal unit selection that need to be optimised for good synthesis. The main focus of this work is to synthesise the speech dynamics well, rather than a comprehensive talking head. We describe the visual target features that we designed and then present an algorithm for target feature weighting. This algorithm performs target feature weighting and redundant feature elimination iteratively, based on a comparison between the ranking induced by the target cost and a distance computed from the acoustic and visual speech signals of units in the corpus. Finally, we present the perceptual and subjective evaluation of the final synthesis system. The results show that we achieved the goal of synthesising the speech dynamics reasonably well.
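As a hypothetical sketch of the rank-comparison idea behind the feature-weighting algorithm (not the thesis's actual procedure), the code below scores a candidate weight vector by how well its target-cost ranking of units agrees with a ranking derived from measured acoustic-visual distances, and picks weights by a coarse grid search; all data and the search strategy are invented for illustration.

# Rank-based target-feature weighting, roughly: choose feature weights whose
# target-cost ranking of candidate units best agrees with a ranking derived
# from measured acoustic-visual distances. All values here are synthetic.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_units, n_feats = 50, 3
target_feats = rng.random((n_units, n_feats))   # per-unit target feature mismatches
av_distance = target_feats @ np.array([0.6, 0.3, 0.1]) + 0.05 * rng.random(n_units)

def rank_agreement(weights: np.ndarray) -> float:
    """Spearman-style correlation between the weighted target-cost ranking
    and the acoustic-visual distance ranking."""
    cost = target_feats @ weights
    r1, r2 = np.argsort(np.argsort(cost)), np.argsort(np.argsort(av_distance))
    return float(np.corrcoef(r1, r2)[0, 1])

# Coarse grid search over normalised weight vectors.
best = max((w for w in product(np.linspace(0, 1, 6), repeat=n_feats) if sum(w) > 0),
           key=lambda w: rank_agreement(np.array(w) / sum(w)))
print("best weights:", np.round(np.array(best) / sum(best), 2))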

5. Study of unit selection text-to-speech synthesis algorithms. Guennec, David, 22 September 2016
This PhD thesis focuses on automatic speech synthesis, and more specifically on unit selection. A deep analysis and diagnosis of the unit selection algorithm (the search through the lattice of candidate units) is provided. The importance of the optimality of the solution is discussed, and a new unit selection implementation based on an A* algorithm is presented. Three cost function enhancements are also presented. The first is a new way, within the target cost, to minimise important spectral differences by selecting sequences of candidate units that minimise a mean cost instead of an absolute one; this cost is tested on a phonemic duration distance but can be applied to others. The second is a target sub-cost addressing intonation, based on coefficients extracted through a generalised version of Fujisaki's command-response model, which uses gamma functions, called atoms, to model F0; the parameters of these functions are used within a target cost. The third contribution concerns a penalty system that aims at enhancing the concatenation cost. It penalises units according to classes reflecting the risk that a concatenation artefact occurs when concatenating on a phone of that class. This system differs from others in the literature in that it is tempered by a fuzzy function that softens the penalties for units whose concatenation costs are among the lowest of their distribution.
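To illustrate the lattice search that unit selection relies on, here is a minimal best-first search over a toy candidate lattice combining target and concatenation costs. A zero heuristic is used for brevity, which reduces A* to uniform-cost search while preserving optimality; the candidates and costs are invented, and this is not Guennec's implementation.

# Minimal best-first search over a unit lattice with target and concatenation
# costs. Zero heuristic => uniform-cost search, the admissible special case
# of A*. Candidates and costs are toy values.
import heapq

# lattice[i] = candidate unit ids for target phone i
lattice = [["a1", "a2"], ["b1", "b2"], ["c1"]]
target_cost = {"a1": 0.2, "a2": 0.1, "b1": 0.3, "b2": 0.4, "c1": 0.1}
concat_cost = {("a1", "b1"): 0.1, ("a1", "b2"): 0.5,
               ("a2", "b1"): 0.6, ("a2", "b2"): 0.2,
               ("b1", "c1"): 0.1, ("b2", "c1"): 0.3}

def search(lattice):
    # queue entries: (accumulated cost, index of next phone, path so far)
    queue = [(0.0, 0, ())]
    while queue:
        cost, pos, path = heapq.heappop(queue)
        if pos == len(lattice):
            return cost, path            # first complete path popped is optimal
        for unit in lattice[pos]:
            step = target_cost[unit]
            if path:
                step += concat_cost[(path[-1], unit)]
            heapq.heappush(queue, (cost + step, pos + 1, path + (unit,)))

print(search(lattice))   # (0.8, ('a1', 'b1', 'c1'))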

6. Perceptual optimisation of unit selection speech synthesis systems using active interactive genetic algorithms. Formiga Fanals, Lluís, 27 April 2011
Text-to-speech (TTS) systems produce synthetic speech from an input text. Unit selection TTS (US-TTS) systems retrieve the best sequence of speech units previously recorded in a database (corpus). The retrieval is performed by means of a dynamic programming algorithm and a weighted cost function. The weighting of the cost function is typically tuned by hand by an expert; however, hand tuning is costly in terms of the prior knowledge it requires and imprecise in its execution.

In order to tune the weights of the cost function properly, this thesis builds on the perceptual-tuning proof of concept presented by Alías (2006), which uses active interactive genetic algorithms (aiGAs). The thesis investigates the various problems that arise when applying aiGAs to the weight tuning of a US-TTS cost function in a real unit selection context.

The thesis first reviews the state of the art in weight tuning. It then examines the suitability of interactive evolutionary computation for this task, with a thorough review of the previous work, before presenting and validating the proposed improvements.

Four guidelines drive the contributions of this thesis: the precision of the weight tuning, the robustness of the obtained weights, the applicability of the methodology to any cost function, and the consensus of the obtained weights across the criteria of different users. In terms of precision, the thesis proposes cluster-level perceptual tuning, obtaining weights for different types (clusters) of units while respecting their phonetic and contextual properties. In terms of robustness, the thesis introduces evolutionary metrics (indicators) that assess aspects such as the ambiguity of the search, the convergence of a single user, and the level of consensus among different users. To study the applicability of the proposed methodology, different weights combining linguistic and symbolic information are tuned perceptually. The final contribution examines the suitability of latent models for modelling the preferences of different users and obtaining a consensus solution. In addition, to move from a proof of concept to a real unit selection scenario, the experiments are carried out on a medium-sized, automatically labelled corpus (1.9 h). The thesis concludes that cluster-level aiGA is highly competitive with the other weight tuning techniques in the state of the art.
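As a toy sketch of evolving cost-function weights with a genetic algorithm (a stand-in for the interactive aiGA, in which fitness comes from listener judgements on actively chosen stimuli rather than from a formula), the code below uses a simulated "listener" preference; every parameter here is invented for illustration.

# Evolving cost-function weights with a simple genetic algorithm. A simulated
# listener scoring function replaces the interactive perceptual feedback used
# by a real aiGA. All sizes, rates and the hidden preference are invented.
import numpy as np

rng = np.random.default_rng(1)
N_WEIGHTS, POP, GENERATIONS = 4, 20, 30
hidden_preference = np.array([0.5, 0.2, 0.2, 0.1])   # what the "listener" likes

def listener_score(weights: np.ndarray) -> float:
    """Simulated perceptual preference: closer to the hidden weighting is better."""
    w = weights / weights.sum()
    return -float(np.sum((w - hidden_preference) ** 2))

population = rng.random((POP, N_WEIGHTS)) + 1e-3
for _ in range(GENERATIONS):
    scores = np.array([listener_score(w) for w in population])
    # tournament selection of parent indices
    a, b = rng.integers(0, POP, (2, POP))
    parents = population[np.where(scores[a] > scores[b], a, b)]
    # uniform crossover followed by small Gaussian mutation
    mates = parents[rng.permutation(POP)]
    mask = rng.random((POP, N_WEIGHTS)) < 0.5
    children = np.where(mask, parents, mates) + rng.normal(0, 0.05, (POP, N_WEIGHTS))
    population = np.clip(children, 1e-3, None)

best = population[np.argmax([listener_score(w) for w in population])]
print("evolved weights:", np.round(best / best.sum(), 2))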