Characterization of the Voice Source by the DCT for Speaker Information

Abhiram, B. January 2014
Extracting speaker-specific information from speech is of great interest to researchers and developers alike, since speaker recognition technology finds application in a wide range of areas, chief among them forensics and biometric security systems. Several models and techniques have been employed to extract speaker information from the speech signal. Speech production is generally modeled as an excitation source followed by a filter. Physiologically, the source corresponds to the vocal fold vibrations and the filter to the spectrum-shaping vocal tract. Vocal tract-based features like the mel-frequency cepstral coefficients (MFCCs) and linear prediction cepstral coefficients have been shown to contain speaker information. However, high-speed videos of the larynx show that the vocal folds of different individuals vibrate differently. Voice source (VS)-based features have also been shown to perform well in speaker recognition tasks, revealing that the VS does contain speaker information. Moreover, a combination of vocal tract and VS-based features has been shown to give improved performance, showing that the latter carries supplementary speaker information. In this study, the focus is on extracting speaker information from the VS. The existing techniques for this are reviewed, and it is observed that features obtained by fitting a time-domain model on the VS perform worse than those obtained by simple transformations of the VS. Here, an attempt is made to propose an alternate way of characterizing the VS to extract speaker information, and to study the merits and shortcomings of the proposed speaker-specific features. The VS cannot be measured directly. Thus, to characterize the VS, we first need an estimate of it, and the integrated linear prediction residual (ILPR) extracted from the speech signal is used as the VS estimate in this study.
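The ILPR pipeline described above can be sketched in a few lines. This is a generic illustration, not the thesis's voice source linear prediction method: it uses plain autocorrelation-method LP analysis, and the `leak` factor of the integrator is an assumed value.

```python
import numpy as np
from scipy.signal import lfilter

def ilpr(frame, order=12, leak=0.99):
    """Rough ILPR estimate: inverse-filter with LP coefficients, then integrate."""
    # Autocorrelation-method LP analysis (normal equations solved directly).
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])         # predictor coefficients
    inv = np.concatenate(([1.0], -a))              # inverse filter A(z)
    residual = lfilter(inv, [1.0], frame)          # LP residual
    # Leaky integration approximates undoing the differentiation in the residual.
    return lfilter([1.0], [1.0, -leak], residual)
```

In practice the analysis would run frame by frame over voiced speech; here a single frame suffices to show the inverse-filter-then-integrate structure.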
The voice source linear prediction model, proposed in an earlier study to obtain the ILPR, is used in this work. It is hypothesized here that a speaker's voice may be characterized by the relative proportions of the harmonics present in the VS. The pitch-synchronous discrete cosine transform (DCT) is shown to capture these, along with the gross shape of the ILPR, in a few coefficients. The ILPR, and hence its DCT coefficients, are visually observed to distinguish between speakers. However, they also exhibit intra-speaker variability, and thus it is hypothesized that the distribution of the DCT coefficients may capture speaker information; this distribution is modeled by a Gaussian mixture model (GMM). The DCT coefficients of the ILPR (termed the DCTILPR) are directly used as a feature vector in speaker identification (SID) tasks. Issues related to the GMM, like the type of covariance matrix, are studied, and it is found that diagonal covariance matrices perform better than full covariance matrices. Thus, mixtures of Gaussians with diagonal covariances are used as speaker models, and SID experiments on three standard databases show that the proposed DCTILPR features perform comparably to existing VS-based features. It is also found that the gross shape of the VS contains most of the speaker information, while the very fine structure of the VS does not help in distinguishing speakers and instead leads to more confusion between them. The major drawbacks of the DCTILPR are session and handset variability, but these also affect existing state-of-the-art speaker-specific VS-based features and the MFCCs, and hence appear to be common problems. Techniques exist to compensate for these variabilities, and they need to be applied when systems using these features are deployed in an actual application.
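The two building blocks of this modeling step, a truncated pitch-synchronous DCT and a diagonal-covariance GMM likelihood, can be sketched as follows. The coefficient count of 20 is an assumed value, and the GMM scoring is a minimal hand-rolled version rather than the trained models used in the thesis's experiments.

```python
import numpy as np
from scipy.fft import dct

def dct_features(cycle, n_coeff=20):
    """DCT-II of one pitch cycle, keeping the first n_coeff coefficients."""
    return dct(cycle, type=2, norm="ortho")[:n_coeff]

def diag_gmm_loglik(x, weights, means, variances):
    """Log-likelihood of a feature vector x under a diagonal-covariance GMM.

    weights: (K,), means/variances: (K, D) for K components, D dimensions.
    """
    # Per-component, per-dimension Gaussian log-densities, summed over dimensions.
    ll = -0.5 * (np.log(2.0 * np.pi * variances) + (x - means) ** 2 / variances)
    comp = np.log(weights) + ll.sum(axis=1)
    m = comp.max()                                  # log-sum-exp for stability
    return m + np.log(np.exp(comp - m).sum())
```

With a diagonal covariance, each component's density factorizes over dimensions, which is why the sum over `axis=1` suffices; a full covariance would require a quadratic form with the inverse covariance matrix.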
The DCTILPR is found to improve the SID accuracy of a system trained with MFCC features by 12%, indicating that the DCTILPR features capture speaker information missed by the MFCCs. It is also found that combining MFCC and DCTILPR features on a speaker verification task gives a significant performance improvement for short test utterances. Thus, on the whole, this study proposes an alternate way of extracting speaker information from the VS, and adds to the evidence that the VS contains speaker information.

Model-based synthesis of singing

Zeng, Xiaofeng. January 2023
The legacy KTH Music and Singing Synthesis Equipment (MUSSE) system, developed decades ago, is no longer compatible with contemporary computer systems. Nonetheless, the fundamental synthesis model at its core, known as the source-filter model, remains a valuable technology in voice synthesis research. In this thesis, the author re-implemented the legacy system using the traditional source-filter model on the modern platform SuperCollider. This re-implementation brought substantial enhancements in functionality, flexibility, and performance. The most noteworthy improvement in the new system is the addition of notch filters, which can simulate anti-resonances in the human vocal tract, thereby allowing a broader range of vocal nuances to be reproduced. To demonstrate the significance of notches in vowel synthesis, a subjective listening experiment was conducted. Its results clearly show that vowels synthesized with notches sound much more natural and closer to a real human voice. The work presented in this thesis, the new MUSSE program with notch filters, will serve as a foundation for general acoustics research at TMH in the future.
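The source-filter structure with an added anti-resonance can be sketched as below. This is not the MUSSE/SuperCollider implementation: the impulse-train source is a crude stand-in for a glottal waveform, and the formant frequencies, Q factors, and notch frequency are illustrative values, not taken from the thesis.

```python
import numpy as np
from scipy.signal import lfilter, iirpeak, iirnotch

fs = 16000            # sample rate (assumed)
f0 = 220.0            # fundamental frequency (assumed)

# Source: a crude impulse train standing in for the glottal excitation.
n = np.arange(int(0.5 * fs))
source = (np.mod(n * f0 / fs, 1.0) < (f0 / fs)).astype(float)

# Filter: cascade of second-order resonators at rough, /a/-like formants.
out = source
for freq, q in [(600.0, 8.0), (1040.0, 10.0), (2250.0, 12.0)]:
    b, a = iirpeak(freq, q, fs=fs)
    out = lfilter(b, a, out)

# Anti-resonance: a notch filter, the key addition described in the abstract.
b, a = iirnotch(1500.0, 10.0, fs=fs)
out = lfilter(b, a, out)
```

The notch removes energy around its center frequency, mimicking a vocal-tract zero; cascading it after the resonators is one simple way to combine resonances and anti-resonances in a source-filter synthesizer.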
