Language Identification (LID) is the task of automatically identifying the language of a speech signal uttered by an unknown speaker. An N-language LID task is to classify an input speech utterance, spoken by an unknown speaker and of unknown text, as belonging to one of the N languages L1, L2, ..., LN.
We present a new approach to spoken language modeling for language identification using the Lempel-Ziv-Welch (LZW) algorithm, with which we attempt to overcome the limitations of n-gram stochastic models by automatically identifying a valid set of variable-length patterns from the training data. However, since several patterns in a language pattern table are also shared by other language pattern tables, confusability prevails in the LID task. To overcome this, three pruning techniques are proposed to make these pattern tables more language specific. For LID with limited training data, we present another language modeling technique, which compensates for language-specific patterns missing from the language-specific LZW pattern table. We develop two new discriminative measures for LID based on the LZW algorithm, viz., (i) the Compression Ratio Score (LZW-CRS) and (ii) the Weighted Discriminant Score (LZW-WDS). It is shown that for a 6-language LID task on the OGI-TS database, the new model (LZW-WDS) significantly outperforms the conventional bigram approach.
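As a rough illustration of the LZW-based modeling idea, the sketch below builds a variable-length pattern table from a training symbol sequence (e.g. a stream of phone or sub-word labels) and scores a test sequence with a compression-ratio-style measure. The function names and the greedy parsing details are assumptions made for illustration only; they are not taken from the thesis, and the actual LZW-CRS/LZW-WDS formulations may differ.

```python
# Hypothetical sketch of LZW pattern-table modeling for LID.
# build_lzw_table / lzw_compression_ratio are illustrative names, not from the thesis.

def build_lzw_table(symbol_sequence):
    """Collect variable-length patterns from a training label sequence
    using an LZW-style parse (new pattern = known pattern + next symbol)."""
    table = {(s,) for s in set(symbol_sequence)}   # seed with single symbols
    current = []
    for sym in symbol_sequence:
        candidate = tuple(current + [sym])
        if candidate in table:
            current = list(candidate)              # keep extending the match
        else:
            table.add(candidate)                   # store new variable-length pattern
            current = [sym]                        # restart from the current symbol
    return table

def lzw_compression_ratio(symbol_sequence, table):
    """Greedily parse a test sequence with the longest patterns in the table;
    fewer emitted patterns (higher compression) suggests a better language match."""
    i, emitted, n = 0, 0, len(symbol_sequence)
    while i < n:
        j = i + 1
        while j <= n and tuple(symbol_sequence[i:j]) in table:
            j += 1                                 # extend while the prefix is known
        emitted += 1
        i = max(j - 1, i + 1)                      # jump past the longest match
    return n / emitted                             # higher = more compressible

# Usage: pick the language whose pattern table compresses the utterance best.
# scores = {lang: lzw_compression_ratio(test_seq, tables[lang]) for lang in tables}
# predicted = max(scores, key=scores.get)
```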
With regard to the front end of the LID system, we develop a modified technique to model Acoustic Sub-Word Units (ASWUs) and explore its effectiveness. The segmentation of the speech signal is done using an acoustic criterion (ML segmentation). However, we believe that consistency and discriminability among speech units is the key issue for the success of ASWU-based speech processing. We develop a new procedure for clustering and modeling the segments using sub-word GMMs. Because of the flexibility in choosing the labels for the sub-word units, we perform an iterative re-clustering and modeling of the segments. Using a consistency measure of labeling the acoustic segments, the convergence of the iterations is demonstrated. We show that the new ASWU-based front end, combined with the new LZW-based back end, outperforms the earlier reported PSWR-based LID.
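A minimal sketch of the iterative re-clustering idea, assuming each acoustic segment is summarized by a fixed-length feature vector (e.g. a mean cepstral vector) and using scikit-learn's KMeans and GaussianMixture as stand-ins for the sub-word unit models. The helper name, the label-consistency convergence check, and all parameter values are hypothetical; they only illustrate the alternation between unit modeling and re-labeling described above, not the thesis's exact procedure.

```python
# Hypothetical sketch: iterative re-clustering of acoustic segments with sub-word GMMs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def iterative_aswu_clustering(segment_features, n_units=64, n_iters=10, seed=0):
    """Alternate between fitting one GMM per ASWU label and re-assigning each
    segment to its best-scoring unit, stopping when the labeling is consistent."""
    # Initial labels from k-means over segment-level feature vectors
    labels = KMeans(n_clusters=n_units, n_init=10,
                    random_state=seed).fit_predict(segment_features)
    for _ in range(n_iters):
        gmms = []
        for u in range(n_units):
            X = segment_features[labels == u]
            if len(X) < 2:                         # skip empty/degenerate clusters
                gmms.append(None)
                continue
            gmms.append(GaussianMixture(n_components=min(4, len(X)),
                                        covariance_type='diag',
                                        random_state=seed).fit(X))
        # Re-label every segment by the unit whose GMM gives the highest log-likelihood
        scores = np.full((len(segment_features), n_units), -np.inf)
        for u, g in enumerate(gmms):
            if g is not None:
                scores[:, u] = g.score_samples(segment_features)
        new_labels = scores.argmax(axis=1)
        consistency = np.mean(new_labels == labels)  # fraction of unchanged labels
        labels = new_labels
        if consistency == 1.0:                     # labeling stopped changing
            break
    return labels
```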
Identifier | oai:union.ndltd.org:IISc/oai:etd.ncsi.iisc.ernet.in:2005/571 |
Date | January 2007 |
Creators | Basavaraja, S V |
Contributors | Sreenivas, T V |
Source Sets | Indian Institute of Science |
Language | en_US |
Detected Language | English |
Type | Thesis |
Relation | G21500 |