
Cascade RLS with Subsection Adaptation

Speech coding, or speech compression, is an important aspect of speech communications today. By coding the speech, the rate needed to transmit the digitized speech, called the bit rate, can be reduced. For a given speech communications channel, the lower the bit rate of the speech coder, the more communicating parties that channel can carry. The main application of this research is the extraction of the parameters of human speech for speech coding purposes.
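As an illustrative figure (not taken from this abstract): replacing conventional 64 kbit/s log-PCM telephony with a parametric coder running at 8 kbit/s lets the same channel carry

    64 kbit/s / 8 kbit/s = 8

times as many simultaneous conversations.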

We propose an RLS-based cascade adaptive filter structure that significantly reduces the computational effort required by the RLS algorithm for inverse filtering applications. We name it the Cascade RLS with Subsection Adaptation (CRLS-SA) algorithm. The reduction in computational effort stems from the fact that, for inverse filtering applications, the gradients of each section in the cascade are almost uncorrelated with the gradients of the other sections. Hence, the gradient autocorrelation matrix is assumed to be block diagonal. Since each section is a second-order filter, the adaptation of a section involves only its 2×2 gradient autocorrelation matrix, while still being based on a global minimization criterion. The gradient signal of a section is defined as the derivative of the overall output error with respect to the coefficients of that section; it can be computed efficiently by passing the overall output of the cascade through a filter whose coefficients are derived from the coefficients of that section. The computational effort of the CRLS-SA algorithm is approximately 20LN/2 = 10LN operations, where L is the data record length and N is the order of the filter.
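To make the structure concrete, below is a minimal Python sketch of the adaptation loop for the linear-prediction (inverse filtering) setting, with the prediction-error filter A(z) factored into second-order sections A_i(z) = 1 + a_i1 z^-1 + a_i2 z^-2. The function name, the forgetting factor, and the initialization constant are illustrative, the update shown is the standard Gauss-Newton-style RLS recursion, and practical details such as stability monitoring are omitted; it is a sketch of the idea in the abstract, not the dissertation's exact algorithm.

```python
import numpy as np

def crls_sa(x, num_sections, lam=0.99, delta=100.0):
    """Sketch of Cascade RLS with Subsection Adaptation for linear
    prediction, A(z) = prod_i (1 + a_i1 z^-1 + a_i2 z^-2)."""
    a = np.zeros((num_sections, 2))           # [a_i1, a_i2] per section
    P = np.array([delta * np.eye(2) for _ in range(num_sections)])
    sec_state = np.zeros((num_sections, 2))   # last two inputs of section i
    grad_state = np.zeros((num_sections, 2))  # last two outputs of 1/A_i(z)
    errors = np.empty(len(x))
    for n, xn in enumerate(x):
        # forward pass: cascade of second-order FIR sections
        v = xn
        for i in range(num_sections):
            v_out = v + a[i] @ sec_state[i]
            sec_state[i] = [v, sec_state[i][0]]   # shift delay line
            v = v_out
        e = v                                     # overall output error
        errors[n] = e
        # per-section 2x2 RLS update using the section gradient signals
        for i in range(num_sections):
            # gradient signal: e(n) passed through 1/A_i(z)
            g = e - a[i] @ grad_state[i]
            u = grad_state[i].copy()              # [g(n-1), g(n-2)] ~ de/da_i
            grad_state[i] = [g, grad_state[i][0]]
            k = P[i] @ u / (lam + u @ P[i] @ u)   # 2x2 gain for this section
            a[i] -= k * e                         # global-error descent step
            P[i] = (P[i] - np.outer(k, u @ P[i])) / lam
    return a, errors
```

Note how the block-diagonal assumption appears in the code: each section maintains its own 2×2 inverse correlation matrix P[i], so no matrix larger than 2×2 is ever inverted or updated, yet every section is driven by the same overall output error e.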

We analyze the convergence rate of the CRLS-SA algorithm using the convergence time constant concept, where the convergence time constant is the ratio of the condition number to the sensitivity. The CRLS-SA structure is shown to satisfy the DeBrunner-Beex conjecture, which states that a structure with a smaller convergence time constant converges faster than a structure with a larger one. We show that the convergence time constant of CRLS-SA is lower than that of the Direct Form RLS (DFRLS) algorithm, and that CRLS-SA indeed converges faster. The convergence behavior is verified by observing how fast the estimated system approaches the true system, using the Itakura distance as the measure of closeness between the two. We show that the Itakura distance associated with the CRLS-SA algorithm approaches zero faster than that associated with DFRLS.
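One common form of the Itakura distance compares the residual energies of the true and estimated prediction-error filters on the true process; the sketch below (function name illustrative) shows how such a convergence curve can be evaluated from the autocorrelation sequence of the true process.

```python
import numpy as np
from scipy.linalg import toeplitz

def itakura_distance(a_true, a_est, r):
    """Itakura distance between two LPC models.

    a_true, a_est -- prediction-error filter coefficients, including the
    leading 1, of equal length; r -- autocorrelation sequence of the true
    process, at least as long as the coefficient vectors.
    """
    a_true = np.asarray(a_true)
    a_est = np.asarray(a_est)
    R = toeplitz(r[:len(a_true)])   # autocorrelation matrix of true process
    num = a_est @ R @ a_est         # residual energy of the estimated filter
    den = a_true @ R @ a_true       # residual energy of the true filter
    return np.log(num / den)        # >= 0, zero when the estimate matches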

The CRLS-SA algorithm is applied in this dissertation to general linear prediction, to the direct adaptive computation of the line spectral frequencies (LSF) and their representation in quantized form using a split vector quantization (VQ) approach, and to the detection and tracking of the frequencies of multiple sinusoids in noise. / Ph. D.
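For reference, the conventional (non-adaptive) route from prediction coefficients to LSFs forms the symmetric and antisymmetric polynomials P(z) = A(z) + z^-(N+1) A(z^-1) and Q(z) = A(z) - z^-(N+1) A(z^-1) and takes the angles of their unit-circle roots; the dissertation instead computes the LSF directly and adaptively. A minimal sketch of the conventional route (function name illustrative):

```python
import numpy as np

def lpc_to_lsf(a):
    """Line spectral frequencies from LPC coefficients a (a[0] == 1),
    via root finding on the sum and difference polynomials."""
    p = np.concatenate([a, [0]]) + np.concatenate([[0], a[::-1]])  # P(z)
    q = np.concatenate([a, [0]]) - np.concatenate([[0], a[::-1]])  # Q(z)
    lsf = []
    for poly in (p, q):
        ang = np.angle(np.roots(poly))
        lsf.extend(w for w in ang if 0 < w < np.pi)  # drop trivial roots
    return np.sort(np.array(lsf))                    # N interleaved LSFs
```

In the split-VQ representation, the resulting LSF vector is partitioned into a few subvectors, each quantized with its own codebook, which keeps codebook search and storage manageable compared to quantizing the full vector at once.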

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/26295
Date: 26 February 2000
Creators: Zakaria, Gaguk
Contributors: Electrical and Computer Engineering, Beex, A. A. Louis, Moose, Richard L., VanLandingham, Hugh F., Reed, Jeffrey H., Ball, Joseph A.
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Detected Language: English
Type: Dissertation
Format: application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
Relation: chapter2.PDF, chapter4.PDF, abstract.PDF, table_of_contents.PDF, Biography.PDF, chapter6.PDF, chapter7.PDF, list_of_tables.PDF, chapter5.PDF, chapter3.PDF, Acknowledgements.pdf, title_page.pdf, chapter1.PDF, list_of_figures.PDF, list_of_abbreviations.PDF
