
Switching adaptive filter structures for improved performance

We describe an adaptive filter system that switches between adaptive filter algorithms in order to achieve both fast convergence and low mean square error (MSE). The switching system employs two adaptive filters for two different tasks: one is intended to yield fast convergence and is called the "fast convergence structure", while the other is intended to give a small MSE and is called the "low MSE structure".

Switching from one algorithm to the other is determined by the state of the system. For example, switching from the "fast convergence structure" to the "low MSE structure" happens when the former has reached its steady state according to a pre-defined criterion, while switching from the "low MSE structure" to the "fast convergence structure" happens when the former starts diverging according to a pre-defined criterion. We define an algorithm to have reached its steady state when the average of the square of its output error is small and approximately constant over several iterations. Once an algorithm has reached its steady state, little additional error reduction can be obtained from it, so there is no payoff in continuing to use the "fast convergence structure", which is usually more computationally intensive than the "low MSE structure". In this situation it is better to use the least mean squares (LMS) algorithm as the "low MSE structure" because of its simplicity and its numerical robustness.
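To make the switching rule concrete, the following is a minimal Python sketch (not code from the thesis) of an LMS update together with a steady-state test in the spirit described above: the mean squared output error over a recent window must be small and approximately constant. The names lms_update and reached_steady_state, and the parameters mu, window, eps and tol, are illustrative assumptions rather than values from the thesis.

    import numpy as np

    def lms_update(w, x, d, mu=0.01):
        # One LMS iteration: filter the tap-input vector x, compare the output
        # with the desired sample d, and move the weights along the error gradient.
        y = np.dot(w, x)
        e = d - y
        w = w + mu * e * x
        return w, e

    def reached_steady_state(errors, window=50, eps=1e-4, tol=0.05):
        # Steady-state test: the average squared error over the last 'window'
        # iterations is below eps and differs little from the previous window.
        if len(errors) < 2 * window:
            return False
        recent = np.mean(np.square(errors[-window:]))
        previous = np.mean(np.square(errors[-2 * window:-window]))
        return recent < eps and abs(recent - previous) <= tol * max(previous, 1e-12)

A complementary divergence test for the "low MSE structure" could, for example, flag the case where the recent average squared error grows well above its previous value, triggering a switch back to the fast convergence structure.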

Experiments combining the recursive-least-squares-lattice (RLSL) algorithm with the LMS algorithm, the fast-transversal-filter (FTF) algorithm with the LMS algorithm, and the gradient-adaptive-lattice (GAL) algorithm with the LMS algorithm in a system identification application, in particular echo cancellation, show the expected result of faster convergence and lower mean square error than would be possible with a single algorithm. The switching system demonstrates other important properties: it avoids the numerical instability of algorithms such as RLSL and FTF without adding any additional computation; it handles a change in the unknown system, as long as the system settles, without suffering the slow convergence caused by an incorrect initial condition; it handles a change in the observation noise without diverging; and it produces an optimum result even for ill-conditioned input signals, i.e., signals for which the ratio of the maximum to the minimum eigenvalue of the input autocorrelation matrix is high.

When switching to the "low MSE structure" we also apply a computationally reduced-order technique, in which only the values of the impulse response that are greater than some threshold are used in the computation. This technique is applied to the switching structure that combines the recursive-least-squares-lattice algorithm with the LMS algorithm, and it exhibits fast convergence and low MSE even for ill-conditioned input signals. For a white Gaussian noise input, on the other hand, the technique yields a somewhat larger mean square error. / Master of Science
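As an illustration of the reduced-order idea, the sketch below (again an assumption-laden Python fragment, not code from the thesis) keeps only the impulse-response taps whose magnitude exceeds a threshold and restricts the LMS update to those taps; the function names and the threshold are hypothetical.

    import numpy as np

    def significant_taps(w, threshold):
        # Indices of impulse-response values whose magnitude exceeds the
        # threshold; only these taps take part in further computation.
        return np.flatnonzero(np.abs(w) > threshold)

    def lms_update_reduced(w, active, x, d, mu=0.01):
        # LMS update confined to the selected taps; the remaining taps are frozen,
        # which reduces the per-iteration cost when the impulse response is sparse.
        y = np.dot(w[active], x[active])
        e = d - y
        w[active] += mu * e * x[active]
        return w, e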

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/43788
Date: 21 July 2009
Creators: Zakaria, Gaguk
Contributors: Electrical Engineering
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertation
Language: English
Detected Language: English
Type: Thesis, Text
Format: ix, 144 leaves, BTD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
Relation: OCLC# 30095416, LD5655.V855_1993.Z353.pdf