31 |
Novel Complex Adaptive Signal Processing Techniques Employing Optimally Derived Time-varying Convergence Factors With Applications. Ranganathan, Raghuram. 01 January 2008 (has links)
In digital signal processing in general, and wireless communications in particular, the increased use of complex signal representations and of spectrally efficient complex modulation schemes such as QPSK and QAM has created the need for efficient, fast-converging complex digital signal processing techniques. In this research, novel complex adaptive digital signal processing techniques are presented that derive optimal convergence factors, or step sizes, for adjusting the adaptive system coefficients at each iteration. In addition, the real and imaginary components of the complex signal and of the complex adaptive filter coefficients are treated as separate entities and are independently updated. As a result, the developed methods efficiently utilize the degrees of freedom of the adaptive system, thereby exhibiting improved convergence characteristics even in dynamic environments.

In wireless communications, acceptable co-channel, adjacent-channel, and image interference rejection is often one of the most critical requirements for a receiver. In this regard, the fixed-point complex Independent Component Analysis (ICA) algorithm, called Complex FastICA, has previously been applied to realize digital blind interference suppression in stationary or slow-fading environments. However, under the dynamic flat-fading channel conditions frequently encountered in practice, the performance of the Complex FastICA is significantly degraded. In this dissertation, novel complex block adaptive ICA algorithms employing optimal convergence factors are presented, which exhibit superior convergence speed and accuracy in time-varying flat-fading channels compared to the Complex FastICA algorithm. The proposed algorithms are called Complex IA-ICA, Complex OBA-ICA, and Complex CBC-ICA.

For adaptive filtering applications, the Complex Least Mean Square (Complex LMS) algorithm has been widely used in both block and sequential form, due to its computational simplicity.
However, the main drawbacks of the Complex LMS algorithm are its slow convergence and its dependence on the choice of the convergence factor. In this research, novel block and sequential algorithms for complex adaptive digital filtering are presented, which overcome the inherent limitations of the existing Complex LMS. The block adaptive algorithms are called Complex OBA-LMS and Complex OBAI-LMS, and their sequential versions are named Complex HA-LMS and Complex IA-LMS, respectively. The performance of the developed techniques is tested in various adaptive filtering applications, such as channel estimation and adaptive beamforming.

The combination of Orthogonal Frequency Division Multiplexing (OFDM) and the Multiple-Input-Multiple-Output (MIMO) technique is increasingly employed for broadband wireless systems operating in frequency-selective channels. However, MIMO-OFDM systems are extremely sensitive to Intercarrier Interference (ICI), caused by Carrier Frequency Offset (CFO) between the local oscillators in the transmitter and the receiver. The resulting crosstalk between OFDM subcarriers severely degrades performance. To mitigate this problem, the previously proposed Complex OBA-ICA algorithm is employed to recover user signals in the presence of ICI and channel-induced mixing. The effectiveness of the Complex OBA-ICA method in performing ICI mitigation and signal separation is tested for various values of CFO, rate of channel variation, and Signal-to-Noise Ratio (SNR).
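The idea of updating the real and imaginary parts of the filter coefficients independently, each with its own step size, can be illustrated with a small system-identification sketch. The 4-tap channel, fixed step sizes, and noise-free signal model below are illustrative assumptions; the abstract's algorithms derive their step sizes optimally at each iteration, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown complex system to identify (hypothetical 4-tap channel).
h = np.array([0.8 + 0.3j, -0.4 + 0.2j, 0.1 - 0.5j, 0.05 + 0.1j])
N, L = 4000, len(h)

# Circular complex white input and its (noise-free) channel output.
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
d = np.convolve(x, h)[:N]

w = np.zeros(L, dtype=complex)
mu_r, mu_i = 0.08, 0.05            # independent step sizes for Re / Im parts

for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]   # tap-input vector, most recent sample first
    e = d[n] - np.conj(w) @ u      # a-priori error for the model y = w^H u
    g = u * np.conj(e)             # complex-LMS gradient direction
    # Update the real and imaginary coefficient components independently.
    w = w + mu_r * g.real + 1j * mu_i * g.imag
```

With equal step sizes this reduces to the standard Complex LMS update; unequal steps act as a simple diagonal preconditioner on the real-composite error surface.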
32 |
Dynamic Model-Based Estimation Strategies for Fault Diagnosis. Saeedzadeh, Ahsan. January 2024 (has links)
Fault Detection and Diagnosis (FDD) constitutes an essential aspect of modern life, with far-reaching implications spanning various domains such as healthcare, maintenance of industrial machinery, and cybersecurity. A comprehensive approach to FDD entails addressing facets related to detection, invariance, isolation, identification, and supervision. In FDD, there are two main perspectives: model-based and data-driven approaches. This thesis centers on model-based methodologies, particularly within the context of control and industrial applications. It introduces novel estimation strategies aimed at enhancing computational efficiency, addressing fault discretization, and considering robustness in fault detection strategies.
In cases where the system's behavior can vary over time, particularly in contexts like fault detection, maintaining multiple scenarios is essential for accurately describing the system. This is the underlying principle of Multiple Model Adaptive Estimation (MMAE), exemplified by the well-established Interacting Multiple Model (IMM) strategy. In this research, an efficient version of the IMM framework, named Updated IMM (UIMM), is explored. UIMM is applied to the identification of irreversible faults, such as leakage and friction faults, within an Electro-Hydraulic Actuator (EHA). It reduces computational complexity and enhances fault detection and isolation, which is very important in real-time applications such as Fault-Tolerant Control Systems (FTCS). Employing robust estimation strategies such as the Smooth Variable Structure Filter (SVSF) in the filter bank of this algorithm significantly enhances its performance, particularly in the presence of system uncertainties. To relax the irreversibility assumption used in the UIMM algorithm, and thereby extend its application to a broader range of problems, the thesis introduces the Moving Window Interacting Multiple Model (MWIMM) algorithm. MWIMM enhances efficiency by focusing on a subset of possible models, making it particularly valuable for fault intensity and Remaining Useful Life (RUL) estimation.
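The multiple-model principle can be sketched with a minimal MMAE-style filter bank: several Kalman filters, each matched to one fault hypothesis, run in parallel, and a Bayesian likelihood update with a Markov transition prior (as in IMM) tracks which model best explains the measurements. The scalar plant, the two hypothesized dynamics standing in for healthy versus faulty behavior, and all noise values are illustrative assumptions; the full IMM state-mixing step and the UIMM/MWIMM refinements are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar plant x[k+1] = a*x[k] + w,  z[k] = x[k] + v.  Two hypotheses for 'a'
# stand in for "healthy" vs. "faulty" dynamics (illustrative values only).
a_true = 0.95
a_models = [0.95, 0.20]
q, r = 0.01, 0.01                      # process / measurement noise variances
T = np.array([[0.97, 0.03],            # Markov model-transition prior
              [0.03, 0.97]])

x = 1.0
xs = [0.0, 0.0]                        # per-model state estimates
Ps = [1.0, 1.0]                        # per-model error covariances
mu = np.array([0.5, 0.5])              # model probabilities
mu0_hist = []

for k in range(200):
    x = a_true * x + np.sqrt(q) * rng.standard_normal()
    z = x + np.sqrt(r) * rng.standard_normal()

    mu_pred = T.T @ mu                 # predicted model probabilities
    likes = np.empty(2)
    for j, a in enumerate(a_models):
        xp = a * xs[j]                 # state prediction under hypothesis j
        Pp = a * Ps[j] * a + q
        S = Pp + r                     # innovation covariance
        K = Pp / S                     # Kalman gain
        nu = z - xp                    # innovation
        xs[j] = xp + K * nu
        Ps[j] = (1.0 - K) * Pp
        likes[j] = np.exp(-0.5 * nu * nu / S) / np.sqrt(2.0 * np.pi * S)

    mu = mu_pred * likes               # Bayesian model-probability update
    mu = mu / mu.sum()
    mu0_hist.append(mu[0])
```

Since the data are generated by the first hypothesis, its model probability should dominate after the transient; a fault would show up as probability mass migrating to the faulty model.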
Additionally, this thesis explores chattering signals generated by the SVSF filter as potential indicators of system faults. Chattering, arising from model mismatch or faults, is analyzed for spectral content, enabling the identification of anomalies. The efficacy of this framework is verified through case studies, including the detection and measurement of leakage and friction faults in an Electro-Hydraulic Actuator (EHA). / Thesis / Candidate in Philosophy / In everyday life, from doctors diagnosing illnesses to mechanics inspecting cars, we encounter the need for fault detection and diagnosis (FDD). Advances in technology, like powerful computers and sensors, are making it possible to automate fault diagnosis processes and take corrective actions in real-time when something goes wrong. The first step in fault detection and diagnosis is to precisely identify system faults, ensuring they can be properly separated from normal variations caused by uncertainties, disruptions, and measurement errors.
This thesis explores model-based approaches, which utilize prior knowledge about how a normal system behaves, to detect abnormalities or faults in the system. New algorithms are introduced to enhance the efficiency and flexibility of this process. Additionally, a new strategy is proposed for extracting information from a robust filter when it is used to identify faults in the system.
33 |
Performance Assessment of the Finite Impulse Response Adaptive Line Enhancer. Campbell, Roy Lee, Jr. 03 August 2002 (has links)
Although the finite impulse response (FIR) Adaptive Line Enhancer (ALE) was developed in 1975 and has been used in a host of applications, no comprehensive performance analysis has been performed for this method, meaning no general equation exists for its signal-to-noise ratio (SNR) gain. Such an equation would provide practitioners an avenue for determining the amount of noise reduction the ALE provides for a particular application and would add to the general knowledge of adaptive filtering. Based on this motivation, this work derives the general equation for the FIR ALE SNR gain and verifies the equation through computer simulation, under the following assumptions: (1) A simplified Least Mean Squares (LMS) method is used for updating the embedded adaptive filter located within the ALE, (2) The received signal (i.e. the input signal to the ALE) is a summation of sinusoids buried in additive zero-mean white-Gaussian noise (AWGN), (3) The received signal is oversampled (i.e. the sampling rate is larger than the Nyquist rate), and (4) The ALE filter length is an integer multiple of the number of samples within one fundamental period of the original, noiseless signal.
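The ALE structure itself is compact: the input is delayed (to decorrelate the broadband noise while the sinusoid stays correlated), an LMS-adapted FIR predictor is driven by the delayed samples, and the predictor output is the enhanced line. The sketch below measures the SNR gain empirically at one operating point; the frequency, filter length, and step size are illustrative choices consistent with the oversampling assumption, not the thesis's general gain equation.

```python
import numpy as np

rng = np.random.default_rng(2)

N, L, delta = 20000, 32, 1           # samples, filter length, decorrelation delay
n = np.arange(N)
s = np.sin(2 * np.pi * 0.05 * n)     # narrowband line (well below Nyquist: oversampled)
v = 0.5 * rng.standard_normal(N)     # broadband zero-mean white Gaussian noise
x = s + v                            # received signal

w = np.zeros(L)
mu = 0.002                           # simplified (fixed-step) LMS
y = np.zeros(N)
for k in range(L + delta, N):
    u = x[k - delta - L + 1:k - delta + 1][::-1]  # delayed tap-input vector
    y[k] = w @ u                                  # ALE output: enhanced line
    e = x[k] - y[k]                               # prediction error
    w += mu * e * u                               # LMS update

# Empirical SNR gain over the second half (after convergence):
half = slice(N // 2, N)
snr_in = np.mean(s[half] ** 2) / np.mean(v[half] ** 2)
snr_out = np.mean(s[half] ** 2) / np.mean((y[half] - s[half]) ** 2)
```

The output SNR exceeds the input SNR because the converged predictor forms a narrow passband around the line, passing the sinusoid while rejecting most of the broadband noise.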
34 |
Active Control of Impact Acoustic Noise. Sun, Guohua. January 2013 (links)
No description available.
35 |
Non-Intrusive Sensing and Feedback Control of Serpentine Inlet Flow Distortion. Anderson, Jason. 23 April 2003 (has links)
A technique to infer circumferential total pressure distortion intensity found in serpentine inlet airflow was established using wall-pressure fluctuation measurements. This sensing technique was experimentally developed for aircraft with serpentine inlets in a symmetric, level flight condition. The turbulence carried by the secondary flow field that creates the non-uniform total pressure distribution at the compressor fan-face was discovered to be an excellent indicator of the distortion intensity. A basic understanding of the secondary flow field allowed for strategic sensor placement to provide a distortion estimate with a limited number of sensors. The microphone-based distortion estimator was validated through its strong correlation with experimentally determined circumferential total pressure distortion parameter intensities (DPCP).
This non-intrusive DPCP estimation technique was then used as a DPCP observer in a distortion feedback control system. Lockheed Martin developed the flow control technique used in this control system, which consisted of jet-type vortex generators that injected secondary flow to counter the natural secondary flow inherent to the serpentine inlet. A proportional-integral-derivative (PID) based control system was designed that achieved a requested 66% reduction in DPCP (from a DPCP of 0.023 down to 0.007) in less than 1 second. This control system was also tested for its ability to maintain a DPCP level of 0.007 during a quick ramp-down and ramp-up engine throttling sequence, which served as a measure of system robustness. The control system allowed only a maximum peak DPCP of 0.009 during the engine ramp-up. The successful demonstrations of this automated distortion control system showed great potential for applying this distortion sensing scheme along with Lockheed Martin's flow control technique to military aircraft with serpentine inlets.
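A discrete PID loop of the kind described can be sketched against a toy first-order model of the inlet's DPCP response to flow-control injection. The plant time constant, control authority, gains, and sample rate below are hypothetical stand-ins; only the 0.023 starting DPCP and the 0.007 setpoint come from the study.

```python
import numpy as np

dt = 0.01                       # 10 ms control update (assumed)
kp, ki, kd = 40.0, 400.0, 0.1   # illustrative PID gains
setpoint = 0.007                # requested DPCP level (from the study)

# Hypothetical first-order plant: injected flow 'u' pulls DPCP down
# from its uncontrolled level of 0.023 with time constant 'tau'.
tau, authority = 0.05, 0.02

dpcp, integ, prev_err = 0.023, 0.0, None
hist = []
for k in range(int(1.0 / dt)):              # simulate 1 second of control
    err = dpcp - setpoint
    integ += err * dt
    deriv = 0.0 if prev_err is None else (err - prev_err) / dt
    u = float(np.clip(kp * err + ki * integ + kd * deriv, 0.0, 1.0))
    prev_err = err
    # Plant update (forward Euler): DPCP relaxes toward 0.023 - authority*u.
    dpcp += dt / tau * ((0.023 - authority * u) - dpcp)
    hist.append(dpcp)
```

With these assumed gains the closed loop is overdamped, so DPCP descends to the setpoint without oscillation well inside the 1-second window reported in the study.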
A final objective of this research was to broaden the non-intrusive sensing capabilities in the serpentine inlet. It was desired to develop a sensing technique that could identify control efforts that optimized the overall inlet aerodynamic performance with regard to both circumferential distortion intensity DPCP and average pressure recovery PR. This research was conducted with a new serpentine inlet developed by Lockheed Martin having a lower length-to-diameter ratio and two flow control inputs. A cost function based on PR and DPCP was developed to predict the optimal flow control efforts at several Mach numbers. Two wall-mounted microphone signals were developed as non-intrusive inlet performance sensors in response to the two flow control inputs. These two microphone signals then replaced the PR and DPCP metrics in the original cost function, and the new non-intrusive cost function yielded nearly identical optimal control efforts. / Ph. D.
36 |
AMPS co-channel interference rejection techniques and their impact on system capacity. He, Rong. 02 October 2008 (has links)
With the rapid and ubiquitous deployment of mobile communications in recent years, co-channel interference has become a critical problem because of its impact on system capacity and quality of service. The conventional approach to minimizing interference is through better cell planning and design. Digital Signal Processing (DSP) based interference rejection techniques provide an alternative approach to minimize interference and improve system capacity.
Single-channel adaptive interference rejection techniques have long been used for enhancing digitally modulated signals. However, these techniques are not well suited to Advanced Mobile Phone Service (AMPS) and narrowband AMPS (NAMPS) signals, because of the large spectral overlap of the signals of interest with interfering signals and because of the lack of a well-defined signal structure that can be used to separate the signals. Our research has created novel interference rejection techniques based on time-dependent filtering which exploit the spectral correlation characteristics exhibited by AMPS and NAMPS signals. A mathematical analysis of the cyclostationary features of AMPS and NAMPS signals is presented to help explain and analyze these techniques. Their performance is investigated using both simulated and digitized data. The impact of these new techniques on AMPS system capacity is also studied. The adaptive algorithms and structures are refined to be robust in various channel environments and to be computationally efficient. / Ph. D.
37 |
Cyclostationary Methods for Communication and Signal Detection Under Interference. Carrick, Matthew David. 24 September 2018 (has links)
In this dissertation, novel methods are proposed for communicating in interference-limited environments as well as for detecting such interference. The methods include introducing redundancies into multicarrier signals to make them more robust, applying a novel filtering structure for mitigating radar interference to orthogonal frequency division multiplexing (OFDM) signals, and exploiting the cyclostationary nature of signals to whiten the spectrum in blind signal detection.
Data symbols are repeated in both time and frequency across orthogonal frequency division multiplexing (OFDM) symbols, creating a cyclostationary structure in the signal. A Frequency Shift (FRESH) filter can then be applied to the cyclostationary signal; this filter is optimal for such signals and is able to reject interference much better than a time-invariant filter such as the Wiener filter. A novel time-varying FRESH (TV-FRESH) filter is developed and its Minimum Mean Squared Error (MMSE) filter weights are found.
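The core FRESH idea, filtering frequency-shifted (including conjugated) copies of the input to exploit spectral redundancy, shows up already in a minimal widely-linear example. A real-valued desired signal satisfies S(-f) = S*(f), so a conjugate branch gives the filter a second, correlated look at the signal that circular interference does not share. The memoryless two-branch estimator below is a simplified illustration of that principle under assumed unit-power signals, not the TV-FRESH filter developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50000

s = rng.standard_normal(N)             # real (spectrally redundant) desired signal
i = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
x = s + i                              # circular interference, fully overlapped

# Strictly linear (time-invariant, single-branch) estimate: y = a*x.
a_lin = np.vdot(x, s) / np.vdot(x, x)
mse_lin = np.mean(np.abs(s - a_lin * x) ** 2)

# FRESH-style widely-linear estimate: y = a*x + b*conj(x).  The conjugate
# branch is a spectrally flipped copy exploiting S(-f) = S*(f).
A = np.column_stack([x, np.conj(x)])
coef, *_ = np.linalg.lstsq(A, s.astype(complex), rcond=None)
mse_wl = np.mean(np.abs(s - A @ coef) ** 2)
```

For unit-power signal and interference, the strictly linear MSE approaches 1/2 while the two-branch estimator approaches 1/3: the redundant spectral copy buys a genuine rejection gain that no time-invariant filter can achieve.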
The repetition of data symbols and their optimal combining with the TV-FRESH filter creates an effect of improving the Bit Error Rate (BER) at the receiver, similar to an error correcting code. The important distinction for the paramorphic method is that it is designed to operate within cyclostationary interference, and simulation results show that the symbol repetition can outperform other error correcting codes. Simulated annealing is used to optimize the signaling parameters, and results show that a balance between the symbol repetition and error correcting codes produces a better BER for the same spectral efficiency than what either method could have achieved alone.
The TV-FRESH filter is applied to a pulsed chirp radar signal, demonstrating a new tool for radar and OFDM co-existence. The TV-FRESH filter applies a set of filter weights in a periodically time-varying fashion. The traditional FRESH filter is periodically time-varying due to the periodicities of the frequency shifters, but applies time-invariant filters when optimally combining any spectral redundancies in the signal. The time segmentation of the TV-FRESH filter allows spectral redundancies of the radar signal to be exploited across time due to its deterministic nature.
The TV-FRESH filter improves the rejection of the radar signal as compared to the traditional FRESH filter under the simulation scenarios, improving the SINR and BER at the output of the filter. The improvement in performance comes at the cost of additional filtering complexity.
A time-varying whitening filter is applied to blindly detect interference which overlaps with the desired signal in frequency. Where a time-invariant whitening filter shapes the output spectrum based on the power levels, the proposed time-varying whitener whitens the output spectrum based on the spectral redundancy in the desired signal. This allows signals which do not share the same cyclostationary properties to pass through the filter, improving the sensitivity of the algorithm and producing higher detection rates for the same probability of false alarm as compared to the time-invariant whitener. / Ph. D. / This dissertation proposes novel methods for improving the reliability and resilience of wireless communication links under interference. Wireless interference comes from many sources, including other wireless transmitters in the area or devices which emit electromagnetic waves, such as microwave ovens. Interference reduces the quality of a wireless link and, depending on its type and severity, may make it impossible to reliably receive information. The contributions cover both communicating under interference and detecting interference. A novel method for increasing the redundancy in a wireless link is proposed which improves the link's resiliency. By transmitting additional copies of the desired information, the wireless receiver is able to better estimate the original transmitted signal. A digital receiver structure is proposed to optimally combine the redundant information, and simulation results are used to show its improvement over other analogous methods. The second contribution applies a novel digital filter for mitigating interference from a radar signal to an Orthogonal Frequency Division Multiplexing (OFDM) signal, similar to the waveform used in Long Term Evolution (LTE) mobile phones.
Simulation results show that the proposed method outperforms other digital filters, at the cost of additional complexity. The third contribution applies a digital filter and trains it such that the output of the filter can be used to detect the presence of interference. An algorithm which detects interference can tip off an appropriate response, and as such is important to reliable wireless communications. Simulation results are used to show that the proposed method produces a higher probability of detection while reducing the false alarm rate, as compared to a similar digital filter trained to produce the same effect.
38 |
Time-Varying Frequency Selective IQ Imbalance Estimation and Compensation. Inti, Durga Laxmi Narayana Swamy. 14 June 2017 (has links)
Direct-Down Conversion (DDC) principle based transceiver architectures are of interest to meet the diverse needs of present and future wireless systems. DDC transceivers have a simple structure with fewer analog components and offer low-cost, flexible and multi-standard solutions. However, DDC transceivers have certain circuit impairments affecting their performance in wide-band, high data rate and multi-user systems.
IQ imbalance is one of the problems of DDC transceivers that limits their image rejection capabilities. Compensation techniques for frequency-independent IQ imbalance (IQI), arising due to gain and phase mismatches of the mixers in the I/Q paths of the transceiver, have been widely discussed in the literature. However, for wideband multi-channel transceivers, it is becoming increasingly important to address frequency-dependent IQI arising due to mismatches in the analog I/Q lowpass filters.
A hardware-efficient, standard-independent digital estimation and compensation technique for frequency-dependent IQI is introduced, which is also capable of tracking time-varying IQI. The technique is blind and adaptive in nature, based on second-order statistical properties of complex random signals such as properness (circularity).
A detailed performance analysis of the introduced technique is carried out through computer simulations for various real-time operating scenarios. A novel technique for finding the optimal number of taps required for the adaptive IQI compensation filter is proposed, and the performance of this technique is validated. In addition, a metric for the measure of properness is developed and used for error power and step size analysis. / Master of Science / A wireless transceiver consists of two major building blocks, namely the RF front-end and the digital baseband. The front-end performs functions such as frequency conversion, filtering, and amplification. Imperfections arising from deep-submicron fabrication lead to non-idealities of the front-end components, which limit their accuracy and affect the performance of the overall transceiver.
Complex (I/Q) mixing of baseband signals is preferred over real mixing because of its inherent trait of bandwidth efficiency. The I/Q paths enabling this complex mixing in the front-end may not be exactly identical thereby disturbing the perfect orthogonality of inphase and quadrature components leading to IQ Imbalance. The resultant IQ imbalance leads to an image of the signal formed at its mirror frequencies. Imbalances arising from mixers lead to an image of constant strength whereas I/Q low-pass filter mismatches lead to an image of varying strength across the Nyquist range. In addition, temperature effects cause slow variation in IQ imbalance with time.
In this thesis a hardware efficient and standard-independent technique is introduced to compensate for performance degrading IQ imbalance. The technique is blind and adaptive in nature and uses second order statistical signal properties like circularity or properness for IQ imbalance estimation.
The contribution of this work, a key insight into the optimal number of taps required for the adaptive compensation filter, improves on the state-of-the-art technique. The performance of the technique is evaluated under various scenarios of interest, and a detailed analysis of the results is presented.
39 |
Implementation and evaluation of echo cancellation algorithms. Sankaran, Sundar G. 13 February 2009 (has links)
Echo in telephones is generally undesirable but inevitable. There are two possible sources of echo in a telephone system. The impedance mismatch in hybrids generates network (electric) echo. The acoustic coupling between loudspeaker and microphone, in hands-free telephones, produces acoustic echo. Echo cancelers are used to control these echoes.
In this thesis, we analyze the Least Mean Squares (LMS), Normalized LMS (NLMS), Recursive Least Squares (RLS), and Subband NLMS (SNLMS) algorithms, and evaluate their performance as acoustic and network echo cancelers. The algorithms are compared based on their convergence rate, steady state echo return loss (ERL), and complexity of implementation. While LMS is simple, its convergence rate is dependent on the eigenvalue spread of the signal. In particular, it converges slowly with speech as input. This problem is mitigated in NLMS. The complexity of NLMS is comparable to that of LMS. The convergence rate of RLS is independent of the eigenvalue spread, and it has the fastest convergence. On the other hand, RLS is highly computation intensive. Among the four algorithms considered here, SNLMS has the least complexity of implementation, as well as the slowest rate of convergence.
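A minimal NLMS echo canceler of the kind compared above looks like the sketch below: the adaptive filter models the echo path from the far-end signal, and the residual after subtraction is what remains of the echo. The short echo path, signal lengths, and step size are illustrative assumptions (a real acoustic path runs to hundreds or thousands of taps), and the near end is silent here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical short echo path (a real room or hybrid response is much longer).
h = np.array([0.0, 0.4, -0.25, 0.15, -0.1, 0.05, -0.02, 0.01])
N, L = 8000, len(h)

far = rng.standard_normal(N)                 # far-end (loudspeaker) signal
echo = np.convolve(far, h)[:N]               # echo picked up by the microphone
mic = echo + 1e-3 * rng.standard_normal(N)   # plus a small noise floor

w = np.zeros(L)
mu, eps = 0.5, 1e-6
e = np.zeros(N)
for n in range(L, N):
    u = far[n - L + 1:n + 1][::-1]           # regressor, most recent sample first
    e[n] = mic[n] - w @ u                    # residual echo sent to the far end
    w += mu * e[n] * u / (u @ u + eps)       # NLMS: step normalized by input power

# Echo return loss enhancement over the last quarter, in dB:
erle = 10 * np.log10(np.mean(echo[-N // 4:] ** 2) / np.mean(e[-N // 4:] ** 2))
```

The power normalization is what frees NLMS from the eigenvalue-spread sensitivity of plain LMS noted above; in this idealized setting the ERLE far exceeds the 15 dB reported for the real-time fixed-point implementations.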
Switching between the NLMS and SNLMS algorithms is used to achieve fast convergence with low computational requirements. For a given computational power, it is shown that switching between algorithms can give better performance than using either of the two algorithms exclusively, especially in rooms with long reverberation times.
We also discuss various implementation issues associated with an integrated echo cancellation system, such as double-talk detection, finite precision effects, nonlinear processing, and howling detection and control. The use of a second adaptive filter is proposed, to reduce near-end ambient noise. Simulation results indicate that this approach can reduce the ambient noise by about 20 dB.
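Double-talk detection is commonly handled with a simple level comparison such as the classic Geigel test, shown here as an illustrative stand-in (the thesis's detector may differ): adaptation is frozen whenever the microphone level exceeds a fixed fraction of the recent far-end peak, since echo alone arrives attenuated by the echo-path loss.

```python
import numpy as np

def geigel_doubletalk(mic_sample, far_history, threshold=0.5):
    """Classic Geigel test: declare double-talk when |mic| exceeds
    threshold * max|far| over the recent far-end history.  A threshold
    of ~0.5 assumes at least ~6 dB of echo-path loss (illustrative)."""
    return abs(mic_sample) > threshold * float(np.max(np.abs(far_history)))

far = np.array([0.2, -0.8, 1.0, 0.5, -0.3])   # recent far-end samples
echo_only = 0.3        # attenuated echo alone: keep adapting
with_near_end = 0.9    # near-end speech on top: freeze adaptation
```

When the detector fires, the canceler's coefficient update is halted so near-end speech cannot corrupt the echo-path estimate.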
A configuration is presented for the real time single-chip DSP implementation of acoustic and network echo cancelers, and an interface between the echo canceler and the telephone is proposed. Finally, some results obtained from simulations and implementations of individual modules, on the TMS320C31 and ADSP 2181 processors, are reported. The real time NLMS DSP implementations provide 15 dB of echo return loss. / Master of Science
40 |
Incremental strategies in combination of adaptive filters. Lopes, Wilder Bezerra. 14 February 2012 (has links)
In this work a new strategy for combination of adaptive filters is introduced and studied. Inspired by incremental schemes and cooperative adaptive filtering, the standard convex combination of parallel-independent filters is rearranged into a series-cooperative configuration, while preserving computational complexity. Two new algorithms are derived employing the Recursive Least-Squares (RLS) and Least-Mean-Squares (LMS) algorithms as the component filters. In order to assess the performance of the incremental structure, a tracking and steady-state mean-square analysis is derived. The analysis is carried out assuming the combiners are fixed, so that the universality of the new structure may be studied decoupled from the supervisor's dynamics. The resulting analytical model shows good agreement with simulation results.
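The standard parallel convex combination that this work reorganizes can be sketched as follows: two LMS filters with different step sizes adapt independently on their own errors, while a supervisor adapts the mixing parameter λ = σ(a) by stochastic gradient on the combined error, so the combination tracks whichever component is currently better. All signals and parameters are illustrative assumptions; the series-cooperative (incremental) restructuring proposed in the dissertation is not shown.

```python
import numpy as np

rng = np.random.default_rng(6)

h = np.array([0.7, -0.3, 0.2, 0.1])          # unknown system (illustrative)
N, L = 30000, len(h)
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N] + 0.1 * rng.standard_normal(N)

w1, w2 = np.zeros(L), np.zeros(L)            # fast and slow LMS components
mu1, mu2, mu_a = 0.1, 0.005, 10.0
a = 0.0                                      # combiner parameter, lam = sigmoid(a)
e2_hist, lam_hist = [], []

for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y1 + (1.0 - lam) * y2          # convex combination output
    e = d[n] - y
    w1 += mu1 * (d[n] - y1) * u              # each component adapts on its own error
    w2 += mu2 * (d[n] - y2) * u
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # supervisor gradient update
    a = float(np.clip(a, -4.0, 4.0))         # keep the sigmoid from saturating hard
    e2_hist.append(e * e)
    lam_hist.append(lam)
```

In a stationary scenario like this one, the fast filter wins the transient but the slow filter has the lower steady-state misadjustment, so λ drifts toward the slow component; the combination inherits (approximately) the best behavior of each.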