1 |
A resampling theory for non-bandlimited signals and its applications : a thesis presented for the partial fulfillment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Wellington, New Zealand. Huang, Beilei, January 2008 (has links)
Currently, digital signal processing systems typically assume that signals are bandlimited. This assumption stems from the uniform sampling theorem for bandlimited signals, established over 50 years ago by the works of Whittaker, Kotel'nikov and Shannon. In practice, however, digital signals are mostly of finite length, and such signals are not strictly bandlimited. Furthermore, advances in electronics have led to the use of very wide bandwidth signals and systems, such as Ultra-Wide Band (UWB) communication systems with signal bandwidths of several gigahertz. Such signals can effectively be viewed as having infinite bandwidth. Thus there is a need to extend existing theory and techniques from signals of finite bandwidth to non-bandlimited signals. Two recent approaches to a more general sampling theory for non-bandlimited signals have been published. One is for signals with a finite rate of innovation. The other introduced the concept of consistent sampling, which views sampling and reconstruction as projections of signals onto subspaces spanned by the sampling (acquisition) and reconstruction (synthesis) functions. Consistent sampling is achieved if the same discrete signal is obtained when the reconstructed continuous signal is sampled again. However, it has been shown that when this generalized theory is applied to the de-interlacing of video signals, incorrect results are obtained. This is because de-interlacing is essentially a resampling problem rather than a sampling problem: both the input and the output are discrete. While the theory of resampling for bandlimited signals is well established, the problem of resampling without bandlimited constraints is largely unexplored. The aim of this thesis is to develop a resampling theory for non-bandlimited discrete signals and explore some of its potential applications. 
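The consistent-sampling view described above can be sketched numerically in a finite-dimensional setting. The matrices below are illustrative stand-ins for the sampling and reconstruction subspaces (they are not the thesis's operators): sampling is an inner product with the rows of S, reconstruction is a combination of the columns of R, and consistency means that re-sampling the reconstruction returns the original samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 4          # ambient dimension, number of samples

# Acquisition (sampling) and synthesis (reconstruction) functions,
# modelled here as arbitrary full-rank matrices.
S = rng.standard_normal((m, n))   # rows span the sampling subspace
R = rng.standard_normal((n, m))   # columns span the reconstruction subspace

x = rng.standard_normal(n)        # an arbitrary (non-bandlimited) signal
c = S @ x                         # its discrete samples

# Consistent reconstruction: correct the samples by (S R)^{-1} so that
# sampling the reconstruction reproduces c exactly.
x_hat = R @ np.linalg.solve(S @ R, c)

assert np.allclose(S @ x_hat, c)  # consistency holds
```

Note that x_hat generally differs from x (the reconstruction lives in the subspace spanned by R), yet it is indistinguishable from x as far as the sampling device is concerned.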
The first major contribution is the theory and techniques for designing an optimal resampling system for signals in a general Hilbert space when noise is not present. The system is optimal in the sense that the input of the system can always be recovered from the output. The theory is based on the concept of consistent resampling, which means that the same continuous signal is obtained when either the original or the resampled discrete signal is presented to the reconstruction filter. While comparing the input and output of a sampling/reconstruction system is relatively simple, since both are continuous signals, comparing the discrete input and output of a resampling system is not. The second major contribution of this thesis is therefore a metric for evaluating the performance of a resampling system. The performance is analyzed in the Fourier domain as well. This metric also provides a way to compare different resampling algorithms effectively, and thus facilitates the choice of a resampling scheme for a particular purpose. Unfortunately, consistent resampling cannot always be achieved when noise is present in the signal or the system. Based on the proposed performance metric, the third major contribution of this thesis is the development of procedures for designing resampling systems in the presence of noise that are optimal in the mean squared error (MSE) sense. Both discrete and continuous noise are considered. The problem is formulated as a semidefinite program, which can be solved efficiently by existing techniques. The usefulness and correctness of the consistent resampling theory are demonstrated by its application to video de-interlacing, image processing, the demodulation of ultra-wideband communication signals and mobile channel detection. 
The results show that the proposed resampling system has many advantages over existing approaches, including lower computational and time complexity, more accurate prediction of system performance, and robustness against noise.
|
2 |
Analysis and application of the spectral warping transform to digital signal processing. Allen, Warwick Peter Malcolm, January 2007 (has links)
This thesis provides a thorough analysis of the theoretical foundations and properties of the spectral warping (SW) transform. The spectral warping transform is defined as a time-domain-to-time-domain digital signal processing transform that shifts the frequency components of a signal along the frequency axis. The z-transform coefficients of a warped signal correspond to z-domain ‘samples’ of the original signal that are unevenly spaced along the unit circle (equivalently, frequency-domain coefficients of the warped signal correspond to frequency-domain samples of the original signal that are unevenly spaced along the frequency axis). The locations of these unevenly spaced frequency-domain samples are determined by a z-domain mapping function. This function may be arbitrary, except that it must map the unit circle to the unit circle. It is shown that, in addition to the frequency location, the bandwidth, duration and amplitude of each frequency component of a signal are affected by spectral warping. Specifically, frequency components within bands that are expanded in frequency have shortened durations and larger amplitudes (conversely, components in compressed frequency bands become longer with smaller amplitudes). A related property is that if a signal is time-delayed (its digital sequence is prepended with zeroes), then each of its frequency components will have a different delay after warping. This time-domain separation phenomenon is useful for separating the frequency components of a signal in time, and is employed in the generation of spectrally flat chirp signals. Because spectral warping will generally expand the duration of some frequency components within a signal, the transform must produce more output samples than there are (non-zero) input samples in order to avoid time-domain aliasing. A discussion of the necessary output signal length is presented. 
Particular attention is given to spectral warping using an all-pass mapping function, which can be realised as a cascade of all-pass filters. There exists an efficient hardware implementation for this all-pass SW realisation [1, 2]. A proof-of-concept application-specific integrated circuit that performs the core operations required by this algorithm was developed. Another focus of the presented research is spectral warping using a piecewise-linear mapping function. This type of spectral warping has the advantage that the changes in frequency, duration and amplitude between the non-warped and warped signals are constant factors over fixed frequency bands. A matrix formulation of the spectral warping transform is developed, which expresses the transform as a single matrix multiplication. The transform matrix is the product of three matrices representing three conceptual steps. The first step is to apply a discrete Fourier transform (DFT) to the time-domain signal, providing the frequency-domain representation. Step two is an interpolation that produces the signal content at the desired new frequency samples; this interpolation effectively provides the frequency warping. The final step is an inverse DFT to transform the signal back into the time domain. A special case of the spectral warping transform matrix has the same effect as a linear (finite-impulse-response) filter, showing that spectral warping is a generalisation of linear filtering. The conditions for the invertibility of the spectral warping transform are derived. Several possible realisations of the SW transform are discussed. These include two realisations using parallel finite-impulse-response filter banks and a realisation that uses a cascade of infinite-impulse-response filters. Finally, examples of applications for the spectral warping transform are given. 
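The DFT–interpolation–inverse-DFT factorisation described above can be sketched as a single matrix. In the sketch below the interpolation step is folded into a direct (nonuniform-DFT) evaluation of the input's z-transform at the warped frequencies, which is mathematically equivalent for an exact interpolator; the function name and the identity-mapping check are illustrative, not taken from the thesis.

```python
import numpy as np

def spectral_warp_matrix(n_in, n_out, warp):
    """Build a spectral-warping matrix for a unit-circle mapping `warp`.

    The output's DFT coefficients are z-domain samples of the input taken
    at the warped frequencies warp(2*pi*k/n_out); an inverse DFT then
    returns n_out time-domain samples (n_out > n_in avoids aliasing).
    """
    k = np.arange(n_out)
    w = warp(2 * np.pi * k / n_out)                 # warped frequency grid
    # Nonuniform DFT: evaluate X(e^{jw}) from the n_in time samples.
    E = np.exp(-1j * np.outer(w, np.arange(n_in)))
    # Inverse DFT back to the time domain.
    F_inv = np.exp(1j * 2 * np.pi * np.outer(k, k) / n_out) / n_out
    return F_inv @ E

# Sanity check: the identity mapping performs no warping, so the output
# is just the input zero-padded to n_out samples.
x = np.arange(8.0)
W = spectral_warp_matrix(8, 16, lambda w: w)
y = W @ x
assert np.allclose(y[:8], x) and np.allclose(y[8:], 0)
```

A non-trivial `warp` (e.g. an all-pass phase function) makes the columns of `E` sample the unit circle unevenly, which is exactly the uneven z-domain sampling the abstract describes.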
These include: non-uniform spectral analysis (and signal generation), approximate spectral analysis in the time domain, and filter design. This thesis concludes that the SW transform is a useful tool for the manipulation of the frequency content of digital signals, and is particularly useful when the frequency content of a signal (or the frequency response of a system) over a limited band is of interest. It is also claimed that the SW transform may have valuable applications for embedded mixed-signal testing.
|
3 |
Adaptive transmission for block-fading channels. Nguyen, Dang Khoa, January 2010 (has links)
Multipath propagation and mobility in wireless communication systems give rise to variations in the amplitude and phase of the transmitted signal, commonly referred to as fading. Many wireless applications are affected by slowly varying fading, where the channel is non-ergodic, leading to unreliable transmission during bad channel realizations. These communication scenarios are well modeled by the block-fading channel, where reliability is quantitatively characterized by the outage probability. This thesis focuses on the analysis and design of adaptive transmission schemes to improve the outage performance of both single- and multiple-antenna transmission over the block-fading channel, especially for the cases where discrete input constellations are used. Firstly, a new lower bound on the outage probability of non-adaptive transmission is proposed, providing an efficient tool for evaluating the performance of non-adaptive transmission. The lower bound, together with its asymptotic analysis, is essential for efficiently designing the adaptive transmission schemes considered in the thesis. Secondly, new power allocation rules are derived to minimize the outage probability of fixed-rate transmission over block-fading channels. Asymptotic outage analysis for the resulting schemes is performed, revealing important system design criteria. Furthermore, the thesis proposes novel suboptimal power allocation rules, which enjoy low complexity while suffering minimal losses compared to the optimal solution. Thus, these schemes facilitate power adaptation in low-cost devices. Thirdly, the thesis considers incremental-redundancy automatic-repeat-request (INR-ARQ) strategies, which perform adaptive transmission based on receiver feedback. In particular, the thesis concentrates on multi-bit feedback, which has been shown to yield significant performance gains compared to conventional single-bit ARQ schemes. 
The thesis proposes a new information-theoretic framework for multi-bit feedback INR-ARQ, whereby the receiver feeds back a quantized version of the accumulated mutual information. Within this framework, the thesis presents an asymptotic analysis which quantifies the large gains in outage performance offered by multi-bit feedback. Furthermore, the thesis proposes practical design rules which further illustrate the benefits of multi-bit feedback in INR-ARQ systems. In short, the thesis studies the outage performance of transmission over block-fading channels. Outage analysis is performed for non-adaptive and adaptive transmission. Improvements to existing adaptive schemes are also proposed, leading to either lower complexity requirements or better outage performance. Still, further research is needed to bring the benefits offered by adaptive transmission into practical systems. / Thesis (PhD)--University of South Australia, 2010
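The central quantity throughout this abstract, the outage probability, can be illustrated with a short Monte Carlo sketch. The setup below assumes Rayleigh block fading and Gaussian inputs (so per-block mutual information is log2(1 + SNR·g)); the thesis instead treats discrete input constellations, for which that term would be replaced by the constellation-constrained mutual information.

```python
import numpy as np

def outage_probability(snr, rate, blocks, trials=100_000, seed=1):
    """Monte Carlo estimate of outage probability over a Rayleigh
    block-fading channel: a codeword spans `blocks` independent fading
    blocks and is in outage when the average accumulated mutual
    information falls below the target `rate` (bits/channel use)."""
    rng = np.random.default_rng(seed)
    g = rng.exponential(size=(trials, blocks))      # |h|^2 power gains
    mi = np.mean(np.log2(1 + snr * g), axis=1)      # per-codeword mutual info
    return np.mean(mi < rate)                       # fraction of outage events

# Higher SNR lowers the outage probability at a fixed rate.
p_low = outage_probability(1.0, 1.0, blocks=4)
p_high = outage_probability(100.0, 1.0, blocks=4)
assert p_high < p_low
```

Adaptive schemes such as the power allocation and INR-ARQ strategies above aim precisely at driving this probability down for a given average power budget.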
|
4 |
Channel based medium access control for ad hoc wireless networks. Ashraf, Manzur, January 2009 (has links)
Opportunistic communication techniques have been shown to provide significant performance improvements in centralised random access wireless networks. The key mechanism of opportunistic communication is to send back-to-back data packets whenever the channel quality is deemed "good". Recently there have been attempts to introduce opportunistic communication techniques in distributed wireless networks such as wireless ad hoc networks. In line with this research, we propose a new paradigm of medium access control, called Channel MAC, based on channel randomness and opportunistic communication principles. Scheduling in Channel MAC depends on the instant at which the channel quality improves beyond a threshold, while neighbouring nodes are deemed to be silent. Once a node starts transmitting, it will keep transmitting until the channel becomes "bad". We derive an analytical throughput equation for the proposed MAC in a multiple access environment and validate it by simulations. It is observed that Channel MAC outperforms IEEE 802.11 for all probabilities of good channel condition and all numbers of nodes. For higher numbers of nodes, Channel MAC achieves higher throughput at lower probabilities of good channel condition, increasing the operating range. Furthermore, the total throughput of the network grows with increasing number of nodes, assuming negligible propagation delay in the network. A scalable channel prediction scheme is required to implement the Channel MAC protocol in practice. We propose a mean-value based channel prediction scheme, which provides prediction with enough accuracy to be used in the Channel MAC protocol. NS2 simulation results show that the Channel MAC protocol outperforms IEEE 802.11 in throughput, due to its channel diversity mechanism, despite prediction errors and packet collisions. Next, we extend the Channel MAC protocol to support multi-rate communications. 
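The threshold-based access idea can be caricatured in a slotted simulation. This is a deliberately simplified model under illustrative assumptions (independent per-node good/bad channels, a slot succeeding only when exactly one node has a good channel), not the analytical throughput equation derived in the thesis, which accounts for carrier sensing of silent neighbours and continuous-time channel dynamics.

```python
import numpy as np

def channel_mac_throughput(n_nodes, p_good, slots=50_000, seed=2):
    """Toy slotted sketch of channel-threshold access: each node's channel
    is independently 'good' with probability p_good, a node transmits only
    in its good slots, and a slot carries a packet when exactly one node
    transmits (two or more collide; zero transmitters means an idle slot)."""
    rng = np.random.default_rng(seed)
    good = rng.random((slots, n_nodes)) < p_good    # good-channel indicator
    n_tx = good.sum(axis=1)                         # transmitters per slot
    return np.mean(n_tx == 1)                       # fraction of useful slots

# With a single node, throughput is simply the good-channel probability.
single = channel_mac_throughput(1, 0.3)
assert abs(single - 0.3) < 0.02
```

Even this toy model reproduces one qualitative trend from the abstract: with many nodes, throughput peaks at a lower p_good, since contention rather than channel quality becomes the bottleneck.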
At present, two prominent multi-rate mechanisms, Opportunistic Auto Rate (OAR) and Receiver Based Auto Rate (RBAR), are unable to adapt to short-term changes in channel conditions during transmission, or to use optimum power and throughput during packet transmissions. On the other hand, using channel predictions, each source-destination pair in Channel MAC can fully utilise the non-fade durations. We combine the scheduling of Channel MAC with rate-adaptive transmission based on channel state information to design the 'Rate Adaptive Channel MAC' protocol. To implement Rate Adaptive Channel MAC, we need a channel prediction scheme to identify transmission opportunities, as well as an auto-rate adaptation mechanism to select rates and the number of packets to transmit during those times. For channel prediction, we apply the scheme proposed for the practical implementation of Channel MAC. We propose a "safety margin" based technique to provide auto-rate adaptation. Simulation results show that a significant performance improvement can be achieved by Rate Adaptive Channel MAC compared to existing rate-adaptive protocols such as OAR.
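A safety-margin rate selector of the kind described above can be sketched in a few lines. The rate table and the 3 dB margin below are hypothetical values chosen for illustration, not parameters from the thesis; the idea is only to show the mechanism of backing off the predicted SNR before picking a rate.

```python
# Hypothetical rate table: (rate in Mbps, minimum required SNR in dB).
RATE_TABLE = [(1, 4.0), (2, 7.0), (5.5, 11.0), (11, 16.0)]

def select_rate(predicted_snr_db, margin_db=3.0):
    """Safety-margin auto-rate selection: subtract a fixed margin from the
    predicted SNR to absorb prediction error, then pick the fastest rate
    whose requirement is still met (falling back to the lowest rate)."""
    effective = predicted_snr_db - margin_db
    feasible = [rate for rate, req in RATE_TABLE if req <= effective]
    return max(feasible) if feasible else RATE_TABLE[0][0]

assert select_rate(20.0) == 11   # 17 dB effective clears every threshold
assert select_rate(12.0) == 2    # 9 dB effective clears 4 and 7 dB only
assert select_rate(3.0) == 1     # below all thresholds: lowest rate
```

Larger margins trade peak throughput for robustness against prediction error, which is the same trade-off the simulations in the abstract evaluate.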
|