1. Graph-based Estimation of Information Divergence Functions

January 2017
Abstract: Information divergence functions, such as the Kullback-Leibler divergence or the Hellinger distance, play a critical role in statistical signal processing and information theory; however, estimating them can be a challenge. Most often, parametric assumptions are made about the two distributions in order to estimate the divergence of interest. In cases where no parametric model fits the data, non-parametric density estimation is used instead. In statistical signal processing applications, Gaussianity is usually assumed, since closed-form expressions for common divergence measures have been derived for this family of distributions. Parametric assumptions are preferred when it is known that the data follow the model; however, this is rarely the case in real-world scenarios. Non-parametric density estimators, for their part, are characterized by a very large number of parameters that must be tuned with costly cross-validation. In this dissertation we focus on a specific family of non-parametric estimators, called direct estimators, that bypass density estimation completely and estimate the quantity of interest directly from the data. We introduce a new divergence measure, the $D_p$-divergence, that can be estimated directly from samples without parametric assumptions on the distributions. We show that the $D_p$-divergence bounds the binary, cross-domain, and multi-class Bayes error rates and, in certain cases, provides provably tighter bounds than the Hellinger divergence. In addition, we propose a new methodology that allows the experimenter to construct direct estimators for existing divergence measures or to construct new divergence measures with custom properties tailored to the application. To examine the practical efficacy of these new methods, we evaluate them in a statistical learning framework on a series of real-world data science problems involving speech-based monitoring of neuro-motor disorders. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2017
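The direct-estimation idea can be sketched with a minimal-spanning-tree construction in the style of the Friedman-Rafsky two-sample test: pool the two samples, build a Euclidean MST over the pooled points, and count edges joining points from different samples. The sketch below assumes the estimator takes the form 1 - R(m+n)/(2mn), with R the number of cross-sample edges; the function name and constants are illustrative, not the dissertation's exact formulation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def dp_divergence(x, y):
    """Friedman-Rafsky-style direct divergence estimate between two
    samples: build a Euclidean MST over the pooled data and count the
    edges that connect points from different samples.  Illustrative
    sketch only; constants assume the symmetric (p = q = 1/2) case."""
    m, n = len(x), len(y)
    pooled = np.vstack([x, y])
    labels = np.concatenate([np.zeros(m), np.ones(n)])
    # Dense distance graph -> MST (O(N^2) memory; fine for a sketch).
    mst = minimum_spanning_tree(cdist(pooled, pooled))
    rows, cols = mst.nonzero()
    cross = np.sum(labels[rows] != labels[cols])  # cross-sample edge count
    est = 1.0 - cross * (m + n) / (2.0 * m * n)
    return max(0.0, est)  # finite-sample estimates can dip slightly below 0

rng = np.random.default_rng(0)
same = dp_divergence(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
far = dp_divergence(rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2)))
```

For identical distributions the expected number of cross edges is about 2mn/(m+n), driving the estimate toward 0; for well-separated samples almost no cross edges survive and the estimate approaches 1 — no density estimate is ever formed.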
2. Joint Radar-Communications Performance Bounds: Data versus Estimation Information Rates

January 2014
Abstract: The problem of cooperative radar and communications signaling is investigated. Each system typically considers the other a source of interference; consequently, the tradition has been to have them operate in orthogonal frequency bands. By considering the radar and communications operations to be a single joint system, performance bounds are derived for a receiver that observes the communications signal and the radar return in the same frequency allocation. Performance of the joint system is measured in terms of the data information rate for communications and the radar estimation information rate for the radar, and inner bounds on the achievable rate region are constructed. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2014
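One inner bound of this kind can be sketched by time-sharing: any convex combination of the communications-only and radar-only operating points is achievable. The bandwidth, SNR, and radar estimation rate below are assumed placeholder numbers for illustration, not values from the thesis.

```python
import numpy as np

def time_sharing_inner_bound(r_com, r_est, num=11):
    """Inner bound on the joint (data rate, estimation rate) region by
    time-sharing between communications-only and radar-only operation:
    every convex combination of the two single-mode vertices is
    achievable.  r_com / r_est are the rates when the whole band is
    devoted to one function (assumed inputs, computed externally)."""
    alpha = np.linspace(0.0, 1.0, num)  # fraction of time spent on comms
    return np.column_stack([alpha * r_com, (1.0 - alpha) * r_est])

bandwidth, snr = 1e6, 100.0            # 1 MHz, 20 dB SNR (assumed numbers)
r_com = bandwidth * np.log2(1 + snr)   # Shannon data rate, bits/s
r_est = 5e4                            # assumed radar estimation rate, bits/s
region = time_sharing_inner_bound(r_com, r_est)
```

Tighter inner bounds (e.g. successive-interference-cancellation style schemes that use both signals simultaneously) dominate this time-sharing segment, which is why it serves only as a baseline.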
3. Joint source-channel turbo techniques and variable length codes

Jaspar, Xavier, 08 April 2008
Efficient multimedia communication over mobile or wireless channels remains a challenging problem. To deal with it, the industry has so far mostly followed a divide and conquer approach, considering separately the source of data (text, image, video, etc.) and the communication channel (electromagnetic waves across the air, a telephone line, a coaxial cable, etc.). The goal is always the same: to transmit (or store) more data reliably per unit of time, of energy, of physical medium, etc. With today's applications, the divide and conquer approach has, in a sense, started to show its limits. Let us consider, for example, the digital transmission of an image. At the transmitter, the first main step is data compression, at the source level. The number of bits that are necessary to represent the image with a given level of quality is reduced, usually by removing details in the image that are invisible (or less visible) to the human eye. The second main step is data protection, at the channel level. The transmission is made ideally resistant to deteriorations caused by the channel, by implementing techniques such as time/frequency/space expansions. In a sense, the two steps are quite antagonistic --- we first compress, then expand the original signal --- and have different goals: compression enables us to transfer more data per unit of time/energy/medium, while protection enables us to transfer data reliably. At the receiver, the "reversed" operations are implemented. This separation into two steps dates back to Shannon's source and channel coding separation theorem of 1948 and has encouraged the division of the research community into two groups, one focusing on data compression, the other on data protection. The separation has also appealed to the industry for the design, supported by theory, of layered communication protocols. But the theorem holds only under asymptotic conditions that are rarely satisfied with today's multimedia content and mobile channels.
Therefore, it is usually wise in practice to drop this strict separation and to allow at least some cross-layer cooperation between the source and channel layers. This is what lies behind the words joint source-channel techniques. As the name suggests, these techniques are optimized jointly, without a strict separation. Intuitively, since the optimization is less constrained from a mathematical standpoint, the solution can only be better or equivalent. In this thesis, we investigate a promising subset of these techniques, based on the turbo principle and on variable length codes. The potential of this subset was illustrated for the first time in 2000, with an example that has since been successfully improved in several directions. Unfortunately, most decoding algorithms have so far been developed on an ad hoc basis, without a unified view and often without specifying the approximations made. Besides, most code-related conclusions are based on simulations or on extrinsic information analysis; a theoretical framework on the error correcting properties of variable length codes in turbo systems is lacking. The purpose of this work, in three parts, is to fill these gaps to a certain extent. The first part presents the literature in this field and attempts to give a unified overview. The second part proposes a transmission system that generalizes previous systems from the literature, with the simple addition of a repetition code. While most previous systems are designed for bit streams with a high level of residual redundancy, the proposed system has the interesting flexibility to handle different levels of redundancy easily. Its performance is then analyzed for small levels of redundancy, a case not tackled extensively in the literature. This analysis leads notably to the discovery of surprising interleaving gains with reversible variable length codes.
The third part develops the mathematical framework that was motivated during the second part but skipped on purpose for the sake of clarity. We first clarify several issues that arise with non-uniform bits and the extrinsic information charts, and propose and discuss two methods to compute these charts. Next, several theoretical results are stated on the robustness of variable length codes concatenated with linear error correcting codes. Notably, an approximate average distance spectrum of the concatenated code is rigorously developed. Together with the union bound, this spectrum provides upper bounds on the symbol and frame/packet error rates. These bounds are then analyzed from an interleaving gain standpoint and it is proved that the variable length code improves the interleaving gain if its spectrum is bounded.
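The union-bound argument in the third part can be illustrated with a toy distance spectrum: given multiplicities A_d, the frame error rate of a linear code over an AWGN channel with BPSK is bounded by a sum of Gaussian tail terms. The spectrum values below are hypothetical; the thesis derives an approximate average spectrum for the actual VLC/linear-code concatenation.

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail function Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def union_bound_fer(spectrum, rate, ebno_db):
    """Union bound on the frame error rate over AWGN with BPSK:
    P_f <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0)), where `spectrum`
    maps Hamming distance d to multiplicity A_d.  The toy spectrum
    used below is an assumption, not a derived one."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(a_d * q_func(sqrt(2.0 * d * rate * ebno))
               for d, a_d in spectrum.items())

toy_spectrum = {6: 3.0, 8: 12.0, 10: 40.0}  # hypothetical A_d multiplicities
bounds = [union_bound_fer(toy_spectrum, 0.5, x) for x in (0.0, 2.0, 4.0, 6.0)]
```

In an interleaving-gain analysis, one tracks how the low-distance multiplicities A_d scale with interleaver length: if the spectrum is bounded, the dominant terms shrink as the interleaver grows, which is the mechanism behind the gains proved in the third part.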
4. Spectral Image Processing Theory and Methods: Reconstruction, Target Detection, and Fundamental Performance Bounds

Krishnamurthy, Kalyani, January 2011
This dissertation presents methods and associated performance bounds for spectral image processing tasks such as reconstruction and target detection, which are useful in a variety of applications such as astronomical imaging, biomedical imaging and remote sensing. The key idea behind our spectral image processing methods is that the important information in a spectral image can often be captured by low-dimensional manifolds embedded in high-dimensional spectral data. Based on this key idea, our work focuses on the reconstruction of spectral images from photon-limited and distorted observations.

This dissertation presents a partition-based, maximum penalized likelihood method that recovers spectral images from noisy observations and enjoys several useful properties; namely, it (a) adapts to the spatial and spectral smoothness of the underlying spectral image, (b) is computationally efficient, (c) is near-minimax optimal over an anisotropic Hölder-Besov function class, and (d) can be extended to inverse problem frameworks.

There are many applications where accurate localization of desired targets in a spectral image is more crucial than a complete reconstruction. Our work draws its inspiration from classical detection theory and compressed sensing to develop computationally efficient methods to detect targets from a few projection measurements of each spectrum in the spectral image. Assuming the availability of a spectral dictionary of possible targets, the methods discussed in this work detect targets whether or not they come from the spectral dictionary. The theoretical performance bounds offer insight into the performance of our detectors as a function of the number of measurements, the signal-to-noise ratio, background contamination and properties of the spectral dictionary.

A related problem is that of level set estimation, where the goal is to detect the regions in an image where the underlying intensity function exceeds a threshold. This dissertation studies the problem of accurately extracting the level set of a function from indirect projection measurements without reconstructing the underlying function. Our partition-based set estimation method extracts the level set of proxy observations constructed from such projection measurements. The theoretical analysis presented in this work illustrates how the projection matrix, the proxy construction and the signal strength of the underlying function affect the estimation performance. / Dissertation
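The proxy-based level set idea can be sketched in one dimension: back-project random projection measurements to form proxy observations, average them over the cells of a partition, and keep the cells whose mean exceeds the threshold. The sensing matrix, grid size, and threshold below are assumptions for illustration, not the dissertation's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step intensity on a 1-D grid; the goal is the region where it exceeds
# a threshold, not a full reconstruction of the function itself.
n, k, tau, cell = 256, 512, 0.5, 32
f = np.where(np.arange(n) < n // 2, 1.0, 0.0)

# Indirect projection measurements y = A f + noise (random Gaussian
# projections, an assumed sensing model in the compressed-sensing spirit).
A = rng.normal(0.0, 1.0 / np.sqrt(k), (k, n))
y = A @ f + 0.01 * rng.normal(size=k)

# Proxy observations: back-project the measurements.  Since E[A^T A] = I,
# the proxy is roughly f plus zero-mean clutter.  A partition-based
# estimator averages the proxy over cells and thresholds the cell means,
# never reconstructing f itself.
proxy = (A.T @ y).reshape(-1, cell).mean(axis=1)
cells_hat = proxy > tau
cells_true = f.reshape(-1, cell).mean(axis=1) > tau
```

Averaging over cells is what tames the back-projection clutter: the per-pixel proxy is far too noisy to threshold directly, but cell means concentrate around the local intensity, so the estimated level set matches the true one at the partition's resolution.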
5. Multiple-Input Multiple-Output Wireless Systems: Coding, Distributed Detection and Antenna Selection

Bahceci, Israfil, 26 August 2005
This dissertation studies a number of important issues that arise in multiple-input multiple-output (MIMO) wireless systems. First, wireless systems equipped with multiple transmit and multiple receive antennas are considered, where an energy-based antenna selection is performed at the receiver. Three different situations are considered: (i) selection over an i.i.d. MIMO fading channel, (ii) selection over a spatially correlated fading channel, and (iii) selection for space-time coded OFDM systems. In all cases, explicit upper bounds are derived and it is shown that, using the proposed antenna selection, one can achieve the same diversity order as that attained by full-complexity MIMO systems. Next, the joint source-channel coding problem for MIMO antenna systems is studied and a turbo-coded multiple description code for multiple antenna transmission is developed. Simulations indicate that, with the proposed iterative joint source-channel decoding that exchanges extrinsic information between the source code and the channel code, one can achieve better reconstruction quality than can be achieved by single-description codes at the same rate. The rest of the dissertation deals with wireless networks. Two problems are studied: channel coding for cooperative diversity in wireless networks, and distributed detection in wireless sensor networks. First, a turbo-code based channel code for three-terminal full-duplex wireless relay channels is proposed, where both the source and the relay nodes employ turbo codes. An iterative turbo decoding algorithm exploiting the information arriving from both the source and relay nodes is proposed. Simulation results show that the proposed scheme can perform very close to the capacity of a wireless relay channel. Next, the parallel and serial binary distributed detection problem in wireless sensor networks is investigated. Detection strategies based on single-bit and multiple-bit decisions are considered.
Expressions for the detection and false alarm rates are derived and used to design the optimal detection rules at all sensor nodes. An analog approach to distributed detection in wireless sensor networks is also proposed, where each sensor node simply amplifies and forwards its sufficient statistic to the fusion center. This method requires very simple processing at the local sensors. Numerical examples indicate that the analog approach is superior to the digital approach in many cases.
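The parallel single-bit case can be illustrated with a counting fusion rule: each sensor sends one bit, and the fusion center declares a target when at least k of n sensors do. The k-out-of-n rule below is a simplified stand-in for the optimal rules designed in the thesis; with identical sensors, the system-level rates follow from the binomial distribution.

```python
from math import comb

def fusion_rates(n, k, p_d, p_f):
    """System-level detection and false-alarm probabilities for a
    parallel network of n identical sensors sending single-bit decisions
    to a fusion center that declares 'target present' when at least k
    sensors do (a k-out-of-n counting rule).  p_d / p_f are each
    sensor's local detection and false-alarm probabilities."""
    def at_least_k(p):
        # P(Binomial(n, p) >= k)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))
    return at_least_k(p_d), at_least_k(p_f)

# Assumed local sensor operating point, for illustration only.
P_D, P_F = fusion_rates(n=7, k=4, p_d=0.8, p_f=0.1)
```

Even this crude majority-style rule shows the appeal of fusion: the system detection probability exceeds any single sensor's while the system false-alarm rate drops well below it.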
6. Studies on the Performance and Impact of Channel Estimation in MIMO and OFDM Systems

Larsen, Michael David, 08 December 2009
The need for reliable, high-throughput, mobile wireless communication technologies has never been greater, as demand for on-the-go access to information, entertainment, and other electronic services continues to increase. Two such technologies, at the forefront of current research efforts, are orthogonal frequency division multiplexing (OFDM) and multiple-input multiple-output (MIMO) systems, their union being known simply as MIMO-OFDM. The successful performance of these technologies depends on the availability of accurate information about the wireless communication channel. In this dissertation, several issues related to the quality of this channel state information (CSI) are studied. Specifically, the first part of this dissertation considers the design of optimal pilot signals for OFDM systems. The optimization is addressed via lower bounds on the estimation error variance, given by formulations of the Cramér-Rao bound (CRB). The second part of this dissertation uses the CRB once again, this time as a tool for evaluating the potential performance of MIMO-OFDM channel estimation and prediction. Bounds are found for several parametric time-varying wideband MIMO-OFDM channel models, and numerical evaluations of these bounds are used to illuminate several interesting features of the estimation and prediction of MIMO-OFDM channels. The final part of this dissertation considers the problem of MIMO multiplexing using SVD-based methods when only imperfect CSI is available. For this purpose, general per-MIMO-subchannel signal and interference-plus-noise power expressions are derived to quantify the effects of CSI imperfections, and these expressions are then used to find robust MIMO-SVD power and bit allocations that maintain good overall performance in spite of imperfect CSI.
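The pilot-design role of the CRB can be sketched for a simple single-antenna OFDM case: for least-squares estimation of an L-tap channel impulse response from unit-power pilots, the error covariance is bounded by noise_var * (F^H F)^{-1}, with F the partial DFT matrix mapping taps to the observed pilot tones, and equispaced pilots minimize its trace. The FFT size, tap count, and pilot patterns below are illustrative assumptions, not the thesis's system values.

```python
import numpy as np

def crb_trace(pilot_tones, n_fft=64, n_taps=4, noise_var=1.0):
    """Trace of the CRB for estimating an n_taps channel impulse
    response from unit-power pilots at the given subcarriers:
    CRB = noise_var * (F^H F)^{-1}, where F is the partial DFT matrix
    F[k, l] = exp(-j 2 pi k l / N) over the pilot tones k and taps l."""
    k = np.asarray(pilot_tones)[:, None]   # pilot subcarrier indices
    l = np.arange(n_taps)[None, :]         # channel tap indices
    F = np.exp(-2j * np.pi * k * l / n_fft)
    return noise_var * np.trace(np.linalg.inv(F.conj().T @ F)).real

equispaced = crb_trace(np.arange(0, 64, 8))  # 8 evenly spaced pilot tones
clustered = crb_trace(np.arange(8))          # 8 adjacent pilot tones
```

For equispaced pilots the Gram matrix F^H F becomes a scaled identity (here 8I, giving trace n_taps/8 = 0.5), while clustered pilots make the partial DFT columns nearly coherent, inflating the bound; this is the basic mechanism that makes equispaced, equal-power pilots the optimal pattern in such settings.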
