About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Quantifying the Gains of Compressive Sensing for Telemetering Applications

Davis, Philip October 2011 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / In this paper we study a new streaming Compressive Sensing (CS) technique that aims to replace high-speed Analog-to-Digital Converters (ADCs) for certain classes of signals and to reduce the artifacts that arise from block processing when conventional CS is applied to continuous signals. We compare the performance of the streaming and block-processing methods on several types of signals and quantify the signal reconstruction quality when packet loss is applied to the transmitted sampled data.
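The paper itself is not reproduced here, but the block-processing baseline it improves upon is easy to sketch: recover a sparse signal from random linear measurements with a standard L1 decoder, then repeat the recovery after simulated packet loss (dropped measurement rows). The sketch below uses ISTA (iterative soft thresholding) as the decoder; all sizes and parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

n, m, k = 256, 100, 8                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x_true

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error, no loss :", err(ista(A, y)))

keep = rng.random(m) > 0.2                      # simulate 20% packet loss
print("relative error, 20% loss:", err(ista(A[keep], y[keep])))
```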
2

Estimation for Sensor Fusion and Sparse Signal Processing

Zachariah, Dave January 2013 (has links)
Progressive developments in computing and sensor technologies during the past decades have enabled the formulation of increasingly advanced problems in statistical inference and signal processing. The thesis is concerned with statistical estimation methods, and is divided into three parts with a focus on two areas: sensor fusion and sparse signal processing. The first part introduces the well-established Bayesian, Fisherian and least-squares estimation frameworks, and derives new estimators. Specifically, the Bayesian framework is applied in two different classes of estimation problems: scenarios in which (i) the signal covariances themselves are subject to uncertainties, and (ii) distance bounds are used as side information. Applications include localization, tracking and channel estimation. The second part is concerned with the extraction of useful information from multiple sensors by exploiting their joint properties. Two sensor configurations are considered here: (i) a monocular camera and an inertial measurement unit, and (ii) an array of passive receivers. New estimators are developed with applications that include inertial navigation, source localization and multiple waveform estimation. The third part is concerned with signals that have sparse representations. Two problems are considered: (i) spectral estimation of signals with power concentrated in a small number of frequencies, and (ii) estimation of sparse signals that are observed through only a few samples, including scenarios in which the measurements are linearly underdetermined. New estimators are developed with applications that include spectral analysis, magnetic resonance imaging and array processing.
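As a minimal illustration of the Bayesian fusion theme running through the first two parts, the sketch below fuses two independent, unbiased sensor readings of the same quantity by inverse-variance weighting, the scalar special case of the Bayesian and least-squares estimators the thesis builds on. The numbers are invented for illustration.

```python
import numpy as np

def fuse(z, var):
    """Inverse-variance-weighted (Bayesian, flat prior) fusion of independent,
    unbiased readings z[i] of one quantity with noise variances var[i]."""
    w = 1.0 / np.asarray(var, dtype=float)
    post_var = 1.0 / w.sum()                 # posterior variance of the estimate
    return post_var * (w * np.asarray(z, dtype=float)).sum(), post_var

# A precise sensor and a noisy one observing the same range.
estimate, post_var = fuse(z=[10.2, 9.1], var=[0.25, 4.0])
print(f"fused estimate = {estimate:.3f} (posterior variance {post_var:.3f})")
```

Note that the precise sensor dominates the fused estimate, exactly as the weighting dictates; a Kalman filter repeats this update recursively over time.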
3

Compressive sensing using Lp optimization

Pant, Jeevan Kumar 26 April 2012 (has links)
Three problems in compressive sensing, namely, the recovery of sparse signals from noise-free measurements, the recovery of sparse signals from noisy measurements, and the recovery of so-called block-sparse signals from noisy measurements, are investigated. In Chapter 2, the reconstruction of sparse signals from noise-free measurements is investigated and three algorithms are developed. The first and second algorithms minimize the approximate L0 and Lp pseudonorms, respectively, in the null space of the measurement matrix using a sequential quasi-Newton algorithm. An efficient line search based on Banach's fixed-point theorem is developed and applied in the second algorithm. The third algorithm minimizes the approximate Lp pseudonorm in the null space by using a sequential conjugate-gradient (CG) algorithm. Simulation results are presented which demonstrate that the proposed algorithms yield improved signal reconstruction performance relative to the iterative reweighted (IR), smoothed-L0 (SL0), and L1-minimization based algorithms. They also require less computation than the IR and L1-minimization based algorithms, and the Lp-minimization based algorithms require less computation than the SL0 algorithm. In Chapter 3, the reconstruction of sparse signals and images from noisy measurements is investigated. First, two algorithms for the reconstruction of signals are developed by minimizing an Lp-pseudonorm regularized squared error as the objective function using the sequential optimization procedure developed in Chapter 2. The first algorithm minimizes the objective function by taking steps along descent directions computed in the null space of the measurement matrix and its complement space. The second algorithm minimizes the objective function in the time domain by using a CG algorithm. Second, the well-known total variation (TV) norm is extended to a nonconvex version called the TVp pseudonorm, and an algorithm for the reconstruction of images is developed that minimizes a TVp-pseudonorm regularized squared error using a sequential Fletcher-Reeves CG algorithm. Simulation results are presented which demonstrate that the first two algorithms yield improved signal reconstruction performance relative to the IR, SL0, and L1-minimization based algorithms and require less computation than the IR and L1-minimization based algorithms. The TVp-minimization based algorithm yields improved image reconstruction performance and requires less computation than Romberg's algorithm. In Chapter 4, the reconstruction of so-called block-sparse signals is investigated. The L2/1 norm is extended to a nonconvex version, called the L2/p pseudonorm, and an algorithm based on the minimization of an L2/p-pseudonorm regularized squared error is developed. The minimization is carried out using a sequential Fletcher-Reeves CG algorithm and the line search described in Chapter 2. A reweighting technique for reducing the amount of computation and a method for using prior information about the locations of nonzero blocks to improve signal reconstruction performance are also proposed. Simulation results are presented which demonstrate that the proposed algorithm yields improved reconstruction performance and requires less computation than the L2/1-minimization based, block orthogonal matching pursuit, IR, and L1-minimization based algorithms. / Graduate
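The thesis's null-space quasi-Newton and CG algorithms are not reproduced here; as a well-known stand-in for Lp-pseudonorm (p < 1) minimization under equality constraints, the sketch below uses iteratively reweighted least squares, which solves a sequence of weighted minimum-norm problems while gradually sharpening the smoothing parameter. Sizes and schedules are illustrative.

```python
import numpy as np

def irls_lp(A, y, p=0.5, iters=50, eps=1.0):
    """Iteratively reweighted least squares for min ||x||_p^p s.t. Ax = y
    (a standard surrogate for Lp minimization, p < 1; not the thesis's
    null-space quasi-Newton/CG algorithms)."""
    x = A.T @ np.linalg.solve(A @ A.T, y)        # minimum-L2-norm start
    for _ in range(iters):
        w = (x**2 + eps) ** (1 - p / 2)          # inverse weights W^{-1}
        AW = A * w                               # A @ diag(w)
        x = w * (A.T @ np.linalg.solve(AW @ A.T, y))
        eps = max(eps / 10, 1e-9)                # sharpen the smoothed pseudonorm
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 40, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
x_hat = irls_lp(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```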
4

Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach

Al-Rabah, Abdullatif R. 05 1900 (has links)
Recently, orthogonal frequency-division multiplexing (OFDM) has been adopted for high-speed wireless communications due to its robustness against multipath fading. However, one fundamental drawback of OFDM systems is their high peak-to-average power ratio (PAPR). Several techniques have been proposed for PAPR reduction, most of which require transmitter-based (pre-compensation) processing. Receiver-based alternatives, on the other hand, would save power and reduce transmitter complexity. With this in mind, a possible approach is to limit the amplitude of the OFDM signal to a predetermined threshold, which is equivalent to adding a sparse clipping signal, and then to estimate this clipping signal at the receiver in order to recover the original signal. In this work, we propose a Bayesian receiver-based low-complexity clipping signal recovery method for PAPR reduction. The method is able to i) effectively reduce the PAPR via a simple clipping scheme at the transmitter side, ii) use a Bayesian recovery algorithm to reconstruct the clipping signal at the receiver side by measuring part of the subcarriers, iii) perform well in the absence of statistical information about the signal (e.g., clipping level) and the noise (e.g., noise variance), and at the same time iv) remain energy efficient due to its low complexity. Specifically, the proposed recovery technique is implemented in a data-aided manner: it collects clipping information by measuring reliable data subcarriers, and thus makes full use of the spectrum for data transmission without the need for tone reservation. The study is extended further to discuss how the recovery of the clipping signal can be improved by exploiting features of practical OFDM systems, i.e., oversampling and the presence of multiple receivers. Simulation results demonstrate the superiority of the proposed technique over other recovery algorithms. The overall objective is to show that the receiver-based Bayesian technique is an effective and practical alternative to state-of-the-art PAPR reduction techniques.
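To make the "sparse clipping signal" idea concrete, the toy sketch below builds an OFDM symbol from random QPSK subcarriers, clips its magnitude at a threshold, and confirms that the additive clipping signal is sparse while the PAPR drops. Parameters are illustrative, and the Bayesian recovery step itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256                                          # subcarriers (illustrative)

# Random QPSK subcarriers -> time-domain OFDM symbol via IFFT.
X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)                  # unit average power

papr_db = lambda s: 10 * np.log10(np.max(np.abs(s)**2) / np.mean(np.abs(s)**2))

# Clip the amplitude at threshold tau: phase is kept, magnitude is limited.
tau = 1.5 * np.sqrt(np.mean(np.abs(x)**2))
mag = np.maximum(np.abs(x), 1e-12)               # guard against division by zero
x_clipped = np.where(np.abs(x) > tau, tau * x / mag, x)

c = x_clipped - x                                # additive clipping signal
print(f"PAPR: {papr_db(x):.2f} dB -> {papr_db(x_clipped):.2f} dB")
print(f"clipping signal support: {np.count_nonzero(np.abs(c) > 1e-9)} of {N} samples")
```

Only the few samples that exceed the threshold are nonzero in c, which is what makes sparse-recovery machinery applicable at the receiver.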
5

Remote-Sensed LIDAR Using Random Sampling and Sparse Reconstruction

Martinez, Juan Enrique Castorera October 2011 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / In this paper, we propose a new, low-complexity approach to the design of laser radar (LIDAR) systems for applications in which the system wirelessly transmits its data from a remote location back to a command center for reconstruction and viewing. Specifically, the proposed system collects random samples in different portions of the scene, with the sampling density controlled by the local scene complexity. The range samples are transmitted as they are acquired through a wireless communications link to a command center, where a constrained absolute-error optimization procedure of the type commonly used for compressive sensing/sampling is applied. The key difficulty in the proposed approach is estimating the local scene complexity without densely sampling the scene, which would increase the complexity of the LIDAR front end. We show here, using simulated data, that the complexity of the scene can be accurately estimated from the return pulse shape using a finite-moments approach. Furthermore, we find that such complexity estimates correspond strongly to the surface reconstruction error achieved by the constrained optimization algorithm with a given number of samples.
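The paper's exact finite-moments estimator is not reproduced; the sketch below illustrates the underlying idea by comparing the RMS width (square root of the normalized second central moment) of a simulated return pulse from a flat surface against one from a surface with multiple depths. The pulse models and numbers are invented for illustration.

```python
import numpy as np

def rms_width(t, p):
    """RMS width (sqrt of the normalized second central moment) of a
    return-pulse intensity profile -- a simple finite-moments indicator."""
    p = p / p.sum()                               # uniform grid: sums suffice
    mean = (t * p).sum()
    return np.sqrt(((t - mean)**2 * p).sum())

t = np.linspace(0.0, 20.0, 2000)                  # time axis, ns (illustrative)
gauss = lambda mu, sig: np.exp(-0.5 * ((t - mu) / sig)**2)

flat_return = gauss(10.0, 0.5)                    # single planar surface
rough_return = gauss(8.0, 0.5) + 0.7 * gauss(10.5, 0.5) + 0.5 * gauss(13.0, 0.5)

print(f"flat surface  RMS width: {rms_width(t, flat_return):.2f} ns")
print(f"rough surface RMS width: {rms_width(t, rough_return):.2f} ns")
# A broader return suggests higher local complexity, so the sampler would
# raise its local sampling density in that part of the scene.
```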
6

Convex and non-convex optimizations for recovering structured data: algorithms and analysis

Cho, Myung 15 December 2017 (has links)
Optimization theories and algorithms are used to efficiently find optimal solutions under constraints. In the era of “Big Data”, the amount of data is skyrocketing, and this overwhelms conventional techniques for solving large-scale and distributed optimization problems. By taking advantage of structural information in data representations, this thesis offers convex and non-convex optimization solutions to various large-scale optimization problems such as super-resolution, sparse signal processing, hypothesis testing, machine learning, and treatment planning for brachytherapy.

Super-resolution: Super-resolution aims to recover a signal expressed as a sum of a few Dirac delta functions in the time domain from measurements in the frequency domain. The challenge is that the possible locations of the delta functions lie in the continuous domain [0,1). To enhance recovery performance, we considered deterministic and probabilistic prior information on the locations of the delta functions and provided novel semidefinite programming formulations under that information. We also proposed block iterative reweighted methods to improve recovery performance without prior information. We further considered phaseless measurements, motivated by applications in optical microscopy and x-ray crystallography. By using the lifting method and introducing squared atomic norm minimization, we can achieve super-resolution using only low-frequency magnitude information. Finally, we proposed non-convex algorithms using structured matrix completion.

Sparse signal processing: L1 minimization is well known for promoting sparse structures in recovered signals. The Null Space Condition (NSC) for L1 minimization is a necessary and sufficient condition on sensing matrices such that a sparse signal can be uniquely recovered via L1 minimization. However, verifying NSC is a non-convex problem and known to be NP-hard. We proposed enumeration-based polynomial-time algorithms that provide performance bounds on NSC, and efficient algorithms that verify NSC precisely using the branch-and-bound method.

Hypothesis testing: Recovering the statistical structure of random variables is important in applications such as cognitive radio. Our goal is to distinguish two different types of random variables among n >> 1 random variables. Testing each random variable one by one takes a great deal of time and effort, so we proposed hypothesis testing using mixed measurements to reduce the sample complexity. We also designed efficient algorithms to solve large-scale problems.

Machine learning: When feature data are stored in a tree-structured network with communication delays, quickly finding an optimal solution to the regularized loss minimization problem is challenging. In this scenario, we studied a communication-efficient stochastic dual coordinate ascent method and its convergence analysis.

Treatment planning: In Rotating-Shield Brachytherapy (RSBT) for cancer treatment, there is a compelling need to obtain optimal treatment plans quickly enough for clinical use. However, due to the degrees of freedom in RSBT, finding an optimal treatment plan is difficult. For this, we designed a first-order dose optimization method based on the alternating direction method of multipliers, and reduced the execution time by a factor of around 18 compared to previous work.
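One ingredient behind the sparse-signal-processing part above can be shown concretely: the L1-minimization (basis pursuit) problem that NSC characterizes becomes a linear program once x is split into nonnegative parts. A sketch using scipy, with illustrative sizes; this is the standard formulation, not the thesis's NSC-verification algorithms.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP: write x = u - v with
    u, v >= 0 and minimize 1'(u + v)."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(3)
n, m, k = 80, 30, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
x_hat = basis_pursuit(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```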
7

Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications

Masood, Mudassir 05 1900 (has links)
Compressed sensing has been a very active area of research, and several elegant algorithms have been developed for the recovery of sparse signals in the past few years. However, most of these algorithms are either computationally expensive or make assumptions that are not suitable for all real-world problems. Recently, focus has shifted to Bayesian approaches that can perform sparse signal recovery at much lower complexity while invoking constraints and/or a priori information about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a distribution, typically Gaussian, as it makes many signal processing problems mathematically tractable. Though seemingly attractive, this assumption necessitates the estimation of the associated parameters, which can be hard if not impossible. In this thesis, we focus on this aspect of Bayesian recovery and present a framework to address the challenges mentioned above. The proposed framework allows Bayesian recovery of sparse signals while remaining agnostic to the distribution of the unknown sparse signal components. The algorithms based on this framework are agnostic to the signal statistics and utilize a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. We propose several algorithms based on this framework that utilize the structure present in signals for improved recovery. In addition to an algorithm that considers just the sparsity structure of sparse signals, tools that target additional structure in the recovery problem are proposed. These include algorithms for a) block-sparse signal estimation, b) joint reconstruction of several common-support sparse signals, and c) distributed estimation of sparse signals. Extensive experiments are conducted to demonstrate the power and robustness of the proposed sparse signal estimation algorithms. Specifically, we target the problems of a) channel estimation in massive MIMO and b) narrowband interference mitigation in SC-FDMA. We model these as sparse recovery problems and demonstrate how they can be solved naturally using the proposed algorithms.
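The thesis's distribution-agnostic Bayesian algorithms are not reproduced here; as a simple classical baseline for the block-sparse structure they target, the sketch below implements block orthogonal matching pursuit, which greedily selects the block whose columns correlate most strongly with the residual and refits by least squares. All sizes are illustrative.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Block orthogonal matching pursuit (a classical baseline, not the
    thesis's Bayesian method): pick the block with the largest residual
    correlation, then least-squares refit on all chosen blocks."""
    n = A.shape[1]
    blocks = [np.arange(b, b + block_size) for b in range(0, n, block_size)]
    chosen, r = [], y.copy()
    for _ in range(n_blocks_to_pick):
        scores = [np.linalg.norm(A[:, b].T @ r) for b in blocks]
        chosen.append(int(np.argmax(scores)))
        S = np.concatenate([blocks[j] for j in chosen])
        x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ x_S                 # residual after refit
    x = np.zeros(n)
    x[S] = x_S
    return x

rng = np.random.default_rng(4)
n, m, bs, kb = 120, 50, 4, 3                  # 30 blocks of size 4, 3 active
x_true = np.zeros(n)
for b in rng.choice(n // bs, kb, replace=False):
    x_true[b * bs:(b + 1) * bs] = rng.standard_normal(bs)
A = rng.standard_normal((m, n))
x_hat = block_omp(A, A @ x_true, bs, kb)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```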
8

Real Time SLAM Using Compressed Occupancy Grids For a Low Cost Autonomous Underwater Vehicle

Cain, Christopher Hawthorn 07 May 2014 (has links)
The research presented in this dissertation pertains to the development of a real-time SLAM solution that can be performed by a low-cost autonomous underwater vehicle equipped with low-cost and memory-constrained computing resources. The design of a custom rangefinder for underwater applications is presented. The rangefinder makes use of two laser line generators and a camera to measure the unknown distance to objects in an underwater environment. A visual odometry algorithm is introduced that uses a downward-facing camera to provide the underwater vehicle with localization information. The sensor suite composed of the laser rangefinder, the downward-facing camera, and a digital compass is verified, using the Extended Kalman Filter based solution to the SLAM problem along with the particle filter based solution known as FastSLAM, to ensure that it provides information accurate enough to solve the SLAM problem for our low-cost underwater vehicle. Next, an extension of the FastSLAM algorithm is presented that stores the map of the environment using an occupancy grid. The use of occupancy grids greatly increases the amount of memory required by the algorithm, so a version of the FastSLAM algorithm that stores the occupancy grids using the Haar wavelet representation is presented. Finally, a form of the FastSLAM algorithm is presented that stores the occupancy grid in compressed form to reduce the amount of memory required. Experimental results show that this algorithm achieves the same result as the one that stores the complete occupancy grid while using only 40% of the memory. / Ph. D.
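To make the Haar-wavelet storage idea concrete, the sketch below applies one level of the orthonormal 2D Haar transform to a toy occupancy grid and counts how few coefficients survive a small threshold; piecewise-constant maps concentrate their energy in a handful of coefficients. The grid, threshold, and single-level transform are illustrative simplifications, not the dissertation's full multi-level scheme.

```python
import numpy as np

def haar2d(g):
    """One level of the orthonormal 2D Haar transform on an even-sized grid:
    each 2x2 block maps to one average and three detail coefficients."""
    a, b = g[0::2, 0::2], g[0::2, 1::2]
    c, d = g[1::2, 0::2], g[1::2, 1::2]
    return np.block([[(a + b + c + d) / 2, (a - b + c - d) / 2],
                     [(a + b - c - d) / 2, (a - b - c + d) / 2]])

# Toy 16x16 occupancy grid: free space (0) with one rectangular obstacle (1).
grid = np.zeros((16, 16))
grid[4:10, 6:13] = 1.0

coeffs = haar2d(grid)
kept = np.abs(coeffs) > 1e-6                 # discard (near-)zero coefficients
print(f"grid cells stored raw     : {grid.size}")
print(f"nonzero Haar coefficients : {kept.sum()} "
      f"({100 * kept.sum() / grid.size:.0f}% of the grid)")
```

Only blocks that straddle the obstacle boundary produce detail coefficients, which is why occupancy grids compress so well under this representation.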
9

Identification of Interfering Signals in Software Defined Radio Applications Using Sparse Signal Reconstruction Techniques

Yamada, Randy Matthew 03 May 2013 (has links)
Software-defined radios have the agility and flexibility to tune performance parameters, allowing them to adapt to environmental changes, adapt to desired modes of operation, and provide varied functionality as needed. Traditional software-defined radios use a combination of conditional processing and software-tuned hardware to enable these features and will critically sample the spectrum to ensure that only the required bandwidth is digitized. While flexible, these systems are still constrained to perform only a single function at a time and to digitize a single frequency sub-band at a time, possibly limiting the radio's effectiveness. Radio systems commonly tune hardware manually or use software controls to digitize sub-bands as needed, critically sampling those sub-bands according to the Nyquist criterion. Recent technology advancements have enabled efficient and cost-effective over-sampling of the spectrum, allowing all bandwidths of interest to be captured for processing simultaneously, a process known as band-sampling. Simultaneous access to measurements from all of the frequency sub-bands enables both awareness of the spectrum and seamless operation between radio applications, which is critical to many applications. Further, more information may be obtained about the spectral content of each sub-band from measurements of other sub-bands, which could improve performance in applications such as detecting the presence of interference in weak signal measurements. This thesis presents a new method for confirming the source of detected energy in weak signal measurements by sampling them directly and then estimating their expected effects. First, we assume that the detected signal is located within the frequency band as measured, and then we assume that the detected signal is, in fact, interference perceived as a result of signal aliasing. By comparing the expected effects to the entire measurement and assuming the power spectral density of the digitized bandwidth is sparse, we demonstrate the capability to identify the true source of the detected energy. We also demonstrate the ability of the method to identify interfering signals not by explicitly sampling them, but rather by measuring the signal aliases that they produce. Finally, we demonstrate that by leveraging techniques developed in the field of Compressed Sensing, the method can recover signal aliases by analyzing less than 25 percent of the total spectrum. / Master of Science
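The ambiguity this thesis resolves can be demonstrated in a few lines: a tone above the Nyquist frequency of a given sampling rate lands at an in-band alias frequency and, within that band alone, is indistinguishable from a genuine in-band tone. A toy demonstration with invented frequencies:

```python
import numpy as np

fs = 100.0                                  # sampling rate, Hz (illustrative)
t = np.arange(1024) / fs

in_band = np.cos(2 * np.pi * 10.0 * t)      # genuine 10 Hz signal
out_band = np.cos(2 * np.pi * 90.0 * t)     # 90 Hz tone, above fs/2 = 50 Hz

for name, s in [("10 Hz in-band tone ", in_band), ("90 Hz out-of-band tone", out_band)]:
    spec = np.abs(np.fft.rfft(s))
    f_peak = np.fft.rfftfreq(len(s), 1 / fs)[np.argmax(spec)]
    print(f"{name}: spectral peak at {f_peak:.1f} Hz")
# Both peak at the same frequency: the 90 Hz tone aliases to |90 - fs| = 10 Hz,
# so a detector looking only at this band cannot tell them apart. The thesis
# disambiguates by predicting the expected alias effects and matching them
# against the full band-sampled measurement.
```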
10

Grassmannian Fusion Frames for Block Sparse Recovery and Its Application to Burst Error Correction

Mukund Sriram, N January 2013 (has links) (PDF)
Fusion frames and block sparse recovery are of interest in signal processing and communication applications. In these applications it is required that the fusion frame have some desirable properties. One such requirement is that the fusion frame be tight and that its subspaces form an optimal packing in a Grassmannian manifold. Such fusion frames are called Grassmannian fusion frames. Grassmannian frames are known to be optimal dictionaries for sparse recovery as they have minimum coherence. By analogy, Grassmannian fusion frames are potential candidates for optimal dictionaries in block sparse processing. The present work studies fusion frames in finite-dimensional vector spaces, assuming a specific structure useful in block sparse signal processing. The main focus of our work is the design of Grassmannian fusion frames and their implications for block sparse recovery. We consider burst error correction as an application of block sparsity and fusion frame concepts. We propose two new algebraic methods for designing Grassmannian fusion frames. The first method uses the Fourier matrix and difference sets to obtain a partial Fourier matrix that forms a Grassmannian fusion frame. This fusion frame has a specific structure, and its parameters are determined by the type of difference set used. The second method constructs Grassmannian fusion frames from Grassmannian frames that meet the Welch bound, using existing constructions of optimal Grassmannian frames. The method, while fairly general, requires that the dimension of the vector space be divisible by the dimension of the subspaces. A lower bound, an analog of the Welch bound, is derived for the block coherence of dictionaries, along with the conditions that must be satisfied to meet the bound. From these results we conclude that the matrices we construct are optimal for block sparse recovery from the block-coherence viewpoint. There is a strong relation between sparse signal processing and error control coding: burst errors are block sparse in nature, so we attempt to solve the burst error correction problem using block sparse signal recovery methods. Using the Grassmannian fusion frames we constructed as optimal dictionaries allows correction of the maximum possible number of errors when used in conjunction with reconstruction algorithms that exploit block sparsity. We also suggest a modification to improve the applicability of the technique and point out its relationship with a method that appeared previously in the literature. As an application example, we consider the use of the burst error correction technique for impulse noise cancellation in OFDM systems. Impulse noise is bursty in nature and severely degrades OFDM performance. The Grassmannian fusion frame constructed from the Fourier matrix and difference sets is ideal for this application, as it can be easily incorporated into the OFDM system.
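The flavor of the first construction can be checked numerically in a few lines: keep the rows of the 7-point DFT matrix indexed by the (7, 3, 1) difference set {1, 2, 4} and normalize the columns; every pair of columns then has the same correlation magnitude, which equals the Welch bound sqrt((N-M)/(M(N-1))). The sketch verifies the coherence property of this difference-set partial Fourier matrix (the Grassmannian-frame ingredient); the fusion-frame subspace-packing check is analogous but omitted, and this particular difference set is chosen purely for illustration.

```python
import numpy as np

N, M = 7, 3
D = [1, 2, 4]                                # (7, 3, 1) difference set mod 7

# Partial Fourier matrix: DFT rows indexed by D, columns scaled to unit norm.
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
A = F[D, :] / np.sqrt(M)

G = np.abs(A.conj().T @ A)                   # column correlation magnitudes
off_diag = G[~np.eye(N, dtype=bool)]
welch = np.sqrt((N - M) / (M * (N - 1)))

print("max |<a_i, a_j>| :", off_diag.max())
print("min |<a_i, a_j>| :", off_diag.min())  # equal => equiangular (Grassmannian)
print("Welch bound      :", welch)
```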
