1 |
Quantifying the Gains of Compressive Sensing for Telemetering Applications / Davis, Philip, 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / In this paper we study a new streaming Compressive Sensing (CS) technique that aims to replace high-speed analog-to-digital converters (ADCs) for certain classes of signals and to reduce the artifacts that arise from block processing when conventional CS is applied to continuous signals. We compare the performance of the streaming and block processing methods on several types of signals and quantify the signal reconstruction quality when packet loss is applied to the transmitted sampled data.
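The packet-loss experiment can be sketched in a few lines. This is not the paper's streaming technique, just a conventional block-CS recovery via orthogonal matching pursuit (OMP) with a random fraction of transmitted measurements dropped; the matrix sizes, sparsity level, and 25 % loss rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n, m, k = 256, 96, 5                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x

# Simulate packet loss: a fraction of the transmitted measurements never arrive.
keep = rng.random(m) > 0.25                    # ~25 % of samples lost
x_hat = omp(A[keep], y[keep], k)

err_db = 20 * np.log10(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
print(f"reconstruction error: {err_db:.1f} dB")
```

Dropping rows of a dense random sensing matrix degrades recovery gracefully rather than catastrophically, which is one reason CS is attractive over lossy telemetry links.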
2 |
Projected Wirtinger gradient descent for spectral compressed sensing / Liu, Suhui, 01 August 2017 (has links)
In modern data and signal acquisition, one main challenge arises from the growing scale of data. Data acquisition devices, however, are often limited by physical and hardware constraints, precluding sampling at the desired rate and precision. It is thus of great interest to reduce the sensing complexity while retaining recovery resolution, which is why we are interested in reconstructing a signal from a small number of randomly observed time-domain samples. The main contributions of this thesis are as follows.
First, we consider reconstructing a one-dimensional (1-D) spectrally sparse signal from a small number of randomly observed time-domain samples. The signal of interest is a linear combination of complex sinusoids at R distinct frequencies, which can assume any continuous values in the normalized frequency domain [0, 1). After converting the reconstruction of the spectrally sparse signal into a low-rank Hankel structured matrix completion problem, we propose an efficient feasible-point approach, named the projected Wirtinger gradient descent (PWGD) algorithm, to solve this structured matrix completion problem, and we give a convergence analysis of the proposed algorithm. We then apply the algorithm to a different formulation of structured matrix recovery: Hankel and Toeplitz mosaic structured matrices. The algorithm provides better recovery performance and faster signal recovery than existing algorithms, including atomic norm minimization (ANM) and Enhanced Matrix Completion (EMaC). We further accelerate the algorithm with a scheme inspired by FISTA, and extensive numerical experiments illustrate its efficiency. Unlike earlier approaches, our algorithm can solve problems of very large dimension efficiently. Moreover, we extend our algorithms to signal recovery from noisy samples. Finally, we aim to reconstruct a two-dimensional (2-D) spectrally sparse signal from a small number of randomly observed time-domain samples, extending our algorithms to high-dimensional signal recovery from noisy samples and multivariate frequencies.
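The Hankel-lift-and-project idea behind this line of work can be sketched with a Cadzow-style iteration: lift the signal to a Hankel matrix, hard-truncate to rank R, map back by anti-diagonal averaging, and re-impose the observed samples. This is a simplification, not the thesis's exact PWGD update; the signal length, frequencies, and 50 % sampling rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n, R = 64, 3                                  # signal length, number of sinusoids
freqs = np.array([0.11, 0.27, 0.53])          # continuous-valued frequencies in [0, 1)
t = np.arange(n)
x_true = sum(np.exp(2j * np.pi * f * t) for f in freqs)

mask = rng.random(n) < 0.5                    # observe roughly half of the samples
x = np.where(mask, x_true, 0)

p = n // 2                                    # Hankel pencil parameter
rows, cols = np.arange(p)[:, None], np.arange(n - p + 1)[None, :]

def hankel_avg(H):
    """Map a Hankel-shaped matrix back to a length-n signal by anti-diagonal averaging."""
    s, c = np.zeros(n, complex), np.zeros(n)
    np.add.at(s, rows + cols, H)
    np.add.at(c, (rows + cols).ravel(), 1)
    return s / c

for _ in range(300):
    H = x[rows + cols]                        # lift the signal estimate to a Hankel matrix
    U, S, Vh = np.linalg.svd(H, full_matrices=False)
    H_r = (U[:, :R] * S[:R]) @ Vh[:R]         # project onto rank-R matrices
    x = hankel_avg(H_r)
    x[mask] = x_true[mask]                    # enforce consistency with observed samples

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.2e}")
```

Because the frequencies live on a continuum, no discrete Fourier dictionary is involved; the rank constraint on the Hankel lift is what encodes spectral sparsity.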
3 |
Compressed Sampling for High Frequency Receivers Applications / Bi, Xiaofei, January 2011 (has links)
In the field of digital signal processing, the Shannon sampling theorem must be satisfied to recover a signal without distortion under traditional sampling. In some practical applications, however, this becomes an obstacle, because the costs of storage and transmission increase dramatically with the sampling frequency. How to reduce the number of samples taken in analog-to-digital conversion (ADC) for wideband signals, and how to compress the resulting large data volumes effectively, has therefore become a major subject of study. Recently, a novel technique called "compressed sampling", abbreviated CS, has been proposed to solve this problem. This method captures and represents compressible signals at a sampling rate significantly lower than the Nyquist rate. This paper not only surveys the theory of compressed sampling but also simulates CS in Matlab. In simulation, the error between the recovered signal and the original signal is around -200 dB. Attempts were also made to apply CS in experiments, where the error between the recovered and original signals is around -40 dB, showing that CS was realized to a certain extent. Furthermore, some related applications and suggestions for further work are discussed.
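A minimal version of such a simulation, here in Python with iterative soft-thresholding (ISTA) standing in for whatever solver the original Matlab code used, reports the recovery error in decibels the same way; the problem sizes and regularization weight below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, k = 200, 80, 4                 # ambient dimension, sub-Nyquist measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: iterative soft-thresholding for min_x 0.5*||y - Ax||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part's gradient
x = np.zeros(n)
for _ in range(3000):
    g = x + A.T @ (y - A @ x) / L    # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)   # soft threshold

err_db = 20 * np.log10(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
print(f"reconstruction error: {err_db:.1f} dB")
```

The noiseless simulation reaches a strongly negative error in dB, as in the thesis's simulated case; a hardware experiment adds quantization and analog noise, which is why its error floor is far higher.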
4 |
A dual wavelength fiber optic strain sensing system / Malik, Asif, 03 March 2009 (has links)
The extrinsic Fabry-Perot interferometer (EFPI) has been extensively used as a strain sensor in various applications. However, like other interferometric sensors, the EFPI suffers from ambiguity in detecting directional changes of the applied perturbation when the operating point is at a maximum or a minimum of the transfer function curve. Different methods and sensor configurations have been proposed to solve this problem. This thesis investigates the use of dual wavelength interferometry to overcome this limitation. Possible system configurations based on dual wavelength interferometry were considered, and the comprehensive design and implementation of a dual-laser time division multiplexed (TDM) system is presented. The system operates by alternately pulse-modulating two laser diodes that are closely spaced in center wavelength. Although the strain rate measurement capability of the system depends primarily on the speed of its hardware and the accuracy of its software, it is shown that it can be considerably enhanced by employing digital signal processing techniques. / Master of Science
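The dual wavelength idea can be illustrated numerically. The sketch below uses a two-beam EFPI approximation and illustrative wavelength values near 1310 nm (not the thesis's hardware); the beat, or "synthetic wavelength", phase is one simplified way to see why a second, closely spaced wavelength removes the directional ambiguity of a single fringe signal:

```python
import numpy as np

# Two closely spaced laser wavelengths (values are illustrative assumptions).
lam1, lam2 = 1310e-9, 1320e-9
gap = np.linspace(10e-6, 15e-6, 2000)          # Fabry-Perot gap swept by strain

# Interference phase seen at each wavelength (two-beam EFPI approximation).
phi1 = 4 * np.pi * gap / lam1
phi2 = 4 * np.pi * gap / lam2
I1, I2 = np.cos(phi1), np.cos(phi2)            # the two detected fringe signals

# A single-wavelength signal passes through many fringes over this sweep, so
# direction is ambiguous at every extremum...
fringes = int((phi1[-1] - phi1[0]) // (2 * np.pi))

# ...but the beat ("synthetic wavelength") phase changes by less than one
# fringe, so the direction of gap change is recoverable from the pair.
lam_synth = lam1 * lam2 / (lam2 - lam1)
phi_beat = 4 * np.pi * gap / lam_synth

print(f"fringes at lam1 alone: {fringes}")
print(f"synthetic wavelength: {lam_synth * 1e6:.0f} um")
```

The synthetic wavelength is much longer than either optical wavelength, which trades fringe resolution for an unambiguous measurement range.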
5 |
Remote-Sensed LIDAR Using Random Sampling and Sparse Reconstruction / Martinez, Juan Enrique Castorera, 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / In this paper, we propose a new, low complexity approach for the design of laser radar (LIDAR) systems for use in applications in which the system is wirelessly transmitting its data from a remote location back to a command center for reconstruction and viewing. Specifically, the proposed system collects random samples in different portions of the scene, and the density of sampling is controlled by the local scene complexity. The range samples are transmitted as they are acquired through a wireless communications link to a command center and a constrained absolute-error optimization procedure of the type commonly used for compressive sensing/sampling is applied. The key difficulty in the proposed approach is estimating the local scene complexity without densely sampling the scene and thus increasing the complexity of the LIDAR front end. We show here using simulated data that the complexity of the scene can be accurately estimated from the return pulse shape using a finite moments approach. Furthermore, we find that such complexity estimates correspond strongly to the surface reconstruction error that is achieved using the constrained optimization algorithm with a given number of samples.
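Complexity-controlled random sampling can be sketched as follows. The moving-RMS complexity proxy and all scene parameters below are illustrative assumptions, not the paper's finite-moments estimator, but they show the core behavior: rough scene regions receive a higher sampling density than flat ones:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic range scene: a flat region and a rough, high-complexity region.
x = np.linspace(0, 1, 1000)
scene = np.where(x < 0.5, 5.0, 5.0 + 0.5 * np.sin(60 * np.pi * x))

def local_complexity(z, win=50):
    """Crude complexity proxy: moving RMS of the first difference."""
    d2 = np.abs(np.diff(z, prepend=z[0])) ** 2
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(d2, kernel, mode="same"))

c = local_complexity(scene)
# Sampling probability proportional to local complexity, with a floor so that
# flat regions are still sparsely sampled.
p = 0.02 + 0.3 * c / c.max()
samples = rng.random(x.size) < p

flat = samples[x < 0.45].mean()
rough = samples[x > 0.55].mean()
print(f"sample density: flat={flat:.3f}, rough={rough:.3f}")
```

In the paper's setting the complexity estimate must come from the return pulse shape rather than from dense samples of the scene itself; the sketch sidesteps that harder estimation problem.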
6 |
Jitter measurement of high-speed digital signals using low-cost signal acquisition hardware and associated algorithms / Choi, Hyun, 06 July 2010 (has links)
This dissertation proposes new methods for measuring the jitter of high-speed digital signals. The proposed approach is twofold. First, a low-speed jitter measurement environment is realized by using a jitter expansion sensor. This sensor uses a low-frequency reference signal, as compared to the high-frequency reference signals required by standard high-speed jitter measurement instruments, and generates a low-speed output signal that contains the jitter content of the original high-speed digital signal. The sensor output can be easily acquired with a low-speed digitizer and then analyzed for jitter. This low-speed measurement environment enhances the reliability of current jitter measurement approaches, since the low-speed reference and sensor output signals can be generated and applied to the measurement system with reduced additive noise. The second approach is direct digitization without a sensor, in which a high-speed digital signal with jitter is incoherently sub-sampled and then reconstructed in the discrete-time domain using digital signal reconstruction algorithms. The core idea is to remove the time/phase synchronization hardware required in standard sampling-based jitter measurement instruments by adopting incoherent rather than coherent sub-sampling, and to reduce the need for a high-speed digitizer by sub-sampling a periodic signal over many of its realizations. In the proposed technique, the signal reconstruction algorithms substitute for the time/phase synchronization hardware. When the reconstructed signal is analyzed for jitter in digital post-processing, a self-reference signal is extracted from it using wavelet denoising methods. This digitally generated self-reference signal alleviates the need for external analog reference signals.
The self-reference signal is used as a timing reference when timing dislocations of the reconstructed signal are measured in the discrete-time domain. Various types of jitter of the original high-speed reference signals can be estimated using the proposed jitter analysis algorithms.
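The final timing-analysis step can be illustrated with a generic time-interval-error (TIE) estimate: measure each edge's deviation from a best-fit linear ramp, which plays the role of the self-reference timing. This is a simplified sketch, not the dissertation's algorithm; the clock rate and jitter level are assumed:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ideal edges of a 1 GHz clock, perturbed by 2 ps RMS random jitter.
period = 1e-9
n_edges = 10_000
ideal = np.arange(n_edges) * period
jitter_rms_true = 2e-12
edges = ideal + rng.normal(0, jitter_rms_true, n_edges)

# Time-interval-error (TIE) jitter: deviation of each measured edge from a
# best-fit linear ramp, which absorbs any frequency/phase offset of the
# reference, in the spirit of a self-extracted timing reference.
k = np.arange(n_edges)
slope, intercept = np.polyfit(k, edges, 1)
tie = edges - (slope * k + intercept)
jitter_rms_est = tie.std()

print(f"estimated RMS jitter: {jitter_rms_est * 1e12:.2f} ps")
```

Fitting the ramp rather than assuming the nominal period is what frees the measurement from an external analog reference.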
7 |
DVB-T based bistatic passive radars in noisy environments / Mahfoudia, Osama, 02 October 2017 (has links) (PDF)
Passive coherent location (PCL) radars employ illuminators of opportunity to detect and track targets. This silent operating mode provides many advantages, such as low cost and immunity to interception. Many radiation sources have been exploited as illumination sources, such as broadcasting and telecommunication transmitters. The classical architecture of a bistatic PCL radar involves two receiving channels: a reference channel and a surveillance channel. The reference channel captures the direct-path signal from the transmitter, and the surveillance channel collects the possible target echoes. The two major challenges for PCL radars are the reference signal noise and the static clutter in the surveillance signal. A noisy reference signal degrades the detection probability by increasing the noise-floor level at the detection filter output, and the static clutter present in the surveillance signal reduces the detector dynamic range and buries low-magnitude echoes. In this thesis, we consider a PCL radar based on digital video broadcasting-terrestrial (DVB-T) signals, and we propose a set of improved methods to deal with the reference signal noise and the static clutter in the surveillance signal. DVB-T signals constitute an excellent candidate as an illumination source for PCL radars: they are characterized by a wide bandwidth and a high radiated power. In addition, they offer the possibility of reconstructing the reference signal to enhance its quality, and they allow straightforward static clutter suppression in the frequency domain. This thesis proposes an optimum method for reference signal reconstruction and an improved method for static clutter suppression. The optimum reference signal reconstruction minimizes the mean square error between the reconstructed signal and the exact one, and the improved static clutter suppression method exploits the possibility of estimating the propagation channel.
These two methods extend the feasibility of a single-receiver PCL radar, in which the reference signal is extracted from the direct-path signal present in the surveillance signal. / Doctorate in Engineering Sciences and Technology / info:eu-repo/semantics/nonPublished
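The detection filter at the heart of such a system is the cross-ambiguity function between the reference and surveillance channels. A minimal one-Doppler-cut sketch, with an assumed noise-free reference and illustrative delay, Doppler, and SNR values, looks like this:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 4096
fs = 1.0e6                                     # illustrative sample rate
# Stand-in for a reconstructed, noise-free reference signal.
ref = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Surveillance channel: an attenuated target echo, delayed by 37 samples with
# a small Doppler shift, buried in noise.
delay, doppler = 37, 900.0                     # samples, Hz
t = np.arange(n) / fs
echo = 0.1 * np.roll(ref, delay) * np.exp(2j * np.pi * doppler * t)
surv = echo + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# One Doppler cut of the cross-ambiguity function: compensate the Doppler
# hypothesis, then correlate against the reference at every delay via FFT.
comp = surv * np.exp(-2j * np.pi * doppler * t)
caf = np.abs(np.fft.ifft(np.fft.fft(comp) * np.conj(np.fft.fft(ref))))

print(f"detected delay: {int(np.argmax(caf))} samples")
```

A noisy reference raises the floor of this correlation, which is exactly why the thesis invests in reconstructing the DVB-T reference before forming the detection filter.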
8 |
Real Time SLAM Using Compressed Occupancy Grids For a Low Cost Autonomous Underwater Vehicle / Cain, Christopher Hawthorn, 07 May 2014 (has links)
The research presented in this dissertation pertains to the development of a real time SLAM solution that can be performed by a low cost autonomous underwater vehicle equipped with low cost and memory constrained computing resources. The design of a custom rangefinder for underwater applications is presented. The rangefinder makes use of two laser line generators and a camera to measure the unknown distance to objects in an underwater environment. A visual odometry algorithm is introduced that makes use of a downward facing camera to provide the underwater vehicle with localization information. The sensor suite composed of the laser rangefinder, downward facing camera, and a digital compass is verified, using the Extended Kalman Filter based solution to the SLAM problem along with the particle filter based solution known as FastSLAM, to ensure that it provides information that is accurate enough to solve the SLAM problem for our low cost underwater vehicle. Next, an extension of the FastSLAM algorithm is presented that stores the map of the environment using an occupancy grid. The use of occupancy grids greatly increases the amount of memory required to perform the algorithm, so a version of the FastSLAM algorithm that stores the occupancy grids using the Haar wavelet representation is presented. Finally, a form of the FastSLAM algorithm is presented that stores the occupancy grid in compressed form to reduce the amount of memory required to perform the algorithm. Experimental results show that the same result can be achieved as that produced by the algorithm storing the complete occupancy grid, using only 40% of the memory required to store it. / Ph. D.
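The wavelet-compression idea can be sketched with a hand-rolled one-level 2-D Haar transform on a toy occupancy grid. The grid contents and threshold below are illustrative assumptions, and a full system would likely apply more transform levels to live SLAM maps, but the sketch shows why mostly empty grids compress so well:

```python
import numpy as np

def haar2d(g):
    """One level of the 2-D Haar transform (average/difference on rows, then columns)."""
    a = (g[0::2] + g[1::2]) / 2
    d = (g[0::2] - g[1::2]) / 2
    ga = np.vstack([a, d])
    a = (ga[:, 0::2] + ga[:, 1::2]) / 2
    d = (ga[:, 0::2] - ga[:, 1::2]) / 2
    return np.hstack([a, d])

def ihaar2d(h):
    """Inverse of haar2d: undo the column step, then the row step."""
    half = h.shape[1] // 2
    a, d = h[:, :half], h[:, half:]
    g = np.empty_like(h)
    g[:, 0::2], g[:, 1::2] = a + d, a - d
    half = h.shape[0] // 2
    a, d = g[:half], g[half:]
    out = np.empty_like(h)
    out[0::2], out[1::2] = a + d, a - d
    return out

# Toy occupancy grid in log-odds form: mostly "unknown" (0) with two walls.
grid = np.zeros((64, 64))
grid[20, 10:50] = 4.0
grid[10:40, 50] = 4.0

coeffs = haar2d(grid)
sparse = np.where(np.abs(coeffs) > 0.5, coeffs, 0)   # keep only significant coefficients
ratio = np.count_nonzero(sparse) / grid.size
recon = ihaar2d(sparse)

print(f"stored coefficients: {100 * ratio:.1f} % of the full grid")
```

Only the coefficients near the walls survive thresholding, so storing the sparse coefficient set in place of the dense grid yields the kind of memory reduction the dissertation reports.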
9 |
Identification of Interfering Signals in Software Defined Radio Applications Using Sparse Signal Reconstruction Techniques / Yamada, Randy Matthew, 03 May 2013 (has links)
Software-defined radios have the agility and flexibility to tune performance parameters, allowing them to adapt to environmental changes, adapt to desired modes of operation, and provide varied functionality as needed. Traditional software-defined radios use a combination of conditional processing and software-tuned hardware to enable these features and will critically sample the spectrum to ensure that only the required bandwidth is digitized. While flexible, these systems are still constrained to perform only a single function at a time and to digitize a single frequency sub-band at a time, possibly limiting the radio's effectiveness.
Radio systems commonly tune hardware manually or use software controls to digitize sub-bands as needed, critically sampling those sub-bands according to the Nyquist criterion. Recent technology advancements have enabled efficient and cost-effective over-sampling of the spectrum, allowing all bandwidths of interest to be captured for processing simultaneously, a process known as band-sampling. Simultaneous access to measurements from all of the frequency sub-bands enables both awareness of the spectrum and seamless operation between radio applications, which is critical to many applications. Further, measurements of other sub-bands can provide additional information about the spectral content of each sub-band, which could improve performance in applications such as detecting the presence of interference in weak signal measurements.
This thesis presents a new method for confirming the source of detected energy in weak signal measurements by sampling them directly, then estimating their expected effects. First, we assume that the detected signal is located within the frequency band as measured, and then we assume that the detected signal is, in fact, interference perceived as a result of signal aliasing. By comparing the expected effects to the entire measurement and assuming the power spectral density of the digitized bandwidth is sparse, we demonstrate the capability to identify the true source of the detected energy. We also demonstrate the ability of the method to identify interfering signals not by explicitly sampling them, but rather by measuring the signal aliases that they produce. Finally, we demonstrate that by leveraging techniques developed in the field of Compressed Sensing, the method can recover signal aliases by analyzing less than 25 percent of the total spectrum. / Master of Science
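The alias-identification step rests on standard frequency-folding arithmetic: a tone at f Hz sampled at fs Hz appears at the folded frequency computed below. The sample rate and interferer frequency are illustrative assumptions, and the sketch verifies the prediction by actually sub-sampling the tone:

```python
import numpy as np

fs = 100e6                                     # illustrative band-sampling rate

def alias_freq(f, fs):
    """Apparent frequency of a tone at f Hz after sampling at fs Hz."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

# An out-of-band interferer at 273 MHz folds into the first Nyquist zone:
f_int = 273e6
f_alias = alias_freq(f_int, fs)
print(f"{f_int / 1e6:.0f} MHz interferer appears at {f_alias / 1e6:.0f} MHz")

# Verify by sub-sampling the tone and locating its spectral peak.
n = 4096
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_int * t)
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
f_peak = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]
print(f"measured peak: {f_peak / 1e6:.2f} MHz")
```

Running the same prediction under both hypotheses (in-band signal versus folded interferer) and comparing against the full band-sampled measurement is what lets the method attribute the detected energy to its true source.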
10 |
Signal processing issues related to deterministic sea wave prediction / Abusedra, Lamia, January 2009 (has links)
The bulk of the research work in wave-related areas considers sea waves as stochastic objects, leading to wave forecasting techniques based on statistical approaches. Due to the complex dynamics of sea wave behaviour, statistical techniques are probably the only viable approach when forecasting over substantial spatial and temporal intervals. This view changes, however, when limiting the forecasting time to a few seconds, or when the goal is to estimate the quiescent periods that occur due to the beating interaction of the wave components, especially in narrow-band seas. This work considers the multidisciplinary research field of deterministic sea wave prediction (DSWP), exploring different aspects of DSWP associated with shallow angle LIDAR systems. The main goal of this project is to study and develop techniques to reduce the prediction error. The first part deals with data problems specific to shallow angle LIDAR systems, while the remainder of this work concentrates on the prediction system and propagation models regardless of the source of the data. The two main LIDAR data problems addressed in this work are the non-uniform sample distribution and the shadow region problem. An empirical approach is used to identify the characteristics of shadow regions associated with different wave conditions and different laser positions. A new reconstruction method is developed to address the non-uniform sampling problem; it is shown that including more information about the geometry and the dynamics of the problem reduces the reconstruction error considerably. The frequency domain approach to the wave propagation model is then examined, and the effect of energy leakage on the prediction error is illustrated. Two approaches are explored to reduce this error. First, a modification of the simple dispersive phase-shifting filter is tested and shown to improve the prediction. The second approach reduces the energy leakage with an iterative Window-Expansion method.
Significant reduction of the prediction error is achieved with this method in comparison to the End-Matching method typically used in DSWP systems. The final step in examining the frequency domain approach is to define the prediction region boundaries associated with a given prediction accuracy. The second propagation model approach is the time/space domain approach, in which the convolution of the measured data with the propagation filter impulse response is used for prediction. In this part of the work, the properties of these impulse responses are identified and found to be quite complicated. The relations between the impulse response properties (duration and shift) and the prediction time and distance are studied, and these properties are quantified by polynomial approximation and non-symmetric filter analysis. A new method is shown to associate the impulse response properties with the prediction region of both the Fixed Time and Fixed Point modes.
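The frequency-domain propagation model can be sketched for deep water: FFT the measured record, advance each component's phase by k(ω)d according to the dispersion relation, and inverse-FFT to predict the elevation down-wave. In the sketch the components are placed exactly on FFT bins, so the leakage error the thesis addresses does not appear; all wave parameters are illustrative assumptions:

```python
import numpy as np

g = 9.81
n, dt = 1024, 0.5                              # 512 s record sampled at 2 Hz
t = np.arange(n) * dt

# A few deep-water components, placed exactly on FFT bins to avoid leakage.
bins = (46, 50, 55)
amps, phases = (0.8, 0.5, 0.3), (0.3, 1.7, 2.9)
omegas = [2 * np.pi * m / (n * dt) for m in bins]

def elevation(t, x):
    # deep-water dispersion relation: k = omega**2 / g
    return sum(a * np.cos(w * t - (w**2 / g) * x + p)
               for a, w, p in zip(amps, omegas, phases))

measured = elevation(t, 0.0)                   # wave record at the sensor

# Dispersive phase-shift filter: advance each spectral component's phase by
# k(omega) * d to predict the surface elevation d metres down-wave.
d = 200.0
omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
spec = np.fft.rfft(measured)
predicted = np.fft.irfft(spec * np.exp(-1j * omega**2 / g * d), n)

err = np.linalg.norm(predicted - elevation(t, d)) / np.linalg.norm(elevation(t, d))
print(f"relative prediction error: {err:.1e}")
```

With real, finite records the component frequencies do not sit on bins, and the resulting leakage is exactly what the Window-Expansion method is designed to suppress.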