
Compressive Sensing Using Random Demodulation

The new theory of Compressive Sensing allows wideband signals to be sampled at a rate much closer to their information content, far below the Nyquist rate required by Shannon's sampling theorem. This "Analog to Information Conversion" offers relief for already overloaded Analog to Digital converters [15]. Although the locations of the frequencies cannot be known a priori, the expected sparseness of the signal can be, and it is this sparseness that makes the method possible.
This very low sampling rate comes at a cost: the reduction in sampling rate is traded for an increase in computational load. In contrast to the uniform sampling of conventional acquisition, recovery requires nonlinear methods, so convex programming algorithms become necessary to reconstruct the signal.
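As a concrete illustration of the kind of convex program involved, a standard basis pursuit formulation (not necessarily the exact one used in the thesis; the symbols Phi, Psi, s, y, and N are introduced here only for illustration) recovers the sparse coefficient vector by solving

```latex
\min_{s \in \mathbb{R}^{N}} \|s\|_{1}
\quad \text{subject to} \quad \Phi \Psi s = y ,
```

where y holds the low-rate samples, Phi models the acquisition hardware, and Psi is the basis in which the signal is sparse.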
This thesis tests the new theory using the Random Demodulation data acquisition scheme set forth in [1]. The scheme involves a demodulation step that spreads the information content across the spectrum; an anti-aliasing filter then prepares the signal for an Analog to Digital converter that samples it at a very low rate. The acquisition process is simulated on a computer, the data are run through an optimization algorithm, and the recovery results are analyzed. The results are then compared with the theoretical and empirical Compressive Sensing results of others.
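The following is a minimal sketch of how such a simulation might look, assuming a signal that is sparse in a DCT basis, a +/-1 chipping sequence, and an integrate-and-dump filter standing in for the anti-aliasing filter and slow ADC; the variable names and the scipy-based basis pursuit solver are illustrative choices, not the thesis's actual code.

```python
# Minimal sketch of a random-demodulator acquisition followed by l1 recovery.
# Assumptions (illustrative, not from the thesis): the signal is sparse in a
# DCT basis, the chipping sequence is +/-1, and the slow ADC is modeled as an
# integrate-and-dump filter.
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n_nyquist = 256        # Nyquist-rate samples in the observation window
n_measurements = 64    # slow-ADC samples (sub-Nyquist by a factor of 4)
sparsity = 5           # number of active DCT coefficients

# Sparse coefficient vector s and the time-domain signal x = Psi s.
s_true = np.zeros(n_nyquist)
support = rng.choice(n_nyquist, sparsity, replace=False)
s_true[support] = rng.standard_normal(sparsity)
psi = idct(np.eye(n_nyquist), norm="ortho", axis=0)   # columns: sparsifying basis
x = psi @ s_true

# Random demodulation: multiply by a +/-1 chipping sequence at the Nyquist
# rate, then integrate-and-dump down to the slow sampling rate.
chips = rng.choice([-1.0, 1.0], size=n_nyquist)
block = n_nyquist // n_measurements
phi = np.zeros((n_measurements, n_nyquist))
for m in range(n_measurements):
    phi[m, m * block:(m + 1) * block] = chips[m * block:(m + 1) * block]
y = phi @ x

# Basis pursuit, min ||s||_1 subject to (Phi Psi) s = y, written as a linear
# program with s = u - v, u, v >= 0, and solved with scipy's HiGHS backend.
a = phi @ psi
c = np.ones(2 * n_nyquist)
a_eq = np.hstack([a, -a])
res = linprog(c, A_eq=a_eq, b_eq=y, bounds=(0, None), method="highs")
s_hat = res.x[:n_nyquist] - res.x[n_nyquist:]

print("relative recovery error:",
      np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```

A dedicated l1 solver or a greedy method such as orthogonal matching pursuit could replace the linear-programming step; the reformulation above is simply a self-contained way to express basis pursuit with standard scientific-Python tools.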

Identifier: oai:union.ndltd.org:UTENN/oai:trace.tennessee.edu:utk_gradthes-1049
Date: 01 August 2009
Creators: Boggess, Benjamin Scott
Publisher: Trace: Tennessee Research and Creative Exchange
Source Sets: University of Tennessee Libraries
Detected Language: English
Type: text
Format: application/pdf
Source: Masters Theses
