101
Empirical-Bayes Approaches to Recovery of Structured Sparse Signals via Approximate Message Passing
Vila, Jeremy P., 22 May 2015
No description available.
102
Architecture for Multi Input Multi Output Compressive Radars
Baskar, Siddharth, January 2017
No description available.
103
Accurate code phase estimation of LOS GPS signal using Compressive Sensing and multipath mitigation using interpolation/MEDLL
Viswa, Chaithanya, 19 October 2015
No description available.
104
Approximate Message Passing for Multi-Carrier Transmission over Doubly Selective Channels
Meng, Dong, 19 July 2012
No description available.
105
OFDM Coupled Compressive Sensing Algorithm for Stepped Frequency Ground Penetrating Radar
Metwally, Mohamed, 01 January 2014
Dating back as far as 1940, the US road and bridge infrastructure system has earned a reputation for strategically connecting half a continent. As monumental as that status is, so is the infrastructure's rate of deterioration, with the average bridge age now a disconcerting 50 years. Beyond visual inspection, a battery of non-destructive tests has been developed to assess structural faults and detect laminations, so that preventive measures can be taken preemptively.
The mainstream, commercially favored test is impulse time-domain ground penetrating radar (GPR), in which an extremely short, high-voltage pulse is used to visualize cross-sections of bridge decks. While effective and non-disruptive to traffic flow, impulse radar suffers from major drawbacks, namely its limited dynamic range and the high cost of system manufacturing. A less prominent yet highly effective system, stepped frequency continuous wave (SFCW) GPR, was developed to address these drawbacks. Used mostly in research centers and academia, SFCW boasts a high dynamic range and a low manufacturing cost while producing results comparable, if not identical, to those of its impulse counterpart. However, slow data acquisition is an inherent problem in SFCW GPR, and it appears to keep impulse radar in the lead for production and development.
I propose a novel approach to improve SFCW's data acquisition speed and overall scanning efficiency. The approach combines an encoding method called orthogonal frequency division multiplexing (OFDM) with an emerging paradigm called compressive sensing (CS). In OFDM, a digital data stream, the transmit signal, is encoded onto multiple carrier frequencies; the carriers are combined so that they remain orthogonal, mitigating interference between them. In CS, a signal can potentially be reconstructed from far fewer samples than the Nyquist rate requires. A novel SFCW GPR architecture coupled with the OFDM-CS algorithm is proposed and evaluated using ideal channels and realistically modelled bridge decks.
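A minimal sketch of the CS recovery idea invoked in this abstract: reconstructing a synthetic sparse signal from far fewer random measurements than its length. The signal sizes, the random Gaussian sensing matrix and the orthogonal matching pursuit solver are illustrative assumptions, not the thesis's OFDM-CS algorithm.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                               # signal length, measurements (<< n), sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)       # random sensing matrix
y = A @ x                                          # far fewer measurements than samples
print(np.linalg.norm(omp(A, y, k) - x))            # reconstruction error is near zero
```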
106
Practical approaches to reconstruction and analysis for 3D and dynamic 3D computed tomography
Coban, Sophia, January 2017
The problem of reconstructing an image from a set of tomographic data is not new, nor is it lacking attention. However, there is still a distinct gap between the mathematicians and the experimental scientists working in the computed tomography (CT) imaging community. One of the aims of this thesis is to bridge this gap by applying mathematical reconstruction algorithms and analysis approaches to practical CT problems.

The thesis begins with an extensive analysis for assessing the suitability of reconstruction algorithms for a given problem. The paper presented examines the idea of extracting physical information from a reconstructed sample and comparing it against the known sample characteristics to determine the accuracy of a reconstructed volume. Various test cases are studied that are relevant to both mathematicians and experimental scientists, including the variation in quality of the reconstructed volume as the dose is reduced and the implementation of the level set evolution method used as part of a simultaneous reconstruction and segmentation technique. The work shows that assessing physical attributes yields more accurate conclusions and allows further analysis into interesting questions in CT. This theme is continued throughout the thesis.

Recent results in compressive sensing (CS) gained attention in the CT community because they indicate the possibility of obtaining an accurate reconstruction of a sparse image from a severely limited or reduced amount of measured data. The literature produced so far has not shown that CS directly guarantees a successful recovery in X-ray CT, and it is still unclear under which conditions a successful sparsity-regularized reconstruction can be achieved. The work presented in the thesis aims to answer this question in a practical setting and seeks to establish a direct connection between the success of sparsity regularization methods and the sparsity level of the image, in the spirit of CS. Using this connection, one can determine the sufficient number of measurements to collect from the sparsity of the image alone. Such a link was found in a previous study using simulated data, and the work is repeated here with experimental data in which the sparsity level of the scanned object varies. The preliminary work presented here verifies the results from simulated data, showing an "almost-linear" relationship between the sparsity of the image and the amount of data sufficient for a successful sparsity-regularized reconstruction.

Several unexplained artefacts are noted in the literature as the 'partial volume', the 'exponential edge gradient' or the 'penumbra' effect, with no clear explanation of their cause or established techniques to remove them. The work presented in this paper shows that these artefacts are due to a non-linearity in the measured data, which comes from the set-up of the system, the scattering of rays, or the dependency of linear attenuation on wavelength in the polychromatic case. However, the non-linearity effect can be detected even in monochromatic CT systems. The paper shows that in some cases the non-linearity effect is too large to ignore, and the reconstruction problem should be adapted to solve a non-linear problem. We derive this non-linear problem and solve it using a numerical optimization technique for both simulated and real gamma-ray data. When compared to reconstructions obtained using the standard linear model, the non-linear reconstructed images show clear improvements in that the non-linear effect is largely eliminated.

The thesis finishes with a highlight article in the special issue of Solid Earth named "Pore-scale tomography & imaging - applications, techniques and recommended practice". The paper presents a major technical advancement in dynamic 3D CT data acquisition, where the latest hardware and an optimal data acquisition plan are applied to make ultra-fast 3D volume acquisition possible. The experiment comprised fast, free-falling water-saline drops travelling through a pack of rock grains with varying porosities. The imaging work was enhanced by the use of iterative methods and by the physical quantification analysis performed. The data acquisition and imaging work is the first in the field to capture a free-falling drop, and the imaging clearly shows the fluid interaction with speed, gravity and, more importantly, the inter- and intra-grain fluid transfers.
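As standard background on why the usual CT model is linear and where one common non-linearity enters (the polychromatic case mentioned above), the sketch below gives the textbook Beer-Lambert relations; the thesis derives its own non-linear model, which also accounts for set-up and scattering effects and is not spelled out in the abstract.

```latex
% Monochromatic beam: the log-transform yields the usual linear system in the attenuation image x.
I \;=\; I_0\, e^{-Ax}
\quad\Longrightarrow\quad
y \;:=\; -\ln\!\left(I / I_0\right) \;=\; A x .
% Polychromatic beam: the energy integral cannot be linearized by a logarithm,
% so the measurement is a genuinely non-linear function of the energy-dependent image x(E).
I \;=\; \int I_0(E)\, e^{-A\, x(E)}\, \mathrm{d}E .
```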
107
Non-uniform sampling: algorithms and architectures
Luo, Chenchi, 09 November 2012
Modern signal processing applications emerging in the telecommunication and instrumentation industries have placed an increasing demand on ADCs for higher speed and resolution. The most fundamental challenge in this progress lies at the heart of classic signal processing: the Shannon-Nyquist sampling theorem, which states that, when sampling uniformly, there is no way to increase the upper frequency in the signal spectrum and still unambiguously represent the signal except by raising the sampling rate. This thesis is dedicated to exploring ways to break through the Shannon-Nyquist sampling rate by applying non-uniform sampling techniques.
Time interleaving is probably the most intuitive way to parallelize the uniform sampling process in order to achieve a higher sampling rate. Unfortunately, channel mismatches make the time-interleaved ADC (TIADC) an instance of a recurrent non-uniform sampling system whose non-uniformities are detrimental to performance and need to be calibrated. Accordingly, this thesis proposes a flexible and efficient architecture to compensate for the channel mismatches in the TIADC system. As a key building block of the calibration architecture, the design of the Farrow-structured adjustable fractional delay (FD) filter has been investigated in detail. A new modified Farrow structure is proposed to design adjustable FD filters optimized for a given range of bandwidths and fractional delays. The application of the Farrow structure is not limited to adjustable fractional delay filters: it can also be used to implement adjustable lowpass, highpass and bandpass filters as well as adjustable multirate filters. This thesis further extends the Farrow structure to the design of filters with adjustable polynomial phase responses.
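To illustrate the Farrow idea referred to here (fixed FIR subfilters whose outputs are combined by powers of the delay parameter), the sketch below builds a cubic Lagrange fractional-delay filter in Farrow form. The Lagrange design, polynomial order and fitting grid are illustrative assumptions, not the modified structure proposed in the thesis.

```python
import numpy as np

def lagrange_fd_taps(D, N=3):
    """Taps of an order-N Lagrange fractional-delay FIR for a total delay of D samples."""
    n = np.arange(N + 1)
    h = np.empty(N + 1)
    for k in range(N + 1):
        m = n[n != k]
        h[k] = np.prod((D - m) / (k - m))
    return h

def farrow_coeffs(N=3, P=3, grid=33):
    """Fit each tap as a degree-P polynomial in the fractional delay d (Farrow subfilters)."""
    ds = np.linspace(0.0, 1.0, grid)
    H = np.stack([lagrange_fd_taps((N - 1) / 2 + d, N) for d in ds])   # grid x (N+1) taps
    V = np.vander(ds, P + 1, increasing=True)                          # grid x (P+1) powers of d
    C, *_ = np.linalg.lstsq(V, H, rcond=None)                          # row p = subfilter for d**p
    return C

def farrow_filter(x, d, C):
    """Run the Farrow structure: fixed FIR branches combined by powers of d (Horner's rule)."""
    branches = [np.convolve(x, c) for c in C]
    y = branches[-1]
    for b in branches[-2::-1]:
        y = y * d + b
    return y

# Delay a slow sinusoid by 0.3 samples; total group delay is (N-1)/2 + d = 1.3 samples.
t = np.arange(200)
x = np.sin(2 * np.pi * 0.05 * t)
y = farrow_filter(x, 0.3, farrow_coeffs())
print(np.max(np.abs(y[10:190] - np.sin(2 * np.pi * 0.05 * (t[10:190] - 1.3)))))  # small error
```

Because the Lagrange taps are exact cubic polynomials in the delay, the degree-3 fit reproduces them exactly; only the delay parameter d changes at run time while the subfilters stay fixed.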
Inspired by the theory of compressive sensing, this thesis also uses randomization as a means to overcome the limit of the Nyquist rate. It investigates the impact of random sampling intervals or jitters on the power spectrum of the sampled signal and shows that the aliases of the original signal can be shaped by choosing an appropriate probability distribution of the sampling intervals or jitters, so that the aliases can be viewed as a source of noise in the signal power spectrum. A new theoretical framework is established to associate the probability mass function of the random sampling intervals or jitters with this alias-shaping effect. Based on this framework, the thesis proposes three random sampling architectures, i.e., SAR ADC, ramp ADC and level-crossing ADC, that can easily be implemented from the corresponding standard ADC architectures. Detailed models and simulations are established to verify the effectiveness of the proposed architectures. A new reconstruction algorithm, called successive sine matching pursuit, is also proposed to recover a class of spectrally sparse signals from a sparse set of non-uniform samples onto a denser uniform time grid, so that classic signal processing techniques can be applied afterwards.
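A toy illustration of the alias-shaping effect described above, under assumed numbers (average rate, tone frequency and uniform jitter of half a sample period): with uniform sampling, a tone above the Nyquist frequency folds to a sharp false peak, while random jitter spreads that alias into a noise-like floor without weakening the true tone. This is only a demonstration of the phenomenon, not the thesis's framework or its successive sine matching pursuit algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, K, f0 = 1.0, 4000, 1.37                 # average rate, sample count, tone above fs/2
n = np.arange(K)
t_uniform = n / fs
t_jittered = t_uniform + rng.uniform(-0.5, 0.5, K) / fs   # uniform jitter, +- half a period

def tone_spectrum(t, f, f_probe):
    """Magnitude of the non-uniform DFT of cos(2*pi*f*t) evaluated at frequency f_probe."""
    x = np.cos(2 * np.pi * f * t)
    return np.abs(np.exp(-2j * np.pi * f_probe * t) @ x) / len(t)

f_alias = f0 - fs                           # where the tone folds under uniform sampling
for name, t in [("uniform ", t_uniform), ("jittered", t_jittered)]:
    print(name, "true tone:", round(tone_spectrum(t, f0, f0), 3),
          " alias:", round(tone_spectrum(t, f0, f_alias), 3))
# Uniform sampling: the alias peak matches the true peak (ambiguous spectrum).
# Jittered sampling: the alias collapses toward a broadband floor on the order of 1/sqrt(K).
```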
108
Compressive Sensing for 3D Data Processing Tasks: Applications, Models and Algorithms
January 2012
Compressive sensing (CS) is a novel sampling methodology representing a paradigm shift from conventional data acquisition schemes. The theory of compressive sensing ensures that, under suitable conditions, compressible signals or images can be reconstructed from far fewer samples or measurements than the Nyquist rate requires. So far, most works on CS in the literature concentrate on one-dimensional or two-dimensional data. However, besides involving far more data, three-dimensional (3D) data processing has particularities that require the development of new techniques in order to move from theoretical feasibility to practical capability. This thesis studies several issues arising from applying the CS methodology to 3D image processing tasks. Two specific applications are hyperspectral imaging and video compression, where 3D images are either directly unmixed or recovered as a whole from CS samples. The main issues include CS decoding models, preprocessing techniques and reconstruction algorithms, as well as CS encoding matrices in the case of video compression. Our investigation involves three major parts.

(1) Total variation (TV) regularization plays a central role in the decoding models studied in this thesis. To solve such models, we propose an efficient scheme to implement the classic augmented Lagrangian multiplier method and study its convergence properties. The resulting Matlab package TVAL3 is used to solve several models. Computational results show that, thanks to its low per-iteration complexity, the proposed algorithm is capable of handling realistic 3D image processing tasks.

(2) Hyperspectral image processing typically demands heavy computational resources due to the enormous amount of data involved. We investigate low-complexity procedures to unmix, sometimes blindly, CS-compressed hyperspectral data to directly obtain material signatures and their abundance fractions, bypassing the high-complexity task of reconstructing the image cube itself.

(3) To overcome the "cliff effect" suffered by current video coding schemes, we explore a compressive video sampling framework to improve scalability with respect to channel capacities. We propose and study a novel multi-resolution CS encoding matrix and a decoding model with a TV-DCT regularization function.

Extensive numerical results are presented, obtained from experiments that use not only synthetic data but also real data measured by hardware. The results establish the feasibility and robustness, to various extents, of the proposed 3D data processing schemes, models and algorithms. Many challenges remain to be resolved in each area, but hopefully the progress made in this thesis will represent a useful first step towards meeting them.
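For reference, a sketch of the standard constrained TV decoding model and the classic augmented Lagrangian scheme of the kind mentioned in part (1), in textbook form rather than as reproduced from the thesis or the TVAL3 package: minimize total variation subject to the CS measurements, alternately minimizing the augmented Lagrangian in x and updating the multiplier.

```latex
\min_{x}\ \mathrm{TV}(x) \;=\; \sum_i \|D_i x\|_2
\quad \text{subject to} \quad A x = b,
\qquad
\mathcal{L}_{\mu}(x,\lambda) \;=\; \mathrm{TV}(x) \;-\; \lambda^{\top}(A x - b) \;+\; \tfrac{\mu}{2}\,\|A x - b\|_2^2,
\qquad
\lambda \;\leftarrow\; \lambda - \mu\,(A x - b).
```

Solvers in the TVAL3 family typically also split the finite differences as w_i = D_i x, so that the w-subproblem reduces to a closed-form shrinkage, which is what keeps the per-iteration cost low.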
109
Computational spectral microscopy and compressive millimeter-wave holography
Fernandez, Christy Ann, January 2010
This dissertation describes three computational sensors. The first sensor is a scanning multi-spectral aperture-coded microscope containing a coded aperture spectrometer that is vertically scanned through a microscope intermediate image plane. The spectrometer aperture code spatially encodes the object spectral data, and nonnegative least squares inversion combined with a series of reconfigured two-dimensional (2D spatial-spectral) scanned measurements enables three-dimensional (3D) (x, y, λ) object estimation. The second sensor is a coded aperture snapshot spectral imager that employs a compressive optical architecture to record a spectrally filtered projection of a 3D object data cube onto a 2D detector array. Two nonlinear and adapted TV-minimization schemes are presented for 3D (x, y, λ) object estimation from a 2D compressed snapshot. Both sensors are interfaced to laboratory-grade microscopes and applied to fluorescence microscopy. The third sensor is a millimeter-wave holographic imaging system that is used to study the impact of 2D compressive measurement on 3D (x, y, z) data estimation. Holography is a natural compressive encoder, since a 3D parabolic slice of the object band volume is recorded onto a 2D planar surface. An adapted nonlinear TV-minimization algorithm is used for 3D tomographic estimation from a 2D and a sparse 2D hologram composite. This strategy aims to reduce the scan time costs associated with millimeter-wave image acquisition using a single-pixel receiver.
110
Compressed Sensing Based Image Restoration Algorithm with Prior Information: Software and Hardware Implementations for Image Guided Therapy
Jian, Yuchuan, January 2012
Based on the compressed sensing theorem, we present an integrated software and hardware platform for developing a total-variation based image restoration algorithm that applies prior image information and free-form deformation fields for image guided therapy. The core algorithm we developed solves the image restoration problem of handling missing structures in one image set with prior information, and it enhances the image quality and the anatomical information of the on-board computed tomography (CT) volume acquired with limited-angle projections. Through the algorithm, prior anatomical CT scans provide additional information that improves the quality of the image volume produced by on-board cone-beam CT, thereby reducing the total radiation dose that patients receive and removing distortion artifacts in 3D digital tomosynthesis (DTS) and 4D-DTS. The proposed restoration algorithm enables enhanced temporal image resolution and provides more anatomical information than conventionally reconstructed images.

The performance of the algorithm was determined and evaluated with respect to two built-in parameters, the B-spline resolution and the regularization factor. These parameters can be adjusted to meet the requirements of different imaging applications, and their adjustment also determines the flexibility and accuracy of the image restoration. Preliminary results have been generated to evaluate image similarity and the deformation effect for phantoms and a real patient case using a shifting deformation window. We incorporated a graphics processing unit (GPU) and a visualization interface into the calculation platform as acceleration tools for medical image processing and analysis. By combining the imaging algorithm with a GPU implementation, the restoration can be computed within a reasonable time to enable real-time on-board visualization, and the platform can potentially be applied to complicated clinical imaging algorithms.