101 |
Empirical-Bayes Approaches to Recovery of Structured Sparse Signals via Approximate Message Passing. Vila, Jeremy P. (22 May 2015)
No description available.
|
102 |
Architecture for Multi Input Multi Output Compressive Radars. Baskar, Siddharth (January 2017)
No description available.
|
103 |
Accurate code phase estimation of LOS GPS signal using Compressive Sensing and multipath mitigation using interpolation/MEDLL. Viswa, Chaithanya (19 October 2015)
No description available.
|
104 |
Approximate Message Passing for Multi-Carrier Transmission over Doubly Selective Channels. Meng, Dong (19 July 2012)
No description available.
|
105 |
Unsteady Flow Field Projection and Compressive Sensing by Model Order Reduction. Matulis, John Michael (10 January 2025)
In nuclear reactors, enhancing safety via active monitoring of conditions and automating routine aspects of plant operation require sensors to be integrated into the reactor system. Incorporating sensors that are compatible with advanced reactor environments can increase capital cost significantly. Additionally, many locations in the system that contain valuable data are wholly inaccessible to current sensor technology. Model order reduction allows critical information about sensor placement and experiment design to be distilled from fully resolved fluid mechanics simulation results. In many cases, sensed information in conjunction with reduced order models can also be used to regenerate full field variables. Previous work has demonstrated projection of sensed pressure data from one spatial domain to another via proper orthogonal decomposition (POD). In this work, the POD inferencing method is extended to the modeling and compressive sensing of temperature, a scalar field variable, and to the modeling of pressure from sensed temperature data.

The method is applied to the problem of flow over a cylinder with heat generation at the cylinder boundary, with Pr >> 1, Pr ~ 1, and Pr << 1. The model is trained on pressure and temperature data from simulations. Field reconstructions are then generated using data from selected sensors and the POD model. Finally, the reconstruction performance is evaluated and presented as a function of Prandtl number, sensor count, and mode count. The predicted trend of increasing reconstruction accuracy with decreasing Prandtl number is confirmed, and a Prandtl number/sensor count reconstruction performance matrix is presented. To examine the efficacy of this algorithm for transfer learning from one scalar field to another, temperature sensors are used to predict pressure field information.

Three empirical sensor location selection techniques are developed and compared: mode-based, random sampling, and boundary-layer based. Random sampling yielded the highest accuracy but required significant computational resources. The mode-based approach, despite its lower accuracy, is used in the analysis of the POD ROM for its explainability and compatibility with existing studies.

It is shown that lower Prandtl number flows require fewer sensors and modes for accurate temperature reconstruction, with models utilizing more modes generally outperforming those with fewer, as expected. Notably, reconstruction accuracy for temperature and pressure was comparable in high Prandtl number fluids, but in moderate and low Prandtl number fluids, increased thermal diffusion led to smoother temperature gradients and enhanced reconstruction performance with fewer modes.

This study extends prior work by applying POD ROM techniques to the sparse sensing of temperature. It considers the effect of varying thermal diffusivity between materials and establishes accuracy trends between them. Furthermore, it introduces cross-scalar projection to this technique as a form of virtual sensing. The question of sensor placement is also addressed in greater detail than in prior literature, and alternative methods are evaluated. This study confirms the potential of POD ROMs for cross-scalar data projection and presents novel sensor selection techniques while providing insights into the optimal conditions for their application.
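As a concrete illustration of the reconstruction step described above, the following minimal Python sketch fits a POD basis to training snapshots and recovers a full field from a handful of point sensors by least-squares fitting of the mode coefficients (often called gappy POD). This is a generic sketch on synthetic 1D data, not the thesis's implementation; the function names, sensor count, and synthetic snapshot data are illustrative assumptions.

```python
import numpy as np

# Generic sketch (not the thesis's code) of POD-based field reconstruction
# from a few point sensors ("gappy POD"): fit a POD basis to training
# snapshots, least-squares fit the mode coefficients to sensor readings,
# then project back to the full field.

def fit_pod_basis(snapshots, n_modes):
    """snapshots: (n_points, n_snapshots). Returns mean field and leading modes."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, U[:, :n_modes]

def reconstruct_from_sensors(mean, modes, sensor_idx, sensor_values):
    """Least-squares fit of mode coefficients to sparse sensor data."""
    C = modes[sensor_idx, :]                      # modes restricted to sensor rows
    rhs = sensor_values - mean[sensor_idx, 0]
    coeffs, *_ = np.linalg.lstsq(C, rhs, rcond=None)
    return mean[:, 0] + modes @ coeffs

# Synthetic illustration: a travelling sine wave sampled at 8 point sensors.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)
snapshots = np.stack([np.sin(2 * np.pi * (x - 0.02 * k)) for k in range(40)], axis=1)

mean, modes = fit_pod_basis(snapshots, n_modes=4)
truth = np.sin(2 * np.pi * (x - 0.011))           # unseen field to recover
sensor_idx = np.sort(rng.choice(x.size, size=8, replace=False))
recon = reconstruct_from_sensors(mean, modes, sensor_idx, truth[sensor_idx])
print("relative error:", np.linalg.norm(recon - truth) / np.linalg.norm(truth))
```

Sensor placement enters only through `sensor_idx`, which is why placement strategies such as the mode-based and random-sampling selections compared above matter for reconstruction accuracy.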
|
106 |
Band Theory and Beyond: Applications of Quantum Algorithms for Quantum Chemistry. Sherbert, Kyle Matthew (05 1900)
In the past two decades, myriad algorithms to elucidate the characteristics and dynamics of molecular systems have been developed for quantum computers. In this dissertation, we explore how these algorithms can be adapted to other fields, both to closely related subjects such as materials science and to more surprising subjects such as information theory. Special emphasis is placed on the Variational Quantum Eigensolver algorithm adapted to solve the band structure of a periodic system; three distinct implementations are developed, each with its own advantages and disadvantages. We also see how unitary quantum circuits designed to model individual electron excitations within a molecule can be modified to prepare quantum states strictly orthogonal to a space of known states, an important component for solving problems in thermodynamics and spectroscopy. Finally, we see how the core behavior in several quantum algorithms originally developed for quantum chemistry can be adapted to implement compressive sensing, a protocol in information theory for extrapolating large amounts of information from relatively few measurements. This body of work demonstrates that quantum algorithms developed to study molecules have immense interdisciplinary uses in fields as varied as materials science and information theory.
|
107 |
OFDM Coupled Compressive Sensing Algorithm for Stepped Frequency Ground Penetrating Radar. Metwally, Mohamed (01 January 2014)
Dating back as far as 1940, the US road and bridge infrastructure system has earned quite a status for strategically connecting half a continent. As monumental as the infrastructure's status is its rate of deterioration, with the average bridge age now standing at a disconcerting 50 years. Aside from visual inspection, a battery of non-destructive tests has been developed to assess structural faults and detect laminations, so that preventive measures can be taken preemptively.
The mainstream, commercially favored test is impulse time-domain ground penetrating radar (GPR), in which an extremely short, high-voltage pulse is used to visualize cross-sections of bridge decks. While effective and non-disruptive to traffic flow, impulse radar suffers from major drawbacks, namely its limited dynamic range and the high cost of system manufacturing. A less prominent yet highly effective system, stepped frequency continuous wave (SFCW) GPR, was developed to address these drawbacks. Developed mostly within research centers and academia, SFCW boasts a high dynamic range and a low cost of system manufacturing while producing results comparable, if not identical, to those of its impulse counterpart. However, data acquisition speed is an inherent problem in SFCW GPR, which keeps impulse radar in the lead for production and development.
I propose a novel approach to improve SFCW's data acquisition speed and scanning efficiency. The approach combines an encoding method, orthogonal frequency division multiplexing (OFDM), with an emerging paradigm, compressive sensing (CS). In OFDM, a digital data stream (the transmit signal) is encoded onto multiple carrier frequencies, which are combined so that they remain mutually orthogonal and interference between them is mitigated. In CS, a signal can potentially be reconstructed from far fewer samples than the Nyquist rate requires. A novel SFCW GPR architecture coupled with the OFDM-CS algorithm is proposed and evaluated using ideal channels and realistically modelled bridge decks.
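For readers unfamiliar with the CS side of the proposal, the sketch below illustrates the basic principle invoked above: a spectrally sparse signal recovered from far fewer random time samples than uniform Nyquist sampling would demand, here via orthogonal matching pursuit. This is a generic textbook-style illustration, not the OFDM-CS GPR algorithm of the thesis; the dimensions and the recovery routine are assumptions made for the example.

```python
import numpy as np

# Generic compressive-sensing illustration (not the thesis's OFDM-CS GPR
# algorithm): a frequency-sparse signal is recovered from far fewer random
# time samples than uniform Nyquist sampling would require, using
# orthogonal matching pursuit (OMP).

def omp(A, y, sparsity):
    """Greedy OMP: repeatedly pick the dictionary column most correlated with
    the residual, then re-fit all selected columns by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

N, M, K = 256, 48, 3                         # length, measurements, sparsity
rng = np.random.default_rng(1)
spectrum = np.zeros(N, dtype=complex)
spectrum[rng.choice(N, K, replace=False)] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

tones = np.fft.ifft(np.eye(N), axis=0) * N   # columns are complex exponentials
signal = tones @ spectrum                    # time-domain signal, sparse in frequency

rows = rng.choice(N, M, replace=False)       # M << N random time samples
estimate = omp(tones[rows, :], signal[rows], K)
print("support recovered:",
      set(np.flatnonzero(estimate)) == set(np.flatnonzero(spectrum)))
```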
|
108 |
Practical approaches to reconstruction and analysis for 3D and dynamic 3D computed tomography. Coban, Sophia (January 2017)
The problem of reconstructing an image from a set of tomographic data is not new, nor is it lacking attention. However, there is still a distinct gap between the mathematicians and the experimental scientists working in the computed tomography (CT) imaging community. One of the aims of this thesis is to bridge this gap by applying mathematical reconstruction algorithms and analysis approaches to practical CT problems.

The thesis begins with an extensive analysis for assessing the suitability of reconstruction algorithms for a given problem. The paper presented examines the idea of extracting physical information from a reconstructed sample and comparing it against the known sample characteristics to determine the accuracy of a reconstructed volume. Various test cases are studied that are relevant to both mathematicians and experimental scientists, including the variation in quality of the reconstructed volume as the dose is reduced, and the implementation of the level set evolution method used as part of a simultaneous reconstruction and segmentation technique. The work shows that assessing physical attributes leads to more accurate conclusions and allows further analysis into interesting questions in CT. This theme is continued throughout the thesis.

Recent results in compressive sensing (CS) gained attention in the CT community because they indicate the possibility of obtaining an accurate reconstruction of a sparse image from a severely limited or reduced amount of measured data. The literature produced so far has not shown that CS directly guarantees a successful recovery in X-ray CT, and it is still unclear under which conditions a successful sparsity-regularized reconstruction can be achieved. The work presented in the thesis aims to answer this question in a practical setting and seeks to establish a direct connection, similar to CS, between the success of sparsity regularization methods and the sparsity level of the image. Using this connection, one can determine the sufficient amount of measurements to collect from the sparsity of the image alone. A link was found in a previous study using simulated data, and the work is repeated here with experimental data in which the sparsity level of the scanned object varies. The preliminary work presented here verifies the results from simulated data, showing an "almost-linear" relationship between the sparsity of the image and the amount of data sufficient for a successful sparsity-regularized reconstruction.

Several unexplained artefacts are noted in the literature as the 'partial volume', the 'exponential edge gradient' or the 'penumbra' effect, with no clear explanation of their cause and no established techniques to remove them. The work presented in this paper shows that these artefacts are due to a non-linearity in the measured data, which comes from the set-up of the system, the scattering of rays, or the dependency of the linear attenuation coefficient on wavelength in the polychromatic case. Even in monochromatic CT systems, however, the non-linearity effect can be detected. The paper shows that in some cases the non-linearity effect is too large to ignore, and the reconstruction problem should be adapted to solve a non-linear problem. We derive this non-linear problem and solve it using a numerical optimization technique for both simulated and real gamma-ray data. When compared to reconstructions obtained using the standard linear model, the non-linear reconstructed images show clear improvements in that the non-linear effect is largely eliminated.

The thesis finishes with a highlight article in the special issue of Solid Earth named "Pore-scale tomography & imaging - applications, techniques and recommended practice". The paper presents a major technical advancement in dynamic 3D CT data acquisition, in which the latest hardware and an optimal data acquisition plan are applied, making ultra-fast 3D volume acquisition possible. The experiment comprised fast, free-falling water-saline drops traveling through a pack of rock grains with varying porosities. The imaging work was enhanced by the use of iterative methods and by the physical quantification analysis performed. The data acquisition and imaging work is the first in the field to capture a free-falling drop, and the imaging clearly shows the fluid interaction with speed, gravity and, more importantly, the inter- and intra-grain fluid transfers.
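The non-linearity discussed above can be stated schematically as follows (notation assumed here, not taken from the thesis): in the ideal monochromatic, pencil-beam model the log-attenuation is a line integral of the attenuation coefficient, so reconstruction is a linear inverse problem, whereas averaging over source energies (or over a finite beam) does not commute with the logarithm and makes the measured data a non-linear function of the attenuation.

```latex
% Schematic forward models (notation assumed; not taken verbatim from the thesis).
% Monochromatic pencil-beam model: the log-attenuation is a line integral of the
% attenuation coefficient mu, so reconstruction is a linear inverse problem:
\[
  -\ln\frac{I}{I_0} \;=\; \int_{L} \mu(x)\,\mathrm{d}x .
\]
% With a polychromatic source (or averaging over a finite beam), the energy
% integral does not commute with the logarithm, so the data become a non-linear
% function of mu; this is one source of the effects discussed above:
\[
  I \;=\; \int I_0(E)\,\exp\!\Big(-\int_{L}\mu(x,E)\,\mathrm{d}x\Big)\,\mathrm{d}E .
\]
```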
|
109 |
Non-uniform sampling: algorithms and architectures. Luo, Chenchi (09 November 2012)
Modern signal processing applications emerging in the telecommunication and instrumentation industries have placed an increasing demand on ADCs for higher speed and resolution. The most fundamental challenge in this progress lies at the heart of classic signal processing: the Shannon-Nyquist sampling theorem, which states that, under uniform sampling, there is no way to increase the upper frequency in the signal spectrum and still unambiguously represent the signal except by raising the sampling rate. This thesis is dedicated to exploring ways to break through the Shannon-Nyquist sampling rate by applying non-uniform sampling techniques.
Time interleaving is probably the most intuitive way to parallelize the uniform sampling process in order to achieve a higher sampling rate. Unfortunately, channel mismatches in a time-interleaved ADC (TIADC) system make it an instance of a recurrent non-uniform sampling system whose non-uniformities are detrimental to performance and need to be calibrated. Accordingly, this thesis proposes a flexible and efficient architecture to compensate for the channel mismatches in the TIADC system. As a key building block of the calibration architecture, the design of the Farrow-structured adjustable fractional delay (FD) filter is investigated in detail. A new modified Farrow structure is proposed to design adjustable FD filters that are optimized for a given range of bandwidths and fractional delays. The application of the Farrow structure is not limited to the design of adjustable fractional delay filters; it can also be used to implement adjustable lowpass, highpass and bandpass filters as well as adjustable multirate filters. This thesis further extends the Farrow structure to the design of filters with adjustable polynomial phase responses.
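A minimal sketch of the classic Farrow structure mentioned above is given below: a bank of fixed FIR sub-filters whose outputs are combined by a polynomial in the fractional delay d, so that only the final combination depends on d. The sub-filter coefficients here come from a cubic Lagrange interpolator, chosen purely for illustration; this is not the modified Farrow structure proposed in the thesis.

```python
import numpy as np

# Sketch of a classic Farrow-structure fractional-delay filter (illustrative
# only; not the modified structure proposed in the thesis). Fixed FIR
# sub-filters are combined by a polynomial in the fractional delay d, here
# using cubic Lagrange interpolation coefficients.

# Row m holds the taps of sub-filter C_m (applied to x[n], x[n-1], x[n-2], x[n-3]).
FARROW_CUBIC = np.array([
    [0.0,        1.0,      0.0,      0.0],      # d^0 term
    [-1.0/3.0,  -1.0/2.0,  1.0,     -1.0/6.0],  # d^1 term
    [1.0/2.0,   -1.0,      1.0/2.0,  0.0],      # d^2 term
    [-1.0/6.0,   1.0/2.0, -1.0/2.0,  1.0/6.0],  # d^3 term
])

def farrow_delay(x, d):
    """Delay signal x by (1 + d) samples, 0 <= d < 1. Each sub-filter runs once;
    the delay d only enters the final Horner combination."""
    branches = [np.convolve(x, c)[: len(x)] for c in FARROW_CUBIC]
    y = branches[-1]
    for b in reversed(branches[:-1]):            # Horner evaluation in d
        y = y * d + b
    return y

t = np.arange(200)
x = np.sin(2 * np.pi * 0.03 * t)
y = farrow_delay(x, 0.4)                         # roughly a 1.4-sample delay
ref = np.sin(2 * np.pi * 0.03 * (t - 1.4))
print("max error (past the transient):", np.max(np.abs(y[10:] - ref[10:])))
```

Because the sub-filters are fixed, retuning the delay at run time costs only the Horner combination, which is what makes this family of structures attractive as a building block for TIADC mismatch calibration.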
Inspired by the theory of compressive sensing, another contribution of this thesis is to use randomization as a means to overcome the limit of the Nyquist rate. This thesis investigates the impact of random sampling intervals or jitters on the power spectrum of the sampled signal. It shows that the aliases of the original signal can be well shaped by choosing an appropriate probability distribution of the sampling intervals or jitters, such that the aliases can be viewed as a source of noise in the signal power spectrum. A new theoretical framework has been established to associate the probability mass function of the random sampling intervals or jitters with this alias-shaping effect. Based on the theoretical framework, this thesis proposes three random sampling architectures, namely SAR ADC, ramp ADC and level crossing ADC, which can be easily implemented based on the corresponding standard ADC architectures. Detailed models and simulations are established to verify the effectiveness of the proposed architectures. A new reconstruction algorithm called the successive sine matching pursuit has also been proposed to recover a class of spectrally sparse signals from a sparse set of non-uniform samples onto a denser uniform time grid so that classic signal processing techniques can be applied afterwards.
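The alias-shaping idea described above can be illustrated with a toy experiment (a generic sketch, not the thesis's framework or its proposed architectures): a tone above the Nyquist limit of the mean sampling rate is sampled once uniformly and once with randomized sampling instants, and the two sample streams are compared on a uniform grid. Uniform undersampling folds the tone into a sharp alias, while random jitter spreads most of the aliased energy into a noise-like floor. The frequency, jitter range and metric below are assumptions made for the example.

```python
import numpy as np

# Toy illustration of alias shaping by randomized sampling instants
# (generic sketch; not the thesis's theoretical framework).

rng = np.random.default_rng(2)
f_tone = 0.83            # tone frequency in cycles per (mean) sample period, above Nyquist
n = 4096                 # number of samples taken

t_uniform = np.arange(n)                                  # uniform instants
t_jitter = np.arange(n) + rng.uniform(-0.4, 0.4, n)       # jittered instants

x_uniform = np.cos(2 * np.pi * f_tone * t_uniform)        # aliases to a sharp line
x_jitter = np.cos(2 * np.pi * f_tone * t_jitter)          # alias partly spread as noise

# Treat both sample streams as if they lay on a uniform grid and compare spectra.
P_uniform = np.abs(np.fft.rfft(x_uniform)) ** 2 / n
P_jitter = np.abs(np.fft.rfft(x_jitter)) ** 2 / n
print("peak/median power ratio, uniform :", P_uniform.max() / np.median(P_uniform))
print("peak/median power ratio, jittered:", P_jitter.max() / np.median(P_jitter))
```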
|
110 |
Compressive Sensing for 3D Data Processing Tasks: Applications, Models and Algorithms. (January 2012)
Compressive sensing (CS) is a novel sampling methodology representing a paradigm shift from conventional data acquisition schemes. The theory of compressive sensing ensures that, under suitable conditions, compressible signals or images can be reconstructed from far fewer samples or measurements than are required by the Nyquist rate. So far, most works on CS in the literature concentrate on one-dimensional or two-dimensional data. However, besides involving far more data, three-dimensional (3D) data processing has particularities that require the development of new techniques in order to make the transition from theoretical feasibility to practical capability. This thesis studies several issues arising from the application of the CS methodology to some 3D image processing tasks. Two specific applications are hyperspectral imaging and video compression, where 3D images are either directly unmixed or recovered as a whole from CS samples. The main issues include CS decoding models, preprocessing techniques and reconstruction algorithms, as well as CS encoding matrices in the case of video compression.

Our investigation involves three major parts. (1) Total variation (TV) regularization plays a central role in the decoding models studied in this thesis. To solve such models, we propose an efficient scheme to implement the classic augmented Lagrangian multiplier method and study its convergence properties. The resulting Matlab package TVAL3 is used to solve several models. Computational results show that, thanks to its low per-iteration complexity, the proposed algorithm is capable of handling realistic 3D image processing tasks. (2) Hyperspectral image processing typically demands heavy computational resources due to the enormous amount of data involved. We investigate low-complexity procedures to unmix, sometimes blindly, CS-compressed hyperspectral data to directly obtain material signatures and their abundance fractions, bypassing the high-complexity task of reconstructing the image cube itself. (3) To overcome the "cliff effect" suffered by current video coding schemes, we explore a compressive video sampling framework to improve scalability with respect to channel capacities. We propose and study a novel multi-resolution CS encoding matrix and a decoding model with a TV-DCT regularization function.

Extensive numerical results are presented, obtained from experiments that use not only synthetic data but also real data measured by hardware. The results establish the feasibility and robustness, to various extents, of the proposed 3D data processing schemes, models and algorithms. Many challenges remain to be resolved in each area, but the progress made in this thesis will hopefully represent a useful first step towards meeting these challenges in the future.
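For concreteness, a typical TV-regularized decoding model of the kind referred to above can be written as follows (schematic only; the thesis's exact formulations, including the TV-DCT variant, may differ, and the notation is assumed here):

```latex
% A typical TV-regularized CS decoding model (schematic; the thesis's exact
% formulations, including the TV-DCT variant, may differ). Given measurements
% b = A u + noise, the image or volume u is recovered via
\[
  \min_{u}\;\; \sum_{i} \|D_i u\|_2 \;+\; \frac{\mu}{2}\,\|A u - b\|_2^2 ,
\]
% where D_i u is the discrete gradient of u at pixel/voxel i and the first
% term is the (isotropic) total variation TV(u). Augmented Lagrangian schemes
% of the kind implemented in TVAL3 introduce splitting variables w_i = D_i u
% and alternate between closed-form shrinkage on the w_i and a quadratic
% subproblem in u.
```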
|