1 
Sparse signal recovery in a transform domain. Lebed, Evgeniy. 11 1900
The ability to efficiently and sparsely represent seismic data is becoming an increasingly important problem in geophysics. Over the last thirty years many transforms such as wavelets, curvelets, contourlets, surfacelets, shearlets, and many other types of ‘xlets’ have been developed, and these transforms have been leveraged to address the problem of sparse representation. In this work we compare the properties of four of these commonly used transforms, namely shift-invariant wavelets, complex wavelets, curvelets and surfacelets. We also explore the performance of these transforms for the problem of recovering seismic wavefields from incomplete measurements. / Science, Faculty of / Mathematics, Department of / Graduate
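The wavefield-recovery problem described above can be sketched in a few lines of NumPy: assume the signal is sparse in an orthonormal transform (a simple Haar wavelet stands in for the transforms compared in the thesis) and fill in missing samples by iterative soft-thresholding. The test signal, sampling mask, and parameter choices below are illustrative, not taken from the work itself.

```python
import numpy as np

def haar(x):
    # Full orthonormal Haar decomposition of a length-2^k signal.
    c, details = x.astype(float), []
    while len(c) > 1:
        a = (c[0::2] + c[1::2]) / np.sqrt(2)
        d = (c[0::2] - c[1::2]) / np.sqrt(2)
        details.append(d)
        c = a
    return np.concatenate([c] + details[::-1])

def ihaar(w):
    # Inverse of haar(): rebuild from coarse to fine.
    a, pos = w[:1], 1
    while pos < len(w):
        d = w[pos:2 * pos]
        x = np.empty(2 * pos)
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        a, pos = x, 2 * pos
    return a

def ista_recover(y, mask, lam=0.05, iters=500):
    # Iterative soft-thresholding: alternate a data-consistency gradient
    # step on the observed samples with shrinkage of Haar coefficients.
    x = np.zeros_like(y)
    for _ in range(iters):
        x = x + mask * (y - x)
        w = haar(x)
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
        x = ihaar(w)
    return x

rng = np.random.default_rng(0)
true = np.repeat([1.0, -2.0, 3.0, 0.5], 16)    # piecewise constant: Haar-sparse
mask = (rng.random(64) < 0.5).astype(float)    # observe roughly half the samples
est = ista_recover(mask * true, mask)
```

Because the test signal is only 4-sparse in the full Haar basis, roughly 32 random samples suffice for an accurate reconstruction.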

2 
Stable seismic data recovery. Herrmann, Felix J. January 2007
In this talk, directional frames, known as curvelets, are used to recover seismic data and images from noisy and incomplete data. Sparsity and invariance properties of curvelets are exploited to formulate the recovery as an ℓ1-norm promoting program. It is shown that our data recovery approach is closely linked to the recent theory of “compressive sensing” and can be seen as a first step towards a nonlinear sampling theory for wavefields.
The second problem that will be discussed concerns the recovery of the amplitudes of seismic images in clutter. There, the invariance of curvelets is used to approximately invert the Gram operator of seismic imaging. In the high-frequency limit, this Gram matrix corresponds to a pseudodifferential operator, which is near diagonal in the curvelet domain.
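The near-diagonality idea in the second paragraph can be illustrated with a toy stand-in: a convolution operator is exactly diagonal in the Fourier basis (playing the role curvelets play for pseudodifferential operators), so it can be approximately inverted by dividing by its symbol, with regularisation where the symbol is small. The operator, signal and constants below are our own illustrative choices, not the seismic Gram operator itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
x = rng.standard_normal(n)

# A stand-in operator, diagonal in the Fourier basis: its symbol decays
# with frequency but stays bounded away from zero.
freqs = np.fft.fftfreq(n) * n
symbol = 1.0 / (1.0 + (freqs / 20.0) ** 2)

# Apply the operator: y = Psi x.
y = np.real(np.fft.ifft(symbol * np.fft.fft(x)))

# Approximate inversion by diagonal scaling, regularised where the
# symbol is small (Tikhonov-style damping with eps).
eps = 1e-6
inv = symbol / (symbol ** 2 + eps)
x_hat = np.real(np.fft.ifft(inv * np.fft.fft(y)))
```

Because the symbol is bounded below here, the diagonal inverse recovers the input almost exactly; for a genuinely ill-conditioned symbol the damping term controls the trade-off between resolution and noise amplification.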

5 
Angular-dependent three-dimensional imaging techniques in multi-pass synthetic aperture radar. Jamora, Jan Rainer. 06 August 2021
Humans perceive the world in three dimensions, but many sensing capabilities display only two-dimensional information to users by way of images. In this work we develop two novel reconstruction techniques that utilize synthetic aperture radar (SAR) data in three dimensions given sparse amounts of available data. We additionally leverage a hybrid joint-sparsity and sparsity approach to remove a priori influences on the environment and instead explore general imaging properties in our reconstructions. We evaluate the required sampling rates for our techniques and provide a thorough analysis of the accuracy of our methods. The results presented in this thesis suggest a solution to sparse three-dimensional object reconstruction that effectively uses a substantially smaller amount of phase history data (PHD) while still extracting critical features of an object of interest.
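The hybrid joint-sparsity approach is not specified in detail in this abstract; one common building block for joint sparsity is the ℓ2,1 (group) penalty, whose proximal operator shrinks whole groups of coefficients together so that entire groups vanish. A minimal sketch with made-up numbers:

```python
import numpy as np

def prox_group_l21(x, groups, lam):
    # Proximal operator of lam * sum_g ||x_g||_2 (the l2,1 mixed norm):
    # each group is shrunk toward zero by its norm; groups whose norm
    # falls below lam are zeroed out entirely.
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1, 0.0, 2.0])
groups = [[0, 1], [2, 3], [4, 5]]
y = prox_group_l21(x, groups, lam=1.0)
```

Here the middle group, with norm about 0.14, is eliminated, while the other two groups survive with their norms reduced by 1.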

6 
Sparse inverse covariance estimation in Gaussian graphical models. Orchard, Peter Raymond. January 2014
One of the fundamental tasks in science is to find explainable relationships between observed phenomena. Recent work has addressed this problem by attempting to learn the structure of graphical models, especially Gaussian models, by the imposition of sparsity constraints. The graphical lasso is a popular method for learning the structure of a Gaussian model. It uses ℓ1 regularisation to impose sparsity. In real-world problems, there may be latent variables that confound the relationships between the observed variables. Ignoring these latents, and imposing sparsity in the space of the visibles, may lead to the pruning of important structural relationships. We address this problem by introducing an expectation maximisation (EM) method for learning a Gaussian model that is sparse in the joint space of visible and latent variables. By extending this to a conditional mixture, we introduce multiple structures, and allow side information to be used to predict which structure is most appropriate for each data point. Finally, we handle non-Gaussian data by extending each sparse latent Gaussian to a Gaussian copula. We train these models on a financial data set; we find the structures to be interpretable, and the new models to perform better than their existing competitors.
A potential problem with the mixture model is that it does not require the structure to persist in time, whereas this may be expected in practice. So we construct an input-output HMM with sparse Gaussian emissions. But the main result is that, provided the side information is rich enough, the temporal component of the model provides little benefit and reduces efficiency considerably.
The G-Wishart distribution may be used as the basis for a Bayesian approach to learning a sparse Gaussian. However, sampling from this distribution often limits the efficiency of inference in these models. We make a small change to the state-of-the-art block Gibbs sampler to improve its efficiency. We then introduce a Hamiltonian Monte Carlo sampler that is much more efficient than block Gibbs, especially in high dimensions. We use these samplers to compare a Bayesian approach to learning a sparse Gaussian with the (non-Bayesian) graphical lasso. We find that, even when limited to the same time budget, the Bayesian method can perform better.
In summary, this thesis introduces practically useful advances in structure learning for Gaussian graphical models and their extensions. The contributions include the addition of latent variables, a non-Gaussian extension, (temporal) conditional mixtures, and methods for efficient inference in a Bayesian formulation.
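The graphical lasso estimator mentioned above minimises tr(SΘ) − log det Θ + λ‖Θ‖₁ over precision matrices Θ, where the ℓ1 penalty is applied off the diagonal. A bare proximal-gradient sketch of that estimator (not the thesis's EM method or samplers), on a toy three-variable problem with a tridiagonal ground-truth precision:

```python
import numpy as np

def soft_offdiag(T, thr):
    # Soft-threshold off-diagonal entries only; the diagonal is kept.
    S = np.sign(T) * np.maximum(np.abs(T) - thr, 0.0)
    np.fill_diagonal(S, np.diag(T))
    return S

def objective(Theta, S, lam):
    # tr(S Theta) - log det Theta + lam * ||offdiag(Theta)||_1
    sign, logdet = np.linalg.slogdet(Theta)
    off = np.abs(Theta).sum() - np.abs(np.diag(Theta)).sum()
    return np.trace(S @ Theta) - logdet + lam * off

# Ground-truth sparse precision (tridiagonal) and its population covariance.
Theta_true = np.array([[2.0, 0.6, 0.0],
                       [0.6, 2.0, 0.6],
                       [0.0, 0.6, 2.0]])
S = np.linalg.inv(Theta_true)

lam, step = 0.05, 0.1
Theta = np.eye(3)
for _ in range(500):
    grad = S - np.linalg.inv(Theta)       # gradient of the smooth part
    Theta = soft_offdiag(Theta - step * grad, step * lam)
```

The penalty drives the (0, 2) entry, which is zero in the true precision, toward zero while the genuine conditional dependencies survive.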

7 
Exploiting data sparsity in parallel magnetic resonance imaging. Wu, Bing. January 2010
Magnetic resonance imaging (MRI) is a widely employed imaging modality that allows observation of the interior of the human body. Compared to other imaging modalities such as computed tomography (CT), MRI features a relatively long scan time that gives rise to many potential issues. The advent of parallel MRI, which employs multiple receiver coils, has started a new era in speeding up MRI scans by reducing the number of data acquisitions. However, images recovered from undersampled data sets often suffer from degraded quality.
This thesis explores methods that incorporate prior knowledge of the image to be reconstructed in order to achieve improved image recovery in parallel MRI, following the philosophy that ‘if some prior knowledge of the image to be recovered is available, the image can be recovered better than without it’. Specifically, the prior knowledge of image sparsity is utilized. Image sparsity exists in different domains. Sparsity in the image domain refers to the fact that the imaged object occupies only a portion of the imaging field of view; sparsity may also exist in a transform domain, in which there is a high level of energy concentration in the image transform. The use of both types of sparsity is considered in this thesis.
There are three major contributions in this thesis. The first is the development of ‘GUISE’, which employs an adaptive sampling design method to better exploit image-domain sparsity in parallel MRI. The second is the development of ‘PBCS’ and ‘SENSECS’: PBCS better exploits transform-domain sparsity by incorporating a prior estimate of the image to be recovered, and SENSECS is an application of PBCS that better exploits transform-domain sparsity in parallel MRI. The third is the implementation of GUISE and PBCS in contrast-enhanced MR angiography (CE MRA), where both methods share the common ground of exploiting the high sparsity of the contrast-enhanced angiogram.
The above developments are assessed in various ways using both simulated and experimental data. Potential extensions of these methods are also suggested.
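PBCS itself is not specified in this abstract; the sketch below shows the general flavour of prior-based compressed sensing in one dimension: alternate enforcing consistency with the sampled k-space data and soft-thresholding the difference from a prior estimate, so that only the residual (here, a stand-in for the contrast-enhanced part) needs to be sparse. The signal, prior and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
true = np.zeros(n)
true[20:28] = 1.0          # static anatomy plus...
true[40:44] = 0.5          # ...a new sparse "enhancement"
prior = np.zeros(n)
prior[20:28] = 0.9         # imperfect prior estimate of the image

mask = np.zeros(n)
mask[rng.choice(n, n // 2, replace=False)] = 1.0
y = mask * np.fft.fft(true)        # undersampled k-space data

x = prior.copy()
for _ in range(100):
    # Enforce data consistency at the sampled k-space locations.
    X = np.fft.fft(x)
    X = mask * y + (1 - mask) * X
    x = np.real(np.fft.ifft(X))
    # Promote sparsity of the difference from the prior estimate.
    d = x - prior
    d = np.sign(d) * np.maximum(np.abs(d) - 0.02, 0.0)
    x = prior + d
```

Because the difference from the prior is much sparser than the image itself, the same number of samples supports a better reconstruction than sparsifying the image directly.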

8 
A Joint Dictionary-Based Single-Image Super-Resolution Model. Hu, Jun. January 2016
Image super-resolution techniques mainly aim at restoring a high-resolution image with satisfactory novel details. In recent years, learning-based single-image super-resolution has been developed and shown to produce satisfactory results. With one or more dictionaries trained from a training set, learning-based super-resolution is able to establish a mapping between low-resolution images and their corresponding high-resolution ones. Among these algorithms, sparsity-based super-resolution has demonstrated outstanding performance in extensive experiments. By utilizing compact dictionaries, this class of super-resolution algorithms can be efficient, with lower computational complexity, and has shown great potential for practical applications.
Our proposed model, known as the Joint Dictionary-based Super-Resolution (JDSR) algorithm, is a new sparsity-based super-resolution approach. Based on the observation that the initial values of the Nonlocally Centralized Sparse Representation (NCSR) model affect the final reconstruction, we change its initial values to the results of Zeyde's model. With the purpose of further improvement, we also add a gradient-histogram-preservation term to the sparse model of NCSR, and modify the reference histogram estimation with a simple edge-detection-based enhancement so that the estimated histogram is closer to the ground truth. The experimental results show that our method outperforms the state-of-the-art methods in terms of sharper edges, clearer textures and better novel details.
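The core joint-dictionary idea, low- and high-resolution patches sharing a single sparse code, can be sketched as follows. The dictionaries here are random stand-ins (in a trained model they would be learned jointly from LR/HR patch pairs), and orthogonal matching pursuit stands in for the sparse-coding step; none of this is the JDSR pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n_lr, n_hr = 6, 8, 32
# Stand-in coupled dictionaries: in training these would be fit so that
# each LR/HR patch pair shares one sparse code.
D_lr, _ = np.linalg.qr(rng.standard_normal((n_lr, k)))  # orthonormal LR atoms
D_hr = rng.standard_normal((n_hr, k))                   # paired HR atoms

code = np.zeros(k)
code[1], code[5] = 1.0, -0.7
y_lr = D_lr @ code          # observed low-resolution patch

def omp(D, y, n_nonzero):
    # Orthogonal matching pursuit: greedily select atoms by correlation
    # with the residual, refitting coefficients by least squares.
    resid, idx = y.copy(), []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ resid)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    out = np.zeros(D.shape[1])
    out[idx] = coef
    return out

c_hat = omp(D_lr, y_lr, 2)   # sparse-code the LR patch...
x_hr = D_hr @ c_hat          # ...and synthesize the HR patch from the same code
```

With orthonormal LR atoms the pursuit recovers the code exactly, so the HR patch is reconstructed through the shared representation.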

9 
Occluder-aided non-line-of-sight imaging. Saunders, Charles. 27 September 2021
Non-line-of-sight (NLOS) imaging is the inference of the properties of objects or scenes outside the direct line of sight of the observer. Such inferences can range from a 2D photograph-like image of a hidden area, to determining the position, motion or number of hidden objects, to 3D reconstructions of a hidden volume. NLOS imaging has many enticing potential applications, such as leveraging the existing hardware in many automobiles to identify hidden pedestrians, vehicles or other hazards and hence plan safer trajectories. Other potential application areas include improving navigation for robots or drones by anticipating occluded hazards, peering past obstructions in medical settings, or surveying unreachable areas in search-and-rescue operations. Most modern NLOS imaging methods fall into one of two categories: active methods that have some control over the illumination of the hidden area, and passive methods that simply measure light that already exists. This thesis introduces two NLOS imaging methods, one of each category, along with modeling and data-processing techniques that are more broadly applicable. The methods are linked by their use of objects (‘occluders’) that reside somewhere between the observer and the hidden scene and block some possible light paths.
Computational periscopy, a passive method, can recover the unknown position of an occluding object in the hidden area and then recover an image of the hidden scene behind it, using only a single photograph of a blank relay wall taken by an ordinary digital camera. We also develop a framework that uses an optimized preconditioning matrix to improve the speed at which these reconstructions can be made and to greatly improve robustness to ambient light. Lastly, we develop the tools necessary to demonstrate recovery of scenes at multiple unknown depths, paving the way towards three-dimensional reconstructions.
Edge-resolved transient imaging, an active method, enables the formation of 2.5D representations (a plan view plus heights) of large-scale scenes. A pulsed laser illuminates spots along a small semicircle on the floor, centered on the edge of a vertical wall such as in a doorway. The wall edge occludes some light paths, allowing the laser light reflecting off the floor to illuminate only certain portions of the hidden area beyond the wall, depending on where along the semicircle it falls. The time at which photons return following a laser pulse is recorded. The occluding wall edge provides angular resolution, and time-resolved sensing provides radial resolution. This novel acquisition strategy, together with a scene response model and reconstruction algorithm, allows 180° field-of-view reconstructions of large-scale scenes, unlike other active imaging methods.
Lastly, we introduce a sparsity penalty named mutually exclusive group sparsity (MEGS) that can be used as a constraint or regularizer in optimization problems to promote solutions in which certain components are mutually exclusive. We explore how this penalty relates to other similar penalties, develop fast algorithms to solve MEGS-regularized problems, and demonstrate how enforcing mutual-exclusivity structure can provide great utility in NLOS imaging problems.
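The MEGS penalty itself is defined in the thesis; as a hedged sketch, the hard-constraint version of mutual exclusivity (at most one nonzero per group) admits a very simple Euclidean projection that keeps each group's largest-magnitude entry and zeros the rest:

```python
import numpy as np

def project_megs(x, groups):
    # Euclidean projection onto vectors with at most one nonzero per
    # group (the mutual-exclusivity constraint): within each group,
    # keep the largest-magnitude entry and zero out the others.
    out = np.zeros_like(x)
    for g in groups:
        j = g[int(np.argmax(np.abs(x[g])))]
        out[j] = x[j]
    return out

x = np.array([0.2, -3.0, 1.0, 0.5, 0.4, 0.0])
groups = [[0, 1, 2], [3, 4, 5]]
y = project_megs(x, groups)
```

Such a projection can serve as the prox step in a projected-gradient loop, which is one standard way to enforce structure like this in an inverse problem.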

10 
The Unreasonable Usefulness of Approximation by Linear Combination. Lewis, Cannada Andrew. 05 July 2018
Through the exploitation of data sparsity (a catch-all term for savings gained from a variety of approximations), it is possible to reduce the computational cost of accurate electronic structure calculations to linear scaling, meaning that the total time to solution grows at the same rate as the number of particles being correlated. Multiple techniques for exploiting data sparsity are discussed, with a focus on those that can be systematically improved by tightening numerical parameters, such that as the parameter approaches zero the approximation becomes exact. These techniques are first applied to Hartree-Fock theory; we then attempt to design a linear-scaling, massively parallel electron-correlation strategy based on second-order perturbation theory. / Ph. D.
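One generic instance of such a systematically improvable approximation is norm-based screening: skip block contributions whose rigorous bound falls below a threshold τ, and recover the exact result as τ → 0. The snippet below is a toy analogue on a block matrix product, not the electronic-structure code itself; the block-sparse operators are invented for illustration.

```python
import numpy as np

def screened_matmul(A, B, bs, tau):
    # Blocked matrix product that skips block pairs whose contribution
    # is provably below tau, using ||Aik @ Bkj||_F <= ||Aik||_F * ||Bkj||_F.
    # Tightening tau toward zero makes the result exact.
    n, m = A.shape
    p = B.shape[1]
    C = np.zeros((n, p))
    skipped = 0
    for i in range(0, n, bs):
        for k in range(0, m, bs):
            Aik = A[i:i+bs, k:k+bs]
            na = np.linalg.norm(Aik)
            for j in range(0, p, bs):
                Bkj = B[k:k+bs, j:j+bs]
                if na * np.linalg.norm(Bkj) <= tau:
                    skipped += 1          # negligible: safe to skip
                    continue
                C[i:i+bs, j:j+bs] += Aik @ Bkj
    return C, skipped

rng = np.random.default_rng(4)
n = 16
A = np.kron(np.eye(4), np.ones((4, 4)))   # block-diagonal: many zero blocks
B = np.kron(rng.random((4, 4)) > 0.5, np.ones((4, 4))) * rng.standard_normal((n, n))
C, skipped = screened_matmul(A, B, bs=4, tau=1e-12)
```

With a tiny τ only the exactly negligible blocks are skipped, so the screened product matches the dense one while avoiding a large fraction of the block multiplies; in the quantum-chemistry setting the same pattern lets the cost scale with the number of significant interactions rather than all pairs.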
