11 |
A framework for fast and efficient algorithms for sparse recovery problems / CUHK electronic theses & dissertations collection, January 2015
The sparse recovery problem aims to reconstruct a high-dimensional sparse signal from its low-dimensional measurements given a carefully designed measuring process. This thesis presents a framework for graphical-model-based sparse recovery algorithms. Different measurement processes give rise to different specific problems. The sparse recovery problems studied in this thesis include compressive sensing, network tomography, group testing and compressive phase retrieval. For compressive sensing and network tomography, the measurement processes are linear (freely chosen and topology-constrained measurements, respectively). For group testing and compressive phase retrieval, the processes are non-linear (disjunctive and intensity measurements, respectively). For all the problems in this thesis, we present algorithms whose measurement structures are based on bipartite graphs. By studying the properties of bipartite graphs and designing novel measurement processes and corresponding decoding algorithms, the number of measurements and the computational decoding complexity of every algorithm are information-theoretically either order-optimal or nearly order-optimal. / The sparse recovery problem aims to reconstruct a high-dimensional sparse signal from carefully designed low-dimensional measurements. This thesis proposes a framework for graphical-model-based sparse recovery algorithms. The problems studied include compressive sensing, network tomography, group testing and compressive phase retrieval. For compressive sensing and network tomography, the measurement processes are linear (unconstrained and topology-constrained measurements, respectively). For group testing and compressive phase retrieval, the measurement processes are non-linear (disjunctive and intensity measurements, respectively). For these problems, the algorithms proposed in this thesis use measurement structures based on bipartite graphs. By studying the properties of bipartite graphs, we propose novel measurement schemes and corresponding decoding algorithms whose number of measurements and decoding complexity are information-theoretically optimal or close to optimal. / Cai, Sheng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2015. / Includes bibliographical references (leaves 229-247). / Abstracts also in Chinese. / Title from PDF title page (viewed on 05 October 2016). / Detailed summary in vernacular field only.
|
12 |
Restricted isometry constants in compressed sensing / Bah, Bubacarr, January 2012
Compressed Sensing (CS) is a framework in which we measure data through a non-adaptive linear mapping with far fewer measurements than the ambient dimension of the data. This is made possible by exploiting the inherent structure (simplicity) of the data being measured. The central issues in this framework are the design and analysis of the measurement operator (matrix) and of the recovery algorithms. Restricted isometry constants (RIC) of the measurement matrix are the most widely used tool for the analysis of CS recovery algorithms. The subscripts 1 and 2 below reflect the two RIC variants developed in the CS literature; they refer to the ℓ1-norm and ℓ2-norm respectively. The RIC2 of a matrix A measures how close the action of A on vectors with few nonzero entries is to an isometry, measured in the ℓ2-norm. This, and related quantities, provide a mechanism by which standard eigen-analysis can be applied to topics relying on sparsity. Specifically, the upper and lower RIC2 of a matrix A of size n × N measure, respectively, how far above unity (one) the largest squared singular value and how far below unity the smallest squared singular value can be, over all (N choose k) submatrices formed by taking k columns from A. Calculation of the RIC2 is intractable for most matrices due to its combinatorial nature; however, many random matrices typically have bounded RIC2 in some range of problem sizes (k, n, N). We provide the best known bound on the RIC2 for Gaussian matrices, which is also the smallest known bound on the RIC2 for any large rectangular matrix. Our results build on the prior bounds of Blanchard, Cartis, and Tanner in Compressed Sensing: How sharp is the Restricted Isometry Property?, with improvements achieved by grouping submatrices that share a substantial number of columns. RIC2 bounds have been presented for a variety of random matrices, matrix dimensions and sparsity ranges. We provide explicit formulae for RIC2 bounds of n × N Gaussian matrices with sparsity k in three settings: a) n/N fixed and k/n approaching zero, b) k/n fixed and n/N approaching zero, and c) n/N approaching zero with k/n decaying inverse-logarithmically in N/n; in these three settings the RICs a) decay to zero, b) become unbounded (or approach inherent bounds), and c) approach a non-zero constant. Implications of these results for RIC2-based analysis of CS algorithms are presented. The RIC2 of sparse mean-zero random matrices can be bounded using concentration bounds for Gaussian matrices. However, this RIC2 approach does not capture the benefits of the sparse matrices, and consequently gives pessimistic bounds. RIC1 is a variant of RIC2 in which the nearness to an isometry is measured in the ℓ1-norm; it both better captures the structure of sparse matrices and allows for the analysis of non-mean-zero matrices. We consider a probabilistic construction of sparse random matrices where each column has a fixed number of non-zeros whose row indices are drawn uniformly at random. These matrices have a one-to-one correspondence with the adjacency matrices of fixed-left-degree expander graphs. We present formulae for the expected cardinality of the set of neighbours for these graphs, and present a tail bound on the probability that this cardinality will be less than the expected value. Deducible from this bound is a similar bound for the expansion of the graph, which is of interest in many applications.
These bounds are derived through a more detailed analysis of collisions in unions of sets using a dyadic splitting technique. This bound allows for quantitative sampling theorems on the existence of expander graphs and of the sparse random matrices we consider, and also quantitative CS sampling theorems when using sparse non-mean-zero measurement matrices.
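As a concrete illustration of the RIC2 definition above, the following sketch computes the upper and lower RIC2 of a small Gaussian matrix by brute-force enumeration of all k-column submatrices. The 1/√n column scaling and the problem sizes are illustrative choices, and the enumeration is only feasible for tiny (k, n, N), which is exactly the combinatorial intractability the abstract mentions.

```python
import itertools
import numpy as np

def ric2(A, k):
    """Brute-force lower/upper RIC2 of A at sparsity k (feasible only for tiny N)."""
    n, N = A.shape
    lower = upper = 0.0
    for cols in itertools.combinations(range(N), k):
        s = np.linalg.svd(A[:, list(cols)], compute_uv=False)  # singular values, descending
        upper = max(upper, s[0] ** 2 - 1.0)   # deviation above one of the largest squared singular value
        lower = max(lower, 1.0 - s[-1] ** 2)  # deviation below one of the smallest squared singular value
    return lower, upper

rng = np.random.default_rng(0)
n, N, k = 20, 30, 3                                   # illustrative problem sizes
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, N))   # i.i.d. N(0, 1/n) Gaussian entries
print(ric2(A, k))
```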
|
13 |
Curvelet transform with adaptive tiling / Al Marzouqi, Hasan, 12 January 2015
In this dissertation we address the problem of adapting frequency domain tiling using the curvelet transform as the basis algorithm.
The optimal tiling, for a given class of images, is computed using denoising performance as the cost function. The major adaptations considered are: the number of scale decompositions, angular decompositions per scale/quadrant, and scale locations. A global optimization algorithm combining the three adaptations is proposed. Denoising performance of adaptive curvelets is tested on seismic and face data sets. The developed adaptation procedure is applied to a number of different application areas. Adaptive curvelets are used to solve the problem of sparse data recovery from subsampled measurements. Performance comparison with default curvelets demonstrates the effectiveness of the adaptation scheme. Adaptive curvelets are also used in the development of a novel image similarity index. The developed measure succeeds in retrieving correct matches from a variety of textured materials. Furthermore, we present an algorithm for classifying different types of seismic activities.
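A minimal sketch of the adaptation idea, with denoising performance as the cost function over tiling parameters, might look as follows. Here denoise_with_tiling is a hypothetical stand-in for a curvelet-domain denoiser parameterized by the tiling; the dissertation additionally adapts scale locations and uses a global optimization rather than this exhaustive search.

```python
import itertools
import numpy as np

def psnr(ref, est):
    """Peak signal-to-noise ratio of an estimate against a reference image."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def best_tiling(noisy, clean, denoise_with_tiling, scales=(3, 4, 5), angles=(8, 16, 32)):
    """Pick the (n_scales, n_angles) tiling that maximizes denoising PSNR.

    denoise_with_tiling(noisy, n_scales, n_angles) is a hypothetical callable
    wrapping a curvelet denoiser for the given frequency-domain tiling.
    """
    return max(itertools.product(scales, angles),
               key=lambda p: psnr(clean, denoise_with_tiling(noisy, *p)))
```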
|
14 |
Robust Coil Combination for bSSFP MRI and the Ordering Problem for Compressed Sensing / McKibben, Nicholas Brian, 01 August 2019
Balanced steady-state free precession (bSSFP) is a fast, SNR-efficient magnetic resonance (MR) imaging sequence that suffers from dark banding artifacts due to its off-resonance dependence. These banding artifacts are difficult to mitigate at high field strengths and in the presence of metallic implants. Recent developments in parametric modelling of bSSFP have led to advances in banding removal and parameter estimation using multiple phase-cycled bSSFP. As the number of coils in receivers increases, more storage and processing are required. Coil combination is used to reduce the dimensionality of these datasets, which otherwise might be prohibitively large or computationally intractable for clinical applications. However, our recent work demonstrates that some combination methods are problematic in conjunction with elliptical phase-cycled bSSFP. This thesis will present a method for phase estimation of coil-combined multiple phase-cycled bSSFP to reduce storage and computational requirements for elliptical models. This method is general and works across many coil combination techniques popular in MR reconstruction, including the geometric coil combine and adaptive coil combine algorithms. A viable phase estimate for the sum-of-squares combination is also demonstrated for computationally efficient dimension reduction. Simulations, phantom experiments, and in vivo MR imaging are performed to validate the proposed phase estimates. Compressed sensing (CS) is an increasingly important acquisition and reconstruction framework. CS MR allows for reconstruction of datasets sampled well under the Nyquist rate, and its application is natural in MR, where images are often sparse under common linear transforms. An extension of this framework is the ordering problem for CS, first introduced in 2008. Although CS assumes that images are sparse in some specified transform domain, they might not be maximally sparse. For example, a signal ordered such that it is monotonic is maximally sparse in the finite-differences domain. Knowledge of the correct ordering of an image's pixels can therefore lead to much sparser and more powerful regularizers for the CS inverse problem. However, this problem has met with little interest due to the strong dependence on initial image estimates. This thesis will also present an algorithm for estimating the optimal ordering of a signal such that it is maximally sparse under an arbitrary linear transformation, without relying on any prior image estimate. The algorithm is combinatoric in nature and feasible for small signals of interest such as T1-mapping time curves. Proof-of-concept simulations are performed to validate the performance of the algorithm. Computationally feasible modifications for in vivo cardiac T1 mapping are also demonstrated.
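The remark above, that a monotonically ordered signal is maximally sparse (compressible) under finite differences, can be checked directly. The short sketch below compares the ℓ1 norm of the finite differences of a toy signal in its original order and after sorting; for a monotone ordering this norm collapses to the signal's range (max minus min).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=256)                       # toy signal, e.g. a T1-mapping time curve

tv_original = np.abs(np.diff(x)).sum()         # total variation in the acquired order
tv_sorted = np.abs(np.diff(np.sort(x))).sum()  # equals x.max() - x.min() for a monotone ordering

print(f"||Dx||_1, original order: {tv_original:.1f}")
print(f"||Dx||_1, sorted order:   {tv_sorted:.1f}")
```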
|
15 |
Joint CT-MRI Image Reconstruction / Cui, Xuelin, 28 November 2018
Modern clinical diagnoses and treatments have become increasingly reliant on medical imaging techniques. In turn, medical images are required to provide more accurate and detailed information than ever. Aside from the evolution of hardware and software, multimodal imaging techniques offer a promising solution to produce higher-quality images by fusing medical images from different modalities. This strategy utilizes more structural and/or functional image information, thereby allowing clinical results to be more comprehensive and better interpreted. Since their inception, multimodal imaging techniques have received a great deal of attention for achieving enhanced imaging performance. In this work, a novel joint reconstruction framework using sparse computed tomography (CT) and magnetic resonance imaging (MRI) data is developed and evaluated. The method proposed in this study is part of the planned joint CT-MRI system, which assembles CT and MRI subsystems into a single entity. The CT and MRI images are synchronously acquired and registered from the hybrid CT-MRI platform. However, since their image data are highly undersampled, analytical methods, such as filtered backprojection, are unable to generate images of sufficient quality. To overcome this drawback, we resort to compressed sensing techniques, which employ sparse priors enforced through L₁-norm minimization. To utilize multimodal information, a projection distance is introduced and tuned to tailor the texture and pattern of the final images. Specifically, CT and MRI images are alternately reconstructed using the updated multimodal results calculated at the latest step of the iterative optimization algorithm. This method exploits the structural similarities shared by the CT and MRI images to achieve better reconstruction quality. The improved performance of the proposed approach is demonstrated using a pair of undersampled CT-MRI body images and a pair of undersampled CT-MRI head images. These images are tested using joint reconstruction, analytical reconstruction, and independent reconstruction without using multimodal imaging information. Results show that the proposed method improves signal-to-noise ratio (SNR) by about 5 dB and structural similarity measurements by nearly 10% compared to independent reconstruction methods. It offers quality similar to fully sampled analytical reconstruction, yet requires as few as 25 projections for CT and a 30% sampling rate for MRI. It is concluded that structural similarities and correlations residing in images from different modalities are useful for mutually promoting the quality of image reconstruction. / Ph. D. / Medical imaging techniques play a central role in modern clinical diagnoses and treatments. Consequently, there is a constant demand to increase the overall quality of medical images. Since their inception, multimodal imaging techniques have received a great deal of attention for achieving enhanced imaging performance. Multimodal imaging techniques can provide more detailed diagnostic information by fusing medical images from different imaging modalities, thereby allowing clinical results to be more comprehensive and easier to interpret.
A new form of multimodal imaging technique, which combines the imaging procedures of computed tomography (CT) and magnetic resonance imaging (MRI), is known as "omni-tomography." Computed tomography and magnetic resonance imaging are among the most commonly used medical imaging techniques today, and their intrinsic properties are complementary. For example, computed tomography performs well for bones, whereas magnetic resonance imaging excels at contrasting soft tissues. Therefore, a multimodal imaging system built upon the fusion of these two modalities can potentially provide much more information to improve clinical diagnoses. However, the planned omni-tomography systems face enormous challenges, such as the limited ability to perform image reconstruction due to mechanical and hardware restrictions that result in significant undersampling of the raw data.
Image reconstruction is a procedure required by both computed tomography and magnetic resonance imaging to convert raw data into final images. A general requirement for producing an image of decent quality is that the number of raw data samples be sufficient. Therefore, undersampling on the omni-tomography system can cause significant degradation of image quality, or artifacts, after image reconstruction. To overcome this drawback, we resort to compressed sensing techniques, which exploit the sparsity of the medical images, to perform iterative image reconstruction for both computed tomography and magnetic resonance imaging. The sparsity of the images is found by applying a sparse transform, such as the discrete gradient transform or the wavelet transform, in the image domain. With this sparsity and the undersampled raw data, an iterative algorithm can largely compensate for the data inadequacy and reconstruct the final images from the undersampled raw data with minimal loss of quality.
In addition, a novel "projection distance" is created to perform a joint reconstruction, which further improves the quality of the reconstructed images. Specifically, the projection distance exploits the structural similarities shared between the computed tomography and magnetic resonance images so that the insufficiency of raw data caused by undersampling is further accounted for. The improved performance of the proposed approach is demonstrated using a pair of undersampled body images and a pair of undersampled head images, each of which consists of a computed tomography image and its magnetic resonance imaging counterpart. These images are tested using the joint reconstruction method proposed in this work, conventional reconstructions such as filtered backprojection and Fourier-transform reconstruction, and a reconstruction strategy that does not use multimodal imaging information (independent reconstruction). The results show that the proposed method addresses these challenges by significantly improving the image quality obtained from highly undersampled raw data. In particular, it improves signal-to-noise ratio by about 5 dB and structural similarity measurements by nearly 10% compared to the other methods. It achieves similar image quality using less than 5% of the X-ray dose for computed tomography and a 30% sampling rate for magnetic resonance imaging. It is concluded that, by using compressed sensing techniques and exploiting structural similarities, the planned joint computed tomography and magnetic resonance imaging system can perform outstanding imaging tasks with highly undersampled raw data.
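As a rough illustration of the alternating scheme described above, the sketch below updates each modality from its own undersampled data plus a quadratic coupling to the other modality's current estimate. The coupling term, the identity sparsifying transform, and all parameter values are simplifying assumptions standing in for the thesis's projection-distance formulation, not its exact method.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def joint_recon(A_ct, y_ct, A_mr, y_mr, lam=1e-3, mu=0.1, n_iter=300):
    """Alternating proximal-gradient sketch of joint two-modality reconstruction."""
    n = A_ct.shape[1]
    x_ct = np.zeros(n)
    x_mr = np.zeros(n)
    # conservative step size from the larger operator norm plus the coupling weight
    L = max(np.linalg.norm(A_ct, 2), np.linalg.norm(A_mr, 2)) ** 2 + mu
    step = 1.0 / L
    for _ in range(n_iter):
        g_ct = A_ct.T @ (A_ct @ x_ct - y_ct) + mu * (x_ct - x_mr)  # data term + cross-modality coupling
        x_ct = soft(x_ct - step * g_ct, step * lam)                # sparsity-promoting proximal step
        g_mr = A_mr.T @ (A_mr @ x_mr - y_mr) + mu * (x_mr - x_ct)
        x_mr = soft(x_mr - step * g_mr, step * lam)
    return x_ct, x_mr
```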
|
16 |
Distance-Weighted Regularization for Compressed-Sensing Video Recovery and Supervised Hyperspectral Classification / Tramel, Eric W, 15 December 2012
The compressed sensing (CS) model of signal processing, while offering many unique advantages in terms of low-cost sensor design, poses interesting challenges for both signal acquisition and recovery, especially for signals of large size. In this work, we investigate how CS might be applied practically and efficiently in the context of natural video. We make use of a CS video acquisition approach in line with the popular single-pixel camera framework of blind, nonadaptive, random sampling, while proposing new approaches for the subsequent recovery of the video signal that leverage interframe redundancy to minimize recovery error. We introduce a method of approximation, which we term multihypothesis (MH) frame prediction, to create accurate frame predictions by comparing hypotheses drawn from the spatial domain of chosen reference frames to the non-overlapping, block-by-block CS measurements of subsequent frames. We accomplish this frame prediction via a novel distance-weighted Tikhonov regularization technique. We verify through our experiments that MH frame prediction via distance-weighted regularization provides state-of-the-art performance for the recovery of natural video sequences from blind CS measurements. The distance-weighted regularization we propose need not be limited to frame prediction for CS video recovery; it may also be used in a variety of contexts where approximations must be generated from a set of hypotheses or training data. To show this, we apply our technique to supervised hyperspectral image (HSI) classification via a novel classifier we term the nearest regularized subspace (NRS) classifier. We show that the distance-weighted regularization used in the NRS method provides greater classification accuracy than state-of-the-art classifiers for supervised HSI classification tasks. We also propose two modifications to the core NRS classifier to improve its robustness to variation of input parameters and to further increase its classification accuracy.
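The core computation behind the distance-weighted regularization described above, a Tikhonov-regularized least-squares fit whose penalty grows with the distance between each hypothesis and the measurements, can be sketched as follows. The specific distance definition and the value of lam are illustrative assumptions rather than the dissertation's exact choices.

```python
import numpy as np

def distance_weighted_weights(H, y, lam=0.1):
    """Distance-weighted Tikhonov weights for a set of hypotheses.

    H : (m, K) matrix whose K columns are the hypotheses mapped into the
        measurement domain; y : (m,) CS measurements of the current block.
    The diagonal penalty grows with each hypothesis's distance from y,
    so closer hypotheses receive larger weights.
    """
    d = np.linalg.norm(H - y[:, None], axis=0)            # distance of y to each hypothesis
    w = np.linalg.solve(H.T @ H + (lam ** 2) * np.diag(d ** 2), H.T @ y)
    return w  # the block prediction is the spatial-domain hypotheses combined with these weights
```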
|
17 |
Compressive Sensing Analog Front End Design in 180 nm CMOS Technology / Shah, Julin Mukeshkumar, 27 August 2015
No description available.
|
18 |
Numerical algorithms for the mathematics of information / Mendoza-Smith, Rodrigo, January 2017
This thesis presents a series of algorithmic innovations in Combinatorial Compressed Sensing and Persistent Homology. The unifying strategy across these contributions is to translate structural patterns in the underlying data into specific algorithmic designs in order to achieve better guarantees in computational complexity, the ability to operate on more complex data, highly efficient parallelisations, or any combination of these.
|
19 |
Compressed Sensing Reconstruction Using Structural Dependency Models / Kim, Yookyung, January 2012
Compressed sensing (CS) theory has demonstrated that sparse signals can be reconstructed from far fewer measurements than suggested by the Nyquist sampling theorem. CS has received great attention recently as an alternative to the current paradigm of sampling followed by compression. Initial CS work operated under the implicit assumption that the sparsity-domain coefficients are independently distributed. Recent results, however, show that exploiting statistical dependencies in sparse signals improves the recovery performance of CS. This dissertation proposes model-based CS reconstruction techniques. Statistical dependency models for several CS problems are proposed and incorporated into different CS algorithms. These models allow the incorporation of a priori information into the CS reconstruction problems. First, we propose the use of a Bayes least squares-Gaussian scale mixtures (BLS-GSM) model for CS recovery of natural images. The BLS-GSM model is able to exploit dependencies inherent in wavelet coefficients. This model is incorporated into several recent CS algorithms. The resulting methods significantly reduce reconstruction errors and/or the number of measurements required to obtain a desired reconstruction quality, when compared to state-of-the-art model-based CS methods in the literature. The model-based CS reconstruction techniques are then extended to video. In addition to spatial dependencies, video sequences exhibit significant temporal dependencies as well. In this dissertation, a model for jointly exploiting spatial and temporal dependencies in video CS is also proposed. The proposed method enforces structural self-similarity of image blocks within each frame as well as across neighboring frames. By sparsely representing collections of similar blocks, dominant image structures are retained while noise and incoherent undersampling artifacts are eliminated. A new video CS algorithm that incorporates this model is then proposed. The proposed algorithm iterates between enforcement of the self-similarity model and consistency with the measurements. By enforcing measurement consistency in the residual domain, sparsity is increased and CS reconstruction performance is enhanced. The proposed approach exhibits superior subjective image quality and significantly improves peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Finally, a model-based CS framework is proposed for super-resolution (SR) reconstruction. The SR reconstruction is formulated as a CS problem and a self-similarity model is incorporated into the reconstruction. The proposed model enforces similarity of collections of blocks through shrinkage of their transform-domain coefficients. A sharpening operation is performed in the transform domain to emphasize edge recovery. The proposed method is compared with state-of-the-art SR techniques and provides high-quality SR images, both quantitatively and subjectively.
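The iterate described above, alternating enforcement of the structural model with restoration of measurement consistency in the residual domain, has the following generic shape. Here denoise stands in for the block-matching self-similarity model, and the orthonormal-row assumption on A that makes the consistency step an exact projection is an added simplification, not part of the dissertation's algorithm.

```python
import numpy as np

def model_then_consistency(A, y, denoise, n_iter=50):
    """Alternate a structural-model step with a measurement-consistency step.

    A : (m, n) sensing matrix, assumed to satisfy A @ A.T = I so that the
        residual feedback below is an exact projection onto {x : A x = y}.
    denoise : callable image -> image standing in for the self-similarity model.
    """
    x = A.T @ y                          # initial back-projection of the measurements
    for _ in range(n_iter):
        x = denoise(x)                   # enforce the structural (self-similarity) model
        x = x + A.T @ (y - A @ x)        # re-impose consistency with the measurements
    return x
```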
|
20 |
Compressed sensing and dimensionality reduction for unsupervised learning / Échantillonnage compressé et réduction de dimension pour l'apprentissage non supervisé / Bourrier, Anthony, 13 May 2014
This thesis is motivated by the prospect of bringing signal processing and statistical learning closer together, and more specifically by the exploitation of compressed sensing techniques to reduce the cost of learning tasks. After recalling the basics of compressed sensing and mentioning some data analysis techniques built on similar ideas, we propose a framework for estimating the parameters of probability density mixtures in which the training data are compressed into a fixed-size representation. We instantiate this framework on an isotropic Gaussian mixture model. This proof of concept suggests the existence of theoretical signal-reconstruction guarantees for models beyond the usual sparse vector model. In a second step, we therefore study the generalization of stability results for linear inverse problems to completely general signal models, and propose conditions under which reconstruction guarantees can be given in a general setting. Finally, we consider an approximate nearest-neighbour search problem in which signatures of the database vectors are computed in order to reduce the complexity of the search. In the setting where the distance of interest derives from a Mercer kernel, we propose to combine an explicit embedding of the data with a signature computation step, which notably leads to a more accurate approximate search. / This thesis is motivated by the perspective of connecting compressed sensing and machine learning, and more particularly by the exploitation of compressed sensing techniques to reduce the cost of learning tasks. After a reminder of compressed sensing and a quick description of data analysis techniques in which similar ideas are exploited, we propose a framework for estimating probability density mixture parameters in which the training data is compressed into a fixed-size representation. We instantiate this framework on an isotropic Gaussian mixture model. This proof of concept suggests the existence of theoretical guarantees for reconstructing signals belonging to models beyond usual sparse models. We therefore study generalizations of stability results for linear inverse problems for very general models of signals, and propose conditions under which reconstruction guarantees can be given in a general framework. Finally, we consider an approximate nearest neighbor search problem exploiting signatures of the database vectors in order to save resources during the search step. In the case where the considered distance derives from a Mercer kernel, we propose to combine an explicit embedding of data followed by a signature computation step, which notably leads to a more accurate approximate search.
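A minimal sketch of the fixed-size representation idea: the training set is summarized by averaged random complex exponentials, a sketch whose size does not grow with the number of samples; mixture parameters would then be fit by matching the model's expected sketch to this empirical one (not shown). The Gaussian frequency draw and the sketch size below are illustrative choices.

```python
import numpy as np

def empirical_sketch(X, W):
    """Average of complex exponentials exp(i <w_j, x>) over the data set X.

    X : (N, d) training data; W : (m, d) random frequency vectors.
    The result has fixed size m, independent of the number of samples N.
    """
    return np.exp(1j * X @ W.T).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 2))   # toy training set of 10,000 points in R^2
W = rng.normal(size=(64, 2))      # m = 64 random frequencies (illustrative scale)
z = empirical_sketch(X, W)        # 64 complex numbers summarize the whole set
print(z.shape)
```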
|