About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Iterative synthetic aperture radar imaging algorithms

Kelly, Shaun Innes January 2014 (has links)
Synthetic aperture radar is an important tool in a wide range of civilian and military imaging applications, primarily because of its ability to image in all weather conditions, during both the day and the night, unlike optical imaging systems. A synthetic aperture radar system contains a step which is not present in an optical imaging system: image formation. This is required because the acquired data from the radar sensor does not directly correspond to the image. Instead, to form an image, the system must solve an inverse problem. In conventional scenarios, this inverse problem is relatively straightforward and a matched-filter-based algorithm produces an image of suitable quality. However, there are a number of interesting scenarios where this is not the case. Scenarios where standard image formation algorithms are unsuitable include systems with data undersampling, errors in the system observation model, and data that is corrupted by radio frequency interference. Image formation in these scenarios forms the topic of this thesis, and a number of iterative algorithms are proposed to achieve image formation. The motivation for these proposed algorithms comes primarily from the field of compressed sensing, which considers the recovery of signals with a low-dimensional structure. The first contribution of this thesis is the development of fast algorithms for the system observation model and its adjoint. These algorithms are required by large-scale gradient-based iterative algorithms for image formation. The proposed algorithms are based on existing fast back-projection algorithms; however, a new decimation strategy is proposed which is more suitable for some applications. The second contribution is the development of a framework for iterative near-field image formation, which uses the proposed fast algorithms. It is shown that the framework can be used, in some scenarios, to improve the visual quality of images formed from fully sampled data and undersampled data, when compared to images formed using matched-filter-based algorithms. The third contribution concerns errors in the system observation model. Algorithms that correct these errors are commonly referred to as autofocus algorithms. It is shown that conventional autofocus algorithms, which work as a post-processor on the formed image, are unsuitable for undersampled data. Instead, an autofocus algorithm is proposed which corrects errors within the iterative image formation procedure. The proposed algorithm is provably stable and convergent, with a faster convergence rate than previous approaches. The final contribution is an algorithm for ultra-wideband synthetic aperture radar image formation. Due to the large spectrum over which the ultra-wideband signal is transmitted, there are likely to be many other users operating within the same spectrum. These users can produce significant radio frequency interference (RFI) which will corrupt the received data. The proposed algorithm uses knowledge of the RFI spectrum to minimise the effect of the RFI on the formed image.
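The abstract does not give the algorithms themselves; as a rough illustration of the gradient-based iterative formation loop such methods build on, here is a minimal Landweber-style sketch in which the observation model `A` and its adjoint are stand-in dense matrices (the thesis's contribution is precisely to replace these two products with fast back-projection-based operators).

```python
import numpy as np

def landweber_image_formation(A, y, step=1.0, iters=200):
    """Minimal gradient-descent (Landweber) sketch for iterative image formation.

    Minimises ||A x - y||^2 using only products with the observation
    model A and its adjoint A^H -- the two operations that fast
    back-projection algorithms are designed to accelerate.
    """
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        residual = y - A @ x                     # predicted data vs. measurements
        x = x + step * (A.conj().T @ residual)   # adjoint maps data -> image
    return x

# Toy usage with a random stand-in observation model.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100)) + 1j * rng.standard_normal((200, 100))
A /= np.linalg.norm(A, 2)                        # normalise so step=1 is stable
x_true = rng.standard_normal(100)
x_hat = landweber_image_formation(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```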
52

Quantitative analysis of algorithms for compressed signal recovery

Thompson, Andrew J. January 2013 (has links)
Compressed Sensing (CS) is an emerging paradigm in which signals are recovered from undersampled nonadaptive linear measurements taken at a rate proportional to the signal's true information content as opposed to its ambient dimension. The resulting problem consists in finding a sparse solution to an underdetermined system of linear equations. It has now been established, both theoretically and empirically, that certain optimization algorithms are able to solve such problems. Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2007), which is the focus of this thesis, is an established CS recovery algorithm which is known to be effective in practice, both in terms of recovery performance and computational efficiency. However, theoretical analysis of IHT to date suffers from two drawbacks: state-of-the-art worst-case recovery conditions have not yet been quantified in terms of the sparsity/undersampling trade-off, and also there is a need for average-case analysis in order to understand the behaviour of the algorithm in practice. In this thesis, we present a new recovery analysis of IHT, which considers the fixed points of the algorithm. In the context of arbitrary matrices, we derive a condition guaranteeing convergence of IHT to a fixed point, and a condition guaranteeing that all fixed points are 'close' to the underlying signal. If both conditions are satisfied, signal recovery is therefore guaranteed. Next, we analyse these conditions in the case of Gaussian measurement matrices, exploiting the realistic average-case assumption that the underlying signal and measurement matrix are independent. We obtain asymptotic phase transitions in a proportional-dimensional framework, quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. By generalizing the notion of fixed points, we extend our analysis to the variable stepsize Normalised IHT (NIHT) (Blumensath and Davies, 2010). For both stepsize schemes, comparison with previous results within this framework shows a substantial quantitative improvement. We also extend our analysis to a related algorithm which exploits the assumption that the underlying signal exhibits tree-structured sparsity in a wavelet basis (Baraniuk et al., 2010). We obtain recovery conditions for Gaussian matrices in a simplified proportional-dimensional asymptotic, deriving bounds on the oversampling rate relative to the sparsity for which recovery is guaranteed. Our results, which are the first in the phase transition framework for tree-based CS, show a further significant improvement over results for the standard sparsity model. We also propose a dynamic programming algorithm which is guaranteed to compute an exact tree projection in low-order polynomial time.
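For context, a minimal sketch of the plain IHT iteration analysed above, under the usual assumptions (Gaussian measurement matrix, exactly s-sparse signal); the conservative step-size default here is a stand-in, not the thesis's NIHT scheme. Note that the true signal is always a fixed point of this map when the measurements are exact, which is what makes the fixed-point analysis natural.

```python
import numpy as np

def iht(A, y, s, iters=500, step=None):
    """Iterative Hard Thresholding: x <- H_s(x + step * A^T (y - A x)).

    H_s keeps the s largest-magnitude entries and zeroes the rest.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative stable step
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))       # gradient step on ||y - Ax||^2
        x[np.argsort(np.abs(x))[:-s]] = 0.0      # H_s: keep s largest magnitudes
    return x

# Toy usage: recover a 5-sparse vector from 100 Gaussian measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 400)) / 10.0       # roughly unit-norm columns
x_true = np.zeros(400)
x_true[rng.choice(400, 5, replace=False)] = rng.standard_normal(5)
print(np.linalg.norm(iht(A, A @ x_true, s=5) - x_true))
```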
53

Tomographie cardiaque en angiographie rotationnelle / Cardiac C-arm computed tomography

Mory, Cyril 26 February 2014 (has links)
A C-arm is an X-ray imaging device used for minimally invasive interventional radiology procedures. Most modern C-arm systems are capable of rotating around the patient while acquiring radiographic images, from which a 3D reconstruction can be performed. This technique is called C-arm computed tomography (C-arm CT) and is used in clinical routine to image static organs. However, its extension to imaging of the beating heart or the free-breathing thorax is still a challenging research problem. This thesis is focused on human cardiac C-arm CT. It proposes several new reconstruction methods and compares them to the current state of the art, both on a digital phantom and on real data acquired on several patients. The first method, ECG-gated iterative FDK deconvolution, consists in filtering out the streak artifacts from an ECG-gated FDK reconstruction in an iterative deconvolution scheme. It performs better than existing deconvolution-based methods, but it is still insufficient for human cardiac C-arm CT. Two 3D reconstruction methods based on compressed sensing are proposed: total variation-regularized 3D reconstruction and wavelet-regularized 3D reconstruction. They are compared to the current state-of-the-art method, called prior image constrained compressed sensing (PICCS), and exhibit results similar to those of PICCS. Finally, two 3D+time reconstruction methods are presented. They have slightly different mathematical formulations but are based on the same principles: using a motion mask to restrict the movement to the area containing the heart and the aorta, and enforcing smoothness of the solution in both space and time. One of these methods outperforms PICCS by producing results that are sharper and more consistent throughout the cardiac cycle.
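As a small hedged illustration of the retrospective ECG gating that the FDK-based method relies on (the window shape and all parameters below are assumptions for illustration, not taken from the thesis), each projection is weighted by the cyclic distance of its cardiac phase from a target phase, so that only projections near the chosen cardiac phase contribute to the reconstruction.

```python
import numpy as np

def ecg_gating_weights(phases, target_phase=0.4, width=0.1):
    """Weight projections by cardiac-phase proximity (cos^2 window).

    phases: cardiac phase of each projection, in [0, 1).
    Projections farther than `width` (in cyclic phase distance) from
    the target phase get zero weight.
    """
    d = np.abs(phases - target_phase)
    d = np.minimum(d, 1.0 - d)                    # cyclic distance on [0, 1)
    return np.where(d < width, np.cos(np.pi * d / (2 * width)) ** 2, 0.0)

# Toy usage: one phase value per acquired projection.
phases = np.linspace(0.0, 1.0, 300, endpoint=False)
w = ecg_gating_weights(phases)
print(w.sum() / w.size)   # fraction of angular data effectively retained
```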
54

Compressed sensing for error correction on real-valued vectors

Tordsson, Pontus January 2019 (has links)
Compressed sensing (CS) is a relatively new branch of mathematics with very interesting applications in signal processing, statistics and computer science. This thesis presents some theory of compressed sensing, which allows us to recover (high-dimensional) sparse vectors from (low-dimensional) compressed measurements by solving the L1-minimization problem. A possible application of CS to the problem of error correction is also presented, where the sparse vectors represent arbitrary noise. Successful sparse recovery by L1-minimization relies on certain properties of rectangular matrices, but these matrix properties are extremely subtle and difficult to verify numerically. Therefore, to get an idea of how sparse (or dense) the errors can be, numerical simulations of error correction were performed. These simulations show the performance of error correction with respect to various levels of error sparsity and various matrix dimensions. It turns out that error-correction performance degrades more slowly for low matrix dimensions than for high matrix dimensions, while for sufficiently sparse errors, high matrix dimensions offer a higher likelihood of guaranteed error correction.
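A minimal sketch of the decoding scheme described above, in the standard Candès–Tao-style setup: the codeword is `G x` for a tall matrix `G`, the syndrome `B y` depends only on the sparse error (since `B` annihilates `G`), and the error is estimated by the linear-programming form of L1 minimization. Matrix sizes and sparsity levels below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def l1_error_correction(G, y):
    """Decode y = G x + e (e sparse) by L1-minimising the error estimate.

    B spans the left null space of G, so B y = B e; the sparse error is
    recovered from this syndrome by the LP form of L1 minimisation,
    then x is re-estimated from the corrected codeword.
    """
    m, n = G.shape
    Q, _ = np.linalg.qr(G, mode="complete")
    B = Q[:, n:].T                          # rows annihilate G: B @ G == 0
    s = B @ Q.shape[0] * 0 + B @ y          # syndrome; equals B @ e
    # min sum(u + v)  s.t.  B(u - v) = s,  u, v >= 0   (with e = u - v)
    res = linprog(np.ones(2 * m), A_eq=np.hstack([B, -B]), b_eq=s,
                  bounds=[(0, None)] * (2 * m))
    e = res.x[:m] - res.x[m:]
    x_hat, *_ = np.linalg.lstsq(G, y - e, rcond=None)
    return x_hat

# Toy usage: 8 gross errors on a length-120 codeword of a 30-dim message.
rng = np.random.default_rng(2)
G = rng.standard_normal((120, 30))
x = rng.standard_normal(30)
e = np.zeros(120)
e[rng.choice(120, 8, replace=False)] = 5.0 * rng.standard_normal(8)
print(np.linalg.norm(l1_error_correction(G, G @ x + e) - x))
```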
55

Compressed Sensing via Partial L1 Minimization

Zhong, Lu 27 April 2017 (has links)
Reconstructing sparse signals from undersampled measurements is a challenging problem that arises in many areas of data science, such as signal processing, circuit design, optical engineering and image processing. The most natural way to formulate such problems is by searching for sparse, or parsimonious, solutions in which the underlying phenomena can be represented using just a few parameters. Accordingly, a natural way to phrase such problems revolves around L0 minimization, in which the sparsity of the desired solution is controlled by directly counting the number of non-zero parameters. However, due to the nonconvexity and discontinuity of the L0 norm, such optimization problems can be quite difficult. One modern tactic is to leverage convex relaxations, such as exchanging the L0 norm for its convex analog, the L1 norm. However, to guarantee accurate reconstructions for L1 minimization, additional conditions must be imposed, such as the restricted isometry property. Accordingly, in this thesis, we propose a novel extension to current approaches revolving around truncated L1 minimization and demonstrate that such an approach can, in important cases, provide a better approximation of L0 minimization. Considering that the nonconvexity of the truncated L1 norm makes truncated L1 minimization unreliable in practice, we further generalize our method to partial L1 minimization to combine the convexity of L1 minimization and the robustness of L0 minimization. In addition, we provide a tractable iterative scheme via the augmented Lagrangian method to solve both optimization problems. Our empirical study on synthetic data and image data shows encouraging results for the proposed partial L1 minimization in comparison to L1 minimization.
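The thesis solves its problems with an augmented Lagrangian scheme; as a simpler hedged stand-in, the sketch below runs a proximal-gradient (ISTA) loop in which soft thresholding is applied only to entries outside the current top-k support — one plausible reading of a "partial" L1 penalty, in which large coefficients are left unshrunk so the penalty behaves more like L0 on them.

```python
import numpy as np

def partial_l1_ista(A, y, k, lam=0.05, step=None, iters=500):
    """Proximal-gradient sketch of a partial L1 penalty.

    The soft threshold is applied only to entries *outside* the k
    largest in magnitude, so dominant coefficients are not shrunk --
    approximating L0 behaviour while keeping the L1 machinery.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + step * (A.T @ (y - A @ x))            # gradient step
        top = np.argsort(np.abs(z))[-k:]              # protected support
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        x[top] = z[top]                               # no shrinkage on top-k
    return x

# Toy usage against plain measurements of a 6-sparse vector.
rng = np.random.default_rng(5)
A = rng.standard_normal((80, 256)) / np.sqrt(80)
x_true = np.zeros(256)
x_true[rng.choice(256, 6, replace=False)] = rng.standard_normal(6)
print(np.linalg.norm(partial_l1_ista(A, A @ x_true, k=6) - x_true))
```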
56

Time-domain Compressive Beamforming for Medical Ultrasound Imaging

David, Guillaume January 2016 (has links)
Over the past 10 years, Compressive Sensing has gained a lot of visibility in the medical imaging research community. The most compelling feature of Compressive Sensing is its ability to perform perfect reconstructions of under-sampled signals using l1-minimization. Of course, that counter-intuitive feature has a cost: the missing information is compensated for by a priori knowledge of the signal, under certain mathematical conditions. This technology is currently used in some commercial MRI scanners to increase the acquisition rate, decreasing discomfort for the patient while increasing patient turnover. For echography, the applications range from fast 3D echocardiography to simplified, cheaper echography systems. Real-time ultrasound imaging scanners have been available for nearly 50 years. During those 50 years, much has changed in their architecture, electronics, and technologies; however, one component remains present: the beamformer. From analog beamformers to software beamformers, the technology has evolved and brought much diversity to the world of beam formation. Currently, most commercial scanners use several focalized ultrasonic pulses to probe tissue. The time between two consecutive focalized pulses is not compressible, which limits the frame rate: one must wait for a pulse to propagate back and forth between the probe and the deepest point imaged before firing a new pulse. In this work, we outline the development of a novel software beamforming technique that uses Compressive Sensing. Time-domain Compressive Beamforming (t-CBF) uses computational models and regularization to reconstruct de-cluttered ultrasound images. One of the main features of t-CBF is its use of only one transmitted wave to insonify the tissue. Single-wave imaging brings high frame rates to the modality, for example allowing a physician to see precisely the movements of the heart walls or valves during a heart cycle. t-CBF takes into account the geometry of the probe as well as its physical parameters to improve resolution and attenuate artifacts commonly seen in single-wave imaging, such as side lobes. In this thesis, we define a mathematical framework for the beamforming of ultrasonic data compatible with Compressive Sensing. Then, we investigate its capabilities on simple simulations, in terms of resolution and super-resolution. Finally, we adapt t-CBF to real-life ultrasonic data. In particular, we reconstruct 2D cardiac images at a frame rate 100-fold higher than typical values.
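For contrast with t-CBF, here is a minimal sketch of the classical delay-and-sum beamformer for a single image point after one plane-wave transmit — the baseline summation that a compressive formulation replaces with a regularized inverse problem. The array geometry, sampling rate, sound speed, and the placeholder echo data are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(rf, element_x, z, x, c=1540.0, fs=40e6):
    """Classical delay-and-sum value at one image point (x, z).

    rf: (n_elements, n_samples) received echoes after a plane-wave
    transmit at t = 0. For each element, the round-trip delay is the
    plane-wave travel time to depth z plus the return path to the element.
    """
    t_tx = z / c                                        # transmit leg (plane wave)
    t_rx = np.sqrt(z**2 + (element_x - x) ** 2) / c     # receive leg per element
    idx = np.clip(np.round((t_tx + t_rx) * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()        # coherent sum

# Toy usage: 64-element array, 0.3 mm pitch, placeholder (hypothetical) echoes.
elx = (np.arange(64) - 31.5) * 0.3e-3
rf = np.zeros((64, 4096))
print(delay_and_sum(rf, elx, z=30e-3, x=0.0))
```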
57

Compressed sensing applied to weather radar

Mishra, Kumar Vijay 01 July 2015 (has links)
Over the last two decades, dual-polarimetric weather radar has proven to be a valuable instrument providing critical precipitation information through remote sensing of the atmosphere. Modern weather radar systems operate with high sampling rates and long dwell times on targets. Often only limited target information is desired, leading to a pertinent question: could fewer samples have been acquired in the first place? Recently, a revolutionary sampling paradigm, compressed sensing (CS), has emerged, which asserts that it is possible to recover signals from fewer samples or measurements than traditional methods require, without degrading the accuracy of target information. CS methods have recently been applied to point-target radars and imaging radars, resulting in hardware simplification, enhanced resolution, and reduced data processing overheads. But CS applications for volumetric radar targets such as precipitation remain relatively unexamined. This research investigates potential applications of CS to radar remote sensing of precipitation. In general, weather echoes may not be sparse in the space-time or frequency domain. Therefore, CS techniques developed for point targets, such as in aircraft surveillance radar, are not directly applicable to weather radars. However, precipitation samples are highly correlated both spatially and temporally. We therefore adopt recent advances in matrix completion algorithms to demonstrate sparse sensing of weather echoes. Several extensions of this approach are then considered to develop more general CS-based weather radar processing algorithms in the presence of noise, ground clutter and dual-polarimetric data. Finally, a super-resolution approach is presented for the spectral recovery of an undersampled signal when certain frequency information is known.
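A hedged sketch of the matrix-completion ingredient mentioned above, using the standard singular-value-thresholding (SVT) iteration on a toy low-rank matrix as a stand-in for spatially and temporally correlated weather echoes; the threshold and step heuristics follow common practice for SVT, not the thesis.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=None, step=None, iters=300):
    """Singular-value-thresholding sketch for matrix completion.

    Correlated samples give an (approximately) low-rank data matrix, so
    unobserved entries (mask == False) are inferred by shrinking singular
    values while repeatedly re-enforcing the observed entries.
    """
    if tau is None:
        tau = 5.0 * np.sqrt(np.prod(M_obs.shape))   # common heuristic threshold
    if step is None:
        step = 1.2 * M_obs.size / mask.sum()        # step scaled by sampling rate
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt     # shrink singular values
        Y += step * np.where(mask, M_obs - X, 0.0)  # match observed entries
    return X

# Toy usage: recover a rank-2 50x50 matrix from 60% of its entries.
rng = np.random.default_rng(4)
L = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
mask = rng.random((50, 50)) < 0.6
X = svt_complete(np.where(mask, L, 0.0), mask)
print(np.linalg.norm(X - L) / np.linalg.norm(L))
```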
58

Off-the-grid compressive imaging

Ongie, Gregory John 01 August 2016 (has links)
In many practical imaging scenarios, including computed tomography and magnetic resonance imaging (MRI), the goal is to reconstruct an image from a few of its Fourier-domain samples. Many state-of-the-art reconstruction techniques, such as total variation minimization, focus on discrete "on-the-grid" modelling of the problem, both in the spatial domain and the Fourier domain. While such discrete-to-discrete models allow for fast algorithms, they can also result in sub-optimal sampling rates and reconstruction artifacts due to model mismatch. Instead, this thesis presents a framework for "off-the-grid", i.e. continuous-domain, recovery of piecewise smooth signals from an optimal number of Fourier samples. The main idea is to model the edge set of the image as the level-set curve of a continuous-domain band-limited function. Sampling guarantees can be derived for this framework by investigating the algebraic geometry of these curves. This model is put into a robust and efficient optimization framework by posing signal recovery entirely in the Fourier domain as a structured low-rank (SLR) matrix completion problem. An efficient algorithm for this problem is derived, which is an order of magnitude faster than previous approaches for SLR matrix completion. This SLR approach based on off-the-grid modelling shows significant improvement over standard discrete methods in the context of undersampled MRI reconstruction.
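A small numerical illustration of the structured-low-rank idea, in its simplest 1D form (a generic Prony/annihilating-filter fact, not the thesis's specific level-set construction): uniform Fourier samples of K off-grid spikes fill a Hankel matrix of rank exactly K, which is what makes recovery of missing samples a low-rank completion problem.

```python
import numpy as np
from scipy.linalg import hankel

# K spikes at continuous (off-grid) positions in [0, 1).
K, n = 3, 64
locs = np.array([0.123, 0.456, 0.789])
amps = np.array([1.0, -2.0, 0.5])
freqs = np.arange(n)

# Uniform Fourier samples: fhat[j] = sum_k a_k * exp(-2*pi*i*j*loc_k).
fhat = (amps * np.exp(-2j * np.pi * np.outer(freqs, locs))).sum(axis=1)

# Structured (Hankel) matrix built from the samples: H[i, j] = fhat[i + j].
H = hankel(fhat[: n // 2], fhat[n // 2 - 1 :])
svals = np.linalg.svd(H, compute_uv=False)
print(np.round(svals[: K + 2], 6))   # exactly K significant singular values
```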
59

Optimal sensing matrices

Achanta, Hema Kumari 01 December 2014 (has links)
Location information is of extreme importance in every walk of life, ranging from commercial applications, such as location-based advertising and location-aware next-generation communication networks like 5G, to security-based applications, like threat localization and E-911 calling. In indoor and dense urban environments plagued by multipath effects, there is usually a non-line-of-sight (NLOS) scenario preventing GPS-based localization. Wireless localization using sensor networks provides a cost-effective and accurate solution to the wireless source localization problem. Certain sensor geometries show significantly poor performance, even in low-noise scenarios, when triangulation-based localization methods are used. This brings the need for the design of an optimum sensor placement scheme for better performance in the source localization process. The optimum sensor placement is the one that optimizes the underlying Fisher Information Matrix (FIM). This thesis presents a class of canonical optimum sensor placements that produce the optimum FIM for N-dimensional source localization (N greater than or equal to 2), for the case where the source location has a radially symmetric probability density function within an N-dimensional sphere and the sensors all lie on or outside the surface of a concentric outer N-dimensional sphere. While the canonical solution designed for the 2D problem represents optimum spherical codes, the study of the three- and higher-dimensional designs provides great insight into the design of measurement matrices with equal-norm columns that have the smallest possible condition number. Such matrices are of importance in compressed-sensing-based applications. This thesis also presents an optimum sensing matrix design for energy-efficient source localization in 2D. Specifically, the results relate to the worst-case scenario, when the minimum number of sensors is active in the sensor network. We also propose a distributed control law that guides the motion of the sensors on the circumference of the outer circle so that they achieve the optimum sensor placement with minimum communication overhead. The design of equal-norm-column sensing matrices has a variety of other applications apart from optimum sensor placement for N-dimensional source localization. One such application is Fourier analysis in magnetic resonance imaging (MRI). Depending on the method used to acquire the MR image, one can choose an appropriate transform domain, such as the wavelet transform or the Fourier transform, that transforms the MR image into a sparse, compressible image. The inherent sparsity of MR images in an appropriately chosen transform domain motivates another objective of this thesis: to provide a method for designing a compressive sensing measurement matrix by choosing a subset of rows from the discrete Fourier transform (DFT) matrix. This thesis uses the spark of the matrix as the design criterion. The spark of a matrix is defined as the smallest number of linearly dependent columns of the matrix. The objective is to select a subset of rows from the DFT matrix in order to achieve maximum spark. The design procedure leads to an interesting study of coprime conditions relating the chosen row indices to the size of the DFT matrix.
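A brute-force sketch of the spark criterion used in the DFT row-selection design; spark computation is combinatorial, so this is feasible only for tiny matrices. The particular row selection below is an arbitrary assumption, chosen to show that for composite DFT sizes some selections fall short of full spark — the phenomenon behind the coprime conditions mentioned above.

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Brute-force spark: size of the smallest linearly dependent column set.

    Exponential in the number of columns, so only for small design studies.
    """
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1   # all columns independent

# Toy usage: rows selected from an 8x8 DFT matrix (full spark would be 5).
n = 8
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
rows = [0, 1, 2, 5]            # a candidate (hypothetical) row selection
print(spark(F[rows, :]))
```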
60

Turbo Bayesian Compressed Sensing

Yang, Depeng 01 August 2011 (has links)
Compressed sensing (CS) theory specifies a new signal acquisition approach, potentially allowing the acquisition of signals at a much lower data rate than the Nyquist sampling rate. In CS, the signal is not directly acquired but reconstructed from a few measurements. One of the key problems in CS is how to recover the original signal from measurements in the presence of noise. This dissertation addresses signal reconstruction problems in CS. First, a feedback structure and signal recovery algorithm, orthogonal pruning pursuit (OPP), is proposed to exploit prior knowledge in reconstructing the signal in the noise-free situation. To handle noise, a noise-aware signal reconstruction algorithm based on Bayesian Compressed Sensing (BCS) is developed. Moreover, a novel Turbo Bayesian Compressed Sensing (TBCS) algorithm is developed for joint signal reconstruction by exploiting both spatial and temporal redundancy. The TBCS algorithm is then applied to a UWB positioning system, achieving mm-accuracy with low-sampling-rate ADCs. Finally, hardware implementation of BCS signal reconstruction on FPGAs and GPUs is investigated; in particular, parallel Cholesky decomposition, a key computational component of BCS, is implemented on both platforms. Simulation results in software and hardware have demonstrated that OPP and TBCS outperform previous approaches, with UWB positioning accuracy improved by 12.8x. The accelerated computation helps enable real-time application of this work.
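As a hedged sketch of the Bayesian linear-model update at the heart of BCS (the generic relevance-vector-machine posterior, not the dissertation's TBCS algorithm): given a noise precision beta and per-coefficient prior precisions alpha, the posterior mean requires solving a symmetric positive-definite system, and the Cholesky factorization doing that solve is exactly the kernel worth accelerating on FPGAs and GPUs.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def bcs_posterior_mean(A, y, alpha, beta):
    """Posterior mean of x in y = A x + noise under a Gaussian prior.

    Prior: x_i ~ N(0, 1/alpha_i); noise ~ N(0, 1/beta).
    Posterior precision P = beta * A^T A + diag(alpha) is SPD, and
    mu = beta * P^{-1} A^T y -- computed here via a Cholesky solve.
    """
    P = beta * A.T @ A + np.diag(alpha)     # posterior precision (SPD)
    c = cho_factor(P)                       # Cholesky decomposition
    return cho_solve(c, beta * (A.T @ y))   # posterior mean

# Toy usage: a 6-sparse signal measured with mild noise.
rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))
x = np.zeros(200)
x[rng.choice(200, 6, replace=False)] = 1.0
y = A @ x + 0.01 * rng.standard_normal(80)
mu = bcs_posterior_mean(A, y, alpha=np.full(200, 1.0), beta=1e4)
print(np.sort(np.abs(mu).argsort()[-6:]))   # largest posterior-mean entries
```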
