61

Performance Analysis between Two Sparsity Constrained MRI Methods: Highly Constrained Backprojection (HYPR) and Compressed Sensing (CS) for Dynamic Imaging

Arzouni, Nibal 2010 August 1900 (has links)
One of the most important challenges in dynamic magnetic resonance imaging (MRI) is to achieve high spatial and temporal resolution when it is limited by system performance. It is desirable to acquire data fast enough to capture the dynamics in the image time series without losing high spatial resolution and signal-to-noise ratio. Many techniques have been introduced in recent decades to achieve this goal. Newly developed algorithms like Highly Constrained Backprojection (HYPR) and Compressed Sensing (CS) reconstruct images from highly undersampled data using constraints. Using these algorithms, it is possible to achieve high temporal resolution in the dynamic image time series with high spatial resolution and signal-to-noise ratio (SNR). In this thesis we have compared the performance of the HYPR algorithm to that of CS. In assessing reconstructed image quality, we considered computation time, spatial resolution, noise amplification factors, and artifact power (AP), using the same number of views in both algorithms, a number below the Nyquist requirement. In the simulations performed, CS always provides higher spatial resolution than HYPR, but it requires longer reconstruction times and yields lower SNR. HYPR performs better than CS in terms of SNR and computation time when the images are sparse enough. However, HYPR suffers from streaking artifacts on less sparse image data.
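As a rough illustration of two of the figures of merit mentioned in this abstract, the sketch below computes an artifact-power and an SNR value for a reconstruction against a reference image. The exact definitions used in the thesis are not given here, so the formulas and all names are assumptions based on common usage.

```python
import numpy as np

def artifact_power(reference, reconstruction):
    """One common definition: normalized squared error of the image magnitudes."""
    diff = np.abs(reference) - np.abs(reconstruction)
    return np.sum(diff ** 2) / np.sum(np.abs(reference) ** 2)

def snr_db(reference, reconstruction):
    """SNR of the reconstruction relative to the reference, in decibels."""
    noise = reference - reconstruction
    return 10 * np.log10(np.sum(np.abs(reference) ** 2) / np.sum(np.abs(noise) ** 2))

# Toy usage on synthetic data standing in for a reference frame and a HYPR/CS result
rng = np.random.default_rng(0)
reference = rng.standard_normal((64, 64))
reconstruction = reference + 0.05 * rng.standard_normal((64, 64))
print(artifact_power(reference, reconstruction), snr_db(reference, reconstruction))
```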
62

Algorithmes itératifs à faible complexité pour le codage de canal et le compressed sensing / Low-complexity iterative algorithms for channel coding and compressed sensing

Danjean, Ludovic 29 November 2012 (has links) (PDF)
Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are used to decode low-density parity-check (LDPC) codes, a class of error-correcting codes valued for their outstanding error-rate performance. In the more recent field of compressed sensing, iterative algorithms are used as reconstruction methods to recover a sparse signal from a set of linear equations, called observations. This thesis mainly addresses the development of low-complexity iterative algorithms for these two fields: the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm, the Interval-Passing Algorithm (IPA), for compressed sensing.

In the first part of the thesis we consider decoding algorithms for LDPC codes. It is now well known that, despite their excellent decoding performance, LDPC codes exhibit an "error floor" caused by decoding failures of traditional belief-propagation-type decoders. Recently, a new class of low-complexity decoders with better performance in the error-floor region, called finite alphabet iterative decoders (FAIDs), has been proposed. In this manuscript we focus on the problem of selecting good FAID decoders for column-weight-three LDPC codes on the binary symmetric channel. Traditional decoder-selection methods rely on asymptotic techniques such as density evolution, which give no guarantee of good performance on a finite-length code, especially in the error-floor region. We therefore propose a selection method based on knowledge of the topologies harmful to decoding that may be present in a code, using the concept of noisy trapping sets. Simulation results on several codes show that FAID decoders selected with this method perform better in the error-floor region than the belief-propagation decoder.

In the second part we address iterative reconstruction algorithms for compressed sensing. Iterative algorithms have been proposed in this field to reduce the complexity of reconstruction by linear programming. In this thesis we modify and analyze a low-complexity reconstruction algorithm called the IPA, which uses sparse matrices as measurement matrices. In parallel with work done in coding theory, we analyze the reconstruction failures of the IPA and establish their connection with the stopping sets of the binary representation of the sparse measurement matrices. The performance of the IPA makes it a good trade-off between the complexity of reconstruction by $\ell_1$-norm minimization and the very simple verification algorithm.
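The abstract names the Interval-Passing Algorithm without describing it. As a rough illustration of interval-style message passing for a non-negative sparse signal measured with a sparse binary matrix, here is a minimal sketch; the initialization, update rules, and decision rule are assumptions based on the general compressed-sensing literature, not taken from the thesis.

```python
import numpy as np

def interval_passing(A, y, n_iter=50):
    """Sketch of an interval-passing reconstruction of a non-negative sparse
    signal x >= 0 from y = A x, where A is a sparse binary measurement matrix.
    Each edge of the bipartite graph carries a lower/upper interval that the
    messages progressively tighten."""
    m, n = A.shape
    L = np.zeros((m, n))                              # variable-to-measurement lower bounds
    U = np.tile(y.reshape(-1, 1), (1, n)) * A         # initial upper bound: the measurement itself
    Lc = np.zeros((m, n)); Uc = np.zeros((m, n))      # measurement-to-variable messages
    for _ in range(n_iter):
        for c in range(m):                            # measurement-node update
            idx = np.flatnonzero(A[c])
            for v in idx:
                others = idx[idx != v]
                Lc[c, v] = max(0.0, y[c] - U[c, others].sum())
                Uc[c, v] = y[c] - L[c, others].sum()
        for v in range(n):                            # variable-node update
            idx = np.flatnonzero(A[:, v])
            for c in idx:
                others = idx[idx != c]
                L[c, v] = Lc[others, v].max() if others.size else Lc[c, v]
                U[c, v] = Uc[others, v].min() if others.size else Uc[c, v]
    # estimate: tightest upper bound seen by each variable
    return np.array([Uc[np.flatnonzero(A[:, v]), v].min() if A[:, v].any() else 0.0
                     for v in range(n)])

# Toy usage: a 4x6 binary measurement matrix and a 2-sparse non-negative signal
A = np.array([[1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0, 0],
              [0, 0, 0, 1, 1, 1]], dtype=float)
x = np.array([2.0, 0.0, 3.0, 0.0, 0.0, 0.0])
print(interval_passing(A, A @ x))                     # expected to return something close to x
```

In this toy run the zero-valued measurement immediately pins several entries to zero, after which the intervals of the remaining entries tighten to the correct values; this propagation behaviour is what links reconstruction failures to stopping-set-like structures in the measurement matrix.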
63

De l'échantillonnage optimal en grande et petite dimension / On optimal sampling in high and low dimensions

Carpentier, Alexandra 05 October 2012 (has links) (PDF)
During my thesis, I had the chance to learn and work under the supervision of my advisor Rémi, in two fields that are particularly dear to me. I am referring to Bandit Theory and Compressed Sensing. I see them as intimately linked, not through their methods but through their common objective: the optimal sampling of a space. Both focus on ways of sampling a space efficiently: Bandit Theory in low dimension and Compressed Sensing in high dimension. In this dissertation I present most of the work that my co-authors and I have written during the three years of my thesis.
64

Derivative Compressive Sampling with Application to Inverse Problems and Imaging

Hosseini, Mahdi S. 26 August 2010 (has links)
In many practical problems in applied sciences, the features of most interest cannot be observed directly, but have to be inferred from other, observable quantities. In particular, many important data acquisition devices provide access to measurements of the partial derivatives of a feature of interest rather than sensing its values directly. In this case, the feature has to be recovered through integration, which is known to be an ill-posed problem in the presence of noise. Moreover, the problem becomes even less trivial to solve when only a portion of a complete set of partial derivatives is available. In this case, the instability of numerical integration is further aggravated by the loss of information which is necessary to perform the reconstruction in a unique way. As formidable as it may seem, however, the above problem does have a solution in the case when the partial derivatives can be sparsely represented in the range of a linear transform. In this case, the derivatives can be recovered from their incomplete measurements using the theory of compressive sampling (aka compressed sensing), followed by reconstruction of the associated feature/object by means of a suitable integration method. It is known, however, that the overall performance of compressive sampling largely depends on the degree of sparsity of the signal representation, on the one hand, and on the degree of incompleteness of the data, on the other. Moreover, the general rule is that the sparser the signal representation is, the fewer measurements are needed to obtain a useful approximation of the true signal. Thus, two of the most important questions to be addressed in such a case are how incomplete the data can be for the signal reconstruction to remain useful, and what additional constraints/information could be incorporated into the estimation process to improve the quality of reconstruction in the case of extremely under-sampled data. With these questions in mind, the present proposal introduces a way to augment the standard constraints of compressive sampling with additional information related to some natural properties of the signal to be recovered. In particular, in the case when the latter is defined to be the partial derivatives of a multidimensional signal (e.g. an image), such additional information can be derived from some standard properties of the gradient operator. Consequently, the resulting scheme of derivative compressive sampling (DCS) is capable of reliably recovering the signals of interest from far fewer data samples as compared to standard CS. Signal recovery by means of DCS can be used to improve the performance of many important applications, including stereo imaging, interferometry, optical coherence tomography, and many others. In this proposal, we focus mainly on the application of DCS to the problem of phase unwrapping, whose solution is central to all the aforementioned applications. Specifically, it is shown both conceptually and experimentally that DCS-based phase unwrapping outperforms a number of alternative approaches in terms of estimation accuracy. Finally, the proposal lists a number of research questions which need to be answered in order to attach strong theoretical guarantees to the practical success of DCS.
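The overall recipe described here (recover the sparse derivative from incomplete measurements, then integrate) can be illustrated in one dimension with a short sketch. This is not the DCS scheme itself; the plain ISTA solver, the random Gaussian measurement matrix, and the known boundary value are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.zeros(n); x[40:90] = 1.0; x[120:160] = -0.5     # piecewise-constant signal
d = np.diff(x)                                          # its derivative is sparse

m = 60                                                  # incomplete measurements of the derivative
Phi = rng.standard_normal((m, n - 1)) / np.sqrt(m)
y = Phi @ d

# ISTA for min_d 0.5*||y - Phi d||^2 + lam*||d||_1
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
d_hat = np.zeros(n - 1)
for _ in range(500):
    z = d_hat - step * (Phi.T @ (Phi @ d_hat - y))
    d_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

# integrate the recovered derivative (boundary value x[0] assumed known)
x_hat = np.concatenate(([x[0]], x[0] + np.cumsum(d_hat)))
print(np.max(np.abs(x - x_hat)))                        # reconstruction error
```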
65

Variable Splitting as a Key to Efficient Image Reconstruction

Dolui, Sudipto January 2012 (has links)
The reconstruction of digital images from their degraded measurements has always been of central importance in numerous applications of imaging sciences. In real life, acquired imaging data is typically contaminated by various types of degradation phenomena, usually related to the imperfections of image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover a close approximation of the image, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied sciences creates the need for methods of image restoration which are both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework which allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting, as a tool for simplifying complex reconstruction problems by replacing them with a sequence of simpler, and therefore easily solvable, ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches which are currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. As specific applications of practical importance, we consider the problem of reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data as well as the problem of blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to the problem of image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of neighbourhood operation as well as to derive a unifying approach to denoising of imaging data under a variety of different noise scenarios.
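The core idea of variable splitting, replacing one composite problem by an alternation of simpler subproblems, can be sketched with a standard ADMM-style split of the lasso problem. This is a generic illustration of the concept, not the BTS or FCS schemes developed in the thesis; all function names and parameter values are illustrative.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """Variable splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    introduce the auxiliary variable z = x, then alternate an easy quadratic
    step in x, a soft-threshold step in z, and a dual update in u."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A; Atb = A.T @ b
    chol = np.linalg.cholesky(AtA + rho * np.eye(n))        # factor once, reuse every iteration
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z

# Toy usage: recover a 3-sparse vector from noisy compressive measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200))
x0 = np.zeros(200); x0[[5, 50, 120]] = [1.0, -2.0, 0.5]
b = A @ x0 + 0.01 * rng.standard_normal(80)
print(np.flatnonzero(np.abs(lasso_admm(A, b, lam=0.1)) > 0.1))   # support of the estimate
```

The appeal of the split is visible in the structure: the x-step is a linear solve with a pre-factored matrix and the z-step is an elementwise shrinkage, each trivial on its own even though the original composite objective is not.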
66

Compressive Spectral and Coherence Imaging

Wagadarikar, Ashwin Ashok January 2010 (has links)
This dissertation describes two computational sensors that were used to demonstrate applications of generalized sampling of the optical field. The first sensor was an incoherent imaging system designed for compressive measurement of the power spectral density in the scene (spectral imaging). The other sensor was an interferometer used to compressively measure the mutual intensity of the optical field (coherence imaging) for imaging through turbulence. Each sensor made anisomorphic measurements of the optical signal of interest and digital post-processing of these measurements was required to recover the signal. The optical hardware and post-processing software were co-designed to permit acquisition of the signal of interest with sub-Nyquist rate sampling, given the prior information that the signal is sparse or compressible in some basis.

Compressive spectral imaging was achieved by a coded aperture snapshot spectral imager (CASSI), which used a coded aperture and a dispersive element to modulate the optical field and capture a 2D projection of the 3D spectral image of the scene in a snapshot. Prior information of the scene, such as piecewise smoothness of objects in the scene, could be enforced by numerical estimation algorithms to recover an estimate of the spectral image from the snapshot measurement.

Hypothesizing that turbulence between the scene and CASSI would introduce spectral diversity of the point spread function, CASSI's snapshot spectral imaging capability could be used to image objects in the scene through the turbulence. However, no turbulence-induced spectral diversity of the point spread function was observed experimentally. Thus, coherence functions, which are multi-dimensional functions that completely determine optical fields observed by intensity detectors, were considered. These functions have previously been used to image through turbulence after extensive and time-consuming sampling of such functions. Thus, compressive coherence imaging was attempted as an alternative means of imaging through turbulence.

Compressive coherence imaging was demonstrated by using a rotational shear interferometer to measure just a 2D subset of the 4D mutual intensity, a coherence function that captures the optical field correlation between all the pairs of points in the aperture. By imposing a sparsity constraint on the possible distribution of objects in the scene, both the object distribution and the isoplanatic phase distortion induced by the turbulence could be estimated with the small number of measurements made by the interferometer. / Dissertation
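The CASSI measurement described above (coded aperture, dispersion, single 2D detector) can be mimicked with a toy forward model. The sketch below assumes a shift of exactly one detector pixel per spectral band and a binary mask; these details, and all names, are illustrative assumptions rather than the actual instrument model.

```python
import numpy as np

def cassi_forward(cube, mask):
    """Toy CASSI-style measurement: modulate every spectral band with a binary
    coded aperture, shear band k by k pixels along the column axis (dispersion),
    and sum over wavelength onto a single 2-D detector."""
    ny, nx, n_lambda = cube.shape
    detector = np.zeros((ny, nx + n_lambda - 1))
    for k in range(n_lambda):
        coded = cube[:, :, k] * mask          # coded aperture modulation
        detector[:, k:k + nx] += coded        # dispersion by k pixels, then integration
    return detector

# Toy usage: an 8-band spectral cube collapses into one 2-D snapshot
rng = np.random.default_rng(3)
cube = rng.random((32, 32, 8))                # (y, x, wavelength)
mask = (rng.random((32, 32)) > 0.5).astype(float)
snapshot = cassi_forward(cube, mask)
print(snapshot.shape)                         # (32, 39): 2-D data from a 3-D cube
```

Recovering the cube from such a snapshot is the underdetermined inverse problem that the dissertation addresses with sparsity/smoothness priors.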
67

The design of feedback channels for wireless networks : an optimization-theoretic view

Ganapathy, Harish 23 September 2011 (has links)
The fundamentally fluctuating nature of the strength of a wireless link poses a significant challenge when seeking to achieve reliable communication at high data rates. Common sense, supported by information theory, tells us that one can move closer towards achieving higher data rates if the transmitter is provided with a priori knowledge of the channel. Such channel knowledge is typically provided to the transmitter by a feedback channel that is present between the receiver and the transmitter. The quality of information provided to the transmitter is proportional to the bandwidth of this feedback channel. Thus, the design of feedback channels is a key aspect in enabling high data rates. In the past, these feedback channels have been designed locally, on a link-by-link basis. While such an approach can be globally optimal in some cases, in many other cases, this is not true. In this thesis, we identify various settings in wireless networks, some already a part of existing standards, others under discussion in future standards, where the design of feedback channels is a problem that requires global, network-wide optimization. In general, we propose the treatment of feedback bandwidth as a network-wide resource, as the next step en route to achieving Gigabit wireless. Not surprisingly, such a global optimization initiative naturally leads us to the important issue of computational efficiency. Computational efficiency is critical from the point of view of a network provider. A variety of optimization techniques are employed in this thesis to solve the large combinatorial problems that arise in the context of feedback allocation. These include dynamic programming, sub-modular function maximization, convex relaxations and compressed sensing. A naive algorithm to solve these large combinatorial problems would typically involve searching over an exponential number of possibilities to find the optimal feedback allocation. As a general theme, we identify and exploit special application-specific structure to solve these problems optimally with reduced complexity. Continuing this endeavour, we search for more intricate structure that enables us to propose approximate solutions with significantly-reduced complexity. The accompanying analysis of these algorithms studies the inherent trade-offs between accuracy, efficiency and the required structure of the problem. / text
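The feedback-allocation problems mentioned above are combinatorial, and the abstract cites sub-modular function maximization among the tools used to avoid exhaustive search. As a generic illustration of why such structure helps, the sketch below greedily assigns a feedback-bit budget across links under a diminishing-returns rate surrogate; the surrogate, the names, and the parameters are assumptions made for illustration and are not taken from the thesis.

```python
import numpy as np

def greedy_feedback_allocation(gains, total_bits, rate):
    """Assign feedback bits one at a time to the link with the largest marginal
    improvement of the surrogate rate. With diminishing returns (as with
    sub-modular objectives), greedy allocation is a standard low-complexity
    alternative to searching an exponential number of allocations."""
    bits = np.zeros(len(gains), dtype=int)
    for _ in range(total_bits):
        marginal = [rate(g, b + 1) - rate(g, b) for g, b in zip(gains, bits)]
        bits[int(np.argmax(marginal))] += 1
    return bits

# Toy surrogate: the rate loss from quantized channel feedback shrinks as 2^(-b)
rate = lambda g, b: np.log2(1.0 + g * (1.0 - 2.0 ** (-b)))
print(greedy_feedback_allocation(np.array([10.0, 3.0, 1.0]), total_bits=8, rate=rate))
```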
68

Ανακατασκευή θερμικών εικόνων υψηλής ανάλυσης από εικόνες χαμηλής ανάλυσης με τεχνικές compressed sensing / Thermal image super resolution via compressed sensing

Ροντογιάννης, Επαμεινώνδας 10 June 2015 (has links)
This thesis examines resolution enhancement (super resolution) of thermal images using compressed sensing techniques. The images are expressed sparsely with respect to two overcomplete dictionaries (one of low and one of high resolution), and we attempt to construct the high-resolution image. The results of this method are compared with those of techniques that use image registration with subpixel accuracy to achieve super resolution. / This thesis deals with the problem of resolution enhancement (super resolution) of thermal images using compressed sensing methods. We solve the super resolution problem in four stages. First, we seek a sparse representation of a low-resolution image with respect to two statistically learned overcomplete dictionaries (for high- and low-resolution images respectively), and then we use the coefficients of this representation to calculate the high-resolution image. Then, we calculate the high-resolution image using methods requiring multiple low-resolution images aligned with subpixel accuracy (the conventional approach). We compare the results of each method using widely used reconstruction-quality metrics.
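The coupled-dictionary step described in this abstract (code the low-resolution patch, reuse the coefficients with the high-resolution dictionary) can be sketched as follows. The dictionaries here are random stand-ins rather than statistically learned ones, and the greedy sparse coder and all names are assumptions made for illustration.

```python
import numpy as np

def omp(D, y, k):
    """Simple orthogonal matching pursuit: pick k atoms of D that best explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    return support, coeffs

def super_resolve_patch(patch_lr, D_lr, D_hr, k=3):
    """Coupled-dictionary idea: code the low-res patch over the low-res dictionary,
    then synthesize the high-res patch from the same coefficients over the
    high-res dictionary."""
    support, coeffs = omp(D_lr, patch_lr, k)
    return D_hr[:, support] @ coeffs

# Toy dictionaries (in practice they would be learned jointly from training patches)
rng = np.random.default_rng(4)
D_hr = rng.standard_normal((64, 128))          # 8x8 high-res patch atoms
D_lr = rng.standard_normal((16, 128))          # 4x4 low-res patch atoms
patch_lr = D_lr @ (np.eye(128)[:, 7] * 2.0)    # a low-res patch built from atom 7
print(super_resolve_patch(patch_lr, D_lr, D_hr).shape)   # (64,) high-res patch estimate
```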
69

Compressed wavefield extrapolation with curvelets

Lin, Tim T. Y., Herrmann, Felix J. January 2007 (has links)
An explicit algorithm for the extrapolation of one-way wavefields is proposed which combines recent developments in information theory and theoretical signal processing with the physics of wave propagation. Because of excessive memory requirements, explicit formulations for wave propagation have proven to be a challenge in 3-D. By using ideas from "compressed sensing", we are able to formulate the (inverse) wavefield extrapolation problem on small subsets of the data volume, thereby reducing the size of the operators. According to compressed sensing theory, signals can successfully be recovered from an incomplete set of measurements when the measurement basis is incoherent with the representation in which the wavefield is sparse. In this new approach, the eigenfunctions of the Helmholtz operator are recognized as a basis that is incoherent with curvelets, which are known to compress seismic wavefields. By casting the wavefield extrapolation problem in this framework, wavefields can successfully be extrapolated in the modal domain via a computationally cheaper operation. A proof of principle for the "compressed sensing" method is given for wavefield extrapolation in 2-D. The results show that our method is stable and produces results identical to the direct application of the full extrapolation operator.
