1

Compressed sensing for error correction on real-valued vectors

Tordsson, Pontus, January 2019
Compressed sensing (CS) is a relatively new branch of mathematics with interesting applications in signal processing, statistics and computer science. This thesis presents some theory of compressed sensing, which allows us to recover (high-dimensional) sparse vectors from (low-dimensional) compressed measurements by solving the L1-minimization problem. A possible application of CS to error correction is also presented, in which the sparse vector to be recovered is the arbitrary noise itself. Successful sparse recovery by L1-minimization relies on certain properties of rectangular matrices, but these properties are extremely subtle and difficult to verify numerically. Therefore, to get an idea of how sparse (or dense) the errors can be, numerical simulations of error correction were carried out. These simulations show the performance of error correction across various levels of error sparsity and matrix dimensions. It turns out that error correction degrades more slowly for low matrix dimensions than for high matrix dimensions, while for sufficiently sparse errors, high matrix dimensions offer a higher likelihood of guaranteed error correction.
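As a concrete illustration of the decoding scheme this abstract describes, the snippet below shows error correction by L1 minimization in Python. This is a minimal sketch, not the thesis's implementation: the function name l1_decode is hypothetical, and scipy's generic LP solver stands in for a dedicated L1 solver. In the standard CS error-correction setup, y = Ax + e with a tall matrix A and sparse error e; multiplying by a matrix F whose rows span the left null space of A yields a syndrome Fy = Fe that depends only on the error, which is then estimated by L1 minimization.

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Sketch: recover x from y = A @ x + e, with e sparse.

    The rows of F span the left null space of A, so F @ y = F @ e.
    The error e is estimated by min ||e||_1 s.t. F e = F y, written
    as a linear program in the variables [e; t] with t >= |e|.
    """
    m, n = A.shape
    Q, _ = np.linalg.qr(A, mode="complete")
    F = Q[:, n:].T                       # (m - n) x m, satisfies F @ A = 0
    s = F @ y                            # syndrome: depends on e only
    c = np.concatenate([np.zeros(m), np.ones(m)])   # minimize sum(t)
    I = np.eye(m)
    A_ub = np.block([[I, -I], [-I, -I]])            # e - t <= 0, -e - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * m),
                  A_eq=np.hstack([F, np.zeros_like(F)]), b_eq=s,
                  bounds=[(None, None)] * m + [(0, None)] * m)
    e_hat = res.x[:m]
    # With the estimated error removed, x follows by ordinary least squares.
    x_hat, *_ = np.linalg.lstsq(A, y - e_hat, rcond=None)
    return x_hat, e_hat
```

The LP form is the standard reduction of basis pursuit; simulations like those described above would draw random A and sparse e, then compare x_hat against the true x across error sparsity levels and matrix dimensions.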
2

Compressed Sensing via Partial L1 Minimization

Zhong, Lu, 27 April 2017
Reconstructing sparse signals from undersampled measurements is a challenging problem that arises in many areas of data science, such as signal processing, circuit design, optical engineering and image processing. The most natural way to formulate such problems is to search for sparse, or parsimonious, solutions in which the underlying phenomena can be represented using just a few parameters. Accordingly, such problems are naturally phrased as L0 minimization, in which the sparsity of the desired solution is controlled by directly counting the number of non-zero parameters. However, due to the nonconvexity and discontinuity of the L0 norm, such optimization problems can be quite difficult. One modern tactic is to leverage convex relaxations, such as exchanging the L0 norm for its convex analog, the L1 norm. However, to guarantee accurate reconstruction by L1 minimization, additional conditions must be imposed, such as the restricted isometry property. Accordingly, in this thesis, we propose a novel extension of current approaches revolving around truncated L1 minimization and demonstrate that such an approach can, in important cases, provide a better approximation of L0 minimization. Since the nonconvexity of the truncated L1 norm makes truncated L1 minimization unreliable in practice, we further generalize our method to partial L1 minimization, combining the convexity of L1 minimization with the robustness of L0 minimization. In addition, we provide a tractable iterative scheme via the augmented Lagrangian method to solve both optimization problems. Our empirical study on synthetic and image data shows encouraging results for the proposed partial L1 minimization in comparison to L1 minimization.
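To make the truncated-L1 idea concrete, here is a hedged sketch in Python of one common reading of it: the k largest-magnitude entries of the current estimate are excluded from the L1 penalty, so only the remaining entries are shrunk toward zero, and the two steps alternate. The function name partial_l1 is hypothetical, and an off-the-shelf LP solver is used here instead of the augmented Lagrangian scheme the abstract describes.

```python
import numpy as np
from scipy.optimize import linprog

def partial_l1(A, b, k, n_iter=10):
    """Sketch of truncated/partial L1 minimization for A @ x = b.

    Alternates between (1) solving a weighted L1 problem in the
    variables [x; t] with t >= |x| and (2) removing the k largest-
    magnitude entries from the penalty. The first pass uses uniform
    weights, i.e. plain basis pursuit.
    """
    m, n = A.shape
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])   # x - t <= 0, -x - t <= 0
    A_eq = np.hstack([A, np.zeros((m, n))])
    bounds = [(None, None)] * n + [(0, None)] * n
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        c = np.concatenate([np.zeros(n), w])   # minimize sum_i w_i * t_i
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                      A_eq=A_eq, b_eq=b, bounds=bounds)
        x = res.x[:n]
        w = np.ones(n)
        w[np.argsort(-np.abs(x))[:k]] = 0.0    # top-k entries go unpenalized
    return x
```

Zeroing the weights on the presumed support is what distinguishes this from plain L1 minimization: entries believed to be genuinely non-zero are no longer biased toward zero, which is the sense in which truncated L1 can better approximate L0.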
3

Algorithmes gloutons orthogonaux sous contrainte de positivité / Orthogonal greedy algorithms for non-negative sparse reconstruction

Nguyen, Thi Thanh, 18 November 2019
Non-negative sparse approximation arises in many application fields such as biomedical engineering, fluid mechanics, astrophysics, and remote sensing, where the signal or image to be reconstructed is both sparse and non-negative. Some classical sparse algorithms can be straightforwardly adapted to handle non-negativity constraints. By contrast, the non-negative extension of orthogonal greedy algorithms such as OMP and OLS is a challenging issue, since the unconstrained least-squares subproblems are replaced by non-negative least-squares subproblems, which have no closed-form solution. In the literature, non-negative orthogonal greedy (NNOG) algorithms are often considered slow, and some recent works exploit approximate recursive schemes to compensate for this slowness. In this thesis, NNOG algorithms are introduced as heuristic solvers for L0 minimization under non-negativity constraints. The first contribution is to show that this L0 minimization problem is NP-hard. The second contribution is a unified framework for NNOG algorithms together with an exact and fast implementation, in which the non-negative least-squares subproblems are solved by an active-set algorithm with warm-start initialization. The proposed implementation significantly reduces the cost of NNOG algorithms and proves advantageous compared with existing approximate schemes. The third contribution is a unified K-step exact support recovery analysis of NNOG algorithms when the mutual coherence of the dictionary is less than 1/(2K-1). This is the first analysis of this kind.
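For orientation, here is a minimal NNOG-style sketch in Python, an NNOMP-like loop under the assumption of a dictionary D with (ideally unit-norm) columns. The helper name nn_omp is illustrative, and scipy's generic NNLS solver replaces the warm-started active-set implementation proposed in the thesis, so this sketch exhibits exactly the per-iteration cost problem the thesis addresses.

```python
import numpy as np
from scipy.optimize import nnls

def nn_omp(D, y, K):
    """Sketch of a non-negative orthogonal greedy loop (NNOMP-like).

    Each iteration selects the atom most positively correlated with
    the residual, then refits ALL selected coefficients by solving a
    non-negative least-squares subproblem (no closed-form solution,
    unlike the unconstrained case).
    """
    n = D.shape[1]
    support = []
    x = np.zeros(n)
    r = y.copy()
    for _ in range(K):
        corr = D.T @ r
        j = int(np.argmax(corr))
        if corr[j] <= 0 or j in support:   # no new atom can reduce the residual
            break
        support.append(j)
        coef, _ = nnls(D[:, support], y)   # non-negative LS on the support
        x[:] = 0.0
        x[support] = coef
        r = y - D @ x
    return x, support
```

The K-step recovery analysis mentioned above asks when such a loop selects exactly the true K-atom support in its first K iterations; the 1/(2K-1) coherence condition mirrors the classical exact-recovery threshold known for unconstrained OMP.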
