1. Compressed wavefield extrapolation with curvelets. Lin, Tim T. Y.; Herrmann, Felix J. January 2007.
An explicit algorithm for the extrapolation of one-way wavefields is proposed which combines recent developments in information theory and theoretical signal processing with the physics of wave propagation. Because of excessive memory requirements, explicit formulations for wave propagation have proven to be a challenge in 3-D. By using ideas from "compressed sensing", we are able to formulate the (inverse) wavefield extrapolation problem on small subsets of the data volume, thereby reducing the size of the operators. According to compressed sensing theory, signals can successfully be recovered from an incomplete set of measurements when the measurement basis is incoherent with the representation in which the wavefield is sparse. In this new approach, the eigenfunctions of the Helmholtz operator are recognized as a basis that is incoherent with curvelets, which are known to compress seismic wavefields. By casting the wavefield extrapolation problem in this framework, wavefields can successfully be extrapolated in the modal domain via a computationally cheaper operation. A proof of principle for the "compressed sensing" method is given for wavefield extrapolation in 2-D. The results show that our method is stable and produces results identical to those obtained by direct application of the full extrapolation operator.
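For readers unfamiliar with the recovery step, the following is a minimal sketch of compressed-sensing recovery in the spirit described above: a signal that is sparse in a fixed transform is recovered from a small set of measurements taken in an incoherent basis by solving a sparsity-promoting problem. The discrete cosine transform stands in for the curvelet transform and point sampling stands in for the modal-domain measurements; both substitutions, and all parameter values, are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative compressed-sensing recovery, not the thesis implementation:
# a signal sparse in a DCT basis (a stand-in for curvelets) is recovered from
# a small random subset of point samples via iterative soft-thresholding (ISTA).
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                      # signal length, measurements, sparsity

# Sparse coefficients in the DCT domain and the corresponding signal.
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
signal = idct(coeffs, norm="ortho")

# Measurement operator: a random row subset of the identity (point samples),
# which is incoherent with the DCT basis.
rows = rng.choice(n, m, replace=False)
y = signal[rows]

def forward(c):      # coefficients -> measurements
    return idct(c, norm="ortho")[rows]

def adjoint(r):      # measurements -> coefficient domain
    full = np.zeros(n)
    full[rows] = r
    return dct(full, norm="ortho")

# ISTA: minimise 0.5*||A c - y||^2 + lam*||c||_1.
lam, step = 1e-3, 1.0
c = np.zeros(n)
for _ in range(500):
    c = c + step * adjoint(y - forward(c))
    c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)

print("relative error:", np.linalg.norm(c - coeffs) / np.linalg.norm(coeffs))
```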
2. Sensing dictionary construction for orthogonal matching pursuit algorithm in compressive sensing. Li, Bo. October 1900.
In compressive sensing, the fundamental problem is to reconstruct a sparse signal from an insufficient set of non-adaptive linear measurements. Besides the reconstruction algorithm, the measurement matrix, or measurement dictionary, plays an important role in sparse signal recovery. The Orthogonal Matching Pursuit (OMP) algorithm, which is widely used in compressive sensing, is especially sensitive to the measurement dictionary: a measurement dictionary with a small restricted isometry constant or coherence improves its performance. From the measurement dictionary, a sensing dictionary can be constructed and incorporated into the OMP algorithm. In this thesis, two methods are proposed to design the sensing dictionary. In the first method, the sensing dictionary design problem is formulated as a linear programming problem; the solution is unique and can be obtained by a standard linear programming method such as the primal-dual interior-point method. The major drawback of the linear-programming-based method is its high computational complexity. The second method is termed the sensing dictionary design algorithm. In this algorithm, each atom of the sensing dictionary is designed independently to reduce the maximal magnitude of its inner products with the measurement dictionary. Compared with the linear-programming-based method, the proposed sensing dictionary design algorithm has low computational complexity and similar performance. Simulation results indicate that both the linear-programming-based method and the proposed sensing dictionary design algorithm can produce sensing dictionaries with small mutual coherence and cumulative coherence. When the designed sensing dictionary is applied to the OMP algorithm, the performance of OMP improves. / Master of Science in Electrical and Computer Engineering (MSECE)
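To make the role of the sensing dictionary concrete, below is a minimal sketch of OMP in which atom selection is driven by a sensing dictionary S while the least-squares fit uses the measurement dictionary D, which is one common way a sensing dictionary is incorporated into OMP. The random Gaussian dictionary, the placeholder choice S = D, and all dimensions are illustrative assumptions; a designed sensing dictionary would replace S.

```python
# A minimal sketch of OMP with a separate sensing dictionary: atoms are
# selected by correlating the residual with the sensing dictionary S, while
# the least-squares fit uses the measurement dictionary D. The random D and
# the simple choice S = D below are placeholders, not the thesis's designs.
import numpy as np

def omp_with_sensing_dictionary(D, S, y, k):
    """Recover a k-sparse coefficient vector x such that y ~= D x."""
    n_atoms = D.shape[1]
    residual = y.copy()
    support = []
    x = np.zeros(n_atoms)
    for _ in range(k):
        # Selection step uses the sensing dictionary.
        correlations = np.abs(S.T @ residual)
        correlations[support] = -np.inf           # do not reselect atoms
        support.append(int(np.argmax(correlations)))
        # Projection step uses the measurement dictionary.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D[:, support] @ coeffs
    return x, support

# Toy usage with a random Gaussian measurement dictionary.
rng = np.random.default_rng(1)
m, n, k = 40, 120, 5
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
S = D                                             # plain OMP; replace with a designed sensing dictionary
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = D @ x_true
x_hat, supp = omp_with_sensing_dictionary(D, S, y, k)
print("support recovered:", sorted(supp) == sorted(np.flatnonzero(x_true).tolist()))
```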
3. Koherencí řízená holografická mikroskopie v opticky rozptylujících prostředích / Coherence-controlled holographic microscopy in diffuse media. Lošťák, Martin. January 2015.
This thesis deals with imaging through diffuse media using the coherence-controlled holographic microscope (CCHM) developed at IPE FME BUT. The mutual coherence function is calculated, as well as the dependence of the signal on the lateral mutual shift between the two arms of the CCHM; the two functions are related to each other, and the latter dependence is measured experimentally. The principle of imaging through diffuse media with the CCHM, using both ballistic and diffuse light, is explained by a simple geometrical model. This model is then verified experimentally by imaging a sample through a diffuse medium. The point spread function (PSF) of the CCHM for imaging through diffuse media is then calculated, and the results of the PSF calculation are confirmed experimentally.
4. Block-sparse models in multi-modality: application to the inverse model in EEG/MEG / Des modèles bloc-parcimonieux en multi-modalité : application au problème inverse en EEG/MEG. Afdideh, Fardin. 12 October 2018.
Many natural phenomena are too complex to be fully characterised by a single measuring instrument or a single modality. The research field of multi-modality has therefore emerged to better identify the rich characteristics of a multi-property natural phenomenon by jointly analysing the data collected from the individual modalities, which are in some sense complementary. In our study, the multi-property phenomenon of interest is human brain activity, and we aim to localise it better by means of its electromagnetic properties, which are measurable non-invasively. In neurophysiology, electroencephalography (EEG) and magnetoencephalography (MEG) are common means of measuring the electrical and magnetic properties of brain activity. Our real-world application, the EEG/MEG source reconstruction problem, is a fundamental problem in neuroscience, with uses ranging from cognitive science to neuropathology and surgical planning. Since the EEG/MEG source reconstruction problem can be reformulated as an underdetermined system of linear equations, the solution (the estimated brain source activity) must be sufficiently sparse to be recovered uniquely. The required amount of sparsity is determined by the so-called recovery conditions. However, in high-dimensional problems the conventional recovery conditions are extremely strict. By clustering the coherent columns of a dictionary, a more incoherent structure can be obtained. This strategy is proposed as a block structure identification framework, resulting in the automatic segmentation of the brain source space without using any information about the brain source activity or the EEG/MEG signals. Although the resulting block-structured dictionary is less coherent, the conventional coherence-based recovery condition no longer applies. To address this challenge, a general framework of block-sparse exact recovery conditions, comprising three theoretical conditions and one algorithm-dependent condition, is proposed. Finally, we studied the EEG/MEG multi-modality and showed that, by combining the two modalities, more refined brain regions emerge. / Three main challenges are addressed in this thesis, in three chapters. The first challenge concerns the ineffectiveness of some classic methods in high-dimensional problems. It is partially addressed through the idea of clustering the coherent parts of a dictionary based on the proposed characterisation, in order to create more incoherent atomic entities in the dictionary, which is proposed as a block structure identification framework; the more incoherent the atomic entities, the greater the improvement in the exact recovery conditions. In addition, we applied this clustering idea to real-world EEG/MEG leadfields to segment the brain source space, without using any information about the brain source activity or the EEG/MEG signals. The second challenge arises when classic recovery conditions cannot be established for the new concept of constraint, i.e., block-sparsity. Therefore, as the second research orientation, we developed a general framework for block-sparse exact recovery conditions, i.e., four theoretical conditions and one algorithm-dependent condition, which ensure the uniqueness of the block-sparse solution of the corresponding weighted mixed-norm optimisation problem in an underdetermined system of linear equations. The generality of the framework lies in the properties of the underdetermined system of linear equations, the extracted dictionary characterisations, the optimisation problems, and ultimately the recovery conditions. Finally, the combination of different information about the same phenomenon is the subject of the third challenge, which is addressed in the last part of the dissertation with application to brain source space segmentation. More precisely, we showed that by combining the EEG and MEG leadfields, and thereby exploiting the electromagnetic properties of the head, more refined brain regions appear.
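As a rough illustration of the clustering idea described above, the sketch below groups coherent columns of a dictionary into blocks using off-the-shelf agglomerative clustering on a coherence-based distance. The random dictionary, the distance, the clustering method, and the number of blocks are all assumptions made for illustration; they are not the characterisation or the block structure identification framework developed in the thesis, where the columns would be real EEG/MEG leadfield vectors.

```python
# A minimal sketch of grouping coherent dictionary columns into blocks with
# agglomerative clustering on a coherence-based distance. This is a stand-in
# illustration only, not the thesis's block structure identification framework.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
m, n = 60, 200
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)                  # unit-norm columns (atoms)

# Pairwise coherence |<d_i, d_j>| and a corresponding distance matrix.
G = np.abs(D.T @ D)
dist = 1.0 - G
np.fill_diagonal(dist, 0.0)

# Agglomerative clustering: the atoms most coherent with each other are
# merged into the same block.
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=20, criterion="maxclust")    # e.g. 20 blocks
blocks = [np.flatnonzero(labels == b) for b in np.unique(labels)]

# Inter-block coherence can then be summarised, e.g. by the largest inner
# product between atoms of two different blocks.
inter = G[np.ix_(blocks[0], blocks[1])].max()
print("number of blocks:", len(blocks))
print("max coherence between first two blocks:", round(float(inter), 3))
```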
5. Algorithmes gloutons orthogonaux sous contrainte de positivité / Orthogonal greedy algorithms for non-negative sparse reconstruction. Nguyen, Thi Thanh. 18 November 2019.
Many application fields lead to inverse problems in which the signal or image to be reconstructed is both sparse and non-negative. While the structure of some sparse reconstruction algorithms can be directly adapted to handle non-negativity constraints, this is not the case for orthogonal greedy algorithms such as OMP and OLS. Their non-negative extension raises implementation issues, because the non-negative least squares subproblems to be solved do not have a closed-form solution. In the literature, non-negative orthogonal greedy (NNOG) algorithms are often considered slow, and recently proposed implementations exploit approximate recursive schemes to compensate for this slowness. In this manuscript, NNOG algorithms are viewed as heuristics for solving the L0 minimisation problem under non-negativity constraints. The first contribution is to show that this problem is NP-hard. Second, we give a unified overview of NNOG algorithms and propose an exact and fast implementation based on the active-set method with warm start for solving the non-negative least squares subproblems. This implementation considerably reduces the cost of NNOG algorithms and proves advantageous compared with existing approximate schemes. The third contribution is an analysis of exact K-step recovery of the support of a K-sparse representation by NNOG algorithms when the mutual coherence of the dictionary is lower than 1/(2K-1). It is the first analysis of this kind. / Non-negative sparse approximation arises in many application fields such as biomedical engineering, fluid mechanics, astrophysics, and remote sensing. Some classical sparse algorithms can be straightforwardly adapted to deal with non-negativity constraints. On the contrary, the non-negative extension of orthogonal greedy algorithms is a challenging issue, since the unconstrained least squares subproblems are replaced by non-negative least squares subproblems which do not have closed-form solutions. In the literature, non-negative orthogonal greedy (NNOG) algorithms are often considered to be slow. Moreover, some recent works exploit approximate schemes to derive efficient recursive implementations. In this thesis, NNOG algorithms are introduced as heuristic solvers dedicated to L0 minimization under non-negativity constraints. It is first shown that the latter L0 minimization problem is NP-hard. The second contribution is to propose a unified framework for NNOG algorithms together with an exact and fast implementation, in which the non-negative least squares subproblems are solved using the active-set algorithm with warm start initialisation. The proposed implementation significantly reduces the cost of NNOG algorithms and appears to be more advantageous than existing approximate schemes. The third contribution consists of a unified K-step exact support recovery analysis of NNOG algorithms when the mutual coherence of the dictionary is lower than 1/(2K-1). This is the first analysis of this kind.
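For orientation, the following is a minimal sketch of a non-negative orthogonal greedy iteration: the atom with the largest positive correlation with the residual is selected, and the coefficients on the current support are updated by solving a non-negative least squares (NNLS) subproblem. The off-the-shelf scipy.optimize.nnls call solves each subproblem from scratch and therefore does not reflect the active-set warm-start implementation proposed in the thesis; the random dictionary and all dimensions are illustrative assumptions.

```python
# A minimal sketch of non-negative OMP: atom selection by largest positive
# correlation, followed by a non-negative least squares subproblem on the
# current support. Reference illustration only; no warm-start active set.
import numpy as np
from scipy.optimize import nnls

def nn_omp(D, y, k):
    """Greedy non-negative sparse approximation y ~= D x with x >= 0."""
    n_atoms = D.shape[1]
    x = np.zeros(n_atoms)
    support = []
    residual = y.copy()
    for _ in range(k):
        corr = D.T @ residual
        corr[support] = -np.inf                  # do not reselect atoms
        j = int(np.argmax(corr))
        if corr[j] <= 0:                         # no atom can reduce the residual
            break
        support.append(j)
        coeffs, _ = nnls(D[:, support], y)       # non-negative LS subproblem
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D[:, support] @ coeffs
    return x, support

# Toy usage with a non-negative 5-sparse ground truth.
rng = np.random.default_rng(3)
m, n, k = 50, 150, 5
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 2.0, k)
y = D @ x_true
x_hat, supp = nn_omp(D, y, k)
print("support recovered:", sorted(supp) == sorted(np.flatnonzero(x_true).tolist()))
```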