31

Methods for ℓp/TVp Regularized Optimization and Their Applications in Sparse Signal Processing

Yan, Jie 14 November 2014 (has links)
Exploiting signal sparsity has recently received considerable attention in a variety of areas including signal and image processing, compressive sensing, and machine learning. Many of these applications involve optimization models that are regularized by sparsity-promoting metrics. The two most popular regularizers are based on the ℓ1 norm, which approximates the sparsity of vectorized signals, and the total variation (TV) norm, which serves as a measure of the gradient sparsity of an image. Nevertheless, the ℓ1 and TV terms are merely two representative measures of sparsity. To explore the matter further, this thesis investigates relaxations of the regularizers to nonconvex terms such as the ℓp and TVp "norms" with 0 ≤ p < 1. The contributions of the thesis are twofold. First, several methods are proposed to approach globally optimal solutions of the related nonconvex problems for improved signal/image reconstruction quality. Most algorithms studied in the thesis fall into the category of iterative reweighting schemes, in which a nonconvex problem is reduced to a series of convex subproblems. In this regard, the second main contribution concerns the computational complexity of the ℓ1/TV-regularized methodology, for which accelerated algorithms are developed. Along with these investigations, new techniques are proposed to address practical implementation issues, including an ℓp-related solver that is easily parallelizable and a matrix-based analysis that facilitates the implementation of TV-related optimizations. Computer simulations demonstrate the merits of the proposed models and algorithms, as well as their applications to general linear inverse problems in signal and image denoising, sparse signal representation, compressive sensing, and compressive imaging. / Graduate
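The iterative reweighting scheme mentioned in the abstract reduces a nonconvex ℓp-regularized problem to a sequence of convex weighted-ℓ1 subproblems. Below is a minimal sketch of that idea for ℓp-regularized least squares, assuming an ISTA-style solver for each subproblem; the function names, the smoothing parameter eps, and the step choices are illustrative, not taken from the thesis:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the prox of the weighted l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def irl1_lp(A, b, lam, p=0.5, eps=1e-3, outer=10, inner=200):
    """Approach min_x 0.5*||Ax-b||^2 + lam*||x||_p^p by iterative reweighting:
    each outer pass solves a weighted-l1 subproblem with ISTA."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(outer):
        w = p / (np.abs(x) + eps) ** (1.0 - p)   # reweighting from current iterate
        for _ in range(inner):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam * w / L)
    return x

# toy usage: recover a sparse vector from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 30, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = irl1_lp(A, b, lam=0.05)
```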
32

Méthodes de reconstruction d'images à partir d'un faible nombre de projections en tomographie par rayons x / X-ray CT Image Reconstruction from Few Projections

Wang, Han 24 October 2011 (has links)
To improve the safety (lower dose) and the productivity (faster acquisition) of an X-ray CT system, we want to reconstruct a high-quality image from a small number of projections. Classical reconstruction algorithms generally fail in this setting: the reconstruction procedure is unstable and suffers from artifacts. The "Compressed Sensing" (CS) approach supposes that the unknown image is in some sense "sparse" or "compressible" and reconstructs it through a nonlinear optimization problem (TV/ℓ1 minimization) that promotes sparsity. Using the pixel/voxel as the representation basis, applying the CS framework in CT usually requires a "sparsifying" transform, combined with the "X-ray projector" applied to the pixelized image. In this thesis, we adapt a "CT-friendly" radial basis of the Gaussian family, called "blob", to the CS-CT framework. It has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (on a GPU platform, for example). Compared to the classical Kaiser-Bessel blob, the new basis has a multiscale structure: an image is the sum of dilated and translated radial Mexican hat functions. Typical medical images are compressible in this basis, so the separate sparse representation system used in ordinary CS algorithms is no longer needed. Simulations (2D) show that the existing TV/ℓ1 algorithms are more efficient and the reconstructions have better visual quality than the equivalent approach based on the pixel/wavelet basis. The new approach has also been validated on experimental data (2D), where we observed that the number of projections can in general be reduced by about 50% without compromising image quality.
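The multiscale blob basis described above represents an image as a sum of dilated and translated radial Mexican hat functions. A small sketch of such a synthesis, with an illustrative dyadic grid and normalization (in CS-CT the coefficients would instead be fit to projection data):

```python
import numpy as np

def mexican_hat_2d(x, y, cx, cy, scale):
    """Radial Mexican hat (negative Laplacian of a Gaussian) centred at (cx, cy)."""
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) / scale ** 2
    return (1.0 - r2) * np.exp(-r2 / 2.0)

# synthesize an image as a sum of translated/dilated blobs on dyadic grids
n = 128
yy, xx = np.mgrid[0:n, 0:n].astype(float)
image = np.zeros((n, n))
rng = np.random.default_rng(1)
for scale in (2.0, 4.0, 8.0):                # dyadic scales of the family
    step = int(2 * scale)                    # coarser grid at coarser scales
    for cx in range(0, n, step):
        for cy in range(0, n, step):
            image += rng.standard_normal() * mexican_hat_2d(xx, yy, cx, cy, scale)
```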
33

Spatially Regularized Spherical Reconstruction: A Cross-Domain Filtering Approach for HARDI Signals

Salgado Patarroyo, Ivan Camilo 29 August 2013 (has links)
Despite the immense advances of science and medicine in recent years, several aspects of the physiology and anatomy of the human brain are yet to be discovered and understood. A particularly challenging area in the study of human brain anatomy is that of brain connectivity, which describes the intricate means by which different regions of the brain interact with each other. The study of brain connectivity depends deeply on understanding the organization of white matter. The latter consists predominantly of bundles of myelinated axons, which serve as connecting pathways between approximately 10¹¹ neurons in the brain. Consequently, the delineation of fine anatomical details of white matter represents a highly challenging objective, and it is still an active area of research in neuroimaging and neuroscience in general. Recent advances in medical imaging have resulted in a quantum leap in our understanding of brain anatomy and functionality. In particular, the advent of diffusion magnetic resonance imaging (dMRI) has provided researchers with a non-invasive means to infer information about the connectivity of the human brain. In a nutshell, dMRI is a set of imaging tools which aim at quantifying the process of water diffusion within the human brain to delineate the complex structural configurations of the white matter. Among the existing tools of dMRI, high angular resolution diffusion imaging (HARDI) offers a desirable trade-off between reconstruction accuracy and practical feasibility. In particular, HARDI excels in its ability to delineate complex directional patterns of the neural pathways throughout the brain, while remaining feasible for many clinical applications. Unfortunately, HARDI presents a fundamental trade-off between its ability to discriminate crossings of neural fiber tracts (i.e., its angular resolution) and the signal-to-noise ratio (SNR) of its associated images. Consequently, given that angular resolution is of fundamental importance in the context of dMRI reconstruction, there is a need for effective algorithms for de-noising HARDI data. In this regard, the most effective de-noising approaches have been observed to be those which exploit both the angular and the spatial-domain regularity of HARDI signals. Accordingly, in this thesis, we propose a formulation of the problem of reconstruction of HARDI signals which incorporates regularization assumptions on both their angular and their spatial domains, while leading to a particularly simple numerical implementation. Experimental evidence suggests that the resulting cross-domain regularization procedure outperforms many other state-of-the-art HARDI de-noising methods. Moreover, the proposed implementation replaces the original reconstruction problem with a sequence of efficient filters which can be executed in parallel, suggesting its computational advantages over alternative implementations.
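As an illustration of the cross-domain idea, consider alternating a spatial filter (applied per gradient direction) with an angular filter (mixing nearby sampling directions on the sphere). The Gaussian kernels and weights below are illustrative assumptions, not the filters derived in the thesis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def cross_domain_filter(S, dirs, sigma_space=1.0, angular_sigma=0.3, iters=3):
    """Alternate spatial and angular smoothing of a HARDI slice.
    S: (X, Y, K) diffusion signal, dirs: (K, 3) unit gradient directions."""
    # angular weights from geodesic proximity of directions (antipodal symmetry)
    cos = np.clip(np.abs(dirs @ dirs.T), 0.0, 1.0)
    W = np.exp(-(np.arccos(cos) ** 2) / (2 * angular_sigma ** 2))
    W /= W.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # spatial pass: separable Gaussian along each image axis, per direction
        S = gaussian_filter1d(S, sigma_space, axis=0)
        S = gaussian_filter1d(S, sigma_space, axis=1)
        # angular pass: weighted average over neighbouring directions
        S = S @ W.T
    return S
```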
35

Priklausomų atsitiktinių dydžių sumų aproksimavimas puasono tipo matais / Poisson type approximations for sums of dependent variables

Petrauskienė, Jūratė 07 March 2011 (has links)
Our aim is to investigate Poisson type approximations to sums of dependent integer-valued random variables. In this thesis, only one type of dependence is considered, namely m-dependent random variables. The accuracy of approximation is measured in the total variation, local, uniform (Kolmogorov) and Wasserstein metrics. The results can be divided into four parts. The first part is devoted to 2-runs, when p_i = p. We generalize Theorem 5.2 from A.D. Barbour and A. Xia, "Poisson perturbations", in two directions: by estimating the second-order asymptotic expansion and the asymptotic expansion in the exponent. Moreover, lower-bound estimates are established, proving the optimality of the upper-bound estimates. Since the method of proof does not yield small constants, in certain cases we calculate asymptotically sharp constants. In the second part, we consider sums of 1-dependent random variables concentrated on the nonnegative integers and satisfying an analogue of Franken's condition. All results of this part are comparable to the known results for independent summands. In the third part, we consider Poisson type approximations for sums of 1-dependent symmetric three-point distributions. We are unaware of any Poisson-type approximation result for dependent random variables in which the symmetry of the distribution is taken into account. In the last part, we consider 1-dependent non-identically distributed Bernoulli random variables. It is shown that even for this simple... [to full text]
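As a numerical illustration of the object studied in the first part, one can estimate the total variation distance between the law of the 2-runs count and a mean-matched Poisson law by simulation. The definition of the 2-runs statistic below (the number of adjacent pairs of successes in an i.i.d. Bernoulli sequence, a sum of 1-dependent indicators) is the standard one and is an assumption about the thesis's setup:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n, p, trials = 200, 0.05, 100_000
xi = rng.random((trials, n)) < p
runs = np.logical_and(xi[:, :-1], xi[:, 1:]).sum(axis=1)   # number of 2-runs

lam = (n - 1) * p * p                      # match the mean of the 2-runs count
kmax = runs.max() + 1
emp = np.bincount(runs, minlength=kmax) / trials
pk = poisson.pmf(np.arange(kmax), lam)
# total variation: half the l1 gap over 0..kmax-1 plus the Poisson tail mass
tv = 0.5 * (np.abs(emp - pk).sum() + poisson.sf(kmax - 1, lam))
print(f"estimated d_TV(2-runs law, Poisson({lam:.3f})) ~ {tv:.4f}")
```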
36

On continuous maximum flow image segmentation algorithm

Marak, Laszlo 28 March 2012 (has links) (PDF)
In recent years, with advances in computing equipment and image acquisition techniques, the sizes, dimensions and content of acquired images have increased considerably. Unfortunately, there is a steadily widening gap between classical and parallel programming paradigms and the performance they actually achieve on modern computer hardware. In this thesis we consider in depth one particular algorithm, the continuous maximum flow computation. We review in detail why this algorithm is useful and interesting, and we propose efficient and portable implementations on various architectures. We also examine how it performs in terms of segmentation quality on some recent problems of materials science and nano-scale biology.
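A minimal sketch of a continuous maximum flow iteration for two-label segmentation on a 2D grid, in the spirit of projected primal-dual schemes for the continuous min-cut relaxation; the step size, the periodic boundary handling via np.roll, and the fixed iteration count are illustrative assumptions rather than the thesis's implementation:

```python
import numpy as np

def continuous_max_flow(g, source_mask, sink_mask, tau=0.15, iters=2000):
    """Two-label continuous max-flow / min-cut on a 2D grid.
    g: metric (capacity) field, small where object edges are likely;
    u converges towards an indicator of the segmented region."""
    u = np.zeros_like(g)
    u[source_mask], u[sink_mask] = 1.0, 0.0
    px = np.zeros_like(g)                     # flow field components
    py = np.zeros_like(g)
    for _ in range(iters):
        # ascent on the flow with grad u, then capacity projection |p| <= g
        px += tau * (np.roll(u, -1, axis=1) - u)
        py += tau * (np.roll(u, -1, axis=0) - u)
        norm = np.maximum(1.0, np.hypot(px, py) / np.maximum(g, 1e-12))
        px /= norm
        py /= norm
        # descent on u with the flow divergence, then re-impose constraints
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u = np.clip(u + tau * div, 0.0, 1.0)
        u[source_mask], u[sink_mask] = 1.0, 0.0
    return u

# toy usage: segment a bright disc from small seeds
n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
g = np.exp(-4.0 * np.hypot(*np.gradient(img)))   # low capacity across edges
src = (xx - 32) ** 2 + (yy - 32) ** 2 < 5 ** 2
snk = np.zeros((n, n), bool); snk[0, :] = snk[-1, :] = True
u = continuous_max_flow(g, src, snk)
```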
37

Morphological and statistical techniques for the analysis of 3D images

Meinhardt Llopis, Enric 03 March 2011 (has links)
This thesis proposes a tree data structure to encode the connected components of the level sets of a 3D image. This structure is the basic tool for several proposed applications: 3D morphological operators, medical image visualization, analysis of color histograms, object tracking in video, and edge detection. Motivated by the problem of edge completion, the thesis also contains a study of how anisotropic total variation denoising can be used to compute Cheeger sets in anisotropic metrics. These anisotropic Cheeger sets can be used to find global optima of a class of edge-linking functionals. They are also related to certain affine invariant descriptors used in object recognition, and this relationship is laid out explicitly.
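A naive way to see the underlying object, the connected components of the upper level sets and their nesting across grey levels, is to threshold and label the image at every level; a real component tree (max-tree) is built in near-linear time with union-find rather than with this O(levels x pixels) loop:

```python
import numpy as np
from scipy.ndimage import label

def upper_level_components(image):
    """Enumerate connected components of the upper level sets {image >= t}.
    The nesting of these components across t is what a component tree encodes."""
    comps = {}
    for t in np.unique(image):
        labels, n = label(image >= t)      # 4-connectivity by default in 2D
        comps[t] = (labels, n)
    return comps

# toy usage on a small 2D slice (the thesis works with 3D volumes)
img = np.array([[0, 1, 1, 0],
                [0, 2, 1, 0],
                [0, 0, 0, 3]])
for t, (_, n) in upper_level_components(img).items():
    print(f"level {t}: {n} connected component(s)")
```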
38

Contributions to image restoration : from numerical optimization strategies to blind deconvolution and shift-variant deblurring / Contributions pour la restauration d'images : des stratégies d'optimisation numérique à la déconvolution aveugle et à la correction de flous spatialement variables

Mourya, Rahul Kumar 01 February 2016 (has links)
Degradation of images during the acquisition process is inevitable: images suffer from blur and noise. With advances in technology and computational tools, these degradations can be corrected to a significant extent; however, the quality of acquired images is still inadequate for many applications. This calls for the development of more sophisticated digital image restoration tools. This thesis is a contribution to image restoration. It is divided into five chapters, each including a detailed discussion of a different aspect of image restoration. The thesis starts with a generic overview of imaging systems and points out the degradations that can occur in images, together with their fundamental causes. In some cases the blur can be considered stationary throughout the field of view and can then simply be modeled as a convolution. In many practical cases, however, the blur varies throughout the field of view, and modeling it requires a compromise between accuracy and computational effort. The first part of the thesis presents a detailed discussion of the modeling of shift-variant blur and its fast approximations, and then describes a generic image formation model. Subsequently, the thesis shows how an image restoration problem can be seen as a Bayesian inference problem, and how it then turns into a large-scale numerical optimization problem. The second part therefore considers a generic optimization problem applicable to many domains, and proposes a class of new optimization algorithms for solving inverse problems in imaging. The proposed algorithms are as fast as the state-of-the-art algorithms (verified by several numerical experiments) but remove the difficulty of algorithm-specific parameter tuning, which is a great relief for users. The third part presents an in-depth discussion of the shift-invariant blind image deblurring problem (joint estimation of a shift-invariant blur and a sharper image), suggests different ways to reduce its ill-posedness, and then proposes a blind deblurring method for astronomical images based on a decomposition of the image into point sources and extended sources, alternating between image restoration and blur estimation steps. The restoration results on synthetic astronomical scenes are promising, suggesting that the proposed method is a good candidate for astronomical applications after certain modifications and improvements. The last part of the thesis extends the shift-variant blur models of the first part towards practical implementation. It gives a detailed description of a flexible approximation of shift-variant blur, with its implementation aspects and computational cost, presents a shift-variant image deblurring method illustrated on synthetically blurred images, and then shows how the characteristics of shift-variant blur due to optical aberrations can be exploited for PSF estimation. A PSF calibration method is described for a simple experimental camera suffering from optical aberration, and results are shown on shift-variant deblurring of images captured by the same camera; inverting the estimated blur significantly improves image quality. The results suggest that these two steps, blur estimation and restoration, are the building blocks needed to achieve shift-variant blind image deblurring, the long-term goal of this thesis. The thesis ends with conclusions and suggestions for future work.
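A standard fast approximation of shift-variant blur, of the kind discussed in the first and last parts, writes the blurred image as a weighted sum of a few shift-invariant convolutions with smoothly varying interpolation weights. The linear weight layout and the Gaussian PSFs below are illustrative assumptions (weighting the image before convolution is an equally common variant):

```python
import numpy as np
from scipy.signal import fftconvolve

def shift_variant_blur(img, psfs, weights):
    """Approximate shift-variant blur as sum_k w_k(x) * (psf_k * img)(x),
    where the weight maps w_k form a partition of unity over the image."""
    out = np.zeros_like(img, dtype=float)
    for psf, w in zip(psfs, weights):
        out += w * fftconvolve(img, psf, mode="same")
    return out

def gaussian_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

# two local PSFs blended left-to-right across the field of view
n = 128
img = np.zeros((n, n)); img[n // 2, ::8] = 1.0     # a row of point sources
wx = np.linspace(0.0, 1.0, n)[None, :] * np.ones((n, n))
out = shift_variant_blur(img, [gaussian_psf(1.0), gaussian_psf(4.0)],
                         [1.0 - wx, wx])
```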
39

Novel mathematical techniques for structural inversion and image reconstruction in medical imaging governed by a transport equation

Prieto Moreno, Kernel Enrique January 2015 (has links)
Since the inverse problem in diffuse optical tomography (DOT) is nonlinear and severely ill-posed, currently only low-resolution reconstructions are feasible in the presence of noise. The purpose of this thesis is to improve the reconstruction of the main optical properties of tissues in DOT with some novel mathematical methods. We have used the Landweber (L) method, the Landweber-Kaczmarz (LK) method and its improved loping-Landweber-Kaczmarz (L-LK) variant, combined with sparsity or total variation regularization, for single and simultaneous reconstructions of the absorption and scattering coefficients. The sparsity method assumes the existence of a sparse solution which has a simple description and is superposed onto a known background; it is solved using a smooth gradient and a soft-thresholding operator. Moreover, we have proposed an improved sparsity method. For total variation reconstruction, we have used the split Bregman method and the lagged diffusivity method, and we have also implemented a memory-efficient variant that minimizes the storage of large Hessian matrices. In addition, individual and simultaneous contrast-value reconstructions are presented using the level set (LS) method. Furthermore, the shape derivative of DOT based on the radiative transfer equation (RTE) is derived using shape sensitivity analysis, and some reconstructions of the absorption coefficient are presented using this shape derivative via the LS method. Whereas most approaches to the nonlinear DOT problem use the diffusion approximation (DA) to the RTE to model the propagation of light in tissue, the accuracy of the DA is unsatisfactory when the medium is not scattering-dominant, in particular close to the light sources and the boundary, as well as inside low-scattering or non-scattering regions. Therefore, we have solved the inverse problem in DOT with the more accurate time-dependent RTE in two dimensions.
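The sparsity reconstruction summarized above combines a gradient (Landweber) step with a soft-thresholding operator. A minimal sketch for a linearized forward model Ax ≈ b with a known background, essentially ISTA; the names and step choices are illustrative, and the actual DOT problem is nonlinear:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator used by the sparsity reconstruction."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def landweber_sparsity(A, b, x_bg, lam=0.01, iters=500):
    """Landweber iteration with soft thresholding on the deviation from a
    known background x_bg (ISTA applied to a linearized problem A x = b)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe step from the spectral norm
    dx = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ (x_bg + dx) - b               # residual of the linear model
        dx = soft(dx - step * (A.T @ r), lam * step)
    return x_bg + dx
```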
40

Learning structured models on weighted graphs, with applications to spatial data analysis / Apprentissage de modèles structurés sur graphes pondérés et application à l’analyse de données spatiales

Landrieu, Loïc 26 June 2016 (has links)
Modeling complex processes often involves a high number of variables with an intricate correlation structure. For example, many spatial phenomena display strong spatial regularity: variables corresponding to neighboring regions are more correlated than distant ones. The formalism of weighted graphs captures these relationships between interacting variables in a compact manner, permitting the mathematical formulation of many spatial data analysis tasks. The first part of this manuscript focuses on optimization problems with graph-structured regularizers, such as the total variation or the total boundary length. We first present the convex formulation and its resolution with proximal splitting algorithms. We introduce a new preconditioning scheme for the existing generalized forward-backward proximal splitting algorithm, specifically designed for graphs with high variability in neighborhood configurations and edge weights. We then introduce a new algorithm, cut pursuit, which exploits the links between graph cuts and total variation in a working-set scheme, along with a variant that solves problems regularized by the nonconvex total boundary length penalty. We show that the proposed approaches match or outperform the state of the art on geostatistical aggregation as well as image recovery problems. The second part focuses on the development of a new model expanding continuous-time Markov chain models to general undirected weighted graphs. This allows a finer account of the interactions between neighboring nodes in structured classification, as demonstrated on a supervised land-use classification task from cadastral data.
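As a small sketch of the kind of problem the first part addresses, consider total variation denoising on a weighted graph, min_x 0.5*||x - y||^2 + sum_e w_e*|x_i - x_j|, solved here by a basic projected dual ascent. The step size and iteration count are illustrative, and the thesis's preconditioned forward-backward and cut pursuit algorithms are far more efficient than this baseline:

```python
import numpy as np

def graph_tv_denoise(y, edges, w, tau=0.2, iters=1000):
    """min_x 0.5*||x-y||^2 + sum_e w_e*|x_i - x_j| by dual projection.
    edges: (E, 2) int array of node pairs, w: (E,) positive edge weights."""
    i, j = edges[:, 0], edges[:, 1]
    p = np.zeros(len(w))                      # dual variable, one per edge
    x = y.copy()
    for _ in range(iters):
        # dual ascent with the graph gradient, projected onto the box [-w, w]
        p = np.clip(p + tau * (x[i] - x[j]), -w, w)
        # the primal is explicit given p: x = y - div(p)
        div = np.zeros_like(y)
        np.add.at(div, i, p)
        np.add.at(div, j, -p)
        x = y - div
    return x

# toy usage: a noisy piecewise-constant signal on a path graph
rng = np.random.default_rng(2)
y = np.r_[np.zeros(50), np.ones(50)] + 0.2 * rng.standard_normal(100)
edges = np.column_stack([np.arange(99), np.arange(1, 100)])
x = graph_tv_denoise(y, edges, w=np.full(99, 0.5))
```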
