1 |
GPU Accelerated Framework for Cryogenic Electron Tomography using Proximal Algorithms. Rey Ramirez, Julio A. (04 1900)
Cryogenic electron tomography provides visualization of cellular complexes in situ, allowing a deeper understanding of cellular function. However, the projection images from this technique have a very low signal-to-noise ratio due to the limited electron dose, and the lack of projections at high tilt angles produces the 'missing-wedge' problem in the Fourier domain. These limitations in the projection data prevent traditional reconstruction techniques from achieving good reconstructions. Multiple strategies have been proposed to deal with the noise and with the artifacts arising from the 'missing-wedge' problem: for example, manually selecting subtomograms of identical structures and averaging them (subtomogram averaging), data-driven approaches that aim to perform subtomogram averaging automatically, and various methods for denoising tilt-series before reconstruction or denoising the volumes after reconstruction. Most of these approaches are additional pre-processing or post-processing steps independent of the reconstruction method, and the consistency of the resulting tomograms with the original projection data is lost after the modifications. We propose a GPU-accelerated optimization-based reconstruction framework using proximal algorithms. Our framework integrates denoising into the reconstruction process by alternating between reconstruction and denoising, relieving users of the need to select additional denoising algorithms and preserving the consistency between the final tomograms and the projection data. Thanks to the flexibility provided by proximal algorithms, the proximal operator used for each task can be interchanged, e.g., among various algebraic reconstruction methods and denoising techniques. We evaluate our approach qualitatively by comparison with current reconstruction and denoising approaches, showing excellent denoising capabilities and superior visual quality of the reconstructed tomograms. We evaluate the methods quantitatively on a recently proposed synthetic dataset for scanning transmission electron microscopy, achieving superior reconstruction quality on noisy and angle-limited synthetic data.
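As an illustration of the alternation described above, here is a minimal sketch in the spirit of proximal splitting, assuming user-supplied operators; the names project, backproject, and denoise_prox are hypothetical placeholders, not the framework's actual API:

    import numpy as np

    def reconstruct(b, project, backproject, denoise_prox, step=1e-3, iters=100):
        # Alternate a gradient step on the data-fidelity term ||Ax - b||^2
        # with a denoising step acting as the proximal operator of the prior.
        x = np.zeros_like(backproject(b))
        for _ in range(iters):
            residual = project(x) - b                 # compare forward projection with data
            x = x - step * backproject(residual)      # gradient step keeps data consistency
            x = denoise_prox(x)                       # interchangeable denoiser (e.g., a TV prox)
        return x

Because the denoiser enters only through this proximal step, swapping reconstruction or denoising methods amounts to swapping operators, which is the flexibility the abstract refers to.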
|
2 |
Space-Time Tomographic Reconstruction of Deforming Objects. Zang, Guangming (06 February 2020)
X-ray computed tomography (CT) is a popular imaging technique used for reconstructing volumetric properties of a large range of objects. Compared to traditional optical means, CT is a valuable tool for analyzing objects whose interesting internal structure or complex geometry is not otherwise accessible. In this thesis, a variety of computer vision and graphics applications of inverse problems using tomographic imaging modalities are presented:
The first application focuses on the CT reconstruction with a specific emphasis on recovering thin 1D and 2D manifolds embedded in 3D volumes. To reconstruct such structures at resolutions below the Nyquist limit of the CT image sensor, we devise a new 3D structure tensor prior, which can be incorporated as a regularizer into more traditional proximal optimization methods for CT reconstruction.
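Such a prior enters the reconstruction through the standard regularized objective (a generic sketch; the structure tensor prior is left abstract as R):

    \hat{x} = \arg\min_{x} \; \tfrac{1}{2}\,\|Ax - b\|_2^2 + \lambda\, R(x),

where A is the tomographic projection operator, b the measured projections, and λ the regularization weight; proximal optimization methods handle a nonsmooth R through its proximity operator.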
The second application is about space-time tomography: Through a combination of a new CT image acquisition strategy, a space-time tomographic image formation model, and an alternating, multi-scale solver, we achieve a general approach that can be used to analyze a wide range of dynamic phenomena.
Building on the second application, the third aims to improve the tomographic reconstruction of time-varying geometries undergoing faster, non-periodic deformations through a warp-and-project strategy.
Finally, with a physically plausible divergence-free prior for motion estimation, as well as a novel view synthesis technique, we present applications to dynamic fluid imaging (e.g., 4D soot imaging of a combustion process, a mixing fluid process, a fuel injection process, and view synthesis for visible light tomography), which further demonstrates the flexibility of our optimization framework.
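The divergence-free prior mentioned here has a standard mathematical form: for an estimated velocity field v, incompressibility reads

    \nabla \cdot v = 0,

and it can be enforced as a penalty \|\nabla \cdot v\|_2^2 or by projecting onto divergence-free fields inside the same proximal framework (this is the textbook formulation; the thesis's exact penalty may differ).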
|
3 |
Approximate Proximal Algorithms for Generalized Variational Inequalities with Pseudomonotone Multifunctions. Hsiao, Cheng-chih (19 June 2008)
In this paper, we establish several strong convergence results for a general approximate proximal algorithm and a general Bregman-function-based approximate proximal algorithm for solving generalized variational inequality problems with pseudomonotone multifunctions.
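For orientation, the exact (error-free) form of the Bregman-function-based proximal iteration for a multifunction T is standard: given a Bregman function f and step sizes \lambda_k > 0, the next iterate x^{k+1} solves

    0 \in \lambda_k\, T(x^{k+1}) + \nabla f(x^{k+1}) - \nabla f(x^k);

the approximate versions studied in the paper allow this inclusion to hold only up to a controlled error term (this display is the textbook iteration, not the paper's exact scheme).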
|
4 |
Proximal methods for convex minimization of Phi-divergences: application to computer vision / Méthodes proximales convexes pour la minimisation des Phi-divergences : applications à la stéréo vision. El Gheche, Mireille (27 May 2014)
Convex optimization aims at searching for the minimum of a convex function over a convex set. While the theory of convex optimization has been largely explored for about a century, several related developments have stimulated new interest in the topic. The first is the emergence of efficient optimization algorithms, such as proximal methods, which allow one to easily solve large-size nonsmooth convex problems in a parallel manner. The second is the discovery that convex optimization problems are more ubiquitous in practice than previously thought.
In this thesis, we address two different problems within the framework of convex optimization. The first is an application to computer stereo vision, where the goal is to recover the depth information of a scene from a pair of images taken from the left and right positions. The second is the proposition of new mathematical tools to deal with convex optimization problems involving information measures, where the objective is to minimize the divergence between two statistical objects such as random variables or probability distributions. We propose a convex approach to address the problem of dense disparity estimation under varying illumination conditions. A convex energy function is derived for jointly estimating the disparity and the illumination variation. The resulting problem is tackled in a set-theoretic framework and solved using proximal tools. It is worth emphasizing the ability of this method to process multicomponent images under illumination variation. The conducted experiments indicate that this approach can effectively deal with local illumination changes and yields better results than existing methods. We then extend the previous approach to the problem of multi-view disparity estimation. Rather than estimating a single depth map, we estimate a sequence of disparity maps, one for each input image. We address this problem by adopting a discrete reformulation that can be efficiently solved through a convex relaxation. This approach offers the advantage of handling both convex and nonconvex similarity measures within the same framework. We have shown that the additional complexity required by applying our method to the multi-view case is small with respect to the stereo case. Finally, we have proposed a novel approach to handle a broad class of statistical distances, called φ-divergences, within the framework of proximal algorithms. In particular, we have developed the expressions of the proximity operators of several φ-divergences, such as the Kullback-Leibler, Jeffreys-Kullback, Hellinger, Chi-square, Iα, and Rényi divergences. This allows proximal algorithms to deal with problems involving such divergences, thus overcoming the limitations of current state-of-the-art approaches for similar problems. The proposed approach is validated in two different contexts. The first is an application to image restoration that illustrates how to employ divergences as a regularization term, while the second is an application to image registration that employs divergences as a data fidelity term.
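For reference, the two standard definitions at play are (with φ convex and φ(1) = 0):

    D_\varphi(p \,\|\, q) = \sum_i q_i\, \varphi\!\left(\frac{p_i}{q_i}\right),
    \qquad
    \mathrm{prox}_{\gamma f}(x) = \arg\min_{u} \; f(u) + \frac{1}{2\gamma}\,\|u - x\|_2^2;

for instance, φ(t) = t log t yields the Kullback-Leibler divergence. The thesis's contribution is to derive closed forms of this proximity operator for the divergences listed above, so that they can be used directly inside primal-dual proximal schemes.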
|
5 |
[en] SEMIDEFINITE PROGRAMMING VIA GENERALIZED PROXIMAL POINT ALGORITHM / [pt] PROGRAMAÇÃO SEMIDEFINIDA VIA ALGORITMO DE PONTO PROXIMAL GENERALIZADO. MARIO HENRIQUE ALVES SOUTO NETO (01 July 2019)
Many problems of interest can be solved by means of Semidefinite Programming (SDP). The potential applications range over telecommunications, electrical power systems, game theory, and many more fields. Additionally, the fact that SDP is a subclass of convex optimization brings a set of theoretical guarantees that makes SDP very appealing. However, among all subclasses of convex optimization, SDP remains one of the most challenging in practice: state-of-the-art semidefinite programming solvers still do not efficiently solve large-scale instances. In this regard, this thesis proposes a novel algorithm for solving SDP problems. The main contribution of this novel algorithm is to achieve a substantial speedup by exploiting the low-rank property inherent to several SDP problems. The convergence of the new methodology is proved by showing that the novel algorithm reduces to a particular case of the Approximate Proximal Point Algorithm. Moreover, the optimal dual variables are available as a consequence of the proposed algorithm. Along with the theoretical contributions, an open-source numerical solver, called ProxSDP, is made available with this work. The performance of ProxSDP in comparison to state-of-the-art SDP solvers is evaluated on three case studies.
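A minimal sketch of the low-rank idea, assuming the key step is a projection onto positive semidefinite matrices of bounded rank (this illustrates the principle, not ProxSDP's actual implementation):

    import numpy as np

    def project_psd_low_rank(X, r):
        # Project a symmetric matrix onto PSD matrices of rank at most r
        # by keeping only the r largest nonnegative eigenvalues.
        w, V = np.linalg.eigh((X + X.T) / 2)   # eigendecomposition of the symmetrized input
        w = np.clip(w, 0.0, None)              # zero out negative eigenvalues (PSD part)
        idx = np.argsort(w)[::-1][:r]          # indices of the r largest eigenvalues
        return (V[:, idx] * w[idx]) @ V[:, idx].T

When the solution has rank r much smaller than the matrix dimension, only a few leading eigenpairs are needed per iteration, which is where the speedup comes from.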
|
6 |
Paralelizace náročných úloh rekonstrukce v dynamické magnetické rezonanci / Parallelization of complex tasks in reconstruction of dynamic magnetic resonance. Bijotová, Kateřina (January 2019)
This thesis deals with the parallelization of complex tasks in the reconstruction of dynamic magnetic resonance images. It describes the basic principle of magnetic resonance and its relation to the Fourier transform, and explains the difference between static and dynamic magnetic resonance image reconstruction. It analyzes the SVD algorithm and its use in magnetic resonance image reconstruction. It presents the principles and the importance of parallel computing in magnetic resonance imaging and describes the CUDA technology. The thesis also describes an implementation of the reconstruction model in MATLAB and in Java, optimized with the gpuArray function in the MATLAB case and with the JCuda library in the Java case.
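A brief sketch of how SVD is commonly used in dynamic MR reconstruction, as a low-rank approximation of the frame series (an illustration under standard assumptions, not the thesis's exact model):

    import numpy as np

    def low_rank_approx(frames, r):
        # Stack each frame as a column of the Casorati matrix (pixels x time)
        # and keep the r dominant spatiotemporal components.
        nframes = frames.shape[0]
        C = frames.reshape(nframes, -1).T
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        C_r = (U[:, :r] * s[:r]) @ Vt[:r, :]
        return C_r.T.reshape(frames.shape)

The SVD of large Casorati matrices is exactly the kind of dense linear algebra that benefits from GPU execution via gpuArray or JCuda.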
|
7 |
Doplňování chybějících vzorků v audio signálu / Inpainting of Missing Audio Signal Samples. Mach, Václav (January 2016)
Sparse signal representations are increasingly used in signal processing: a signal is expressed exactly, or approximated very well, as a linear combination of a very small number of vectors from a chosen representation system. This thesis applies sparse representations to the restoration of damaged audio recordings, both historical and recent. Historical recordings in particular suffer from disturbances such as clicks and noise. Short gaps in audio recordings have so far been treated with interpolation techniques, most notably autoregressive modeling. Recently, an algorithm called Audio Inpainting was introduced, which fills in missing samples of an audio signal using sparse representations; it solves the underlying optimization tasks with greedy algorithms. The goal of this thesis is to compare existing interpolation methods with the Audio Inpainting technique. In addition, the optimization tasks are solved with algorithms based on l1-relaxation, in both the analysis and the synthesis model, primarily proximal algorithms. These algorithms operate either on individual coefficients independently or on coefficients together with their neighborhood, so-called structured sparsity. Structured sparsity is further used for denoising audio recordings. In the practical part, the individual algorithms are evaluated with respect to parameter settings that trade off reconstruction quality against computation time. All algorithms described in the thesis are compared on practical examples using the objective measures signal-to-noise ratio (SNR) and PEMO-Q. Finally, the restoration quality of the damaged audio signals is assessed.
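A minimal sketch of the synthesis-model l1 approach described above, using ISTA with an orthonormal DCT dictionary (the dictionary choice and parameters are illustrative assumptions, not the thesis's exact setup):

    import numpy as np
    from scipy.fft import dct, idct

    def inpaint_ista(y, mask, lam=0.1, iters=200):
        # Minimize 0.5*||M(idct(a)) - M(y)||^2 + lam*||a||_1 over DCT
        # coefficients a, where the boolean mask M marks reliable samples.
        a = dct(y * mask, norm='ortho')                 # initial coefficients
        for _ in range(iters):
            x = idct(a, norm='ortho')                   # synthesize signal from coefficients
            grad = dct((x - y) * mask, norm='ortho')    # gradient of the data term
            a = a - grad                                # step size 1 is safe: orthonormal dictionary, projection mask
            a = np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)  # soft threshold = l1 prox
        x = idct(a, norm='ortho')
        return np.where(mask, y, x)                     # keep reliable samples, fill the gaps

Greedy methods (as in the original Audio Inpainting algorithm) and the structured-sparsity variants evaluated in the thesis replace the soft-thresholding step with other coefficient-selection rules.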
|
8 |
Functional Norm Regularization for Margin-Based Ranking on Temporal Data. Stojkovic, Ivan (January 2018)
Quantifying the properties of interest is an important problem in many domains, e.g., assessing the condition of a patient, estimating the risk of an investment, or judging the relevance of a search result. However, the properties of interest are often latent and hard to assess directly, making it difficult to obtain the classification or regression labels needed to learn predictive models from observable features. In such cases, it is typically much easier to obtain a relative comparison of two instances, i.e., to assess which one is more intense with respect to the property of interest. One framework able to learn from this kind of supervision is the ranking SVM, and it forms the basis of our approach. Applications on biomedical datasets typically come with specific additional challenges. The first, and the major one, is the limited number of data examples, due to expensive measuring technology and/or the infrequency of conditions of interest; such a limited number of examples makes both the identification of patterns/models and their validation less useful and reliable. Repeated samples from the same subject are collected on multiple occasions over time, which breaks the IID sample assumption and introduces a dependency structure that needs to be taken into account appropriately. Also, feature vectors are high-dimensional, typically of much higher cardinality than the number of samples, making models less useful and their learning less efficient. The hypothesis of this dissertation is that functional norm regularization can help alleviate these challenges by improving the generalization ability and/or learning efficiency of predictive models, in this case specifically of approaches based on the ranking SVM framework. The temporal nature of the data was addressed with a loss that fosters temporal smoothness of the functional mapping, accounting for the assumption that temporally proximate samples are more correlated. The large number of feature variables was handled using the sparsity-inducing L1 norm, so that most features have zero effect in the learned functional mapping. The proposed sparse (temporal) ranking objective is convex but non-differentiable; therefore a smooth dual form is derived, taking the form of a quadratic function with box constraints, which allows efficient optimization. For the case where there are multiple similar tasks, a joint learning approach based on matrix norm regularization, using the trace norm L* and the sparse row L21 norm, is also proposed; an alternating minimization scheme with a proximal optimization algorithm was developed to solve this multi-task objective. The generalization potential of the proposed high-dimensional and multi-task ranking formulations was assessed in a series of evaluations on synthetically generated and real datasets. The high-dimensional approach was applied to disease severity score learning from gene expression data in human influenza cases and compared against several alternative approaches. The application resulted in a scoring function with improved predictive performance, as measured by the fraction of correctly ordered testing pairs, and in a set of selected features of high robustness, according to three similarity measures. The multi-task approach was applied to three human viral infection problems and to learning exam scores in Math and English. The proposed formulation with a mixed matrix norm was overall more accurate than formulations with single-norm regularization. / Computer and Information Science
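A compact sketch of the sparse ranking idea described above: pairwise hinge loss with an l1 penalty, minimized by a proximal subgradient scheme (an illustration of the formulation, not the dissertation's dual box-constrained solver):

    import numpy as np

    def rank_svm_l1(X, pairs, lam=0.01, step=0.01, iters=500):
        # Learn w so that X[i] @ w > X[j] @ w + 1 for each pair (i, j)
        # with i ranked above j; the l1 prox keeps w sparse.
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):
            g = np.zeros(d)
            for i, j in pairs:
                if (X[i] - X[j]) @ w < 1.0:    # margin violated: pair contributes to subgradient
                    g -= X[i] - X[j]
            w -= step * g / max(len(pairs), 1)
            w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # l1 prox
        return w

The temporal-smoothness loss and the multi-task trace/L21 regularizers mentioned above modify, respectively, the loss over pairs and the proximal step, while keeping this same alternation.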
|
9 |
Méthodes proximales pour la résolution de problèmes inverses : application à la tomographie par émission de positrons / Proximal methods for the resolution of inverse problems: application to positron emission tomography. Pustelnik, Nelly (13 December 2010)
The objective of this work is to propose reliable, efficient, and fast methods for minimizing the convex criteria found in inverse problems in imaging. We focus on restoration/reconstruction problems where the data are degraded by both a linear operator and noise, the latter not assumed to be necessarily additive. The reliability of the methods is ensured through the use of proximal algorithms, whose convergence is guaranteed when a convex criterion is considered. Efficiency is sought through the choice of criteria adapted to the noise characteristics, the linear operators, and the image specificities. Of particular interest are regularization terms based on total variation and/or the sparsity of signal frame coefficients. As a consequence of the use of frames, two approaches are investigated, depending on whether the analysis or the synthesis formulation is chosen. Fast processing requirements lead us to consider proximal algorithms with a parallel structure. Theoretical results are illustrated on several large-size inverse problems arising in image restoration, stereoscopy, multispectral imaging, and decomposition into texture and geometry components. We focus on a particular application, namely Positron Emission Tomography (PET), which is particularly difficult because of the presence of a projection operator combined with Poisson noise, leading to highly corrupted data. To optimize the reconstruction quality, we exploit the spatio-temporal characteristics of brain tissue activity.
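The PET criterion described here takes, in its generic form, a Poisson negative log-likelihood data term combined with a nonsmooth regularizer (a sketch of the standard formulation; the thesis's exact criterion adds spatio-temporal terms):

    \hat{x} = \arg\min_{x \ge 0} \; \sum_i \big[ (Ax)_i - y_i \log (Ax)_i \big] + \lambda\, R(x),

where A is the projection operator, y the measured counts, and R a total-variation or frame-sparsity penalty; neither term is smooth everywhere, which is precisely the setting proximal algorithms are designed for.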
|