21.
[en] HIGH-RESOLUTION OTDR WITH EMBEDDED PRECISE FAULT ANALYSIS / [pt] OTDR DE ALTA RESOLUÇÃO COM ANÁLISE PRECISA DE FALHAS INTEGRADA
FELIPE CALLIARI, 16 November 2021 (has links)
Optical fibers are susceptible to mechanical stress and may be damaged or broken; physical-layer supervision is therefore essential to identify these failures and remediate them as quickly as possible. In order to hasten and simplify the scheduling of in-field repair units, an automated fiber measurement system was developed, based on a digital signal processing (DSP) unit capable of autonomously identifying fault positions. Combining this DSP unit with a tunable photon-counting OTDR makes such a device possible. This work presents the development of the building blocks for such a device.
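The core of such a DSP unit is locating abrupt power drops along the measured trace. Below is a minimal sketch of that idea, assuming a simple derivative-threshold detector on a simulated log-scale trace; the function name and parameters are illustrative, not the thesis's actual DSP implementation.

```python
import numpy as np

def find_fault_positions(trace_db, dz, drop_db=0.5):
    """Flag positions where the backscatter level drops abruptly.

    trace_db: OTDR trace in dB (log-scale backscatter vs. distance)
    dz:       spatial sampling interval in meters
    drop_db:  minimum downward step (dB) treated as a fault
    """
    step = np.diff(trace_db)               # local level change per sample
    idx = np.where(step < -drop_db)[0]     # abrupt drops only
    return idx * dz                        # sample indices -> meters

# Toy trace: uniform fiber attenuation plus two localized loss events.
rng = np.random.default_rng(0)
z = np.arange(0, 10_000, 1.0)              # 1 m sampling over 10 km
trace = -0.0002 * z                        # 0.2 dB/km background loss
trace[4_000:] -= 1.5                       # loss event near 4.0 km
trace[7_500:] -= 3.0                       # loss event near 7.5 km
trace += 0.01 * rng.standard_normal(z.size)

print(find_fault_positions(trace, dz=1.0))  # ~[3999., 7499.]
```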
22.
Trajetória central, métodos de ponto proximal generalizado e trajetória de Cauchy em variedades Riemannianas / Central trajectory, generalized proximal point methods and Cauchy trajectory in Riemannian manifolds
VELÁSQUEZ, Marco Antonio Lázaro, 11 July 2018 (has links)
In convex optimization problems and, more generally, in variational inequality problems, three concepts arise: the central path (defined by a barrier function), the generalized proximal point algorithm (with Bregman distances), and the Cauchy trajectory in Riemannian manifolds. This dissertation studies the three concepts and their possible relationships, established mainly for linear programming. First it is shown, under suitable hypotheses, that the central path is well defined, bounded, and continuous, that it has cluster points, that these cluster points solve the variational inequality problem, and that the path converges to the analytic center of the solution set. Next, also under suitable hypotheses, it is proved that the sequence generated by the generalized proximal point algorithm converges to a solution of the variational inequality problem. An important case occurs when the central path is defined with a Bregman distance as the barrier function: in this setting, the central path and the sequence generated by the generalized proximal point algorithm converge to the same point. Moreover, for linear programming the sequence generated by the generalized proximal point algorithm is contained in the central path. Finally, for linear programming the central path also coincides with the Cauchy trajectory in the Riemannian manifold defined on an open subset of R^n with the metric given by the Hessian of the barrier function.
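To make the generalized proximal point iteration concrete: with the negative-entropy kernel, the Bregman proximal step for a linear objective over the simplex has a closed-form multiplicative update. The sketch below is a minimal illustration under these assumptions, not the dissertation's algorithm verbatim.

```python
import numpy as np

def kl_prox_step(x, c, lam):
    """One generalized proximal step with the entropy kernel:
    argmin over the simplex of <c, y> + (1/lam) * KL(y, x),
    which reduces to a multiplicative update."""
    y = x * np.exp(-lam * c)
    return y / y.sum()

c = np.array([0.3, 0.1, 0.7])   # linear objective over the simplex
x = np.full(3, 1 / 3)           # strictly interior starting point
for _ in range(200):
    x = kl_prox_step(x, c, lam=0.5)
print(x)                        # mass concentrates on argmin c (index 1)
```

For a linear program of this kind the iterates stay in the interior of the simplex and approach a solution, mirroring the central-path behavior discussed above.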
23.
Optimization over nonnegative matrix polynomials
Cederberg, Daniel, January 2023 (has links)
This thesis is concerned with convex optimization problems over matrix polynomials that are constrained to be positive semidefinite on the unit circle. Problems of this form appear in signal processing and can often be solved as semidefinite programs (SDPs). Interior-point solvers for these SDPs scale poorly, and this thesis aims to design first-order methods that are more efficient. We propose methods based on a generalized proximal operator defined in terms of a Bregman divergence. Empirical results on three applications in signal processing demonstrate that the proposed methods scale much better than interior-point solvers. As an example, for sparse estimation of spectral density matrices, Douglas-Rachford splitting with the generalized proximal operator is about 1000 times faster and scales to much larger problems. The ability to solve larger problems allows us to perform functional connectivity analysis of the brain by constructing a sparse estimate of the inverse spectral density matrix.
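For reference, the classical Euclidean form of Douglas-Rachford splitting, of which the Bregman generalized-proximal variant above is an extension, alternates two proximal operators. A minimal sketch on a toy problem (minimize 0.5*||x - b||^2 subject to x >= 0), with illustrative names and parameters:

```python
import numpy as np

def prox_f(v, b, t):
    """Proximal operator of f(x) = 0.5*||x - b||^2."""
    return (v + t * b) / (1 + t)

def prox_g(v):
    """Proximal operator of the indicator of {x >= 0}: a projection."""
    return np.maximum(v, 0.0)

def douglas_rachford(b, t=1.0, iters=100):
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_f(z, b, t)
        y = prox_g(2 * x - z)   # reflect through x, then second prox
        z = z + y - x           # update of the governing sequence
    return prox_f(z, b, t)

b = np.array([1.0, -2.0, 0.5])
print(douglas_rachford(b))      # -> [1.0, 0.0, 0.5], the projection of b
```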
24.
Unsupervised 3D image clustering and extension to joint color and depth segmentation / Classification non supervisée d'images 3D et extension à la segmentation exploitant les informations de couleur et de profondeur
Hasnat, Md Abul, 01 October 2014 (has links)
Access to 3D images at a reasonable frame rate is now widespread, thanks to recent advances in low-cost depth sensors as well as efficient methods to compute 3D from 2D images. As a consequence, there is strong demand to enhance the capability of existing computer vision applications by incorporating 3D information. Indeed, it has been demonstrated in numerous studies that the accuracy of different tasks increases when 3D information is included as an additional feature. However, for the task of indoor scene analysis and segmentation, several important issues remain, such as: (a) how can the 3D information itself be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner?
In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this end, we consider image normals as the prominent feature of a 3D image and cluster them with methods based on finite statistical mixture models. We adopt the Bregman soft clustering method to ensure computationally efficient clustering, and we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel model-based clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application to 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color images, also called RGB-D images. To this end, we first propose a statistical image generation model for RGB-D images, and then a novel RGB-D segmentation method using a joint color-spatial-axial clustering followed by a statistical planar region merging method. Results show that the proposed method is comparable with state-of-the-art methods and requires less computation time. Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable to clustering other types of data, such as speech and gene expression, and that they can be used for complex tasks such as joint image-speech data analysis.
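Bregman soft clustering alternates expectation-style responsibility updates with centroid updates that are means under the chosen divergence. Below is a minimal sketch with the squared Euclidean divergence, the simplest Bregman divergence, and illustrative parameter choices; the thesis instead matches divergences to directional distributions such as von Mises-Fisher and Watson.

```python
import numpy as np

def bregman_soft_clustering(X, k, iters=50, beta=1.0, seed=0):
    """Soft clustering with the squared Euclidean Bregman divergence;
    responsibilities and mean-centroids alternate, as in EM."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)]  # initial centroids
    pi = np.full(k, 1 / k)                             # mixing weights
    for _ in range(iters):
        d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # n x k
        logp = np.log(pi) - beta * d
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)              # responsibilities
        pi = p.mean(axis=0)
        mu = (p.T @ X) / p.sum(axis=0)[:, None]        # centroids = means
    return mu, p

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)) + [3, 0],
               rng.normal(0, 1, (100, 2)) - [3, 0]])
mu, p = bregman_soft_clustering(X, k=2)
print(np.round(mu, 1))          # two centroids near (3, 0) and (-3, 0)
```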
25.
Numerical Methods for Multi-Marginal Optimal Transportation / Méthodes numériques pour le transport optimal multi-marges
Nenna, Luca, 05 December 2016 (has links)
In this thesis we aim at giving a general numerical framework to approximate solutions to optimal transport (OT) problems. The general idea is to introduce an entropic regularization of the initial problems. The regularized problem corresponds to the minimization of a relative entropy with respect to a given reference measure; indeed, this is equivalent to finding the projection of the joint coupling with respect to the Kullback-Leibler divergence. This allows us to make use of the Bregman/Dykstra algorithm and to solve several variational problems related to OT. We are especially interested in solving multi-marginal optimal transport (MMOT) problems arising in physics, such as in fluid dynamics (e.g., incompressible Euler equations à la Brenier) and in quantum physics (e.g., density functional theory). In these cases we show that the entropic regularization plays a more important role than a simple numerical stabilization. Moreover, we give some results concerning the existence and characterization of optimal transport maps (e.g., fractal maps) for MMOT.
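In the two-marginal case, the iterative Bregman projections described above reduce to the well-known Sinkhorn algorithm: the Gibbs kernel exp(-C/eps) is rescaled alternately to match each marginal in KL geometry. A minimal sketch under these assumptions (the thesis treats the harder multi-marginal case, which this sketch does not):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    """Two-marginal entropic OT via iterative Bregman (KL) projections:
    rescale the Gibbs kernel exp(-C/eps) to match each marginal in turn."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)          # enforce the column marginal
        u = mu / (K @ v)            # enforce the row marginal
    return u[:, None] * K * v[None, :]   # regularized coupling

n = 50
x = np.linspace(0, 1, n)
mu = np.full(n, 1 / n)                       # uniform source
nu = np.exp(-((x - 0.7) ** 2) / 0.01)        # concentrated target
nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2           # quadratic cost
P = sinkhorn(mu, nu, C)
print(np.allclose(P.sum(axis=1), mu, atol=1e-6))   # marginals recovered
```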
26.
Optimization tools for non-asymptotic statistics in exponential families
Le Priol, Rémi, 04 1900 (has links)
Exponential families are a ubiquitous class of models in statistics.
On the one hand, they can model any data type.
Actually, the most common distributions are exponential families: Gaussians, categorical, Poisson, Gamma, Wishart, or Dirichlet.
On the other hand, they sit at the core of generalized linear models (GLM), a foundational class of models in machine learning.
They are also supported by beautiful mathematics thanks to their connection with convex duality and the Laplace transform.
This beauty is definitely responsible for the existence of this thesis.
In this manuscript, we make three contributions at the intersection of optimization and statistics, all revolving around exponential families.
The first contribution adapts and improves a variance-reduced optimization algorithm called stochastic dual coordinate ascent (SDCA) to train a particular class of GLMs called conditional random fields (CRFs). CRFs are one of the cornerstones of structured prediction; they were notoriously hard to train until the advent of variance reduction techniques, and our improved version of SDCA performs favorably compared to the previous state of the art.
The second contribution focuses on causal discovery.
Exponential families are widely used in graphical models, and in particular in causal graphical models.
This contribution investigates a specific conjecture that gained some traction in previous work: causal models adapt faster to perturbations of the environment.
Using results from optimization, we find strong support for this assumption when the perturbation comes from an intervention on a cause, and evidence against it when the perturbation comes from an intervention on an effect. These findings call for a refinement of the conjecture.
The third contribution addresses a fundamental property of exponential families.
One of the most appealing properties of exponential families is the closed-form maximum likelihood estimate (MLE) and maximum a posteriori (MAP) estimate for a natural choice of conjugate prior. These two estimators are used almost everywhere, often unknowingly: how often are a mean and a variance computed for bell-shaped data without thinking about the underlying Gaussian model? Nevertheless, the literature to date lacks results on the finite-sample convergence, in information (Kullback-Leibler) divergence, of these estimators to the true distribution. Drawing on a parallel with optimization, we take some steps towards such a result, and we highlight directions for progress both in statistics and in optimization.
These three contributions are all using tools from optimization at the service of statistics in exponential families: improving upon an algorithm to learn GLM, characterizing the adaptation speed of causal models, and estimating the learning speed of ubiquitous models.
By tying together optimization and statistics, this thesis is taking a step towards a better understanding of the fundamentals of machine learning.
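As a small illustration of the third contribution's setting: for a univariate Gaussian, both the MLE and the KL divergence to the true distribution are available in closed form, so the finite-sample behavior can be observed directly. A minimal sketch with illustrative parameter and sample-size choices:

```python
import numpy as np

def gaussian_mle(x):
    """Closed-form MLE of a univariate Gaussian: sample mean and variance."""
    return x.mean(), x.var()

def kl_gaussian(m1, v1, m2, v2):
    """KL( N(m1, v1) || N(m2, v2) ) in closed form."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1)

rng = np.random.default_rng(0)
true_m, true_v = 1.0, 4.0
for n in (10, 100, 1_000, 10_000):
    sample = rng.normal(true_m, np.sqrt(true_v), size=n)
    m, v = gaussian_mle(sample)
    print(n, kl_gaussian(m, v, true_m, true_v))

# The printed divergences should shrink roughly like 1/n: the kind of
# non-asymptotic rate the contribution aims to establish in general.
```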
27.
Interest point detection by compressive sensing in a multispectral image / Détection de points d'intérêt par acquisition compressée dans une image multispectrale
Rousseau, Sylvain, 02 July 2013 (has links) (PDF)
Multi- and hyperspectral sensors generate an enormous stream of data. One way around this difficulty is to perform a compressive acquisition of the multi- or hyperspectral object: the data are compressed directly, and the object is reconstructed only when needed. The next step is to avoid this reconstruction altogether and to work directly with the compressed data in order to carry out classical processing on an object of this kind. After introducing a first approach that uses Riemannian tools to perform edge detection in a multispectral image, we present the principles of compressive sensing and various algorithms used to solve the problems it raises. We then devote an entire chapter to the detailed study of one family of them, Bregman-type algorithms, whose flexibility and efficiency allow us to solve the minimization problems encountered later. We next consider signature detection in a multispectral image, and in particular an original algorithm by Guo and Osher based on an L1 minimization. This algorithm is generalized to the compressive sensing setting, and a second generalization enables pattern detection in a multispectral image. Finally, we introduce new measurement matrices that greatly simplify the computations while retaining good measurement properties.
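Among the Bregman-type algorithms studied in the chapter mentioned above, a representative example is the linearized Bregman iteration for the basis-pursuit problem min ||x||_1 subject to Ax = b. A minimal sketch with illustrative parameters; the thesis's generalizations to signature and pattern detection are not reproduced here.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, mu=5.0, delta=None, iters=2000):
    """Linearized Bregman iteration for min ||x||_1 s.t. Ax = b
    (strictly, for the slightly smoothed ||x||_1 + ||x||^2/(2*delta*mu))."""
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from ||A||_2
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ x)     # accumulate the residual (dual ascent)
        x = delta * shrink(v, mu)  # thresholded primal iterate
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                          # dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0
x = linearized_bregman(A, b)
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))   # small relative error
```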
28.
Novel higher order regularisation methods for image reconstruction
Papafitsoros, Konstantinos, January 2015 (has links)
In this thesis we study novel higher order total variation-based variational methods for digital image reconstruction. These methods are formulated in the context of Tikhonov regularisation. We focus on regularisation techniques in which the regulariser incorporates second order derivatives or a sophisticated combination of first and second order derivatives. The introduction of higher order derivatives in the regularisation process has been shown to be an advantage over the classical first order case, i.e., total variation regularisation, as classical artifacts such as the staircasing effect are significantly reduced or totally eliminated. In image inpainting, too, the introduction of higher order derivatives in the regulariser turns out to be crucial for achieving interpolation across large gaps. First, we introduce, analyse and implement a combined first and second order regularisation method with applications in image denoising, deblurring and inpainting. The method, numerically realised by the split Bregman algorithm, is computationally efficient and capable of giving results comparable with total generalised variation (TGV), a state-of-the-art higher order method. An additional experimental analysis is performed for image inpainting, and an online demo is provided on the IPOL (Image Processing Online) website. We also compute and study properties of exact solutions of the one-dimensional total generalised variation problem with L^{2} data fitting term, for simple piecewise affine data functions, with or without jumps. This gives insight into how this type of regularisation behaves and unravels the role of the TGV parameters. Finally, we introduce, study and analyse a novel non-local Hessian functional. We prove localisations of the non-local Hessian to the local analogue in several topologies, and our analysis results in derivative-free characterisations of higher order Sobolev and BV spaces. An alternative formulation of a non-local Hessian functional is also introduced, which is able to produce piecewise affine reconstructions in image denoising, outperforming TGV.
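For context, the split Bregman algorithm mentioned above can be stated compactly in the classical first-order TV case; the thesis's combined first- and second-order functional adds a second-difference term to the same scheme. A minimal 1D denoising sketch under these assumptions, with illustrative parameter values:

```python
import numpy as np

def split_bregman_tv1d(f, mu=10.0, lam=5.0, iters=100):
    """Split Bregman for 1D TV denoising:
    min_u (mu/2)*||u - f||^2 + ||D u||_1, with the splitting d = D u."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n finite differences
    A = mu * np.eye(n) + lam * D.T @ D    # u-subproblem is a linear solve
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)                   # Bregman (dual-like) variable
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - 1 / lam, 0.0)
        b = b + Du - d                    # Bregman update
    return u

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
clean = (x > 0.3).astype(float) + (x > 0.7)     # piecewise constant
noisy = clean + 0.1 * rng.standard_normal(x.size)
print(np.abs(split_bregman_tv1d(noisy) - clean).mean())  # small error
```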
29.
Methods for ℓp/TVp Regularized Optimization and Their Applications in Sparse Signal Processing
Yan, Jie, 14 November 2014 (has links)
Exploiting signal sparsity has recently received considerable attention in a variety of areas including signal and image processing, compressive sensing, and machine learning. Many of these applications involve optimization models that are regularized by certain sparsity-promoting metrics. The two most popular regularizers are based on the l1 norm, which approximates the sparsity of vectorized signals, and the total variation (TV) norm, which serves as a measure of gradient sparsity of an image.
Nevertheless, the l1 and TV terms are merely two representative measures of sparsity. To explore the matter of sparsity further, in this thesis we investigate relaxations of the regularizers to nonconvex terms such as lp and TVp "norms" with 0 <= p < 1. The contributions of the thesis are twofold. First, several methods for approaching globally optimal solutions of the related nonconvex problems, for improved signal/image reconstruction quality, are proposed. Most algorithms studied in the thesis fall into the category of iterative reweighting schemes, in which nonconvex problems are reduced to a series of convex sub-problems (see the sketch after this paragraph). In this regard, the second main contribution concerns complexity improvements of the l1/TV-regularized methodology, for which accelerated algorithms are developed. Along with these investigations, new techniques are proposed to address practical implementation issues, including an lp-related solver that is easily parallelizable and a matrix-based analysis that facilitates implementation of TV-related optimizations. Computer simulations are presented to demonstrate the merits of the proposed models and algorithms, as well as their applications to solving general linear inverse problems in signal and image denoising, sparse signal representation, compressive sensing, and compressive imaging.
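A minimal instance of such an iterative reweighting scheme: for lp-regularized denoising, each pass solves a convex weighted-l1 subproblem whose solution is a soft-threshold with entry-wise thresholds. The sketch below uses illustrative parameters and is not one of the thesis's algorithms verbatim.

```python
import numpy as np

def soft(v, t):
    """Entry-wise soft-thresholding with (possibly vector) threshold t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lp_denoise_irl1(y, lam=0.5, p=0.5, iters=20, eps=1e-6):
    """Iteratively reweighted l1 for min 0.5*||x - y||^2 + lam*||x||_p^p:
    each pass replaces the lp term by a weighted-l1 majorizer, whose
    proximal step is a soft-threshold with entry-wise thresholds."""
    x = y.copy()
    for _ in range(iters):
        w = p * (np.abs(x) + eps) ** (p - 1)   # weights from the majorizer
        x = soft(y, lam * w)                   # convex subproblem, closed form
    return x

y = np.array([2.0, 0.3, -1.5, 0.05, 0.8])
print(lp_denoise_irl1(y))   # small entries are driven exactly to zero
```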