11

Método do Ponto proximal usando distâncias generalizadas separáveis - reescala e seleção do comprimento do passo / Rescaling and stepsize selection in proximal methods using separable generalized distances

Moreira Neto, Alvaro 29 May 2008 (has links)
In this work....
12

Generalized Maximum Entropy, Convexity and Machine Learning

Sears, Timothy Dean, tim.sears@biogreenoil.com January 2008 (has links)
This thesis identifies and extends techniques that can be linked to the principle of maximum entropy (maxent) and applied to parameter estimation in machine learning and statistics. Entropy functions based on deformed logarithms are used to construct Bregman divergences, and together these represent a generalization of relative entropy. The framework is analyzed using convex analysis to characterize generalized forms of exponential family distributions. Various connections to the existing machine learning literature are discussed and the techniques are applied to the problem of non-negative matrix factorization (NMF).
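As context for the construction sketched in this abstract, the following is a minimal illustration (not taken from the thesis) of how a Bregman divergence is generated from a convex entropy function: the negative Shannon entropy recovers the Kullback-Leibler divergence, while a Tsallis-style deformed-logarithm generator gives one example of a deformed relative entropy. The choice of generator and the parameter q = 1.5 are assumptions made only for this example.

```python
import numpy as np

def bregman_divergence(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# Negative Shannon entropy as generator: phi(p) = sum_i p_i log p_i.
phi_shannon = lambda p: np.sum(p * np.log(p))
grad_shannon = lambda p: np.log(p) + 1.0

# A Tsallis-style deformed-logarithm generator (q != 1); illustrative choice.
def phi_tsallis(p, q=1.5):
    return np.sum((p**q - p) / (q - 1.0))

def grad_tsallis(p, q=1.5):
    return (q * p**(q - 1.0) - 1.0) / (q - 1.0)

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.3, 0.3, 0.4])
print(bregman_divergence(phi_shannon, grad_shannon, x, y))  # equals KL(x || y), since both sum to 1
print(bregman_divergence(phi_tsallis, grad_tsallis, x, y))  # a deformed relative entropy
```
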
13

Structured numerical problems in contemporary applications

Sustik, Mátyás Attila 31 October 2013 (has links)
The presence of structure in a computational problem can often be exploited and can lead to a more efficient numerical algorithm. In this dissertation, we look at structured numerical problems that arise from applications in wireless communications and machine learning that also impact other areas of scientific computing. In wireless communication system designs, certain structured matrices (frames) need to be generated. The design of such matrices is equivalent to a symmetric inverse eigenvalue problem where the values of the diagonal elements are prescribed. We present algorithms that are capable of generating a larger set of these constructions than previous algorithms. We also discuss the existence of equiangular tight frames---frames that satisfy additional structural properties. Kernel learning is an important class of problems in machine learning. It often relies on efficient numerical algorithms that solve underlying convex optimization problems. In our work, the objective functions to be minimized are the von Neumann and the LogDet Bregman matrix divergences. The algorithm that solves this optimization problem performs matrix updates based on repeated eigendecompositions of diagonal plus rank-one matrices in the case of the von Neumann matrix divergence, and Cholesky updates in the case of the LogDet Bregman matrix divergence. Our contribution exploits the low-rank representations and the structure of the constraint matrices, resulting in more efficient algorithms than previously known. We also present two specialized zero-finding algorithms where we exploit the structure through the shape and exact formulation of the objective function. The first zero-finding task arises during the matrix update step which is part of the above-mentioned kernel learning application. The second zero-finding problem is for the secular equation; it is equivalent to the computation of the eigenvalues of a diagonal plus rank-one matrix. The secular equation arises in various applications, the most well-known being the divide-and-conquer eigensolver. In our solutions, we build upon a somewhat forgotten zero-finding method by P. Jarratt, first described in 1966. The method employs first derivatives only and needs the same number of evaluations as Newton's method, but converges faster. Our contributions are the more efficient specialized zero-finding algorithms.
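As background for the secular equation mentioned above, the sketch below evaluates the standard secular function for a diagonal-plus-rank-one matrix and finds the eigenvalue in each interval between adjacent diagonal entries. It uses SciPy's generic Brent root-finder, not the specialized Jarratt-type iteration developed in the dissertation, so it only illustrates the structure being exploited; the test matrix is made up.

```python
import numpy as np
from scipy.optimize import brentq

def secular(lam, d, z, rho):
    """Secular function f(lambda) = 1 + rho * sum_i z_i^2 / (d_i - lambda)."""
    return 1.0 + rho * np.sum(z**2 / (d - lam))

def diag_plus_rank1_eigs(d, z, rho, tol=1e-12):
    """Eigenvalues of diag(d) + rho * z z^T, with d sorted ascending, z_i != 0, rho > 0.
    Exactly one eigenvalue lies in each interval (d_i, d_{i+1}) and one beyond d_n."""
    n = len(d)
    upper = d[-1] + rho * np.dot(z, z) + 1.0  # safe upper bracket for the last root
    eigs = []
    for i in range(n):
        lo = d[i]
        hi = d[i + 1] if i + 1 < n else upper
        gap = hi - lo
        eigs.append(brentq(secular, lo + tol * gap, hi - tol * gap, args=(d, z, rho)))
    return np.array(eigs)

# Quick check against a dense eigensolver
d = np.array([0.0, 1.0, 2.5, 4.0])
z = np.array([0.3, -0.2, 0.5, 0.1])
rho = 2.0
print(diag_plus_rank1_eigs(d, z, rho))
print(np.linalg.eigvalsh(np.diag(d) + rho * np.outer(z, z)))
```
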
14

Contributions to the convergence theory and computational implementation of interior optimization methods for convex problems

Ruiz Garrido, Natalia Soledad Karen January 2016 (has links)
Doctor en Ciencias de la Ingeniería, Mención Modelación Matemática / This thesis studies algorithms for solving convex optimization problems with separable structure, and Walras economic equilibrium problems; accordingly, it is divided into two parts. The first part is a theoretical and numerical study of an alternating direction method of multipliers that uses an interior proximal term. The second part is a numerical study of second-order algorithms for solving the utility maximization problems that arise in economic equilibrium problems. The first chapter reviews some decomposition methods based on the augmented Lagrangian and then gives the definition and properties of generalized proximal distances. In the second chapter, the global convergence of the proposed method, called the Alternating Direction Method with Interior Proximal Regularization (RIPADM), is proved under standard assumptions. The third chapter establishes the global convergence of a variant of the RIPADM method that adds a relaxation factor to the update rule of the Lagrange multiplier. The fourth chapter presents a Matlab implementation of the RIPADM method, the original ADM, and the proximal method of multipliers (PMM) for solving the constrained LASSO problem and a support vector machine problem; the RIPADM implementation uses the Log-quad proximal distance and the Kullback-Leibler distance, which is a Bregman distance. The fifth chapter describes the Arrow-Debreu pure exchange model and reviews a recently proposed method for solving Walras economic equilibrium problems. The sixth chapter presents the computational implementation of a primal-dual interior-point method (PDIPM) and the projected gradient method with acceleration (AGPM) for solving the utility maximization problems that appear in Walras economic equilibrium problems.
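To fix ideas about the splitting involved, here is a textbook 2-block ADMM sketch for the unconstrained LASSO problem. It is not the RIPADM method with interior proximal regularization studied in the thesis (nor necessarily identical to the "original ADM" baseline mentioned above); the penalty parameter rho, iteration count, and synthetic data are illustrative.

```python
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Classical 2-block ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # Cache the Cholesky factor reused by every x-update
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

# Small synthetic example
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0), 2))
```
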
15

Collective reasoning under uncertainty and inconsistency

Adamcik, Martin January 2014 (has links)
In this thesis we investigate some global desiderata for probabilistic knowledge merging given several possibly jointly inconsistent, but individually consistent knowledge bases. We show that the most naive methods of merging, which combine applications of a single-expert inference process with the application of a pooling operator, fail to satisfy certain basic consistency principles. We therefore adopt a different approach. Following recent developments in machine learning where Bregman divergences appear to be powerful, we define several probabilistic merging operators which minimise the joint divergence between the merged knowledge and the given knowledge bases. In particular we prove that in many cases the result of applying such operators coincides with the sets of fixed points of averaging projective procedures - procedures which combine knowledge updating with pooling operators of decision theory. We develop relevant results concerning the geometry of Bregman divergences and prove new theorems in this field. We show that this geometry connects nicely with some desirable principles which have arisen in the epistemology of merging. In particular, we prove that the merging operators which we define by means of convex Bregman divergences satisfy analogues of the principles of merging due to Konieczny and Pino-Perez. Additionally, we investigate how such merging operators behave with respect to principles concerning irrelevant information, independence and relativisation which have previously been intensively studied in the case of single-expert probabilistic inference. Finally, we argue that two particular probabilistic merging operators which are based on the Kullback-Leibler divergence, a special type of Bregman divergence, have overall the most appealing properties amongst merging operators hitherto considered. By investigating some iterative procedures we propose algorithms to compute them in practice.
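As a small, concrete illustration of the link between divergence minimisation and pooling (context only, not the thesis's merging operators): over the probability simplex, the arithmetic mean of the given distributions minimises the sum of divergences KL(p_i || w), while the normalized geometric mean minimises the sum of KL(w || p_i). A quick numerical check, assuming these standard closed forms:

```python
import numpy as np

def linear_pool(P):
    """argmin_w sum_i KL(p_i || w): the arithmetic mean of the rows of P."""
    return P.mean(axis=0)

def log_linear_pool(P):
    """argmin_w sum_i KL(w || p_i): the normalized geometric mean of the rows of P."""
    g = np.exp(np.mean(np.log(P), axis=0))
    return g / g.sum()

def kl(p, q):
    return np.sum(p * np.log(p / q))

# Two individually consistent but disagreeing probability assignments
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.3, 0.4]])
print(linear_pool(P), sum(kl(p, linear_pool(P)) for p in P))
print(log_linear_pool(P), sum(kl(log_linear_pool(P), p) for p in P))
```
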
16

Convex regression and its extensions to learning a Bregman divergence and difference of convex functions

Siahkamari, Ali 26 January 2022 (has links)
Nonparametric convex regression has been extensively studied over the last two decades. It has been shown that any Lipschitz convex function can be approximated to arbitrary accuracy by a maximum of linear functions. Using this framework, in this thesis we generalize convex regression to learning an arbitrary Bregman divergence and learning a difference of convex functions. We provide approximation guarantees and sample complexity bounds for both of these extensions. Furthermore, we provide algorithms to solve the resulting optimization problems based on a 2-block alternating direction method of multipliers (ADMM). For this algorithm, we provide convergence guarantees with iteration complexity $O(n\sqrt{d}/\epsilon)$ for a dataset $X \in \mathbb{R}^{n \times d}$ and arbitrary positive $\epsilon$. Finally, we provide experiments on UCI datasets for both Bregman divergence learning and difference-of-convex-functions learning that demonstrate state-of-the-art performance on regression and classification tasks.
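To illustrate the max-of-linear-functions framework that these extensions build on, the sketch below solves the classical least-squares convex regression problem with pairwise convexity constraints using cvxpy. This generic small-scale formulation is not the 2-block ADMM solver developed in the thesis; the dataset and the quadratic target are synthetic.

```python
import numpy as np
import cvxpy as cp

def convex_regression(X, y):
    """Least-squares convex regression: fit values theta_i and subgradients g_i
    subject to the pairwise constraints theta_j >= theta_i + g_i^T (x_j - x_i)."""
    n, d = X.shape
    theta = cp.Variable(n)
    G = cp.Variable((n, d))
    constraints = [theta[j] >= theta[i] + G[i] @ (X[j] - X[i])
                   for i in range(n) for j in range(n) if i != j]
    cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints).solve()
    return theta.value, G.value

def predict(x, X, theta, G):
    # The fitted function is the max of the supporting affine pieces
    return np.max(theta + np.sum(G * (x - X), axis=1))

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 1))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(30)   # noisy convex target
theta, G = convex_regression(X, y)
print(predict(np.array([1.0]), X, theta, G))        # roughly x^2 at x = 1
```
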
17

Krylov subspace type methods for the computation of non-negative or sparse solutions of ill-posed problems

Pasha, Mirjeta 10 April 2020 (has links)
No description available.
18

Parameter choices for the split Bregman method applied to signal restoration

Hashemi, Seyyed Amirreza 20 October 2016 (has links)
No description available.
19

First-order gradient regularisation methods for image restoration: reconstruction of tomographic images with thin structures and denoising piecewise affine images

Papoutsellis, Evangelos January 2016 (has links)
The focus of this thesis is variational image restoration techniques that involve novel non-smooth first-order gradient regularisers: Total Variation (TV) regularisation in image and data space for reconstruction of thin structures from PET data and regularisers given by an infimal-convolution of TV and $L^p$ seminorms for denoising images with piecewise affine structures. In the first part of this thesis, we present a novel variational model for PET reconstruction. During a PET scan, we encounter two different spaces: the sinogram space that consists of all the PET data collected from the detectors and the image space where the reconstruction of the unknown density is finally obtained. Unlike most of the state of the art reconstruction methods in which an appropriate regulariser is designed in the image space only, we introduce a new variational method incorporating regularisation in image and sinogram space. In particular, the corresponding minimisation problem is formed by a total variational regularisation on both the sinogram and the image and with a suitable weighted $L^2$ fidelity term, which serves as an approximation to the Poisson noise model for PET. We establish the well-posedness of this new model for functions of Bounded Variation (BV) and perform an error analysis through the notion of the Bregman distance. We examine analytically how TV regularisation on the sinogram affects the reconstructed image especially the boundaries of objects in the image. This analysis motivates the use of a combined regularisation principally for reconstructing images with thin structures. In the second part of this thesis we propose a first-order regulariser that is a combination of the total variation and $L^p$ seminorms with $1 < p \le \infty$. A well-posedness analysis is presented and a detailed study of the one dimensional model is performed by computing exact solutions for simple functions such as the step function and a piecewise affine function, for the regulariser with $p = 2$ and $p = 1$. We derive necessary and sufficient conditions for a pair in $BV \times L^p$ to be a solution for our proposed model and determine the structure of solutions dependent on the value of $p$. In the case $p = 2$, we show that the regulariser is equivalent to the Huber-type variant of total variation regularisation. Moreover, there is a certain class of one dimensional data functions for which the regularised solutions are equivalent to high-order regularisers such as the state of the art total generalised variation (TGV) model. The key assets of our regulariser are the elimination of the staircasing effect - a well-known disadvantage of total variation regularisation - the capability of obtaining piecewise affine structures for $p = 1$ and qualitatively comparable results to TGV. In addition, our first-order $TVL^p$ regulariser is capable of preserving spike-like structures that TGV is forced to smooth. The numerical solution of the proposed first-order model is in general computationally more efficient compared to high-order approaches.
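Since the abstract notes that the $p = 2$ case of the proposed regulariser is equivalent to a Huber-type variant of total variation, the following one-dimensional sketch of Huber-smoothed TV denoising by plain gradient descent may help fix intuition. It only illustrates the qualitative effect of this kind of regularisation and is unrelated to the PET reconstruction or $TVL^p$ algorithms of the thesis; the parameters alpha and gamma and the test signal are arbitrary choices.

```python
import numpy as np

def huber_grad(t, gamma):
    return np.where(np.abs(t) <= gamma, t / gamma, np.sign(t))

def huber_tv_denoise(f, alpha=0.5, gamma=0.05, iters=2000):
    """Gradient descent on 0.5*||u - f||^2 + alpha * sum_i huber_gamma(u[i+1] - u[i])."""
    step = 1.0 / (1.0 + 4.0 * alpha / gamma)    # conservative step from a Lipschitz bound
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u)                          # forward differences
        g = huber_grad(d, gamma)
        reg_grad = np.zeros_like(u)
        reg_grad[:-1] -= g                      # d/du_i   of huber(u[i+1] - u[i]) is -h'(d_i)
        reg_grad[1:] += g                       # d/du_i+1 of the same term is      +h'(d_i)
        u = u - step * ((u - f) + alpha * reg_grad)
    return u

# Noisy piecewise-constant signal: TV-type regularisation approximately recovers the plateaus
rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(50), np.ones(50), 0.3 * np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)
print(np.round(huber_tv_denoise(noisy)[::25], 2))
```
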
20

[en] APPROXIMATE NEAREST NEIGHBOR SEARCH FOR THE KULLBACK-LEIBLER DIVERGENCE / [pt] BUSCA APROXIMADA DE VIZINHOS MAIS PRÓXIMOS PARA DIVERGÊNCIA DE KULLBACK-LEIBLER

19 March 2018 (has links)
[en] In a number of applications, data points can be represented as probability distributions. For instance, documents can be represented as topic models, images can be represented as histograms, and music can be represented as a probability distribution. In this work, we address the Approximate Nearest Neighbor problem where the points are probability distributions and the distance function is the Kullback-Leibler (KL) divergence. We show how to accelerate existing data structures, such as the Bregman Ball Tree, by posing the KL divergence as an inner product embedding. On the practical side, we investigate the use of two very popular indexing techniques: the Inverted Index and Locality Sensitive Hashing. Experiments on 6 real-world datasets show that the Inverted Index outperforms LSH and the Bregman Ball Tree in terms of queries per second and precision.
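The inner-product reduction mentioned in the abstract can be spelled out directly: for a fixed query q, KL(q || p) = sum_i q_i log q_i - <q, log p>, so ranking candidate distributions by KL divergence from q only requires inner products against log p. The brute-force sketch below verifies this identity on random data; it is not the thesis's Inverted Index or LSH indexing structures.

```python
import numpy as np

def kl(q, p):
    return np.sum(q * np.log(q / p))

def nearest_by_kl(q, P):
    """Brute-force nearest neighbour under KL(q || p): since
    KL(q || p) = sum_i q_i log q_i - <q, log p>, ranking only needs the inner product."""
    scores = np.log(P) @ q          # <q, log p> for every row p of P
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(10), size=1000)   # 1000 candidate distributions
q = rng.dirichlet(np.ones(10))
i = nearest_by_kl(q, P)
# Agrees with explicitly minimizing the divergence
print(i, int(np.argmin([kl(q, p) for p in P])))
```
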
