1.
Learning to rank in supervised and unsupervised settings using convexity and monotonicity
Acharyya, Sreangsu, 10 September 2013 (has links)
This dissertation addresses the task of learning to rank, in both the supervised and unsupervised settings, by exploiting the interplay of convex functions, monotonic mappings and their fixed points. In the supervised setting of learning to rank, one wishes to learn from examples of correctly ordered items, whereas in the unsupervised setting, one tries to maximize some quantitatively defined characteristic of a "good" ranking. A ranking method selects one permutation from among the combinatorially many permutations defined on the items to rank. Accomplishing this optimally in the supervised setting, with minimal loss in generality, if any, is challenging. This dissertation addresses the problem by optimizing, globally and efficiently, a statistically consistent loss functional over the class of large-margin compositions of a linear function with an arbitrary, strictly monotonic, separable mapping. This capability also enables learning the parameters of a generalized linear model with an unknown link function. The method can handle infinite-dimensional feature spaces if the corresponding kernel function is known. In the unsupervised setting, a popular ranking approach is link analysis over a graph of recommendations, as exemplified by PageRank. This dissertation shows that PageRank may be viewed as an instance of an unsupervised consensus optimization problem. The dissertation then solves a more general problem of unsupervised consensus over noisy, directed recommendation graphs that have uncertainty over the set of "out" edges emanating from a vertex. The proposed consensus rank is essentially the PageRank over the expected edge set, where the expectation is computed over the distribution that achieves the most agreeable consensus.
This consensus is measured geometrically by a suitable Bregman divergence between the consensus rank and the ranks induced by item-specific distributions. Ranking methods deployed in the real world need to be resistant to spam, a particularly sophisticated type of which is link spam. A popular class of countermeasures "de-spams" the corrupted webgraph by removing abusive pages identified by supervised learning. Since exhaustive detection and neutralization is infeasible, there is a need for ranking functions that can, on the one hand, attenuate the effects of link spam without supervision and, on the other hand, counter spam more aggressively when supervision is available. A family of non-linear, iteratively defined monotonic functions is proposed that propagates "rank" and "trust" scores through the webgraph. It relies on non-linearity, monotonicity and Schur-convexity to provide resistance against spam.
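Since the abstract above casts PageRank as the fixed point of a consensus problem, it may help to recall the basic fixed-point iteration itself. The following is a minimal sketch of PageRank by power iteration in plain Python; the example graph, damping factor, and tolerance are illustrative assumptions, not taken from the dissertation.

```python
def pagerank(out_links, damping=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for PageRank on a dict {node: [out-neighbours]}."""
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in out_links.items():
            if outs:                       # distribute rank along out-edges
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:                          # dangling node: spread uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        if sum(abs(new[v] - rank[v]) for v in nodes) < tol:
            return new
        rank = new
    return rank
```

On a toy graph such as `{'a': ['b'], 'b': ['c'], 'c': ['a'], 'd': ['c']}`, the node with the most incoming recommendations (`c`) ends up with the highest rank, and the ranks sum to one.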
2.
PARAMETER SELECTION RULES FOR ILL-POSED PROBLEMS
Park, Yonggi, 19 November 2019 (has links)
No description available.
3.
Splitting methods based on Bregman distances for composite monotone inclusions and optimization
Nguyen, Van Quang, 17 July 2015 (has links)
The goal of this thesis is to design splitting methods based on Bregman distances for solving composite monotone inclusions in reflexive real Banach spaces. These results allow us to extend many techniques that were so far limited to Hilbert spaces. Furthermore, even when restricted to Euclidean spaces, they provide new splitting methods that may be numerically more advantageous than the classical methods based on the Euclidean distance. Numerical applications in image processing are proposed.
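The simplest illustration of swapping the Euclidean distance for a Bregman distance is mirror descent on the probability simplex: choosing negative entropy as the distance-generating function turns the proximal step into a multiplicative (exponentiated-gradient) update. A minimal sketch under that assumption; the objective and step size are illustrative choices, not taken from the thesis.

```python
import math

def exp_grad_step(x, grad, eta):
    """One mirror-descent step on the simplex with the negative-entropy
    Bregman distance: x_i <- x_i * exp(-eta * grad_i), then renormalize."""
    w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, grad)]
    s = sum(w)
    return [wi / s for wi in w]

# Minimize the linear objective <g, x> over the simplex; the minimizer is
# the vertex whose coefficient of g is smallest (here index 1).
g = [1.0, 0.5, 2.0]
x = [1.0 / 3] * 3
for _ in range(50):
    x = exp_grad_step(x, g, eta=1.0)
```

The iterates stay on the simplex by construction, which is exactly the advantage a well-chosen Bregman distance brings over a Euclidean projection step.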
4.
Feature detection in a multispectral image by compressed sensing
Rousseau, Sylvain, 02 July 2013 (has links)
Multi- and hyperspectral sensors generate a huge stream of data. A way around this problem is to use compressive acquisition of the multi- and hyperspectral object. The data are then directly compressed, and the object is reconstructed only when needed. The next step is to avoid this reconstruction altogether and to work directly with the compressed data to carry out classical processing on an object of this nature. After introducing a first approach that uses Riemannian tools to perform edge detection in a multispectral image, we present the principles of compressive sensing and various algorithms used to solve the problems it poses. We then devote an entire chapter to the detailed study of one of them, Bregman-type algorithms, which by their flexibility and efficiency will allow us to solve the minimizations encountered later. We then focus on the detection of signatures in a multispectral image, and in particular on an original algorithm of Guo and Osher based on an L1 minimization. This algorithm is generalized to the compressed-sensing setting. A second generalization makes it possible to perform pattern detection in a multispectral image. Finally, we introduce new measurement matrices that greatly simplify the computations while maintaining good measurement quality.
5.
Detecting influential observations in spatial models using Bregman divergence
Danilevicz, Ian Meneghel, 26 February 2018 (has links)
How can one evaluate whether a spatial model is well adjusted to a problem? How can one know whether it is the best model among the class of conditional autoregressive (CAR) and simultaneous autoregressive (SAR) models, including the homoscedastic and heteroscedastic cases? To answer these questions within the Bayesian framework, we propose new ways to apply the Bregman divergence, as well as recent information criteria such as the widely applicable information criterion (WAIC) and leave-one-out cross-validation (LOO). The functional Bregman divergence is a generalized form of the well-known Kullback-Leibler (KL) divergence, and many of its special cases can be used to identify influential points. All the posterior distributions presented in this text were estimated by Hamiltonian Monte Carlo (HMC), an optimized version of the Metropolis-Hastings algorithm. All the ideas presented here were evaluated by both simulation and real data.
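Since the abstract above turns on the fact that the Bregman divergence generalizes KL, a small sketch may make that concrete: D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>. The squared norm recovers the squared Euclidean distance, and negative entropy recovers (generalized) KL. The helper names here are illustrative, not the dissertation's.

```python
import math

def bregman(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y> for vectors x, y."""
    inner = sum(g * (xi - yi) for g, xi, yi in zip(grad_phi(y), x, y))
    return phi(x) - phi(y) - inner

# phi(x) = ||x||^2 recovers the squared Euclidean distance
sq = lambda x: sum(xi * xi for xi in x)
grad_sq = lambda x: [2 * xi for xi in x]

# phi(x) = sum_i x_i log x_i (negative entropy) recovers generalized KL
negent = lambda x: sum(xi * math.log(xi) for xi in x)
grad_negent = lambda x: [math.log(xi) + 1 for xi in x]
```

For probability vectors, `bregman(negent, grad_negent, p, q)` coincides with the familiar KL divergence sum(p_i log(p_i / q_i)).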
7.
Novel mathematical techniques for structural inversion and image reconstruction in medical imaging governed by a transport equation
Prieto Moreno, Kernel Enrique, January 2015
Since the inverse problem in diffuse optical tomography (DOT) is nonlinear and severely ill-posed, only low-resolution reconstructions are currently feasible in the presence of noise. The purpose of this thesis is to improve image reconstruction in DOT of the main optical properties of tissues using some novel mathematical methods. We have used the Landweber (L) method, the Landweber-Kaczmarz (LK) method and its improved loping Landweber-Kaczmarz (L-LK) variant, combined with sparsity or total variation regularization, for single and simultaneous image reconstruction of the absorption and scattering coefficients. The sparsity method assumes the existence of a sparse solution that has a simple description and is superposed onto a known background; it is solved using a smooth gradient and a soft-thresholding operator. Moreover, we have proposed an improved sparsity method. For total variation reconstruction imaging, we have used the split Bregman method and the lagged diffusivity method, and we have also implemented a memory-efficient scheme to minimize the storage of large Hessian matrices. In addition, individual and simultaneous contrast-value reconstructions are presented using the level set (LS) method. Besides, the shape derivative of DOT based on the radiative transfer equation (RTE) is derived using shape sensitivity analysis, and some reconstructions of the absorption coefficient are presented using this shape derivative via the LS method.

Whereas most approaches for solving the nonlinear problem of DOT make use of the diffusion approximation (DA) to the RTE to model the propagation of light in tissue, the accuracy of the DA is not satisfactory in situations where the medium is not scattering-dominant, in particular close to the light sources and to the boundary, as well as inside low-scattering or non-scattering regions. Therefore, we have solved the inverse problem in DOT by the more accurate time-dependent RTE in two dimensions.
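The Landweber method named above is, at its core, the gradient-type fixed-point iteration x_{k+1} = x_k + omega * A^T (b - A x_k) for a linear system A x = b, convergent for 0 < omega < 2 / ||A||^2. A minimal dense-matrix sketch in plain Python; the example system and step size are illustrative, not from the thesis.

```python
def landweber(A, b, omega, iters):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k), x_0 = 0.

    A is a list of rows; b a list of observations. For ill-posed problems the
    iteration count itself acts as the regularization parameter (early stopping).
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # gradient step along A^T r
        for j in range(n):
            x[j] += omega * sum(A[i][j] * r[i] for i in range(m))
    return x
```

On a well-posed toy system the iterates converge to the exact solution; in the ill-posed setting one would stop early, which is exactly the role of the loping and Kaczmarz variants mentioned in the abstract.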
8.
Convergence rates for variational regularization of statistical inverse problems
Sprung, Benjamin, 04 October 2019 (has links)
No description available.
9.
Total Variation Based Methods for Speckle Image Denoising
Bagchi Misra, Arundhati, 11 August 2012 (has links)
This dissertation concerns partial differential equation (PDE) based image denoising models, with a particular interest in speckle-noise images. We provide a mathematical analysis of existing speckle denoising models and propose three new models based on total variation minimization. The first model is developed using a new speckle noise model, and the solution of the associated numerical scheme is proven to be stable. The second is a speckle version of the Chambolle algorithm, and the convergence of the numerical solution is proved under certain assumptions. The final model is a nonlocal PDE based speckle denoising model derived by combining the excellent noise-removal properties of the nonlocal means algorithm with the PDE models. We enhance the computational efficiency of this model by adopting the split Bregman method. Numerical results for all three models show that they compare favorably to the conventional models.
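The split Bregman method mentioned in the abstract above alternates a quadratic data-fit solve with a closed-form soft-thresholding ("shrink") step. A minimal 1D anisotropic TV denoising sketch for the generic model min_u 0.5*||u - f||^2 + lam * TV(u); the parameters and the Gauss-Seidel inner solver are illustrative choices, not the dissertation's speckle-specific formulation.

```python
def shrink(v, t):
    """Soft-thresholding: the closed-form minimizer of t*|d| + 0.5*(d - v)**2."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def tv_denoise_1d(f, lam, mu=1.0, outer=100, sweeps=20):
    """Split Bregman for min_u 0.5*||u - f||^2 + lam * sum_i |u[i+1] - u[i]|."""
    n = len(f)
    u = list(f)
    d = [0.0] * (n - 1)   # auxiliary variable standing in for the differences Du
    b = [0.0] * (n - 1)   # Bregman variable enforcing d = Du
    for _ in range(outer):
        # u-subproblem: (I + mu * D^T D) u = f + mu * D^T (d - b),
        # solved approximately by Gauss-Seidel sweeps on the tridiagonal system.
        v = [di - bi for di, bi in zip(d, b)]
        rhs = list(f)
        for j in range(n):
            if j > 0:
                rhs[j] += mu * v[j - 1]
            if j < n - 1:
                rhs[j] -= mu * v[j]
        for _ in range(sweeps):
            for j in range(n):
                acc, deg = rhs[j], 0
                if j > 0:
                    acc += mu * u[j - 1]
                    deg += 1
                if j < n - 1:
                    acc += mu * u[j + 1]
                    deg += 1
                u[j] = acc / (1.0 + mu * deg)
        # d-subproblem (shrinkage) and Bregman update
        for i in range(n - 1):
            du = u[i + 1] - u[i]
            d[i] = shrink(du + b[i], lam / mu)
            b[i] += du - d[i]
    return u
```

Run on a step signal, the result keeps the edge while reducing total variation, which is the qualitative behavior TV models are chosen for.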
10.
Sparse and orthogonal singular value decomposition
Khatavkar, Rohan, January 1900 (has links)
Master of Science / Department of Statistics / Kun Chen

The singular value decomposition (SVD) is a commonly used matrix factorization technique in statistics, and it is very effective in revealing many low-dimensional structures in a noisy data matrix or a coefficient matrix of a statistical model. In particular, it is often desirable to obtain a sparse SVD, i.e., one in which only a few singular values are nonzero and their corresponding left and right singular vectors are also sparse. However, in several existing methods for sparse SVD estimation, the exact orthogonality among the singular vectors is often sacrificed due to the difficulty of incorporating the non-convex orthogonality constraint in sparse estimation. Imposing orthogonality in addition to sparsity, albeit difficult, can be critical in restricting and guiding the search for the sparsity pattern and in facilitating model interpretation. Combining the ideas of penalized regression and Bregman iterative methods, we propose two methods that strive to achieve the dual goal of sparse and orthogonal SVD estimation in the general framework of high-dimensional multivariate regression. We set up simulation studies to demonstrate the efficacy of the proposed methods.
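The sparsity half of this goal can be sketched as a thresholded alternating power iteration for a rank-1 term: alternately regress, soft-threshold, and renormalize the left and right vectors. This is an illustrative sketch only; the thesis's actual methods additionally enforce orthogonality across components, which this single-component sketch does not address.

```python
def shrink(v, t):
    """Soft-thresholding, the proximal map of t * |.|."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def sparse_rank1_svd(X, lam_u=0.1, lam_v=0.1, iters=50):
    """Sparse rank-1 SVD sketch: alternating power iteration with shrinkage.

    Returns (u, sigma, v) with u, v unit length and entries below the
    thresholds lam_u, lam_v zeroed out.
    """
    m, n = len(X), len(X[0])
    v = [1.0 / n ** 0.5] * n
    u = [0.0] * m
    for _ in range(iters):
        # u <- shrink(X v), normalized
        u = [shrink(sum(X[i][j] * v[j] for j in range(n)), lam_u) for i in range(m)]
        nu = sum(ui * ui for ui in u) ** 0.5
        if nu > 0:
            u = [ui / nu for ui in u]
        # v <- shrink(X^T u), normalized
        v = [shrink(sum(X[i][j] * u[i] for i in range(m)), lam_v) for j in range(n)]
        nv = sum(vi * vi for vi in v) ** 0.5
        if nv > 0:
            v = [vi / nv for vi in v]
    sigma = sum(u[i] * X[i][j] * v[j] for i in range(m) for j in range(n))
    return u, sigma, v
```

On a matrix whose signal lives in a small block, the shrinkage step zeroes the singular-vector entries outside the block, recovering the sparse supports exactly.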