111

Addressing Fundamental Limitations in Differentially Private Machine Learning

Nandi, Anupama January 2021
No description available.
112

Optimal Sum-Rate of Multi-Band MIMO Interference Channel

Dhillon, Harpreet Singh 02 September 2010
While the channel capacity of an isolated noise-limited wireless link is well understood, the same is not true for interference-limited wireless links that coexist in the same area and occupy the same frequency band(s). The performance of these wireless systems is coupled due to the mutual interference. One such wireless scenario is modeled as a network of simultaneously communicating node pairs and is generally referred to as an interference channel (IC). The problem of characterizing the capacity of an IC is one of the most interesting and long-standing open problems in information theory. A popular way of characterizing the capacity of an IC is to maximize the achievable sum-rate by treating interference as Gaussian noise, which is considered optimal in low-interference scenarios. While the sum-rate of the single-band SISO IC is relatively well understood, the same is not true when the users have multiple bands and multiple antennas for transmission. Therefore, the study of the optimal sum-rate of the multi-band MIMO IC is the main goal of this thesis. The sum-rate maximization problem for these ICs is formulated and shown to be quite similar to the one already known for single-band MIMO ICs. This problem reduces to finding the optimal fraction of power to transmit over each spatial channel in each frequency band. The underlying optimization problem, being non-linear and non-convex, is difficult to solve analytically or with local optimization techniques. Therefore, we develop a global optimization algorithm by extending the Reformulation-Linearization Technique (RLT) based Branch-and-Bound (BB) strategy to find the provably optimal solution to this problem. We further show that the spatial and spectral channels are surprisingly similar in a multi-band multi-antenna IC from a sum-rate maximization perspective. This result is especially interesting because of the dissimilarity in the way the spatial and frequency channels affect the perceived interference. As a part of this study, we also develop some rules of thumb regarding optimal power allocation strategies in multi-band MIMO ICs in various interference regimes. Due to the recent popularity of Interference Alignment (IA) as a means of approaching capacity in an IC (in the high-interference regime), we also compare the sum-rates achievable by our technique to those achievable by IA. The results indicate that the proposed power control technique performs better than IA in the low and intermediate interference regimes. Interestingly, the performance of the power control technique improves further relative to IA as the number of orthogonal spatial or frequency channels increases. / Master of Science
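For concreteness, the objective at the heart of this formulation can be sketched in a few lines: the sum-rate of a 2-user, multi-band SISO IC when interference is treated as Gaussian noise (a simplified instance of the thesis's multi-band MIMO problem; all names and channel values below are illustrative):

```python
import numpy as np

def sum_rate(p1, p2, h11, h22, h12, h21, noise=1.0):
    """Sum-rate (bits/channel use) of a 2-user multi-band SISO IC with
    per-band powers p1, p2, treating interference as Gaussian noise.
    hij is the power gain from transmitter i to receiver j."""
    sinr1 = h11 * p1 / (noise + h21 * p2)  # interference from Tx 2 at Rx 1
    sinr2 = h22 * p2 / (noise + h12 * p1)  # interference from Tx 1 at Rx 2
    return np.sum(np.log2(1.0 + sinr1) + np.log2(1.0 + sinr2))

# Three bands, each user's unit power budget split across the bands.
p1 = np.array([0.5, 0.3, 0.2]); p2 = np.array([0.2, 0.3, 0.5])
h11 = np.array([1.0, 0.8, 0.6]); h22 = np.array([0.7, 1.1, 0.9])
h12 = np.array([0.1, 0.2, 0.1]); h21 = np.array([0.2, 0.1, 0.3])
print(sum_rate(p1, p2, h11, h22, h12, h21))
```

Maximizing this quantity over the power splits p1, p2 (subject to per-user budgets) is the non-convex problem that the RLT-based branch-and-bound attacks; the coupling of each user's rate to the other's powers is what defeats local methods.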
113

New insights into conjugate duality

Grad, Sorin-Mihai 13 July 2006
With this thesis we bring some new results, and improve some existing ones, in conjugate duality and some of the areas in which it is applied. First we recall the way the Lagrange, Fenchel and Fenchel-Lagrange dual problems to a given primal optimization problem can be obtained via perturbations, and we present some connections between them. For the Fenchel-Lagrange dual problem we prove strong duality under more general conditions than known so far, while for Fenchel duality we show that the convexity assumptions on the functions involved can be weakened without altering the conclusion. In order to prove the latter, we also prove that some formulae concerning conjugate functions, given so far only for convex functions, hold for almost convex and nearly convex functions, respectively. After proving that the generalized geometric dual problem can be obtained via perturbations, we show that geometric duality is a special case of Fenchel-Lagrange duality and that strong duality can be obtained under weaker conditions than stated in the existing literature. For various problems treated in the literature via geometric duality we show that Fenchel-Lagrange duality is easier to apply, moreover yielding strong duality and optimality conditions under weaker assumptions. The results presented so far are also applied in convex composite optimization and entropy optimization. For the composed convex cone-constrained optimization problem we give strong duality and the related optimality conditions, then we apply these when showing that the formula for the conjugate of the composition of a proper convex K-increasing function with a K-convex function on some n-dimensional non-empty convex set X, where K is a k-dimensional non-empty closed convex cone, holds under weaker conditions than known so far. Another field where we apply these results is vector optimization, where we provide a general duality framework based on a more general scalarization that includes as special cases, and improves, some previous results in the literature. Concerning entropy optimization, we first treat via duality a problem having an entropy-like objective function, from which arise as special cases some problems found in the literature on entropy optimization. Finally, an application of entropy optimization to text classification is presented.
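For orientation, the classical Fenchel primal-dual pair around which these weakened convexity conditions revolve (a textbook statement in our notation, not taken from the thesis itself):

```latex
% Primal and Fenchel dual for proper functions f, g on R^n:
v(P) = \inf_{x \in \mathbb{R}^n} \left[ f(x) + g(x) \right], \qquad
v(D) = \sup_{y \in \mathbb{R}^n} \left[ -f^*(y) - g^*(-y) \right],
% where f^*(y) = \sup_{x} \{ \langle y, x \rangle - f(x) \} is the conjugate of f.
% Weak duality v(D) \le v(P) always holds; strong duality, v(D) = v(P) with the
% dual supremum attained, classically requires convexity plus a regularity
% condition; the thesis shows almost/nearly convex functions can suffice.
```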
114

Higher Order Optimization Techniques for Machine Learning

Sudhir B. Kylasa 09 December 2019
First-order methods such as Stochastic Gradient Descent are the methods of choice for solving non-convex optimization problems in machine learning. These methods primarily rely on the gradient of the loss function to estimate the descent direction. However, they have a number of drawbacks, including convergence to saddle points (as opposed to minima), slow convergence, and sensitivity to parameter tuning. In contrast, second-order methods, which use curvature information in addition to the gradient, have theoretically been shown to achieve faster convergence rates. When used in machine learning applications, they offer faster (quadratic) convergence, stability to parameter tuning, and robustness to problem conditioning. In spite of these advantages, first-order methods are commonly used because of their simplicity of implementation and low per-iteration cost. The need to generate and use curvature information in the form of a dense Hessian matrix makes each iteration of second-order methods more expensive.

In this work, we address three key problems associated with second-order methods: (i) what is the best way to incorporate curvature information into the optimization procedure; (ii) how do we reduce the operation count of each iteration in a second-order method while maintaining its superior convergence properties; and (iii) how do we leverage high-performance computing platforms to significantly accelerate second-order methods. To answer the first question, we propose and validate the use of Fisher information matrices in second-order methods to significantly accelerate convergence. The second question is answered through the use of statistical sampling techniques that suitably sample matrices to reduce per-iteration cost without impacting convergence. The third question is addressed through the use of graphics processing units (GPUs) in distributed platforms to deliver state-of-the-art solvers.

Through our work, we show that our solvers are capable of significant improvement over state-of-the-art optimization techniques for training machine learning models. We demonstrate improvements in terms of training time (over an order of magnitude in wall-clock time), generalization properties of learned models, and robustness to problem conditioning.
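One standard device behind points (i) and (ii), not specific to this thesis's solvers: avoid ever forming the dense Hessian (or Fisher) matrix by feeding Hessian-vector products to a conjugate-gradient inner solve. A minimal sketch:

```python
import numpy as np

def newton_cg_step(grad, hvp, max_iter=50, tol=1e-6):
    """Approximately solve H d = -g by conjugate gradient, touching H only
    through Hessian-vector products hvp(v) = H @ v. Cost per inner iteration
    is one hvp instead of an O(n^2) dense Hessian build and factorization."""
    d = np.zeros_like(grad)
    r = -grad              # residual of H d + g = 0 at d = 0
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        d = d + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d               # (approximate) Newton direction
```

Sampling, as in the abstract, would replace hvp with a product against a subsampled Hessian or Fisher matrix.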
115

Iterative Methods for Robust Convex Optimization

Thiago de Garcia Paula S. Milagres 24 March 2020
Robust Optimization is a common paradigm to consider uncertainty in the parameters of an optimization problem. The traditional way to find robust solutions requires solving the robust counterpart of an optimization problem, which, in practice, can often be prohibitively costly. In this work, we study iterative methods to approximately solve Robust Convex Optimization problems which do not require solving the robust counterpart. We use concepts from the Online Learning framework to propose a new algorithm that performs constraint aggregation, and we demonstrate theoretical convergence guarantees. We then develop a modification of this algorithm that, although without such guarantees, obtains better practical performance. Finally, we implement other classical iterative methods from the Robust Optimization literature and present a computational study of their performances.
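As a reference point for what "not forming the robust counterpart" means, here is the classical cutting-plane (pessimization-oracle) loop for a robust LP under box uncertainty, i.e. the kind of baseline the thesis's online-learning algorithm is an alternative to; names and bounds are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def robust_lp(c, a_nom, delta, b, n_iter=20, box=10.0):
    """min c^T x  s.t.  a^T x <= b  for every a with |a - a_nom| <= delta,
    solved by alternating a nominal LP with a worst-case (pessimization)
    step instead of building the robust counterpart up front."""
    cuts_A, cuts_b = [a_nom], [b]
    x = None
    for _ in range(n_iter):
        res = linprog(c, A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                      bounds=[(-box, box)] * len(c))
        x = res.x
        a_worst = a_nom + delta * np.sign(x)   # maximizes a^T x over the box
        if a_worst @ x <= b + 1e-9:            # robustly feasible: stop
            break
        cuts_A.append(a_worst)                 # add the violated scenario
        cuts_b.append(b)
    return x
```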
116

Block-decomposition and accelerated gradient methods for large-scale convex optimization

Ortiz Diaz, Camilo 08 June 2015
In this thesis, we develop block-decomposition (BD) methods and variants of accelerated gradient methods for large-scale conic programming and convex optimization, respectively. The BD methods, discussed in the first two parts of this thesis, are inexact versions of proximal-point methods applied to two-block-structured inclusion problems. The adaptive accelerated methods, presented in the last part of this thesis, can be viewed as new variants of Nesterov's optimal method. In an effort to improve their practical performance, these methods incorporate important speed-up refinements motivated by theoretical iteration-complexity bounds and our observations from extensive numerical experiments. We provide several benchmarks on various important problem classes to demonstrate the efficiency of the proposed methods compared to the most competitive ones proposed earlier in the literature. In the first part of this thesis, we consider exact BD first-order methods for solving conic semidefinite programming (SDP) problems and the more general problem of minimizing the sum of a convex differentiable function with Lipschitz continuous gradient and two other proper closed convex (possibly nonsmooth) functions. More specifically, these problems are reformulated as two-block monotone inclusion problems, and exact BD methods, namely ones that solve both proximal subproblems exactly, are used to solve them. In addition to solving standard form conic SDP problems, the latter approach is also able to directly solve specially structured non-standard form conic programming problems without the need to add variables and/or constraints to bring them into standard form. Several ingredients are introduced to speed up the BD methods in their pure form, such as adaptive (aggressive) choices of stepsizes for performing the extragradient step, and dynamic updates of scaled inner products to balance the blocks. Finally, computational results on several classes of SDPs are presented, showing that the exact BD methods outperform the three most competitive codes for solving large-scale conic semidefinite programs. In the second part of this thesis, we present an inexact BD first-order method for solving standard form conic SDP problems which avoids computing exact projections onto the manifold defined by the affine constraints and, as a result, is able to handle extra-large-scale SDP instances. In this BD method, while the proximal subproblem corresponding to the first block is solved exactly, the one corresponding to the second block is solved inexactly in order to avoid finding the exact solution of a linear system corresponding to the manifolds consisting of both the primal and dual affine feasibility constraints. Our implementation uses the conjugate gradient method applied to a reduced positive definite dual linear system to obtain inexact solutions of the latter augmented primal-dual linear system. In addition, the inexact BD method incorporates a new dynamic scaling scheme that uses two scaling factors to balance three inclusions comprising the optimality conditions of the conic SDP. Finally, we present computational results showing the efficiency of our method for solving various extra-large SDP instances, several of which cannot be solved by other existing methods, including some with at least two million constraints and/or fifty million non-zero coefficients in the affine constraints.
In the last part of this thesis, we consider an adaptive accelerated gradient method for a general class of convex optimization problems. More specifically, we present a new accelerated variant of Nesterov's optimal method in which certain acceleration parameters are adaptively (and aggressively) chosen so as to: preserve the theoretical iteration-complexity of the original method; and substantially improve its practical performance in comparison to the other existing variants. Computational results are presented to demonstrate that the proposed adaptive accelerated method performs quite well compared to other variants proposed earlier in the literature.
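For reference, the fixed-schedule method the adaptive variant builds on: Nesterov's accelerated gradient in its textbook form, with the usual t-sequence that the thesis replaces by adaptively chosen acceleration parameters.

```python
import numpy as np

def nesterov_agm(grad, x0, lipschitz, n_iter=500):
    """Nesterov's optimal method for smooth convex minimization with an
    L-Lipschitz gradient; achieves the O(1/k^2) objective-gap rate."""
    x = x0.copy(); y = x0.copy(); t = 1.0
    for _ in range(n_iter):
        x_next = y - grad(y) / lipschitz                  # gradient step at y
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # extrapolation
        x, t = x_next, t_next
    return x

# Example: least squares, f(x) = 0.5 ||A x - b||^2, with L = ||A^T A||_2.
A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, -1.0])
x_min = nesterov_agm(lambda x: A.T @ (A @ x - b), np.zeros(2),
                     np.linalg.norm(A.T @ A, 2))
```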
117

Provable alternating minimization for non-convex learning problems

Netrapalli, Praneeth Kumar 17 September 2014
Alternating minimization (AltMin) is a generic term for a widely popular approach in non-convex learning: often, it is possible to partition the variables into two (or more) sets, so that the problem is convex/tractable in one set if the other is held fixed (and vice versa). This allows for alternating between optimally updating one set of variables and then the other. AltMin methods typically do not have associated global consistency guarantees, even though they are empirically observed to perform better than methods (e.g. based on convex optimization) that do have guarantees. In this thesis, we obtain rigorous performance guarantees for AltMin in three statistical learning settings: low-rank matrix completion, phase retrieval and learning sparsely-used dictionaries. The overarching theme behind our results consists of two parts: (i) devising new initialization procedures (as opposed to initializing randomly, as is typical), and (ii) establishing exponential local convergence from this initialization. Our work shows that the pursuit of statistical guarantees can yield algorithmic improvements (initialization in our case) that perform better in practice. / text
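A minimal sketch of the flavor of algorithm analyzed: alternating least squares for low-rank matrix completion with a spectral (rather than random) initialization, the kind of initialization step the thesis shows makes guarantees possible. The sketch assumes every row and column has at least one observed entry.

```python
import numpy as np

def altmin_complete(M, mask, rank, n_iter=25):
    """Fill in M (observed where boolean mask is True) as U @ V.T.
    Each half-step is a convex least-squares problem, solved exactly."""
    m, n = M.shape
    U, _, _ = np.linalg.svd(M * mask, full_matrices=False)  # spectral init
    U = U[:, :rank]
    V = np.zeros((n, rank))
    for _ in range(n_iter):
        for j in range(n):        # fix U, solve for each row of V
            obs = mask[:, j]
            V[j] = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)[0]
        for i in range(m):        # fix V, solve for each row of U
            obs = mask[i, :]
            U[i] = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)[0]
    return U @ V.T
```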
118

Image restoration in the presence of Poisson-Gaussian noise

Jezierska, Anna Maria 13 May 2013
This thesis deals with the restoration of images corrupted by blur and noise, with emphasis on confocal microscopy and macroscopy applications. Due to low photon count and high detector noise, the Poisson-Gaussian model is well suited to this context. However, up to now it had not been widely utilized because of theoretical and practical difficulties. In view of this, we formulate the image restoration problem in the presence of Poisson-Gaussian noise in a variational framework, where we express and study the exact data fidelity term. The solution to the problem can also be interpreted as a Maximum A Posteriori (MAP) estimate. Using recent primal-dual convex optimization algorithms, we obtain results that outperform methods relying on a variety of approximations. Turning our attention to the regularization term in the MAP framework, we study both discrete and continuous approximations of the $\ell_0$ pseudo-norm. This useful measure, well known for promoting sparsity, is difficult to optimize due to its non-convexity and non-smoothness. We propose an efficient graph-cut procedure for optimizing energies with truncated quadratic priors. Moreover, we develop a majorize-minimize memory gradient algorithm to optimize various smooth versions of the $\ell_2$-$\ell_0$ norm, with guaranteed convergence properties. In particular, good results are achieved on deconvolution problems. One difficulty with variational formulations is the need to automatically tune the model hyperparameters. In this context, we propose to estimate the Poisson-Gaussian noise parameters in two realistic scenarios: one from time-series images, taking into account bleaching effects, and another from a single image. These estimations are grounded in an Expectation-Maximization (EM) approach. Overall, this thesis proposes and evaluates various methodologies for tackling difficult image noise and blur cases, which should be useful in various applicative contexts within and beyond microscopy.
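The observation model at the core of this work, written in its standard form (our notation, not necessarily the thesis's): each pixel mixes a Poisson photon count with additive Gaussian read-out noise, and restoration minimizes the exact negative log-likelihood plus a regularizer.

```latex
% z_i = \alpha\, p_i + w_i, \quad p_i \sim \mathrm{Poisson}\big([Hx]_i\big),
%       \quad w_i \sim \mathcal{N}(\mu, \sigma^2),
% with H the blur operator, x the unknown image, \alpha the detector gain.
% The MAP / variational estimate is
\hat{x} \in \operatorname*{argmin}_{x \ge 0}\;
  \underbrace{-\log p(z \mid H x)}_{\text{exact Poisson-Gaussian fidelity}}
  \;+\; \lambda\, \Phi(x),
% where \Phi is, e.g., a smoothed \ell_2\text{-}\ell_0 penalty and \lambda > 0.
```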
119

Sparse coding for machine learning, image processing and computer vision

Mairal, Julien 30 November 2010
We study in this thesis a particular machine learning approach to representing signals, which consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches, and the performance of the image denoising algorithms it has yielded. We address several open questions related to this framework: How to efficiently optimize the dictionary? How can the model be enriched by adding a structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a different task than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, and computer vision, but also optimization on graphs.
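scikit-learn's mini-batch dictionary learning follows the online algorithm from this line of work, which makes for a compact illustration of the model (toy data; parameter values are arbitrary):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 64))     # stand-in for 8x8 image patches

learner = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                      random_state=0)
codes = learner.fit_transform(patches)       # sparse code per patch
D = learner.components_                      # learned dictionary (100 atoms)
approx = codes @ D                           # patches as sparse combinations
print(np.mean(codes != 0))                   # fraction of active coefficients
```

On real image patches (rather than Gaussian noise) the learned atoms become Gabor-like edge filters, which is what makes such dictionaries effective for denoising.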
120

Formal Verification and Validation of Convex Optimization Algorithms for Model Predictive Control

Cohen, Raphaël P. 03 December 2018
The efficiency of modern optimization methods, coupled with increasing computational resources, has led to the possibility of real-time optimization algorithms acting in safety-critical roles. However, this cannot happen without proper attention to the soundness of these algorithms. This PhD thesis discusses the formal verification of convex optimization algorithms with a particular emphasis on receding-horizon controllers. Additionally, we demonstrate how theoretical proofs of real-time optimization algorithms can be used to describe functional properties at the code level, thereby making them accessible to the formal methods community.
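A flavor of what "functional properties at the code level" means, sketched in executable form: the kind of invariant one would annotate and prove on a projected-gradient MPC solver, here merely asserted at runtime. This is our own illustrative example, not the annotation framework of the thesis.

```python
import numpy as np

def pgd_with_invariant(grad, proj, f, x0, step, n_iter=100):
    """Projected gradient descent carrying a provable functional property:
    for convex f with L-Lipschitz gradient and step <= 1/L, the objective
    never increases. Formal tools would discharge the assert statically."""
    x = proj(x0)
    for _ in range(n_iter):
        x_new = proj(x - step * grad(x))
        assert f(x_new) <= f(x) + 1e-12, "descent invariant violated"
        x = x_new
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^2 (L = 2, step 0.4).
c = np.array([2.0, -0.5])
x = pgd_with_invariant(lambda x: 2 * (x - c),
                       lambda x: np.clip(x, 0.0, 1.0),
                       lambda x: np.sum((x - c) ** 2),
                       np.zeros(2), 0.4)
```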
