1. Saddle point techniques in convex composite and error-in-measurement optimization (He, Niao, 07 January 2016)
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theory and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects -- i) Large-scale Convex Composite Optimization and ii) Error-in-Measurement Optimization. In spite of the different natures of these two topics, the common denominator, as will be shown, lies in the systematic use of saddle point techniques for both mathematical modeling and numerical processing. The main body is split into three parts.
In the first part, we consider a broad class of variational inequalities with composite structures, which covers the saddle point/variational analogies of classical convex composite minimization (i.e., minimizing the sum of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving such types of problems. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of imprecise prox mappings as well as the case when the operator is represented by an unbiased stochastic oracle.
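To make the flavor of these composite schemes concrete, here is a minimal Euclidean sketch of a composite Mirror Prox (extragradient) iteration on a toy saddle-point reformulation of $l_1$-regularized residual-norm minimization. The problem data, prox-functions, domains, and step-size policy below are assumptions made for the example only; the dissertation's setting is far more general.

```python
# Hedged sketch: composite Mirror Prox (extragradient) with Euclidean prox on the
# saddle-point reformulation  min_x max_{||y||_2<=1} <y, Ax - b> + lam*||x||_1,
# which is equivalent to  min_x ||Ax - b||_2 + lam*||x||_1.
# All data, step sizes, and names here are illustrative, not from the dissertation.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 40, 100, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def soft_threshold(v, tau):
    # prox of tau*||.||_1: handles the "simple nonsmooth" composite term in closed form
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proj_ball(v):
    # Euclidean projection onto the unit ball (the dual variable's domain)
    nrm = np.linalg.norm(v)
    return v if nrm <= 1.0 else v / nrm

L = np.linalg.norm(A, 2)          # Lipschitz constant of the monotone operator
gamma = 1.0 / L                   # standard Mirror Prox step size
x, y = np.zeros(n), np.zeros(m)
x_avg, y_avg = np.zeros(n), np.zeros(m)
T = 500

for t in range(T):
    # extrapolation (leader) step, with the l1 term absorbed into the x-prox
    xh = soft_threshold(x - gamma * (A.T @ y), gamma * lam)
    yh = proj_ball(y + gamma * (A @ x - b))
    # correction (follower) step using the operator evaluated at the leader point
    x = soft_threshold(x - gamma * (A.T @ yh), gamma * lam)
    y = proj_ball(y + gamma * (A @ xh - b))
    x_avg += xh; y_avg += yh      # ergodic averages carry the O(1/t) guarantee

x_avg /= T; y_avg /= T
print("objective at averaged iterate:",
      np.linalg.norm(A @ x_avg - b) + lam * np.abs(x_avg).sum())
```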
In the second part, we investigate four general types of large-scale convex composite optimization problems: (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum likelihood Poisson imaging. We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all these optimization problems with the best rates of convergence known so far. Our main related contributions are as follows. First, for problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Second, for problems of type (b), we develop a novel algorithm that reduces the problem to solving a "small series" of saddle point subproblems and achieves an optimal, up to log factors, complexity bound. Third, for problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, and attain optimality both in the number of calls to the first-order oracle representing the objective and in the number of calls to the linear minimization oracle representing the problem's domain. Lastly, for problems of type (d), we show that the composite Mirror Prox, when applied to the saddle point reformulation, circumvents the difficulty with the non-Lipschitz continuity of the objective and exhibits a better convergence rate than the typical rate for nonsmooth optimization. We conduct extensive numerical experiments and illustrate the practical potential of our algorithms in a wide spectrum of applications in machine learning and image processing.
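As a generic illustration of the saddle point representations exploited here (a standard textbook identity, not a formula specific to this dissertation), a norm-regularized problem of type (c) can be rewritten through the dual norm as

$$\min_x \; f(x) + \|Bx\| \;=\; \min_x \; \max_{\|y\|_* \le 1} \; f(x) + \langle y, Bx \rangle,$$

where $\|\cdot\|_*$ denotes the dual norm. The nonsmooth regularizer thus becomes a bilinear coupling over a simple dual domain, which is precisely the structure that the composite Mirror Prox and Semi-Proximal Mirror-Prox variants are designed to exploit.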
In the third part, we examine error-in-measurement optimization, referring to decision-making problems with data subject to measurement errors; such problems arise naturally in a number of important applications, such as privacy-preserving learning, signal processing, and portfolio selection. Due to the postulated observation scheme and the specific structure of the problem, straightforward application of standard stochastic optimization techniques such as Stochastic Approximation (SA) and Sample Average Approximation (SAA) is out of the question. Our goal is to develop computationally efficient and, hopefully, not too conservative data-driven techniques applicable to a broad scope of problems and allowing for theoretical performance guarantees. We present two such approaches -- one built on a fully algorithmic calculus of saddle point representations of convex-concave functions, and the other on a general approximation scheme for convex stochastic programming. Both approaches allow us to convert the problems of interest into a form amenable to SA or SAA. The latter developments are primarily focused on two important applications -- affine signal processing and indirect support vector machines.
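For reference, the two schemes in question take the standard textbook forms (not specific to this work): SAA replaces the expectation in the objective by an empirical average, while SA descends along noisy gradients,

$$\min_x \; \frac{1}{N}\sum_{i=1}^{N} F(x,\xi_i) \qquad\text{and}\qquad x_{t+1} = \Pi_X\big(x_t - \gamma_t\, G(x_t,\xi_t)\big),$$

where $\Pi_X$ is the projection onto the feasible set and $G$ is an unbiased stochastic (sub)gradient. The point made above is that, with data subject to measurement errors, these schemes cannot be applied directly, and the two approaches developed here first convert the problem into a form to which they do apply.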
2. Accelerating Convergence of Large-scale Optimization Algorithms (Ghadimi, Euhanna, January 2015)
Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, it is possible to design computationally efficient algorithms that satisfy the implementation requirements of these emerging applications. In this thesis, we study a variety of techniques for improving the convergence times of optimization algorithms for large-scale systems. In the first part of the thesis, we focus on multi-step first-order methods. These methods add memory to the classical gradient method and account for past iterates when computing the next one. The result is a computationally lightweight acceleration technique that can yield significant improvements over gradient descent. In particular, we focus on the Heavy-ball method introduced by Polyak. Previous studies have quantified the performance improvements over the gradient method through a local convergence analysis of twice continuously differentiable objective functions. However, the convergence properties of the method on more general convex cost functions have not been known. The first contribution of this thesis is a global convergence analysis of the Heavy-ball method for a variety of convex problems whose objective functions are strongly convex and have Lipschitz continuous gradients. The second contribution is to tailor the Heavy-ball method to network optimization problems. In such problems, a collection of decision-makers collaborate to find the decision vector that minimizes the total system cost. We derive the optimal step-sizes for the Heavy-ball method in this scenario, and show how the optimal convergence times depend on the individual cost functions and the structure of the underlying interaction graph. We present three engineering applications where our algorithm significantly outperforms tailor-made state-of-the-art algorithms. In the second part of the thesis, we consider the Alternating Direction Method of Multipliers (ADMM), an alternative powerful method for solving structured optimization problems. The method has recently attracted significant interest from several engineering communities. Despite its popularity, its optimal parameters have been unknown. The third contribution of this thesis is to derive optimal parameters for the ADMM algorithm when applied to quadratic programming problems. Our derivations quantify how the Hessian of the cost functions and the constraint matrices affect the convergence times. By exploiting this information, we develop a preconditioning technique that accelerates performance even further. Numerical studies of model-predictive control problems illustrate significant performance benefits of a well-tuned ADMM algorithm. The fourth and final contribution of the thesis is to extend our results on optimal scaling and parameter tuning of the ADMM method to a distributed setting. We derive optimal algorithm parameters and suggest heuristic methods that can be executed by individual agents using local information. The resulting algorithm is applied to the distributed averaging problem and shown to yield substantial performance improvements over state-of-the-art algorithms.
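As a point of reference for the multi-step methods discussed above, the following is a minimal sketch of Polyak's Heavy-ball iteration with its classical parameter tuning on a synthetic strongly convex quadratic. The test problem is an assumption made purely for illustration; the network-tailored step sizes and the global convergence analysis contributed in the thesis are not reproduced here.

```python
# Hedged sketch: Polyak's Heavy-ball iteration on a synthetic strongly convex quadratic
#   f(x) = 0.5 * x'Qx - b'x,  with mu*I <= Q <= L*I.
# The step sizes below are Polyak's classical tuning for this regime, NOT the
# network-optimized parameters derived in the thesis.
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)                # positive definite Hessian
b = rng.standard_normal(n)

eigs = np.linalg.eigvalsh(Q)
mu, L = eigs[0], eigs[-1]              # strong convexity and smoothness constants

# classical Heavy-ball parameters for strongly convex quadratics
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

x_prev = np.zeros(n)
x = np.zeros(n)
for k in range(300):
    grad = Q @ x - b
    # gradient step plus a momentum term that "remembers" the previous iterate
    x_next = x - alpha * grad + beta * (x - x_prev)
    x_prev, x = x, x_next

x_star = np.linalg.solve(Q, b)
print("distance to minimizer:", np.linalg.norm(x - x_star))
```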
3. Convex optimization under inexact first-order information (Lan, Guanghui, 29 June 2009)
In this thesis we investigate the design and complexity analysis of algorithms for solving convex programming problems under inexact first-order information. In the first part of the thesis we focus on general non-smooth convex minimization under a stochastic oracle. We start by introducing an important algorithmic advancement in this area, namely, the development of the mirror descent stochastic approximation algorithm. The main contribution is to develop a validation procedure for this algorithm applied to stochastic programming. In the second part of the thesis we consider Stochastic Composite Optimization (SCO), which covers smooth, non-smooth and stochastic convex optimization as special cases. Note that optimization algorithms achieving the theoretical lower complexity bound for this class had never been developed. Our contribution on this topic mainly consists of the following aspects. Firstly, with a novel analysis, it is demonstrated that the simple RM-SA algorithm applied to the aforementioned problems exhibits the best rate of convergence known so far. Moreover, by adapting Nesterov's optimal method, we propose an accelerated SA, which achieves, uniformly in dimension, the theoretically optimal rate of convergence for solving this class of problems. Finally, the significant advantages of the accelerated SA over existing algorithms are illustrated in the context of solving a class of stochastic programming problems. In the last part of this work, we turn our attention to certain deterministic optimization techniques which operate on approximate first-order information for the dual problem. In particular, we establish, for the first time in the literature, the iteration-complexity of inexact augmented Lagrangian (I-AL) methods applied to a special class of convex programming problems.
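For concreteness, below is a minimal sketch of a mirror descent stochastic approximation step with the entropy prox-function on the probability simplex, applied to a toy stochastic linear objective. Every numerical choice is an assumption for the example; the validation procedure, the RM-SA analysis, and the accelerated SA scheme contributed in the thesis are not shown.

```python
# Hedged sketch: mirror descent stochastic approximation with the entropy prox-function
# on the probability simplex, for a toy stochastic objective E[<xi, x>] with xi ~ N(c, I).
# Step-size policy, data, and averaging scheme are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n, T = 20, 2000
c = rng.standard_normal(n)             # unknown mean; the minimizer puts mass on argmin c

x = np.full(n, 1.0 / n)                # start at the simplex center
x_sum = np.zeros(n)
for t in range(1, T + 1):
    xi = c + rng.standard_normal(n)    # unbiased stochastic gradient of E[<xi, x>]
    gamma = 1.0 / np.sqrt(t)           # standard O(1/sqrt(t)) step size
    # entropic mirror descent step: multiplicative update, then renormalize
    logx = np.log(x) - gamma * xi
    logx -= logx.max()                 # for numerical stability
    x = np.exp(logx)
    x /= x.sum()
    x_sum += x                         # averaging the trajectory ("robust" SA output)

x_bar = x_sum / T
print("mass placed on the true best coordinate:", x_bar[np.argmin(c)])
```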
4. Large-Scale Optimization With Machine Learning Applications (Van Mai, Vien, January 2019)
This thesis aims at developing efficient algorithms for solving some fundamental engineering problems in data science and machine learning. We investigate a variety of acceleration techniques for improving the convergence times of optimization algorithms. First, we investigate how problem structure can be exploited to accelerate the solution of highly structured problems such as generalized eigenvalue problems and elastic net regression. We then consider Anderson acceleration, a generic and parameter-free extrapolation scheme, and show how it can be adapted to accelerate practical convergence of proximal gradient methods for a broad class of non-smooth problems. For all the methods developed in this thesis, we design novel algorithms, perform mathematical analysis of convergence rates, and conduct practical experiments on real-world data sets.
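To illustrate the extrapolation scheme in question, below is a minimal, unguarded sketch of type-II Anderson acceleration wrapped around the proximal gradient fixed-point map of a lasso-type problem. The stabilization and safeguards that yield the convergence guarantees developed in the thesis are deliberately left out, and all problem data and parameters are assumptions for the example.

```python
# Hedged sketch: plain (unguarded) type-II Anderson acceleration around the proximal
# gradient fixed-point map of  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# The safeguarding that makes such schemes provably convergent for non-smooth
# problems (a central point of the thesis) is omitted here.
import numpy as np

rng = np.random.default_rng(3)
m, n, lam, depth = 60, 120, 0.1, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L for the smooth part

def prox_grad_map(x):
    # one proximal gradient step; its fixed points are the lasso solutions
    v = x - eta * (A.T @ (A @ x - b))
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

x = np.zeros(n)
G, F = [], []                            # histories of map values g_k and residuals f_k
for k in range(200):
    g = prox_grad_map(x)
    f = g - x                            # fixed-point residual
    G.append(g); F.append(f)
    if len(F) > depth + 1:
        G.pop(0); F.pop(0)
    if len(F) == 1:
        x = g                            # plain proximal gradient step to start
    else:
        # type-II Anderson step: least-squares fit of the recent residual differences
        dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
        dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
        coeff, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = g - dG @ coeff               # Anderson-extrapolated iterate

print("final fixed-point residual norm:", np.linalg.norm(prox_grad_map(x) - x))
```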
5. Tractable relaxations and efficient algorithmic techniques for large-scale optimization (Kilinc-Karzan, Fatma, 21 June 2011)
In this thesis, we develop tractable relaxations and efficient algorithms for large-scale optimization. Our developments are motivated by a recent paradigm, Compressed Sensing (CS), which consists of acquiring directly low-dimensional linear projections of signals, possibly corrupted with noise, and then using sophisticated recovery procedures for signal reconstruction. We start by analyzing how to utilize a priori information given in the form of sign restrictions on part of the entries. We propose necessary and sufficient conditions on the sensing matrix for exact recovery of sparse signals, utilize them in deriving error bounds under imperfect conditions, suggest verifiable sufficient conditions, and establish their limits of performance. In the second part of this thesis, we study the CS synthesis problem: selecting the minimum number of rows from a given matrix so that the resulting submatrix possesses certifiably good recovery properties. We express the synthesis problem as the problem of approximating a given matrix by a matrix of specified low rank in the uniform norm, and develop a randomized algorithm for this problem. The third part is dedicated to efficient First-Order Methods (FOMs) for large-scale, well-structured convex optimization problems. We propose FOMs with stochastic oracles that come with exact guarantees on solution quality, achieve sublinear time behavior, and, through extensive simulations, show considerable improvement over state-of-the-art deterministic FOMs. In the last part, we examine a general sparse estimation problem: estimating a block sparse linear transform of a signal from undersampled observations of the signal corrupted with nuisance and stochastic noise. We show that an extension of the earlier results to this more general framework is possible. In particular, we suggest estimators that have efficiently verifiable guarantees of performance and provide connections to well-known results in CS theory.
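To ground the recovery setting with sign restrictions, here is a minimal sketch of $l_1$-recovery (basis pursuit) posed as a linear program, with nonnegativity imposed on the entries whose signs are assumed known a priori. The LP reformulation, data, and solver choice are illustrative; they are not the verifiable conditions or the randomized synthesis algorithm developed in the thesis.

```python
# Hedged sketch: l1-recovery with a priori sign restrictions on part of the entries,
#   min ||x||_1  s.t.  Ax = y,  x_i >= 0 for i in S,
# solved as a linear program via the split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n, k = 30, 80, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = np.abs(rng.standard_normal(k))   # sparse and nonnegative
y = A @ x_true
S = support[:3]                                     # entries known a priori to be >= 0

c = np.ones(2 * n)                                  # ||x||_1 <= 1'u + 1'v
A_eq = np.hstack([A, -A])                           # A(u - v) = y
bounds = [(0, None)] * (2 * n)
for i in S:
    bounds[n + i] = (0, 0)                          # force v_i = 0, i.e. x_i >= 0

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```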
6. On the Relationship between Conjugate Gradient and Optimal First-Order Methods for Convex Optimization (Karimi, Sahar, January 2014)
In a series of works initiated by Nemirovsky and Yudin, and later extended by Nesterov, first-order algorithms for unconstrained minimization with optimal theoretical complexity bounds have been proposed. On the other hand, conjugate gradient algorithms, among the most widely used first-order techniques, lack a finite complexity bound; in fact their performance can be quite poor. This dissertation is partly devoted to tightening the gap between these two classes of algorithms, namely the traditional conjugate gradient methods and optimal first-order techniques. We derive conditions under which conjugate gradient methods attain the same complexity bound as Nemirovsky-Yudin's and Nesterov's methods. Moreover, we propose a conjugate gradient-type algorithm named CGSO, for Conjugate Gradient with Subspace Optimization, which achieves the optimal complexity bound at the cost of a little extra computation.
We extend the theory of CGSO to convex problems with linear constraints. In particular, we focus on solving the $l_1$-regularized least-squares problem, often referred to as the Basis Pursuit Denoising (BPDN) problem in the optimization community. BPDN arises in many practical fields including sparse signal recovery, machine learning, and statistics. Solving BPDN is fairly challenging because the size of the involved signals can be quite large; therefore first-order methods are of particular interest for these problems. We propose a quasi-Newton proximal method for solving BPDN. Our numerical results suggest that our technique is computationally effective and can compete favourably with other state-of-the-art solvers.
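For orientation, the traditional method referred to above is sketched below in its classical linear form on a strongly convex quadratic. This is a standard textbook iteration given only as context; neither CGSO nor the quasi-Newton proximal BPDN solver from the thesis is reproduced, and the test problem is an assumption for the example.

```python
# Hedged sketch: the classical (linear) conjugate gradient iteration on a strongly
# convex quadratic 0.5*x'Qx - b'x, i.e. solving Qx = b.
import numpy as np

rng = np.random.default_rng(5)
n = 100
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)                 # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - Q @ x                           # residual = negative gradient
p = r.copy()                            # initial search direction
for k in range(n):
    Qp = Q @ p
    alpha = (r @ r) / (p @ Qp)          # exact line search along p
    x = x + alpha * p
    r_new = r - alpha * Qp
    if np.linalg.norm(r_new) < 1e-10:
        break
    beta = (r_new @ r_new) / (r @ r)    # conjugacy coefficient
    p = r_new + beta * p                # new direction, Q-conjugate to the previous ones
    r = r_new

print("iterations used:", k + 1, " residual:", np.linalg.norm(b - Q @ x))
```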
7. Prediction of the control structure of bacteria by optimization under uncertainty [Prédiction de la structure de contrôle de bactéries par optimisation sous incertitude] (Ait El Faqir, Marouane, 22 November 2016)
In order to understand the global functioning principles of biological systems, the systems biology approach aims to integrate, within biology, the methodologies used in the design and analysis of complex technological systems. This PhD thesis fits into the systems biology framework and, in particular, extends the existing Resource Balance Analysis (RBA) method. We aim in this thesis to improve the predictive power of the method by introducing more complex models, while maintaining a good trade-off between the representativeness of these models and their efficient numerical resolution. The thesis is divided into two major parts. The first part integrates the thermodynamic and kinetic aspects inherent to metabolic networks. The second part seeks to understand the impact of the stochastic nature of enzyme production on bacterial growth. Numerical methods are developed to solve the resulting models in both the deterministic and stochastic cases.
8. Robustness and optimization in anti-windup control (Alli-Oke, Razak Olusegun, January 2014)
This thesis is broadly concerned with online-optimizing anti-windup control. These are control structures that implement online-optimization routines to compensate for the windup effects in constrained control systems. The first part of this thesis examines a general framework for analyzing robustness preservation in anti-windup control systems. This framework - the robust Kalman conjecture - is defined for the robust Lur’e problem. This part of the thesis verifies the conjecture for first-order plants perturbed by various norm-bounded unstructured uncertainties. Integral quadratic constraint theory is exploited to classify the appropriate stability multipliers required for verification in these cases. The remaining part of the thesis focuses on accelerated gradient methods. In particular, tight complexity certificates can be obtained for the Nesterov gradient method, which makes it attractive for implementation in online-optimizing anti-windup control. This part of the thesis presents a proposed algorithm that extends the classical Nesterov gradient method by using available secant information. Numerical results demonstrating the efficiency of the proposed algorithm are analysed with the aid of performance profiles. As the objective function becomes more ill-conditioned, the proposed algorithm becomes significantly more efficient than the classical Nesterov gradient method. The improved performance bodes well for online-optimizing anti-windup control, since ill-conditioning is commonplace in constrained control systems. In addition, this thesis explores another subcategory of accelerated gradient methods known as Barzilai-Borwein gradient methods. Here, two algorithms that modify the Barzilai-Borwein gradient method are proposed. Global convergence of the proposed algorithms for all convex functions is established by using discrete Lyapunov theorems.
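For reference, the two baseline methods named above are sketched below in their classical forms on a smooth strongly convex quadratic: the Nesterov gradient method with the standard momentum sequence, and a plain Barzilai-Borwein spectral-step gradient method. The secant-based Nesterov extension and the globally convergent Barzilai-Borwein modifications proposed in the thesis are not reproduced; all problem data is illustrative.

```python
# Hedged sketch: classical Nesterov gradient method and a plain Barzilai-Borwein (BB1)
# gradient method on f(x) = 0.5*x'Qx - b'x with Q positive definite.
import numpy as np

rng = np.random.default_rng(6)
n = 80
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)
b = rng.standard_normal(n)
grad = lambda x: Q @ x - b
L = np.linalg.eigvalsh(Q)[-1]            # Lipschitz constant of the gradient

def nesterov(x0, iters=300):
    # classical accelerated gradient with the standard momentum sequence t_k
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def barzilai_borwein(x0, iters=300):
    # non-monotone gradient method with the BB1 spectral step alpha = s's / s'y
    x_prev, g_prev = x0.copy(), grad(x0)
    x = x_prev - g_prev / L              # safe first step
    for _ in range(iters):
        g = grad(x)
        s, yv = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ yv)
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

x_star = np.linalg.solve(Q, b)
print("Nesterov error:", np.linalg.norm(nesterov(np.zeros(n)) - x_star))
print("BB error:      ", np.linalg.norm(barzilai_borwein(np.zeros(n)) - x_star))
```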