1

Cutting plane methods and dual problems

Gladin, Egor 28 August 2024 (has links)
The present thesis studies cutting plane methods, a group of iterative algorithms for minimizing a (possibly nonsmooth) convex function over a compact convex set. We consider two prominent examples, namely the ellipsoid method and Vaidya's method, and show that their convergence rate is preserved even when an inexact oracle is used. Furthermore, we demonstrate that these methods can be used efficiently in the context of stochastic optimization. Another direction where cutting plane methods are useful is Lagrange dual problems. Commonly, the objective and its derivatives can only be computed approximately in such problems, so the methods' insensitivity to errors in the subgradients comes in handy. As an application example, we propose a linearly converging dual method for a constrained Markov decision process (CMDP) based on Vaidya's algorithm and demonstrate its performance in a simple RL environment. The thesis also investigates the concept of accuracy certificates for convex minimization problems. Certificates allow for online verification of the accuracy of approximate solutions. We generalize the notion of accuracy certificates to the setting of an inexact first-order oracle and propose an explicit way to construct accuracy certificates for a large class of cutting plane methods. As a by-product, we show that the considered methods can be used efficiently with a noisy oracle even though they were originally designed for an exact oracle. Finally, we examine the proposed certificates in numerical experiments, showing that they provide a tight upper bound on the objective residual.
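The ideas in this abstract can be made concrete with a small sketch. Below is a minimal numpy illustration of the ellipsoid method driven by a possibly inexact first-order oracle, together with a naive accuracy certificate built from the collected linearizations: convex weights aggregate the cuts into a lower bound on the optimal value over the feasible ball, so the gap between the best observed value and that bound upper-bounds the objective residual. All names, the ball-constrained setup, the uniform certificate weights, and the toy oracle are illustrative assumptions, not the thesis' construction (which chooses the weights far more carefully and handles feasibility cuts).

```python
import numpy as np

def ellipsoid(oracle, x0, R, iters=300):
    """Minimal sketch of the ellipsoid method with a possibly inexact
    first-order oracle, minimizing a convex f over the ball ||x - x0|| <= R.
    Simplified variant: objective cuts only, no feasibility cuts."""
    n = x0.size                       # the update formula below assumes n >= 2
    x, H = x0.astype(float), (R ** 2) * np.eye(n)   # E_0 = initial ball
    fbest, xbest, cuts = np.inf, x0.astype(float), []
    for _ in range(iters):
        fx, g = oracle(x)             # approximate value and subgradient
        if fx < fbest:
            fbest, xbest = fx, x.copy()
        cuts.append((x.copy(), fx, g.copy()))   # store the linearization data
        gHg = float(g @ H @ g)
        if gHg <= 1e-16:              # vanishing cut: stop
            break
        Hg = H @ g / np.sqrt(gHg)     # H-normalized cut direction
        x = x - Hg / (n + 1)          # shift center into the cut half-space
        H = (n**2 / (n**2 - 1.0)) * (H - (2.0 / (n + 1)) * np.outer(Hg, Hg))
    return xbest, fbest, cuts

def certificate_gap(cuts, x0, R, fbest, lam=None):
    """Accuracy certificate: convex weights lam aggregate the linearizations
    f(y) >= f_k + g_k.(y - x_k) into a lower bound on min f over the ball,
    so fbest - bound upper-bounds the objective residual (up to oracle error).
    Uniform weights are a naive placeholder for the thesis' construction."""
    lam = np.full(len(cuts), 1.0 / len(cuts)) if lam is None else lam
    gbar = sum(l * g for l, (_, _, g) in zip(lam, cuts))
    const = sum(l * (f - g @ x) for l, (x, f, g) in zip(lam, cuts))
    lower = const + gbar @ x0 - R * np.linalg.norm(gbar)  # min of the aggregate over the ball
    return fbest - lower

# Toy run: noisy oracle for f(x) = ||x||_1 over the ball ||x - x0|| <= 2.
rng = np.random.default_rng(0)
oracle = lambda x: (np.abs(x).sum() + 1e-3 * rng.normal(),
                    np.sign(x) + 1e-3 * rng.normal(size=x.size))
x0 = np.ones(5)
xb, fb, cuts = ellipsoid(oracle, x0, R=2.0, iters=400)
print(fb, certificate_gap(cuts, x0, 2.0, fb))
```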
2

Proximal Splitting Methods in Nonsmooth Convex Optimization

Hendrich, Christopher 25 July 2014 (has links) (PDF)
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notation and preliminary results from convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem to a given a priori accuracy by applying two regularizations to its conjugate dual problem. A fast gradient method then solves the regularized dual problem, from which an approximate primal solution can be reconstructed. The second approach regularizes the primal optimization problem directly and can use variable smoothing parameters, which yield an increasingly accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. One considerable advantage of primal-dual algorithms is that they provide a complete splitting: the resolvents arising in the iterative process are taken separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem, and we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel-sum type monotone operators. To prove their asymptotic convergence, we use a product Hilbert space strategy, reformulating the corresponding inclusion problem so that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis presents numerical experiments comparing our methods against algorithms from the literature. The problems arising in this part are manifold and reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.
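To make the splitting philosophy above concrete, here is a minimal numpy sketch of the classical (primal) Douglas–Rachford iteration on a toy l1-regularized least-squares problem: each of the two terms is accessed only through its own proximal map (resolvent), never jointly. This is the textbook scheme, not one of the primal-dual variants developed in the thesis; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, t):
    # proximal map of t * ||.||_1: componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(A, b, lam, gamma=1.0, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by Douglas-Rachford splitting.
    The quadratic term and the l1 term are handled only through their
    separate proximal maps -- the 'complete splitting' described above."""
    n = A.shape[1]
    # prox of gamma*f for f = 0.5*||Ax - b||^2 solves (I + gamma A^T A) x = v + gamma A^T b
    M = np.eye(n) + gamma * (A.T @ A)
    Atb = A.T @ b
    prox_f = lambda v: np.linalg.solve(M, v + gamma * Atb)
    z = np.zeros(n)
    for _ in range(iters):
        x = prox_f(z)                          # resolvent of the quadratic term
        y = prox_l1(2.0 * x - z, gamma * lam)  # resolvent of the l1 term at the reflected point
        z = z + y - x                          # fixed-point update of the governing sequence
    return prox_f(z)                           # x_k = prox_f(z_k) converges to a minimizer

# Toy usage on random data.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(40, 100)), rng.normal(size=40)
x_hat = douglas_rachford(A, b, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-8), "nonzero entries")
```

For this problem class the step size gamma > 0 affects only the speed of convergence, not the limit, which is consistent with the standard convergence theory for Douglas–Rachford splitting.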
