11

Eine spezielle Klasse von Zwei-Ebenen-Optimierungsaufgaben

Lohse, Sebastian 25 February 2011 (has links)
The dissertation investigates bilevel (two-level) optimization problems with a special structure. For the so-called pessimistic solution approach, the topics of interest are existence results for solutions, the vertex property of a solution, a regularization technique, optimality conditions, and, for the linear case, a method for computing a globally pessimistic solution. For the optimistic solution approach, a generalization of the solution concept is given first, followed by considerations of the complexity of the problem and of optimality conditions, as well as a descent method and a branch-and-bound method for the linear case. The thesis concludes with an application example and numerical test computations.
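For orientation, the two solution concepts mentioned in the abstract can be sketched for a generic bilevel problem as follows; this is only a schematic LaTeX formulation, not the specific structured class treated in the thesis:

\[
\Psi(x) = \operatorname*{argmin}_{y}\{\, g(x,y) : y \in Y(x) \,\} \quad\text{(lower-level solution set)},
\]
\[
\text{optimistic:}\;\; \min_{x \in X}\ \min_{y \in \Psi(x)} F(x,y),
\qquad
\text{pessimistic:}\;\; \min_{x \in X}\ \max_{y \in \Psi(x)} F(x,y).
\]

The pessimistic approach hedges against the worst possible lower-level response, which is why already the existence of solutions addressed in the thesis is a delicate question.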
12

Optimal control of a semi-discrete Cahn–Hilliard–Navier–Stokes system with variable fluid densities

Keil, Tobias 29 October 2021 (has links)
This thesis is concerned with the optimal control of a Cahn–Hilliard–Navier–Stokes system with variable fluid densities. It focuses on the double-obstacle potential, which yields an optimal control problem for a family of coupled systems consisting, in each time instant, of a variational inequality of fourth order and the Navier–Stokes equation. A suitable time discretization is presented and associated energy estimates are proven. The existence of solutions to the primal system and of optimal controls is established for the original problem as well as for a family of regularized problems. The consistency of these approximations is shown and first-order optimality conditions for the regularized problems are derived. Through a limit process with respect to the regularization parameter, a stationarity system for the original problem is established, which corresponds to a function space version of ε-almost C-stationarity. Moreover, a numerical solution algorithm for the optimal control problem is developed based on a penalization method involving Moreau–Yosida type approximations of the double-obstacle potential. A dual-weighted residual approach for goal-oriented adaptive finite elements is presented, which is based on the concept of C-stationarity; the overall error representation depends on dual-weighted primal residuals and vice versa, supplemented by additional terms corresponding to the complementarity mismatch. The numerical realization of the adaptive concept is described and a report on numerical tests is provided.
The Lipschitz continuity of the control-to-state operator of the corresponding instantaneous control problem is verified and its directional derivative is characterized. Strong stationarity conditions for the instantaneous control problem are derived by adapting a technique of Mignot and Puel. Utilizing the primal notion of B-differentiability, a bundle-free implicit programming method is developed. Details on the numerical implementation are given and numerical results are included.
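As a rough illustration of the penalization referred to in the abstract: the double-obstacle potential confines the order parameter φ to [-1, 1], and a Moreau–Yosida type approximation replaces this hard constraint by a quadratic penalty. The following is one common form of such an approximation, given here only as a sketch; the precise regularization used in the thesis may differ in detail:

\[
\psi_0(\varphi) =
\begin{cases}
0, & |\varphi| \le 1,\\
+\infty, & |\varphi| > 1,
\end{cases}
\qquad
\psi_\gamma(\varphi) = \frac{\gamma}{2}\,\max(0,\varphi-1)^2 + \frac{\gamma}{2}\,\min(0,\varphi+1)^2,
\quad \gamma > 0 .
\]

Driving the penalty parameter γ to infinity recovers the obstacle constraint, which is the kind of limit process behind the passage from the regularized stationarity systems to the stationarity conditions of the original problem.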
13

Proximal Splitting Methods in Nonsmooth Convex Optimization

Hendrich, Christopher 25 July 2014 (has links) (PDF)
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notation and preliminary results from convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem to a given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. One considerable advantage of primal-dual algorithms is that they provide a complete splitting, in the sense that the resolvents arising in the iterative process are taken separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel-sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy, reformulating the corresponding inclusion problem so that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis presents numerical experiments in which we compare our methods against algorithms from the literature. The problems considered there are manifold and reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.
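The forward-backward (proximal gradient) iteration is the simplest representative of the splitting philosophy described in the abstract: each structural block of the objective is handled by its own gradient or proximal step. The Python sketch below applies it to a toy l1-regularized least-squares problem; the function names and the toy data are illustrative only and are not taken from the thesis.

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # minimize 0.5*||A x - b||^2 + lam*||x||_1 by forward-backward splitting
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step on the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step on the nonsmooth part
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2                   # 1 / Lipschitz constant of the smooth gradient
x_hat = proximal_gradient(A, b, lam=0.1, step=step)

The Douglas–Rachford and forward-backward-forward methods developed in the thesis generalize this pattern to monotone inclusions with several linearly composed and parallel-sum operators, while keeping each resolvent evaluation separate.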
15

Solving Constrained Piecewise Linear Optimization Problems by Exploiting the Abs-linear Approach

Kreimeier, Timo 06 December 2023 (has links)
This thesis presents an algorithm for solving finite-dimensional optimization problems with a piecewise linear objective function and piecewise linear constraints. It is assumed that the functions are given in the so-called abs-linear form, a matrix-vector representation. Using this form, the domain space can be decomposed into polyhedra, so that the nonsmoothness of the piecewise linear functions can coincide with the edges of the polyhedra. For the class of abs-linear functions, necessary and sufficient optimality conditions that can be verified in polynomial time are proven for both the unconstrained and the constrained case. For unconstrained piecewise linear optimization problems, Andrea Walther and Andreas Griewank already presented a solution algorithm, the Active Signature Method (ASM), in 2019. Building on this method and combining it with the idea of the active set method for handling inequality constraints, a new algorithm called the Constrained Active Signature Method (CASM) for constrained problems emerges. Both algorithms explicitly exploit the piecewise linear structure of the functions by using the abs-linear form. Part of the analysis of the algorithms is the proof of finite convergence to local minima of the respective problems, as well as the efficient solution of the saddle point systems occurring in each iteration of the algorithms. The numerical performance of CASM is illustrated by several examples. The test problems cover academic problems, including bi-level and linear complementarity problems, as well as application problems from gas network optimization and inventory management.
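For context, the abs-linear (abs-normal) representation mentioned in the abstract encodes a piecewise linear function through auxiliary switching variables whose signs select the polyhedral pieces. A common way to state it is the following LaTeX sketch; the exact matrix-vector representation used in the thesis may differ in detail:

\[
z = c + Z x + L\,|z|, \qquad f(x) = a + b^\top x + d^\top |z|,
\]

where \(x \in \mathbb{R}^n\), \(z \in \mathbb{R}^s\), the absolute value \(|z|\) is taken componentwise, and \(L\) is strictly lower triangular, so the switching variables \(z_i\) can be evaluated one after another. The kinks of \(f\) lie on the sets \(\{z_i = 0\}\), which is what allows the polyhedral decomposition of the domain described in the abstract.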
