61

The Eigenvalue Problem of the 1-Laplace Operator: Local Perturbation Results and Investigation of Related Vectorial Questions

Littig, Samuel 23 January 2015 (has links)
As a first aspect, the thesis treats existence results for the perturbed eigenvalue problem of the 1-Laplace operator. This is done with the aid of quite general critical point theory results with the genus as topological index. Moreover, we show that the eigenvalues of the perturbed 1-Laplace operator converge to the eigenvalues of the unperturbed 1-Laplace operator as the perturbation goes to zero. As a second aspect, we treat the eigenvalue problems of the vectorial 1-Laplace operator and the symmetrized 1-Laplace operator. As a third aspect, certain related parabolic problems are considered.
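Formally, the eigenvalue problem in question is usually stated (this formal equation is standard in the 1-Laplacian literature, not quoted from the abstract) as the highly singular Euler–Lagrange equation

```latex
-\operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) = \lambda\,\frac{u}{|u|},
```

which has to be interpreted in a suitable weak, measure-theoretic sense, since neither quotient is defined where \(\nabla u\) or \(u\) vanishes.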
62

Proximal Splitting Methods in Nonsmooth Convex Optimization

Hendrich, Christopher 17 July 2014 (has links)
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notation and preliminary results from convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem to a given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a complete splitting philosophy, in the sense that the resolvents arising in the iterative process are taken separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone.
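The variable-smoothing idea described above can be illustrated with a generic sketch (an illustrative assumption, not the thesis's algorithm): replace the nonsmooth term λ‖x‖₁ in the toy objective ½‖x − a‖² + λ‖x‖₁ by its Moreau envelope with a smoothing parameter μ_k that shrinks over the iterations, and take gradient steps on the smoothed objective. The function names and the toy objective are invented for the example.

```python
import numpy as np

def envelope_grad(x, mu):
    # Gradient of the Moreau envelope of |.| with parameter mu
    # (the Huber function): x / mu clipped to [-1, 1], componentwise.
    return np.clip(x / mu, -1.0, 1.0)

def variable_smoothing(a, lam=1.0, iters=2000):
    """Minimize 0.5*||x - a||^2 + lam*||x||_1 by smoothing the l1 term
    with a shrinking parameter mu_k and taking gradient steps."""
    x = np.zeros_like(a, dtype=float)
    for k in range(1, iters + 1):
        mu = 1.0 / k                      # smoothing parameter -> 0
        L = lam / mu + 1.0                # Lipschitz constant of the smoothed gradient
        grad = lam * envelope_grad(x, mu) + (x - a)
        x -= grad / L                     # standard gradient step
    return x

# The exact minimizer is the soft-thresholding of a at level lam,
# so the iterates should approach it as mu_k -> 0.
a = np.array([3.0, 0.5, -2.0])
x = variable_smoothing(a)
```

As the abstract notes for the actual method, the shrinking smoothing parameter makes the surrogate problem track the original one ever more closely as the iteration counter increases.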
Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy, reformulating the corresponding inclusion problem in such a way that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with numerical experiments in which we compare our methods against algorithms from the literature. The problems arising in this part are manifold and reflect the importance of this field of research, as convex optimization problems arise in many applications of interest.
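The classical Douglas–Rachford splitting underlying the methods above can be sketched on a toy problem (a generic illustration, not the thesis's primal-dual variants): minimize f(x) + g(x) with f(x) = ½‖x − a‖² and g = λ‖·‖₁, where both proximal mappings are available in closed form.

```python
import numpy as np

def prox_quad(z, a, t):
    # prox of t * 0.5*||x - a||^2 (closed form)
    return (z + t * a) / (1.0 + t)

def prox_l1(z, t):
    # prox of t * ||x||_1: componentwise soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def douglas_rachford(a, lam=1.0, t=1.0, iters=200):
    """Douglas-Rachford splitting for f(x) = 0.5*||x - a||^2, g = lam*||.||_1.
    Only the two resolvents (proximal maps) are evaluated, never their sum."""
    z = np.zeros_like(a, dtype=float)
    for _ in range(iters):
        x = prox_quad(z, a, t)             # backward step on f
        y = prox_l1(2.0 * x - z, lam * t)  # reflected backward step on g
        z = z + y - x                      # governing-sequence update
    return prox_quad(z, a, t)

a = np.array([3.0, 0.5, -2.0])
x = douglas_rachford(a)  # converges to the soft-thresholding of a at level lam
```

The split structure visible here, with each resolvent taken separately, is exactly the "complete splitting philosophy" the abstract refers to.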
63

Solving Constrained Piecewise Linear Optimization Problems by Exploiting the Abs-linear Approach

Kreimeier, Timo 06 December 2023 (has links)
This thesis presents an algorithm for solving finite-dimensional optimization problems with a piecewise linear objective function and piecewise linear constraints. For this purpose, it is assumed that the functions are given in the so-called Abs-Linear Form, a matrix-vector representation. Using this form, the domain space can be decomposed into polyhedra, so that the nonsmoothness of the piecewise linear functions can coincide with the edges of the polyhedra. For the class of abs-linear functions, necessary and sufficient optimality conditions that can be verified in polynomial time are given for both the unconstrained and the constrained case. For unconstrained piecewise linear optimization problems, Andrea Walther and Andreas Griewank already presented a solution algorithm called the Active Signature Method (ASM) in 2019. Building on this method and combining it with the idea of the Active Set Method to handle inequality constraints, a new algorithm called the Constrained Active Signature Method (CASM) for constrained problems emerges. Both algorithms explicitly exploit the piecewise linear structure of the functions by using the Abs-Linear Form. Part of the analysis of the algorithms is to show finite convergence to local minima of the respective problems as well as the efficient solution of the saddle point systems occurring in each iteration of the algorithms. The numerical performance of CASM is illustrated by several examples. The test problems cover academic problems, including bi-level and linear complementarity problems, as well as application problems from gas network optimization and inventory problems.
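A function in abs-linear (also called abs-normal) form can be evaluated by a simple forward sweep, because the switching variables z depend only on earlier components through the strictly lower triangular matrix L. The following sketch, with all matrices chosen for a toy example, is an illustrative assumption rather than the thesis's implementation.

```python
import numpy as np

def eval_abs_linear(x, c, Z, L, b, J, Y):
    """Evaluate y = b + J x + Y|z|, where z solves z = c + Z x + L|z|.
    Since L is strictly lower triangular, z is found by forward substitution."""
    s = c.shape[0]
    z = np.zeros(s)
    for i in range(s):
        z[i] = c[i] + Z[i] @ x + L[i] @ np.abs(z)  # uses |z_j| for j < i only
    return b + J @ x + Y @ np.abs(z)

# Toy encoding of f(x1, x2) = max(x1, x2) = 0.5*(x1 + x2 + |x1 - x2|):
c = np.zeros(1)
Z = np.array([[1.0, -1.0]])   # z1 = x1 - x2 (the single switching variable)
L = np.zeros((1, 1))          # strictly lower triangular (here trivially zero)
b = np.zeros(1)
J = np.array([[0.5, 0.5]])
Y = np.array([[0.5]])

f = lambda x: eval_abs_linear(np.asarray(x, dtype=float), c, Z, L, b, J, Y)[0]
```

The sign pattern of z along the sweep identifies the polyhedron of the domain decomposition containing x, which is the information ASM and CASM exploit.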
64

Lösungsmethoden für Variationsungleichungen

Ponomarenko, Andrej 31 January 2003 (has links)
This work attempts to generalize various classical and newer methods of smooth and nonsmooth optimization and to present them in their interrelation. The main tool for doing this is the so-called generalized Kojima function. In addition to numerous examples, we place special emphasis on the consideration of variational inequalities, complementarity problems and the standard problem of mathematical programming. Under natural assumptions on these problems one can model, e.g., barrier-, penalty- and SQP-type methods based on Newton methods, as well as models using so-called NCP functions, exactly by means of special perturbations of the Kojima function. Furthermore, through an explicit and natural choice of the perturbation parameters, new methods of these kinds are proposed. The benefits of such a modelling are convincing, first of all because of the directly available solution estimates (based on stability properties of the Kojima equation) and because the corresponding zeros can easily be interpreted as solutions of known substitute problems. A further aspect of this work is the closer investigation of the "nonsmooth cases". Here the theory of various generalized derivatives and the resulting generalized Newton methods, proposed and investigated in the book "Nonsmooth Equations in Optimization" by B. Kummer and D. Klatte, is used intensively. The crucial point is that the generalized derivatives employed can also be applied in practice, since they can be computed exactly.
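To make the NCP-function idea concrete, here is a minimal sketch (an illustrative assumption, not the Kojima-function framework of the thesis): the complementarity problem x ≥ 0, F(x) ≥ 0, x·F(x) = 0 is recast as a root of the Fischer–Burmeister function φ(a, b) = √(a² + b²) − a − b and solved by a semismooth Newton iteration, for the simple affine map F(x) = x − q.

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0
    return np.sqrt(a**2 + b**2) - a - b

def solve_ncp(q, iters=50):
    """Semismooth Newton for the NCP with F(x) = x - q (componentwise).
    The exact solution is x = max(q, 0)."""
    x = np.ones_like(q, dtype=float)
    for _ in range(iters):
        Fx = x - q
        r = np.sqrt(x**2 + Fx**2)
        r = np.where(r == 0.0, 1.0, r)   # guard the kink of phi at (0, 0)
        # An element of the (diagonal) generalized Jacobian of phi(x, x - q):
        d = (x + Fx) / r - 2.0           # always negative, so never singular
        x = x - fischer_burmeister(x, Fx) / d
    return x

q = np.array([2.0, -1.0])
x = solve_ncp(q)  # expected approximately [2.0, 0.0]
```

The generalized derivative d used here is computable in closed form, which mirrors the abstract's point that practical applicability hinges on the generalized derivatives being exactly calculable.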
