41

Solving Constrained Piecewise Linear Optimization Problems by Exploiting the Abs-linear Approach

Kreimeier, Timo 06 December 2023 (has links)
This thesis presents an algorithm for solving finite-dimensional optimization problems with a piecewise linear objective function and piecewise linear constraints. It is assumed that the functions are given in the so-called Abs-Linear Form, a matrix-vector representation. Using this form, the domain space can be decomposed into polyhedra, so that the nonsmoothness of the piecewise linear functions can coincide with the edges of the polyhedra. For the class of abs-linear functions, necessary and sufficient optimality conditions that can be verified in polynomial time are proved for both the unconstrained and the constrained case. For unconstrained piecewise linear optimization problems, Andrea Walther and Andreas Griewank presented a solution algorithm, the Active Signature Method (ASM), in 2019. Building on this method and combining it with an active set strategy to handle inequality constraints yields a new algorithm, the Constrained Active Signature Method (CASM), for constrained problems. Both algorithms explicitly exploit the piecewise linear structure of the functions by using the Abs-Linear Form. The analysis of both algorithms includes proofs of finite convergence to local minima of the respective problems as well as the efficient solution of the saddle point systems arising in each iteration. The numerical performance of CASM is illustrated on several examples, covering academic problems, including bi-level and linear complementarity problems, as well as application problems from gas network optimization and retail inventory management.
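The abstract above describes piecewise linear functions in Abs-Linear Form, where switching variables are defined by a strictly lower triangular recursion so they can be evaluated in one forward sweep. A minimal evaluation sketch (not from the thesis; parameter names are illustrative, following the general pattern z = c + Zx + L|z|, y = d + aᵀx + bᵀ|z| with L strictly lower triangular) might look like:

```python
import numpy as np

def eval_abs_linear(x, c, Z, L, d, a, b):
    """Evaluate a scalar piecewise linear function given in abs-linear form:
         z = c + Z x + L |z|   (L strictly lower triangular, so z is
                                computable by forward substitution)
         y = d + a^T x + b^T |z|
    """
    x = np.asarray(x, dtype=float)
    s = len(c)
    z = np.zeros(s)
    for i in range(s):
        # L[i, :i] touches only already-computed switching variables
        z[i] = c[i] + Z[i] @ x + L[i, :i] @ np.abs(z[:i])
    return d + a @ x + b @ np.abs(z)

# Example: max(x1, x2) = 0.5*x1 + 0.5*x2 + 0.5*|x1 - x2|
# uses one switching variable z1 = x1 - x2.
y = eval_abs_linear(
    [1.0, 4.0],
    c=np.array([0.0]), Z=np.array([[1.0, -1.0]]), L=np.array([[0.0]]),
    d=0.0, a=np.array([0.5, 0.5]), b=np.array([0.5]),
)
print(y)  # 4.0
```

The nonsmooth kinks of y occur exactly where some z_i = 0, which is what lets the domain be decomposed into polyhedra as described above.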
42

Application of the Duality Theory

Lorenz, Nicole 15 August 2012 (has links) (PDF)
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning. First we give some notations and preliminaries we need within the thesis. After that we recall how the well-known Lagrange dual problem can be derived by using the general perturbation theory and give some generalized interior point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals and give strong duality results and optimality conditions under some regularity conditions. Thus we complete and/or extend results in the literature, in particular by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result and further consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature. In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally we prove some subdifferential formulas for measures and risk functions by using the facts above. The generalized deviation measures introduced in the previous chapter can be used to formulate some portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions, respectively, given in the second and third chapter. Analogous calculations are done for a portfolio optimization problem having single chance constraints, using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization. We close this thesis by considering a general Support Vector Machines problem and derive its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes Vapnik's well-known ε-insensitive loss and consider the optimization problems that arise from using it. We show how the general theory can be applied to a real data set; in particular, we predict the concrete compressive strength by using a special Support Vector Regression problem.
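The ε-insensitive loss mentioned in the abstract above is standard in Support Vector Regression: residuals inside an ε-tube around the prediction incur no cost, while larger residuals are penalized linearly. A minimal sketch (not code from the thesis; function name and defaults are illustrative):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Vapnik's epsilon-insensitive loss:
       max(|y_true - y_pred| - eps, 0), elementwise.
       Zero inside the eps-tube, linear growth outside it."""
    return np.maximum(np.abs(np.asarray(y_true) - np.asarray(y_pred)) - eps, 0.0)

# A residual of 0.05 lies inside the tube (loss 0); 0.5 exceeds it by 0.4.
print(eps_insensitive_loss([1.0, 2.0], [1.05, 2.5], eps=0.1))  # [0.  0.4]
```

Generalizing this cost function, as the thesis does, changes the conjugate function that appears in the dual SVR problem while keeping the same conjugate-duality machinery.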
43

Application of the Duality Theory: New Possibilities within the Theory of Risk Measures, Portfolio Optimization and Machine Learning

Lorenz, Nicole 28 June 2012 (has links)
