21

NFDNA - um algoritmo para otimização não convexa e não diferenciável [NFDNA - an algorithm for nonconvex and nondifferentiable optimization]

Fernandes, Camila de Freitas 08 April 2016 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work we study an algorithm for solving unconstrained optimization problems whose functions are not necessarily convex or differentiable, called the Nonsmooth Feasible Direction Nonconvex Algorithm (NFDNA), and we employ it as a subroutine of the Interior Epigraph Directions (IED) method. The IED method, devised for solving constrained, nonconvex and nonsmooth optimization problems, uses Lagrangian duality, which requires minimizing the Lagrangian function; the effectiveness of IED depends strongly on this minimization. As an application, we replace the Matlab routine fminsearch, originally used by IED, with NFDNA. The solution of test problems shows that IED performs more efficiently when NFDNA is employed.
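The application described in this abstract amounts to swapping the inner unconstrained minimizer inside a Lagrangian-dual outer loop. The sketch below is an illustration of that structure only, not the thesis code: `dual_outer_loop`, the fixed dual step size, and the `inner_minimize` hook are hypothetical names standing in for IED's actual dual update and for an fminsearch-like or NFDNA-like subsolver.

```python
# Minimal sketch of a Lagrangian-dual loop with a pluggable inner solver.
# Illustration only, not the IED or NFDNA implementation; `dual_outer_loop`
# and `inner_minimize` are hypothetical names.
import numpy as np

def lagrangian(x, lam, f, g):
    """Lagrangian L(x, lam) = f(x) + lam . g(x) for constraints g(x) <= 0."""
    return f(x) + np.dot(lam, g(x))

def dual_outer_loop(f, g, x0, lam0, inner_minimize, outer_iters=50, step=0.5):
    """Dual ascent skeleton: the unconstrained minimization of the Lagrangian
    is delegated to inner_minimize(func, x_start) -> x_min, which could be a
    Nelder-Mead routine (fminsearch-like) or a nonsmooth solver such as NFDNA."""
    x = np.asarray(x0, dtype=float)
    lam = np.asarray(lam0, dtype=float)
    for _ in range(outer_iters):
        # Inner problem: x(lam) = argmin_x L(x, lam); the solver is interchangeable.
        x = inner_minimize(lambda v: lagrangian(v, lam, f, g), x)
        # Simple projected subgradient update of the multipliers (illustrative).
        lam = np.maximum(0.0, lam + step * np.asarray(g(x), dtype=float))
    return x, lam

# Example inner solver built on SciPy's Nelder-Mead, the Python analogue of fminsearch:
# from scipy.optimize import minimize
# inner = lambda func, x_start: minimize(func, x_start, method="Nelder-Mead").x
```

In this skeleton, replacing the inner solver changes only one argument of the outer loop, which is the sense in which the thesis swaps fminsearch for NFDNA.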
22

Solving Constrained Piecewise Linear Optimization Problems by Exploiting the Abs-linear Approach

Kreimeier, Timo 06 December 2023 (has links)
This thesis presents an algorithm for solving finite-dimensional optimization problems with a piecewise linear objective function and piecewise linear constraints. It is assumed that the functions are given in the so-called Abs-Linear Form, a matrix-vector representation. Using this form, the domain space can be decomposed into polyhedra such that the nonsmoothness of the piecewise linear functions can coincide with the edges of the polyhedra. For the class of abs-linear functions, necessary and sufficient optimality conditions that can be verified in polynomial time are proven for both the unconstrained and the constrained case. For unconstrained piecewise linear optimization problems, Andrea Walther and Andreas Griewank already presented a solution algorithm, the Active Signature Method (ASM), in 2019. Building on this method and combining it with an active set strategy for handling the inequality constraints yields a new algorithm for constrained problems, the Constrained Active Signature Method (CASM). Both algorithms explicitly exploit the piecewise linear structure of the functions through the Abs-Linear Form. The analysis of the algorithms includes proofs of finite convergence to local minima of the respective problems and the efficient solution of the saddle point systems arising in each iteration. The numerical performance of CASM is illustrated on several examples, ranging from academic problems, including bi-level and linear complementarity problems, to application problems from gas network optimization and inventory management in retail.
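For context on the matrix-vector representation mentioned in the abstract, the sketch below assumes the commonly used abs-linear (abs-normal) form z = c + Z x + L |z|, y = b + J x + Y |z| with strictly lower triangular L; the function `eval_abs_linear` and the small example are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch (assumed abs-linear form with strictly lower triangular L)
# of evaluating a piecewise linear function given in matrix-vector form:
#     z = c + Z x + L |z|,   y = b + J x + Y |z|.
# The signs of the switching vector z identify the polyhedron of the domain
# decomposition in which x lies.
import numpy as np

def eval_abs_linear(x, c, Z, L, b, J, Y):
    """Evaluate y = f(x) for f in abs-linear form; also return the switching
    vector z, whose signs encode the active polyhedron."""
    x = np.asarray(x, dtype=float)
    s = len(c)
    z = np.zeros(s)
    # L strictly lower triangular: z_i depends only on |z_1|, ..., |z_{i-1}|,
    # so z can be computed by a forward sweep.
    for i in range(s):
        z[i] = c[i] + Z[i] @ x + L[i, :i] @ np.abs(z[:i])
    y = b + J @ x + Y @ np.abs(z)
    return y, z

# Hypothetical example: f(x) = |x1| + max(0, x2), using max(0, x2) = (x2 + |x2|) / 2
# and switching variables z = (x1, x2).
c = np.zeros(2)
Z = np.eye(2)
L = np.zeros((2, 2))                 # strictly lower triangular (here zero)
b = np.zeros(1)
J = np.array([[0.0, 0.5]])
Y = np.array([[1.0, 0.5]])
y, z = eval_abs_linear([-3.0, 2.0], c, Z, L, b, J, Y)
# y == [5.0]; sign(z) = (-1, +1) names the polyhedron containing x.
```

The forward sweep over z is what makes the polyhedral decomposition explicit: fixing the sign pattern of z reduces f to an affine function on that polyhedron, which is the structure ASM and CASM exploit.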
