About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Método de direções interiores ao epígrafo para a solução de problemas de otimização não-convexos e não-diferenciáveis via dualidade lagrangeana / Interior epigraph directions method for solving nonconvex, nonsmooth optimization problems via Lagrangian duality

Gómez, Jesús Cernades 07 June 2013 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work presents a method for solving constrained nonsmooth, nonconvex optimization problems. The method, called IED (Interior Epigraph Directions), applies to problems whose objective function is continuous and defined on a compact subset of Rn, subject to equality and/or inequality constraints. IED considers the dual problem induced by a generalized augmented Lagrangian function and obtains the primal solution by generating a sequence of iterates in the interior of the epigraph of the dual function. First, a subgradient is used to build a linear approximation of the dual problem; this approximation is then used to define a search direction interior to the epigraph of the dual function. From a point in the interior of the epigraph, a new interior point is obtained, and a sequence of interior points is thus constructed. This sequence yields a dual sequence which, in turn, yields a primal sequence through the solution of a subproblem arising from duality. The convergence analysis of the algorithm is presented, together with numerical results on problems taken from the literature.
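The dual machinery that IED builds on can be illustrated with a small numerical sketch. The Python snippet below is not the IED direction search itself: it is a toy dual-ascent scheme on assumed data (a one-dimensional nonsmooth, nonconvex objective on a compact interval with a single equality constraint, a classical rather than generalized augmented Lagrangian), using only the standard facts that the dual function q(λ) = min_x L(x, λ) is concave and that the constraint value at a Lagrangian minimizer is a supergradient of q.

# Toy illustration of the Lagrangian-duality ingredients behind IED:
# evaluate the dual function by minimizing the Lagrangian over a compact set,
# use the constraint value at the minimizer as a supergradient, and recover
# a primal candidate from the same minimizer. (Sketch only; the actual IED
# method moves through the interior of the epigraph of the dual function.)
import numpy as np

# Hypothetical problem data: minimize a nonsmooth, nonconvex f on X = [-2, 2]
# subject to the equality constraint h(x) = 0.
f = lambda x: np.abs(x - 1.0) * np.cos(3.0 * x)        # nonsmooth, nonconvex
h = lambda x: x**2 - 0.25                              # equality constraint
X = np.linspace(-2.0, 2.0, 4001)                       # discretized compact set

def dual_value(lam, c=1.0):
    """Evaluate q(lam) = min_x L_c(x, lam) by exhaustive search on the grid.
    L_c is a classical augmented Lagrangian (IED uses a generalized one)."""
    L = f(X) + lam * h(X) + 0.5 * c * h(X) ** 2
    i = np.argmin(L)
    return L[i], X[i]

lam = 0.0
for k in range(50):
    q, x_lam = dual_value(lam)
    g = h(x_lam)                 # supergradient of the concave dual at lam
    lam += (1.0 / (k + 1)) * g   # plain dual ascent with diminishing steps

print("dual value:", q, "multiplier:", lam,
      "primal candidate:", x_lam, "violation:", h(x_lam))

On this toy instance the dual ascent drives the constraint violation of the recovered primal candidate toward zero; IED replaces the plain ascent step by a direction chosen inside the epigraph of the dual function.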
2

Método de direções interiores ao epígrafo - IED para otimização não diferenciável e não convexa via Dualidade Lagrangeana: estratégias para minimização da Lagrangeana aumentada / Interior epigraph directions method (IED) for nonsmooth, nonconvex optimization via Lagrangian duality: strategies for minimizing the augmented Lagrangian

Franco, Hernando José Rocha 08 June 2018 (has links)
The classical theory of optimization assumes certain conditions, for example that the functions involved in a problem are at least once continuously differentiable. In many practical applications that require optimization methods, however, this property is absent. Nondifferentiable optimization problems are considered harder to handle, and within this class those involving nonconvex functions are more complex still. Interior Epigraph Directions (IED) is an optimization method based on Lagrangian duality theory that applies to nondifferentiable, nonconvex, constrained problems. In this study we present two new versions of the method, built from computational implementations of other algorithms. The first version, called IED+NFDNA, incorporates an implementation of the Nonsmooth Feasible Direction Nonconvex Algorithm (NFDNA). In numerical experiments with test problems from the literature, it performed satisfactorily compared with the original IED and with other optimization solvers. A second version, IED+GA, was developed using genetic algorithms, with the aim of further improving the method by reducing its dependence on initial parameters and on the computation of subgradients. Besides solving test problems, IED+GA obtained good results when applied to engineering problems.
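As a rough illustration of the role a genetic algorithm can play in IED+GA, the sketch below minimizes a nonsmooth, nonconvex augmented-Lagrangian-like function over a box without any subgradient information. The objective, bounds, and GA parameters are hypothetical assumptions, and the thesis's own coupling of the GA to the IED iteration is not reproduced here.

# Minimal derivative-free sketch: a small genetic algorithm minimizing a
# nonsmooth, nonconvex function over a box, i.e. the kind of inner
# minimization IED+GA delegates to a GA instead of subgradient steps.
# All data and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])   # compact box

def objective(x):
    # Hypothetical augmented-Lagrangian-like objective: nonsmooth and nonconvex.
    return np.abs(x[0] - 1.0) + np.abs(x[1] + 0.5) * np.cos(3.0 * x[0]) + 0.1 * x @ x

def ga_minimize(obj, pop_size=40, generations=100, mut_scale=0.2):
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fit = np.array([obj(ind) for ind in pop])
        # Tournament selection: keep the better of two randomly paired individuals.
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[a] < fit[b])[:, None], pop[a], pop[b])
        # Blend crossover between consecutive parents.
        w = rng.uniform(size=(pop_size, 1))
        children = w * parents + (1 - w) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped back into the box.
        children += rng.normal(scale=mut_scale, size=children.shape)
        children = np.clip(children, lo, hi)
        # Elitism: carry over the best individual of the current population.
        children[0] = pop[np.argmin(fit)]
        pop = children
    fit = np.array([obj(ind) for ind in pop])
    return pop[np.argmin(fit)], fit.min()

x_best, f_best = ga_minimize(objective)
print("best point:", x_best, "objective:", f_best)

Because the GA only samples function values, it needs neither subgradients nor carefully tuned starting multipliers, which is the motivation the abstract gives for the IED+GA variant.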
3

Optimisation convexe non-différentiable et méthodes de décomposition en recherche opérationnelle / Convex nonsmooth optimization and decomposition methods in operations research

Zaourar, Sofia 04 November 2014 (has links)
Decomposition methods are an application of the divide-and-conquer principle to large-scale optimization. The idea is to decompose a given optimization problem into a sequence of easier subproblems. Although successful for many applications, these methods still present challenges. In this thesis, we propose methodological and algorithmic improvements to decomposition methods and illustrate them on several operations research problems. Our approach relies heavily on convex analysis and nonsmooth optimization.

In constraint decomposition (or Lagrangian relaxation) applied to short-term electricity generation management, even the subproblems are too difficult to solve exactly. When they are solved approximately, the resulting prices show an unstable, noisy behaviour. We present a simple way to improve the structure of the prices by penalizing this noisy behaviour, in particular using a total variation regularization, and we illustrate the consistency of the regularization on real-life problems from EDF.

We then consider variable decomposition (or Benders decomposition), which can converge very slowly. Taking a nonsmooth optimization point of view, we address the instability of the underlying Benders cutting-plane algorithm and present an algorithmic stabilization inspired by bundle methods for convex optimization. The acceleration provided by this stabilization is illustrated on network design and hub location problems.

We also study more general convex nonsmooth problems whose objective function is expensive to evaluate, a situation that typically arises in decomposition methods. We show that there often exists extra information about the problem, cheap to obtain but of unknown accuracy, that the algorithms do not use. We propose a way to incorporate this coarse information into classical nonsmooth optimization algorithms and apply it successfully to two-stage stochastic problems.

Finally, we introduce a decomposition strategy for the machine reassignment problem. This decomposition leads to a new variant of vector bin packing in which the bins have variable sizes. We propose fast and efficient heuristics for this problem that improve on state-of-the-art results for vector bin packing, and an adaptation of these heuristics also produces feasible solutions for the Google instances of the machine reassignment problem.
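To make the total-variation idea concrete, the sketch below smooths a noisy price profile, of the kind that approximately solved Lagrangian subproblems can produce, by penalizing its total variation. The data are synthetic, the penalty weight is arbitrary, and the absolute value is replaced by a Huber surrogate so a standard scipy solver applies; this illustrates the regularization principle rather than the thesis's actual scheme for EDF's generation-management duals.

# Illustrative total-variation regularization of a noisy price signal:
# minimize ||p - p_noisy||^2 + lam * sum_t |p_{t+1} - p_t|, with |.| replaced
# by a smooth Huber surrogate so a quasi-Newton solver can be used directly.
# Synthetic data and parameters; not the thesis's EDF price-smoothing scheme.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 96                                                  # e.g. one day of 15-minute prices
truth = np.where(np.arange(T) < T // 2, 30.0, 55.0)    # piecewise-constant "true" prices
p_noisy = truth + 5.0 * rng.normal(size=T)             # oscillations from inexact subproblems

def huber(z, delta=1e-3):
    # Smooth surrogate of |z|, accurate away from zero.
    return np.where(np.abs(z) <= delta, 0.5 * z**2 / delta, np.abs(z) - 0.5 * delta)

def objective(p, lam=20.0):
    fidelity = np.sum((p - p_noisy) ** 2)               # stay close to the computed prices
    total_variation = np.sum(huber(np.diff(p)))         # penalize oscillations
    return fidelity + lam * total_variation

res = minimize(objective, p_noisy, method="L-BFGS-B")
p_smooth = res.x
print("noisy TV:", np.abs(np.diff(p_noisy)).sum(),
      "smoothed TV:", np.abs(np.diff(p_smooth)).sum())

The total-variation penalty favours piecewise-constant price profiles, which is why it damps the spurious oscillations while preserving genuine price jumps.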
