1

A Semismooth Newton Method for Generalized Semi-infinite Programming Problems

Tezel Ozturan, Aysun, 01 July 2010
Semi-infinite programming problems form a class of optimization problems in finitely many variables that are subject to infinitely many inequality constraints. If the infinite index set of the inequality constraints depends on the decision variable, the problem is called a generalized semi-infinite programming problem (GSIP); if the index set is fixed, it is called a standard semi-infinite programming problem (SIP). In this thesis, the convergence of a semismooth Newton method for generalized semi-infinite programming problems with convex lower level problems is investigated. In this method, the upper and lower level Karush-Kuhn-Tucker conditions of the optimization problem are reformulated as a semismooth system of equations using nonlinear complementarity problem (NCP) functions. A possible violation of strict complementary slackness causes nonsmoothness. We show that the standard regularity condition for convergence of the semismooth Newton method is satisfied under natural assumptions for semi-infinite programs: under the Reduction Ansatz in the lower level problem and strong stability in the reduced upper level problem, this regularity condition holds. In particular, strict complementary slackness does not have to be assumed in the upper level; in fact, we assume strict complementary slackness neither in the upper nor in the lower level. If strict complementary slackness is violated in the lower level, the auxiliary functions of the locally reduced problem are not necessarily twice continuously differentiable, but we can still show that a standard regularity condition for quadratic convergence of the semismooth Newton method holds under a natural assumption for semi-infinite programs. Numerical examples from, among others, design centering and robust optimization illustrate the performance of the method.
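To make the reformulation concrete, the sketch below applies the same idea to a two-variable toy problem: the KKT conditions are rewritten with the Fischer-Burmeister NCP function and solved by a plain semismooth Newton iteration. It is a minimal illustration of the technique the abstract refers to, not the thesis's implementation; the problem data and all function names are invented for the example.

```python
import numpy as np

# Minimal sketch (not the thesis code): reformulate the KKT system of
#   min (x1-1)^2 + (x2-2)^2   s.t.  x1 + x2 - 2 <= 0
# with the Fischer-Burmeister NCP function and apply a plain semismooth
# Newton iteration. All names and data are illustrative.

def fb(a, b):
    """Fischer-Burmeister function: fb(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def fb_grad(a, b):
    """One element of the generalized gradient of fb (regularized at the origin)."""
    r = np.sqrt(a * a + b * b) + 1e-12
    return a / r - 1.0, b / r - 1.0

def kkt_residual(z):
    x1, x2, lam = z
    g = x1 + x2 - 2.0                      # inequality constraint g(x) <= 0
    return np.array([
        2.0 * (x1 - 1.0) + lam,            # stationarity w.r.t. x1
        2.0 * (x2 - 2.0) + lam,            # stationarity w.r.t. x2
        fb(lam, -g),                       # complementarity via the NCP function
    ])

def kkt_jacobian(z):
    x1, x2, lam = z
    g = x1 + x2 - 2.0
    da, db = fb_grad(lam, -g)              # element of the Clarke Jacobian
    return np.array([
        [2.0, 0.0, 1.0],
        [0.0, 2.0, 1.0],
        [-db, -db, da],                    # chain rule: d/dx fb(lam, -g) = -db * dg/dx
    ])

z = np.array([0.0, 0.0, 1.0])              # starting point
for k in range(20):
    F = kkt_residual(z)
    if np.linalg.norm(F) < 1e-10:
        break
    z = z - np.linalg.solve(kkt_jacobian(z), F)

print(f"iterations: {k}, x = {z[:2]}, lambda = {z[2]:.4f}")  # expect x = (0.5, 1.5), lambda = 1
```

At the solution the constraint is active with multiplier one, so the complementarity residual vanishes even though strict complementary slackness would be the delicate case for a max-based reformulation.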
2

Tópicos em penalidades exatas diferenciáveis / Topics in differentiable exact penalties

Ellen Hidemi Fukuda, 11 March 2011
During the 1970s and 1980s, methods based on differentiable exact penalty functions were developed to solve constrained nonlinear optimization problems. One drawback of these penalties is that their gradients contain second-order terms, which prevents the use of Newton-type methods to solve the problem. To overcome this difficulty, we use an idea for the construction of exact penalties for variational inequalities, introduced recently by André and Silva. The construction consists of incorporating a multiplier estimate, proposed by Glad and Polak, into the augmented Lagrangian for variational inequalities. In this work, we extend the multiplier estimate to general equality and inequality constraints and weaken the regularity assumption. As a result, we obtain a continuously differentiable exact penalty function and a new equation reformulation of the KKT system associated with nonlinear problems. The structure of this reformulation allows the use of the semismooth Newton method, and a local superlinear convergence rate can be proved. Moreover, the exact penalty can be used to globalize the method, leading to a Gauss-Newton-type approach. We conclude with numerical experiments on the CUTE collection of test problems.
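As a rough illustration of the construction described above, the following sketch computes a Glad-Polak-style least-squares multiplier estimate lambda(x) and plugs it into the augmented Lagrangian for inequality constraints, giving a once-differentiable penalty that coincides with the objective at a KKT point. The problem data, the weighting in the estimate, and the penalty parameter are illustrative assumptions, not the formulation used in the thesis.

```python
import numpy as np

# Hedged sketch: a least-squares (Glad-Polak-style) multiplier estimate lambda(x)
# inserted into the augmented Lagrangian for inequality constraints. Toy problem:
#   min (x1-1)^2 + (x2-2)^2   s.t.  x1 + x2 - 2 <= 0,  -x1 <= 0.

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def g(x):                                   # inequality constraints g(x) <= 0
    return np.array([x[0] + x[1] - 2.0, -x[0]])

def jac_g(x):
    return np.array([[1.0, 1.0], [-1.0, 0.0]])

def multiplier_estimate(x):
    """Least-squares estimate: minimize ||grad_f + J^T lam||^2 + ||diag(g) lam||^2 over lam."""
    J, gv = jac_g(x), g(x)
    A = J @ J.T + np.diag(gv ** 2)
    return np.linalg.solve(A, -J @ grad_f(x))

def exact_penalty(x, c=10.0):
    """Augmented Lagrangian for inequalities evaluated at lam = lambda(x)."""
    lam, gv = multiplier_estimate(x), g(x)
    shifted = np.maximum(0.0, lam + c * gv)
    return f(x) + (np.sum(shifted ** 2) - np.sum(lam ** 2)) / (2.0 * c)

x = np.array([0.5, 1.5])                    # KKT point of the toy problem
print(multiplier_estimate(x))               # approx [1, 0]
print(exact_penalty(x), f(x))               # the two values agree at a KKT point
```

Because lambda(x) depends smoothly on x away from degenerate points, the penalty is differentiable even though it encodes the complementarity information of the KKT system.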
3

Approximation of nonsmooth optimization problems and elliptic variational inequalities with applications to elasto-plasticity

Rösel, Simon, 09 May 2017
Optimization problems and variational inequalities over Banach spaces are subjects of substantial interest, since both problem classes provide an abstract framework for numerous applications from different fields. Solutions to these problems usually cannot be determined directly. Following an introduction in part I, part II presents several approximation methods for convex-constrained nonsmooth variational inequality and optimization problems, including discretization and regularization approaches. In this general framework, certain density properties of the convex feasible set turn out to be essential prerequisites for the consistency of an abstract class of perturbations. We then focus on constraint sets in Sobolev spaces that are defined by pointwise bounds on the function value, and several density results are proven for this type of constraint. Part III considers the quasi-static contact problem of associative elasto-plasticity with hardening at small strains. The corresponding time-incremental problem can be equivalently formulated as a nonsmooth, constrained minimization problem or as a mixed variational inequality over the convex constraint set. We propose an infinite-dimensional path-following semismooth Newton method for the solution of the time-discrete plastic contact problem, where each path problem can be solved locally at a superlinear rate of convergence with contraction rates independent of the discretization. Several numerical examples support the theoretical results. The last part is devoted to the quasi-static problem of perfect (Prandtl-Reuss) plasticity. Building upon recent developments in the study of the (incremental) primal problem, we establish a reduced formulation, which is shown to be a Fenchel predual problem of the classical time-discretized stress problem. This also yields new primal-dual optimality conditions. To solve the time-discrete problem, a modified visco-plastic regularization is proposed, and we prove the convergence of this new approximation scheme.
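The following toy sketch mimics the path-following idea in finite dimensions: a pointwise bound constraint is replaced by a Moreau-Yosida-type penalty, each penalized problem is solved by a semismooth Newton method, and the penalty parameter is driven along a path with warm starts. The specific problem, penalty form, and parameter schedule are assumptions made for illustration and are not taken from the thesis.

```python
import numpy as np

# Illustrative sketch of path-following with semismooth Newton subproblem solves:
#   min 0.5*||u - b||^2   s.t.  |u_i| <= 1
# The pointwise bound is replaced by a Moreau-Yosida penalty with parameter gamma,
# each penalized problem is solved by semismooth Newton, and gamma is increased
# along a path with warm starts. Data and schedule are made up for the demo.

def solve_penalized(b, gamma, u0, tol=1e-12, max_iter=50):
    """Semismooth Newton on  u - b + gamma * max(0, |u|-1) * sign(u) = 0."""
    u = u0.copy()
    for _ in range(max_iter):
        excess = np.maximum(0.0, np.abs(u) - 1.0)
        F = u - b + gamma * excess * np.sign(u)
        if np.linalg.norm(F) < tol:
            break
        active = (np.abs(u) > 1.0).astype(float)    # generalized derivative choice
        dF = 1.0 + gamma * active                   # diagonal Newton matrix
        u = u - F / dF
    return u

rng = np.random.default_rng(0)
b = 2.0 * rng.standard_normal(8)                    # unconstrained minimizer, may violate |u| <= 1
u = b.copy()
for gamma in [1.0, 10.0, 1e2, 1e4, 1e8]:            # path in the penalty parameter
    u = solve_penalized(b, gamma, u)                # warm start from the previous path point

print(np.max(np.abs(u)))                            # close to 1 where b exceeds the bound
print(np.allclose(u, np.clip(b, -1.0, 1.0), atol=1e-3))  # the path limit is the projection of b
```

In the infinite-dimensional setting described in the abstract, the point of the path-following strategy is that the inner semismooth Newton solver keeps mesh-independent contraction rates along the path, which the warm-started outer loop is meant to exploit.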
