1

Análise não suave e aplicações em otimização / Nonsmooth analysis and applications in optimization

Costa, Tiago Mendonça de [UNESP] 28 February 2011 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In this work we present an approach relating nonsmooth analysis to optimization. First we study concepts of nonsmooth analysis such as normal cones, Bouligand's tangent cone, and the proximal, strict, limiting and Clarke subdifferentials. With these concepts we exhibit a series of results, for example a characterization of Lipschitz functions; sum, product and maximum rules for subdifferentials of lower semicontinuous functions; and a nonsmooth version of the Lagrange multiplier rule, that is, first-order necessary optimality conditions for nonsmooth optimization problems. We also study second-order optimality conditions for nonsmooth optimization problems; for this, it was necessary to present further concepts and properties such as the generalized Hessian, the approximate Jacobian and the approximate Hessian. After presenting these results, we analyse two theorems that provide, with different approaches, second-order sufficient optimality conditions for nonsmooth optimization problems, and the work closes with a result that can be regarded as a unification of these two theorems.
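For reference, the Clarke subdifferential invoked in this abstract is usually defined through the generalized directional derivative of a locally Lipschitz function f (a standard textbook definition, quoted here for orientation rather than taken from the thesis):

    \[
    f^{\circ}(x; v) \;=\; \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + t v) - f(y)}{t},
    \qquad
    \partial_{C} f(x) \;=\; \{\, \xi : \langle \xi, v \rangle \le f^{\circ}(x; v) \ \text{for all } v \,\}.
    \]

The nonsmooth first-order condition mentioned above then takes the familiar Fermat form $0 \in \partial_{C} f(x^{*})$ at an unconstrained local minimizer $x^{*}$.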
2

Derivative Free Algorithms For Large Scale Non-smooth Optimization And Their Applications

Tor, Ali Hakan 01 February 2013 (has links) (PDF)
In this thesis, various numerical methods are developed to solve nonsmooth and, in particular, nonconvex optimization problems. More specifically …
3

Decompositions and representations of monotone operators with linear graphs

Yao, Liangjin 05 1900 (has links)
We consider the decomposition of a maximal monotone operator into the sum of an antisymmetric operator and the subdifferential of a proper lower semicontinuous convex function. This is a variant of the well-known decomposition of a matrix into its symmetric and antisymmetric part. We analyze in detail the case when the graph of the operator is a linear subspace. Equivalent conditions of monotonicity are also provided. We obtain several new results on auto-conjugate representations including an explicit formula that is built upon the proximal average of the associated Fitzpatrick function and its Fenchel conjugate. These results are new and they both extend and complement recent work by Penot, Simons and Zălinescu. A nonlinear example shows the importance of the linearity assumption. Finally, we consider the problem of computing the Fitzpatrick function of the sum, generalizing a recent result by Bauschke, Borwein and Wang on matrices to linear relations.
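For context, the Fitzpatrick function of a monotone operator A that this abstract builds on is standardly defined by (the usual definition from the literature, not a statement about the thesis's notation):

    \[
    F_{A}(x, x^{*}) \;=\; \sup_{(a,\, a^{*}) \in \operatorname{gra} A}
    \big( \langle x, a^{*} \rangle + \langle a, x^{*} \rangle - \langle a, a^{*} \rangle \big),
    \]

and the decomposition studied here is the operator analogue of the elementary matrix identity $M = \tfrac{1}{2}(M + M^{\top}) + \tfrac{1}{2}(M - M^{\top})$.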
4

A Semismooth Newton Method For Generalized Semi-infinite Programming Problems

Tezel Ozturan, Aysun 01 July 2010 (has links) (PDF)
Semi-infinite programming problems are a class of optimization problems in finitely many variables subject to infinitely many inequality constraints. If the infinite index set of the inequality constraints depends on the decision variable, the problem is called a generalized semi-infinite programming problem (GSIP); if the infinite index set is fixed, it is called a standard semi-infinite programming problem (SIP). In this thesis, convergence of a semismooth Newton method for generalized semi-infinite programming problems with convex lower-level problems is investigated. In this method, the upper- and lower-level Karush-Kuhn-Tucker conditions of the optimization problem are reformulated, using nonlinear complementarity problem (NCP) functions, as a semismooth system of equations. A possible violation of strict complementary slackness causes nonsmoothness. In this study, we show that the standard regularity condition for convergence of the semismooth Newton method is satisfied under natural assumptions for semi-infinite programs. In fact, under the Reduction Ansatz in the lower-level problem and strong stability in the reduced upper-level problem, this regularity condition is satisfied; in particular, we do not have to assume strict complementary slackness in the upper level. Furthermore, in this thesis we assume strict complementary slackness neither in the upper nor in the lower level. When strict complementary slackness is violated in the lower level, the auxiliary functions of the locally reduced problem are not necessarily twice continuously differentiable. Even so, we can show that a standard regularity condition for quadratic convergence of the semismooth Newton method holds under a natural assumption for semi-infinite programs. Numerical examples from, among others, design centering and robust optimization illustrate the performance of the method.
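The reformulation sketched in this abstract rests on NCP functions; the Fischer-Burmeister function is the most common example (given here purely as an illustration of the idea, the thesis may employ a different NCP function):

    \[
    \varphi_{\mathrm{FB}}(a, b) \;=\; \sqrt{a^{2} + b^{2}} - a - b,
    \qquad
    \varphi_{\mathrm{FB}}(a, b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0,
    \]

so every complementarity pair in the KKT system can be replaced by a single equation; the resulting system is semismooth because $\varphi_{\mathrm{FB}}$ fails to be differentiable at the origin.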
5

Análise não suave e aplicações em otimização / Nonsmooth analysis and applications in optimization

Costa, Tiago Mendonça de. January 2011 (has links)
Advisor: Geraldo Nunes Silva / Committee member: Luis Antônio Fernandes de Oliveira / Committee member: Yurilev Chalco Cano / Abstract: In this work we present an approach relating nonsmooth analysis to optimization. First we study concepts of nonsmooth analysis such as normal cones, Bouligand's tangent cone, and the proximal, strict, limiting and Clarke subdifferentials. With these concepts we exhibit a series of results, for example a characterization of Lipschitz functions; sum, product and maximum rules for subdifferentials of lower semicontinuous functions; and a nonsmooth version of the Lagrange multiplier rule, that is, first-order necessary optimality conditions for nonsmooth optimization problems. We also study second-order optimality conditions for nonsmooth optimization problems; for this, it was necessary to present further concepts and properties such as the generalized Hessian, the approximate Jacobian and the approximate Hessian. After presenting these results, we analyse two theorems that provide, with different approaches, second-order sufficient optimality conditions for nonsmooth optimization problems, and the work closes with a result that can be regarded as a "unification" of these two theorems / Master's degree
6

Existência e unicidade da solução de um problema de plasma confinado / Existence and uniqueness of the solution of a confined plasma problem

Montero, Carlos Alberto Almendras 25 February 2014 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work, the objective is to study the existence and uniqueness of a weak solution of a nonlinear boundary value problem derived from a model that describes the equilibrium of a confined plasma. For this purpose, we formulate an equivalent problem and establish conditions for this new problem. Then, using the theory of the subdifferential and studying an eigenvalue problem, we show that this new problem has a solution and, moreover, that it is unique.
7

Optimisation multicritère sous incertitudes : un algorithme de descente stochastique / Multiobjective optimization under uncertainty : a stochastic descent algorithm

Mercier, Quentin 10 October 2018 (has links)
This thesis deals with unconstrained multiobjective optimization when the objectives are written as expectations of random functions. The randomness is modelled through random variables, which are assumed not to affect the optimization variables of the problem. A descent algorithm is proposed that yields Pareto solutions without having to estimate the expectations. Using results from convex analysis, it is possible to construct a common descent vector, that is, a descent direction for all the objectives simultaneously, for a given draw of the random variables. An iterative sequence is then built by stepping along this common descent vector, computed at the current point for a single independent draw of the random variables. This construction avoids the costly estimation of the expectations at each step of the algorithm. It is then possible to prove mean-square and almost-sure convergence of the sequence towards Pareto solutions of the expected-value problem, and to obtain a convergence rate result when the step-size sequence is well chosen.
After proposing some numerical enhancements of the algorithm, it is tested on multiple test cases against two classical algorithms from the literature. The results of the three algorithms are compared using two measures devised for multiobjective optimization and analysed through performance profiles. Methods are then proposed to handle two types of constraint and are illustrated on mechanical structure optimization problems. The first method consists in penalising the objective functions using exact penalty functions when the constraint is deterministic. When the constraint is expressed as a probability, it is replaced by an additional objective: the probability is reformulated as an expectation of an indicator function, and this new problem is solved using the algorithm proposed in the thesis without having to estimate the probability during the optimization process.
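As a rough illustration of the common-descent idea described above, the following minimal Python sketch handles two stochastic objectives; the toy objectives, the 1/k step sizes and the closed-form minimum-norm combination of two gradients are illustrative choices of the editor, not details taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative (not from the thesis) stochastic objectives:
    # F1(x, xi) = ||x - xi||^2 and F2(x, xi) = ||x + xi - c||^2, whose expectations
    # are minimized at 0 and at c, so the Pareto set of the expected objectives is
    # the segment joining those two points.
    c = np.array([2.0, 1.0])

    def stochastic_grads(x, xi):
        g1 = 2.0 * (x - xi)        # gradient of F1 at x for this draw of xi
        g2 = 2.0 * (x + xi - c)    # gradient of F2 at x for this draw of xi
        return g1, g2

    def common_descent(g1, g2):
        # Minimum-norm convex combination of two gradients (closed form for m = 2);
        # with more objectives this becomes a small quadratic program over the simplex.
        diff = g1 - g2
        denom = float(diff @ diff)
        w = 0.5 if denom == 0.0 else float(np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0))
        return w * g1 + (1.0 - w) * g2

    x = np.array([5.0, -3.0])
    for k in range(1, 5001):
        xi = rng.normal(scale=0.5, size=2)            # a single independent draw per iteration
        g1, g2 = stochastic_grads(x, xi)
        x = x - (1.0 / k) * common_descent(g1, g2)    # decreasing step sizes

    print(x)  # ends up near the Pareto segment joining [0, 0] and c

The point of the construction is visible in the loop: the expectations are never estimated; each iteration uses one fresh draw of the random variables and still yields a direction that is a descent direction for both objectives at once for that draw.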
8

Contributions à l'analyse convexe sequentielle / Contributions to sequential convex analysis

Lopez, Olivier 16 December 2010 (has links)
The first results in convex analysis without any qualification condition were established about fifteen years ago, and one may say that sequential convex analysis began with those results. They essentially concerned the finite sum of convex functions, composition with a vector-valued convex mapping, and convex mathematical programming. This thesis provides several contributions to sequential convex analysis. The first part is devoted to obtaining, without qualification conditions, subdifferential calculus rules expressed in sequential form. The following cases are considered: the upper envelope of an arbitrary family of lower semicontinuous convex functions defined on a Banach space; a general convex integral functional defined on a space of integrable functions; and the continuous (or integral) sum of lower semicontinuous convex functions defined on a separable Banach space. In the second part, necessary and sufficient sequential optimality conditions are established, without qualification hypotheses on the data, for various types of optimization problems and discrete or continuous optimal control problems.
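For contrast with the "no qualification condition" theme of this abstract, recall the classical qualified sum rule of Moreau-Rockafellar (a standard fact, stated here only as background): for proper lower semicontinuous convex functions f and g,

    \[
    \partial (f + g)(x) \;\supseteq\; \partial f(x) + \partial g(x),
    \]

with equality when, for instance, f is finite and continuous at some point of dom g. Sequential formulas of the kind developed in the thesis recover such equalities without assumptions of this type, at the price of expressing the subdifferential through limits of approximate subgradients taken at nearby points.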
