1 |
Relaxations in mixed-integer quadratically constrained programming and robust programming. Wang, Guanglei, 28 November 2016 (has links)
Many real-life problems are characterized by making decisions with current information to achieve certain objectives. Mathematical programming has been developed as a successful tool to model and solve a wide range of such problems. However, many seemingly easy problems remain challenging, and even easy problems such as linear programs can become difficult in the face of uncertainty. Motivated by a telecommunication problem in which cloud virtual machines must be assigned to servers at minimum cost, we employ several mathematical programming tools to solve the problem efficiently and develop new tools for more general problems. In brief, our work can be summarized as follows.
We provide an exact formulation and several reformulations of the cloud virtual machine assignment problem. Several valid inequalities are then used to strengthen the exact formulation, thereby accelerating the solution procedure significantly. In addition, an effective Lagrangian decomposition is proposed; we show that the bound it provides is strong, both theoretically and numerically, and give a geometric result indicating that it is generally much stronger than the continuous relaxation bound. A relaxation hierarchy is also obtained by considering a sequence of covers of the demand set. Finally, a symmetry-induced model is proposed which may reduce a large number of bilinear terms in some special cases. Motivated by the virtual machine assignment problem, we also investigate a couple of general methods for the approximation of convex and concave envelopes in bilinear optimization over a hypercube. We establish several theoretical connections between different techniques and prove the equivalence of two seemingly different relaxed formulations. An interesting research direction is also discussed. To address issues of uncertainty, a novel paradigm for general linear problems with uncertain parameters is proposed. This paradigm, termed multipolar robust optimization, generalizes the notions of static robustness, affinely adjustable robustness and fully adjustable robustness, and fills the gaps in between. Several known results follow as consequences of this new paradigm. Further, we prove that the multipolar approach can generate a sequence of upper bounds and a sequence of lower bounds at the same time, and that both sequences converge to the robust value of the fully adjustable robust counterpart under some mild assumptions.
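To make the envelope discussion concrete, the classical starting point for such relaxations is the set of McCormick inequalities for a single bilinear term w = xy over the box x \in [x^L, x^U], y \in [y^L, y^U] (standard background, with generic symbols, not the specific constructions developed in the thesis):

\[
\begin{aligned}
w &\ge x^{L} y + y^{L} x - x^{L} y^{L}, &\qquad w &\ge x^{U} y + y^{U} x - x^{U} y^{U},\\
w &\le x^{U} y + y^{L} x - x^{U} y^{L}, &\qquad w &\le x^{L} y + y^{U} x - x^{L} y^{U}.
\end{aligned}
\]

These four inequalities describe exactly the convex and concave envelopes of xy over the box, and relaxations of bilinear problems over a hypercube are typically built and compared starting from this block. As for the robustness notions, for an uncertain linear constraint system with uncertainty set U, static robustness requires a single here-and-now decision feasible for every realization in U, fully adjustable robustness lets the recourse variables depend arbitrarily on the realization, and affinely adjustable robustness restricts that dependence to affine decision rules of the form y(\zeta) = y_0 + W\zeta; the multipolar approach described above fills the gaps between these extremes.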
|
2 |
Topics in convex optimization: interior-point methods, conic duality and approximations. Glineur, François, 26 January 2001 (has links)
Optimization is a scientific discipline that lies at the boundary between pure and applied mathematics. Indeed, while on the one hand some of its developments involve rather theoretical concepts, its most successful algorithms are on the other hand heavily used by numerous companies to solve scheduling and design problems on a daily basis.
Our research started with the study of the conic formulation for convex optimization problems (recalled in its standard form below). This approach was already studied in the seventies but has recently gained a lot of interest due to the development of a new class of algorithms called interior-point methods. This setting is able to exploit the two most important characteristics of convexity:
- a very rich duality theory (existence of a dual problem that is strongly related to the primal problem, with a very symmetric formulation),
- the ability to solve these problems efficiently, both from the theoretical (polynomial algorithmic complexity) and practical (implementations allowing the resolution of large-scale problems) points of view.
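For reference, the conic formulation in question is the standard primal-dual pair below, written with generic problem data A, b, c and a closed convex cone \mathcal{K} (textbook background, not material specific to this thesis):

\[
\text{(P)}\quad \min_{x}\ c^{T}x \ \ \text{s.t.}\ Ax = b,\ x \in \mathcal{K},
\qquad\qquad
\text{(D)}\quad \max_{y,s}\ b^{T}y \ \ \text{s.t.}\ A^{T}y + s = c,\ s \in \mathcal{K}^{*},
\]

where \mathcal{K}^{*} = \{ s : s^{T}x \ge 0 \ \text{for all}\ x \in \mathcal{K} \} denotes the dual cone. Weak duality is indeed a direct consequence of convexity: any primal-dual feasible pair satisfies c^{T}x - b^{T}y = s^{T}x \ge 0.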
Most of the research in this area involved so-called self-dual cones, where the dual problem has exactly the same structure as the primal: the most famous classes of convex optimization problems (linear optimization, convex quadratic optimization and semidefinite optimization) belong to this category. We brought some contributions in this field:
- a survey of interior-point methods for linear optimization, with an emphasis on the fundamental principles that lie behind the design of these algorithms,
- a computational study of a method of linear approximation of convex quadratic optimization (more precisely, the second-order cone that can be used in the formulation of quadratic problems is replaced by a polyhedral approximation whose accuracy can be guaranteed a priori; a simple illustration of this idea is given after this list),
- an application of semidefinite optimization to classification, whose principle consists in separating different classes of patterns using ellipsoids defined in the feature space (this approach was successfully applied to the prediction of student grades).
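To see what an a priori accuracy guarantee for a polyhedral approximation can look like, consider the following elementary construction (given for intuition only; it is not the more economical construction studied in the thesis). The three-dimensional second-order cone \{ (x_1, x_2, t) : \sqrt{x_1^2 + x_2^2} \le t \} can be relaxed by the k linear inequalities (k \ge 3)

\[
\cos\!\left(\frac{2\pi i}{k}\right) x_1 + \sin\!\left(\frac{2\pi i}{k}\right) x_2 \le t, \qquad i = 0, 1, \dots, k-1,
\]

which define a polyhedral cone containing the second-order cone, while every point satisfying them obeys \sqrt{x_1^2 + x_2^2} \le t / \cos(\pi/k). The relative accuracy 1/\cos(\pi/k) - 1 \approx \pi^2/(2k^2) is therefore known in advance. Constructions in the spirit of Ben-Tal and Nemirovski achieve a prescribed accuracy \varepsilon with a number of inequalities and additional variables growing only logarithmically in 1/\varepsilon, which is what makes the linear approximation of quadratic problems practical.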
However, our research focussed on a much less studied category of convex problems which does not rely on self-dual cones, i.e. structured problems whose dual is formulated very differently from the primal. We studied in particular
- geometric optimization, developed in the late sixties, which possesses numerous applications in the field of engineering (entropy optimization, used in information theory, also belongs to this class of problems),
- l_p-norm optimization, a generalization of linear and convex quadratic optimization, which allows the formulation of constraints built around expressions of the form |ax+b|^p (where p is a fixed exponent strictly greater than 1); typical constraint shapes for both classes are sketched after this list.
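To fix ideas, a geometric optimization constraint in exponential form and an l_p-norm-type constraint can be written as follows (textbook-style shapes with hypothetical data a_k, b_k, p_k, c, d; the thesis works with its own conic formulations of these classes):

\[
\sum_{k} e^{\,a_k^{T}y + b_k} \le 1
\qquad\text{and}\qquad
\sum_{k} \frac{1}{p_k}\,\bigl| a_k^{T}x + b_k \bigr|^{p_k} \le c^{T}x + d, \qquad p_k > 1.
\]

Both left-hand sides are convex (a sum of exponentials of affine functions, and a sum of p-th powers of absolute values of affine functions with p > 1), so each constraint defines a convex feasible region.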
For each of these classes of problems, we introduced a new type of convex cone that made their formulation as standard conic problems possible. This allowed us to derive very simplified proofs of the classical duality results pertaining to these problems, notably weak duality (a mere consequence of convexity) and the absence of a duality gap (strong duality property without any constraint qualification, which does not hold in the general convex case). We also uncovered a very surprising result that stipulates that geometric optimization can be viewed as a limit case of l_p-norm optimization. Encouraged by the similarities we observed, we developed a general framework that encompasses these two classes of problems and unifies all the previously obtained conic formulations.
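A classical identity gives a flavour of how power expressions can turn into the exponentials appearing in geometric optimization (offered here purely as intuition; the precise limit argument used in the thesis may differ):

\[
\lim_{p \to \infty} \left( 1 + \frac{x}{p} \right)^{p} = e^{x}.
\]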
We also turned our attention to the design of interior-point methods to solve these problems. The theory of polynomial algorithms for convex optimization developed by Nesterov and Nemirovski asserts that the main ingredient for these methods is a computable self-concordant barrier function for the corresponding cones. We were able to define such a barrier function in the case of l_p-norm optimization (whose parameter, which is the main determining factor in the algorithmic complexity of the method, is proportional to the number of variables in the formulation and independent of p) as well as in the case of the general framework mentioned above.
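For context, the standard definitions at play here are the following (textbook background). A three times differentiable convex function F defined on the interior of a cone \mathcal{K} is self-concordant if

\[
\bigl| D^{3}F(x)[h,h,h] \bigr| \le 2 \bigl( D^{2}F(x)[h,h] \bigr)^{3/2}
\quad\text{for all admissible } x \text{ and all directions } h,
\]

and it is a \nu-self-concordant barrier for \mathcal{K} if, in addition, F(x) \to \infty as x approaches the boundary of \mathcal{K} and

\[
\bigl| DF(x)[h] \bigr| \le \sqrt{\nu}\, \bigl( D^{2}F(x)[h,h] \bigr)^{1/2}.
\]

The parameter \nu is the complexity parameter referred to above: short-step path-following methods reach an \varepsilon-accurate solution in O(\sqrt{\nu}\, \log(1/\varepsilon)) iterations, which is why a barrier whose parameter grows only with the number of variables and is independent of p is significant.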
Finally, we contributed a survey of the self-concordancy property, improving some useful results about the value of the complexity parameter for certain categories of barrier functions and providing some insight into the reason why the most commonly adopted definition for self-concordant functions is the best possible.
|