
Topics in Convex Optimization: Interior-Point Methods, Conic Duality and Approximations

Optimization is a scientific discipline that lies at the boundary between pure and applied mathematics. Indeed, while some of its developments involve rather theoretical concepts, its most successful algorithms are heavily used by numerous companies to solve scheduling and design problems on a daily basis.

Our research started with the study of the conic formulation of convex optimization problems. This approach was already studied in the seventies but has recently gained renewed interest due to the development of a new class of algorithms called interior-point methods. This setting is able to exploit the two most important characteristics of convexity:

- a very rich duality theory (the existence of a dual problem that is strongly related to the primal problem, with a very symmetric formulation);
- the ability to solve these problems efficiently, both from the theoretical (polynomial algorithmic complexity) and practical (implementations allowing the resolution of large-scale problems) points of view.

Most of the research in this area has involved so-called self-dual cones, for which the dual problem has exactly the same structure as the primal: the most famous classes of convex optimization problems (linear optimization, convex quadratic optimization and semidefinite optimization) belong to this category.
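As a reminder of the standard setting alluded to above (textbook conic duality, not a result specific to this thesis), the symmetric primal-dual pair over a closed convex cone K and its dual cone K* can be written as:

```latex
% Standard conic primal-dual pair over a closed convex cone K,
% with dual cone K^* = \{ s : \langle s, x \rangle \ge 0 \ \forall x \in K \}.
\begin{aligned}
\text{(P)}\quad & \min_{x} \; c^T x \quad \text{s.t. } Ax = b,\; x \in K,\\
\text{(D)}\quad & \max_{y,\,s} \; b^T y \quad \text{s.t. } A^T y + s = c,\; s \in K^*.
\end{aligned}
% Weak duality is a mere consequence of the cone definitions:
% c^T x - b^T y = (A^T y + s)^T x - (Ax)^T y = s^T x \ge 0.
```

The symmetry between (P) and (D) is what the self-dual case makes exact: when K = K*, the two problems have literally the same structure.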
We brought several contributions to this field:

- a survey of interior-point methods for linear optimization, with an emphasis on the fundamental principles behind the design of these algorithms;
- a computational study of a linear approximation method for convex quadratic optimization (more precisely, the second-order cone that can be used in the formulation of quadratic problems is replaced by a polyhedral approximation whose accuracy can be guaranteed a priori);
- an application of semidefinite optimization to classification, whose principle consists in separating different classes of patterns with ellipsoids defined in the feature space (this approach was successfully applied to the prediction of student grades).

However, our research focused on a much less studied category of convex problems that does not rely on self-dual cones, namely structured problems whose dual is formulated very differently from the primal. We studied in particular:

- geometric optimization, developed in the late sixties, which has numerous applications in the field of engineering (entropy optimization, used in information theory, also belongs to this class of problems);
- l_p-norm optimization, a generalization of linear and convex quadratic optimization, which allows the formulation of constraints built around expressions of the form |ax+b|^p (where p is a fixed exponent strictly greater than 1).

For each of these classes of problems, we introduced a new type of convex cone that made their formulation as standard conic problems possible.
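The idea of a polyhedral approximation of the second-order cone with an a priori accuracy guarantee can be illustrated by a naive sketch (a minimal example of our own, not the compact construction studied in the computational work mentioned above): circumscribing the two-dimensional cone {(x, t) : ||x||_2 <= t} with m tangent half-planes guarantees that t is never underestimated by more than a factor 1/cos(pi/m), whatever the direction of x.

```python
import math

def polyhedral_soc_accuracy(m, samples=1000):
    """Circumscribe the 2D second-order cone {(x, t): ||x||_2 <= t}
    by the m half-planes x . d_k <= t, with d_k = (cos(2*pi*k/m), sin(2*pi*k/m)).
    Returns the worst observed ratio ||x||_2 / t over sampled directions,
    where t is the smallest value feasible for the polyhedral cone."""
    dirs = [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
            for k in range(m)]
    worst = 1.0
    for i in range(samples):
        phi = 2 * math.pi * i / samples
        x = (math.cos(phi), math.sin(phi))                 # unit vector, ||x||_2 = 1
        t = max(x[0] * d[0] + x[1] * d[1] for d in dirs)   # minimal feasible t
        worst = max(worst, 1.0 / t)
    return worst

# A priori guarantee: the relative error never exceeds 1/cos(pi/m) - 1.
m = 16
bound = 1.0 / math.cos(math.pi / m)
print(polyhedral_soc_accuracy(m) <= bound + 1e-12)  # True
```

With m = 16 half-planes the guaranteed relative accuracy is already below 2%; the constructions used in practice achieve a given accuracy with far fewer inequalities than this naive polygon.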
This allowed us to derive greatly simplified proofs of the classical duality results pertaining to these problems, notably weak duality (a mere consequence of convexity) and the absence of a duality gap (a strong duality property holding without any constraint qualification, which does not hold in the general convex case). We also uncovered a very surprising result stating that geometric optimization can be viewed as a limit case of l_p-norm optimization. Encouraged by the similarities we observed, we developed a general framework that encompasses these two classes of problems and unifies all the previously obtained conic formulations.

We also turned our attention to the design of interior-point methods to solve these problems. The theory of polynomial algorithms for convex optimization developed by Nesterov and Nemirovski asserts that the main ingredient for these methods is a computable self-concordant barrier function for the corresponding cones. We were able to define such a barrier function in the case of l_p-norm optimization (whose parameter, the main factor determining the algorithmic complexity of the method, is proportional to the number of variables in the formulation and independent of p), as well as in the case of the general framework mentioned above.

Finally, we contributed a survey of the self-concordancy property, improving some useful results about the value of the complexity parameter for certain categories of barrier functions and providing some insight into why the most commonly adopted definition of self-concordant functions is the best possible.
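For reference, the standard Nesterov-Nemirovski definitions invoked above (not results specific to this work) are the following: a convex function F on the interior of a cone is a self-concordant barrier with parameter nu when

```latex
% Self-concordance: the third derivative is controlled by the second,
\left| D^3 F(x)[h, h, h] \right| \;\le\; 2 \left( D^2 F(x)[h, h] \right)^{3/2},
% and the barrier parameter \nu bounds the gradient in the local norm:
\left| D F(x)[h] \right| \;\le\; \sqrt{\nu}\, \left( D^2 F(x)[h, h] \right)^{1/2}.
% A short-step interior-point method then reaches an \epsilon-accurate
% solution in O\!\left(\sqrt{\nu}\,\log(1/\epsilon)\right) Newton steps,
% which is why \nu governs the algorithmic complexity discussed above.
```

The O(sqrt(nu) log(1/epsilon)) iteration bound explains why a parameter proportional to the number of variables, and independent of p, is a desirable property of the barrier constructed for l_p-norm optimization.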

Identifier: oai:union.ndltd.org:CCSD/oai:tel.archives-ouvertes.fr:tel-00006861
Date: 26 January 2001
Creators: Glineur, François
Source Sets: CCSD theses-EN-ligne, France
Language: French
Detected Language: English
Type: PhD thesis
