1. Copositive programming: separation and relaxations. Dong, Hongbo (01 December 2011)
A large portion of research in science and engineering, as well as in business, concerns one common question: how to make things "better"? Once properly modeled (usually a highly nontrivial task), such questions can be approached via mathematical optimization. An optimal solution to a mathematical optimization problem, when interpreted properly, may correspond to new knowledge, an effective methodology, or a good decision in the corresponding application area. As many success stories show, research in mathematical optimization has a significant impact on numerous aspects of human life. Recently, it was discovered that a large number of difficult optimization problems can be formulated as copositive programs. Famous examples include a large class of quadratic optimization problems as well as many classical combinatorial optimization problems. For more general optimization problems, copositive programming provides a way to construct tight convex relaxations. Because of this generality, new knowledge about copositive programs can be applied uniformly across these cases. While it is provably difficult to design efficient algorithms for general copositive programs, we study copositive programming from two standard angles: its relaxations and its separation problem.
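For readers unfamiliar with the terminology, a copositive program in a commonly used standard form (the notation below is an editorial addition, not quoted from the thesis) is a linear conic program over the copositive cone:
\[
\min_{X} \ \langle C, X \rangle \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i, \ i = 1, \dots, m, \qquad X \in \mathcal{COP}_n := \{ X \in \mathcal{S}^n : y^\top X y \ge 0 \ \text{for all } y \ge 0 \},
\]
where \(\mathcal{S}^n\) denotes the symmetric \(n \times n\) matrices. The dual cone of \(\mathcal{COP}_n\) is the completely positive cone \(\mathcal{CP}_n := \{ \sum_k z_k z_k^\top : z_k \ge 0 \}\). Both cones are convex, but testing membership is computationally hard (checking copositivity is co-NP-complete), which is why tractable relaxations and separation procedures are the natural objects of study.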
With regard to constructing computationally tractable convex relaxations of copositive programs, we give direct constructions of two tensor relaxation hierarchies for the completely positive cone, a fundamental geometric object in copositive programming. We show how our relaxation hierarchies relate to known hierarchies, and we then apply these tensor relaxations to the maximum stable set problem. With regard to the separation problem for copositive programming, we first prove some new results in the low-dimensional case of 5 x 5 matrices, and then show how a separation procedure for this low-dimensional case can be extended to any symmetric matrix with a certain block structure. Last but not least, we provide another approach to separation and relaxations for the (generalized) completely positive cone. We prove some generic results and discuss applications to the completely positive case and to another case related to box-constrained quadratic programming. Finally, we conclude the thesis with remarks on some interesting open questions in the field of copositive programming.
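To make the connection between completely positive formulations and tractable relaxations concrete, the sketch below (an editorial illustration, not the tensor hierarchies developed in the thesis) computes the basic doubly nonnegative (DNN) bound for the maximum stable set problem. It uses the well-known completely positive formulation alpha(G) = max { <J, X> : <A + I, X> = 1, X completely positive } and replaces the completely positive cone by matrices that are positive semidefinite and entrywise nonnegative. The function name, variable names, and the use of cvxpy/numpy are assumptions for illustration.

    # A minimal sketch, assuming cvxpy and numpy are available: the doubly
    # nonnegative (DNN) relaxation of the completely positive formulation of
    # the maximum stable set problem.  Replacing "completely positive" by
    # "PSD and entrywise nonnegative" yields a tractable upper bound on the
    # stability number alpha(G).
    import numpy as np
    import cvxpy as cp

    def dnn_stable_set_bound(adj: np.ndarray) -> float:
        """Upper bound on the stability number of the graph with adjacency matrix adj."""
        n = adj.shape[0]
        X = cp.Variable((n, n), PSD=True)          # positive semidefinite ...
        constraints = [X >= 0,                     # ... and entrywise nonnegative (DNN)
                       cp.trace((adj + np.eye(n)) @ X) == 1]
        problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
        problem.solve()
        return problem.value

    # Example: the 5-cycle, whose stability number is 2.
    C5 = np.array([[0, 1, 0, 0, 1],
                   [1, 0, 1, 0, 0],
                   [0, 1, 0, 1, 0],
                   [0, 0, 1, 0, 1],
                   [1, 0, 0, 1, 0]])
    print(dnn_stable_set_bound(C5))

Relaxation hierarchies for the completely positive cone, such as those studied in the thesis, tighten this kind of basic bound at increasing computational cost.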
2. Optimization under uncertainty: conic programming representations, relaxations, and approximations. Xu, Guanglin (01 August 2017)
In practice, the presence of uncertain parameters in optimization problems introduces new modeling and solvability challenges for operations research. Three main paradigms have been proposed for optimization under uncertainty: stochastic programming, robust optimization, and sensitivity analysis. In this thesis, we examine, improve, and combine the latter two paradigms in several relevant models and applications.

In the second chapter, we study a two-stage adjustable robust linear optimization problem in which the right-hand sides are uncertain and belong to a compact, convex, and tractable uncertainty set. Under standard and simple assumptions, we reformulate the two-stage problem as a copositive optimization program, which in turn leads to a class of tractable semidefinite-based approximations that are at least as strong as the affine policy, a well-studied tractable approximation in the literature. We examine our approach on several examples from the literature, and the results demonstrate that our tractable approximations significantly improve on the affine policy. In particular, our approach recovers the optimal values of a class of instances of increasing size for which the affine policy admits an arbitrarily large gap.

In the third chapter, we leverage the concept of robust optimization to conduct sensitivity analysis of the optimal value of linear programming (LP). In particular, we propose a framework for sensitivity analysis of LP problems that allows simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled by a compact, convex, and tractable uncertainty set. This framework unifies and extends multiple approaches to LP sensitivity analysis in the literature and has close ties to worst-case LP and two-stage adjustable linear programming. We define the best-case and worst-case LP optimal values over the uncertainty set. As the concept aligns well with the general spirit of robust optimization, we call our approach robust sensitivity analysis. While the best-case and worst-case optimal values are difficult to compute in general, we prove that they equal the optimal values of two separate, but related, copositive programs. We then develop tight, tractable conic relaxations to provide bounds on the best-case and worst-case optimal values, respectively. We also develop techniques to assess the quality of the bounds, and we validate our approach computationally on several examples from, and inspired by, the literature. We find that the bounds are very strong in practice and, in particular, are at least as strong as known results for specific cases from the literature.

In the fourth chapter of this thesis, we study the expected optimal value of a mixed 0-1 programming problem with uncertain objective coefficients following a joint distribution. We assume that the true distribution is not known exactly, but that a set of independent samples can be observed. Using the Wasserstein metric, we construct an ambiguity set centered at the empirical distribution of the observed samples and containing all distributions that could have generated the observed samples with high confidence. The problem of interest is to bound the expected optimal value over the Wasserstein ambiguity set. Under standard assumptions, we reformulate the problem as a copositive programming problem, which naturally leads to a tractable semidefinite-based approximation.
We compare our approach with a moment-based approach from the literature for two applications. The numerical results illustrate the effectiveness of our approach.
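To fix ideas, a common way to write the quantity studied in the fourth chapter (the notation below is assumed, not quoted from the thesis) is the worst-case expected optimal value over a Wasserstein ball of radius \(\varepsilon\) around the empirical distribution \(\hat{\mathbb{P}}_N\) of the \(N\) observed samples:
\[
\sup_{\mathbb{P} \in \mathcal{F}_N(\varepsilon)} \ \mathbb{E}_{\mathbb{P}}\!\left[ \min_{x \in \mathcal{X}} \ c(\xi)^\top x \right],
\qquad
\mathcal{F}_N(\varepsilon) := \{ \mathbb{P} : W(\mathbb{P}, \hat{\mathbb{P}}_N) \le \varepsilon \},
\]
where \(\mathcal{X}\) encodes the mixed 0-1 feasible set, \(c(\xi)\) is the uncertain objective, and \(W\) denotes the Wasserstein distance; the radius \(\varepsilon\) is chosen so that the ambiguity set contains the data-generating distribution with high confidence.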
Finally, we conclude the thesis with remarks on some interesting open questions in the field of optimization under uncertainty. In particular, we point out some interesting topics that could potentially be studied with copositive programming techniques.