Penalizační metody ve stochastické optimalizaci / Penalty Methods in Stochastic Optimization. Kálosi, Szilárd. January 2013.
The submitted thesis studies penalty function methods for stochastic programming problems. Its main objective is to examine penalty function methods for deterministic nonlinear programming, in particular exact penalty function methods, in order to extend penalty function methods to stochastic programming. To this end, the equivalence of the original deterministic nonlinear problem and the corresponding penalty function problem, with an arbitrary vector norm as the penalty function, is shown for convex and invex functions occurring in the problems, respectively. The obtained theorems are then applied to multiple chance constrained problems under a finite discrete probability distribution to show the asymptotic equivalence of the probabilistic and the corresponding penalty function problems. The practical use of the newly obtained methods is demonstrated in a numerical study, which also provides a comparison with other approaches.
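As a rough illustration of the exact penalty idea described in this abstract, the following sketch replaces a constrained problem with an unconstrained one by penalizing constraint violation through a vector norm. The toy objective, constraints, choice of norm, and penalty weight are all assumptions chosen for illustration; they are not taken from the thesis.

```python
# Minimal sketch of an exact penalty reformulation (illustrative only).
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Toy convex objective (an assumption, not a problem from the thesis).
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def g(x):
    # Inequality constraints g_i(x) <= 0; the unconstrained minimizer
    # (2, -1) violates the first one, so the penalty actually matters.
    return np.array([x[0] + x[1] - 0.5, -x[0]])

def penalized(x, mu=10.0):
    # f(x) + mu * ||max(g(x), 0)||, here with the L1 norm as the vector
    # norm. For mu above the largest Lagrange multiplier, minimizers of
    # this unconstrained problem coincide with the constrained ones
    # (the exactness property the thesis studies for convex/invex data).
    return f(x) + mu * np.sum(np.maximum(g(x), 0.0))

# Nelder-Mead avoids needing gradients of the nonsmooth penalty term.
res = minimize(penalized, x0=np.zeros(2), method="Nelder-Mead")
print(res.x)  # approximately (1.75, -1.25), the constrained optimum
```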
CONTINUOUS RELAXATION FOR COMBINATORIAL PROBLEMS - A STUDY OF CONVEX AND INVEX PROGRAMS. Adarsh Barik (15359902). 27 April 2023.
In this thesis, we study optimization problems that have a combinatorial aspect. The search space for such problems grows exponentially with the problem dimension, so exhaustive search becomes intractable and good relaxations are needed to solve combinatorial problems efficiently. A further challenge arises from the high dimensionality of such problems combined with a limited number of samples. Our aim is to develop innovative approaches that solve these problems with polynomial time and sample complexity. We discuss three combinatorial optimization problems and provide continuous relaxations for them, involving both convex and nonconvex (invex) relaxations. Furthermore, we provide efficient first-order algorithms for a general class of invex problems with provable convergence-rate guarantees. The three combinatorial problems studied in this work are: learning the directed structure of a Bayesian network from blackbox data, fair sparse regression on a biased dataset where the bias depends on a hidden binary attribute, and mixed linear regression. We propose a convex relaxation for the first problem, while the other two are solved using invex relaxations. For the first problem, we introduce a novel notion of a low-rank representation of the conditional probability tables of a Bayesian network and connect it to the Fourier transform of real-valued set functions to recover the exact structure of the Bayesian network. For the second problem, we propose a novel invex relaxation of the combinatorial version of sparse linear regression with fairness. For the third problem, we again use an invex relaxation to learn a mixture of sparse linear regression models. We formally establish the correctness of the proposed methods and give provable guarantees on computational and sample complexity. We also develop efficient first-order algorithms for invex problems and provide a convergence-rate analysis for the proposed methods. Finally, we discuss possible future research directions and the problems we want to tackle next.
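As a minimal sketch of why invexity helps first-order methods, consider that for an invex function every stationary point is a global minimum, so plain gradient descent that drives the gradient to zero reaches a global optimum even though the function may be nonconvex. The textbook objective f(x, y) = x²y², step size, and stopping rule below are illustrative assumptions, not the algorithms or relaxations from this thesis.

```python
# Gradient descent on a smooth invex (but nonconvex) function.
import numpy as np

def grad_f(v):
    # f(x, y) = x^2 * y^2 is nonconvex (its Hessian is indefinite away
    # from the axes), yet invex: its stationary points are exactly the
    # axes x = 0 or y = 0, where f attains its global minimum value 0.
    x, y = v
    return np.array([2.0 * x * y**2, 2.0 * x**2 * y])

def gradient_descent(grad, x0, step=0.1, tol=1e-5, max_iter=100_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break  # near-stationary; for invex f, near-globally-optimal
        x = x - step * g
    return x

x_star = gradient_descent(grad_f, [1.0, 1.0])
print(x_star)  # close to an axis, so f(x_star) is near the global minimum 0
```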