1. Discrete and Continuous Nonconvex Optimization: Decision Trees, Valid Inequalities, and Reduced Basis Techniques. Dalkiran, Evrim. 26 April 2011.
This dissertation addresses the modeling and analysis of a strategic risk management problem via a novel decision tree optimization approach, as well as the development of enhanced Reformulation-Linearization Technique (RLT)-based linear programming (LP) relaxations for solving nonconvex polynomial programming problems through the generation of valid inequalities and reduced representations, along with the design and implementation of efficient algorithms. We first conduct a quantitative analysis of a strategic risk management problem that involves allocating available failure-mitigating and consequence-alleviating resources to reduce the failure probabilities of system safety components and the subsequent losses, respectively, together with selecting optimal strategic decision alternatives, in order to minimize the risk or expected loss in the event of a hazardous occurrence. Using a novel decision tree optimization approach to represent the cascading sequences of probabilistic events as controlled by key decisions and investment alternatives, the problem is modeled as a nonconvex mixed-integer 0-1 factorable program. We develop a specialized branch-and-bound algorithm in which lower bounds are computed via tight linear relaxations of the original problem, constructed by utilizing a polyhedral outer-approximation mechanism in concert with two alternative linearization schemes having different levels of tightness and complexity. We also suggest three alternative branching schemes, each of which is proven to guarantee convergence to a global optimum for the underlying problem. Extensive computational results and sensitivity analyses are presented to provide insights and to demonstrate the efficacy of the proposed algorithm. In particular, our methodology outperformed the commercial software BARON (Version 8.1.5), yielding a more robust performance along with an 89.9% savings in effort on average.
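The specialized algorithm itself is developed in the dissertation; purely as a hedged sketch of the general mechanism (a polyhedral outer-approximation supplying node bounds, combined with interval branching), the following Python fragment runs a spatial branch-and-bound on a toy bilinear problem, substituting textbook McCormick envelopes for the dissertation's tighter linearization schemes. The example problem, function names, and tolerances are illustrative assumptions, not artifacts of the thesis.

```python
# Toy spatial branch-and-bound:  maximize x*y  s.t.  x + y = 1,  0 <= x, y <= 1
# (optimum 0.25 at x = y = 0.5).  Node upper bounds come from an LP relaxation
# built with McCormick over-estimators of w = x*y on the current box; incumbents
# come from evaluating the (feasible) relaxation solution point.
import heapq
from scipy.optimize import linprog

def upper_bound(lx, ux, ly, uy):
    """Solve the McCormick LP relaxation over the box [lx,ux] x [ly,uy]."""
    # Variables (x, y, w); maximize w, i.e. minimize -w.
    # Over-estimators of w = x*y:  w <= uy*x + lx*y - lx*uy,  w <= ly*x + ux*y - ux*ly.
    A_ub = [[-uy, -lx, 1.0], [-ly, -ux, 1.0]]
    b_ub = [-lx * uy, -ux * ly]
    res = linprog([0, 0, -1.0], A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0, 1.0, 0.0]], b_eq=[1.0],
                  bounds=[(lx, ux), (ly, uy), (None, None)])
    return -res.fun, res.x

best_val = -float("inf")
best_pt = None
ub0, _ = upper_bound(0.0, 1.0, 0.0, 1.0)
nodes = [(-ub0, 0.0, 1.0, 0.0, 1.0)]        # max-heap via negated bounds
while nodes:
    neg_ub, lx, ux, ly, uy = heapq.heappop(nodes)
    if -neg_ub <= best_val + 1e-6:
        break                                # best remaining bound cannot improve
    _, (x, y, _) = upper_bound(lx, ux, ly, uy)
    if x * y > best_val:                     # relaxation point satisfies x + y = 1
        best_val, best_pt = x * y, (x, y)
    mid = 0.5 * (lx + ux)                    # bisect the x-interval (branching rule)
    for child in ((lx, mid, ly, uy), (mid, ux, ly, uy)):
        ub, _ = upper_bound(*child)
        if ub > best_val + 1e-6:             # prune nodes that cannot improve
            heapq.heappush(nodes, (-ub, *child))

print(best_val, best_pt)                     # ~0.25 at (0.5, 0.5)
```

The dissertation's branching rules and linearization schemes are problem-specific; this sketch only shows how relaxation bounds, incumbents, and interval bisection interact in such a scheme.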
Next, we enhance RLT-based LP relaxations for polynomial programming problems by developing two classes of valid inequalities: v-semidefinite cuts and bound-grid-factor constraints. The first of these uses concepts derived from semidefinite programming. Given an RLT relaxation, we impose positive semidefiniteness on suitable dyadic variable-product matrices and correspondingly derive implied semidefinite cuts. For polynomial programs, there are several possible variants for selecting such dyadic variable-product matrices, which leads to a new class of cutting planes that we call v-semidefinite cuts. We explore various strategies for generating such cuts within an RLT-based branch-and-cut scheme, and exhibit their relative effectiveness toward tightening the RLT relaxations and solving the underlying polynomial programming problems, using a test-bed of randomly generated instances as well as standard problems from the literature. Our results demonstrate that these cutting planes achieve a significant tightening of the lower bound in contrast with using RLT as a stand-alone approach, thereby enabling an appreciable reduction in the overall computational effort, even in comparison with the commercial software BARON. Empirically, our cut-enhanced algorithm reduced the computational effort required by the stand-alone RLT approach and by BARON by 44% and 77%, respectively, over a test-bed of 60 polynomial programming problems.
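To make the construction concrete, here is a minimal separation sketch in Python for the simplest dyadic choice, the matrix M = [1 x'; x X] built from the first-order variables x and the linearized product matrix X (where X_ij stands for the RLT variable representing x_i * x_j). For any feasible point of the polynomial program, M = [1; x][1; x]' is positive semidefinite, so an eigenvector v with v'Mv < 0 at the relaxation optimum certifies a violated linear cut in (x, X). This is a sketch under those assumptions, not the dissertation's implementation; names are illustrative.

```python
import numpy as np

def separate_v_semidefinite_cut(x_bar, X_bar, tol=1e-8):
    """Return cut data (alpha, beta, rhs) encoding  alpha.x + <beta, X> >= rhs,
    or None if the dyadic matrix M = [1 x'; x X] is already PSD at (x_bar, X_bar)."""
    n = len(x_bar)
    M = np.empty((n + 1, n + 1))
    M[0, 0], M[0, 1:], M[1:, 0], M[1:, 1:] = 1.0, x_bar, x_bar, X_bar
    eigval, eigvec = np.linalg.eigh(M)       # ascending eigenvalues
    if eigval[0] >= -tol:
        return None                          # no violated dyadic PSD restriction
    v = eigvec[:, 0]                         # most violated direction
    v0, vx = v[0], v[1:]
    # v'Mv >= 0 expands to  v0^2 + 2*v0*(vx.x) + vx' X vx >= 0, linear in (x, X):
    alpha = 2.0 * v0 * vx                    # coefficients on x
    beta = np.outer(vx, vx)                  # coefficients on X
    return alpha, beta, -v0 ** 2

# Example relaxation point where X is inconsistent with x (X != x x'):
x_bar = np.array([0.5, 0.5])
X_bar = np.array([[0.0, 0.5], [0.5, 0.0]])
cut = separate_v_semidefinite_cut(x_bar, X_bar)
if cut is not None:
    alpha, beta, rhs = cut
    violation = alpha @ x_bar + np.sum(beta * X_bar) - rhs
    print("cut violated by", violation)      # negative value: the point is cut off
```

At a feasible point, alpha.x + <beta, X> - rhs equals (v0 + vx.x)^2 >= 0, so the inequality is valid, while at the relaxation point it equals the negative minimum eigenvalue, confirming separation.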
As a second cutting-plane strategy, we introduce a new class of bound-grid-factor constraints that can be judiciously used to augment the basic RLT relaxations in order to improve the quality of lower bounds and enhance the performance of global branch-and-bound algorithms. Certain theoretical properties are established that shed light on how these valid inequalities drive the discrepancies between RLT variables and their associated nonlinear products to zero. To balance the tightness of the relaxation against computational expense, we propose certain concurrent and sequential cut generation routines and various grid-factor selection rules. The results indicate a significant tightening of lower bounds, which yields an overall 21% reduction in computational effort for solving a test-bed of 15 challenging polynomial programming problems to global optimality in comparison with the basic RLT procedure, and over a 100-fold speed-up in comparison with the commercial software BARON.
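The precise form of the bound-grid-factor constraints is developed in the dissertation; purely as an illustration of how grid points can tighten an RLT relaxation, the hedged Python sketch below uses the related but simpler facts that, with X standing for x^2 on [l, u], every grid point g yields the valid cut X >= 2*g*x - g^2 (from (x - g)^2 >= 0), while the classical bound-factor product (x - l)(u - x) >= 0 yields X <= (l + u)*x - l*u. Refining the grid visibly drives the implied lower bound on X toward x^2, mirroring the discrepancy-reduction effect described above.

```python
def grid_lower_bound(x, grid):
    """Tightest lower bound on the RLT variable X (representing x^2) implied
    by the grid-point cuts  X >= 2*g*x - g**2  at a given value of x."""
    return max(2 * g * x - g * g for g in grid)

l, u, x = 0.0, 1.0, 0.37
for n_pts in (2, 3, 5, 9):
    grid = [l + i * (u - l) / (n_pts - 1) for i in range(n_pts)]
    lb = grid_lower_bound(x, grid)
    print(f"{n_pts} grid points: X >= {lb:.4f}  (true x^2 = {x*x:.4f})")
# With only the bounds as "grid" the implied bound is loose (X >= 0 here);
# each refinement tightens it, approaching x^2 = 0.1369 from below.
```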
Finally, we explore equivalent, reduced-size RLT-based formulations for polynomial programming problems. Utilizing a basis partitioning scheme for an embedded linear equality subsystem, we show that a strict subset of the RLT defining equalities implies the remaining ones. Applying this result, we derive significantly reduced RLT representations and develop coherent associated branching rules that assure convergence to a global optimum, along with static as well as dynamic basis selection strategies to implement the proposed procedure. In addition, we enhance the RLT relaxations with v-semidefinite cuts, which are empirically shown to further improve the relative performance of the reduced RLT method over the usual RLT approach. Computational results on a test-bed of 10 challenging polynomial programs, used to evaluate the different reduction strategies, demonstrate that our best proposed approach achieved more than a four-fold improvement in computational effort in comparison with both the commercial software BARON and a recently developed open-source code, Couenne, for solving nonconvex mixed-integer nonlinear programming problems. Moreover, our approach robustly solved all the test cases to global optimality, whereas BARON and Couenne were jointly able to solve only a single instance to optimality within the set computational time limit, leaving unresolved average optimality gaps of 260% and 437%, respectively, on the other nine instances.
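As a small hedged illustration of where these defining equalities come from (not of the basis-partitioning proof itself): if the problem embeds a linear equality subsystem Ax = b, then multiplying each row by each variable x_j and linearizing every product x_i * x_j by the RLT variable X_ij yields the family of RLT equalities that the partitioning result thins out. The sympy sketch below generates the full family for a toy subsystem; the system and symbol names are ours.

```python
import sympy as sp

n = 3
x = sp.symbols(f"x0:{n}")
# Symmetric RLT product variables: X_ij stands for x_i * x_j.
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"X{min(i, j)}{max(i, j)}"))
A = sp.Matrix([[1, 1, 1], [1, -1, 0]])   # toy subsystem: x0 + x1 + x2 = 1, x0 - x1 = 0
b = sp.Matrix([1, 0])

rlt_equalities = []
for r in range(A.rows):
    for j in range(n):
        # (A[r, :] x - b[r]) * x_j = 0, linearized through the X variables
        expr = sum(A[r, i] * X[i, j] for i in range(n)) - b[r] * x[j]
        rlt_equalities.append(sp.Eq(expr, 0))

for eq in rlt_equalities:
    print(eq)
# 2 rows * 3 variables = 6 RLT equalities over the 6 distinct X_ij and 3 x_j;
# the dissertation's basis-partitioning argument identifies a strict subset
# of such equalities that implies the rest, enabling the reduced formulation.
```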
This dissertation makes several broader contributions to the field of nonconvex optimization, including factorable, nonlinear mixed-integer programming problems. The proposed decision tree optimization framework can serve as a versatile management tool in the arenas of homeland security and health care. Furthermore, we have advanced the frontier for tackling formidable nonconvex polynomial programming problems that arise in emerging fields such as signal processing, biomedical engineering, materials science, and risk management. Open-source software implementing the proposed reduced RLT representations, semidefinite cuts, bound-grid-factor constraints, and range reduction strategies is currently under preparation. In addition, the different classes of challenging polynomial programming test problems utilized in the computational studies conducted in this dissertation have been made available to other researchers via the Web page http://filebox.vt.edu/users/dalkiran/website/. It is our hope and belief that the modeling and methodological contributions made in this dissertation will serve society in a broader context through the myriad applications they support. / Ph. D.
2. Decision making strategy for antenatal echographic screening of foetal abnormalities using statistical learning. Besson, Rémi. 01 October 2019.
In this thesis, we propose a method for building a decision support tool for the diagnosis of rare diseases. We aim to minimize the number of medical tests necessary to reach a state where the uncertainty regarding the patient's disease falls below a predetermined threshold. In doing so, we take into account the need, in many medical applications, to avoid any misdiagnosis as far as possible. To solve this optimization task, we investigate several reinforcement learning algorithms and make them operable for our very high-dimensional problem. To do this, we break the initial problem down into several sub-problems and show that it is possible to take advantage of the intersections between these sub-tasks to accelerate learning. The strategies learned prove much more effective than classic greedy strategies. We also present a way to combine expert knowledge, expressed as conditional probabilities, with clinical data. This is crucial because the scarcity of data on rare diseases prevents any approach based solely on clinical data. We show, both theoretically and empirically, that our proposed estimator always performs at least as well as the better of the two models (expert or data), up to a constant. Finally, we show that it is possible to effectively integrate reasoning that takes into account the granularity of the reported symptoms, while remaining within the probabilistic framework developed throughout this work.
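As a hedged illustration of the expert-plus-data idea (not the thesis's actual estimator or its guarantee), one simple combination is a convex blend p_hat = lam * p_expert + (1 - lam) * p_data, with lam selected by held-out log-likelihood so that the blend tracks whichever source is more reliable. The Python sketch below uses invented numbers throughout.

```python
import random
import math

random.seed(0)
p_true = 0.30                     # unknown P(symptom | disease), for simulation only
p_expert = 0.25                   # elicited expert estimate
train = [random.random() < p_true for _ in range(15)]   # scarce clinical data
valid = [random.random() < p_true for _ in range(15)]   # held-out sample

p_data = sum(train) / len(train)  # empirical frequency from the data alone

def heldout_loglik(p, sample, eps=1e-9):
    """Bernoulli log-likelihood of a probability p on a held-out sample."""
    p = min(max(p, eps), 1 - eps)
    return sum(math.log(p) if s else math.log(1 - p) for s in sample)

# Pick the blending weight on a coarse grid by held-out likelihood.
lams = [i / 20 for i in range(21)]
best_lam = max(lams, key=lambda lam: heldout_loglik(
    lam * p_expert + (1 - lam) * p_data, valid))
p_hat = best_lam * p_expert + (1 - best_lam) * p_data
print(f"expert={p_expert:.2f} data={p_data:.2f} "
      f"lambda={best_lam:.2f} combined={p_hat:.2f} (true {p_true:.2f})")
```

When the data are scarce or noisy the selection pushes lam toward the expert, and vice versa, which conveys the intuition behind an estimator that is comparable to the better of the two sources.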