11 
Probabilistic analysis and results of combinatorial problems with military applications / Grundel, Don A. January 2004 (has links)
Thesis (Ph. D.)--University of Florida, 2004. / Title from title page of source document. Document formatted into pages; contains 135 pages. Includes vita. Includes bibliographical references.

12 
A reliability-based method for optimization programming problems / Esteban, Jaime, January 1992 (has links)
Thesis (M.S.)--Virginia Polytechnic Institute and State University, 1992. / Vita. Abstract. Includes bibliographical references (leaves 55-59). Also available via the Internet.

13 
Geometric Ramifications of the Lovász Theta Function and Their Interplay with Duality / de Carli Silva, Marcel Kenji. January 2013 (has links)
The Lovász theta function and the associated convex sets known as theta bodies are fundamental objects in combinatorial and semidefinite optimization. They are accompanied by a rich duality theory and deep connections to the geometric concept of orthonormal representations of graphs. In this thesis, we investigate several ramifications of the theory underlying these objects, including those arising from the illuminating viewpoint of duality. We study some optimization problems over unit-distance representations of graphs, which are intimately related to the Lovász theta function and orthonormal representations. We also strengthen some known results about dual descriptions of theta bodies and their variants. Our main goal throughout the thesis is to lay some of the foundations for using semidefinite optimization and convex analysis in a way analogous to how polyhedral combinatorics has been using linear optimization to prove min-max theorems.
A unit-distance representation of a graph $G$ maps its nodes to some Euclidean space so that adjacent nodes are sent to pairs of points at distance one. The hypersphere number of $G$, denoted by $t(G)$, is the (square of the) minimum radius of a hypersphere that contains a unit-distance representation of $G$. Lovász proved a min-max relation describing $t(G)$ as a function of $\vartheta(\overline{G})$, the theta number of the complement of $G$. This relation provides a dictionary between unit-distance representations in hyperspheres and orthonormal representations, which we exploit in a number of ways: we develop a weighted generalization of $t(G)$, parallel to the weighted version of $\vartheta$; we prove that $t(G)$ is equal to the (square of the) minimum radius of a Euclidean ball that contains a unit-distance representation of $G$; we abstract some properties of $\vartheta$ that yield the famous Sandwich Theorem and use them to define another weighted generalization of $t(G)$, called the ellipsoidal number of $G$, where the unit-distance representation of $G$ is required to be in an ellipsoid of a given shape with minimum volume. We determine an analytic formula for the ellipsoidal number of the complete graph on $n$ nodes whenever there exists a Hadamard matrix of order $n$.
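For reference, Lovász's min-max relation mentioned in this abstract is commonly cited in the following form (a sketch of the standard statement, not the thesis's weighted generalizations):

```latex
% Hypersphere number of G in terms of the theta number of the complement:
t(G) \;=\; \frac{1}{2}\left(1 - \frac{1}{\vartheta(\overline{G})}\right).
% Sanity check on the complete graph K_n: \vartheta(\overline{K_n}) = n, and a
% unit-distance representation of K_n is a regular simplex with edge length 1,
% whose squared circumradius is indeed
t(K_n) \;=\; \frac{1}{2}\left(1 - \frac{1}{n}\right) \;=\; \frac{n-1}{2n}.
```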
We then study several duality aspects of the description of the theta body $\operatorname{TH}(G)$. For a graph $G$, the convex corner $\operatorname{TH}(G)$ is known to be the projection of a certain convex set, denoted by $\widehat{\operatorname{TH}}(G)$, which lies in a much higher-dimensional matrix space. We prove that the vertices of $\widehat{\operatorname{TH}}(G)$ are precisely the symmetric tensors of incidence vectors of stable sets in $G$, thus broadly generalizing previous results about vertices of the elliptope due to Laurent and Poljak from 1995. Along the way, we also identify all the vertices of several variants of $\widehat{\operatorname{TH}}(G)$ and of the elliptope. Next we introduce an axiomatic framework for studying generalized theta bodies, based on the concept of diagonally scaling invariant cones, which allows us to prove in a unified way several characterizations of $\vartheta$ and the variants $\vartheta'$ and $\vartheta^+$, introduced independently by Schrijver, and by McEliece, Rodemich, and Rumsey in the late 1970s, and by Szegedy in 1994. The beautiful duality equation which states that the antiblocker of $\operatorname{TH}(G)$ is $\operatorname{TH}(\overline{G})$ is extended to this setting. The framework allows us to treat the stable set polytope and its classical polyhedral relaxations as generalized theta bodies, using the completely positive cone and its dual, and it allows us to derive a (weighted generalization of a) copositive formulation for the fractional chromatic number due to Dukanovic and Rendl in 2010 from a completely positive formulation for the stability number due to de Klerk and Pasechnik in 2002. Finally, we study a nonconvex constraint for semidefinite programs (SDPs) that may be regarded as analogous to the usual integrality constraint for linear programs. When applied to certain classical SDPs, it specializes to the standard rank-one constraint.
More importantly, the nonconvex constraint also applies to the dual SDP, and for a certain SDP formulation of $\vartheta$, the modified dual yields precisely the clique covering number. This opens the way to study some exactness properties of SDP relaxations for combinatorial optimization problems akin to the corresponding classical notions from polyhedral combinatorics, as well as approximation algorithms based on SDP relaxations.

15 
Topics in linear and nonlinear discrete optimization / Shapoval, Andriy. 08 June 2015 (has links)
This work contributes to modeling, theoretical, and practical aspects of structured Mathematical Programming problems. Many real-world applications have nonlinear characteristics and can be modeled as Mixed Integer Nonlinear Programming problems (MINLP). Modern global solvers have significant difficulty handling large-scale instances of them. Several convexification and underestimation techniques were proposed in the last decade as a part of the solution process, and we join this trend. The thesis has three major parts.
The first part considers MINLP problems containing convex (in the sense of continuous relaxations) and posynomial terms (also called monomials), i.e. products of variables with some powers. Recently, a linear Mixed Integer Programming (MIP) approach was introduced for minimizing the number of variables and transformations for convexification and underestimation of these structured problems. We provide polyhedral analysis together with separation for solving our variant of this minimization subproblem, containing binary and bounded continuous variables. Our novel mixed hyperedge method allows us to outperform modern commercial MIP software, providing new families of facet-defining inequalities. As a byproduct, we introduce a new research area called mixed conflict hypergraphs. It merges mixed conflict graphs and 0-1 conflict hypergraphs.
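The convexification of monomial (posynomial) terms that this line of work builds on typically relies on the exponential change of variables $x_i = e^{y_i}$, under which a monomial $c\prod_i x_i^{a_i}$ with $c > 0$ becomes the convex function $c\,e^{a^\top y}$, whatever the signs of the exponents. A minimal stdlib sketch (the coefficient and exponents are made up) that checks midpoint convexity numerically:

```python
import math
import random

def monomial(x, a, c=2.0):
    """Posynomial term c * prod(x_i ** a_i), defined for x_i > 0."""
    return c * math.prod(xi ** ai for xi, ai in zip(x, a))

def transformed(y, a, c=2.0):
    """The same term after substituting x_i = exp(y_i):
    c * exp(a . y), which is convex in y for any real exponents a."""
    return c * math.exp(sum(ai * yi for ai, yi in zip(a, y)))

random.seed(0)
a = [1.5, -0.7]  # hypothetical exponents, one deliberately negative
for _ in range(1000):
    y1 = [random.uniform(-2, 2) for _ in range(2)]
    y2 = [random.uniform(-2, 2) for _ in range(2)]
    mid = [(u + v) / 2 for u, v in zip(y1, y2)]
    # Midpoint convexity: f((y1+y2)/2) <= (f(y1)+f(y2))/2.
    assert transformed(mid, a) <= (transformed(y1, a) + transformed(y2, a)) / 2 + 1e-12
```

Note the original monomial itself is generally nonconvex in $x$; it is the change of variables that exposes the convex structure solvers can exploit.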
The second part applies our mixed hyperedge method to a linear subproblem of the same purpose for another class of structured MINLP problems. They contain signomial terms, i.e. posynomial terms of both positive and negative signs. We obtain new facet-defining inequalities in addition to those families from the first part.
The final part is dedicated to managing guest flow in Georgia Aquarium after the opening of Dolphin Tales, applying a large-scale MINLP. We consider arrival and departure processes related to scheduled shows and develop three stochastic models for them. If demand for the shows is high, all processes become interconnected and require a generalized model. We provide and solve a Signomial Programming problem with mixed variables, minimizing the resources needed to prevent and control congestion.

16 
Funding site cleanup at closing Army installations : a stochastic optimization approach / Ardic, Sureyya. 12 1900 (has links)
To reduce domestic military infrastructure, the United States enacted two laws that instituted rounds of base realignment and closure (BRAC) in 1988, 1991, 1993, and 1995. As a result of these BRAC rounds, the United States Army has closed or realigned 139 installations. Environmental cleanup is almost $2.3 billion (43%) of the entire cost associated with the closure and realignment of these 139 Army installations. The United States Army Base Realignment and Closure Office (BRACO) uses an integer linear program called BAEC (Budget Allocation for Environmental Cleanup) to help determine how to allocate limited yearly funding to installations for environmental cleanup. Considering environmental policies and yearly installation funding requests from 2002 to 2015, this thesis modifies BAEC to better account for uncertainty in future environmental cleanup cost estimates. Based on historic data that show most environmental cleanup cost estimates increase over time, the stochastic BAEC model recommends funding fewer sites than the deterministic BAEC model recommends. The stochastic BAEC model thereby provides funding recommendations with a better chance of staying within limited available yearly funding. / Turkish Army author
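The effect described here can be illustrated with a toy brute-force version of the funding decision (this is not the actual BAEC integer program; the site names, costs, scenarios, and budget below are invented):

```python
import itertools

# Hypothetical cleanup sites: (deterministic cost estimate, scenario costs).
# Scenario costs reflect that estimates tend to grow; scenarios equiprobable.
sites = {
    "A": (40, [40, 55, 60]),
    "B": (30, [30, 32, 45]),
    "C": (25, [25, 28, 30]),
    "D": (20, [20, 35, 50]),
}
BUDGET = 90

def fund_most(cost_of):
    """Fund the largest set of sites whose total cost fits the budget."""
    for r in range(len(sites), -1, -1):
        for combo in itertools.combinations(sites, r):
            if sum(cost_of(s) for s in combo) <= BUDGET:
                return combo
    return ()

det = fund_most(lambda s: sites[s][0])             # point estimates
stoch = fund_most(lambda s: sum(sites[s][1]) / 3)  # expected costs
# Accounting for likely cost growth funds fewer sites, as the thesis finds.
assert len(stoch) <= len(det)
```

On this toy data the deterministic rule funds three sites while the expected-cost rule funds two, mirroring the qualitative conclusion of the stochastic model.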

17 
Modular design of a vehicle body using robust optimization methods / Charrier, Martin. 18 September 2018 (has links)
In the automotive industry, sales volumes are such that every kilogram saved on a vehicle generates huge gains: lower raw-material costs and, through reduced consumption and carbon tax, better customer value. Engineering is therefore under severe pressure to reduce mass and CO2. Finite element simulation and optimization have become an integral part of vehicle design processes: crash, NVH (Noise and Vibration Harshness), aerodynamics, fluid mechanics... Optimization reduces the mass of a vehicle by varying design variables under constraints from the specifications. The number of simulations required is between 3 and 10 times the number of quantitative variables (e.g. thicknesses, shape parameters). This number increases rapidly if the problem contains qualitative variables (e.g. presence/absence/alternatives of parts). In that case, when simulations are as expensive as crash simulations, optimization algorithms become inefficient.
For this reason, optimization is used late in the development cycle, once the number of qualitative variables has been reduced by several strategic decisions (e.g. body architecture, parts to be reused, manufacturing plants...). Such decisions are not always taken in a well-informed way, especially when the schedule is tight and data are unavailable, and a bad choice made at an early stage of the project can prove very costly later. The method proposed in the first chapters of this thesis uses a Branch and Bound algorithm to extend the scope of optimization by allowing a large number of qualitative variables, as well as rapid adaptation to possible changes of constraints during the optimization. With these two characteristics, new qualitative variables usually pre-constrained by strategic decisions can be added to the problem, provided that numerical models are available. Different scenarios linked to different sets of strategic constraints can then be compared; decision-making tools such as Pareto frontiers help visualize the optimal scenarios. The following chapters are dedicated to the ReCUR model-reduction method, which complements the algorithm by drastically reducing the number of simulations required to build a model of the vehicle's crash behavior. The method sidesteps the traditional curse of dimensionality by expressing the model in terms of variables attached not to parts but to the nodes (or elements) of the mesh. The two versions of ReCUR tested to date are presented, and each is applied to the optimization of a complete vehicle body. Two methods that would allow ReCUR to handle geometric part alternatives are also proposed.
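A generic branch and bound over qualitative part alternatives can be sketched in a few lines (an illustrative toy, not the thesis's algorithm; the parts, masses, and stiffness scores are invented):

```python
import math

# Hypothetical part alternatives: (mass_kg, stiffness_score) per option.
parts = [
    [(12.0, 5), (9.5, 3), (11.0, 4)],   # e.g. floor panel variants
    [(8.0, 4), (6.5, 2)],               # e.g. cross-member variants
    [(15.0, 6), (13.0, 5), (10.0, 2)],  # e.g. side rail variants
]
MIN_STIFFNESS = 11

best = {"mass": math.inf, "choice": None}

def branch(i, mass, stiffness, choice):
    """Depth-first branch and bound over qualitative part alternatives."""
    # Bound: even picking the lightest remaining options cannot beat best.
    lower = mass + sum(min(m for m, _ in p) for p in parts[i:])
    if lower >= best["mass"]:
        return  # prune by mass bound
    if i == len(parts):
        if stiffness >= MIN_STIFFNESS:
            best["mass"], best["choice"] = mass, tuple(choice)
        return
    # Optimistic stiffness bound: prune branches that cannot be feasible.
    max_rest = sum(max(s for _, s in p) for p in parts[i:])
    if stiffness + max_rest < MIN_STIFFNESS:
        return
    for m, s in parts[i]:
        branch(i + 1, mass + m, stiffness + s, choice + [(m, s)])

branch(0, 0.0, 0, [])
```

The two pruning rules are what let such a search scale to many qualitative variables; swapping the stiffness constraint for a different set of strategic constraints only changes the bound functions, which is the kind of flexibility the abstract describes.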

18 
Some recent results on optimization theory. January 1997 (has links)
by Chan Chiu Fat. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 99-100). / Acknowledgment --- p.i / Abstract --- p.ii / Introduction --- p.1 / Chapter 1 --- Differentiability On Banach Spaces --- p.4 / Chapter 1.1 --- Introduction --- p.5 / Chapter 1.2 --- Subdifferentiability of Convex Functions --- p.12 / Chapter 1.3 --- Generalized Gradients --- p.16 / Chapter 1.4 --- The Rockafellar Subderivatives --- p.21 / Chapter 2 --- On Minimizing and Stationary Sequences --- p.25 / Chapter 2.1 --- On Equivalence of Minimizing and Stationary Sequences --- p.26 / Chapter 2.2 --- Constrained Optimization Problem --- p.35 / Chapter 2.3 --- Convergence of an Iterative Algorithm --- p.41 / Chapter 3 --- Asymptotically Well-Behaved Functions --- p.47 / Chapter 3.1 --- Asymptotically Well-Behaved Convex Functions --- p.48 / Chapter 3.2 --- The Subclass R --- p.54 / Chapter 3.3 --- Application on Approximate Problems and Duality Theory --- p.61 / Chapter 4 --- The Minimum and Maximum Principle --- p.71 / Chapter 4.1 --- The Radon-Nikodym Property --- p.71 / Chapter 4.2 --- The Minimum and Maximum Principle for Nonconvex Functions --- p.83 / Bibliography --- p.99

19 
On Algorithms, Separability and Cellular Automata in Quantum Computing / Cheung, Donny. January 2007 (has links)
In Part I of this thesis, we present a new model of quantum cellular automata
(QCA) based on local unitary operations. We will describe a set of desirable
properties for any QCA model, and show that all of these properties are
satisfied by the new model, while previous models of QCA do not. We will also
show that the computation model based on Local Unitary QCA is equivalent to
the Quantum Circuit model of computation, and give a number of applications of
this new model of QCA. We also present a physical model of classical CA, on
which the Local Unitary QCA model is based, and Coloured QCA, which is an
alternative to the Local Unitary QCA model that can be used as the basis for
implementing QCA in actual physical systems.
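The classical analogue underlying such partitioned models can be sketched as a block cellular automaton whose local rule is a bijection on pairs, which makes the global dynamics reversible, the classical counterpart of unitarity (an illustrative toy, not the thesis's construction):

```python
import random

def step(cells, offset, rule):
    """One partitioned-CA update: split the ring into adjacent pairs
    starting at `offset` and apply the local rule to each pair."""
    n = len(cells)
    out = list(cells)
    for i in range(offset, offset + n, 2):
        a, b = cells[i % n], cells[(i + 1) % n]
        out[i % n], out[(i + 1) % n] = rule[(a, b)]
    return out

# An invertible local rule (a bijection on {0,1}^2); since the pairs in a
# partition are disjoint, the whole update is globally invertible too.
rule = {(0, 0): (0, 0), (0, 1): (1, 0), (1, 0): (1, 1), (1, 1): (0, 1)}
inverse = {v: k for k, v in rule.items()}

random.seed(1)
state = [random.randint(0, 1) for _ in range(8)]
evolved = step(step(state, 0, rule), 1, rule)   # alternate partitions
restored = step(step(evolved, 1, inverse), 0, inverse)
assert restored == state  # reversibility: run the inverse rule backwards
```

Alternating the partition offset is what lets information propagate across block boundaries while each update stays strictly local.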
In Part II, we explore the quantum separability problem, where we are given a
density matrix for a state over two quantum systems, and we are to determine
whether the state is separable with respect to these systems. We also look at
the converse problem of finding an entanglement witness: an observable
operator that certifies that a particular quantum state is indeed
entangled. Although the combined problem is known to be
NPhard in general, it reduces to a convex optimization problem, and by
exploiting specific properties of the set of separable states, we introduce a
classical algorithm for solving this problem based on an Interior Point
Algorithm introduced by Atkinson and Vaidya in 1995.
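A concrete, if much weaker, instance of certifying entanglement is the Peres partial-transpose test (the standard PPT criterion, not the interior-point algorithm described above): separable states remain positive semidefinite under partial transposition, so exhibiting a direction with negative quadratic form certifies entanglement, much as a witness does. A stdlib sketch on the Bell state:

```python
import math

def partial_transpose(rho, d=2):
    """Transpose the second tensor factor of a (d*d)x(d*d) density matrix."""
    n = d * d
    pt = [[0.0] * n for _ in range(n)]
    for i in range(d):
        for j in range(d):
            for k in range(d):
                for l in range(d):
                    pt[i * d + l][k * d + j] = rho[i * d + j][k * d + l]
    return pt

# Density matrix of the Bell state (|00> + |11>)/sqrt(2).
rho = [[0.5, 0, 0, 0.5],
       [0,   0, 0, 0  ],
       [0,   0, 0, 0  ],
       [0.5, 0, 0, 0.5]]

pt = partial_transpose(rho)
# Separable states stay positive semidefinite under partial transposition,
# so v^T pt v < 0 for the vector below proves this state is entangled.
v = [0.0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]
val = sum(v[r] * pt[r][c] * v[c] for r in range(4) for c in range(4))
assert val < 0  # evaluates to -0.5 for the Bell state
```

For two qubits the PPT test is in fact conclusive; the optimization machinery discussed in the abstract is what handles the general, higher-dimensional case.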
In Part III, we explore the use of a low-depth AQFT (approximate quantum
Fourier transform) in quantum phase estimation. It has been shown previously
that the logarithmic-depth AQFT is as effective as the full QFT for the
purposes of phase estimation. However, with sublogarithmic depth, the phase
estimation algorithm no longer works directly. In this case, results of the
phase estimation algorithm need classical post-processing in order to retrieve
the desired phase information. A generic technique such as the method of
maximum likelihood can be used in order to recover the original phase.
Unfortunately, working with the likelihood function analytically is
intractable for the phase estimation algorithm. We develop some computational
techniques to handle likelihood functions that occur in phase estimation
algorithms. These computational techniques may potentially aid in the analysis
of certain likelihood functions.
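A grid-based maximum-likelihood recovery of a phase from simulated measurement records can be sketched as follows (an idealized Hadamard-test outcome model with invented parameters, not the thesis's likelihood functions; since the cosine model cannot distinguish a phase from its reflection 1 - phase, the search is restricted to [0, 1/2)):

```python
import math
import random

def p_zero(phase, k):
    """Idealized outcome probability with controlled-U^(2^k):
    P(0) = (1 + cos(2*pi*2^k*phase)) / 2."""
    return (1 + math.cos(2 * math.pi * (2 ** k) * phase)) / 2

random.seed(7)
true_phase = 0.3  # assumed to lie in [0, 1/2) for identifiability
# Simulated measurement records at a few precision levels k.
shots = {k: [random.random() < p_zero(true_phase, k) for _ in range(200)]
         for k in range(4)}

def log_likelihood(phi):
    ll = 0.0
    for k, outcomes in shots.items():
        p = min(max(p_zero(phi, k), 1e-12), 1 - 1e-12)
        for zero in outcomes:
            ll += math.log(p if zero else 1 - p)
    return ll

# Grid-based maximum likelihood: pick the candidate phase that best
# explains all measurement records jointly.
grid = [i / 1000 for i in range(500)]
estimate = max(grid, key=log_likelihood)
```

The joint likelihood is what resolves the ambiguity left by any single precision level k: each cosine term alone is periodic in the phase, but their product peaks only near the true value.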

20 
A Combinatorial Interpretation of Minimal Transitive Factorizations into Transpositions for Permutations with two Disjoint Cycles / Préville-Ratelle, Louis-François. 23 January 2008 (has links)
This thesis is about minimal transitive factorizations of permutations into transpositions. We focus on finding direct combinatorial proofs in the cases where none were previously known. We describe what has been done in the subject, both at the direct combinatorial level and in general, and give some new proofs for the known cases. We then present an algorithm that is a bijection between the ways of dropping the elements of {1, ..., k} into n cyclically ordered boxes and certain combinatorial structures involving trees attached to boxes, where these structures depend on whether k > n, k = n or k < n. The inverse of this bijection consists of removing vertices from trees and placing them in boxes in a simple way. In particular, this gives a bijection between parking functions of length n and rooted forests on n elements. Moreover, this bijection allows us to give a direct combinatorial derivation of the number of minimal transitive factorizations into transpositions of permutations that are the product of two disjoint cycles.
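The parking-function count implied by this bijection can be checked by brute force for small n (using the standard characterization: a sequence over {1, ..., n} is a parking function iff its i-th smallest entry is at most i):

```python
from itertools import product

def is_parking(seq):
    """A sequence (a_1, ..., a_n) with a_i in {1..n} is a parking
    function iff the i-th smallest entry is at most i."""
    return all(b <= i + 1 for i, b in enumerate(sorted(seq)))

# The bijection described above implies |PF_n| = (n+1)^(n-1), which is
# also the number of rooted forests on n labelled vertices (Cayley).
for n in range(1, 6):
    count = sum(is_parking(s) for s in product(range(1, n + 1), repeat=n))
    assert count == (n + 1) ** (n - 1)
```

For example, for n = 3 exactly 16 of the 27 sequences are parking functions, matching 4^2 rooted forests on 3 vertices.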
