1

Irreducible Infeasible Subsystem Decomposition for Probabilistically Constrained Stochastic Integer Programs

Gallego Arrubla, Julian Andres 16 December 2013 (has links)
This dissertation explores methods for finding irreducible infeasible subsystems (IISs) of systems of inequalities with binary decision variables and for solving probabilistically constrained stochastic integer programs (SIP-C). Finding IISs for binary systems is useful in decomposition methods for SIP-C. SIP-C has many important applications, including the modeling of strategic decision-making problems in wildfire initial response planning. New theoretical results and two new algorithms to find IISs for systems of inequalities with binary variables are developed. The first algorithm uses the new theory and the method of the alternative polyhedron within a branch-and-bound (BAB) approach. The second algorithm applies the new theory and the method of the alternative polyhedron to a system in which zero/one box constraints are appended. Decomposition schemes using IISs for binary systems can be used to solve SIP-C. SIP-C is challenging to solve due to its generally non-convex feasible region. In addition, very weak lower (upper) bounds on the objective function are obtained from the linear programming (LP) relaxation of the deterministic equivalent problem (DEP) to SIP-C. This work develops a branch-and-cut (BAC) method based on IIS inequalities to solve SIP-C with a random technology matrix and a random right-hand-side vector. Computational results show that the LP relaxation of the DEP to SIP-C can be strengthened by the IIS inequalities. SIP-C modeling can be applied to wildfire initial response planning. A new methodology for wildfire initial response that includes a fire behavior simulation model, a wildfire risk model, and SIP-C is developed and tested. The new methodology assumes a known standard response needed to contain a fire of a given size. This methodology is used to evaluate deployment decisions in terms of the number of firefighting resources positioned at each base, the expected number of escaped and contained fires, and the wildfire risk associated with fires not receiving a standard response. A case study of Texas district 12 (TX12), one of the Texas A&M Forest Service (TFS) fire planning units in east Texas, demonstrates the effectiveness of the new methodology for making strategic deployment decisions in wildfire initial response planning.
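To make the IIS idea concrete, the following is a minimal sketch of the classical deletion filter applied to an infeasible system of linear inequalities, with feasibility checked by scipy's LP solver. It only illustrates what an IIS is; it is not the dissertation's binary-variable algorithms or its alternative-polyhedron method.

```python
# Classical deletion filter (sketch): given an infeasible system A x <= b,
# try dropping one inequality at a time; if the remainder stays infeasible,
# the inequality is discarded for good. What survives is one IIS.
# Illustrative only -- not the dissertation's algorithms for binary systems.
import numpy as np
from scipy.optimize import linprog

def is_infeasible(A, b):
    if A.shape[0] == 0:
        return False  # an empty system is trivially feasible
    n = A.shape[1]
    res = linprog(np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 2  # scipy's status code for an infeasible problem

def deletion_filter_iis(A, b):
    keep = list(range(len(b)))
    for i in range(len(b)):
        trial = [j for j in keep if j != i]
        if is_infeasible(A[trial], b[trial]):
            keep = trial  # inequality i is not needed for infeasibility
    return keep

# Toy system over one variable x:  x <= 1,  -x <= -3 (i.e. x >= 3),  x <= 10.
A = np.array([[1.0], [-1.0], [1.0]])
b = np.array([1.0, -3.0, 10.0])
print(deletion_filter_iis(A, b))  # expected IIS: [0, 1]
```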
2

Drive-based Modeling And Visualization Of Crew Race Strategy And Performance

Cornett, Jeffrey 01 January 2008 (has links)
Crew race strategy is typically formulated by coaches based on rowing tradition and years of experience. However, coaching strategies are not generally supported by empirical evidence and decision-support models. Previous models of crew race strategy have been constrained by the sparse information published on crew race performance (quarterly 500-meter splits). Empirical research has merely summarized which quarterly splits averaged the fastest and slowest relative to the other splits and relative to the average speed of the other competitors. Video records of crew race world championships provide a rich source of data for those capable and patient enough to mine this level of detail. This dissertation is based on a precise frame-by-frame video analysis of five world championship rowing finals. With six competing crews per race, a database of 75 race-pair duels was compiled that summarizes race positioning, competitive drives, and relative stroke rates at 10-meter intervals recorded with photo-finish precision (30 frames per second). The drive-based research pioneered in this dissertation makes several contributions to understanding the dynamics of crew race strategy and performance: 1) An 8-factor conceptual model of crew race performance. 2) A generic drive model that decomposes how pairs of crews duel in a race. 3) Graphical summaries of the rates and locations of successful and unsuccessful drives. 4) Contour lines of the margins that winning crews hold over the course of the race. 5) Trend lines for what constitutes a probabilistically decisive lead as a function of position along the course, seconds behind the leader, and whether the trailing crew is driving. This research defines a new drive-based vocabulary for evaluating crew race performance for use by coaches, competitors and race analysts. The research graphically illustrates situational parameters helpful in formulating race strategy and guiding real-time decision-making by competitors. This research also lays the foundation for future industrial engineering decision-support models and associated parameters as applied to race strategy and tactics.
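As a rough sketch of the kind of pairwise record such a duel database might hold, the snippet below turns per-crew passing times at successive 10-meter marks (invented numbers) into leader/margin records for one crew pair. The field names and figures are hypothetical and not taken from the dissertation.

```python
# Build leader/margin records for one crew pair at each 10-meter mark.
# Data and schema are hypothetical, for illustration only.
def duel_records(times_a, times_b, interval=10):
    """times_a, times_b: seconds at which each crew passes successive marks."""
    records = []
    for i, (ta, tb) in enumerate(zip(times_a, times_b)):
        records.append({
            "mark_m": i * interval,
            "margin_s": tb - ta,          # positive: crew A leads at this mark
            "leader": "A" if ta < tb else "B",
        })
    return records

# Toy example: two crews over the first 30 meters of a race.
for rec in duel_records([0.0, 2.1, 4.0, 5.8], [0.0, 2.2, 4.1, 5.7]):
    print(rec)
```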
3

Global Optimization of Monotonic Programs: Applications in Polynomial and Stochastic Programming.

Cheon, Myun-Seok 15 April 2005 (has links)
Monotonic optimization consists of minimizing or maximizing a monotonic objective function over a set of constraints defined by monotonic functions. Many optimization problems in economics and engineering exhibit monotonicity while lacking other useful properties, such as convexity. This thesis is concerned with the development and application of global optimization algorithms for monotonic optimization problems. First, we propose enhancements to an existing outer-approximation algorithm, called the Polyblock Algorithm, for monotonic optimization problems. The enhancements are shown to significantly improve the computational performance of the algorithm while retaining its convergence properties. Next, we develop a generic branch-and-bound algorithm for monotonic optimization problems. A computational study is carried out to compare the performance of the Polyblock Algorithm and variants of the proposed branch-and-bound scheme on a family of separable polynomial programming problems. Finally, we study an important class of monotonic optimization problems: probabilistically constrained linear programs. We develop a branch-and-bound algorithm that searches for a global solution to the problem. The basic algorithm is enhanced by domain reduction and cutting plane strategies to reduce the size of the partitions and hence tighten bounds. The proposed branch-reduce-cut algorithm exploits the monotonicity properties inherent in the problem and requires the solution of only linear programming subproblems. We provide convergence proofs for the algorithm. Some illustrative numerical results involving problems with discrete distributions are presented.
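For orientation, one common way to write the two problem classes mentioned in this abstract is sketched below; the notation is assumed for illustration and is not taken from the thesis.

```latex
% Generic monotonic program: f and each g_j are increasing on the box [a, b].
\max_{x \in [a,b] \subset \mathbb{R}^n} \; f(x)
\quad \text{s.t.} \quad g_j(x) \le 0, \qquad j = 1, \dots, m.

% Probabilistically (chance-) constrained linear program with random
% right-hand side \xi and reliability level p \in (0,1):
\min_{x \ge 0} \; c^{\top} x
\quad \text{s.t.} \quad \mathbb{P}\!\left( A x \ge \xi \right) \ge p.
```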
4

Hardness of Constraint Satisfaction and Hypergraph Coloring: Constructions of Probabilistically Checkable Proofs with Perfect Completeness

Huang, Sangxia January 2015 (has links)
A Probabilistically Checkable Proof (PCP) of a mathematical statement is a proof written in a special manner that allows for efficient probabilistic verification. The celebrated PCP Theorem states that for every family of statements in NP, there is a probabilistic verification procedure that checks the validity of a PCP proof by reading only 3 bits from it. This landmark theorem, and the works leading up to it, laid the foundation for many subsequent works in computational complexity theory, the most prominent among them being the study of inapproximability of combinatorial optimization problems. This thesis focuses on a broad class of combinatorial optimization problems called Constraint Satisfaction Problems (CSPs). In an instance of a CSP of arity k, we are given a set of variables taking values from some finite domain, and a set of constraints each involving a subset of at most k variables. The goal is to find an assignment that simultaneously satisfies as many constraints as possible. An alternative formulation of the goal that is commonly used is Gap-CSP, where the goal is to decide whether a CSP instance is satisfiable or far from satisfiable, where the exact meaning of being far from satisfiable varies depending on the problem. We first study Boolean CSPs, where the domain of the variables is {0,1}. The main question we study is the hardness of distinguishing satisfiable Boolean CSP instances from those for which no assignment satisfies more than some epsilon fraction of the constraints. Intuitively, as the arity increases, the CSP gets more complex and thus the hardness parameter epsilon should decrease. We show that for Boolean CSPs of arity k, it is NP-hard to distinguish satisfiable instances from those that are at most 2^{~O(k^{1/3})}/2^k-satisfiable. We also study coloring of graphs and hypergraphs. Given a graph or a hypergraph, a coloring is an assignment of colors to vertices such that all edges or hyperedges are non-monochromatic. The gap problem is to distinguish instances that are colorable with a small number of colors from those that require a large number of colors. For graphs, we prove that there exists a constant K_0 > 0 such that for any K >= K_0, it is NP-hard to distinguish K-colorable graphs from those that require 2^{Omega(K^{1/3})} colors. For hypergraphs, we prove that it is quasi-NP-hard to distinguish 2-colorable 8-uniform hypergraphs of size N from those that require 2^{(log N)^{1/4-o(1)}} colors. In terms of techniques, all these results are based on constructions of PCPs with perfect completeness, that is, PCPs where the probabilistic proof verification procedure always accepts a correct proof. Not only is this a very natural property for proofs, but it can also be an essential requirement in many applications. It has always been particularly challenging to construct PCPs with perfect completeness for NP statements due to limitations in techniques. Our improved hardness results build on and extend many of the current approaches. Our Boolean CSP result and graph coloring result were proved by adapting the Direct Sum of PCPs idea of Siu On Chan to the perfect completeness setting. Our proof of hypergraph coloring hardness improves and simplifies the recent work by Khot and Saket, in which they proposed the notion of superposition complexity of CSPs.
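To make the Gap-CSP quantity concrete, here is a tiny brute-force sketch that computes the largest fraction of constraints any single assignment of a toy Boolean CSP can satisfy. The instance is invented for illustration and has nothing to do with the PCP constructions in the thesis.

```python
# Brute-force computation of the best-achievable fraction of satisfied
# constraints for a tiny Boolean CSP (illustrative only; the thesis is about
# asymptotic hardness of this quantity, not about solving instances).
from itertools import product

def max_sat_fraction(num_vars, constraints):
    """constraints: list of predicates, each a function of the full assignment."""
    best = 0
    for bits in product((0, 1), repeat=num_vars):
        satisfied = sum(1 for c in constraints if c(bits))
        best = max(best, satisfied)
    return best / len(constraints)

# Example: three arity-3 constraints over variables x0..x3 (hypothetical instance).
constraints = [
    lambda x: x[0] ^ x[1] ^ x[2] == 1,   # XOR constraint
    lambda x: x[1] or x[2] or not x[3],  # OR constraint
    lambda x: x[0] != x[3],              # inequality constraint
]
print(max_sat_fraction(4, constraints))  # 1.0 here: this toy instance is satisfiable
```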
5

Sparse instances of hard problems

Dell, Holger 01 September 2011 (has links)
In this thesis, we use and refine methods of computational complexity theory to analyze the complexity of sparse instances, such as graphs with few edges or formulas with few constraints of bounded width. Two natural questions arise in this context: (a) Is there an efficient algorithm that reduces arbitrary instances of an NP-hard problem to equivalent, sparse instances? (b) Is there an algorithm that solves sparse instances of an NP-hard problem significantly faster than general instances can be solved? We formalize these questions for different problems and show that positive answers for these formalizations would lead to consequences in complexity theory that are considered unlikely. Question (a) is modeled by a communication process in which two players want to cooperatively decide an NP-hard language while communicating as little as possible. Under the complexity-theoretic hypothesis that coNP is not in NP/poly, our results imply surprisingly tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. We study question (b) for counting problems in the exponential-time setting. Assuming (variants of) the exponential time hypothesis (ETH), we obtain asymptotically tight, exponential lower bounds for well-studied #P-hard problems: computing the number of satisfying assignments of a 2-CNF formula, computing the number of all independent sets in a graph, computing the permanent of a matrix with entries 0 and 1, and evaluating the Tutte polynomial at fixed evaluation points.
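To make one of these counting problems concrete, here is a minimal brute-force counter of independent sets in a small graph. It runs in exponential time, which is precisely the behaviour the ETH-based lower bounds say cannot be avoided in general; the graph and code are illustrative only and not taken from the thesis.

```python
# Count independent sets of a small graph by exhaustive enumeration
# (O(2^n * m) time). Toy sketch of one of the #P-hard problems mentioned above.
from itertools import combinations

def count_independent_sets(n, edges):
    """Count subsets S of {0,...,n-1} containing no edge (the empty set counts)."""
    count = 0
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):
                count += 1
    return count

# Example: a path on 4 vertices, 0-1-2-3 (hypothetical input).
print(count_independent_sets(4, [(0, 1), (1, 2), (2, 3)]))  # prints 8
```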
