About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Kernelization and Enumeration: New Approaches to Solving Hard Problems

Meng, Jie May 2010
NP-hardness is a well-established theory for identifying the hardness of computational problems, and NP-hard problems are believed unlikely to admit polynomial-time algorithms. However, since many NP-hard problems are of practical significance, several approaches have been proposed to solve them: approximation algorithms, randomized algorithms, and heuristic algorithms. None of these approaches fully meets practical needs. Recently, parameterized computation and complexity has attracted a lot of attention and become a fruitful branch of the study of efficient algorithms. By taking advantage of the moderate parameter values found in many practical instances, we can design algorithms for NP-hard problems that are efficient in practice. In this dissertation, we discuss a new approach to designing efficient parameterized algorithms: kernelization. The motivation is that instances of small size are easier to solve. Roughly speaking, kernelization is a preprocessing step applied to input instances that can significantly reduce their size. We present a kernel of size 2k for the cluster editing problem, improving the previous best kernel of size 4k; we also present a linear kernel of size 7k + 2d for the d-cluster editing problem, the first linear kernel for that problem. The kernelization algorithm is simple and easy to implement. We propose a quadratic kernel for the pseudo-achromatic number problem, which implies that the problem is tractable in terms of parameterized complexity. We also study the more general vertex grouping problem and prove that it is intractable in terms of parameterized complexity. In practice, many problems ask for a set of good solutions rather than a single good solution. Motivated by this, we present a framework for studying enumerability in terms of parameterized complexity. We study three popular techniques for the design of parameterized algorithms and show that, combined with effective enumeration techniques, they can be adapted to design efficient enumeration algorithms.
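
The cluster editing kernels above rely on problem-specific reduction rules that the abstract does not spell out. As a hedged illustration of what a kernelization preprocessing step looks like in general, the sketch below implements the classic Buss rule for the (different, but standard) Vertex Cover problem: a vertex of degree greater than k must belong to every solution of size at most k, and once the rule no longer applies a yes-instance has at most k*k edges. Function names and the edge-list format are illustrative, not taken from the dissertation.

    # A minimal sketch of kernelization, assuming the standard Buss rule for
    # Vertex Cover (illustrative only; not the cluster-editing kernel of the thesis).

    def buss_kernel(edges, k):
        """Reduce a Vertex Cover instance (edges, k) to a kernel with O(k^2) edges.

        Returns (reduced_edges, reduced_k, forced_vertices), or None if the
        instance is already seen to have no vertex cover of size <= k.
        """
        edges = {frozenset(e) for e in edges}
        forced = set()
        changed = True
        while changed and k >= 0:
            changed = False
            # Degree of every vertex in the current edge set.
            degree = {}
            for e in edges:
                for v in e:
                    degree[v] = degree.get(v, 0) + 1
            # Rule: a vertex of degree > k must be in any solution of size <= k.
            high = [v for v, d in degree.items() if d > k]
            if high:
                v = high[0]
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
        if k < 0:
            return None
        # After the rule no longer applies, a yes-instance has at most k*k edges.
        if len(edges) > k * k:
            return None
        return edges, k, forced

    # Tiny usage example: a star with 5 leaves and budget k = 1.
    star = [(0, i) for i in range(1, 6)]
    print(buss_kernel(star, 1))  # center vertex forced, kernel becomes empty
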
2

Instance compression of parametric problems and related hierarchies

Chakraborty, Chiranjit January 2014
We define instance compressibility ([13, 17]) for parametric problems in the classes PH and PSPACE. We observe that the problem Σ_iCIRCUITSAT, deciding satisfiability of a quantified Boolean circuit with i − 1 alternations of quantifiers starting with an existential quantifier, is complete for parametric problems in the class Σ_i^p with respect to w-reductions, and that, analogously, the problem QBCSAT (Quantified Boolean Circuit Satisfiability) is complete for parametric problems in PSPACE with respect to w-reductions. We show the following results about these problems: 1. If CIRCUITSAT is non-uniformly compressible within NP, then Σ_iCIRCUITSAT is non-uniformly compressible within NP, for any i ≥ 1. 2. If QBCSAT is non-uniformly compressible (or even if satisfiability of quantified Boolean CNF formulae is non-uniformly compressible), then PSPACE ⊆ NP/poly and PH collapses to the third level. Next, we define Succinct Interactive Proof (Succinct IP) and, by adapting the proof of IP = PSPACE ([11, 6]), we show that QBCNFSAT (Quantified Boolean Formula (in CNF) Satisfiability) is in Succinct IP. Conversely, if QBCNFSAT has Succinct PCPs ([32]), the Polynomial Hierarchy (PH) collapses. After extending the notion of instance compression to higher classes, we study the hierarchical structure of parametric problems with respect to compressibility. For that purpose, we extend the existing definition of the VC-hierarchy ([13]) to parametric problems. We then consider a long list of natural NP problems and classify them into levels of the VC-hierarchy, establishing several new w-reductions along the way and pointing out a few interesting results, including the following. 1. CLIQUE is VC_1-complete (using the results in [14]). 2. SET SPLITTING and NAE-SAT are VC_2-complete. We also introduce a new complexity class VCE in this context, show some hardness and completeness results for this class, and compare the VC-hierarchy with other related hierarchies in the parameterized complexity domain. Next, we define the compression of counting problems and an analogous classification with respect to instance compression. We define the #VC-hierarchy for this purpose and classify a large number of natural counting problems with respect to this hierarchy by showing hardness and completeness results. Beyond popular NP problems, we also consider some practically interesting problems (e.g., #MULTICOLOURED CLIQUE, #SELECTED DOMINATING SET) and study their complexity in both decision and counting versions. We also consider a variety of circuit satisfiability problems (e.g., #MONOTONE WEIGHTED-CNFSAT, #EXACT DNF-SAT) and prove results about them with respect to the theory of instance compressibility.
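
The problems Σ_iCIRCUITSAT and QBCSAT are defined through alternating quantifiers over Boolean inputs. As a hedged, purely illustrative sketch of what "i − 1 alternations of quantifiers starting with an existential quantifier" means operationally, the brute-force evaluator below decides a quantified Boolean formula given as a quantifier prefix and a predicate; it runs in exponential time and is unrelated to the compression results of the thesis. All names are assumptions for illustration.

    # A minimal, illustrative brute-force evaluator for quantified Boolean formulas.
    # It only shows what alternating quantifier blocks mean; it is not from the thesis.

    from itertools import product

    def eval_qbf(prefix, blocks, predicate, assignment=()):
        """Evaluate a quantified Boolean formula.

        prefix    -- list of quantifiers, e.g. ['E', 'A', 'E'] (alternating blocks)
        blocks    -- list of block sizes; blocks[j] variables are bound by prefix[j]
        predicate -- function taking the full tuple of Boolean values
        """
        if not prefix:
            return predicate(assignment)
        q, rest_q = prefix[0], prefix[1:]
        n, rest_b = blocks[0], blocks[1:]
        results = (
            eval_qbf(rest_q, rest_b, predicate, assignment + values)
            for values in product((False, True), repeat=n)
        )
        return any(results) if q == 'E' else all(results)

    # Usage: "exists x, for all y: x or y" is true (take x = True).
    print(eval_qbf(['E', 'A'], [1, 1], lambda v: v[0] or v[1]))  # True
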
3

Parallelism with limited nondeterminism

Finkelstein, Jeffrey 05 March 2017
Computational complexity theory studies which computational problems can be solved with limited access to resources. The past fifty years have seen a focus on the relationship between intractable problems and efficient algorithms. However, the relationship between inherently sequential problems and highly parallel algorithms has not been as well studied. Are there efficient but inherently sequential problems that admit some relaxed form of highly parallel algorithm? In this dissertation, we develop the theory of structural complexity around this relationship for three common types of computational problems. Specifically, we show tradeoffs between time, nondeterminism, and parallelizability. By clearly defining the notions and complexity classes that capture our intuition for parallelizable and sequential problems, we create a comprehensive framework for rigorously proving parallelizability and non-parallelizability of computational problems. This framework provides the means to prove whether otherwise tractable problems can be effectively parallelized, a need highlighted by the current growth of multiprocessor systems. The views adopted by this dissertation—alternate approaches to solving sequential problems using approximation, limited nondeterminism, and parameterization—can be applied practically throughout computer science.
4

Representations and Parameterizations of Combinatorial Auctions

Loker, David Ryan January 2007
Combinatorial auctions (CAs) are an important mechanism for allocating multiple items while allowing agents to specify preferences over bundles of items. To communicate these preferences, agents submit bids, each consisting of one or more items and a value indicating the agent's preference for those items. The process of determining the allocation of items is known as the winner determination problem (WDP), which for CAs is NP-complete in the general case. We consider two distinct graph representations of a CA: the bid graph and the item graph. In a bid graph, vertices represent bids, and two vertices are adjacent if and only if the corresponding bids share at least one item. In an item graph, each vertex represents a unique item, and the items of any submitted bid must induce a connected subgraph of the item graph. We introduce a new definition of combinatorial auction equivalence by declaring two CAs equivalent if and only if their bid graphs are isomorphic. Parameterized complexity theory can be used to further distinguish between NP-hard problems. To apply parameterized complexity theory to a problem, we aim to find one or more parameters describing some aspect of the problem such that, when these parameters are fixed, either the problem remains hard (fixed-parameter intractable) or the problem can be solved in polynomial time (fixed-parameter tractable). We analyze WDP using bid graphs within the formal scope of parameterized complexity theory. This approach has not previously been used to analyze WDP for CAs, although it has been used to solve set packing, which is related to WDP for CAs and is discussed in detail. We investigate several parameterizations of WDP; some are shown to be fixed-parameter intractable, while others are fixed-parameter tractable. We also analyze WDP when the graph class of the bid graph is restricted. Finally, we discuss relationships between item graphs and bid graphs. Although both graphs can represent the same problem, there is little previous work analyzing direct relationships between them. Our discussion begins with a result by Conitzer et al. [7], which focuses on the item graph representation and its treewidth, a property of a graph that measures how close the graph is to a tree. By a result of Gavril, if an item graph has treewidth one, then the bid graph must be chordal [16]. To apply the other direction of Gavril's theorem, we use our new definition of CA equivalence: with this definition, Gavril's result shows that if the bid graph of a CA is chordal, then we can construct an item graph of treewidth one for some equivalent CA.
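
As a hedged illustration of the bid-graph representation just described (vertices are bids; two bids are adjacent exactly when they share an item), the sketch below builds a bid graph from a list of bids and uses it to check whether a chosen set of bids is conflict-free, i.e. an independent set in the bid graph. The bid format and function names are assumptions for illustration, not the thesis's data structures, and the sketch does not solve WDP itself.

    # Illustrative sketch: building a bid graph for a combinatorial auction.
    # A bid is (items, value); two bids conflict iff they share an item.

    from itertools import combinations

    def bid_graph(bids):
        """Return adjacency sets: edge between bids i and j iff they share an item."""
        adj = {i: set() for i in range(len(bids))}
        for i, j in combinations(range(len(bids)), 2):
            if set(bids[i][0]) & set(bids[j][0]):
                adj[i].add(j)
                adj[j].add(i)
        return adj

    def is_feasible_allocation(bids, chosen):
        """A set of winning bids is feasible iff it is independent in the bid graph."""
        adj = bid_graph(bids)
        return all(j not in adj[i] for i, j in combinations(chosen, 2))

    # Usage: three bids over items {a, b, c}.
    bids = [({'a', 'b'}, 5), ({'b', 'c'}, 4), ({'c'}, 3)]
    print(is_feasible_allocation(bids, [0, 2]))  # True: bids 0 and 2 share no item
    print(is_feasible_allocation(bids, [0, 1]))  # False: both contain item 'b'
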
5

Harnessing tractability in constraint satisfaction problems

Carbonnel, Clément 07 December 2016
The Constraint Satisfaction Problem (CSP) is a fundamental NP-complete problem with many applications in artificial intelligence. It has enjoyed considerable scientific attention in the past decades due to its practical usefulness and the deep theoretical questions it relates to. However, there is a wide gap between practitioners, who develop solving techniques that are efficient on industrial instances but exponential in the worst case, and theorists, who design sophisticated polynomial-time algorithms for restrictions of CSP defined by certain algebraic properties. In this thesis we attempt to bridge this gap by providing polynomial-time algorithms to test membership in a selection of major tractable classes. Even when an instance does not belong to one of these classes, we investigate the possibility of efficiently decomposing it into tractable subproblems through the lens of parameterized complexity. Finally, we propose a general framework for adapting the concept of kernelization, which is central to parameterized complexity but has hitherto rarely been used in practice, to the context of constraint reasoning. Preliminary experiments on this last contribution show promising results.
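
The kernelization framework for constraint reasoning proposed in the thesis is not detailed in this abstract. As a much simpler, hedged illustration of preprocessing a CSP instance, the sketch below performs a standard arc-consistency style filtering pass on binary constraints (a textbook technique, not the thesis's contribution): values with no support under some constraint are removed from domains until a fixed point is reached.

    # Illustrative arc-consistency filtering for binary CSPs (a standard AC-3-like pass).
    # Shown only as an example of CSP preprocessing, not the thesis's kernelization.

    def arc_consistency(domains, constraints):
        """domains: dict var -> set of values.
        constraints: dict (x, y) -> predicate on (value_x, value_y); both directions given.
        Prunes unsupported values from domains; returns False if some domain empties."""
        queue = list(constraints)
        while queue:
            x, y = queue.pop()
            allowed = constraints[(x, y)]
            # Keep only values of x that have at least one support in y's domain.
            supported = {vx for vx in domains[x]
                         if any(allowed(vx, vy) for vy in domains[y])}
            if supported != domains[x]:
                domains[x] = supported
                if not supported:
                    return False
                # Revisit arcs pointing into x, since its domain shrank.
                queue.extend((z, w) for (z, w) in constraints if w == x)
        return True

    # Usage: x < y with small domains.
    doms = {'x': {1, 2, 3}, 'y': {1, 2}}
    cons = {('x', 'y'): lambda a, b: a < b, ('y', 'x'): lambda a, b: a > b}
    print(arc_consistency(doms, cons), doms)  # True {'x': {1}, 'y': {2}}
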
6

Exploring Algorithms for Branch Decompositions of Planar Graphs

Dinh, Hiep 29 December 2008
No description available.
7

Obtížné problémy vzhledem k parametru různorodost sousedství / Hard problems with respect to the parameter neighborhood diversity

Koutecký, Martin January 2013
Parameterized complexity is a branch of computer science dealing with the computational complexity of problems measured not only by the length of their input but also by some parameter of the input. Neighborhood diversity is a recently introduced parameter describing a certain structure of a graph. This parameter is attractive for research especially because some problems which are hard with respect to other parameters incomparable with neighborhood diversity become fixed-parameter tractable with respect to neighborhood diversity. In this thesis we show fixed-parameter tractability for three problems that are hard with respect to treewidth; this constitutes the main part of the thesis and is our original work. The thesis also contains an overview of other interesting problems and a survey of the state of the art in the area of parameters for sparse and dense graphs.
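
The abstract does not define neighborhood diversity; under the standard definition, two vertices have the same type if they have identical neighborhoods outside the pair, and the parameter is the number of distinct types. The hedged sketch below computes it directly from that definition; the function name and graph encoding are assumptions for illustration, not taken from the thesis.

    # Illustrative computation of neighborhood diversity (standard definition:
    # u and v have the same type iff N(u) \ {v} == N(v) \ {u}; the parameter
    # is the number of types).

    def neighborhood_diversity(n, edges):
        """Return (number of types, list of vertex classes) for a graph on 0..n-1."""
        neigh = {v: set() for v in range(n)}
        for u, v in edges:
            neigh[u].add(v)
            neigh[v].add(u)

        classes = []  # each class: vertices sharing a neighborhood type
        for v in range(n):
            placed = False
            for cls in classes:
                u = cls[0]
                if neigh[u] - {v} == neigh[v] - {u}:
                    cls.append(v)
                    placed = True
                    break
            if not placed:
                classes.append([v])
        return len(classes), classes

    # Usage: the complete bipartite graph K_{2,3} has neighborhood diversity 2.
    edges = [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4)]
    print(neighborhood_diversity(5, edges))  # (2, [[0, 1], [2, 3, 4]])
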
8

Résultats Positifs et Négatifs en Approximation et Complexité Paramétrée / Positive and Negative Results in Approximation and Parameterized Complexity

Bonnet, Edouard 20 November 2014
Many real-life problems are NP-hard and cannot be solved in polynomial time. The two main paradigms for coping with them nonetheless are approximation and parameterized complexity. In this thesis, we present a new technique called greediness-for-parameterization and use it to establish or improve the parameterized complexity of many problems, as well as to obtain parameterized algorithms for cardinality-constrained problems on bipartite graphs. Aiming at establishing negative results on approximability in subexponential time and in parameterized time, we introduce several methods of instance sparsification that preserve approximation. We combine these "sparsifiers" with known or new reductions to achieve our goal. Finally, we present complexity results for games such as Bridge and Havannah.
9

Approximation et complexité paramétrée de problèmes d’optimisation dans les graphes : partitions et sous-graphes / Approximation and parameterized complexity of graph optimisation problems : partitions and subgraphs

Watrigant, Rémi 02 October 2014
The theory of NP-completeness tells us that for many optimization problems there is no hope of finding an efficient algorithm that computes an optimal solution. Based on this, two classical approaches have been developed to deal with these problems. The first, polynomial-time approximation, consists in designing efficient algorithms that compute a solution close to an optimal one. The second, parameterized complexity, consists in designing exact algorithms whose combinatorial explosion is captured by a carefully chosen parameter of the instance. The goal of this thesis is twofold. First, we study and apply classical methods from these two domains in order to obtain positive and negative results for two graph optimization problems: a partitioning problem called Sparsest k-Compaction, and a cardinality-constrained subgraph problem called Sparsest k-Subgraph. Then, we present how methods from these two domains have been combined in recent years into a concept called parameterized approximation. In particular, we study the links between approximation and kernelization algorithms.
