31

Obtížné problémy vzhledem k parametru různorodost sousedství / Hard problems with respect to the parameter neighborhood diversity

Koutecký, Martin January 2013 (has links)
Parameterized complexity is a part of computer science dealing with the computational complexity of problems measured not only by the length of their input but also by some parameter of the input. Neighborhood diversity is a recently introduced parameter describing a certain structure of a graph. This parameter is attractive for research especially because some problems which are hard with respect to other parameters that are incomparable with neighborhood diversity become fixed-parameter tractable with respect to neighborhood diversity. In this thesis we show fixed-parameter tractability for three problems that are hard with respect to treewidth. This constitutes the main part of the thesis and is our original work. The thesis also contains an overview of other interesting problems, together with a survey of the state of the art in the area of parameters for sparse and dense graphs.
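To make the parameter concrete, here is a minimal sketch computing neighborhood diversity under its standard definition (two vertices share a type when their neighborhoods, each ignoring the other vertex, coincide); the adjacency-dict encoding and function name are illustrative, not from the thesis.

    def neighborhood_diversity(adj):
        """Number of neighborhood-diversity classes of a graph.

        adj maps each vertex to the set of its neighbors.  Vertices u and v
        have the same type iff N(u) - {v} == N(v) - {u}; the parameter is
        the number of distinct types.  Type equality is an equivalence
        relation, so comparing against one representative per class suffices.
        """
        classes = []  # each class stored as a list of same-type vertices
        for v in adj:
            for cls in classes:
                u = cls[0]
                if adj[u] - {v} == adj[v] - {u}:
                    cls.append(v)
                    break
            else:
                classes.append([v])
        return len(classes)

    # A path on 4 vertices has 4 types; a complete graph on 4 has only 1.
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    clique = {v: {u for u in range(4) if u != v} for v in range(4)}
    print(neighborhood_diversity(path), neighborhood_diversity(clique))  # 4 1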
32

Résultats Positifs et Négatifs en Approximation et Complexité Paramétrée / Positive and Negative Results in Approximation and Parameterized Complexity

Bonnet, Edouard 20 November 2014 (has links)
Several real-life problems are NP-hard and cannot be solved in polynomial time. Two notable paradigms for tackling them anyway are approximation and parameterized complexity. In this thesis, we present a new technique called greediness-for-parameterization and use it to establish or improve the parameterized complexity of many problems, as well as to obtain parameterized algorithms for cardinality-constrained problems in bipartite graphs. Aiming at establishing negative results on approximability in subexponential time and in parameterized time, we introduce several methods of instance sparsification that preserve approximation, and we combine these "sparsifiers" with known or new reductions to achieve our goal. Finally, we present hardness results for games such as Bridge and Havannah.
33

Approximation et complexité paramétrée de problèmes d’optimisation dans les graphes : partitions et sous-graphes / Approximation and parameterized complexity of graph optimisation problems: partitions and subgraphs

Watrigant, Rémi 02 October 2014 (has links)
The theory of NP-completeness tells us that for many optimization problems there is no hope of finding an efficient algorithm computing an optimal solution. Two classical approaches have been developed to deal with this obstacle, each making a compromise on one of the two criteria. The first one, called polynomial-time approximation, consists in designing efficient algorithms that return a solution close to an optimal one. The second one, called parameterized complexity, consists in designing algorithms that return an optimal solution but whose combinatorial explosion is captured by a carefully chosen parameter of the instance. The goal of this thesis is twofold. First, we study and apply classical methods from these two domains in order to obtain positive and negative results for two graph optimization problems: a partitioning problem called Sparsest k-Compaction, and a cardinality-constrained subgraph problem called Sparsest k-Subgraph. Then, we present how methods from these two domains have been combined in recent years into the concept of parameterized approximation. In particular, we study the links between approximation and kernelization algorithms.
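As a concrete illustration of the Sparsest k-Subgraph problem named above, here is a small brute-force sketch under its usual formulation (choose k vertices minimizing the number of edges they induce); the function name and graph encoding are illustrative assumptions, and the exhaustive search is only practical for tiny instances.

    from itertools import combinations

    def sparsest_k_subgraph(n, edges, k):
        """Brute-force Sparsest k-Subgraph: among all k-vertex subsets of
        a graph on vertices 0..n-1, return one inducing the fewest edges.
        Runs in O(C(n, k) * k^2) set lookups -- tiny instances only."""
        edge_set = {frozenset(e) for e in edges}
        best_subset, best_count = None, float("inf")
        for subset in combinations(range(n), k):
            count = sum(1 for u, v in combinations(subset, 2)
                        if frozenset((u, v)) in edge_set)
            if count < best_count:
                best_subset, best_count = subset, count
        return best_subset, best_count

    # A triangle {0,1,2} plus an edge {3,4}: no 3 vertices are pairwise
    # non-adjacent here, so the optimum induces exactly one edge.
    print(sparsest_k_subgraph(5, [(0, 1), (1, 2), (0, 2), (3, 4)], 3))
    # ((0, 1, 3), 1)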
34

Approximation, complexité paramétrée et stratégies de résolution de problèmes d'affectation multidimensionnelle / Approximability, parameterized complexity and solving strategies of some multidimensional assignment problems

Duvillié, Guillerme 07 October 2016 (has links)
In this thesis, we focus on Wafer-to-Wafer integration problems, which arise in IC manufacturing. During the production of three-dimensional processors, dies have to be superimposed. Until recently, the dies were engraved on a silicon disk called a wafer, then cut, tested and sorted to discard the faulty dies, and finally superimposed on one another. However, superimposing wafers instead of dies presents several technical and financial advantages. Since faulty dies can only be discarded once the wafer is cut, superimposing two wafers can superimpose a faulty die with a viable one, and the resulting stack of dies is then considered faulty. It follows that a bad assignment between the wafers can lead to a disastrous yield. In order to minimize the number of faulty die stacks, a "failure map" of each wafer is generated during a test phase, giving the locations of the faulty dies on the wafer. The objective is then to use these maps to assign the wafers to each other so as to match as many failures as possible. This problem can be modeled as a multidimensional assignment problem. Each wafer is seen as a vector with as many coordinates as there are dies engraved on it: a coordinate equal to zero marks a faulty die, while a one marks a viable die, and each lot of wafers is represented by a set of vectors. Formally, an instance of a Wafer-to-Wafer integration problem is given by m sets of n binary p-dimensional vectors. The objective is to partition the vectors into n disjoint m-tuples, each containing exactly one vector per set; these m-tuples represent the stacks. Every m-tuple can itself be represented by a p-dimensional binary vector, each coordinate computed as the bitwise AND of the corresponding coordinates of the vectors composing the tuple. In other words, a coordinate of the representative vector equals one if and only if that coordinate equals one in every vector of the tuple, so a die stack is viable if and only if all the dies composing it are viable. The objective is then to maximize the overall number of ones, or equivalently to minimize the overall number of zeros. The thesis has two main parts. A theoretical part studies the complexity of the various versions of the problem with respect to natural parameters such as m, n, p, or the maximum number of zeros per vector. We show, among other things, that these problems can encode more classical problems such as Maximum Clique, Minimum Vertex Cover or k-Dimensional Matching, which yields several negative results from the points of view of classical complexity, approximability and parameterized complexity. We also provide positive results for special cases of the problem. The second part addresses the practical solving of the problem: we provide and compare several Integer Linear Programming formulations, and we also study the practical performance of some of the approximation algorithms with performance guarantees detailed in the theoretical part.
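The AND-and-count objective described above is easy to state in code. The following sketch is a toy with illustrative names, not the thesis's solver: it scores a candidate assignment by building each stack's representative vector via coordinate-wise AND and counting the surviving ones, then brute-forces the m = 2 case over all pairings of two small lots.

    from itertools import permutations

    def stack_score(stacks):
        """Given n m-tuples of equal-length 0/1 wafer vectors, AND each
        tuple coordinate-wise and return the total number of ones."""
        total = 0
        for stack in stacks:
            for coords in zip(*stack):
                total += all(coords)  # 1 iff every die at this position is viable
        return total

    # Usage: m = 2 lots of n = 3 wafers with p = 4 dies each.  Brute force
    # over all pairings of lot A with lot B (n! candidates) keeps the best.
    lot_a = [(1, 1, 0, 1), (0, 1, 1, 1), (1, 0, 1, 0)]
    lot_b = [(1, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 1)]
    best = max(
        (list(zip(lot_a, perm)) for perm in permutations(lot_b)),
        key=stack_score,
    )
    print(stack_score(best), best)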
35

Výpočetní složitost v teorii grafů / Computational complexity in graph theory

Doucha, Martin January 2012 (has links)
This work introduces two new parameterizations of graph problems generalizing vertex cover which fill part of the space between vertex cover and clique-width in the hierarchy of graph parameterizations. We also study the parameterized complexity of Hamiltonian path and cycle, vertex coloring, precoloring extension and equitable coloring parameterized by these two parameterizations. With the exception of precoloring extension, which is W[1]-hard in one case, all the other problems listed above are tractable for both parameterizations. The boundary between tractability and intractability of these problems can therefore be moved closer to parameterization by clique-width.
36

Strukturální vlastnosti grafů a efektivní algoritmy: Problémy separující parametry / Structural properties of graphs and efficient algorithms: Problems Between Parameters

Knop, Dušan January 2017 (has links)
Parameterized complexity has become over the last two decades one of the most important subfields of computational complexity. Structural graph parameters (widths) play an important role both in graph theory and in (parameterized) algorithm design. By studying some concrete problems we exhibit the connection between structural graph parameters and parameterized tractability. We do this by examining tractability and hardness results for the Target Set Selection, Minimum Length Bounded Cut, and other problems. In the Minimum Length Bounded Cut problem we are given a graph, a source, a sink, and a positive integer L, and the task is to remove edges from the graph such that the distance between the source and the sink exceeds L in the resulting graph. We show that an optimal solution to the Minimum Length Bounded Cut problem can be computed in time f(k)n, where f is a computable function and k denotes the tree-depth of the input graph. On the other hand, we prove that (under the assumption that FPT ≠ W[1]) no such algorithm can exist if the parameter k is the tree-width of the input graph. Currently only few such problems are known. The Target Set Selection problem exhibits the same phenomenon for the vertex cover number and...
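To make Minimum Length Bounded Cut concrete, here is a small exhaustive sketch of the definition just given (remove the fewest edges so that the source-sink distance exceeds L). The names and the unweighted undirected setup are illustrative assumptions; the search is exponential in the number of edges, unlike the f(k)n tree-depth algorithm of the thesis.

    from itertools import combinations
    from collections import deque

    def dist(n, edges, s, t):
        """BFS distance from s to t in an undirected graph; inf if disconnected."""
        adj = {v: [] for v in range(n)}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen[w] = seen[u] + 1
                    queue.append(w)
        return seen.get(t, float("inf"))

    def min_length_bounded_cut(n, edges, s, t, L):
        """Smallest edge set whose removal makes the s-t distance exceed L.
        Exhaustive over subsets by increasing size -- tiny instances only."""
        for size in range(len(edges) + 1):
            for cut in combinations(edges, size):
                cut_set = set(cut)
                rest = [e for e in edges if e not in cut_set]
                if dist(n, rest, s, t) > L:
                    return cut
        return None  # only reachable when s == t, where no cut helps

    # On a 4-cycle 0-1-2-3-0 with s=0, t=2, L=2, both length-2 paths must
    # be broken, so the optimal cut has two edges.
    print(min_length_bounded_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 0, 2, 2))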
37

Model Checking Parameterized Timed Systems

Mahata, Pritha January 2005 (has links)
In recent years, there has been much advancement in the area of verification of infinite-state systems. A system can have an infinite state space due to unbounded data structures such as counters, clocks, stacks, queues, etc. It may also be infinite-state due to parameterization, i.e., the possibility of having an arbitrary number of components in the system. For parameterized systems, we are interested in checking correctness of all the instances in one verification step.

In this thesis, we consider systems which contain both sources of infiniteness, namely: (a) real-valued clocks and (b) parameterization. More precisely, we consider two models: (a) the timed Petri net (TPN) model, which is an extension of the classical Petri net model; and (b) the timed network (TN) model, in which an arbitrary number of timed automata run in parallel.

We consider verification of safety properties for timed Petri nets using forward analysis. Since forward analysis is necessarily incomplete, we provide a semi-algorithm augmented with an acceleration technique in order to make it terminate more often on practical examples. Then we consider a number of problems which are generalisations of the corresponding ones for timed automata and Petri nets. For instance, we consider zenoness, where we check the existence of an infinite computation with a finite duration. We also consider two variants of the boundedness problem: syntactic boundedness, in which both live and dead tokens are considered, and semantic boundedness, where only live tokens are considered. We show that the former problem is decidable while the latter is not. Finally, we show undecidability of LTL model checking both for dense and discrete timed Petri nets.

Next we consider timed networks. We show undecidability of safety properties in case each component is equipped with two or more clocks. This result contrasts with the previous decidability result for the case where each component has a single clock. We also show that the problem is decidable when clocks range over the discrete time domain; this decidability result holds when the processes have any finite number of clocks. Furthermore, we outline the border between decidability and undecidability of safety for TNs by considering several syntactic and semantic variants.
38

Circuit Timing and Leakage Analysis in the Presence of Variability

Heloue, Khaled R. 15 February 2011 (has links)
Driven by the need for faster devices and higher transistor densities, technology trends have pushed transistor dimensions into the deep sub-micron regime. This continued scaling, however, has led to many challenges facing digital integrated circuits today. One important challenge is the increased variations in the underlying process and environmental parameters, and the significant impact of this variability on circuit timing and leakage power, making it increasingly difficult to design circuits that achieve a required specification. Given these challenges, there is a need for computer-aided design (CAD) techniques that can predict and analyze circuit performance (timing and leakage) accurately and efficiently in the presence of variability. This thesis presents new techniques for variation-aware timing and leakage analysis that address different aspects of the problem. First, on the timing front, a pre-placement statistical static timing analysis technique is presented. This technique can be applied at an early stage of design, when within-die correlations are still unknown. Next, a general parameterized static timing analysis framework is proposed, which supports a general class of nonlinear delay models and handles both random (process) parameters with arbitrary distributions and non-random (environmental) parameters. Following this, a parameterized static timing analysis technique is presented, which can capture circuit delay exactly at any point in the parameter space. This is enabled by identifying all potentially critical paths in the circuit through novel and efficient pruning algorithms that improve on the state of the art in both theoretical complexity and runtime. Also on the timing front, a novel distance-based metric for robustness is proposed. This metric can be used to quantify the susceptibility of parameterized timing quantities to failure, thus enabling designers to fix the nodes with the smallest robustness values in order to improve the overall design robustness. Finally, on the leakage front, a statistical technique for early-mode and late-mode leakage estimation is presented. The novelty lies in the random gate concept, which allows for efficient and accurate full-chip leakage estimation. In its simplest form, the leakage estimation reduces to finding the area under a scaled version of the within-die channel length auto-correlation function, which can be done in constant time.
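The closing remark — leakage estimation reducing to the area under a scaled version of the within-die channel-length autocorrelation function — can be illustrated numerically. The sketch below is an assumption-laden toy rather than the thesis's model: it posits an exponential autocorrelation with an assumed correlation length and scale factor, and integrates it with the trapezoidal rule.

    import numpy as np

    # Assumed model: within-die channel-length autocorrelation decaying
    # exponentially with distance, rho(d) = exp(-d / corr_length).
    corr_length = 0.5   # assumed correlation length (mm)
    scale = 2.0         # assumed scaling factor from the leakage model
    die_size = 10.0     # assumed die extent (mm)

    d = np.linspace(0.0, die_size, 1001)
    rho = scale * np.exp(-d / corr_length)

    # Trapezoidal area under the scaled autocorrelation curve.
    area = float(np.sum((rho[:-1] + rho[1:]) / 2.0 * np.diff(d)))

    # Closed form for this assumed exponential, for comparison:
    exact = scale * corr_length * (1.0 - np.exp(-die_size / corr_length))
    print(area, exact)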
40

A Parameterized Algorithm for Upward Planarity Testing of Biconnected Graphs

Chan, Hubert January 2003 (has links)
We can visualize a graph by producing a geometric representation of the graph in which each node is represented by a single point on the plane, and each edge is represented by a curve that connects its two endpoints. Directed graphs are often used to model hierarchical structures; in order to visualize the hierarchy represented by such a graph, it is desirable that a drawing of the graph reflects this hierarchy. This can be achieved by drawing all the edges in the graph such that they all point in an upwards direction. A graph that has a drawing in which all edges point in an upwards direction and in which no edges cross is known as an upward planar graph. Unfortunately, testing if a graph is upward planar is NP-complete. Parameterized complexity is a technique used to find efficient algorithms for hard problems, and in particular, NP-complete problems. The main idea is that the complexity of an algorithm can be constrained, for the most part, to a parameter that describes some aspect of the problem. If the parameter is fixed, the algorithm will run in polynomial time. In this thesis, we investigate contracting an edge in an upward planar graph that has a specified embedding, and show that we can determine whether or not the resulting embedding is upward planar given the orientation of the clockwise and counterclockwise neighbours of the given edge. Using this result, we then show that under certain conditions, we can join two upward planar graphs at a vertex and obtain a new upward planar graph. These two results expand on work done by Hutton and Lubiw. Finally, we show that a biconnected graph has at most k!·8^(k-1) planar embeddings, where k is the number of triconnected components. By using an algorithm by Bertolazzi et al. that tests whether a given embedding is upward planar, we obtain a parameterized algorithm, where the parameter is the number of triconnected components, for testing the upward planarity of a biconnected graph. This algorithm runs in O(k!·8^k·n^3) time.
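The two bounds quoted in this abstract already convey the shape of a fixed-parameter algorithm: the k!·8^k factor depends only on the number of triconnected components, while the dependence on n stays cubic. A tiny sketch with hypothetical helper names, evaluating only the formulas stated above, makes the growth visible.

    from math import factorial

    def embedding_bound(k):
        """Thesis bound: a biconnected graph with k triconnected
        components has at most k! * 8**(k - 1) planar embeddings."""
        return factorial(k) * 8 ** (k - 1)

    def runtime_bound(k, n):
        """Operation-count shape of the test: O(k! * 8**k * n**3).
        The combinatorial explosion is confined to k; growth in n
        stays polynomial (cubic)."""
        return factorial(k) * 8 ** k * n ** 3

    # Doubling n scales the bound by 8; adding one triconnected
    # component multiplies it by 8 * (k + 1).
    for k in (1, 2, 3, 4):
        print(k, embedding_bound(k), runtime_bound(k, 1000))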
