11

Approximation, complexité paramétrée et stratégies de résolution de problèmes d'affectation multidimensionnelle / Approximability, parameterized complexity and solving strategies of some multidimensional assignment problems

Duvillié, Guillerme 07 October 2016 (has links)
In this thesis, we focus on Wafer-to-Wafer integration problems, which arise in IC manufacturing. During the production of three-dimensional processors, dies have to be superimposed. Until recently, the dies were engraved on a silicon disk called a wafer, then cut, tested and sorted to discard the faulty dies, and finally superimposed one onto another. However, superimposing wafers instead of dies presents several technical and financial advantages. Since faulty dies can only be discarded once the wafer is cut, superimposing two wafers can place a faulty die on top of a viable one; in this case the resulting stack of dies is considered faulty. It follows that a bad assignment between the wafers can lead to a disastrous yield. In order to minimize the number of faulty die stacks, a "failure map" of each wafer is generated during a test phase. This map gives the location of the faulty dies on the wafer. The objective is then to take advantage of this map to define an assignment of the wafers to each other that matches as many failures as possible. This problem can be modeled as a multidimensional assignment problem. Each wafer can be seen as a vector with as many coordinates as there are dies engraved on it: a coordinate set to zero marks a faulty die, while a coordinate set to one indicates a viable one. Each lot of wafers is represented by a set of vectors. Formally, an instance of a Wafer-to-Wafer integration problem is given by m sets of n p-dimensional binary vectors. The objective is to partition the vectors into n disjoint m-tuples, each tuple containing exactly one vector per set; an m-tuple represents a stack of wafers. Every m-tuple can in turn be represented by a p-dimensional vector, each coordinate of which is the bitwise AND of the corresponding coordinates of the vectors composing the m-tuple. In other words, a coordinate of the representative vector equals one if and only if that coordinate equals one in every vector of the tuple; hence a die stack is viable if and only if all the dies composing it are viable. The objective is then to maximize the overall number of ones, or to minimize the overall number of zeros. The first part of the thesis is theoretical. We study the complexity of the considered versions of the problem with respect to natural parameters such as m, n, p, or the maximum number of zeros per vector. We show that these problems can encode more classical problems such as Maximum Clique, Minimum Vertex Cover and k-Dimensional Matching, which yields several negative results from the point of view of classical complexity, approximability and parameterized complexity. We also provide positive results for some specific cases of the problem. In the second part, we focus on solving the problem in practice. We provide and compare several Integer Linear Programming formulations, and we also study the practical performance of some approximation algorithms with guarantees detailed in the theoretical part.
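
To make the assignment model above concrete, here is a small illustrative sketch (not code from the thesis; the toy instance and helper names are invented). It evaluates the Wafer-to-Wafer objective by brute force: each stack is the bitwise AND of one wafer map per lot, and the score is the total number of ones.

```python
# Illustrative brute force for the Wafer-to-Wafer objective described above.
# An instance has m lots of n binary p-dimensional wafer maps (1 = viable die,
# 0 = faulty die); a solution forms n stacks with one wafer per lot, and a
# stack's map is the bitwise AND of its wafers. Usable only for tiny instances;
# the thesis studies ILP formulations and approximation algorithms instead.
from itertools import permutations, product

def stack_profile(wafers):
    """Bitwise AND of the selected wafer maps (one map per lot)."""
    profile = wafers[0]
    for w in wafers[1:]:
        profile = [a & b for a, b in zip(profile, w)]
    return profile

def max_ones_bruteforce(lots):
    """Maximum total number of ones over all ways of forming the n stacks."""
    n = len(lots[0])
    best = -1
    # The first lot stays in place; every other lot is permuted against it.
    for perms in product(permutations(range(n)), repeat=len(lots) - 1):
        total = 0
        for j in range(n):
            stack = [lots[0][j]] + [lots[i + 1][p[j]] for i, p in enumerate(perms)]
            total += sum(stack_profile(stack))
        best = max(best, total)
    return best

# Toy instance: m = 2 lots, n = 2 wafers per lot, p = 4 dies per wafer.
lot_a = [[1, 1, 0, 1], [0, 1, 1, 1]]
lot_b = [[1, 1, 0, 1], [0, 1, 1, 1]]
print(max_ones_bruteforce([lot_a, lot_b]))  # 6: matching identical failure maps is optimal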
12

Výpočetní složitost v teorii grafů / Computational complexity in graph theory

Doucha, Martin January 2012 (has links)
This work introduces two new parameterizations of graph problems generalizing vertex cover, which fill part of the space between vertex cover and clique-width in the hierarchy of graph parameterizations. We also study the parameterized complexity of Hamiltonian path and cycle, vertex coloring, precoloring extension and equitable coloring under these two parameterizations. With the exception of precoloring extension, which is W[1]-hard in one case, all of the problems listed above are tractable for both parameterizations. The boundary between tractability and intractability of these problems can therefore be moved closer to parameterization by clique-width.
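
For context, the baseline parameter that these new parameterizations generalize is the vertex cover number. The classical 2^k branching algorithm below is the textbook illustration of what fixed-parameter tractability with respect to vertex cover looks like; it is a generic sketch, not taken from the thesis.

```python
# Classical O(2^k * (n + m)) branching for k-Vertex Cover, the baseline
# parameter generalized by the two parameterizations studied in this thesis.
def vertex_cover_branching(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover of size <= k."""
    # Find an uncovered edge; if none remain, everything is already covered.
    uncovered = next(iter(edges), None)
    if uncovered is None:
        return True
    if k == 0:
        return False
    u, v = uncovered
    # Branch: any vertex cover must contain u or v.
    rest_u = [(a, b) for (a, b) in edges if a != u and b != u]
    rest_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return vertex_cover_branching(rest_u, k - 1) or vertex_cover_branching(rest_v, k - 1)

# A 4-cycle needs 2 vertices to cover all edges.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover_branching(c4, 1))  # False
print(vertex_cover_branching(c4, 2))  # True
```
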
13

Strukturální vlastnosti grafů a efektivní algoritmy: Problémy separující parametry / Structural properties of graphs and efficient algorithms: Problems Between Parameters

Knop, Dušan January 2017 (has links)
Parameterized complexity has become, over the last two decades, one of the most important subfields of computational complexity. Structural graph parameters (widths) play an important role both in graph theory and in (parameterized) algorithm design. By studying some concrete problems we exhibit the connection between structural graph parameters and parameterized tractability. We do this by examining tractability and hardness results for the Target Set Selection, Minimum Length Bounded Cut, and other problems. In the Minimum Length Bounded Cut problem we are given a graph, a source, a sink, and a positive integer L, and the task is to remove edges from the graph so that the distance between the source and the sink exceeds L in the resulting graph. We show that an optimal solution to the Minimum Length Bounded Cut problem can be computed in time f(k)·n, where f is a computable function and k denotes the tree-depth of the input graph. On the other hand, we prove that (under the assumption that FPT ≠ W[1]) no such algorithm can exist if the parameter k is the tree-width of the input graph. Currently only a few such problems are known. The Target Set Selection problem exhibits the same phenomenon for the vertex cover number and...
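
The Minimum Length Bounded Cut statement can be made concrete with a tiny brute-force sketch (illustrative only; the instance and helper names are invented, and the thesis's actual contribution is the f(k)·n algorithm for tree-depth, not this enumeration).

```python
# Sketch of the Minimum Length Bounded Cut problem: remove as few edges as
# possible so that dist(s, t) exceeds L in the remaining graph.
from itertools import combinations
from collections import deque

def dist(n, edges, s, t):
    """BFS distance from s to t in an undirected graph; inf if disconnected."""
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {s: 0}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return seen[v]
        for w in adj[v]:
            if w not in seen:
                seen[w] = seen[v] + 1
                queue.append(w)
    return float("inf")

def min_length_bounded_cut(n, edges, s, t, L):
    """Smallest number of edges whose removal makes dist(s, t) exceed L (brute force)."""
    for size in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), size):
            kept = [e for i, e in enumerate(edges) if i not in removed]
            if dist(n, kept, s, t) > L:
                return size
    return len(edges)

# s = 0 and t = 4, joined by a short path 0-1-4 and a long path 0-2-3-4.
edges = [(0, 1), (1, 4), (0, 2), (2, 3), (3, 4)]
print(min_length_bounded_cut(5, edges, 0, 4, 2))  # 1: cutting one edge of the short path suffices
```
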
14

A Parameterized Algorithm for Upward Planarity Testing of Biconnected Graphs

Chan, Hubert January 2003 (has links)
We can visualize a graph by producing a geometric representation of the graph in which each node is represented by a single point on the plane, and each edge is represented by a curve that connects its two endpoints. Directed graphs are often used to model hierarchical structures; in order to visualize the hierarchy represented by such a graph, it is desirable that a drawing of the graph reflects this hierarchy. This can be achieved by drawing all the edges in the graph such that they all point in an upwards direction. A graph that has a drawing in which all edges point in an upwards direction and in which no edges cross is known as an upward planar graph. Unfortunately, testing if a graph is upward planar is NP-complete. Parameterized complexity is a technique used to find efficient algorithms for hard problems, and in particular, NP-complete problems. The main idea is that the complexity of an algorithm can be constrained, for the most part, to a parameter that describes some aspect of the problem. If the parameter is fixed, the algorithm will run in polynomial time. In this thesis, we investigate contracting an edge in an upward planar graph that has a specified embedding, and show that we can determine whether or not the resulting embedding is upward planar given the orientation of the clockwise and counterclockwise neighbours of the given edge. Using this result, we then show that under certain conditions, we can join two upward planar graphs at a vertex and obtain a new upward planar graph. These two results expand on work done by Hutton and Lubiw. Finally, we show that a biconnected graph has at most k!·8^(k-1) planar embeddings, where k is the number of triconnected components. By using an algorithm by Bertolazzi et al. that tests whether a given embedding is upward planar, we obtain a parameterized algorithm, where the parameter is the number of triconnected components, for testing the upward planarity of a biconnected graph. This algorithm runs in O(k!·8^k·n^3) time.
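
The parameterized nature of the running time is visible in the bound itself: only the factor depending on the number k of triconnected components grows quickly, while the dependence on n stays polynomial. A quick, purely illustrative evaluation of that factor:

```python
# Evaluate the combinatorial factor k! * 8**(k-1) from the embedding bound above,
# to show that the blow-up depends only on the parameter k and not on n.
from math import factorial

def embedding_bound(k):
    return factorial(k) * 8 ** (k - 1)

for k in range(1, 6):
    print(k, embedding_bound(k))
# 1 1
# 2 16
# 3 384
# 4 12288
# 5 491520
```
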
16

Décompositions de graphes : quelques limites et obstructions / Graph decompositions: some limits and obstructions

Chapelle, Mathieu 05 December 2011 (has links)
Graph decompositions of small width are often used to solve efficiently problems that are hard on general graphs. In this thesis, we focus on some limits of these decompositions and on the construction of obstructions certifying large width. First, we give a generic algorithm unifying the construction of obstructions for several graph widths, running in XP time when parameterized by the considered width. In particular, it gives the first algorithm efficiently computing an obstruction to tree-width, in time O(n^(tw+4)). Secondly, we study the parameterized complexity of the [σ, ρ]-DOMINATING SET problem, a generalization of several domination problems characterized by two sets of integers σ and ρ. Previous studies considered only cases where this problem is FPT when parameterized by tree-width. In this work, we show that this is not always the case: for some choices of the sets σ and ρ, the problem becomes W[1]-hard when parameterized by tree-width. Finally, we study the computational complexity of a new coloring problem, named k-ADDITIVE COLORING, which combines graph theory and number theory. We show that this new problem is NP-complete for every fixed k ≥ 4, while it can be solved in polynomial time on trees for arbitrary (non-fixed) k.
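
The abstract characterizes [σ, ρ]-domination only abstractly. The sketch below assumes the standard definition (every vertex inside the set S must have a number of neighbours in S lying in σ, and every vertex outside S a number lying in ρ) and simply verifies it, to show how the two integer sets encode classical domination variants.

```python
# Verifier for the (assumed standard) [sigma, rho]-domination condition:
# each v in S needs |N(v) ∩ S| in sigma, each v outside S needs |N(v) ∩ S| in rho.
def is_sigma_rho_dominating(adj, S, sigma, rho):
    """adj: dict vertex -> set of neighbours; S, sigma, rho: sets."""
    for v, neigh in adj.items():
        inside = len(neigh & S)
        if (v in S and inside not in sigma) or (v not in S and inside not in rho):
            return False
    return True

# Path 0-1-2-3; "any count" is approximated by a set large enough for this graph.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
anything = set(range(10))
positive = anything - {0}
print(is_sigma_rho_dominating(path, {1, 3}, anything, positive))  # ordinary dominating set: True
print(is_sigma_rho_dominating(path, {0, 3}, {0}, positive))       # independent dominating set: True
```
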
17

Alliances In Graphs: Parameterized Algorithms And On Partitioning Series-parallel Graphs

Enciso, Rosa 01 January 2009 (has links)
Alliances are used to denote agreements between members of a group with similar interests. Alliances can occur between nations, biological sequences, business cartels, and other entities. The notion of alliances in graphs was first introduced by Kristiansen, Hedetniemi, and Hedetniemi. A defensive alliance in a graph G = (V, E) is a non-empty set S ⊆ V where, for all x ∈ S, |N[x] ∩ S| ≥ |N[x] − S|. Consequently, every vertex that is a member of a defensive alliance has at least as many vertices defending it as there are vertices attacking it. Alliances can be used to model a variety of applications such as classification problems, communities in the web, distributed protocols, etc. [Sha01, FLG00, SX07]. In [GK98, GK00], Gerber and Kobler introduced the problem of partitioning a graph into strong defensive alliances for the first time as the "Satisfactory Graph Partitioning (SGP)" problem. In his dissertation, Shafique used the problem of partitioning a graph into alliances to model problems in data clustering. Decision problems for several types of alliances and alliance partitions have been shown to be NP-complete. However, because of their applicability, it is of interest to study methods to overcome the complexity of these problems. In this thesis, we will present a variety of algorithms for finding alliances in different families of graphs with a running time that is polynomial in terms of the size of the input, and allowing exponential running time as a function of a chosen parameter. This study is guided by the theory of parameterized complexity introduced by Rod Downey and Michael Fellows in [DF99]. In addition to parameterized algorithms for alliance-related problems, we study the partition of series-parallel graphs into alliances. The class of series-parallel graphs is a special class in graph theory, since many problems known to be NP-complete on general graphs have been shown to have polynomial time algorithms on series-parallel graphs [ZLL04, Hoj95, DS99, HHL87, TNS82]. For example, the problem of finding a minimum defensive alliance has been shown to have a linear time algorithm when restricted to series-parallel graphs. Series-parallel graphs have also been the focus of study in a wide range of applications including CMOS layout and scheduling problems [ML86, Oud97]. Our motivation is driven by clustering properties that can be modeled with alliances. We observe that partitioning series-parallel graphs into alliances of roughly the same size can be used to partition task graphs to minimize the communication between processors and balance the workload of each processor. We present a characterization of series-parallel graphs that allow a partition into defensive alliances and a subclass of series-parallel graphs with satisfactory partitions.
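
The defensive alliance condition quoted above is easy to check directly. The sketch below (illustrative only, with an invented toy graph) counts, for each member of S, its defenders (closed neighbours inside S) against its attackers (neighbours outside S).

```python
# Direct check of the defensive alliance condition: every member of S has at
# least as many defenders (closed neighbours in S, itself included) as
# attackers (neighbours outside S).
def is_defensive_alliance(adj, S):
    """adj: dict vertex -> set of neighbours (no self-loops); S: set of vertices."""
    if not S:
        return False
    for x in S:
        closed = adj[x] | {x}            # N[x]
        defenders = len(closed & S)      # |N[x] ∩ S|
        attackers = len(closed - S)      # |N[x] − S|
        if defenders < attackers:
            return False
    return True

# 4-cycle 0-1-2-3: the two endpoints of any edge form a defensive alliance.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_defensive_alliance(c4, {0, 1}))  # True: 2 defenders vs 1 attacker for each member
print(is_defensive_alliance(c4, {0}))     # False: 1 defender vs 2 attackers
```
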
18

Finding Interesting Subgraphs with Guarantees

Cadena, Jose 29 January 2018 (has links)
Networks are a mathematical abstraction of the interactions between a set of entities, with extensive applications in social science, epidemiology, bioinformatics, and cybersecurity, among others. There are many fundamental problems when analyzing network data, such as anomaly detection, dense subgraph mining, motif finding, information diffusion, and epidemic spread. A common underlying task in all these problems is finding an "interesting subgraph"; that is, finding a part of the graph, usually small relative to the whole, that optimizes a score function and has some property of interest, such as connectivity or a minimum density. Finding subgraphs that satisfy common constraints of interest, such as the ones above, is computationally hard in general, and state-of-the-art algorithms for many problems in network analysis are heuristic in nature. These methods are fast and usually easy to implement. However, they come with no theoretical guarantees on the quality of the solution, which makes it difficult to assess how the discovered subgraphs compare to an optimal solution, which in turn affects the data mining task at hand. For instance, in anomaly detection, solutions with low anomaly score lead to sub-optimal detection power. On the other end of the spectrum, there have been significant advances in approximation algorithms for these challenging graph problems in the theoretical computer science community. However, these algorithms tend to be slow, difficult to implement, and they do not scale to the large datasets that are common nowadays. The goal of this dissertation is to develop scalable algorithms with theoretical guarantees for various network analysis problems, where the underlying task is to find subgraphs with constraints. We find interesting subgraphs with guarantees by adapting techniques from parameterized complexity, convex optimization, and submodular optimization. These techniques are well known in the algorithm design literature, but they lead to slow and impractical algorithms. One unifying theme in the problems that we study is that our methods are scalable without sacrificing the theoretical guarantees of these algorithm design techniques. We accomplish this combination of scalability and rigorous bounds by exploiting properties of the problems we are trying to optimize, decomposing or compressing the input graph to a manageable size, and parallelization. We consider network analysis problems for both static and dynamic network models, and we illustrate the power of our methods in applications such as public health, sensor data analysis, and event detection using social media data. / Ph. D.
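
As one concrete instance of a scalable subgraph-mining routine that still carries a guarantee, the sketch below implements the classical greedy peeling procedure for the densest subgraph (maximize |E|/|V|), which is known to give a 1/2-approximation. It is offered only as an illustration of the "guarantees without heavy machinery" theme; the function names and toy graph are invented, and it is not an algorithm from the dissertation.

```python
# Greedy peeling for the densest subgraph (maximum average density |E|/|V|):
# repeatedly delete a minimum-degree vertex and remember the best density seen.
# This classical procedure is a 1/2-approximation.
def densest_subgraph_peeling(adj):
    """adj: dict vertex -> set of neighbours. Returns (best_density, best_vertex_set)."""
    adj = {v: set(ns) for v, ns in adj.items()}    # work on a copy
    edges = sum(len(ns) for ns in adj.values()) // 2
    best_density, best_set = 0.0, set(adj)
    while adj:
        density = edges / len(adj)
        if density >= best_density:
            best_density, best_set = density, set(adj)
        v = min(adj, key=lambda u: len(adj[u]))    # peel a minimum-degree vertex
        edges -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best_density, best_set

# A triangle with a pendant vertex: the triangle (density 1.0) is densest.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(densest_subgraph_peeling(g))  # (1.0, {0, 1, 2})
```
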
19

Aspects algorithmiques de la comparaison d'éléments biologiques / Algorithmic aspects of biological entities comparison

Sikora, Florian 30 September 2011 (has links)
To better understand the complex links between genotype and phenotype, one method is to study the relations between different biological entities (between proteins, between metabolites, ...). These relations form what is called a biological network, which is represented algorithmically by a graph. In this thesis we are mainly interested in the problem of finding a motif (a multiset of colors) in a vertex-colored graph representing a biological network. Such motifs usually correspond to a set of elements that perform the same biological function and that may have been preserved through evolution. We continue the algorithmic study of this problem and of its variants (which allow more biological flexibility), identifying the algorithmically hard instances and studying different ways to cope with this hardness (parameterized complexity, instance reduction, approximation, ...). We also propose a plugin integrated into the Cytoscape software to solve this problem efficiently, which we test on real data. We are also interested in several problems from comparative genomics. The scientific approach remains the same: starting from the formalization of a biological problem, determine its algorithmically hard instances and propose solutions to cope with this hardness (or prove that such solutions are unlikely to exist under strong hypotheses).
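
The motif-search problem described above can be stated very compactly in code. The brute-force sketch below (illustrative only, with an invented toy network) looks for a set of vertices that is connected and whose colours, counted with multiplicity, match the motif; the thesis and its Cytoscape plugin of course rely on parameterized algorithms rather than this enumeration.

```python
# Brute-force sketch of the Graph Motif problem: find a vertex set inducing a
# connected subgraph whose multiset of colours equals the motif.
from itertools import combinations
from collections import Counter

def connected(adj, S):
    """True iff the vertices in S induce a connected subgraph (DFS inside S)."""
    S = set(S)
    start = next(iter(S))
    stack, seen = [start], {start}
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w in S and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == S

def find_motif_occurrence(adj, colour, motif):
    """adj: vertex -> set of neighbours; colour: vertex -> colour; motif: list of colours."""
    target = Counter(motif)
    for S in combinations(adj, len(motif)):
        if Counter(colour[v] for v in S) == target and connected(adj, S):
            return set(S)
    return None

# Toy network: a path 0-1-2-3 with colours attached to the vertices.
g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
colour = {0: "red", 1: "blue", 2: "red", 3: "green"}
print(find_motif_occurrence(g, colour, ["red", "blue", "red"]))  # {0, 1, 2}
```
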
20

On the parameterized complexity of finding short winning strategies in combinatorial games

Scott, Allan Edward Jolicoeur 29 April 2010 (has links)
A combinatorial game is a game in which all players have perfect information and there is no element of chance; some well-known examples include Othello, checkers, and chess. When people play combinatorial games they develop strategies; a strategy can be viewed as a function that takes as input a game position and returns a move to make from that position. A strategy is winning if it guarantees the player victory despite whatever legal moves any opponent may make in response. The classical complexity of deciding whether a winning strategy exists for a given position in some combinatorial game has been well studied, both in general and for many specific combinatorial games. The vast majority of these problems are, depending on the specific properties of the game or class of games being studied, complete for either PSPACE or EXP. In the parameterized complexity setting, Downey and Fellows initiated a study of "short" (or k-move) winning strategy problems. This can be seen as a generalization of "mate-in-k" chess problems, in which the goal is to find a strategy which checkmates your opponent within k moves regardless of how he responds. In their monograph on parameterized complexity, Downey and Fellows suggested that AW[*] was the "natural home" of short winning strategy problems, but there has been little work in this field since then. In this thesis, we study the parameterized complexity of finding short winning strategies in combinatorial games. We consider both the general case and several specific cases. In the general case we show that many short games are as hard classically as their original variants, and that finding a short winning strategy is hard for AW[P] when the rules are implemented as succinct circuits. For specific short games, we show that endgame problems for checkers and Othello are in FPT, that alternating hitting set, Hex, and the non-endgame problem for Othello are in AW[*], and that short chess is AW[*]-complete. We also consider pursuit-evasion parameterized by the number of cops. We show that two variants of pursuit-evasion are AW[*]-hard, and that the short versions of these problems are AW[*]-complete.
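
The "short winning strategy" notion can be illustrated with a bounded-depth game-tree search. The sketch below is illustrative only: it uses Nim rather than any of the games studied in the thesis, the helper names are invented, and k here counts plies (moves by both players). It decides whether the player to move can force a win within k plies, in the same spirit as a mate-in-k problem.

```python
# Bounded-depth search for a "short" winning strategy in Nim (last player to
# take an object wins): can the mover force a win within k plies in total?
from functools import lru_cache

def nim_moves(heaps):
    """All positions reachable in one move: remove 1..h objects from one heap."""
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            yield heaps[:i] + (h - take,) + heaps[i + 1:]

@lru_cache(maxsize=None)
def wins_within(heaps, k):
    """True iff the player to move can force a win using at most k plies."""
    if k <= 0:
        return False
    for nxt in nim_moves(heaps):
        if all(v == 0 for v in nxt):      # this move takes the last object
            return True
        # Opponent moves next; we win only if every opponent reply leaves a
        # position we can still win from within the remaining plies.
        if all(wins_within(reply, k - 2) for reply in nim_moves(nxt)):
            return True
    return False

print(wins_within((1, 2), 3))  # True: take one from the 2-heap, then mirror
print(wins_within((1, 1), 3))  # False: (1, 1) is a losing position for the mover
```
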
