About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world.
1

Independent set problems and odd-hole-preserving graph reductions

Warren, Jeffrey Scott 15 May 2009 (has links)
Methods are described that implement a branch-and-price decomposition approach to solving the maximum weight independent set (MWIS) problem. The approach was first described by Warrier et al.; our contributions to that line of research are presented here. The decomposition calls for the exact solution of the MWIS problem on induced subgraphs of the original graph, and the focus of our contribution is the use of chordal graphs as the induced subgraphs in this solution framework. Three combinatorial branch-and-bound solvers for the MWIS problem are described. All use weighted clique covers to generate upper bounds, and all branch according to the method of Balas and Yu. One extends and speeds up the method of Babel. A second modifies a method of Balas and Xue to produce clique covers that share structural similarities with those produced by Babel. Each of these improves on its predecessor. A third solver, a hybrid of the other two, yields the best known results on some graphs. The related matter of deciding the perfection or imperfection of a graph is also addressed. By the Strong Perfect Graph Theorem, this problem reduces to detecting odd holes and odd anti-holes or proving their absence. Techniques are provided that, for a given graph, find in polynomial time subgraphs that contain odd holes whenever the given graph does. These techniques, together with some basic structural results on such subgraphs, narrow the search for odd holes. Performance results for the three new MWIS solvers demonstrate that the hybrid solver outperforms its clique-cover-based ancestors and, in some cases, the best current open-source solver. The techniques for narrowing the search for odd holes are shown to provide a polynomial-time reduction in the size of the input required to decide the perfection or imperfection of a graph.
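The weighted clique cover bound that all three solvers rely on can be sketched in a few lines. This is a generic greedy variant for illustration (the function name and tie-breaking rule are ours), not the Babel or Balas-Xue procedure from the dissertation: each clique in the cover is given a weight, and since an independent set uses at most one vertex per clique, the total cover weight upper-bounds the MWIS value.

```python
def clique_cover_bound(weights, adj):
    """Greedy weighted clique cover upper bound for MWIS.

    weights: dict vertex -> weight; adj: dict vertex -> set of neighbors.
    Each round grows a clique greedily from the heaviest uncovered vertex,
    gives it the minimum residual weight of its members, and subtracts.
    """
    residual = dict(weights)          # weight of each vertex not yet covered
    bound = 0
    while True:
        uncovered = [v for v in residual if residual[v] > 0]
        if not uncovered:
            return bound
        uncovered.sort(key=lambda v: -residual[v])
        clique = []
        for v in uncovered:           # add v if adjacent to all members so far
            if all(v in adj[u] for u in clique):
                clique.append(v)
        pi = min(residual[v] for v in clique)   # clique's cover weight
        bound += pi
        for v in clique:
            residual[v] -= pi
```

On a triangle with weights 3, 2, 1 the bound is 3, which matches the optimum; on sparser graphs the bound is looser, which is what branching then resolves.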
2

Novel approaches for solving large-scale optimization problems on graphs

Trukhanov, Svyatoslav 15 May 2009 (has links)
This dissertation considers a class of closely related NP-hard optimization problems on graphs that arise in many important applications, including network-based data mining, analysis of the stock market, social networks, coding theory, fault diagnosis, molecular biology, biochemistry, and genomics. In particular, the problems of interest include the classical maximum independent set problem (MISP) and maximum clique problem (MCP), their vertex-weighted versions, as well as novel optimization models that can be viewed as practical relaxations of their classical counterparts. The concept of a clique has been a popular instrument in the analysis of networks; it is, essentially, an idealized model of a "closely connected group", or cluster. At the same time, the restrictive nature of its definition makes the clique model impractical in many applications. This motivated the development of clique relaxation models, each relaxing a different property of a clique. While still possessing some clique-like properties, clique relaxations are not as "perfect" as cliques; on the other hand, they do not exhibit the disadvantages associated with a clique. Using clique relaxations allows one to compromise between ideality and flexibility, the usual trade-off an engineer faces when applying theoretical knowledge to practical problems in industry. The clique relaxation models studied in this dissertation were first proposed in the literature on social network analysis; however, they had not been well investigated from a mathematical programming perspective. This dissertation considers new techniques for solving the maximum weight independent set problem (MWISP) and clique relaxation problems and investigates their effectiveness from theoretical and computational perspectives.
The main results obtained in this work include (i) developing a scale-reduction approach for MWISP based on the concept of critical set and comparing it theoretically with other approaches; (ii) obtaining theoretical complexity results for clique relaxation problems; (iii) developing algorithms for solving the clique relaxation problems exactly; (iv) carrying out computational experiments to demonstrate the performance of the proposed approaches, and, finally, (v) applying the obtained theoretical results to several real-life problems.
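To make "relaxing a property of a clique" concrete, here is a membership check for the k-plex, one well-known clique relaxation from the social network analysis literature (offered as a representative example; whether it is among the specific models treated in the dissertation is not stated here): a vertex set S is a k-plex if every member is adjacent to at least |S| - k other members, so a clique is exactly a 1-plex.

```python
def is_k_plex(S, adj, k):
    """Return True iff S is a k-plex in the graph given by adjacency sets:
    every vertex of S has at least |S| - k neighbors inside S."""
    S = set(S)
    return all(len(adj[v] & S) >= len(S) - k for v in S)
```

A path a-b-c fails the 1-plex (clique) test because its endpoints are non-adjacent, but passes the 2-plex test; larger k admits progressively looser "clusters".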
3

A branch, price, and cut approach to solving the maximum weighted independent set problem

Warrier, Deepak 17 September 2007 (has links)
The maximum weighted independent set problem (MWISP) is one of the most well-known and well-studied NP-hard problems in combinatorial optimization. In the first part of the dissertation, I explore efficient branch-and-price (B&P) approaches to solve MWISP exactly. B&P is a useful integer-programming tool for solving NP-hard optimization problems. Specifically, I look at vertex- and edge-disjoint decompositions of the underlying graph; MWISPs on the resulting subgraphs are, on average, less challenging to solve. I use the B&P framework to solve MWISP on the original graph G, using these specially constructed subproblems to generate columns. I demonstrate that the vertex-disjoint partitioning scheme is effective for relatively sparse graphs. I also show that the edge-disjoint approach is less effective than the vertex-disjoint scheme because its associated Dantzig-Wolfe decomposition (DWD) reformulation entails a slow rate of convergence. In the second part of the dissertation, I address convergence properties associated with DWD. I discuss prevalent methods for improving the rate of convergence of DWD, implement specific methods in application to the edge-disjoint B&P scheme, and show that these methods improve the rate of convergence. In the third part of the dissertation, I focus on identifying new cut-generation methods within the B&P framework; such methods have not been explored in the literature. I present two new methodologies for generating generic cutting planes within the B&P framework. These techniques are not limited to MWISP and can be used in general applications of B&P. The first methodology generates cuts by identifying faces (facets) of subproblem polytopes and lifting the associated inequalities; the second computes lift-and-project (L&P) cuts within B&P. I demonstrate the feasibility of both approaches and present preliminary computational tests of each.
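One reason vertex-disjoint subproblems are useful in such a decomposition is a simple bound: restricting any independent set to one part of a vertex-disjoint partition keeps it independent, so the per-part optima sum to an upper bound on the whole graph's optimum. A brute-force sketch of that bound (ours, for tiny graphs only; the dissertation's B&P machinery is far more involved):

```python
from itertools import combinations

def mwis_brute(vertices, weights, adj):
    """Exact MWIS by enumeration (exponential; tiny graphs only)."""
    best = 0
    vs = list(vertices)
    for r in range(len(vs) + 1):
        for S in combinations(vs, r):
            # S is independent iff no pair of its vertices is adjacent
            if all(v not in adj[u] for u, v in combinations(S, 2)):
                best = max(best, sum(weights[v] for v in S))
    return best

def partition_bound(parts, weights, adj):
    """Sum of per-part MWIS optima over a vertex-disjoint partition:
    a valid upper bound on MWIS of the whole graph."""
    return sum(mwis_brute(part, weights, adj) for part in parts)
```

On a 4-cycle split into two edges the bound of 1 + 1 = 2 is already tight; pricing in B&P exploits exactly this kind of per-part structure.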
4

A Parallel Aggregation Algorithm for Inter-Grid Transfer Operators in Algebraic Multigrid

Garcia Hilares, Nilton Alan 13 September 2019 (has links)
As finite element discretizations grow ever larger to address real-world problems, there is an increasing need for fast algorithms. Nowadays there are many GPU/CPU parallel approaches to solving such problems. Multigrid methods can be used to solve large-scale problems directly, or, often better, to precondition the conjugate gradient method. The effectiveness of a multigrid algorithm relies on its inter-grid transfer operators. In this thesis we focus on the aggregation approach, discussing how different aggregation strategies affect the convergence rate. Based on these discussions, we propose an alternative parallel aggregation algorithm to improve convergence. We also provide numerous experimental results comparing different aggregation approaches, multigrid methods, and conjugate gradient iteration counts, showing that our proposed algorithm performs better in both serial and parallel.

Modeling real-world problems incurs a high computational cost because the mathematical models involve large-scale data manipulation, so we need fast and efficient algorithms. Nowadays there are many high-performance approaches to these problems. One such method is the multigrid algorithm. This approach models a physical domain using a hierarchy of grids, so the effectiveness of these approaches relies on how well data can be transferred from grid to grid. In this thesis, we focus on the aggregation approach, which clusters a grid's vertices according to its connections. We also provide an alternative parallel aggregation algorithm to give a faster solution. We show numerous experimental results that compare different aggregation approaches and multigrid methods, showing that our proposed algorithm performs better in serial and parallel than other popular implementations.
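A minimal serial sketch of the classic greedy two-pass aggregation scheme this line of work builds on (a textbook pattern in smoothed-aggregation AMG, not the parallel algorithm proposed in the thesis): pass one seeds aggregates from vertices whose whole neighborhood is still free; pass two attaches leftovers to a neighboring aggregate.

```python
def greedy_aggregate(adj, n):
    """Two-pass greedy aggregation of graph vertices 0..n-1.

    adj: dict vertex -> set of neighbors. Returns vertex -> aggregate id.
    """
    agg = {}
    next_id = 0
    # Pass 1: a vertex with a fully unaggregated neighborhood seeds
    # a new aggregate consisting of itself and its neighbors.
    for v in range(n):
        if v not in agg and all(u not in agg for u in adj[v]):
            agg[v] = next_id
            for u in adj[v]:
                agg[u] = next_id
            next_id += 1
    # Pass 2: leftover vertices join any neighboring aggregate.
    for v in range(n):
        if v not in agg:
            for u in adj[v]:
                if u in agg:
                    agg[v] = agg[u]
                    break
    return agg
```

On a path of five vertices this produces two aggregates; the choice of seeding order is exactly the kind of decision that becomes contentious in a parallel setting.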
5

Facets of conflict hypergraphs

Maheshwary, Siddhartha 25 August 2008 (has links)
We study the facial structure of the independent set polytope using the concept of conflict hypergraphs. A conflict hypergraph is a hypergraph whose vertices correspond to the binary variables and whose edges correspond to covers in the constraint matrix of the independent set polytope. Various structures, such as cliques, odd holes, odd anti-holes, webs, and anti-webs, are identified on the conflict hypergraph; these hypergraph structures are shown to be generalizations of the traditional graph structures. Valid inequalities are derived from these hypergraph structures, and the conditions under which they define facets are studied. Chvátal-Gomory ranks are derived for the odd-hole and clique inequalities. To test the hypergraph cuts, we conduct computational experiments on market-share (also referred to as market-split) problems. These instances consist of 100% dense multiple-knapsack constraints; they are small in size but extremely hard to solve by traditional means, a difficulty attributed mainly to their dense and symmetric structure. We employ a special branching strategy in combination with the hypergraph inequalities to solve many of the particularly difficult instances. Results are reported for serial as well as parallel implementations.
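The two classical graph inequalities that the hypergraph structures generalize can be checked directly on a fractional point; a small sketch of the checks (the hypergraph versions in the thesis are more general):

```python
EPS = 1e-9  # tolerance for fractional LP values

def violates_clique_ineq(x, clique):
    """Clique inequality for the independent set polytope:
    sum of x[v] over a clique C must be at most 1."""
    return sum(x[v] for v in clique) > 1 + EPS

def violates_odd_hole_ineq(x, hole):
    """Odd-hole inequality: sum of x[v] over an odd hole H
    must be at most (|H| - 1) / 2."""
    assert len(hole) % 2 == 1 and len(hole) >= 5
    return sum(x[v] for v in hole) > (len(hole) - 1) / 2 + EPS
```

The all-1/2 point on a 5-hole has value 2.5 against a right-hand side of 2, the canonical example of a fractional point cut off by an odd-hole inequality but not by edge constraints.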
6

Symmetry breaking in congested models: lower and upper bounds

Riaz, Talal 01 August 2019 (has links)
A fundamental issue in many distributed computing problems is the need for nodes to distinguish themselves from their neighbors, in a process referred to as symmetry breaking. Many well-known problems, such as Maximal Independent Set (MIS), t-Ruling Set, Maximal Matching, and (Δ+1)-Coloring, belong to the class of problems that require symmetry breaking. These problems have been studied extensively in the LOCAL model, which assumes arbitrarily large message sizes, but not as much in the CONGEST and k-machine models, which assume messages of size O(log n) bits. This dissertation focuses on finding upper and lower bounds for symmetry-breaking problems, such as MIS and t-Ruling Set, in these congested models. Chapter 2 shows that an MIS can be computed in O(√(log n log log n)) rounds for graphs of constant arboricity in the CONGEST model. Chapter 3 shows that the t-ruling set problem, for t ≥ 3, can be computed in o(log n) rounds in the CONGEST model; moreover, a 2-ruling set can be computed in o(log n) rounds for a large range of values of the maximum degree of the graph. In the k-machine model, k machines must work together to solve a problem on an arbitrary n-node graph, where n is typically much larger than k. Chapter 4 shows that any algorithm in the BEEP model (which assumes 'primitive' single-bit messages) with message complexity M and round complexity T can be simulated in O(t(M/k^2 + T) poly(log n)) rounds in the k-machine model. Using this result, it is shown that MIS, Minimum Dominating Set (MDS), and Minimum Connected Dominating Set (MCDS) can all be solved in O(poly(log n) m/k^2) rounds in the k-machine model, where m is the number of edges in the input graph. A 2-ruling set can be computed even faster, in O((n/k^2 + k) poly(log n)) rounds, in the k-machine model.
On the other hand, using information-theoretic techniques and a reduction to a communication complexity problem, an Ω(n/(k^2 poly(log n))) round lower bound for MIS in the k-machine model is also shown. As far as we know, this is the first lower bound in the k-machine model for a symmetry-breaking problem. Chapter 5 focuses on the Max Clique problem in the CONGEST model. Max Clique is trivially solvable in one round in the LOCAL model, since each node can share its entire neighborhood with all neighbors in a single round. In the CONGEST model, however, nodes have to choose what to communicate and along which links; in a sense, symmetry breaking is forced upon them by the bandwidth constraints. Chapter 5 shows that an O(n^{3/5})-approximation to Max Clique in the CONGEST model can be computed in O(1) rounds. The dissertation ends with open questions in Chapter 6.
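For intuition about the round structure such algorithms build on, the classic randomized symmetry-breaking scheme usually attributed to Luby can be simulated sequentially; this is our simplified single-machine simulation, not any of the CONGEST or k-machine algorithms from the dissertation. Each round, every live vertex draws a random value; local minima join the MIS and are removed along with their neighbors.

```python
import random

def luby_mis(adj, n, seed=0):
    """Sequential simulation of Luby-style randomized MIS rounds.

    adj: dict vertex -> set of neighbors over vertices 0..n-1.
    """
    rng = random.Random(seed)
    live = set(range(n))
    mis = set()
    while live:
        r = {v: rng.random() for v in live}
        # a vertex wins the round if its value beats all live neighbors
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners
        for v in winners:
            live.discard(v)
            live -= adj[v]          # neighbors of MIS vertices drop out
    return mis
```

Each round removes at least one vertex (the global minimum always wins), and with large messages this runs in O(log n) rounds in expectation; the congested models constrain how those random values and removals can be communicated.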
7

Development of a branch and price approach involving vertex cloning to solve the maximum weighted independent set problem

Sachdeva, Sandeep 12 April 2006 (has links)
We propose a novel branch-and-price (B&P) approach to solve the maximum weighted independent set problem (MWISP). Our approach uses clones of vertices to create edge-disjoint partitions from vertex-disjoint partitions. We solve the MWISP on sub-problems based on these edge-disjoint partitions using a B&P framework, which coordinates sub-problem solutions by involving an equivalence relationship between a vertex and each of its clones. We present test results for standard instances and randomly generated graphs for comparison. We show analytically and computationally that our approach gives tight bounds and it solves both dense and sparse graphs quite quickly.
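The cloning idea can be sketched in a few lines: starting from a vertex-disjoint partition, each cross edge is assigned to one part and its foreign endpoint is cloned into that part, so the resulting subproblems cover the edges disjointly. This is a simplified illustration of the construction; the assignment rule (lower-indexed part wins) and the data layout are ours, and the B&P master problem that equates each vertex with its clones is not shown.

```python
def edge_disjoint_via_cloning(parts, edges):
    """Turn a vertex-disjoint partition into edge-disjoint subproblems.

    parts: list of vertex sets; edges: list of (u, v) pairs.
    Each cross edge goes to the lower-indexed endpoint's part, and the
    other endpoint is cloned into that part.
    """
    where = {v: i for i, p in enumerate(parts) for v in p}
    sub = [{'vertices': set(p), 'edges': set(), 'clones': set()}
           for p in parts]
    for u, v in edges:
        i, j = where[u], where[v]
        k = min(i, j)
        sub[k]['edges'].add((u, v))
        if i != j:                       # cross edge: clone foreign endpoint
            foreign = v if k == i else u
            sub[k]['vertices'].add(foreign)
            sub[k]['clones'].add(foreign)
    return sub
```

Every edge now lives in exactly one subproblem, which is what lets each part price columns independently while the equivalence constraints on clones coordinate the pieces.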
9

Σχεδιασμός και ανάλυση αλγορίθμων σε Τυχαία Γραφήματα Τομής / Design and analysis of algorithms on Random Intersection Graphs

Ραπτόπουλος, Χριστόφορος 16 May 2007 (has links)
In this Master's thesis we define and analyse two new models of random intersection graphs. A random intersection graph is produced by assigning to each vertex a random subset of a (finite) universe M of m elements and drawing an edge between two vertices if and only if their corresponding subsets have some elements in common. By specifying the distribution of the subsets assigned to each vertex, we get various models of random intersection graphs. In the generalized random intersection graphs model, each element i of M is chosen independently with probability p_i. The uniform random intersection graphs model is the special case of the generalized model in which the probability of selecting an element of M is the same value p for every element. As we will see, for some range of values of the parameters m and p, the uniform model is equivalent in some sense to the model G_{n,p̂}, i.e. the random graphs model in which each edge appears independently with probability p̂. Finally, in the regular random intersection graphs model, the subset of M assigned to each vertex has a fixed number of elements. Because of the stochastic dependence implied in the appearance of edges, these models are considered more realistic than classical random graphs in many practical applications, particularly in algorithmic questions on networks. The thesis begins by presenting several important results from the literature concerning these models. We then study, for the first time, the existence of independent sets of size k in the generalized random intersection graphs model, giving exact formulae for the mean and variance of their number. 
Additionally, we propose three randomized algorithms that run in small polynomial time (with respect to the number of vertices and the number of elements of M) for finding large independent sets of vertices.
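A direct generator for the uniform model described above (our sketch; n vertices, a universe of m labels, each label kept independently with probability p, and adjacency exactly when label sets intersect):

```python
import random

def uniform_random_intersection_graph(n, m, p, seed=0):
    """Sample G(n, m, p): each vertex keeps each of m labels with
    probability p; vertices are adjacent iff their label sets meet.

    Returns (labels, edges): per-vertex label sets and the edge set.
    """
    rng = random.Random(seed)
    labels = [{j for j in range(m) if rng.random() < p} for _ in range(n)]
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if labels[u] & labels[v]}
    return labels, edges
```

The dependence the abstract mentions is visible here: edges (u, v) and (v, w) both depend on vertex v's label set, so they are not independent events, unlike in G_{n,p̂}.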
10

From Worst-Case to Average-Case Efficiency – Approximating Combinatorial Optimization Problems

Plociennik, Kai 18 February 2011 (has links) (PDF)
In theoretical computer science, various notions of efficiency are used for algorithms. The most commonly used notion is worst-case efficiency, which is defined by requiring polynomial worst-case running time. Another commonly used notion is average-case efficiency for random inputs, which is roughly defined as having polynomial expected running time with respect to the random inputs. Depending on the notion of efficiency one uses, the approximability of a combinatorial optimization problem can be very different. In this dissertation, the approximability of three classical combinatorial optimization problems, namely Independent Set, Coloring, and Shortest Common Superstring, is investigated for different notions of efficiency. For the three problems, approximation algorithms are given that guarantee approximation ratios unachievable by worst-case efficient algorithms under reasonable complexity-theoretic assumptions. The algorithms achieve polynomial expected running time for different models of random inputs. On the one hand, classical average-case analyses are performed, using totally random input models as the source of random inputs. On the other hand, probabilistic analyses are performed, using semi-random input models inspired by the so-called smoothed analysis of algorithms. Finally, the expected performance of well-known greedy algorithms for random inputs from the considered models is investigated, along with the expected behavior of some properties of the random inputs themselves.
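One of the well-known greedy algorithms whose expected performance is typically analysed in such average-case settings is the minimum-degree greedy for Independent Set; a sketch (ours; which greedy variants the dissertation actually covers is not reproduced here):

```python
def greedy_independent_set(adj, n):
    """Minimum-degree greedy: repeatedly pick a vertex of smallest
    remaining degree, add it to the set, and delete its closed
    neighborhood. Runs in polynomial time on every input.

    adj: dict vertex -> set of neighbors over vertices 0..n-1.
    """
    live = set(range(n))
    chosen = []
    while live:
        v = min(live, key=lambda u: len(adj[u] & live))
        chosen.append(v)
        live -= adj[v] | {v}     # remove v and all its neighbors
    return chosen
```

Its worst-case approximation ratio is poor, but on random inputs its expected output size can be analysed exactly, which is the kind of contrast between worst-case and average-case guarantees the dissertation studies.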
