91.
Postman Problems on Mixed Graphs. Zaragoza Martinez, Francisco Javier. January 2003.
The mixed postman problem consists of finding a minimum cost tour of a mixed graph M = (V, E, A) traversing all its edges and arcs at least once. We prove that two well-known linear programming relaxations of this problem are equivalent. The extra cost of a mixed postman tour T is the cost of T minus the cost of the edges and arcs of M. We prove that it is NP-hard to approximate the minimum extra cost of a mixed postman tour.
A related problem, known as the windy postman problem, consists of finding a minimum cost tour of an undirected graph G = (V, E) traversing all its edges at least once, where the cost of an edge depends on the direction of traversal. We say that G is windy postman perfect if a certain windy postman polyhedron O(G) is integral. We prove that series-parallel undirected graphs are windy postman perfect, thereby settling a conjecture of Win.
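For context, a standard linear programming relaxation of the windy postman problem (a sketch under the usual definitions; the polyhedron O(G) studied in the thesis may be described by further inequalities) uses a variable x_{uv} for each direction in which an edge {u,v} can be traversed, with c_{uv} the cost of traversing {u,v} from u to v:

\[
\begin{aligned}
\min \; & \sum_{\{u,v\} \in E} \bigl( c_{uv}\, x_{uv} + c_{vu}\, x_{vu} \bigr) \\
\text{s.t.} \; & x_{uv} + x_{vu} \ge 1 && \text{for every edge } \{u,v\} \in E, \\
& \sum_{u:\,\{u,v\} \in E} \bigl( x_{uv} - x_{vu} \bigr) = 0 && \text{for every vertex } v \in V, \\
& x_{uv},\, x_{vu} \ge 0 .
\end{aligned}
\]

Integrality of such a polyhedron means every vertex is integral, so the linear relaxation already yields postman tours for every choice of costs.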
Given a mixed graph M = (V, E, A) and a subset R ⊆ E ∪ A, we say that a mixed postman tour of M is restricted if it traverses the elements of R exactly once. The restricted mixed postman problem consists of finding a minimum cost restricted tour. We prove that this problem is NP-hard even if R = A and we restrict M to be planar, hence settling a conjecture of Veerasamy. We also prove that it is NP-complete to decide whether there exists a restricted tour even if R = E and we restrict M to be planar.
The edges postman problem is the special case of the restricted mixed postman problem when R = A. We give a new class of valid inequalities for this problem. We introduce a relaxation of this problem, called the b-join problem, that can be solved in polynomial time. We give an algorithm that is simultaneously a 4/3-approximation algorithm for the edges postman problem and a 2-approximation algorithm for the extra cost of a tour.
The arcs postman problem is the special case of the restricted mixed postman problem when R = E. We introduce a class of necessary conditions for M to have an arcs postman tour, and we give a polynomial-time algorithm to decide whether one of these conditions holds. We give linear programming formulations of this problem for mixed graphs arising from windy postman perfect graphs, and for mixed graphs whose arcs form a forest.
92.
Solving Traveling Salesman Problem With a non-complete Graph. Emami Taba, Mahsa Sadat. January 2009.
One of the simplest, yet still NP-hard, routing problems is the Traveling Salesman Problem (TSP). In the TSP, one is given a set of cities and a way of measuring the distance between cities, and one has to find the shortest tour that visits every city exactly once and returns to the starting city. State-of-the-art algorithms all assume that a complete graph is given as input. However, for very large graphs, generating all edges of the complete graph, which corresponds to finding shortest paths between all pairs of cities, can be time-consuming. This is a major obstacle for some real-life applications, especially when the tour needs to be generated in real time. The objective of this thesis is to find a near-optimal TSP tour using a reduced set of edges of the complete graph. In particular, the following questions are investigated: which subsets of edges can be produced in less time than generating the complete graph? Is there a subset of edges of the complete graph that results in a better near-optimal tour than other subsets? With a non-complete graph, which improvement algorithms work better? We study six algorithms for generating subsets of edges of a complete graph. To evaluate the proposed algorithms, extensive experiments are conducted on well-known instances from a TSP library, and the algorithms are compared in terms of tour quality, running time and scalability.
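To make the idea concrete, the following sketch (hypothetical, not one of the six edge-subset generators studied in the thesis) builds a k-nearest-neighbour edge subset instead of the full complete graph and constructs a tour from it, falling back to exact distance computations only when the sparse graph offers no unvisited neighbour:

    # Illustrative sketch only: sparse k-NN edge set plus a nearest-neighbour tour.
    import math

    def knn_edges(coords, k=10):
        """Return, for every city, its k nearest neighbours (a sparse edge set)."""
        n = len(coords)
        nbrs = {}
        for i in range(n):
            dists = sorted(
                (math.dist(coords[i], coords[j]), j) for j in range(n) if j != i
            )
            nbrs[i] = [j for _, j in dists[:k]]
        return nbrs

    def sparse_nn_tour(coords, k=10):
        nbrs = knn_edges(coords, k)
        n = len(coords)
        tour, visited = [0], {0}
        while len(tour) < n:
            cur = tour[-1]
            # Prefer an unvisited neighbour from the sparse edge set.
            cand = [j for j in nbrs[cur] if j not in visited]
            if not cand:
                # Fallback: scan all unvisited cities (rarely needed for a sane k).
                cand = [j for j in range(n) if j not in visited]
            nxt = min(cand, key=lambda j: math.dist(coords[cur], coords[j]))
            tour.append(nxt)
            visited.add(nxt)
        return tour

    # Example usage on a tiny random instance:
    # import random; pts = [(random.random(), random.random()) for _ in range(100)]
    # print(sparse_nn_tour(pts, k=8))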
93.
A Multi-Parent Crossover for Combinatorial Optimization Problems. Su, Chien-hao. 31 August 2006.
Optimization problems are divided into numerical optimization problems and combinatorial optimization problems. Genetic algorithms (GAs) are widely applied to optimization problems. GAs with multi-parent crossover are often used to solve numerical optimization problems; however, no effective multi-parent crossover has been available for combinatorial optimization problems. Partially mapped crossover (PMX) is the most popular crossover for combinatorial optimization problems. In this thesis, we propose multi-parent partially mapped crossover (MPPMX). Extensive experimental results show that the improvement ratio of MPPMX over PMX reaches 38.63%. The p-values of t-tests on the difference between MPPMX and PMX range from 10^-6 to 10^-14, indicating the significant improvement of MPPMX over PMX.
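For reference, a minimal sketch of the classic two-parent PMX that MPPMX generalizes (the multi-parent extension itself is the thesis's contribution and is not reproduced here):

    # Classic partially mapped crossover on permutations (two parents).
    import random

    def pmx(p1, p2, rng=random):
        """Return one PMX child of two permutations of 0..n-1 (as lists)."""
        n = len(p1)
        a, b = sorted(rng.sample(range(n), 2))
        child = [None] * n
        # 1. Copy the mapping section from parent 1.
        child[a:b+1] = p1[a:b+1]
        # 2. Fill the remaining positions from parent 2, resolving conflicts
        #    through the mapping induced by the copied section.
        for i in list(range(a)) + list(range(b+1, n)):
            gene = p2[i]
            while gene in child[a:b+1]:
                gene = p2[p1.index(gene)]
            child[i] = gene
        return child

    # Example: pmx(list(range(9)), [8, 2, 6, 7, 1, 5, 4, 0, 3]) yields a valid permutation.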
94.
Performance Measurement In Multi Objective Combinatorial Optimization. Bozkurt, Bilge. 01 September 2007.
M.Sc. thesis, Department of Industrial Engineering. Supervisor: Prof. Dr. Murat Köksalan. September 2007, 96 pages.
In this study we address the problem of measuring the quality of different sets of nondominated solutions obtained by different approaches in multi objective combinatorial optimization (MOCO). We propose a new measure that quantitatively compares sets of nondominated solutions without needing the efficient frontier. We develop the measure separately for the bi-criteria case and for the case of more than two criteria. Rather than considering only the supported solutions in the evaluation, the measure captures both supported and unsupported solutions by utilizing weighted Tchebycheff function characteristics. We also adapt this method to determine neighborhood relations on the weight space, again for both the bi-criteria case and the case of more than two criteria. Using this measure, we check the consistency of the neighborhood assumption on the objective space with the neighborhood relations on the weight space and obtain very good results.
Keywords: Multi objective combinatorial optimization, performance measurement
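For reference, the standard weighted Tchebycheff (scalarizing) function referred to above, with p objectives, weight vector w, and a reference point z* (typically the ideal point), is

\[
s^{w}_{\infty}(z) \;=\; \max_{i=1,\dots,p} \; w_i \,\lvert z_i - z^{*}_i \rvert,
\qquad w_i \ge 0, \quad \sum_{i=1}^{p} w_i = 1 .
\]

Unlike a weighted sum, every nondominated point, supported or unsupported, minimizes this function for some weight vector (possibly among alternative optima), which is what allows such a measure to capture unsupported solutions as well.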
95.
Approaches For Special Multiobjective Combinatorial Optimization Problems With Side Constraints. Akin, Banu. 01 September 2012.
We propose a generic algorithm based on branch-and-bound to generate all efficient solutions of multiobjective combinatorial optimization (MOCO) problems. Based on the generic algorithm, we present an algorithm specific to the multiobjective 0-1 knapsack problem. We test the performance of our algorithm against IBM ILOG CPLEX on randomly generated sample problems, and the problem-specific algorithm obtains better performance. To overcome the memory issues of the exact algorithm, we develop a heuristic algorithm that incorporates memory limitations at the expense of solution quality.
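To illustrate the kind of enumeration involved (a toy sketch, not the thesis's branch-and-bound, and with feasibility-only pruning), the following generates all nondominated objective vectors of a bi-objective 0-1 knapsack by branching on items and filtering dominated solutions at the end:

    # Illustrative sketch only: exhaustive branching with dominance filtering.
    def dominates(a, b):
        """True if objective vector a dominates b (maximization)."""
        return all(x >= y for x, y in zip(a, b)) and a != b

    def biobjective_knapsack(profits1, profits2, weights, capacity):
        n = len(weights)
        solutions = []  # (value1, value2) of every feasible complete solution

        def branch(i, w, v1, v2):
            if w > capacity:          # infeasible branch: prune
                return
            if i == n:                # complete solution
                solutions.append((v1, v2))
                return
            branch(i + 1, w + weights[i], v1 + profits1[i], v2 + profits2[i])  # take item i
            branch(i + 1, w, v1, v2)                                           # skip item i

        branch(0, 0, 0, 0)
        # Keep only nondominated objective vectors.
        front = {s for s in solutions if not any(dominates(t, s) for t in solutions)}
        return sorted(front)

    # Example: biobjective_knapsack([3, 5, 2], [4, 1, 6], [2, 3, 1], 4) -> [(5, 10), (7, 7)]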
96.
Performance understanding and tuning of iterative computation using profiling techniques. Ozarde, Sarang Anil. 18 May 2010.
Most applications spend a significant amount of time in the iterative parts of a computation. They typically iterate over the same set of operations with different values, which either depend on the inputs or on values calculated in previous iterations. While loops capture some iterative behavior, in many cases such behavior is spread over the whole program, sometimes through recursion. Understanding the iterative behavior of a computation can be very useful for fine-tuning it. In this thesis, we present a profiling-based framework to understand and improve the performance of iterative computation. We capture the state of iterations in two aspects: 1) algorithmic state and 2) program state. We demonstrate the applicability of our framework for capturing algorithmic state by applying it to SAT solvers, and for capturing program state by applying it to a variety of benchmarks exhibiting completely parallelizable loops. Further, we show that such a performance characterization can be successfully used to improve the performance of the underlying application.
Many high-performance combinatorial optimization applications involve SAT solving. A variety of SAT solvers have been developed that employ different data structures and different propagation methods for converging on a fixed point that yields a satisfying solution. The performance debugging and tuning of SAT solvers for a given domain is an important problem encountered in practice. Unfortunately, not much work has been done to quantify the iterative efficiency of SAT solvers. In this work, we develop quantifiable measures of the convergence efficiency of SAT solvers. Here, we capture the algorithmic state of the application by tracking the assignment of variables at each iteration. A compact representation of the profile data is developed to track the rate of progress and convergence. The novelty of this approach is that it is independent of the specific strategies used in individual solvers, yet it gives key insights into the "progress" and "convergence behavior" of the solver in terms of the specific implementation at hand. An analysis tool interprets the profile data and extracts the following metrics: average convergence rate, efficiency of iteration, and variable stabilization. Finally, using this system we compare the iterative efficiency of four well-known SAT solvers on random as well as industrial benchmarks. Using the framework, iterative inefficiencies that lead to slow convergence are identified. We also show how to fine-tune the solvers by adapting key steps.
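A hypothetical sketch of the kind of profile described above (not the thesis's tool): snapshot the variable assignment after every solver iteration and compute a simple convergence metric, the fraction of variables whose value did not change between consecutive snapshots, together with a crude per-variable stabilization point:

    # Toy convergence metrics over assignment snapshots.
    def convergence_profile(snapshots):
        """snapshots: list of dicts mapping variable -> assigned value (0/1/None)."""
        rates = []
        for prev, cur in zip(snapshots, snapshots[1:]):
            unchanged = sum(1 for v in cur if cur[v] == prev.get(v))
            rates.append(unchanged / len(cur))
        return rates

    def stabilization_point(snapshots, var):
        """Last iteration at which `var` changed value."""
        last_change = 0
        for i, (prev, cur) in enumerate(zip(snapshots, snapshots[1:]), start=1):
            if cur.get(var) != prev.get(var):
                last_change = i
        return last_change

    # Example usage with three snapshots over variables x1..x3:
    # snaps = [{'x1': None, 'x2': 0, 'x3': None},
    #          {'x1': 1,    'x2': 0, 'x3': None},
    #          {'x1': 1,    'x2': 0, 'x3': 1}]
    # convergence_profile(snaps) -> [0.666..., 0.666...]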
We also show that a similar profile data representation can easily be applied to loops in general to capture their program state. One of the key attributes of the program state inside loops is their branch behavior. We demonstrate the applicability of the framework by profiling completely parallelizable loops (no cross-iteration dependence) and storing the branching behavior of each iteration. The branch behavior across a group of iterations is important in devising the thread warps from parallel loops for efficient execution on GPUs. We show how some loops can be effectively parallelized on GPUs using this information.
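A hypothetical illustration of that grouping idea (again not the thesis's tool): record the branch outcomes of each loop iteration as a signature, then group iterations with identical signatures so that a warp executes divergence-free work.

    # Group loop iterations by identical branch signatures.
    from collections import defaultdict

    def group_by_branch_signature(branch_traces, warp_size=32):
        """branch_traces: {iteration_index: tuple of taken/not-taken outcomes}."""
        buckets = defaultdict(list)
        for it, signature in branch_traces.items():
            buckets[signature].append(it)
        warps = []
        for signature, iters in buckets.items():
            for k in range(0, len(iters), warp_size):
                warps.append(iters[k:k + warp_size])
        return warps

    # Example: iterations 0 and 2 take the branch, iteration 1 does not.
    # group_by_branch_signature({0: (True,), 1: (False,), 2: (True,)}, warp_size=2)
    # -> [[0, 2], [1]]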
97.
Some combinatorial optimization problems related to network encoding complexity. Xu, Li (徐力). January 2014.
Network coding is a novel technique for improving the throughput of networks that transfer messages from sources to sinks. Before network coding, intermediate nodes could only forward the messages they received. In a network coding scheme, the intermediate nodes are allowed to mix the messages received from different incoming links. Network coding has found a wide range of applications, such as peer-to-peer networks, distributed storage and content distribution. The theory of network encoding complexity addresses the question of the minimum number of encoding nodes needed in a network. To tackle this question, we convert it into a combinatorial optimization problem.
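The classic textbook illustration of this mixing (the "butterfly network", a standard example rather than material from this thesis) has a single encoding node that XORs its two incoming messages, letting each sink recover both source bits from the message it hears directly plus the coded message:

    # Butterfly-network coding in one picture: the encoding node sends a XOR b.
    def butterfly_demo(a: int, b: int):
        coded = a ^ b            # the only encoding node mixes its two inputs
        sink1 = (a, coded ^ a)   # sink 1 hears a directly and decodes b
        sink2 = (coded ^ b, b)   # sink 2 hears b directly and decodes a
        assert sink1 == (a, b) and sink2 == (a, b)
        return sink1, sink2

    # butterfly_demo(1, 0) -> ((1, 0), (1, 0))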
For directed networks, I examine the number of "mergings", a special type of graph structure. Consider an acyclic directed network G with l source-sink pairs. Let ci denote the minimum size of an edge-cut between the i-th source-sink pair, for 1 ≤ i ≤ l. Then, by Menger's theorem, there exists a group of ci edge-disjoint paths (Menger's paths) between the i-th source-sink pair. Although the paths within the same group are edge-disjoint, paths from different groups may have to merge with each other. It is known that by choosing Menger's paths appropriately, the number of mergings among the different groups of Menger's paths is always bounded by a constant that is independent of the size of G. The tightest such constant for all the above-mentioned networks is denoted by M(c1, c2, . . . , cl) when all sources are distinct, and by M∗(c1, c2, . . . , cl) when all sources are identical. It turns out that M and M∗ are closely related to the network encoding complexity of a variety of networks, such as multicast networks, two-way networks and networks with multiple sessions of unicast. Computation of these two important functions, however, appears to be rather difficult; so far there are no explicit formulas for M and M∗ for generic parameters c1, c2, . . . , cl. In this thesis, I derive exact values of and tighter bounds on M and M∗ for some parameters, and establish inequality relationships between M and M∗.
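As background, the quantity ci itself is easy to compute: by Menger's theorem the minimum edge-cut between a source-sink pair equals the maximum number of edge-disjoint paths, which is a unit-capacity maximum flow. The following is a standard textbook sketch (Edmonds-Karp, not code from the thesis):

    # Maximum number of edge-disjoint s-t paths via unit-capacity max flow.
    from collections import deque

    def edge_disjoint_paths(n, arcs, s, t):
        """n vertices 0..n-1, arcs = list of directed (u, v) pairs."""
        cap, adj = {}, [[] for _ in range(n)]
        for u, v in arcs:
            if (u, v) not in cap:
                adj[u].append(v)
                adj[v].append(u)              # residual arc
                cap[(u, v)] = 0
                cap[(v, u)] = cap.get((v, u), 0)
            cap[(u, v)] += 1                  # unit capacity per parallel arc
        flow = 0
        while True:
            parent = {s: None}
            q = deque([s])
            while q and t not in parent:      # BFS for a shortest augmenting path
                u = q.popleft()
                for v in adj[u]:
                    if v not in parent and cap[(u, v)] > 0:
                        parent[v] = u
                        q.append(v)
            if t not in parent:
                return flow
            v = t                             # augment by one unit along the path
            while parent[v] is not None:
                u = parent[v]
                cap[(u, v)] -= 1
                cap[(v, u)] += 1
                v = u
            flow += 1

    # Example: edge_disjoint_paths(4, [(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3) -> 2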
For undirected networks, I examine the number of "hubs", the vertices of degree at least three. Compared to directed networks, the study of network encoding complexity in undirected networks has seen little progress. Consider an undirected network G with l source-sink pairs. For i = 1, 2, . . . , l, let ci denote the minimum size of a vertex-cut between the i-th source-sink pair. I study H(c1, c2, . . . , cl), the minimum number of hubs needed in an undirected network with these min-cut constraints. The function H is closely related to network encoding complexity for undirected networks. I prove that under some constraints, regardless of the size of the undirected network, this minimum number is always bounded above, and I derive tight upper bounds for some special parameters. In particular, for two pairs of sources and sinks, I present a novel path-searching algorithm, the analysis of which is instrumental to the derivation of the tight upper bounds.
98.
On the shortest path and minimum spanning tree problems. Pettie, Seth. 28 August 2008.
Abstract not available.
99.
Optimality and approximability of the rectangle covering problem. Chung, Yau-lin (鍾有蓮). January 2004.
M.Phil. thesis, Mathematics; abstract not available.
100.
Inverse Parametric Alignment for Accurate Biological Sequence Comparison. Kim, Eagu. January 2008.
For as long as biologists have been computing alignments of sequences, the question of what values to use for scoring substitutions and gaps has persisted. In practice, substitution scores are usually chosen by convention, and gap penalties are often found by trial and error. In contrast, a rigorous way to determine parameter values that are appropriate for aligning biological sequences is to solve the problem of Inverse Parametric Sequence Alignment: given examples of biologically correct reference alignments, find parameter values that make the examples score as close as possible to optimal alignments of their sequences. The reference alignments that are currently available contain regions where the alignment is not specified, which leads to a version of the problem with partial examples.
In this dissertation, we develop a new polynomial-time algorithm for Inverse Parametric Sequence Alignment that is simple to implement, fast in practice, and can learn hundreds of parameters simultaneously from hundreds of examples. Computational results with partial examples show that best possible values for all 212 parameters of the standard alignment scoring model for protein sequences can be computed from 200 examples in 4 hours of computation on a standard desktop machine. We also consider a new scoring model with a small number of additional parameters that incorporates predicted secondary structure for the protein sequences. By learning parameter values for this new secondary-structure-based model, we can improve on the alignment accuracy of the standard model by as much as 15% for sequences with less than 25% identity.
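One common way to formalize the underlying optimization (a sketch of the general idea, not necessarily the exact formulation used in the dissertation): with a scoring function that is linear in the parameter vector w, reference alignments A_1, ..., A_k of sequence pairs S_1, ..., S_k, and \mathcal{A}(S_i) the set of all alignments of pair S_i, one seeks

\[
\min_{w \in W} \; \sum_{i=1}^{k} \Bigl( \max_{A \in \mathcal{A}(S_i)} \operatorname{score}_w(A) \;-\; \operatorname{score}_w(A_i) \Bigr),
\]

where W is a normalizing set of admissible parameter values. For a fixed w the inner maximum is an ordinary optimal-alignment computation, so the gap measures how far each example is from scoring optimally under w.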