441 |
Oblivious and Non-oblivious Local Search for Combinatorial Optimization. Ward, Justin. 07 January 2013 (has links)
Standard local search algorithms for combinatorial optimization problems repeatedly apply small changes to a current solution to improve the problem's given objective function. In contrast, non-oblivious local search algorithms are guided by an auxiliary potential function, which is distinct from the problem's objective. In this thesis, we compare the standard and non-oblivious approaches for a variety of problems, and derive new, improved non-oblivious local search algorithms for several problems in the area of constrained linear and monotone submodular maximization.
First, we give a new, randomized approximation algorithm for maximizing a monotone submodular function subject to a matroid constraint. Our algorithm's approximation ratio matches both the known hardness of approximation bounds for the problem and the performance of the recent "continuous greedy" algorithm. Unlike the continuous greedy algorithm, our algorithm is straightforward and combinatorial. In the case that the monotone submodular function is a coverage function, we can obtain a further simplified, deterministic algorithm with improved running time.
Moving beyond the case of single matroid constraints, we then consider general classes of set systems that capture problems that can be approximated well. While previous such classes have focused primarily on greedy algorithms, we give a new class that captures problems amenable to optimization by local search algorithms. We show that several combinatorial optimization problems can be placed in this class, and give a non-oblivious local search algorithm that delivers improved approximations for a variety of specific problems.
In contrast, we show that standard local search algorithms give no improvement over known approximation results for these problems, even when allowed to search larger neighborhoods than their non-oblivious counterparts.
Finally, we expand on these results by considering standard local search algorithms for constraint satisfaction problems. We develop conditions under which the approximation ratio of standard local search remains limited even for super-polynomial or exponential local neighborhoods. In the special case of MaxCut, we further show that a variety of techniques including random or greedy initialization, large neighborhoods, and best-improvement pivot rules cannot improve the approximation performance of standard local search.
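To make the distinction concrete, here is a minimal sketch (illustrative only, not code from the thesis) of the standard, oblivious local search for MaxCut mentioned above: a vertex is flipped whenever doing so increases the cut weight, which is exactly the objective-guided search whose limitations the thesis analyzes. The function name and instance format are assumptions of the sketch.
# Oblivious local search for MaxCut: moves are guided directly by the objective
# (the cut weight). This simple rule already guarantees a cut of weight at
# least half the total edge weight, and hence a 1/2-approximation.
def local_search_maxcut(n, edges):
    """n vertices 0..n-1; edges is a list of (u, v, w) with nonnegative weight w."""
    side = [0] * n                          # current bipartition: side[v] in {0, 1}

    def gain(v):
        # change in cut weight if vertex v is moved to the other side
        g = 0
        for a, b, w in edges:
            if v in (a, b):
                other = b if a == v else a
                g += w if side[other] == side[v] else -w
        return g

    improved = True
    while improved:                         # repeat until no single flip helps
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] ^= 1
                improved = True

    cut = sum(w for a, b, w in edges if side[a] != side[b])
    return side, cut

# toy usage: a 4-cycle with unit weights reaches the optimal cut of weight 4
print(local_search_maxcut(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]))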
|
442 |
On Covering Points with Conics and Strips in the Plane. Tiwari, Praveen, 1985-. 14 March 2013 (has links)
Geometric covering problems have long been a focus of computer science research. The generic geometric covering problem asks to cover a set S of n objects with a minimum-cardinality set of covering objects in a geometric setting. Many versions of geometric cover have been studied in detail, one of which is line cover: given a set of points in the plane, find the minimum number of lines needed to cover them. In d-dimensional Euclidean space this problem is known as Hyperplane Cover, where lines are replaced by affine hyperplanes. Line cover is NP-hard, and so is its hyperplane analogue. This thesis focuses on a few extensions of hyperplane cover and line cover.
One of the techniques used to study NP-hard problems is fixed-parameter tractability (FPT), where, in addition to the input of size n, a parameter k is supplied with each instance. The goal is to solve the problem in time f(k)·poly(n): the running time is a function of both n and k, strictly polynomial in n, with any super-polynomial growth confined to the parameter k. In this thesis, we study FPT and parameterized complexity theory, the theory of classifying hard problems that involve a parameter k.
We focus on two new geometric covering problems: covering a set of points in the plane with conics (conic cover) and covering a set of points with strips, or fat lines, of given width in the plane (fat line cover). A conic is a non-degenerate curve of degree two in the plane. A fat line is a strip of finite width w. In this dissertation, we focus on the parameterized versions of these two problems, where we are asked to cover the set of points with k conics or k fat lines. We use existing techniques from FPT algorithms, kernelization and approximation algorithms to study these problems. We give a comprehensive study of both problems, starting with NP-hardness results and proceeding to their parameterized hardness in terms of the parameter k.
We show that conic cover is fixed-parameter tractable, and give an algorithm with running time O*((k/1.38)^(4k)), where the O* notation suppresses factors polynomial in the input size. Utilizing special properties of a parabola, we obtain a faster algorithm with running time O*((k/1.15)^(3k)).
For fat line cover, we first establish NP-hardness and then explore algorithmic possibilities with respect to parameterized complexity theory. We show W[1]-hardness of fat line cover with respect to the number of fat lines by giving a parameterized reduction from the problem of stabbing axis-parallel squares in the plane. A parameterized reduction is an FPT algorithm that transforms an instance of one parameterized problem into an instance of another while keeping the new parameter bounded by a function of the original one. In addition, we show that some restricted versions of fat line cover are also W[1]-hard. Further, we study a restricted version of fat line cover in which the points have integer coordinates and only axis-parallel fat lines may be used to cover them. We show that this version is still NP-hard, that it is fixed-parameter tractable with a kernel of size O(k^2), and give an FPT algorithm with running time O*(3^k). Finally, we conclude our study of this problem by giving an approximation algorithm with constant approximation ratio 2.
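For context, the following is a small sketch (an assumption-laden illustration, not the conic-cover or fat-line-cover algorithms of the thesis) of the classic FPT branching routine for plain line cover, the thin-line special case that both problems generalize. After the high-coverage reduction and the O(k^2) kernel check, branching on the line through a fixed uncovered point gives a running time of roughly O*(k^(2k)).
# Classic FPT routine for LINE COVER (illustrative; points are distinct integer-coordinate tuples).
def collinear(p, q, r):
    # True iff r lies on the line through the distinct points p and q
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def line_cover(points, k):
    """True iff the given points can be covered by at most k lines."""
    points = list(points)
    if not points:
        return True
    if k == 0:
        return False
    # Reduction: a line covering more than k points must belong to every solution,
    # since its points would otherwise require more than k distinct lines.
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            on_line = [r for r in points if collinear(points[i], points[j], r)]
            if len(on_line) > k:
                rest = [r for r in points if r not in on_line]
                return line_cover(rest, k - 1)
    # Kernel bound: now every line covers at most k points, so k lines cover at most k^2 points.
    if len(points) > k * k:
        return False
    # Branch on the solution line covering points[0]: it passes through some other point...
    p = points[0]
    for q in points[1:]:
        if line_cover([r for r in points if not collinear(p, q, r)], k - 1):
            return True
    # ...or it covers only p among the remaining points.
    return line_cover(points[1:], k - 1)

# toy usage: five points coverable by two lines
print(line_cover([(0, 0), (1, 1), (2, 2), (0, 3), (1, 3)], 2))   # True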
|
443 |
A Novel Approach for the Rapid Estimation of Drainage Volume, Pressure and Well Rates. Gupta, Neha, 1986-. 14 March 2013 (has links)
For effective reservoir management and production optimization, it is important to understand drained volumes, pressure depletion and well rates at all flow times. For conventional reservoirs, this behavior is understood through the concepts of reservoir pressure, reservoir energy and convective flow. With the development of unconventional reservoirs, however, there is increased focus on unsteady-state, transient flow behavior. To analyze such flow behavior, well test analysis concepts are commonly applied, based on analytical solutions of the diffusivity equation. In this thesis, we propose a novel methodology for estimating the drainage volume and for using it to obtain the pressure and flux at any location in the reservoir.
The result is a semi-analytic calculation that retains much of the simplicity of an analytic approach but offers significantly more generality. The approach is significantly faster than a conventional finite difference solution, although it relies on some simplifying assumptions. The proposed solution is generalized to handle heterogeneous reservoirs, complex well geometries, and bounded and semi-bounded reservoirs. It is therefore particularly beneficial for unconventional reservoir development with multiple transverse fractured horizontal wells, where only limited analytical solutions are available.
To estimate the drainage volume, we apply an asymptotic solution of the diffusivity equation and determine the diffusive time of flight distribution. For the pressure solution, a geometric approximation is applied within the drainage volume to reduce the full solution of the diffusivity equation to a system of decoupled ordinary differential equations. In addition, the asymptotic expression can be extended to obtain the rates of wells producing under a constant bottomhole pressure constraint.
In this thesis, we have described the detailed methodology and its validation through various case studies. We have also studied the limits of validity of the approximation to better understand the general applicability. We expect that this approach will enable the inversion of field performance data for improved well and/or fracture characterization, and similarly, the optimization of well trajectories and fracture design, in an analogous manner to how rapid but approximate streamline techniques have been used for improved conventional reservoir management.
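As a toy illustration of the quantities involved (all property values, the 1-D geometry and the front criterion tau = sqrt(4t) below are assumptions, not the thesis workflow), the diffusive time of flight can be accumulated along a heterogeneous column and used to read off a drainage volume versus time:
import numpy as np

nx_cells, dx = 200, 5.0                      # number of grid cells and cell size [m]
phi = np.full(nx_cells, 0.1)                 # porosity [-] (assumed)
perm = np.where(np.arange(nx_cells) < 100, 5e-14, 5e-16)   # permeability [m^2], high/low contrast (assumed)
mu, ct = 1e-3, 1e-9                          # viscosity [Pa*s] and total compressibility [1/Pa] (assumed)
area = 100.0                                 # cross-sectional area [m^2] (assumed)

alpha = perm / (phi * mu * ct)               # hydraulic diffusivity [m^2/s]
tau = np.cumsum(dx / np.sqrt(alpha))         # diffusive time of flight from the well at x = 0 [sqrt(s)]

def drainage_volume(t_seconds):
    # pore volume behind the front tau <= sqrt(4 t), a common depth-of-investigation convention
    inside = tau <= np.sqrt(4.0 * t_seconds)
    return float(np.sum(phi[inside] * area * dx))

for t_days in (1, 10, 100):
    print(t_days, "days:", round(drainage_volume(t_days * 86400.0), 1), "m^3 of pore volume")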
|
444 |
Linear Programming Tools and Approximation Algorithms for Combinatorial Optimization. Pritchard, David. January 2009 (has links)
We study techniques, approximation algorithms, structural properties and lower bounds related to applications of linear programs in combinatorial optimization. The following "Steiner tree problem" is central: given a graph with a distinguished subset of required vertices, and costs for each edge, find a minimum-cost subgraph that connects the required vertices. We also investigate the areas of network design, multicommodity flows, and packing/covering integer programs. All of these problems are NP-complete so it is natural to seek approximation algorithms with the best provable approximation ratio.
Overall, we show some new techniques that enhance the already-substantial corpus of LP-based approximation methods, and we also look for limitations of these techniques.
The first half of the thesis deals with linear programming relaxations for the Steiner tree problem. The crux of our work deals with hypergraphic relaxations obtained via the well-known full component decomposition of Steiner trees; explicitly, in this view the fundamental building blocks are not edges, but hyperedges containing two or more required vertices. We introduce a new hypergraphic LP based on partitions. We show the new LP has the same value as several previously-studied hypergraphic ones; when no Steiner nodes are adjacent, we show that the value of the well-known bidirected cut relaxation is also the same. A new partition uncrossing technique is used to demonstrate these equivalences, and to show that extreme points of the new LP are well-structured. We improve the best known integrality gap on these LPs in some special cases. We show that several approximation algorithms from the literature on Steiner trees can be re-interpreted through linear programs, in particular our hypergraphic relaxation yields a new view of the Robins-Zelikovsky 1.55-approximation algorithm for the Steiner tree problem.
The second half of the thesis deals with a variety of fundamental problems in combinatorial optimization. We show how to apply the iterated LP relaxation framework to the problem of multicommodity integral flow in a tree, to get an approximation ratio that is asymptotically optimal in terms of the minimum capacity. Iterated relaxation gives an infeasible solution, so we need to finesse it back to feasibility without losing too much value. Iterated LP relaxation similarly gives an O(k^2)-approximation algorithm for packing integer programs with at most k occurrences of each variable; new LP rounding techniques give a k-approximation algorithm for covering integer programs with at most k variables per constraint. We study extreme points of the standard LP relaxation for the traveling salesperson problem and show that they can be much more complex than was previously known. The k-edge-connected spanning multi-subgraph problem has the same LP, and we prove a lower bound and conjecture an upper bound on the approximability of variants of this problem. Finally, we show that for packing/covering integer programs with a bounded number of constraints, for any epsilon > 0, there is an LP with integrality gap at most 1 + epsilon.
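As a point of comparison for the covering result mentioned above, here is the much older and simpler frequency-rounding argument for set cover, the special case with 0/1 data and at most k variables per constraint (a hedged sketch; the instance, names and tolerance are made up, and the thesis' techniques for general covering integer programs are stronger):
# Frequency rounding for set cover: solve the LP relaxation and keep every set
# whose LP value is at least 1/k, where k is the maximum element frequency.
# Every constraint has at most k variables, so some kept set covers each element,
# and the rounded cost is at most k times the LP optimum (assumes a feasible instance).
from scipy.optimize import linprog

def frequency_rounding(costs, sets, universe):
    """costs[j]: cost of set j; sets[j]: collection of elements; universe: all elements."""
    n = len(costs)
    k = max(sum(1 for s in sets if e in s) for e in universe)   # maximum frequency
    # LP: minimize c.x  s.t.  sum_{j: e in sets[j]} x_j >= 1 for all e,  0 <= x <= 1
    A_ub = [[-1.0 if e in sets[j] else 0.0 for j in range(n)] for e in universe]
    b_ub = [-1.0] * len(universe)
    res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
    chosen = [j for j in range(n) if res.x[j] >= 1.0 / k - 1e-9]
    return chosen, k

# toy instance (made up): elements 1..4 and three candidate sets of unit cost
sets = [{1, 2}, {2, 3, 4}, {1, 4}]
chosen, k = frequency_rounding([1.0, 1.0, 1.0], sets, universe=[1, 2, 3, 4])
print("frequency k =", k, "chosen sets:", chosen)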
|
445 |
On the approximation of the Dirichlet to Neumann map for high contrast two phase composites. Wang, Yingpei. 16 September 2013 (has links)
Many problems in the natural world involve high contrast properties, such as transport in composites and flow in porous media. These problems pose serious numerical difficulties because of the singularities of their solutions, and solving them directly with traditional numerical methods can be very expensive. It is therefore necessary and important to first understand these problems from a mathematical point of view, and then to use the mathematical results to simplify the original problems or to develop more efficient numerical methods.
In this thesis we approximate the Dirichlet to Neumann map for high contrast two phase composites. The mathematical formulation of our problem is to approximate the energy of an elliptic equation with arbitrary boundary conditions.
The boundary conditions may be highly oscillatory, which makes the problem both interesting and difficult.
We develop a method that divides the domain into two subdomains, one close to and one far from the boundary, and approximates the energy in each subdomain separately. In the subdomain far from the boundary, the energy is only weakly influenced by the boundary conditions, and methods for approximating the energy there have been studied previously. In the subdomain near the boundary, the energy depends strongly on the boundary conditions; here we use a new method that works for any kind of boundary conditions. In this way, we obtain an approximation of the total energy of high contrast problems with arbitrary boundary conditions.
In other words, we obtain a matrix of any prescribed dimension that approximates the continuous Dirichlet to Neumann map of the high contrast composite. We then use this matrix as a preconditioner in domain decomposition methods, so that the resulting numerical methods solve high contrast problems very efficiently.
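For reference, the objects being approximated can be written down in a standard way (the notation and the two-phase conductivity model below are assumed, not quoted from the thesis): the Dirichlet-to-Neumann map sends boundary data to the co-normal derivative of the corresponding solution, and its quadratic form is exactly the energy.
% standard formulation; sigma takes two phase values, e.g. sigma in {1, 1/epsilon}
% with a small contrast parameter epsilon (an assumed normalization)
\[
  \nabla\cdot\bigl(\sigma(x)\,\nabla u\bigr)=0 \ \text{in } \Omega, \qquad u=g \ \text{on } \partial\Omega,
\]
\[
  \Lambda_\sigma\colon g \longmapsto \sigma\,\partial_n u\big|_{\partial\Omega},
  \qquad
  \langle \Lambda_\sigma g,\,g\rangle \;=\; \int_\Omega \sigma(x)\,\lvert\nabla u\rvert^2\,dx .
\]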
|
446 |
The Asymmetric Traveling Salesman Problem. Mattsson, Per. January 2010 (has links)
This thesis is a survey of the approximability of the asymmetric traveling salesman problem with triangle inequality (ATSP). In the ATSP we are given a set of cities and a function that gives the cost of traveling between any pair of cities. The cost function must satisfy the triangle inequality, i.e. the cost of traveling from city A to city B cannot be larger than the cost of traveling from A to some other city C and then to B. However, we allow the cost function to be asymmetric, i.e. the cost of traveling from city A to city B need not equal the cost of traveling from B to A. The problem is then to find the cheapest tour that visits each city exactly once. This problem is NP-hard, and thus we are mainly interested in approximation algorithms. We study the repeated cycle cover heuristic by Frieze et al. We also study the Held-Karp heuristic, including the recent result by Asadpour et al. that gives a new upper bound on the integrality gap. Finally we present the result of Papadimitriou and Vempala which shows that it is NP-hard to approximate the ATSP with a ratio better than 117/116.
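To give a flavor of the repeated cycle cover heuristic surveyed here, the sketch below (illustrative; the splicing of cycles into a tour is a simplified stand-in for the original patching step, nonnegative costs are assumed, and the formal log2(n) analysis applies to the original algorithm) repeatedly solves an assignment problem with self-loops forbidden, keeps one representative city per cycle, and recurses on the representatives:
import numpy as np
from scipy.optimize import linear_sum_assignment

def cycle_cover(cost, nodes):
    # minimum cost cycle cover on the given nodes: an assignment with self-loops forbidden
    sub = cost[np.ix_(nodes, nodes)].astype(float).copy()
    big = sub.max() * len(nodes) + 1.0          # finite penalty that rules out self-loops
    np.fill_diagonal(sub, big)
    rows, cols = linear_sum_assignment(sub)
    succ = {nodes[i]: nodes[j] for i, j in zip(rows, cols)}
    cycles, seen = [], set()
    for v in nodes:                             # split the permutation into its cycles
        if v not in seen:
            cyc, u = [], v
            while u not in seen:
                seen.add(u)
                cyc.append(u)
                u = succ[u]
            cycles.append(cyc)
    return cycles

def repeated_cycle_cover_tour(cost):
    cost = np.asarray(cost, dtype=float)
    nodes = list(range(len(cost)))
    expansion = {v: [v] for v in nodes}         # cities represented by each surviving node
    while len(nodes) > 1:
        new_nodes = []
        for cyc in cycle_cover(cost, nodes):
            rep = cyc[0]                        # keep one representative city per cycle
            expansion[rep] = [c for v in cyc for c in expansion[v]]
            new_nodes.append(rep)
        nodes = new_nodes
    tour = expansion[nodes[0]]
    return tour, sum(cost[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# toy usage with a made-up asymmetric cost matrix satisfying the triangle inequality
C = [[0, 2, 3, 3], [3, 0, 2, 3], [3, 3, 0, 2], [2, 3, 3, 0]]
print(repeated_cycle_cover_tour(C))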
|
447 |
Implementation of a non-conforming rotated Q1 approximation on tetrahedron. Cenanovic, Mirza; Khanmohammadi, Mahdieh. January 2011 (has links)
Our project consists of two parts (A and B). In part A we solve a linear elasticity problem by implementing a rotated Q1 approximation method and by simulating the problem in commercial software (COMSOL and SolidWorks). To evaluate the results we implement an analytical eigenvalue solver. As a simple case, we use a cube with side length L = 1 m made of alloy steel with density 7850 kg/m^3. In part B we implement a time-dependent linear elasticity problem on a beam made of alloy steel with density 7850 kg/m^3 and dimensions 1 x 0.1 x 0.01 m, and we use an implicit method to solve it. The frequency results in part A show that the rotated Q1 approximation method is more accurate than the commercial software.
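For orientation only, a back-of-the-envelope check of the same kind as the analytical comparison described above can be made with Euler-Bernoulli beam theory for the part-B geometry; the cantilever support and the value E = 210 GPa below are assumptions, not data from the report:
# Euler-Bernoulli bending frequencies of a cantilevered steel strip (illustrative sketch)
import math

E = 210e9            # Young's modulus [Pa] (assumed)
rho = 7850.0         # density [kg/m^3] (from the abstract)
L, b, h = 1.0, 0.1, 0.01           # beam dimensions [m]
A = b * h                          # cross-sectional area [m^2]
I = b * h**3 / 12.0                # second moment of area [m^4]

# first three cantilever mode constants beta_n * L of Euler-Bernoulli theory
for n, betaL in enumerate((1.8751, 4.6941, 7.8548), start=1):
    f = (betaL**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
    print(f"mode {n}: {f:8.2f} Hz")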
|
448 |
On the Fisher Information of Discretized Data. Pötzelberger, Klaus; Felsenstein, Klaus. January 1991 (has links) (PDF)
In this paper we study the loss of Fisher information incurred by approximating a continuous distribution by a multinomial distribution arising from a partition of the sample space into a finite number of intervals. We describe and characterize the Fisher information as a function of the chosen partition, especially for location parameters. For a small number of intervals, the consequences of this choice are demonstrated by instructive examples. For an increasing number of intervals we give the asymptotically optimal partition. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
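As a numerical illustration of the quantity studied (an assumed normal location model with unit variance, not an example from the paper), the Fisher information retained after discretizing N(theta, 1) into the cells of a partition is the sum over cells of (dp_i/dtheta)^2 / p_i, and it approaches the full-data information 1 as the partition is refined:
import math

def Phi(x):   # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):   # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def discretized_fisher_info(breakpoints, theta=0.0):
    """I(theta) = sum_i (dp_i/dtheta)^2 / p_i over the cells defined by the breakpoints."""
    cells = list(zip([-math.inf] + breakpoints, breakpoints + [math.inf]))
    info = 0.0
    for a, b in cells:
        p = Phi(b - theta) - Phi(a - theta)
        dp = phi(a - theta) - phi(b - theta)     # d/dtheta of P(a < X <= b)
        if p > 0:
            info += dp * dp / p
    return info

for m in (2, 4, 8, 16):
    bps = [-3.0 + 6.0 * i / m for i in range(m + 1)]   # equally spaced breakpoints on [-3, 3]
    print(f"{m + 2:3d} cells: I = {discretized_fisher_info(bps):.4f}  (full data: 1)")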
|
449 |
Postman Problems on Mixed Graphs. Zaragoza Martinez, Francisco Javier. January 2003 (has links)
The mixed postman problem consists of finding a minimum cost tour of a mixed graph M = (V,E,A) traversing all its edges and arcs at least once. We prove that two well-known linear programming relaxations of this problem are equivalent. The extra cost of a mixed postman tour T is the cost of T minus the cost of the edges and arcs of M. We prove that it is NP-hard to approximate the minimum extra cost of a mixed postman tour.
A related problem, known as the windy postman problem, consists of finding a minimum cost tour of an undirected graph G = (V,E) traversing all its edges at least once, where the cost of an edge depends on the direction of traversal. We say that G is windy postman perfect if a certain windy postman polyhedron O(G) is integral. We prove that series-parallel undirected graphs are windy postman perfect, therefore solving a conjecture of Win.
Given a mixed graph M = (V,E,A) and a subset R ⊆ E ∪ A, we say that a mixed postman tour of M is restricted if it traverses the elements of R exactly once. The restricted mixed postman problem consists of finding a minimum cost restricted tour. We prove that this problem is NP-hard even if R = A and we restrict M to be planar, hence solving a conjecture of Veerasamy. We also prove that it is NP-complete to decide whether there exists a restricted tour even if R = E and we restrict M to be planar.
The edges postman problem is the special case of the restricted mixed postman problem when R = A. We give a new class of valid inequalities for this problem. We introduce a relaxation of this problem, called the b-join problem, that can be solved in polynomial time. We give an algorithm which is simultaneously a 4/3-approximation algorithm for the edges postman problem, and a 2-approximation algorithm for the extra cost of a tour.
The arcs postman problem is the special case of the restricted mixed postman problem when R = E. We introduce a class of necessary conditions for M to have an arcs postman tour, and we give a polynomial-time algorithm to decide whether one of these conditions holds. We give linear programming formulations of this problem for mixed graphs arising from windy postman perfect graphs, and mixed graphs whose arcs form a forest.
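For background, the purely undirected special case (the classical Chinese postman problem, with no arcs and no restricted elements) is solvable exactly by a T-join/matching argument; the sketch below (illustrative, with a brute-force matching that is only suitable for a handful of odd-degree vertices) computes the optimal tour cost:
# Odd-degree vertices are paired by a minimum-weight matching on shortest-path
# distances; duplicating the matched paths yields an Eulerian multigraph whose
# closed walk traverses every edge at least once.
import networkx as nx

def chinese_postman_cost(G):
    """Optimal postman tour cost for a connected, weighted undirected graph."""
    base = G.size(weight="weight")                        # every edge must be traversed once
    odd = [v for v, d in G.degree() if d % 2 == 1]
    if not odd:
        return base                                       # already Eulerian
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))

    def best_pairing(vertices):
        # brute-force minimum-weight perfect matching on the odd-degree vertices
        if not vertices:
            return 0.0
        v, rest = vertices[0], vertices[1:]
        return min(dist[v][u] + best_pairing([w for w in rest if w != u]) for u in rest)

    return base + best_pairing(odd)

# toy example with made-up weights
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1), (1, 2, 2), (2, 3, 1), (3, 0, 2), (0, 2, 3)])
print(chinese_postman_cost(G))   # 9 (edge total) + 3 (cheapest 0-2 path duplicated) = 12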
|
450 |
Minimum Crossing Problems on Graphs. Roh, Patrick. January 2007 (has links)
This thesis will address several problems in discrete optimization. These problems are considered hard to solve. However, good approximation algorithms for these problems may be helpful in approximating problems in computational biology and computer science.
Given an undirected graph G=(V,E) and a family of subsets of vertices S, the minimum crossing spanning tree is a spanning tree where the maximum number of edges crossing any single set in S is minimized, where an edge crosses a set if it has exactly one endpoint in the set. This thesis will present two algorithms for special cases of minimum crossing spanning trees.
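To make the objective concrete, the short helper below (illustrative only; the example instance is made up) counts, for a given spanning tree and family S, how often each set is crossed and reports the maximum:
def max_crossing(tree_edges, family):
    """tree_edges: iterable of (u, v); family: iterable of sets of vertices."""
    crossings = []
    for S in family:
        # an edge crosses S if exactly one endpoint lies in S
        crossings.append(sum(1 for (u, v) in tree_edges if (u in S) != (v in S)))
    return max(crossings), crossings

# toy example: the path 0-1-2-3 against two disjoint sets
tree = [(0, 1), (1, 2), (2, 3)]
print(max_crossing(tree, [{0, 1}, {2}]))   # -> (2, [1, 2])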
The first algorithm is for the case where the sets of S are pairwise disjoint. It gives a spanning tree in which every set is crossed at most 2·OPT + 2 times, where OPT is the maximum crossing of a minimum crossing spanning tree.
The second algorithm is for the case where the sets of S form a laminar family. Let b_i be a bound for each S_i in S. If there exists a spanning tree where each set S_i is crossed at most b_i times, the algorithm finds a spanning tree where each set S_i is crossed O(b_i log n) times. From this algorithm, one can get a spanning tree with maximum crossing O(OPT log n).
Given an undirected graph G=(V,E), and a family of subsets of vertices S, the minimum crossing perfect matching is a perfect matching where the maximum number of edges crossing any set in S is minimized. A proof will be presented showing that finding a minimum crossing perfect matching is NP-hard, even when the graph is bipartite and the sets of S are pairwise disjoint.
|