451
Linear Programming Tools and Approximation Algorithms for Combinatorial Optimization. Pritchard, David (January 2009)
We study techniques, approximation algorithms, structural properties, and lower bounds related to applications of linear programs in combinatorial optimization. The following "Steiner tree problem" is central: given a graph with a distinguished subset of required vertices, and costs for each edge, find a minimum-cost subgraph that connects the required vertices. We also investigate the areas of network design, multicommodity flows, and packing/covering integer programs. All of these problems are NP-hard, so it is natural to seek approximation algorithms with the best provable approximation ratio.
Overall, we show some new techniques that enhance the already-substantial corpus of LP-based approximation methods, and we also look for limitations of these techniques.
The first half of the thesis deals with linear programming relaxations for the Steiner tree problem. The crux of our work concerns hypergraphic relaxations obtained via the well-known full component decomposition of Steiner trees; in this view, the fundamental building blocks are not edges but hyperedges containing two or more required vertices. We introduce a new hypergraphic LP based on partitions. We show the new LP has the same value as several previously studied hypergraphic ones; when no Steiner nodes are adjacent, we show that the value of the well-known bidirected cut relaxation is also the same. A new partition uncrossing technique is used to demonstrate these equivalences and to show that extreme points of the new LP are well-structured. We improve the best known integrality gap on these LPs in some special cases. We show that several approximation algorithms from the Steiner tree literature can be re-interpreted through linear programs; in particular, our hypergraphic relaxation yields a new view of the Robins-Zelikovsky 1.55-approximation algorithm for the Steiner tree problem.
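For reference, a standard statement of the bidirected cut relaxation mentioned above (the usual textbook form; notation ours): replace each edge by two oppositely directed arcs with the original cost, fix an arbitrary required vertex $r \in R$ as the root, and solve

\[ \min \sum_{a} c_a x_a \quad \text{subject to} \quad \sum_{a \in \delta^{-}(U)} x_a \ge 1 \quad \text{for all } U \subseteq V \setminus \{r\} \text{ with } U \cap R \neq \emptyset, \qquad x \ge 0. \]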
The second half of the thesis deals with a variety of fundamental problems in combinatorial optimization. We show how to apply the iterated LP relaxation framework to the problem of multicommodity integral flow in a tree, obtaining an approximation ratio that is asymptotically optimal in terms of the minimum capacity. Iterated relaxation gives an infeasible solution, so we need to finesse it back to feasibility without losing too much value. Iterated LP relaxation similarly gives an $O(k^2)$-approximation algorithm for packing integer programs with at most $k$ occurrences of each variable; new LP rounding techniques give a $k$-approximation algorithm for covering integer programs with at most $k$ variables per constraint. We study extreme points of the standard LP relaxation for the traveling salesperson problem and show that they can be much more complex than was previously known. The $k$-edge-connected spanning multi-subgraph problem has the same LP, and we prove a lower bound and conjecture an upper bound on the approximability of variants of this problem. Finally, we show that for packing/covering integer programs with a bounded number of constraints, for any $\epsilon > 0$, there is an LP with integrality gap at most $1 + \epsilon$.
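The standard LP relaxation for the traveling salesperson problem referred to above is the subtour elimination (Held-Karp) LP:

\[ \min \sum_{e} c_e x_e \quad \text{subject to} \quad x(\delta(v)) = 2 \ \ \forall v \in V, \qquad x(\delta(S)) \ge 2 \ \ \forall\, \emptyset \neq S \subsetneq V, \qquad 0 \le x_e \le 1; \]

a common relaxation of the $k$-edge-connected spanning multi-subgraph problem scales these constraints, replacing each right-hand side $2$ by $k$ and dropping the upper bounds, which is the sense in which the two problems share essentially the same LP.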
452
Approximation Algorithms for (S,T)-Connectivity Problems. Laekhanukit, Bundit (27 July 2010)
We study a directed network design problem called the $k$-$(S,T)$-connectivity problem; we design and analyze approximation algorithms and give hardness results. For each positive integer $k$, the minimum cost $k$-vertex connected spanning subgraph problem is a special case of the $k$-$(S,T)$-connectivity problem. We defer precise statements of the problem and of our results to the introduction. For $k=1$, we call the problem the $(S,T)$-connectivity problem. We study three variants of the problem: the standard $(S,T)$-connectivity problem, the relaxed $(S,T)$-connectivity problem, and the unrestricted $(S,T)$-connectivity problem. We give hardness results for these three variants. We design a $2$-approximation algorithm for the standard $(S,T)$-connectivity problem, and tight approximation algorithms for the relaxed $(S,T)$-connectivity problem and one of its special cases.
For any $k$, we give an $O(\log k\log n)$-approximation algorithm, where $n$ denotes the number of vertices. This approximation guarantee almost matches the best guarantee known for the minimum cost $k$-vertex connected spanning subgraph problem, which is $O(\log k\log\frac{n}{n-k})$, due to Nutov (2009).
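For orientation only (the thesis defers its precise definitions to the introduction, so the following formalization is an illustrative assumption of ours, not a quotation): given a directed graph $G=(V,E)$ with nonnegative edge costs and vertex subsets $S,T \subseteq V$, find a minimum-cost subgraph that contains $k$ vertex-disjoint directed paths from $S$ to $T$.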
453
Small and Stable Descriptors of Distributions for Geometric Statistical Problems. Phillips, Jeff M. (January 2009)
This thesis explores how to sparsely represent distributions of points for geometric statistical problems. A coreset C is a small summary of a point set P such that if a certain statistic is computed on P and on C, then the difference in the results is guaranteed to be bounded by a parameter ε. Two examples of coresets are ε-samples and ε-kernels. An ε-sample can estimate the density of a point set in any range from a geometric family of ranges (e.g., disks, axis-aligned rectangles). An ε-kernel approximates the width of a point set in all directions. Both coresets have size that depends only on the error parameter ε, not on the size of the original data set. We demonstrate several improvements to these coresets and show how they are useful for geometric statistical problems.

We reduce the size of ε-samples for density queries in axis-aligned rectangles to nearly the square root of the size needed when the queries are with respect to more general families of shapes, such as disks. We also show how to construct ε-samples of probability distributions.

We show how to maintain "stable" ε-kernels: if the point set P changes by a small amount, then the ε-kernel also changes by a small amount. This is useful in surveillance tracking problems, and the stability leads to more efficient algorithms for maintaining ε-kernels.

We next study input point sets that are uncertain, with the uncertainty modeled by probability distributions. Statistics on these point sets (e.g., the radius of the smallest enclosing ball) do not have exact answers, but rather distributions of answers. We describe data structures to represent approximations of these distributions and algorithms to compute them. We also show how to create distributions of ε-kernels and ε-samples for these uncertain data sets.

Finally, we examine a spatial anomaly detection problem: computing a spatial scan statistic. The input is a point set P and measurements on the point set. The spatial scan statistic finds the range (e.g., an axis-aligned bounding box) where the measurements inside the range are most different from the measurements outside of it. We show how to compute this statistic efficiently while allowing a bounded amount of approximation error. This result generalizes to several statistical models and types of input point sets.
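For concreteness, the two coreset notions above in their standard forms (textbook definitions; the thesis may differ in minor conventions): a subset $S \subseteq P$ is an ε-sample for a family of ranges $\mathcal{A}$ if

\[ \left|\, \frac{|P \cap A|}{|P|} - \frac{|S \cap A|}{|S|} \,\right| \le \varepsilon \quad \text{for every } A \in \mathcal{A}, \]

and a subset $K \subseteq P$ is an ε-kernel if for every unit direction $u$ the directional width $\omega(Q,u) = \max_{p \in Q}\langle u,p\rangle - \min_{p \in Q}\langle u,p\rangle$ satisfies $\omega(K,u) \ge (1-\varepsilon)\,\omega(P,u)$.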
454
Geometric Approximation Algorithms - A Summary Based Approach. Raghvendra, Sharathkumar (January 2012)
Large-scale geometric data is ubiquitous. In this dissertation, we design algorithms and data structures to process large-scale geometric data efficiently. We design algorithms for some fundamental geometric optimization problems that arise in motion planning, machine learning, and computer vision.

For a stream S of n points in d-dimensional space, we develop (single-pass) streaming algorithms for maintaining extent measures such as the minimum enclosing ball and diameter. Our streaming algorithms use working space that is polynomial in d and sub-linear in n. For the problems of computing the diameter, width, and minimum enclosing ball of S, we obtain lower bounds on the worst-case approximation ratio of any streaming algorithm that uses space polynomial in d. On the positive side, we design a summary called the blurred ball cover and use it for answering approximate farthest-point queries and maintaining an approximate minimum enclosing ball and diameter of S. We describe a streaming algorithm for maintaining a blurred ball cover whose working space is linear in d and independent of n.

For a set P of k pairwise-disjoint convex obstacles in 3 dimensions, we design algorithms and data structures for computing Euclidean shortest paths between a source s and a destination t. The running time of our algorithm is linear in n, and the size and query time of our data structure are independent of n. We follow a summary-based approach: quickly compute a small sketch Q of P whose size is independent of n, and then compute approximate shortest paths with respect to Q.

For d-dimensional point sets A and B with |A| = |B| = n, and for a parameter ε > 0, we give an algorithm to compute an ε-approximate minimum-weight perfect matching of A and B under a distance d(·,·) in time O(n^{1.5} τ(n)); here τ(n) is the query/update time of a dynamic weighted nearest-neighbor structure under d(·,·). When A and B are point sets from a bounded integer grid, for the L_1 and L_∞ norms, our algorithm computes a minimum-weight perfect matching of A and B in time O(n^{1.5}). Our algorithm also extends to a generalization of matching called the transportation problem.

We also present an O(n polylog n)-time algorithm that, under any L_p norm, computes an ε-approximate minimum-weight perfect matching of A and B with high probability; all previous algorithms take Ω(n^{1.5}) time. We approximate the L_p norm using a distance function based on a randomly shifted quad-tree. The algorithm iteratively generates an approximate minimum-cost augmenting path under the new distance function in time proportional to the length of the path. We show that the total length of the augmenting paths generated by the algorithm is O(n log n), implying a near-linear running time.

All of the problems mentioned above have a history of more than two decades, and the algorithms presented here improve on previous work by an order of magnitude. Many of these improvements are obtained via new geometric techniques that may have broader applications and are of independent interest.
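A minimal sketch of the randomly shifted quad-tree distance idea mentioned above (an illustration under our own simplifying assumptions, not the dissertation's implementation): shift the grid by a random offset, then charge two points the side length of the smallest grid cell, over all levels, that contains both. The returned value is never smaller than the L_∞ distance and, in expectation over the shift, not much larger, which is the property such embeddings exploit.

    import random

    def quadtree_distance(p, q, delta, levels=32):
        """Distance induced by a randomly shifted quadtree over [0, delta)^d.

        Returns the side length of the smallest shifted grid cell that
        still contains both p and q.
        """
        d = len(p)
        shift = [random.uniform(0.0, delta) for _ in range(d)]
        side = 2.0 * delta  # root cell is large enough for any shifted point
        for _ in range(levels):
            half = side / 2.0
            # Do p and q still share a cell at the finer level?
            same_cell = all(
                int((p[i] + shift[i]) // half) == int((q[i] + shift[i]) // half)
                for i in range(d)
            )
            if not same_cell:
                return side  # they separate here; charge the current cell size
            side = half
        return side

    # Example: two nearby points in the unit square.
    random.seed(0)
    print(quadtree_distance((0.20, 0.30), (0.22, 0.31), delta=1.0))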
455
Multi-precision Floating Point Special Function Unit for Low Power Applications. Liao, Ying-Chen (7 September 2010)
Modern technology contains various types of multimedia applications, and these applications do not necessarily have to be executed with the most precise accuracy; in short, they are fault-tolerant. Consequently, this thesis proposes a multi-precision iterative floating-point special function unit that can be executed under different modes to meet the error requirements of each specific application, achieving power reduction in the process.
To minimize the area of our design, we have developed two iterative architectures to implement the multi-precision floating-point special function unit. The first proposed architecture can perform three kinds of operations: a reciprocal operation, a reciprocal square root operation, and a logarithm operation. After deciding which function is to be performed, the user can choose among four precision modes. From lowest precision to highest, we call them the first, second, third, and fourth modes. During implementation, a C model was also designed to evaluate the maximum error of each mode by comparison with the most accurate software result, which has 23-bit precision. When the reciprocal function is chosen and the user requires full precision, the multi-precision special function operator is executed twice, with an error rate of approximately 0.0001%. When less precision is required, the user can choose between two intermediate modes: one offers 15-bit accuracy and the other guarantees 12-bit precision. The former also requires the hardware to be executed twice, but the latter only once. The 15-bit mode has an error rate of around 0.01%, and the 12-bit mode roughly 0.05%. In addition, when visual or audio quality is not the greatest concern, we provide a least accurate mode; it maintains 8-bit accuracy and has an error rate of approximately 0.8%. The reciprocal square root and logarithm operations likewise have four precision modes. The reciprocal square root operation guarantees the same accuracy in each mode as the reciprocal operation, with error rates of 0.004%, 0.01%, 0.06%, and 0.5% from the highest precision mode to the lowest. The logarithm operation guarantees 23-, 16-, 12-, and 8-bit precision from the highest mode to the lowest, with error rates of 0.00003%, 0.002%, 0.06%, and 0.3%, respectively. These precision choices are built into the proposed structure mainly to reduce power consumption: picking a low-precision mode shuts down some components of the design. In addition to switching modes, we have also added tri-state buffers to certain components as another means of decreasing power.
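The once-versus-twice execution pattern above is characteristic of Newton-Raphson refinement, which (as the next paragraph notes) the first architecture integrates with piecewise polynomial approximation. A generic software sketch of the reciprocal iteration, not the thesis's hardware:

    def reciprocal(a, seed, iterations):
        # Newton-Raphson for 1/a: x_{n+1} = x_n * (2 - a * x_n).
        # Each pass roughly doubles the number of correct bits, so a
        # low-precision mode can stop after one pass while the
        # full-precision mode runs the same datapath twice.
        x = seed
        for _ in range(iterations):
            x = x * (2.0 - a * x)
        return x

    a = 1.7
    seed = 0.5  # in hardware, a small lookup table would supply this
    for n in (1, 2, 3):
        approx = reciprocal(a, seed, n)
        print(n, approx, abs(approx - 1.0 / a))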
Through experimental results, we discovered that the first architecture's effect on power reduction was not as expected. Because it integrates the Newton-Raphson method with the piecewise polynomial approximation method, its delay and area increase substantially, which hurts power savings. Consequently, we developed a second architecture, based mainly on the piecewise polynomial approximation method. From this method we implemented an iterative design that supports the same three operations as the first architecture and provides three precision modes: the lowest provides 8-bit accuracy, the second 14-bit accuracy, and the third and most precise mode 22-bit accuracy. According to our C model, the maximum error rates are as follows: for the reciprocal function, from the lowest mode to the highest, 0.19%, 0.00006%, and 0.000015%; for the reciprocal square root, 0.09%, 0.000022%, and 0.000014%; and for the logarithm, 0.33%, 0.000043%, and 0.000015%. The experimental results show that the newly proposed architecture compares favorably with the traditional piecewise polynomial approximation architecture: it has smaller area and shorter delay and, most importantly, it reduces power and energy effectively.
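A minimal software sketch of the piecewise polynomial approximation idea behind the second architecture (the segment count, degree, and target function are our own illustrative choices, not the thesis's coefficient tables): split the input interval into segments, fit a low-degree polynomial per segment offline, and evaluate the right one at runtime.

    import numpy as np

    SEGMENTS = 16  # uniform segments over [1, 2)
    DEGREE = 2     # per-segment polynomial degree

    # Offline: fit one low-degree polynomial per segment of f(x) = 1/x,
    # mimicking a coefficient ROM.
    edges = np.linspace(1.0, 2.0, SEGMENTS + 1)
    table = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        xs = np.linspace(lo, hi, 64)
        table.append(np.polyfit(xs, 1.0 / xs, DEGREE))

    def reciprocal_pw(x):
        # Runtime: select the segment, then evaluate its quadratic.
        idx = min(int((x - 1.0) * SEGMENTS), SEGMENTS - 1)
        return np.polyval(table[idx], x)

    x = 1.37
    print(reciprocal_pw(x), 1.0 / x)  # agrees to several decimal digits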
456
Iterative Methods for Common Fixed Points of Nonexpansive Mappings in Hilbert Spaces. Lai, Pei-lin (16 May 2011)
The aim of this work is to propose viscosity-like methods for finding a specific common fixed point of a finite family $T = \{T_i\}_{i=1}^{N}$ of nonexpansive self-mappings of a closed convex subset $C$ of a Hilbert space $H$. We propose two schemes: one implicit and the other explicit. The implicit scheme determines a set $\{x_t : 0 < t < 1\}$ through the fixed point equation $x_t = t f(x_t) + (1-t) T x_t$, where $f : C \to C$ is a contraction. The explicit scheme is the discretization of the implicit scheme and defines a sequence $\{x_n\}$ by the recursion $x_{n+1} = \alpha_n f(x_n) + (1-\alpha_n) T x_n$ for $n \ge 0$, where $\{\alpha_n\} \subset (0,1)$. It has been shown in the literature that both the implicit and explicit schemes converge in norm to a fixed point of $T$ (with additional conditions imposed on the sequence $\{\alpha_n\}$ in the explicit scheme). We extend both schemes to the case of a finite family of nonexpansive mappings. Our proposed schemes converge in norm to a common fixed point of the family which, in addition, solves a variational inequality.
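A small numerical illustration of the explicit scheme (a toy example of our own: $T$ a plane rotation, which is nonexpansive with unique fixed point $0$; $f(x) = x/2$ a contraction; and $\alpha_n = 1/(n+1)$, one standard choice meeting the usual conditions):

    import numpy as np

    theta = 1.0
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c, -s], [s, c]])  # rotation: nonexpansive, Fix(T) = {0}

    def f(x):
        return 0.5 * x               # contraction with coefficient 1/2

    x = np.array([5.0, -3.0])
    for n in range(100000):
        alpha = 1.0 / (n + 1)
        x = alpha * f(x) + (1.0 - alpha) * (T @ x)

    # The norm decays like n^(-1/2) toward 0, the fixed point of T.
    print(np.linalg.norm(x))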
457
Spatial Scaling for the Numerical Approximation of Problems on Unbounded Domains. Trenev, Dimitar Vasilev (December 2009)
In this dissertation we describe a coordinate scaling technique for the numerical approximation of solutions to certain problems posed on unbounded domains in two and three dimensions. The technique amounts to introducing variable coefficients into the problem, which defines a solution that coincides with the solution to the original problem inside a bounded domain of interest and decays rapidly outside of it. The decay of the solution to the modified problem allows us to truncate the problem to a bounded domain and subsequently solve the finite element approximation problem on that finite domain.
The particular problems that we consider are exterior problems for the Laplace equation and the time-harmonic acoustic and elastic wave scattering problems. We introduce a real scaling change of variables for the Laplace equation and experimentally compare its performance to that of existing alternative approaches for the numerical approximation of this problem.
Proceeding from the real scaling transformation, we introduce a version of the perfectly matched layer (PML) absorbing boundary as a complex coordinate shift and apply it to the exterior Helmholtz (acoustic scattering) equation. We outline the analysis of the continuous PML problem, discuss the implementation of a numerical method for its approximation, and present computational results illustrating its efficiency.
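The complex coordinate shift underlying the PML can be summarized as follows (the standard stretching; the thesis's exact parametrization may differ). In each coordinate direction, outside the region of interest one substitutes

\[ \tilde{x}(x) = x + \frac{i}{\omega}\int_0^x \sigma(s)\, ds, \]

with damping $\sigma \ge 0$ vanishing on the region of interest, so that an outgoing wave $e^{i\omega x}$ becomes $e^{i\omega x}\, e^{-\int_0^x \sigma(s)\, ds}$: it decays exponentially inside the layer while the solution is unchanged wherever $\sigma \equiv 0$.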
We then discuss in detail the analysis of the elastic wave PML problem and its numerical discretization. We show that the continuous problem is well-posed for a sufficiently large truncation domain, and that the discrete problem is well-posed on the truncated domain for a sufficiently small PML damping parameter. We discuss ways of avoiding the latter restriction.
Finally, we consider a new non-spherical scaling for the Laplace and Helmholtz equations. We present computational results with such scalings and conduct numerical experiments coupling real scaling with PML as a means to increase the efficiency of the PML techniques, even when the damping parameters are small.
458
The Schrodinger Equation as a Volterra Problem. Mera, Fernando Daniel (May 2011)
The objective of the thesis is to treat the Schrodinger equation in parallel with a standard treatment of the heat equation. In the books of the Rubinsteins and of Kress, the heat equation initial value problem is converted into a Volterra integral equation of the second kind, and the Picard algorithm is then used to find the exact solution of the integral equation. Similarly, the Schrodinger equation boundary initial value problem can be turned into a Volterra integral equation. We follow the books of the Rubinsteins and of Kress to establish for the Schrodinger equation results similar to those for the heat equation. The thesis proves that the Schrodinger equation with a source function does indeed have a unique solution. The Poisson integral formula with the Schrodinger kernel is shown to hold in the Abel summable sense. Green functions are introduced in order to obtain a representation for any function that satisfies the Schrodinger initial-boundary value problem. The Picard method of successive approximations is used to construct an approximate solution that approaches the exact Green function as n goes to infinity. To prove convergence, Volterra kernels are introduced in arbitrary Banach spaces, and the Volterra and General Volterra theorems are proved and used to show that the Neumann series for the L^1 kernel, the L^infinity kernel, the Hilbert-Schmidt kernel, the unitary kernel, and the WKB kernel converge to the exact Green function. In the WKB case, the solution of the Schrodinger equation is given in terms of classical paths; that is, multiple scattering expansions are used to construct, from the action S, the quantum Green function. The interior Dirichlet problem is then converted into a Volterra integral problem, and it is shown that the Volterra integral equation with the quantum surface kernel can be solved by the method of successive approximations.
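For reference, the Volterra integral equation of the second kind and the Picard iteration used on it (standard textbook forms consistent with the abstract):

\[ u(t) = g(t) + \int_0^t K(t,s)\, u(s)\, ds, \qquad u_{n+1}(t) = g(t) + \int_0^t K(t,s)\, u_n(s)\, ds, \]

whose iterates sum to the Neumann series $u = \sum_{m \ge 0} \mathcal{K}^m g$; the Volterra structure (integration only up to $t$) is what lets this series converge without any smallness assumption on the kernel.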
459
Approximation Algorithms and Heuristics for a 2-depot, Heterogeneous Hamiltonian Path Problem. Doshi, Riddhi Rajeev (August 2010)
Various civil and military applications of UAVs or ground robots require a set of vehicles to monitor a group of targets. Routing problems naturally arise in this setting, where the operators of the vehicles must plan paths suitably in order to optimize the use of available resources such as sensors and fuel. These vehicles may differ either in their structural (design and dynamics) or functional (sensing) capabilities. This thesis addresses an important routing problem involving two heterogeneous vehicles. As the routing problem is NP-hard, we develop an approximation algorithm and heuristics to solve it. Our approach divides the routing problem into two sub-problems: partitioning and sequencing. Partitioning the targets involves finding two distinct sets of targets, each corresponding to one of the vehicles. We then find a sequence in which these targets should be visited in order to optimize the use of resources to the maximum possible extent. The sequencing problem can be solved either by the Christofides algorithm or by the Lin-Kernighan heuristic (LKH). The partitioning problem is tackled by solving a linear program (LP) obtained by relaxing some of the constraints of an integer programming (IP) model for the problem. We observe the performance of two LP models for the partitioning: the first relaxes only the integrality constraints, whereas the second relaxes both the integrality and the degree constraints. The algorithms were implemented in C++ with the help of Concert Technology for CPLEX and the Boost Graph Library, and their performance was studied on 50 random instances of varying problem sizes. On average, the algorithms based on the first LP model provided better (closer to optimal) solutions than those based on the second LP model. We also observed that, for both LP models, the average quality of the solutions given by the heuristics (within 5% of the optimum) was better than the average quality of the solutions obtained from the approximation algorithm (between 30% and 60% of the optimum, depending on the problem size). A sketch of the partition-then-sequence pipeline appears below.
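A compact illustration of that pipeline (our own sketch: a distance-based greedy split stands in for the LP-based partitioning, and a nearest-neighbor pass stands in for Christofides/LKH):

    import math, random

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def partition(targets, depot1, depot2):
        # Assign each target to the closer depot (vehicle); the thesis
        # instead solves an LP relaxation of an IP model.
        s1, s2 = [], []
        for t in targets:
            (s1 if dist(t, depot1) <= dist(t, depot2) else s2).append(t)
        return s1, s2

    def sequence(depot, targets):
        # Order targets by a nearest-neighbor heuristic (a crude stand-in
        # for Christofides or LKH); returns the path and its cost.
        path, cost, cur = [depot], 0.0, depot
        remaining = list(targets)
        while remaining:
            nxt = min(remaining, key=lambda t: dist(cur, t))
            remaining.remove(nxt)
            cost += dist(cur, nxt)
            path.append(nxt)
            cur = nxt
        return path, cost

    random.seed(1)
    targets = [(random.random() * 10, random.random() * 10) for _ in range(12)]
    d1, d2 = (0.0, 0.0), (10.0, 10.0)
    for depot, part in zip((d1, d2), partition(targets, d1, d2)):
        print(depot, sequence(depot, part)[1])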
460
Boundary Approximation Method for Stokes Flows. Chang, Chia-ming (20 July 2007)
(No abstract provided.)