41 |
Survey of Approximation Algorithms for Set Cover Problem / Dutta, Himanshu Shekhar, 12 1900 (has links)
In this thesis, I survey 11 approximation algorithms for the unweighted set cover problem. I have also implemented three of the algorithms and created a software library that stores the code I have written. The algorithms surveyed are: 1. Johnson's standard greedy; 2. f-frequency greedy; 3. Goldschmidt, Hochbaum and Yu's modified greedy; 4. Halldorsson's local optimization; 5. Duh and Fürer's semi-local optimization; 6. Asaf Levin's improvement to Duh and Fürer; 7. simple rounding; 8. randomized rounding; 9. LP duality; 10. primal-dual schema; and 11. a network flow technique. Most of the algorithms surveyed are refinements of the standard greedy algorithm.
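To make the baseline concrete, below is a minimal sketch (not taken from the thesis) of Johnson's standard greedy for unweighted set cover: repeatedly pick the set covering the most still-uncovered elements. The instance at the bottom is hypothetical.

```python
def greedy_set_cover(universe, sets):
    """Johnson's standard greedy: repeatedly pick the set that covers
    the largest number of still-uncovered elements. Gives an
    H_n ~ ln(n) approximation for unweighted set cover."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Choose the set with maximum coverage of uncovered elements.
        best = max(sets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("infeasible: some element is in no set")
        cover.append(best)
        uncovered -= best
    return cover

# Hypothetical example instance.
universe = range(1, 8)
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 5, 7}]
print(greedy_set_cover(universe, sets))
```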
|
42 |
Fast and scalable triangle counting in graph streams: the hybrid approach / Singh, Paramvir, 14 December 2020 (has links)
Triangle counting is a major graph problem with several applications in social network analysis, anomaly detection, and other areas. A considerable body of work addresses approximate computation of global triangle counts under several computational models. One of the most popular streaming models is edge streaming, in which the edges arrive one by one as a graph stream. We categorize the existing literature into two categories: the Fixed Memory (FM) approach and the Fixed Probability (FP) approach. As graphs grow, several challenges arise, such as memory space limitations and prohibitively long running times, and both the FM and FP categories exhibit limitations: FP algorithms fail to scale to massive graphs, and we identified that FM algorithms have higher computational time than their FP variants.
In this work, we present a new category, the hybrid approach, that overcomes the limitations of both the FM and FP approaches. We present two new algorithms in this category, Neighbourhood Hybrid Multisampling (NHMS) and Triest/ThinkD Hybrid Sampling (THS), for estimating the number of global triangles in a graph. These algorithms are highly scalable and run faster than their FM and FP variants. We show experimentally that both NHMS and THS outperform state-of-the-art algorithms in space-efficient environments. / Graduate
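For illustration, here is a minimal sketch of the Fixed Probability idea in the spirit of Doulion-style edge sampling: keep each edge independently with probability p and rescale the sampled triangle count. This is a stand-in for the FP category, not the thesis's NHMS or THS algorithms.

```python
import random
from collections import defaultdict

def fp_triangle_estimate(edge_stream, p, seed=0):
    """Fixed Probability (FP) sketch: keep each edge with probability p,
    count triangles among kept edges, and scale by 1/p^3 (each triangle
    survives with probability p^3). Illustrative only."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    triangles = 0
    for u, v in edge_stream:
        if rng.random() < p:
            # Every common neighbour of u and v closes a sampled triangle.
            triangles += len(adj[u] & adj[v])
            adj[u].add(v)
            adj[v].add(u)
    return triangles / p**3

# Hypothetical stream: a 5-clique has C(5,3) = 10 triangles.
stream = [(i, j) for i in range(5) for j in range(i + 1, 5)]
print(fp_triangle_estimate(stream, p=1.0))  # exact when p = 1 -> 10.0
```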
|
43 |
Efficient Use of Exponential Size Linear Programs / Polacek, Lukas, January 2015 (has links)
In the past decades, linear programming (LP) has been used successfully to develop approximation algorithms for various optimization problems. In particular, the so-called assignment LP has led to substantial progress for various allocation problems, including scheduling unrelated parallel machines. However, we have reached its limits for many problems, since the best-known approximation algorithms match the integrality gap of the assignment LP for these problems. The natural question is then whether a different LP formulation can lead to better algorithms. We answer this question positively for variants of two allocation problems: max-min fair allocation and maximum budgeted allocation. This is achieved by using a more powerful LP formulation called the configuration LP, which has an exponential number of variables but can be approximated in polynomial time.
The restricted max-min fair allocation problem, also known as the restricted Santa Claus problem, is one of the few problems that have a better polynomial-time estimation algorithm than approximation algorithm. An estimation algorithm estimates the value of the optimal solution but is not necessarily able to find a solution attaining it. The configuration LP can be used to estimate the optimal value within a factor of 1/(4+ɛ) for any ɛ > 0, but a solution achieving this value was previously only known to be computable in exponential time. We give an approximation algorithm with the same approximation ratio but improve the running time to quasi-polynomial: n^O(log n). Our techniques also have the interesting property that although we use the rather complex configuration LP in the analysis, we never actually solve it, and the resulting algorithm is therefore purely combinatorial. For maximum budgeted allocation (MBA), the integrality gap of the assignment LP is exactly 3/4. We prove that the integrality gap of the configuration LP is strictly better than 3/4 and provide corresponding polynomial-time rounding algorithms for two variants of the problem: restricted MBA and graph MBA. Finally, we improve the best-known upper bound on the integrality gap for the general case from 0.833 to 0.828 and also show hardness-of-approximation results for both variants studied. / ERC APPROXNP 226203
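For reference, a sketch of the configuration LP for max-min fair allocation with a target value T, where $\mathcal{C}(i,T)$ denotes the configurations (bundles of resources of value at least T) feasible for player i; this is a standard formulation with illustrative notation, not text from the thesis.

```latex
% Configuration LP (feasibility version) for target value T:
\begin{align*}
\sum_{C \in \mathcal{C}(i,T)} x_{i,C} &= 1 && \text{for each player } i,\\
\sum_{i}\ \sum_{C \in \mathcal{C}(i,T):\, j \in C} x_{i,C} &\le 1 && \text{for each resource } j,\\
x_{i,C} &\ge 0.
\end{align*}
```

The exponentially many variables $x_{i,C}$ are why the LP is only approximately solvable in polynomial time (via its dual and a separation oracle).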
|
44 |
Independent b-matching Approximation Algorithm with Applications to Peer-to-Peer Networks / Ochs, Christopher S., 04 November 2020 (has links)
No description available.
|
45 |
Capacity Characterization of Multi-Hop Wireless Networks - A Cross Layer Approach / Chafekar, Deepti Ramesh, 06 May 2009 (has links)
A fundamental problem in multi-hop wireless networks is to estimate their throughput capacity. The problem can be stated informally as follows: given a multi-hop wireless network and a set of source-destination pairs, determine the maximum rate r at which data can be transmitted between each source-destination pair. Estimating the capacity of a multi-hop wireless network is practically useful: it yields insights into the fundamental performance limits of the network and at the same time aids the development of protocols that can utilize the network close to this limit. A goal of this dissertation is to develop rigorous mathematical foundations for computing the capacity of any given multi-hop wireless network with known source-destination pairs.
An important factor that affects the capacity of multi-hop wireless networks is radio interference. As a result, researchers have proposed increasingly realistic interference models that aim to capture the physical characteristics of radio signals. Some of the commonly used simple models are based on geometric disk graphs. The simplicity of these models facilitates the development of provable and often conceptually simple methods for estimating the capacity of wireless networks. A potential weakness of this class of models is that it oversimplifies the physical process by assuming that the signal ends abruptly at the boundary of a geometric region (a disk for omni-directional antennas). A more sophisticated interference model is the physical interference model, also known as the Signal to Interference Plus Noise Ratio (SINR) model. This model is more realistic than disk-graph models, as it captures the effects of signal fading and ambient noise. This work considers both disk-graph and SINR interference models.
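For concreteness, the standard SINR feasibility condition can be written as follows (a standard formulation; the notation here is illustrative):

```latex
% A transmission from node $u$ to node $v$ succeeds iff
\[
\mathrm{SINR}(u,v) \;=\;
\frac{P_u / d(u,v)^{\alpha}}
     {N_0 + \sum_{w \neq u} P_w / d(w,v)^{\alpha}} \;\ge\; \beta,
\]
% where $P_u$ is the transmit power of $u$, $d(\cdot,\cdot)$ the distance,
% $\alpha$ the path-loss exponent, $N_0$ the ambient noise, and
% $\beta$ the required decoding threshold.
```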
In addition to radio interference, the throughput capacity of a multi-hop wireless network also depends on other factors, including the specific paths selected to route the packets between the source-destination pairs (routing), the times at which packets are transmitted (scheduling), the power with which nodes transmit (power control), and the rate at which packets are injected (rate control). In this dissertation, we consider three different problems related to estimating network capacity and propose an algorithmic approach for solving them. We first consider the problem of maximizing throughput under the SINR interference model by jointly considering the effects of routing and scheduling constraints. Second, we consider the problem of maximizing throughput by performing adaptive power control, scheduling and routing under disk-graph interference models. Finally, we examine the problem of minimizing end-to-end latency by performing joint routing, scheduling and power control under the SINR interference model. Recent results have shown that traditional layered networking principles lead to inefficient utilization of resources in multi-hop wireless networks; motivated by these observations, recent papers have begun investigating cross-layer design approaches. Although our work does not develop new cross-layer protocols, it yields new insights that could contribute to the development of such protocols in the future.
Our approach for solving these multi-objective optimization problems is based on combining mathematical programming with randomized rounding to obtain polynomial time approximation algorithms with provable worst case performance ratios. For the problems considered in this work, our results provide the best analytical performance guarantees currently known in the literature. We complement our rigorous theoretical and algorithmic analysis with simulation-based experimental analysis. Our experimental results help us understand the limitations of our approach and assist in identifying certain parameters for improving the performance of our techniques. / Ph. D.
|
46 |
Approximation algorithms for multidimensional bin packing / Khan, Arindam, 07 January 2016 (has links)
The bin packing problem has been a cornerstone of approximation algorithms and has been studied extensively since the early seventies.
In the classical bin packing problem, we are given a list of real numbers in the range (0, 1], and the goal is to place them in a minimum number of bins so that no bin holds numbers summing to more than 1.
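To make the problem statement concrete, here is a minimal sketch of the classic First Fit heuristic for 1-D bin packing; it is shown for orientation only, as the thesis studies much stronger algorithms.

```python
def first_fit(items):
    """Classic First Fit: place each item into the first bin with
    enough residual capacity, opening a new unit-capacity bin when
    none fits. Items are reals in (0, 1]."""
    bins = []  # each bin is the list of items packed into it
    for x in items:
        for b in bins:
            if sum(b) + x <= 1.0:
                b.append(x)
                break
        else:
            bins.append([x])  # no existing bin fits: open a new one
    return bins

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))
```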
In this thesis we study approximation algorithms for three generalizations of bin packing: geometric bin packing, vector bin packing and weighted bipartite edge coloring.
In two-dimensional (2-D) geometric bin packing, we are given a collection of rectangular items to be packed into a minimum number of unit size square bins. Geometric packing has vast applications in cutting stock, vehicle loading, pallet packing, memory allocation and several other logistics and robotics related problems.
We consider the widely studied orthogonal packing case, where the items must be placed in the bin such that their sides are parallel to the sides of the bin.
Here two variants are usually studied: (i) where the items cannot be rotated, and (ii) where they can be rotated by 90 degrees. We give a polynomial time algorithm with an asymptotic approximation ratio of $\ln(1.5) + 1 \approx 1.405$ for both the version with rotations and the version without.
We also show the limitations of rounding-based algorithms, which are ubiquitous in bin packing: any algorithm that rounds at least one side of each large item to some number in a constant-size collection of values chosen independently of the problem instance cannot achieve an asymptotic approximation ratio better than 3/2.
In d-dimensional vector bin packing (VBP), each item is a d-dimensional vector that needs to be packed into unit vector bins. The problem is of great significance in resource-constrained scheduling and also appears in virtual machine placement in cloud computing. Even in two dimensions, it has novel applications in layout design, logistics, loading and scheduling problems.
We obtain a polynomial time algorithm with an asymptotic approximation ratio of $\ln(1.5) + 1 \approx 1.405$ for 2-D VBP, as well as a polynomial time algorithm with an almost tight (absolute) approximation ratio of $1+\ln(1.5)$ for 2-D VBP.
For $d$ dimensions, we give a polynomial time algorithm with an asymptotic approximation ratio of $\ln(d/2) + 1.5 \approx \ln d+0.81$.
We also consider vector bin packing under resource augmentation. We give a polynomial time algorithm that packs the vectors into $(1+\epsilon)Opt$ bins when augmentation is allowed in (d - 1) dimensions, where $Opt$ is the minimum number of unit bins needed to pack the vectors.
In the weighted bipartite edge coloring problem, we are given an edge-weighted bipartite graph $G=(V,E)$ with weights $w: E \rightarrow [0,1]$. The task is to find a proper weighted coloring of the edges with as few colors as possible. An edge coloring of the weighted graph is a proper weighted coloring if, for every vertex and every color, the total weight of the edges of that color incident to the vertex is at most one. This problem is motivated by the rearrangeability of 3-stage Clos networks, which is very useful in various applications in interconnection networks and routing.
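Formally, the properness condition just described can be written as (illustrative notation):

```latex
% A coloring $\chi: E \to \{1,\dots,k\}$ is a proper weighted coloring if,
% for every vertex $v$ and every color $c$,
\[
\sum_{e \ni v \,:\, \chi(e) = c} w(e) \;\le\; 1 .
\]
```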
We give a polynomial time approximation algorithm that returns a proper weighted coloring with at most $\lceil 2.2223m \rceil$ colors, where $m$ is the minimum number of unit-sized bins needed to pack the weights of the edges incident at any vertex.
We also show that if all edge weights are greater than $1/4$, then $\lceil 2.2m \rceil$ colors suffice.
|
47 |
Quantitative verification of real-time properties with application to medical devices / Diciolla, Marco, January 2014 (has links)
Probabilistic model checking is a powerful technique used to ensure the correct functioning of systems that exhibit real-time and stochastic behaviours. Many such systems are embedded and used in safety-critical situations, such as implantable medical devices. This thesis aims to develop a formal model-based framework tailored to the analysis and verification of cardiac pacemakers. The contributions are novel approaches to the automatic verification and validation of real-time properties over continuous-time models, applicable to software embedded in medical devices.

First, we address the problem of model checking continuous-time Markov chain (CTMC) models against real-time specifications given either in the form of temporal logic, namely metric temporal logic (MTL) and linear duration properties (LDP), or as timed automata (TA). The main question we address is: given a continuous-time Markov chain, what is the probability of the set of timed paths that satisfy the real-time property under consideration? We provide novel algorithms to approximate this probability by generating systems of linear inequalities over variables that represent the waiting times in system states, and then solving multidimensional integrals over this set.

Second, we present a model-based framework to support the design and verification of pacemakers against real-time properties. The pacemaker is modelled as a network of timed automata, whereas the human heart is modelled either as a network of timed automata or as a network of hybrid automata. Our framework can be instantiated with personalised heart models whose parameters can be learnt from patient data, and we have done so to validate our approach. We introduce property patterns and the counting metric temporal logic (CMTL) in order to specify the properties of interest, and we provide new verification algorithms for networks of timed or hybrid automata against property patterns and CMTL.

Finally, we pose and solve the parameter synthesis problem: given a network of timed automata containing model parameters, an objective function and a CMTL formula, find the set of parameter valuations, if any exist, that satisfy the CMTL formula and maximise the objective function.

The framework has been implemented using Simulink, Matlab and Python code. Extensive experimental results on pacemaker models have been obtained and are discussed in detail. The techniques developed in this thesis can assist in the design and verification of software embedded in medical devices.
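The thesis approximates such probabilities via systems of linear inequalities and multidimensional integration; as a contrast, the sketch below estimates the probability of a simple time-bounded reachability property (a basic MTL fragment) by Monte Carlo simulation of CTMC paths. The chain, rates, and property are hypothetical, and the method is statistical rather than exact.

```python
import random

def simulate_ctmc(rates, start, horizon, rng):
    """Simulate one timed path of a CTMC up to the time horizon.
    `rates[s]` maps successor states to transition rates out of s."""
    t, s, path = 0.0, start, []
    while rates.get(s):  # absorbing state: empty successor map
        total = sum(rates[s].values())
        dwell = rng.expovariate(total)  # exponential holding time
        if t + dwell > horizon:
            break
        t += dwell
        # Pick the successor proportionally to its rate.
        r, acc = rng.random() * total, 0.0
        for nxt, rate in rates[s].items():
            acc += rate
            if r <= acc:
                s = nxt
                break
        path.append((t, s))
    return path

def estimate_reach_prob(rates, start, goal, horizon, runs=100_000, seed=1):
    """Estimate Pr[ F^{<= horizon} goal ] by sampling timed paths."""
    rng = random.Random(seed)
    hits = sum(
        any(s == goal for _, s in simulate_ctmc(rates, start, horizon, rng))
        for _ in range(runs)
    )
    return hits / runs

# Hypothetical 3-state chain: 0 -> 1 -> 2 with rates 2.0 and 1.0.
rates = {0: {1: 2.0}, 1: {2: 1.0}, 2: {}}
print(estimate_reach_prob(rates, start=0, goal=2, horizon=1.0))
```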
|
48 |
Spatial Scheduling Algorithms for Production Planning Problems / Srinivasan, Sudharshana, 30 April 2014 (has links)
Spatial resource allocation is an important consideration in shipbuilding and large-scale manufacturing industries. Spatial scheduling problems (SSP) involve the non-overlapping arrangement of jobs within a limited physical workspace such that some scheduling objective is optimized. Since jobs are heavy and occupy large areas, they cannot be moved once set up, requiring that the same contiguous units of space be assigned throughout the duration of their processing time. This adds a level of complexity beyond the general scheduling problem, which makes solving large instances computationally intractable. The aim of this study is to gain a deeper understanding of the relationship between the spatial and temporal components of the problem, and to exploit the acquired insights to devise solution procedures that perform well in practice.

Much of the literature on SSP focuses on minimizing the makespan of the schedule. We concentrate our efforts on the minimum sum of completion times objective and state several interesting results encountered while developing fast and reliable solution methods for this problem. Specifically, we develop mixed-integer programming models that identify groups of jobs (batches) that can be scheduled simultaneously. We identify scenarios where batching is useful and ones where batching yields a solution with a worse objective value. We present computational analysis on large instances and prove an approximation factor on the performance of this method under certain conditions. We also provide greedy and list-scheduling heuristics for the problem and compare their objective values with the optimal solution. Based on the instances we tested for both the batching and list-scheduling approaches, our assessment is that scheduling jobs with similar processing times within the same space yields good solutions. If the processing times are sufficiently different, then grouping jobs together, although it seemingly makes more effective use of the space, does not necessarily result in a lower sum of completion times.
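As a rough illustration of the list-scheduling idea, the sketch below relaxes the workspace to a single capacity (ignoring the contiguity requirement that makes SSP hard) and schedules jobs in shortest-processing-time order; it is a simplification for exposition, not the thesis's algorithm.

```python
import heapq

def list_schedule(jobs, capacity):
    """List scheduling for spatial jobs with the workspace relaxed to a
    single capacity (contiguity of space is ignored for brevity).
    Jobs are (space, proc_time) pairs; SPT order tends to help the
    sum-of-completion-times objective."""
    order = sorted(jobs, key=lambda j: j[1])  # shortest processing time first
    running = []                              # min-heap of (finish_time, space)
    t, free, completions = 0.0, capacity, []
    for space, ptime in order:
        assert space <= capacity, "job exceeds workspace"
        # Reclaim space lazily: wait for running jobs to finish until it fits.
        while space > free:
            finish, sp = heapq.heappop(running)
            t, free = max(t, finish), free + sp
        free -= space
        heapq.heappush(running, (t + ptime, space))
        completions.append(t + ptime)
    return sum(completions), completions

# Hypothetical instance: (space, processing time), workspace capacity 10.
jobs = [(4, 3.0), (5, 1.0), (6, 2.0), (3, 2.5)]
print(list_schedule(jobs, capacity=10))
```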
|
49 |
Problema da árvore geradora de comunicação ótima: variantes, complexidade e aproximação / Optimum communication spanning tree problem: variants, complexity and approximation / Ravelo, Santiago Valdes, 18 February 2016 (has links)
The optimum communication spanning tree problem receives a graph with non-negative lengths on the edges and a non-negative requirement between each pair of vertices; the objective is to find a spanning tree of the graph that minimizes the communication cost, i.e., the sum, over all pairs of vertices, of the distance between them in the tree multiplied by the requirement between them. This problem, as well as several of its particular cases, is NP-hard. In this work we study some variants of the problem, introduce new particular cases that are also NP-hard, and propose polynomial-time approximation schemes for some of them.
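For reference, the communication cost minimized by the problem can be written as (illustrative notation):

```latex
% Communication cost of a spanning tree $T$ of $G=(V,E)$, with pairwise
% requirements $r(u,v)$ and tree distances $d_T(u,v)$ induced by the
% edge lengths:
\[
c(T) \;=\; \sum_{\{u,v\} \subseteq V} r(u,v)\, d_T(u,v),
\]
% and the optimum communication spanning tree minimizes $c(T)$ over all
% spanning trees of $G$.
```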
|
50 |
Essays on Approximation Algorithms for Robust Linear Optimization Problems / Lu, Brian Yin, January 2016 (has links)
Solving optimization problems under uncertainty has been an important topic since the appearance of mathematical optimization in the mid-19th century. George Dantzig's 1955 paper, "Linear Programming under Uncertainty", is considered one of the ten most influential papers in Management Science [26]. The methodology introduced in Dantzig's paper is called stochastic programming, since it assumes an underlying probability distribution over the uncertain input parameters. However, stochastic programming suffers from the "curse of dimensionality", and knowing the exact distribution of the input parameters may not be realistic. Robust optimization, on the other hand, models the uncertainty with a deterministic uncertainty set, and the goal is to optimize against the worst-case scenario from that set. In recent years, many studies in robust optimization have been conducted, and we refer the reader to Ben-Tal and Nemirovski [4-6], El Ghaoui and Lebret [19], Bertsimas and Sim [15, 16], Goldfarb and Iyengar [23], and Bertsimas et al. [8] for a review of robust optimization. Computing an optimal adjustable (or dynamic) solution to a robust optimization problem is generally hard. This motivates us to study the hardness of approximation of the problem and to provide efficient approximation algorithms. In this dissertation, we consider adjustable robust linear optimization problems with packing and covering formulations and their approximation algorithms. In particular, we study the performance of static solutions and affine solutions as approximations for the adjustable robust problem.
Chapters 2 and 3 consider the two-stage adjustable robust linear packing problem with uncertain second-stage constraint coefficients. For general convex, compact and down-monotone uncertainty sets, the problem is often intractable, since it requires computing a solution for all possible realizations of the uncertain parameters [22]. In particular, for a fairly general class of uncertainty sets, we show that the two-stage adjustable robust problem is NP-hard to approximate within a factor better than Ω(log n), where n is the number of columns of the uncertain coefficient matrix. On the other hand, a static solution is a single (here-and-now) solution that is feasible for all possible realizations of the uncertain parameters and can be computed efficiently. We study the performance of static solutions as approximations for the adjustable robust problem and relate their optimality to a transformation of the uncertainty set. With this transformation, we show that for a fairly general class of uncertainty sets, the static solution is optimal for the adjustable robust problem. This is surprising, since the static solution is widely perceived as highly conservative. Moreover, when the static solution is not optimal, we provide an instance-based tight approximation bound that is related to a measure of non-convexity of the transformation of the uncertainty set. We also show that for two-stage problems, our bound is at least as good as (and in many cases significantly better than) the bound given by the symmetry of the uncertainty set [11, 12]. Moreover, our results generalize to the case where the objective coefficients and the right-hand side are also uncertain.
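For reference, a generic form of the two-stage adjustable robust packing problem discussed here (a standard formulation; the notation is illustrative rather than quoted from the dissertation):

```latex
% Two-stage adjustable robust packing with uncertain second-stage
% matrix $B$ drawn from uncertainty set $\mathcal{U}$:
\begin{align*}
z_{\mathrm{AR}} \;=\; \max_{x \ge 0}\; \Big( c^{\top}x
  \;+\; \min_{B \in \mathcal{U}} \;\max_{y(B) \ge 0}\; d^{\top} y(B) \Big)
  \quad \text{s.t.}\quad A x + B\, y(B) \;\le\; h .
\end{align*}
% A static solution instead fixes a single $(x, y)$ feasible for every
% $B \in \mathcal{U}$, which is why it can be computed efficiently.
```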
In Chapter 3, we focus on two-stage problems with a family of column-wise and constraint-wise uncertainty sets, in which any constraint describing the set involves entries of only a single column or a single row. This is a fairly general class of uncertainty sets for modeling constraint coefficient uncertainty, and it is the family of uncertainty sets behind the hardness result above. On the positive side, we show that a static solution is an O(\log n \cdot \min(\log \Gamma, \log(m+n)))-approximation for the two-stage adjustable robust problem, where m and n denote the numbers of rows and columns of the constraint matrix and \Gamma is the maximum possible ratio of upper bounds of the uncertain constraint coefficients. Therefore, for constant \Gamma, the performance bound for static solutions surprisingly matches the hardness of approximation for the adjustable problem. Furthermore, in general, the static solution provides nearly the best efficient approximation for the two-stage adjustable robust problem.
In Chapter 4, we extend our results from Chapter 2 to a multi-stage adjustable robust linear optimization problem. In particular, we consider the case where the choice of the uncertain constraint coefficient matrix in each stage is independent of the others. In real-world applications, decision problems often have multiple stages, and an iterative application of a two-stage solution may be suboptimal for the multi-stage problem. We consider the static solution for the adjustable robust problem and the transformation of the uncertainty sets introduced in Chapter 2, and we show that the static solution is optimal for the adjustable robust problem when the transformation of the uncertainty set for each stage is convex.
Chapter 5 considers a two-stage adjustable robust linear covering problem with an uncertain right-hand-side parameter. As mentioned earlier, such problems are often intractable due to the astronomically many extreme points of the uncertainty set. We introduce a new approximation framework in which we consider a "simple" set that is "close" to the original uncertainty set and over which the adjustable robust problem can be solved efficiently. We show that the approximation bound is related to a geometric factor that represents the Banach-Mazur distance between the two sets. Using this framework, we obtain approximation bounds that are better than the bounds given by an affine policy in [7] for a large class of interesting uncertainty sets. For instance, we provide an m^{1/4}-approximation for the two-stage adjustable robust problem with a hypersphere uncertainty set, whereas the affine policy has an approximation ratio of O(\sqrt{m}). Moreover, our bound for a general p-norm ball is m^{\frac{p-1}{p^2}}, as opposed to the m^{1/p} given by an affine policy.
|