1. A Primal-Dual Approximation Algorithm for the Concurrent Flow Problem. Nahabedian, Aaron Joseph, 29 April 2010.
The multicommodity flow problem involves shipping multiple commodities simultaneously through a network so that the total flow over each edge does not exceed that edge's capacity. The concurrent flow problem additionally associates a demand with each commodity, and asks for the maximum fraction z such that, for every commodity, a z fraction of its demand can be feasibly shipped through the network. This problem has applications in message routing, transportation, and scheduling problems. It can be formulated as a linear programming problem, and the best known solutions take advantage of decomposition techniques for linear programming. Often, quickly finding an approximate solution is more important than finding an optimal solution. A solution is epsilon-optimal if it lies within a factor of (1+epsilon) of the optimal solution. We present a combinatorial approximation algorithm for the concurrent flow problem. This algorithm consists of finding an initial flow and gradually rerouting it from more to less congested paths until an epsilon-optimal flow is achieved. Its theoretical running time is much better than that of linear-programming-based algorithms.
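For context, the path-based LP behind the concurrent flow problem can be sketched as follows (this is the standard formulation; the notation is assumed here rather than taken from the thesis):

\[
\begin{aligned}
\max \;\; & z \\
\text{s.t. } \; & \sum_{P \in \mathcal{P}_k} f_P \;\ge\; z\, d_k && \text{for each commodity } k, \\
& \sum_{k}\; \sum_{P \in \mathcal{P}_k :\, e \in P} f_P \;\le\; u(e) && \text{for each edge } e, \\
& f_P \ge 0, \quad z \ge 0,
\end{aligned}
\]

where \(\mathcal{P}_k\) is the set of paths available to commodity \(k\), \(d_k\) its demand, and \(u(e)\) the capacity of edge \(e\). Rerouting flow from congested to less congested paths, as described above, corresponds to shifting weight between the \(f_P\) variables while keeping the edge constraints satisfied.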
2. Analysis of large scale linear programming problems with embedded network structures: detection and solution algorithms. Gülpinar, Nalân, January 1998.
Linear programming (LP) models that contain a (substantial) network structure frequently arise in many real-life applications. In this thesis, we investigate two main questions: (i) how an embedded network structure can be detected, and (ii) how that structure can be exploited to create improved sparse simplex solution algorithms. In order to extract an embedded pure network structure from a general LP problem we develop two new heuristics. The first heuristic is an alternative multi-stage generalised upper bounds (GUB) based approach which finds as many GUB subsets as possible. In order to identify a GUB subset, two different approaches are introduced: the first is based on the notion of the Markowitz merit count and the second exploits an independent set in the corresponding graph. The second heuristic is based on the generalised signed graph of the coefficient matrix. This heuristic determines whether the given LP problem is entirely a pure network; this is in contrast to all previously known heuristics. Using generalised signed graphs, we prove that the problem of detecting the maximum-size embedded network structure within an LP problem is NP-hard. The two detection algorithms perform very well computationally and make positive contributions to the known body of results for embedded network detection.

For the computational solution, a decomposition-based approach is presented which solves a network problem with side constraints. In this approach, the original coefficient matrix is partitioned into network and non-network parts. For the partitioned problem, we investigate two alternative decomposition techniques, namely Lagrangean relaxation and Benders decomposition. Active variables identified by these procedures are then used to create an advanced basis for the original problem. The computational results of applying these techniques to a selection of Netlib models are encouraging. The development and computational investigation of this solution algorithm constitute a further contribution of the research reported in this thesis.
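The partitioned problem described above can be pictured roughly as a network flow problem with side constraints (the symbols below are assumptions for illustration, not the thesis's notation):

\[
\min \; c^\top x \quad \text{s.t.} \quad N x = b_N, \qquad S x \le b_S, \qquad l \le x \le u,
\]

where the rows of \(N\) form the embedded pure network (each column has at most one \(+1\) and one \(-1\) entry among these rows) and the rows of \(S\) are the remaining side constraints. Dualising \(S x \le b_S\), as Lagrangean relaxation does, leaves a pure network subproblem that specialised network simplex machinery can solve quickly; the variables active in such subproblems are natural candidates for an advanced basis.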
3. Efficient Algorithms for Market Equilibria. Devanur, Nikhil Rangarajan, 18 May 2007.
The mathematical modelling of a market, and the proof of existence of equilibria, have been of central importance in mathematical economics. Since the existence proof is non-constructive in general, a natural question is whether equilibria can be computed efficiently. Moreover, the emergence of the Internet and e-commerce has given rise to new markets that have completely changed the traditional notions. Add to this the pervasiveness of computing resources, and an algorithmic theory of market equilibrium becomes highly desirable. The goal of this thesis is to provide polynomial time algorithms for various market models.

Two basic market models are the Fisher model, in which there is a demarcation between buyers and sellers (buyers are interested in the goods that the sellers possess, and sellers are only interested in the money that the buyers have), and the Arrow-Debreu model, in which everyone has an endowment of goods and wants to exchange them for other goods. We give the first polynomial time algorithm for exactly computing an equilibrium in the Fisher model with linear utilities. We also show that the basic ideas in this algorithm can be extended to give a strongly polynomial time approximation scheme in the Arrow-Debreu model.

We also give several existential, algorithmic and structural results for new market models:
- the *spending constraint* utilities (defined by Vazirani), which capture the "diminishing returns" property while allowing the algorithm for the linear case to be generalized;
- the capacity allocation market (defined by Kelly), motivated by the study of fairness and stability of the Transmission Control Protocol (TCP) for the Internet, and more generally the class of Eisenberg-Gale (EG) markets (defined by Jain and Vazirani).

In addition, we consider the adwords market on search engines and show that some of these models are a natural fit in this setting. Finally, this line of research has given insights into fundamental techniques in algorithm design. The primal-dual schema has been a great success in combinatorial optimization and approximation algorithms. Our algorithms use this paradigm in the enhanced setting of Karush-Kuhn-Tucker (KKT) conditions and convex programs.
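For background (this is standard material rather than a result of the thesis, and the notation is assumed here), equilibria of the linear-utility Fisher model are captured by the Eisenberg-Gale convex program, whose KKT conditions are the natural setting for the primal-dual approach mentioned above:

\[
\max \; \sum_i m_i \log u_i
\quad \text{s.t.} \quad
u_i = \sum_j u_{ij} x_{ij} \;\; \forall i, \qquad
\sum_i x_{ij} \le 1 \;\; \forall j, \qquad
x_{ij} \ge 0,
\]

where each good's supply is normalised to one unit, \(m_i\) is the money of buyer \(i\), \(u_{ij}\) the utility buyer \(i\) derives from one unit of good \(j\), and \(x_{ij}\) the allocation; the Lagrange multipliers of the supply constraints play the role of market-clearing prices.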
4. Design and Implementation of P2P Network Flow Management System. Lee, Shao-Tang, 12 February 2008.
The use of peer-to-peer applications is growing dramatically, particularly for sharing large files and software. Currently, peer-to-peer file sharing systems play a dominant role in content distribution over the Internet. Understanding the impact of peer-to-peer traffic on networks is therefore important, and some reaction is necessary in order to keep the Internet functioning efficiently.

In this thesis, we design a P2P network flow management system combining a Classifier and Linux TC (Traffic Control). We analyze the network traffic using the Classifier and classify it into P2P and NP2P (non-P2P) traffic; Linux TC is the tool used to implement the management policy. Finally, the management system was deployed in a real network environment and improved network performance.
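A minimal sketch of the kind of policy such a system might push to Linux TC (purely illustrative: it assumes an HTB setup on a hypothetical interface, with P2P packets already carrying firewall mark 1 set by the classifier, and all device names and rates are made up):

```python
import subprocess

def tc(cmd: str) -> None:
    """Run a single tc command, raising if it fails."""
    subprocess.run(["tc"] + cmd.split(), check=True)

def shape_p2p(dev="eth0", link="100mbit", np2p_rate="90mbit",
              p2p_rate="10mbit", p2p_ceil="20mbit"):
    # Root HTB qdisc; unclassified (NP2P) traffic defaults to class 1:20.
    tc(f"qdisc add dev {dev} root handle 1: htb default 20")
    tc(f"class add dev {dev} parent 1: classid 1:1 htb rate {link}")
    # NP2P class: guaranteed most of the link, may borrow up to the full rate.
    tc(f"class add dev {dev} parent 1:1 classid 1:20 htb rate {np2p_rate} ceil {link}")
    # P2P class: small guaranteed rate, hard ceiling.
    tc(f"class add dev {dev} parent 1:1 classid 1:10 htb rate {p2p_rate} ceil {p2p_ceil}")
    # Steer packets carrying firewall mark 1 (set by the classifier) into the P2P class.
    tc(f"filter add dev {dev} parent 1: protocol ip prio 1 handle 1 fw flowid 1:10")

if __name__ == "__main__":
    shape_p2p()
```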
5. On Vegh's Strongly Polynomial Algorithm for Generalized Flows. Lo, Venus Hiu Ling, 16 May 2014.
This thesis contains an exposition of the new strongly polynomial algorithm for the generalized flow problem by Laszlo Vegh (2013). Whether such an algorithm exists had been a long-standing open question until Vegh resolved it in 2013. Generalized flows have many applications in economic problems, such as transportation of goods and foreign currency exchange. The main presentation follows Vegh's paper, but this exposition contains some simplifications and differences in the algorithm and its analysis. The main difference is that we consider the running time of the strongly polynomial algorithm only up to one arc contraction, before starting afresh on a smaller network. This increases the running time of the algorithm slightly, but the analysis becomes easier.
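For reference (standard background, with notation assumed here), a generalized flow network attaches a gain factor \(\gamma_e > 0\) to each arc \(e\), so that sending \(f_e\) units into the arc delivers \(\gamma_e f_e\) units at its head (for instance, an exchange rate between currencies). One common statement of the maximization problem, with source \(s\) and sink \(t\), is

\[
\max \; \sum_{e \in \delta^{-}(t)} \gamma_e f_e \;-\; \sum_{e \in \delta^{+}(t)} f_e
\quad \text{s.t.} \quad
\sum_{e \in \delta^{-}(v)} \gamma_e f_e \;=\; \sum_{e \in \delta^{+}(v)} f_e \;\;\; \forall v \notin \{s, t\},
\qquad 0 \le f_e \le u_e,
\]

that is, maximize the net flow reaching the sink while conserving gain-adjusted flow at every intermediate node and respecting the arc capacities \(u_e\).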
6. Resident Scheduling Problem. Ramahi, Muhannad Hasan, 12 April 2012.
This thesis is concerned with the Resident Scheduling Problem (RSP), in which a good schedule is sought that meets both departmental requirements and residents' preferences. Three scenarios that represent most situations and account for various departmental requirements and needs are described. Although similar scheduling problems are considered in the literature, no existing analysis adequately deals with this specific problem. The problem is modeled as a mixed-integer program (MIP), and heuristic solution procedures are developed for the different identified scheduling scenarios. These procedures exploit the network structure of the problem, an important feature that enhances solvability. For the sake of comparison, the problem is also solved exactly via the CPLEX-MIP package. The contribution of this work is important since many hospitals still prepare their schedules manually, expending considerable effort and time while achieving less scheduling flexibility.
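A schematic of the kind of MIP involved (purely illustrative; the actual index sets, objective and constraints in the thesis differ and are not reproduced here):

\[
\begin{aligned}
\max \;\; & \sum_{r,d,s} p_{rds}\, x_{rds} \\
\text{s.t. } \; & \sum_{r} x_{rds} \ge c_{ds} && \text{required coverage of shift } s \text{ on day } d, \\
& \sum_{d,s} x_{rds} \le W_r && \text{workload limit for resident } r, \\
& x_{rds} \in \{0,1\},
\end{aligned}
\]

where \(x_{rds} = 1\) if resident \(r\) is assigned shift \(s\) on day \(d\), \(p_{rds}\) encodes preferences, \(c_{ds}\) the departmental staffing requirement, and \(W_r\) a per-resident limit. Assignment constraints of this type are also the kind of structure that gives rise to the network property mentioned above.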
7. Efficient Algorithms for the Cell Based Single Destination System Optimal Dynamic Traffic Assignment Problem. Zheng, Hong, January 2009.
The cell transmission model (CTM) based single destination system optimal dynamic traffic assignment (SD-SO-DTA) model has been widely applied to situations such as mass evacuations on a transportation network. Although formulated as a linear programming (LP) model, the embedded multi-period cell network representation yields an extremely large model for real-size networks. As a result, most of these models are not solvable using existing LP solvers. Solutions obtained by LP also involve holding vehicles at certain locations, violating CTM flow dynamics. This doctoral research aims at developing innovative algorithms that overcome both the computational efficiency and the solution realism issues. We first prove that the LP formulation of the SD-SO-DTA problem is equivalent to the earliest arrival flow (EAF) problem, and then develop efficient algorithms to solve the EAF. Two variants of the algorithm are developed under different model assumptions and network operating conditions.

For the case of time-varying network parameters, we develop a network flow algorithm on a time-expanded network. The main challenge in this approach is to address the issue of the backward wave speed being lower than the forward wave speed. This situation leads to non-typical constraints involving coefficients with values less than 1. In this dissertation we develop a new network algorithm that solves this problem to optimality, even with coefficients of value less than 1. Additionally, the developed approach solves for optimal flows that exhibit non-vehicle-holding properties, which is a major breakthrough compared to all existing solution techniques for SD-SO-DTA.

For the case of time-invariant network parameters, we reduce the SD-SO-DTA to a standard EAF problem on a dynamic network, which is constructed on the original roadway network without dividing it into cells. We prove that the EAF under free-flow status is one of the optimal solutions of SD-SO-DTA if cell properties follow a trapezoidal/triangular fundamental diagram. We use chain flows obtained on a static network to induce dynamic flows, an approach applicable to large-scale networks.

Another contribution of this research is to provide a simple and practical algorithm solving the EAF with multiple sources, which has been an active research area for many years. Most existing studies involve submodular function optimization as subroutines and are thus not practical for real-life implementation. This study's contribution in this regard is the development of a practical algorithm that avoids submodular function optimization. The main body of the given method comprises |S⁺| iterations of earliest arrival s-t flow computations, where |S⁺| is the number of sources. Numerical results show that our multi-source EAF algorithm solves the SD-SO-DTA problem with time-invariant parameters to optimality.
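For background (these are the standard cell transmission model relations, written with assumed notation rather than the dissertation's), the flow \(y_{ij}(t)\) advanced from cell \(i\) to its downstream cell \(j\) in time step \(t\) is constrained by what \(i\) can send and what \(j\) can receive:

\[
y_{ij}(t) \;\le\; n_i(t), \qquad
y_{ij}(t) \;\le\; Q_{ij}(t), \qquad
y_{ij}(t) \;\le\; \frac{w}{v}\bigl(N_j(t) - n_j(t)\bigr),
\]

where \(n\) is cell occupancy, \(Q_{ij}\) the flow capacity between the two cells, \(N\) the holding capacity, \(v\) the free-flow speed and \(w\) the backward wave speed. The ratio \(w/v < 1\) is precisely the coefficient of value less than 1 mentioned above, and an LP solution that leaves these inequalities slack where the CTM would take the binding minimum corresponds to vehicle holding.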
8. Evolutionary optimisation of network flow plans for emergency movement in the built environment. French, Thomas Reginald, January 2012.
Planning for emergency evacuation, and, more generally, for emergency movement involving both evacuation (egress) of occupants and ingress of first responders, presents important and challenging problems. A number of the current issues that arise during emergency incidents are due to the uncertainty and transiency of environmental conditions. In general, movement plans are formulated at building design-time, and those involved, such as building occupants and emergency responders, are left to adapt routing plans to actual events as they unfold. In the context of next-generation emergency response systems, it has been proposed to dynamically plan and route individuals during an emergency event, replanning to take account of changes in the environment.

In this work, an emergency movement problem, the Maximal Safest Escape (MSE) problem, is formulated in terms that model the uncertain and transient environmental conditions as a flow problem in time-dependent networks with time-varying and stochastic edge travel times and capacities (STV Networks). The objective of the MSE problem is to find flow patterns with the a priori maximal probability of successfully conveying all supply from the source to the sink in some given STV Network. The MSE and its deterministic counterpart are proved to be NP-hard. Furthermore, due to the inherent complexity of evaluating the exact quality of candidate solutions, a simulation approximation method is presented based on well-known Monte-Carlo sampling methods.

Given the complexity of the problem, and using the approximation method for evaluating solutions, it is proposed to tackle the MSE problem using a metaheuristic approach based on an existing framework that integrates Evolutionary Algorithms (EA) with a state-of-the-art statistical ranking and selection method, the Optimal Computing Budget Allocation (OCBA). Several improvements are proposed for the framework to reduce the computational demand of the ranking method. Empirically, the approach is compared with a simple fitness-averaging approach, and conditions under which the integrated framework is more efficient are investigated.

The performance of the EA is compared against upper and lower bounds on optimal solutions. An upper bound is established through the “wait-and-see” bound, and a lower bound by a naïve random search algorithm (RSA). An experimental design is presented that allows for a fair comparison between the EA and the RSA. While there is no guarantee that the EA will find optimal solutions, this work demonstrates that the EA can still find useful solutions; useful solutions are those that are at least better than some baseline, here the lower bound, in terms of solution quality and computational effort. Experimentally, it is demonstrated that the EA performs significantly better than the baseline. Also, the EA finds solutions relatively close to the upper bound; however, it is difficult to establish how optimistic the upper bound is. The main approach is also compared against an existing approach developed for solving a related problem, wrapped in a heuristic procedure in order to apply it to the MSE. Empirical results show that the heuristic approach requires significantly less computation time but finds solutions of significantly lower quality.

Overall, this work introduces and empirically verifies the efficacy of a metaheuristic based on a framework integrating EAs with a state-of-the-art statistical ranking and selection technique, the OCBA, for a novel flow problem in STV Networks.
It is suggested that the lessons learned during the course of this work, along with the specific techniques developed, may be relevant for addressing other flow problems of similar complexity.
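The Monte-Carlo evaluation step described above can be sketched as follows (a minimal illustration only; the sampling model, the feasibility check and every name here are assumptions, not the thesis's code):

```python
import random

def estimate_success_probability(plan, sample_network, is_feasible,
                                 n_samples=1000, seed=0):
    """Estimate the probability that a candidate flow plan conveys all supply
    from source to sink, by sampling realisations of the stochastic,
    time-varying edge travel times and capacities and checking the plan
    against each realisation."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_samples):
        realisation = sample_network(rng)   # one draw of edge travel times / capacities
        if is_feasible(plan, realisation):  # does the plan still deliver everything?
            successes += 1
    return successes / n_samples
```

In the evolutionary algorithm, an estimate of this kind serves as the (noisy) fitness of each candidate plan, and the role of OCBA is to decide how many additional samples each candidate receives, so that the simulation budget is concentrated on the candidates that are hardest to rank.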
9. Linear Programming Algorithms Using Least-Squares Method. Kong, Seunghyun, 04 April 2007.
This thesis is a computational study of recently developed algorithms which aim to overcome degeneracy in the simplex method. We study the following algorithms: the non-negative least-squares algorithm, the least-squares primal-dual algorithm, the least-squares network flow algorithm, and the combined-objective least-squares algorithm. All four algorithms use least-squares measures to solve their subproblems, so they do not exhibit degeneracy. However, they have never been efficiently implemented, and thus their practical performance has not been established. In this research we implement these algorithms efficiently and improve their performance relative to the preliminary results.
For the non-negative least-squares algorithm, we develop a basis update technique and data structures that fit our purpose. In addition, we develop a measure to help find a good ordering of columns and rows so that we obtain a sparse and concise representation of the QR factors. The least-squares primal-dual algorithm uses the non-negative least-squares problem as its subproblem, which minimizes infeasibility while satisfying dual feasibility and complementary slackness. The least-squares network flow algorithm is the least-squares primal-dual algorithm applied to min-cost network flow instances; it can efficiently solve much bigger instances than the least-squares primal-dual algorithm. The combined-objective least-squares algorithm is the primal version of the least-squares primal-dual algorithm: each subproblem minimizes the true objective and the infeasibility simultaneously, so that optimality and primal feasibility are obtained together, using a big-M weight on the infeasibility.
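For reference, the non-negative least-squares subproblem driving these methods has the generic form (notation assumed here)

\[
\min_{x \ge 0} \;\|Ax - b\|_2^2,
\]

and progress is measured through the residual norm rather than through pivots between basic solutions; this is the sense in which the statement above that the algorithms do not exhibit degeneracy should be read.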
We developed techniques to improve the convergence rate of each algorithm: relaxation of the complementary slackness condition, a special pricing strategy, and a dynamic big-M value. Our computational results show that the least-squares primal-dual algorithm and the combined-objective least-squares algorithm perform better than the CPLEX Primal solver but are slower than the CPLEX Dual solver. The least-squares network flow algorithm performs as fast as the CPLEX Network solver.
10. Detecting Compute Cloud Co-residency with Network Flow Watermarking Techniques. Bates, Adam, 2012.
This thesis presents co-resident watermarking, a traffic analysis attack for cloud environments that allows a malicious co-resident virtual machine to inject a watermark signature into the network flow of a target instance. This watermark can be used to exfiltrate co-residency data, compromising isolation assurances. While previous work depends on the virtual hypervisor's resource management, our approach is difficult to defend against without costly underutilization of the physical machine. We evaluate co-resident watermarking under many configurations, from a local lab environment to production cloud environments. We demonstrate the ability to initiate a covert channel of 4 bits per second, and we can confirm co-residency with a target VM instance in less than 10 seconds. We also show that passive load measurement of the target and behavior profiling are possible. Our investigation demonstrates the need for the careful design of hardware to be used in the cloud.
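The flavour of the injection side can be sketched as follows (a toy illustration only; the actual attack, receiver, and parameters in the thesis differ, and the host, port and sizes here are made up): the sending VM encodes watermark bits by alternating bursts of traffic and silence in fixed-length intervals, so that a co-resident observer can look for the induced load pattern.

```python
import socket
import time

def inject_watermark(host, port, bits, interval=1.0, burst_bytes=64 * 1024):
    """Encode each watermark bit as one interval of traffic (1 = burst, 0 = idle).

    At roughly one interval per bit this matches the low-rate covert channel
    described above; the target host/port are placeholders.
    """
    payload = b"\x00" * 1024
    with socket.create_connection((host, port)) as sock:
        for bit in bits:
            start = time.monotonic()
            if bit:
                sent = 0
                while sent < burst_bytes and time.monotonic() - start < interval:
                    sock.sendall(payload)   # keep the flow busy during a '1' interval
                    sent += len(payload)
            # wait out the remainder of the interval (all of it for a '0')
            remaining = interval - (time.monotonic() - start)
            if remaining > 0:
                time.sleep(remaining)

# Example: inject_watermark("198.51.100.7", 80, [1, 0, 1, 1, 0, 1, 0, 0])
```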
This thesis includes unpublished co-authored material.