91

Hierarchical Maximal Covering Location Problem With Referral In The Presence Of Partial Coverage

Toreyen, Ozgun 01 September 2007 (has links) (PDF)
We consider a hierarchical maximal covering location problem to locate p health centers and q hospitals so that maximum demand is covered, where health centers and hospitals form a successively inclusive hierarchy. Demands are of three types: demand requiring low-level service only, demand requiring high-level service only, and demand requiring both levels of service at the same time. All requirements of a demand point must either be covered by a hospital providing both levels of service or be referred to a hospital via a health center, since a demand point is not covered unless all levels of its requirements are satisfied. Thus, a health center cannot be opened unless it can refer its covered demand to a hospital. Referral is defined as coverage of health centers by hospitals. We also incorporate partial coverage into this complex hierarchical structure: a demand point is fully covered up to the minimum critical distance, not covered beyond the maximum critical distance, and covered with a quality that decreases as the distance to the facility increases between the minimum and maximum critical distances. We develop an MIP formulation to solve the hierarchical maximal covering location problem with referral in the presence of partial coverage. We solve small-size problems optimally using GAMS. For large-size problems we develop a Genetic Algorithm that gives near-optimal results quickly. We test our Genetic Algorithm on randomly generated problems with up to 1000 nodes.
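As a hedged illustration of the partial-coverage rule described in the abstract above (this sketch is not taken from the thesis; the function and parameter names `coverage_quality`, `d_min`, and `d_max` are illustrative), coverage is full up to the minimum critical distance, zero beyond the maximum critical distance, and decays linearly in between:

```python
def coverage_quality(d, d_min, d_max):
    """Partial coverage of a demand point at distance d from a facility:
    fully covered up to d_min, not covered beyond d_max, and covered
    with linearly decreasing quality between the two critical distances."""
    if d <= d_min:
        return 1.0          # fully covered
    if d >= d_max:
        return 0.0          # not covered
    # linear decay between the minimum and maximum critical distances
    return (d_max - d) / (d_max - d_min)

# e.g. coverage_quality(7, d_min=5, d_max=10) -> 0.6
```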
92

Approximate Models And Solution Approaches For The Vehicle Routing Problem With Multiple Use Of Vehicles And Time Windows

De Boer, Jeroen Wouter 01 June 2008 (has links) (PDF)
In this study we discuss the Vehicle Routing Problem with multiple use of vehicles (VRPM). In this variant of the routing problem, vehicles may replenish at the depot at any time. We present a detailed review of the existing literature and propose two mathematical models to solve the VRPM. For these two models and their several variants we provide computational results based on test problems taken from the literature. We also discuss a case study in which we simultaneously deal with side constraints such as time windows, working-hour limits, backhaul customers, and a heterogeneous vehicle fleet.
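As a hedged sketch of the multi-trip feasibility logic implied by the abstract above (the `Trip` structure, the field names, and the assumptions that waiting for a time window to open is allowed and that replenishment time is negligible are illustrative assumptions, not the thesis's models), one way to check that reusing a vehicle respects a working-hour limit and customer time windows is:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trip:
    # each visit: (travel_time_from_previous_stop, tw_open, tw_close)
    visits: List[Tuple[float, float, float]]
    return_time: float  # travel time from the last customer back to the depot

def vehicle_schedule_feasible(trips: List[Trip], working_hours: float) -> bool:
    """Illustrative check for multiple use of one vehicle: trips run back to
    back, every customer is served within its time window (early arrivals
    wait), and the vehicle is back at the depot within the working-hour limit."""
    clock = 0.0
    for trip in trips:
        for travel, tw_open, tw_close in trip.visits:
            clock += travel
            if clock > tw_close:         # arrived after the window closed
                return False
            clock = max(clock, tw_open)  # wait for the window to open if early
        clock += trip.return_time        # back to the depot (replenishment time ignored here)
    return clock <= working_hours
```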
93

An Interactive Evolutionary Algorithm For The Multiobjective Relocation Problem With Partial Coverage

Orbay, Berk 01 April 2011 (has links) (PDF)
In this study, a bi-objective capacitated facility location problem is presented that incorporates the partial coverage concept and the relocation of facility nodes. In partial coverage, a demand node within a predefined distance of a facility node is assumed to be fully covered; beyond that distance, the service level decays linearly. The problem accounts for the existence of already functioning facility nodes: these existing facilities may be closed and new facilities may be opened at potential sites, but existing facility nodes are strongly favored over new ones. The objectives are the maximization of the weighted total coverage and the minimization of the number of facility nodes. A novel interactive multi-objective evolutionary algorithm, I-TREA, is proposed to solve this problem. I-TREA originates from NSGA-II and is designed as an interactive method that benefits from high-quality infeasible solutions. The performance of I-TREA is benchmarked against a modified version of NSGA-II on randomly generated problems of various sizes and utility functions.
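As a small, hedged illustration of the bi-objective comparison that NSGA-II-style algorithms such as I-TREA rely on (a generic Pareto-dominance test, not code from the thesis), one candidate dominates another when it achieves at least as much weighted coverage with no more facilities and is strictly better in at least one objective:

```python
def dominates(a, b):
    """Pareto dominance for (weighted_coverage, facility_count) pairs:
    coverage is maximized, the number of facilities is minimized."""
    cov_a, fac_a = a
    cov_b, fac_b = b
    no_worse = cov_a >= cov_b and fac_a <= fac_b
    strictly_better = cov_a > cov_b or fac_a < fac_b
    return no_worse and strictly_better

# e.g. (coverage=95.0, facilities=6) dominates (coverage=90.0, facilities=7)
```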
94

Relaxation Heuristics for the Set Covering Problem

Umetani, Shunji, Yagiura, Mutsunori, 柳浦, 睦憲 12 1900 (has links) (PDF)
No description available.
95

Mechanism Design For Covering Problems

Minooei, Hadi January 2014 (has links)
Algorithmic mechanism design deals with efficiently computable algorithmic constructions in the presence of strategic players who hold the inputs to the problem and may misreport their input if doing so benefits them. It finds applications in a variety of internet settings such as resource allocation, facility location, and e-commerce settings such as sponsored search auctions. There is an extensive amount of work in algorithmic mechanism design on packing problems such as single-item auctions, multi-unit auctions, and combinatorial auctions. But, surprisingly, covering problems, also called procurement auctions, have been almost completely unexplored, especially in the multidimensional setting.

In this thesis, we systematically investigate multidimensional covering mechanism-design problems, wherein there are m items that need to be covered and n players who provide covering objects, with each player i having a private cost for the covering objects he provides. A feasible solution to the covering problem is a collection of covering objects (obtained from the various players) that together cover all items. Two widely considered objectives in mechanism design are: (i) cost minimization (CM), which aims to minimize the total cost incurred by the players and the mechanism designer; and (ii) payment minimization (PayM), which aims to minimize the payment to players. Covering mechanism-design problems turn out to behave quite differently from packing mechanism-design problems. In particular, various techniques utilized successfully for packing problems do not perform well for covering mechanism-design problems, and this necessitates new approaches and solution concepts. In this thesis we devise various techniques for handling covering mechanism-design problems, which yield a variety of results for both the CM and PayM objectives.

In our investigation of the CM objective, we focus on two representative covering problems: uncapacitated facility location (UFL) and vertex cover. For multi-dimensional UFL, we give a black-box method to transform any Lagrangian-multiplier-preserving ρ-approximation algorithm for UFL into a truthful-in-expectation ρ-approximation mechanism. This yields the first result for multi-dimensional UFL, namely a truthful-in-expectation 2-approximation mechanism. For multi-dimensional vertex cover (Multi-VCP), we develop a decomposition method that reduces the mechanism-design problem to the simpler task of constructing threshold mechanisms, which are a restricted class of truthful mechanisms, for simpler (in terms of graph structure or problem dimension) instances of Multi-VCP. By suitably designing the decomposition and the threshold mechanisms it uses as building blocks, we obtain truthful mechanisms with approximation ratios (n is the number of nodes): (1) O(r² log n) for r-dimensional VCP; and (2) O(r log n) for r-dimensional VCP on any proper minor-closed family of graphs (which improves to O(log n) if no two neighbors of a node belong to the same player). These are the first truthful mechanisms for Multi-VCP with non-trivial approximation guarantees.

For the PayM objective, we work in the oft-used Bayesian setting, where players' types are drawn from an underlying distribution and may be correlated, and the goal is to minimize the expected total payment made by the mechanism. We consider the problem of designing incentive-compatible, ex-post individually rational (IR) mechanisms for covering problems in the above model. The standard notion of incentive compatibility (IC) in such settings is Bayesian incentive compatibility (BIC), but this notion is over-reliant on having precise knowledge of the underlying distribution, which makes it a rather non-robust notion. We formulate a notion of IC that we call robust Bayesian IC (robust BIC) that is substantially more robust than BIC, and develop black-box reductions from robust BIC mechanism design to algorithm design. This black-box reduction applies to single-dimensional settings even when we only have an LP-relative approximation algorithm for the algorithmic problem. We obtain near-optimal mechanisms for various covering settings including single- and multi-item procurement auctions, various single-dimensional covering problems, and multidimensional facility location problems.

Finally, we study the notion of frugality, which considers the PayM objective but in a worst-case setting, where one does not have prior information about the players' types. We show that some of our mechanisms developed for the CM objective are also good with respect to certain oft-used frugality benchmarks proposed in the literature. We also introduce an alternate benchmark for frugality, which more directly reflects the goal that the mechanism's payment be close to the best possible payment, and obtain some preliminary results with respect to this benchmark.
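As a hedged, generic illustration of the threshold mechanisms mentioned above (a textbook single-parameter construction; the function name and the assumption that each threshold is computed independently of that player's own bid are illustrative, not the thesis's actual mechanisms), a player offering a covering object is selected exactly when its reported cost is at most its threshold, and a selected player is paid that threshold:

```python
def threshold_procurement(reported_costs, thresholds):
    """Generic single-parameter threshold mechanism (illustrative sketch):
    player i is selected iff its reported cost is at most a bid-independent
    threshold t_i, and selected players are paid t_i. Monotone selection plus
    threshold payments is the standard recipe for truthfulness in
    single-parameter settings."""
    winners, payments = [], {}
    for i, (cost, t_i) in enumerate(zip(reported_costs, thresholds)):
        if cost <= t_i:
            winners.append(i)
            payments[i] = t_i  # pay the critical value, not the bid
    return winners, payments

# e.g. costs [3.0, 8.0] with thresholds [5.0, 5.0] selects player 0 and pays 5.0
```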
96

Branched covers of contact manifolds

Casey, Meredith Perrie 13 January 2014 (has links)
We will discuss what is known about the construction of contact structures via branched covers, emphasizing the search for universal transverse knots. Recall that a topological knot is called universal if every 3-manifold can be obtained as a cover of the 3-sphere branched over that knot. Analogously, one can ask whether there is a transverse knot in the standard contact structure on S³ such that every contact 3-manifold can be obtained as a branched cover over this transverse knot. It is not known whether such a transverse knot exists.
97

Algorithms for budgeted auctions and multi-agent covering problems

Goel, Gagan 07 July 2009 (has links)
In this thesis, we carry out an algorithmic study of optimization problems in budgeted auctions and of some well-known covering problems in the multi-agent setting. We give new results on the design of approximation algorithms, online algorithms, and hardness of approximation for these problems. Along the way we give new insights into many other related problems.

Budgeted Auction. We study the following allocation problem, which arises in budgeted auctions (such as the advertisement auctions run by Google, Microsoft, Yahoo!, etc.): given a set of m indivisible items and n agents, where agent i is willing to pay b_ij for item j and has an overall budget of B_i (i.e., the maximum total amount he is willing to pay), the goal is to allocate items to the agents so as to maximize the total revenue obtained. We study the computational complexity of this allocation problem and give improved results for its approximation and hardness of approximation. We also study the allocation problem in an online setting; the online version is motivated by the sponsored search auctions run by search engines. Lastly, we propose a new bidding language for budgeted auctions: decreasing bid curves with budget constraints. We make a case for why this language is better both for the sellers and for the buyers.

Multi-agent Covering Problems. To motivate this class of problems, consider the network design problem of constructing a spanning tree of a graph, assuming there are many agents willing to construct different parts of the tree. The cost of each agent for constructing a particular set of edges could be a complex function; for instance, some agents might provide discounts depending on how many edges they construct. The algorithmic question we are interested in is: can we find a spanning tree of minimum cost in polynomial time in these complex settings? Note that such an algorithm will have to find a spanning tree and partition its edges among the agents. These are the types of questions we try to answer for various combinatorial problems. We look at the case when the agents' cost functions are submodular. These functions form a rich class and capture the natural properties of economies of scale and the law of diminishing returns. We study the following fundamental problems in this setting: Vertex Cover, Spanning Tree, Perfect Matching, and Reverse Auctions. We consider both the single-agent and the multi-agent case and study the approximability of each of these problems.
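As a small, hedged illustration of the budgeted-auction objective stated above (an evaluator for a given allocation under assumed dictionary-based inputs, not an algorithm from the thesis), the revenue collected from each agent is the sum of its bids on the items it receives, capped by its budget:

```python
def budgeted_auction_revenue(bids, budgets, allocation):
    """Revenue of a given allocation in the budgeted auction above:
    agent i pays the sum of its bids b_ij over the items j assigned to it,
    but never more than its budget B_i. `allocation[i]` is the set of items
    given to agent i (each item assigned to at most one agent)."""
    revenue = 0.0
    for i, items in allocation.items():
        spend = sum(bids[i][j] for j in items)
        revenue += min(spend, budgets[i])  # the budget caps the payment
    return revenue

# e.g. bids = {0: {0: 4.0, 1: 3.0}}, budgets = {0: 5.0}, allocation = {0: {0, 1}}
# -> revenue 5.0 (the budget binds even though the bids sum to 7.0)
```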
98

Self-Reduction for Combinatorial Optimisation

Sheppard, Nicholas Paul January 2001 (has links)
This thesis presents and develops a theory of self-reduction. This process is used to map instances of combinatorial optimisation problems onto smaller, more easily solvable instances in such a way that a solution of the former can be readily reconstructed, without loss of information or quality, from a solution of the latter. Self-reduction rules are surveyed for the Graph Colouring Problem, the Maximum Clique Problem, the Steiner Problem in Graphs, the Bin Packing Problem and the Set Covering Problem. This thesis introduces the problem of determining the maximum sequence of self-reductions on a given structure, and shows how the theory of confluence can be adapted from term rewriting to solve this problem by identifying rule sets for which all maximal reduction sequences are equivalent. Such confluence results are given for a number of reduction rules on problems on discrete systems. In contrast, NP-hardness results are also presented for some reduction rules. A probabilistic analysis of self-reductions on graphs is performed, showing that the expected number of self-reductions on a graph tends to zero as the order of the graph tends to infinity. An empirical study is performed comparing the performance of self-reduction, graph decomposition and direct methods of solving the Graph Colouring and Set Covering Problems. The results show that self-reduction is a potentially valuable, but sometimes erratic, method of finding exact solutions to combinatorial problems.
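As a hedged example of the kind of self-reduction rule surveyed above for the Set Covering Problem (a standard dominance reduction chosen for illustration; it is not claimed to be one of the thesis's specific rules), a set whose elements are all covered by another set of no greater cost can be removed without losing an optimal solution:

```python
def remove_dominated_sets(sets, costs):
    """Classic dominance reduction for set covering (illustrative): drop any
    set S_i when some other set S_j covers everything S_i covers at no greater
    cost; an optimal cover of the reduced instance remains optimal for the
    original instance (replace S_i by S_j if needed)."""
    keep = []
    for i, s_i in enumerate(sets):
        dominated = any(
            j != i and s_i <= sets[j] and costs[j] <= costs[i]
            # tie-break so two identical sets do not eliminate each other
            and (s_i < sets[j] or costs[j] < costs[i] or j < i)
            for j in range(len(sets))
        )
        if not dominated:
            keep.append(i)
    return [sets[i] for i in keep], [costs[i] for i in keep]

# e.g. sets = [{1, 2}, {1, 2, 3}], costs = [5, 4] -> only {1, 2, 3} is kept
```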
99

Evaluation of loggerhead sea turtle carapace properties and prototype biomimetic carapace fabrication

Hodges, Justin E. January 2008 (has links)
Thesis (M. S.)--Civil and Environmental Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Scott, David; Committee Member: Kurtis, Kimberly; Committee Member: Work, Paul. Part of the SMARTech Electronic Thesis and Dissertation Collection.
100

Aftermarket short covering in IPOs and long-term stock liquidity

Tolentino, Rodrigo Andrade 28 July 2009 (has links)
This study investigates the effect of the aftermarket short covering (ASC) carried out by the underwriter during the price-stabilization period on long-term stock liquidity. Because the ASC increases liquidity during the stabilization period and liquidity is a persistent characteristic of stocks, the ASC can increase long-term liquidity. Indeed, we show that the ASC has a positive effect on liquidity over the 6 months following the stabilization period. This positive relation holds even after controlling for many variables that previous authors found important for explaining liquidity and after instrumenting the ASC.
