21

Many-Server Queues with Time-Varying Arrivals, Customer Abandonment, and non-Exponential Distributions

Liu, Yunan January 2011
This thesis develops deterministic heavy-traffic fluid approximations for many-server stochastic queueing models. The queueing models, with many homogeneous servers working independently in parallel, are intended to model large-scale service systems such as call centers and health care systems. Such models have also been employed to study communication, computing and manufacturing systems. The heavy-traffic approximations yield relatively simple formulas for quantities describing system performance, such as the expected number of customers waiting in the queue. The new performance approximations are valuable because, in the generality considered, these complex systems are not amenable to exact mathematical analysis. Since the approximate performance measures can be computed quite rapidly, they usefully complement more cumbersome computer simulation. Thus these heavy-traffic approximations can be used to improve capacity planning and operational control. More specifically, the heavy-traffic approximations here are for large-scale service systems, having many servers and a high arrival rate. The main focus is on systems that have time-varying arrival rates and staffing functions. The system is considered under the assumption that there are alternating periods of overloading and underloading, which commonly occurs when service providers are unable to adjust the staffing frequently enough to economically meet demand at all times. The models also allow the realistic features of customer abandonment and non-exponential probability distributions for the service times and the times customers are willing to wait before abandoning. These features make the overall stochastic model non-Markovian and thus very difficult to analyze directly. This thesis provides effective algorithms to compute approximate performance descriptions for these complex systems. These algorithms are based on ordinary differential equations and fixed-point equations associated with contraction operators. Simulation experiments are conducted to verify that the approximations are effective. This thesis consists of four pieces of work, each presented in one chapter. The first chapter (Chapter 2) develops the basic fluid approximation for a non-Markovian many-server queue with time-varying arrival rate and staffing. The second chapter (Chapter 3) extends the fluid approximation to systems with complex network structure, in which customers, after completing service at each queue, are routed to other queues according to Markovian routing. The extension to open networks of queues has important applications. For one example, in hospitals, patients usually move among different units such as emergency rooms, operating rooms, and intensive care units. For another example, in manufacturing systems, individual products visit different work stations one or more times. The open network fluid model has multiple queues, each of which has a time-varying arrival rate and staffing function. The third chapter (Chapter 4) studies the large-time asymptotic dynamics of a single fluid queue. When the model parameters are constant, convergence to the steady state as time evolves is established. When the arrival rates are periodic functions, such as in service systems with daily or seasonal cycles, the existence of a periodic steady state and the convergence to that periodic steady state as time evolves are established. Conditions are provided under which this convergence is exponentially fast.
The fourth chapter (Chapter 5) uses a fluid approximation to gain insight into nearly periodic behavior seen in overloaded stationary many-server queues with customer abandonment and nearly deterministic service times. Deterministic service times are of applied interest because computer-generated service times, such as automated messages, may well be deterministic, and computer-generated service is becoming more prevalent. With deterministic service times, if all the servers remain busy for a long interval of time, then the times at which customers enter service assume a periodic pattern throughout that interval. In overloaded large-scale systems, these intervals tend to persist for a long time, producing nearly periodic behavior. To gain insight, a heavy-traffic limit theorem is established showing that the fluid model arises as the many-server heavy-traffic limit of a sequence of appropriately scaled queueing models, all having these deterministic service times. Simulation experiments confirm that the transient behavior of the limiting fluid model provides a useful description of the transient performance of the queueing system. However, unlike the asymptotic loss-of-memory results in the previous chapter for service times with densities, the stationary fluid model with deterministic service times does not approach a steady state that is independent of the initial conditions. Since the queueing model with deterministic service times approaches a proper steady state as time evolves, this model provides an example where the limit interchange (limiting steady state as time evolves and heavy traffic as scale increases) is not valid.
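As a hedged illustration (not taken from the thesis, which treats general, non-exponential distributions), the sketch below integrates the fluid ODE for the Markovian special case, an Mt/M/st+M fluid queue with time-varying arrival rate lam(t), staffing s(t), service rate mu, and abandonment rate theta; all parameter values are hypothetical.

```python
import numpy as np

def fluid_queue(lam, s, mu, theta, T=24.0, dt=0.001, x0=0.0):
    """Euler integration of the fluid ODE for an Mt/M/st+M queue.

    x(t) = total fluid in system; b = min(x, s) is in service,
    q = (x - s)^+ is waiting.  x'(t) = lam(t) - mu*b(t) - theta*q(t).
    """
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        t = k * dt
        b = min(x[k], s(t))          # fluid in service
        q = max(x[k] - s(t), 0.0)    # fluid waiting in queue
        x[k + 1] = x[k] + dt * (lam(t) - mu * b - theta * q)
    return x

# Sinusoidal arrival rate with alternating overload/underload, fixed staffing.
lam = lambda t: 1.0 + 0.6 * np.sin(t)   # hypothetical arrival-rate function
s = lambda t: 1.0                       # hypothetical staffing function
x = fluid_queue(lam, s, mu=1.0, theta=0.5)
print("fluid content at t = 24:", x[-1])
```

Alternating overload (x above s) and underload (x below s) periods show up directly in the trajectory, mirroring the regime the thesis analyzes.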
22

Essays on Inventory Management and Object Allocation

Lee, Thiam Hui January 2012
This dissertation consists of three essays. In the first, we establish a framework for proving equivalences between mechanisms that allocate indivisible objects to agents. In the second, we study a newsvendor model where the inventory manager has access to two experts that provide advice, and examine how and when an optimal algorithm can be efficiently computed. In the third, we study the classical single-resource capacity allocation problem and investigate the relationship between data availability and performance guarantees. We first study mechanisms that solve the problem of allocating indivisible objects to agents. We consider the class of mechanisms that utilize the Top Trading Cycles (TTC) algorithm (these may differ based on how they prioritize agents), and show a general approach to proving equivalences between mechanisms from this class. This approach is used to give alternative and simpler proofs for two recent equivalence results for mechanisms with linear priority structures. We also use the same approach to show that these equivalence results can be generalized to mechanisms where the agent priority structure is described by a tree. Second, we study the newsvendor model where the manager has recourse to advice, or decision recommendations, from two experts, and where the objective is to minimize worst-case regret from not following the advice of the better of the two experts. We show the model can be reduced to the classic machine-learning problem of predicting binary sequences, but with an asymmetric cost function, allowing us to obtain an optimal algorithm by modifying a well-known existing one. However, the algorithm we modify, and consequently the optimal algorithm we describe, is not known to be efficiently computable, because it requires evaluations of a function v which is the objective value of recursively defined optimization problems. We analyze v and show that when the two cost parameters of the newsvendor model are small multiples of a common factor, its evaluation is computationally efficient. We also provide a novel and direct asymptotic analysis of v that differs from previous approaches. Our asymptotic analysis gives us insight into the transient structure of v as its parameters scale, enabling us to formulate a heuristic for evaluating v generally. This, in turn, defines a heuristic for the optimal algorithm whose decisions we find in a numerical study to be close to optimal. In our third essay, we study the classical single-resource capacity allocation problem. In particular, we analyze the relationship between data availability (in the form of demand samples) and performance guarantees for solutions derived from that data. This is done by describing a class of solutions called epsilon-backwards accurate policies and determining a suboptimality gap for this class of solutions. The suboptimality gap we find is in terms of epsilon and is also distribution-free. We then relate solutions generated by a Monte Carlo algorithm to epsilon-backwards accurate policies, showing a lower bound on the quantity of data necessary to ensure that the solution generated by the algorithm is epsilon-backwards accurate with high probability. Combining the two results then allows us to give a lower bound on the data needed to generate an alpha-approximation with a given confidence probability 1 - delta. We find that this lower bound is polynomial in the number of fares, M, and 1/alpha.
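The following is a minimal sketch of the data-driven flavor of the third essay, using the classical two-fare special case (Littlewood's rule) with an empirical quantile computed from demand samples; the fares and demand distribution are hypothetical, and the essay's results concern the general M-fare problem and explicit sample-size guarantees, not this rule.

```python
import numpy as np

def protection_level(high_fare, low_fare, demand_samples):
    """Empirical Littlewood rule: protect y seats for the high fare, where
    y is the empirical quantile solving P(D_high > y) = low_fare / high_fare."""
    q = 1.0 - low_fare / high_fare
    return float(np.quantile(demand_samples, q))

rng = np.random.default_rng(0)
samples = rng.gamma(shape=4.0, scale=10.0, size=500)   # hypothetical demand data
y = protection_level(high_fare=300.0, low_fare=120.0, demand_samples=samples)
print(f"protect {y:.1f} seats for the high fare")
```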
23

Three Essays on Dynamic Pricing and Resource Allocation

Nur, Cavdaroglu January 2012
This thesis consists of three essays that focus on different aspects of pricing and resource allocation. We use techniques from supply chain and revenue management, scenario-based robust optimization, and game theory to study the behavior of firms in different competitive and non-competitive settings. We develop dynamic programming models that account for the pricing and resource allocation decisions of firms in such settings. In Chapter 2, we focus on the resource allocation problem of a service firm, particularly a health-care facility. We formulate a general model that is applicable to various resource allocation problems of a hospital. To this end, we consider a system with multiple customer classes that display different reactions to delays in service. By adopting a dynamic-programming approach, we show that the optimal policy is not simple but exhibits desirable monotonicity properties. Furthermore, we propose a simple threshold heuristic policy that performs well in our experiments. In Chapter 3, we study a dynamic pricing problem for a monopolist seller that operates in a setting where buyers have market power, and where each potential sale takes the form of a bilateral negotiation. We review the dynamic programming formulation of the negotiation problem, and propose a simple and tractable deterministic "fluid" analogue for this problem. The main emphasis of the chapter is on extending the formulation to the dynamic setting where both the buyer and the seller have limited prior information about their counterparty's valuation and negotiation skill. In Chapter 4, we consider the revenue maximization problem of a seller who operates in a market with two types of customers, namely "investors" and "regular buyers". In a two-period setting, we model and solve the pricing game between the seller and the investors in the latter period, and based on the solution of this game, we analyze the revenue maximization problem of the seller in the former period. Moreover, we study the effects on total system profits when the seller and the investors cooperate through a contracting mechanism rather than competing with each other, and explore the contracting opportunities that lead to higher profits for both agents.
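Below is a stylized sketch, not the thesis model, of a finite-horizon admission dynamic program with two customer classes and limited capacity; it illustrates how threshold-type policies of the kind proposed in Chapter 2 arise. All rewards and arrival probabilities are hypothetical.

```python
import numpy as np

# Stylized admission DP: each period one request arrives from class 1
# (reward r1) w.p. p1 or class 2 (reward r2) w.p. p2; C capacity units.
r1, r2, p1, p2 = 10.0, 4.0, 0.3, 0.5
C, T = 20, 60

V = np.zeros((T + 1, C + 1))            # V[t, c]: value with c units, t periods left
accept2 = np.zeros((T, C + 1), bool)    # optimal class-2 decision
for t in range(1, T + 1):
    for c in range(1, C + 1):
        keep = V[t - 1, c]
        take1 = r1 + V[t - 1, c - 1]
        take2 = r2 + V[t - 1, c - 1]
        accept2[t - 1, c] = take2 > keep
        V[t, c] = (p1 * max(take1, keep) + p2 * max(take2, keep)
                   + (1 - p1 - p2) * keep)

print("accept class 2 with 60 periods left, by remaining capacity:", accept2[T - 1])
```

Because the value function is concave in remaining capacity, the marginal value of a unit decreases with c, so the low-reward class is accepted exactly when remaining capacity exceeds a threshold.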
24

Data-driven System Design in Service Operations

Lu, Yina January 2013
The service industry has become an increasingly important component of the world's economy. Simultaneously, the data collected from service systems have grown rapidly in both size and complexity due to the rapid spread of information technology, providing new opportunities and challenges for operations management researchers. This dissertation aims to explore methodologies to extract information from data and provide powerful insights to guide the design of service delivery systems. To do this, we analyze three applications in the retail, healthcare, and IT service industries. In the first application, we conduct an empirical study to analyze how waiting in queue in the context of a retail store affects customers' purchasing behavior. The methodology combines a novel dataset collected via video recognition technology with traditional point-of-sale data. We find that waiting in queue has a nonlinear impact on purchase incidence and that customers appear to focus mostly on the length of the queue, without adjusting enough for the speed at which the line moves. We also find that customers' sensitivity to waiting is heterogeneous and negatively correlated with price sensitivity. These findings have important implications for queueing system design and pricing management under congestion. The second application focuses on disaster planning in healthcare. According to a U.S. government mandate, the New York City metropolitan area needs to be capable of caring for 400 burn-injured patients in a catastrophic event, which far exceeds the current burn bed capacity. We develop a new system for prioritizing patients for transfer to burn beds as they become available and demonstrate its superiority over several other triage methods. Based on data from previous burn catastrophes, we study the feasibility of admitting the required number of patients to burn beds within the critical three-to-five-day time frame. We find that this is unlikely and that the ability to do so is highly dependent on the type of event and the demographics of the patient population. This work has implications for how disaster plans in other metropolitan areas should be developed. In the third application, we study workers' productivity in a global IT service delivery system, where service requests from possibly globally distributed customers are managed centrally and served by agents. Based on a novel dataset that tracks the detailed time intervals an agent spends on all business-related activities, we develop a methodology, motivated by econometric tools from survival analysis, to study the variation of productivity over time. This approach can be used to identify different mechanisms by which workload affects productivity. The findings provide important insights for the design of workload allocation policies that account for agents' workload management behavior.
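As a loose, hypothetical sketch of the kind of discrete-choice estimation behind the first study: purchase incidence regressed on queue length and its square (allowing a nonlinear effect), fit on synthetic data by gradient ascent. The actual study uses a richer econometric model and real video plus point-of-sale data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
queue_len = rng.poisson(6.0, n).astype(float)            # observed queue length
true_logit = 1.0 - 0.05 * queue_len - 0.02 * queue_len**2
purchase = rng.random(n) < 1 / (1 + np.exp(-true_logit))  # synthetic outcomes

# Logistic regression with intercept, standardized queue length, and its
# square, fit by plain gradient ascent on the log-likelihood.
z = (queue_len - queue_len.mean()) / queue_len.std()
X = np.column_stack([np.ones(n), z, z**2])
beta = np.zeros(3)
for _ in range(20000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (purchase - p) / n   # ascent step on the score
print("fitted coefficients (intercept, queue, queue^2):", beta)
```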
25

Perfect Simulation, Sample-path Large Deviations, and Multiscale Modeling for Some Fundamental Queueing Systems

Chen, Xinyun January 2014
As a primary branch of Operations Research, Queueing Theory models and analyzes engineering systems with random fluctuations. With the development of the internet and computing techniques, today's engineering systems are much bigger in scale and more complicated in structure than those of 20 years ago, raising numerous new problems for researchers in the field of queueing theory. The aim of this thesis is to explore new methods and tools, from both algorithmic and analytical perspectives, that are useful for solving such problems. In Chapters 1 and 2, we introduce some techniques of asymptotic analysis that are relatively new to queueing applications in order to give a more accurate probabilistic characterization of queueing models with large scale and complicated structure. In particular, Chapter 1 gives the first functional large deviations result for infinite-server systems with general inter-arrival and service times. The functional approach we use enables a nice description of the whole system over the entire time horizon of interest, which is important in real problems. In Chapter 2, we construct a queueing model for the so-called limit order book that is used in major financial markets worldwide. We use an asymptotic approach called multi-scale modeling to disentangle the complicated dependence among the elements of the trading system and to reduce the model dimensionality. The asymptotic regime we use is inspired by empirical observations, and the resulting limit process explains and reproduces stylized features of real market data. Chapter 2 also provides a nice example of novel applications of queueing models to systems, such as the electronic trading system, that are traditionally outside the scope of queueing theory. Chapters 3 and 4 focus on stochastic simulation methods for performance evaluation of queueing models where analytic approaches fail. In Chapter 3, we develop a perfect sampling algorithm to generate exact samples from the stationary distribution of stochastic fluid networks in polynomial time. Our approach can be used for time-varying networks with general inter-arrival and service times, whose stationary distributions have no analytic expression. In Chapter 4, we focus on stochastic systems with continuous random fluctuations, for instance, systems in which the workload arrives in a continuous flow, such as a Lévy process. We develop a general framework of simulation algorithms featuring a deterministic error bound and an almost square-root convergence rate. As an application, we apply this framework to estimate the stationary distributions of reflected Brownian motions, and the performance of our algorithm is better than that of prevalent existing numerical methods.
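A hedged sketch related to Chapter 4 (it is not the thesis's algorithm, which carries a deterministic error bound): estimating the stationary mean of a one-dimensional reflected Brownian motion by simulating a discretized path through the Skorokhod reflection map and checking against the known exponential stationary law.

```python
import numpy as np

# One-dimensional reflected Brownian motion with drift -mu and variance
# sigma^2, reflected at 0; its stationary law is exponential with mean
# sigma^2 / (2 mu).
mu, sigma, dt, n = 1.0, 1.0, 1e-3, 2_000_000
rng = np.random.default_rng(2)
w = np.cumsum(-mu * dt + rng.normal(0.0, sigma * np.sqrt(dt), n))   # free BM
x = w - np.minimum.accumulate(np.minimum(w, 0.0))                   # reflect at 0
print("simulated mean:", x[n // 2:].mean(), " exact:", sigma**2 / (2 * mu))
```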
26

On the Kidney Exchange Problem and Online Minimum Energy Scheduling

Herrera Humphries, Tulia January 2014
The allocation and management of scarce resources are of central importance in the design of policies to improve social well-being. This dissertation consists of three essays; the first two deal with the problem of allocating kidneys, and the third with power management in computing devices. Kidney exchange programs are an attractive alternative for patients who need a kidney transplant and who have a willing, but medically incompatible, donor. A registry that keeps track of such patient-donor pairs can find matches through exchanges amongst such pairs. This results in a quicker transplant for the patients involved and, equally importantly, keeps such patients off the long wait list of patients without an intended donor. As of March 2014, there were at least 99,000 candidates waiting for a kidney transplant in the U.S. However, in 2013 only 16,893 transplants were conducted. This imbalance between supply and demand, among other factors, has driven the development of multiple kidney exchange programs in the U.S. and the subsequent development of matching mechanisms to run the programs. In the first essay we consider a matching problem arising in kidney exchanges between hospitals. Focusing on the case of two hospitals, we construct a strategy-proof matching mechanism that is guaranteed to return a matching that is at least 3/4 the size of a maximum-cardinality matching. It is known that no better performance is possible if one focuses on mechanisms that return a maximal matching, and so our mechanism is best possible within this natural class of mechanisms. For path-cycle graphs we construct a mechanism that returns a matching that is at least 4/5 the size of a maximum-cardinality matching. This mechanism does not necessarily return a maximal matching. Finally, we construct a mechanism that is universally truthful on path-cycle graphs and whose performance is within 2/3 of optimal. Again, it is known that no better ratio is possible. In most of the existing literature, mechanisms are typically evaluated by their overall performance on a large exchange pool, based on which conclusions and recommendations are drawn. In our second essay, we consider a dynamic framework to evaluate extensively used kidney exchange mechanisms. We conduct a simulation-based study of an exchange pool evolving dynamically over nine years. Our results suggest that some of the features that are critical in a mechanism in the static setting have only a minor impact on its long-run performance when viewed in the dynamic setting. More importantly, features that are generally underestimated in the static setting, such as the pairs' arrival rates, turn out to be relevant when we look at a dynamically evolving exchange pool. In particular, we provide insights into the effect on waiting times and on the probability of receiving an offer of controllable features, such as the frequency at which matchings are run and the structures through which pairs can be matched (cycles or chains), as well as of inherent features, such as the pairs' ABO-PRA characteristics, the availability of altruistic donors, and whether or not compatible pairs join the exchange. We evaluate the odds of receiving an offer and the expected time until an offer for each ABO-PRA pair type in the model. Power management in computing devices aims to minimize the energy consumed to perform tasks while keeping performance at acceptable levels.
A widely used power management strategy is to transition devices and/or components to lower power-consumption states during inactivity periods. Transitions between power states consume energy; thus, depending on such costs, it may be advantageous to stay in the high-power state during some inactivity periods. In our third essay we consider the problem of minimizing the total energy consumed by a two-power-state device to process jobs that are sent over time by a constrained adversary. Jobs can be preempted, but deadlines need to be met. In this problem, an algorithm must decide when to schedule the jobs, as well as a sequence of power states and the discrete time thresholds at which these states will be reached. We provide an online algorithm to minimize the energy consumption when the cost of a transition to the low-power state is small enough. In this case, the problem of minimizing the energy consumption is equivalent to minimizing the total number of inactivity periods. We also provide an algorithm to minimize the energy consumption when it may be advantageous to stay in the high-power state during some inactivity periods. In both cases we provide upper bounds on the competitive ratio of our algorithms, and lower bounds on the competitive ratio of all online algorithms.
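For intuition, here is a sketch of the classic break-even (ski-rental) rule that underlies competitive analyses of two-state power management; it is not necessarily the essay's algorithm. During each inactivity period the device idles in the high-power state until the energy spent equals the transition cost, then powers down; this rule is 2-competitive per inactivity period. The power and cost values are hypothetical, and the low-power draw is taken as zero for simplicity.

```python
def idle_energy(idle_time, p_high, transition_cost):
    """Energy used in one inactivity period by the break-even online rule:
    idle at power p_high for tau = transition_cost / p_high, then sleep
    (waking pays transition_cost; low-power draw assumed zero)."""
    tau = transition_cost / p_high          # break-even threshold
    if idle_time <= tau:
        return p_high * idle_time           # never powered down
    return p_high * tau + transition_cost   # powered down, pay wake-up cost

# Offline optimum for the same period: min(p_high * idle_time, transition_cost).
for t in (0.5, 2.0, 10.0):
    online = idle_energy(t, p_high=1.0, transition_cost=3.0)
    opt = min(1.0 * t, 3.0)
    print(f"idle={t}: online={online:.1f}, offline opt={opt:.1f}, ratio={online/opt:.2f}")
```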
27

New Quantitative Approaches to Asset Selection and Portfolio Construction

Song, Irene January 2014
Since the publication of Markowitz's landmark paper "Portfolio Selection" in 1952, portfolio construction has evolved into a disciplined and personalized process. In this process, security selection and portfolio optimization constitute key steps for making investment decisions across a collection of assets. The use of quantitative algorithms and models in these steps has become a widely accepted investment practice by modern investors. This dissertation is devoted to exploring and developing such quantitative algorithms and models. In the first part of the dissertation, we present two efficiency-based approaches to security selection: (i) a quantitative stock selection strategy based on operational efficiency and (ii) a quantitative currency selection strategy based on macroeconomic efficiency. In developing the efficiency-based stock selection strategy, we exploit a potential positive link between a firm's operational efficiency and its stock performance. By means of data envelopment analysis (DEA), a non-parametric approach to productive efficiency analysis, we quantify a firm's operational efficiency as a single score representing a consolidated measure of financial ratios. The financial ratios integrated into an efficiency score are selected on the basis of their predictive power for the firm's future operating performance, using the LASSO (least absolute shrinkage and selection operator)-based variable selection method. The computed efficiency scores are directly used for identifying stocks worthy of investment. The basic idea behind the proposed stock selection strategy is that, as efficient firms are presumed to be more profitable than inefficient firms, higher returns are expected from their stocks. This idea is tested in a contextual and empirical setting provided by the U.S. Information Technology (IT) sector. Our empirical findings confirm that there is a strong positive relationship between a firm's operational efficiency and its stock performance, and further establish that a firm's operational efficiency has significant explanatory power in describing the cross-sectional variations of stock returns. We moreover offer an economic argument that posits operational efficiency as a systematic risk factor and the most likely source of the excess returns from investing in efficient firms. The efficiency-based currency selection strategy is developed in a similar way; i.e., currencies are selected based on a certain efficiency metric. An exchange rate has long been regarded as a reliable barometer of the state of the economy and a measure of the international competitiveness of countries. While strong and appreciating currencies correspond to productive and efficient economies, weak and depreciating currencies correspond to slowing and less efficient economies. This study hence develops a currency selection strategy that utilizes the macroeconomic efficiency of countries, measured based on a widely accepted relationship between exchange rates and macroeconomic variables. For quantifying the macroeconomic efficiency of countries, we first establish a multilateral framework using effective exchange rates and trade-weighted macroeconomic variables. This framework is used for transforming three representative bilateral structural exchange rate models: the flexible-price monetary model, the sticky-price monetary model, and the sticky-price asset model, into their multilateral counterparts.
We then translate these multilateral models into DEA models, which yield an efficiency score representing an aggregate measure of macroeconomic variables. Consistent with the stock selection strategy, the resulting efficiency scores are used for identifying currencies worthy of investment. We evaluate our currency selection strategy against appropriate market and strategic benchmarks using historical data. Our empirical results confirm that the currencies of efficient countries perform more strongly than those of inefficient countries, and further suggest that, compared to exchange rate models based on standard regression analysis, our DEA-based models improve the predictability of the future performance of currencies. In the first part of the dissertation, we also develop a data-driven variable selection method for DEA based on the group LASSO. This method extends the LASSO-based variable selection method used for specifying a DEA model for estimating a firm's operational efficiency. In our proposed method, we derive a special constrained version of the group LASSO with a loss function suited to variable selection in DEA models, and solve it with a new tailored algorithm based on the alternating direction method of multipliers (ADMM). We conduct a thorough evaluation of the proposed method against two variable selection methods widely used in the DEA literature, the efficiency contribution measure (ECM) method and the regression-based (RB) test, using Monte Carlo simulations. The simulation results show that our method provides more favorable performance than its benchmarks. In the second part of the dissertation, we propose a generalized risk budgeting (GRB) approach to portfolio construction. In a GRB portfolio, assets are grouped into possibly overlapping subsets, and each subset is allocated a risk budget that has been pre-specified by the investor. Minimum variance, risk parity, and risk budgeting portfolios are all special instances of a GRB portfolio. The GRB portfolio optimization problem is to find a GRB portfolio with an optimal risk-return profile, where risk is measured using any positively homogeneous risk measure. When the subsets form a partition, all assets have identical returns, and we restrict ourselves to long-only portfolios, the GRB problem can in fact be solved as a convex optimization problem. In general, however, the GRB problem is a constrained non-convex problem, for which we propose two solution approaches. The first approach uses a semidefinite programming (SDP) relaxation to obtain an (upper) bound on the optimal objective function value. In the second approach we develop a numerical algorithm that integrates augmented Lagrangian and Markov chain Monte Carlo (MCMC) methods in order to find a point in the vicinity of a very good local optimum. This point is then supplied to a standard non-linear optimization routine with the goal of finding this local optimum. It should be emphasized that the merit of this second approach is its generic nature: in particular, it provides a starting-point strategy for any non-linear optimization algorithm.
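A minimal sketch of the input-oriented CCR model at the core of DEA-based efficiency scoring, solved as a linear program; the firm data below are hypothetical, and the dissertation's DEA models and group-LASSO variable selection are considerably richer.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_score(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m inputs x n units), Y: (s outputs x n units).
    min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # decision variables: [theta, lam]
    A_ub = np.block([[-X[:, [j0]], X],         # X lam - theta x0 <= 0
                     [np.zeros((s, 1)), -Y]])  # -Y lam <= -y0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Hypothetical data: 2 inputs, 1 output, 4 firms.
X = np.array([[2.0, 3.0, 4.0, 5.0], [3.0, 2.0, 6.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(dea_ccr_score(X, Y, j), 3) for j in range(4)])
```

A score of 1 marks an efficient unit; a score below 1 measures the proportional input contraction that an efficient combination of peers could achieve.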
28

Graph Structure and Coloring

Plumettaz, Matthieu January 2014
We denote by G=(V,E) a graph with vertex set V and edge set E. A graph G is claw-free if no vertex of G has three pairwise nonadjacent neighbours. Claw-free graphs are a natural generalization of line graphs. This thesis answers several questions about claw-free graphs and line graphs. In 1988, Chvátal and Sbihi proved a decomposition theorem for claw-free perfect graphs. They showed that claw-free perfect graphs either have a clique-cutset or come from two basic classes of graphs called elementary and peculiar graphs. In 1999, Maffray and Reed successfully described how elementary graphs can be built using line graphs of bipartite graphs and local augmentation. However, gluing two claw-free perfect graphs on a clique does not necessarily produce a claw-free graph. The first result of this thesis is a complete structural description of claw-free perfect graphs. We also give a construction for all perfect circular interval graphs. This is joint work with Chudnovsky. Erdős and Lovász conjectured in 1968 that for every graph G and all integers s, t ≥ 2 such that s + t - 1 = χ(G) > ω(G), there exists a partition (S,T) of the vertex set of G such that ω(G|S) ≥ s and χ(G|T) ≥ t. This conjecture is known in the graph theory community as the Erdős-Lovász Tihany Conjecture. For general graphs, the only settled cases of the conjecture are when s and t are small. Recently, the conjecture was proved for a few special classes of graphs: graphs with stability number 2, line graphs and quasi-line graphs. The second part of this thesis considers the conjecture for claw-free graphs and presents some progress on it. This is joint work with Chudnovsky and Fradkin. Reed's ω, Δ, χ conjecture proposes that every graph satisfies χ ≤ ⌈(Δ+1+ω)/2⌉; it is known to hold for all claw-free graphs. The third part of this thesis considers a local strengthening of this conjecture. We prove the local strengthening for line graphs, then note that previous results immediately tell us that the local strengthening holds for all quasi-line graphs. Our proofs lead to polynomial-time algorithms for constructing colorings that achieve our bounds: the complexity is O(n²) for line graphs and O(n³m²) for quasi-line graphs. For line graphs, this is faster than the best known algorithm for constructing a coloring that achieves the bound of Reed's original conjecture. This is joint work with Chudnovsky, King and Seymour.
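A small numerical sketch, illustrative only, comparing a greedy upper bound on the chromatic number of a line graph (line graphs are claw-free) against Reed's bound ⌈(Δ+1+ω)/2⌉. Since greedy coloring only upper-bounds χ, it may exceed the bound the theorem guarantees for χ itself; the graph and its parameters here are arbitrary.

```python
import math
import networkx as nx

G = nx.gnp_random_graph(12, 0.4, seed=3)
L = nx.line_graph(G)                       # line graphs are claw-free

chi_upper = max(nx.coloring.greedy_color(L, "largest_first").values()) + 1
Delta = max(dict(L.degree).values())       # maximum degree of L
omega = max(len(c) for c in nx.find_cliques(L))   # clique number of L

reed = math.ceil((Delta + 1 + omega) / 2)
print(f"greedy bound on chi: {chi_upper}, Reed bound: {reed}")
```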
29

Data-driven Decisions in Service Systems

Kim, Song-Hee January 2014
This thesis makes contributions to help provide data-driven (or evidence-based) decision support to service systems, especially hospitals. Three selected topics are presented. First, we discuss how Little's Law, which relates average limits and expected values of stationary distributions, can be applied to service systems data that are collected over a finite time interval. To make inferences based on the indirect estimator of average waiting times, we propose methods for estimating confidence intervals and for adjusting estimates to reduce bias. We show our new methods are effective using simulations and data from a US bank call center. Second, we address important issues that need to be taken into account when testing whether real arrival data can be modeled by nonhomogeneous Poisson processes (NHPPs). We apply our method to data from a US bank call center and a hospital emergency department and demonstrate that their arrivals come from NHPPs. Lastly, we discuss an approach to standardizing the Intensive Care Unit admission process, which currently lacks well-defined criteria. Using data from nearly 200,000 hospitalizations, we discuss how to quantify the impact of Intensive Care Unit admission on individual patients' clinical outcomes. We then use this quantified impact and a stylized model to discuss optimal admission policies. We use simulation to compare the performance of our proposed optimal policies to the current admission policy, and show that the gain can be significant.
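A hedged sketch of the indirect Little's Law estimator that the first topic refines: over a finite window [0, T], estimate the mean wait as the time-average number in system divided by the estimated arrival rate. The bias adjustments and confidence intervals developed in the thesis are omitted, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
arrivals = np.cumsum(rng.exponential(1.0, n))   # hypothetical arrival times
waits = rng.exponential(0.8, n)                 # true sojourn times
departures = arrivals + waits
T = arrivals[-1]                                # finite observation window

# Time-average number in system over [0, T]: integrate each customer's
# presence indicator, truncating sojourns at the window edge.
area = np.minimum(departures, T) - arrivals
L_bar = area.sum() / T                          # average number in system
lam_hat = n / T                                 # arrival-rate estimate
W_indirect = L_bar / lam_hat                    # Little's Law: L = lam * W
print("indirect W:", W_indirect, " direct mean wait:", waits.mean())
```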
30

Network Resource Allocation Under Fairness Constraints

Chandramouli, Shyam Sundar January 2014
This work considers the basic problem of allocating resources among a group of agents in a network, when the agents are equipped with single-peaked preferences over their assignments. This generalizes the classical claims problem, which concerns the division of an estate's liquidation value when the total claim on it exceeds this value. The claims problem also models the problem of rationing a single commodity, the problem of dividing the cost of a public project among the people it serves, and the problem of apportioning taxes. A key consideration in this classical literature is equity: the good (or the "bad," in the case of apportioning taxes or costs) should be distributed as fairly as possible. The main contribution of this dissertation is a comprehensive treatment of a generalization of this classical rationing problem to a network setting. Bochet et al. recently introduced a generalization of the classical rationing problem to the network setting. For this problem they designed an allocation mechanism, the egalitarian mechanism, that is Pareto optimal, envy-free and strategyproof. In Chapter 2, it is shown that the egalitarian mechanism is in fact group strategyproof, implying that no coalition of agents can collectively misreport their information to obtain a (weakly) better allocation for themselves. Further, a complete characterization of the set of all group strategyproof mechanisms is obtained. The egalitarian mechanism satisfies many attractive properties but fails consistency, an important property in the literature on rationing problems. It is shown in Chapter 3 that no Pareto optimal mechanism can be both envy-free and consistent. Chapter 3 is devoted to the edge-fair mechanism, which is Pareto optimal, group strategyproof, and consistent. In a related model where the agents are located on the edges of the graph rather than the nodes, the edge-fair rule is shown to be envy-free, group strategyproof, and consistent. Chapter 4 extends the egalitarian mechanism to the problem of finding an optimal exchange in non-bipartite networks. The results vary depending on whether the commodity being exchanged is divisible or indivisible. For the latter case, it is shown that no efficient mechanism can be strategyproof, and that the egalitarian mechanism is Pareto optimal and envy-free. Chapter 5 generalizes recent work on finding stable and balanced allocations in graphs with unit capacities and unit weights to more general networks. The existence of a stable and balanced allocation is established by a transformation to an equivalent unit-capacity network.
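For the single-commodity special case, the following is a minimal sketch of the classical uniform rule that the egalitarian mechanism generalizes to networks: in the excess-demand case each agent receives min(peak, lambda), with lambda chosen by bisection so that the allocation exactly exhausts the supply. The network version in the thesis is substantially more involved.

```python
def uniform_rule(peaks, supply, tol=1e-9):
    """Uniform rationing rule, excess-demand case (sum of peaks >= supply):
    agent i gets min(peak_i, lam), lam solving sum_i min(peak_i, lam) = supply."""
    assert sum(peaks) >= supply
    lo, hi = 0.0, max(peaks)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(p, lam) for p in peaks) < supply:
            lo = lam
        else:
            hi = lam
    return [min(p, lam) for p in peaks]

print(uniform_rule([5.0, 2.0, 8.0], supply=9.0))   # -> [3.5, 2.0, 3.5]
```

No agent is pushed above her peak, and among feasible allocations the uniform rule equalizes shares as much as possible, which is the equity notion the network mechanisms extend.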
