1

Matching Spatially Diversified Suppliers with Random Demands

Liu, Zhe. January 2019
A fundamental challenge in operations management is to dynamically match spatially diversified supply sources with random demand units. This dissertation tackles this challenge in two major areas: in supply chain management, a company procures from multiple, geographically differentiated suppliers to service stochastic demands based on dynamically evolving inventory conditions; in revenue management of ride-hailing systems, a platform uses operational and pricing levers to match strategic drivers with random, location- and time-varying ride requests over geographically dispersed networks. The first part of this dissertation is devoted to finding the optimal procurement and inventory management strategies for a company facing two potential suppliers differentiated by their lead times, costs and capacities. We synthesize and generalize the existing literature by addressing a general model with the simultaneous presence of (a) orders subject to capacity limits, (b) fixed costs associated with inventory adjustments, and (c) possible salvage opportunities that enable bilateral adjustments of the inventory, for both finite and infinite horizon periodic review models. By identifying a novel, generalized convexity property, termed (C1K1, C2K2)-convexity, we are able to characterize the optimal single-source procurement strategy under the simultaneous treatment of all three complications above, which has remained an open challenge in the stochastic inventory theory literature. To our knowledge, we recover almost all existing structural results as special cases of this unified analysis. We then generalize our results to dual-source settings and derive optimal policies under specific lead time restrictions. Based on these exact optimality results, we develop various heuristics and bounds to address settings with fully general lead times.
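The interaction of capacity limits and fixed ordering costs can be illustrated with a toy finite-horizon dynamic program. The sketch below is a generic illustration under invented parameters (capacity, fixed cost, demand distribution are all assumptions for the example); it is not the dissertation's model, which also handles salvage opportunities and dual sourcing.

```python
# Toy finite-horizon inventory DP with (a) an order capacity limit and
# (b) a fixed ordering cost -- two of the three complications discussed above.
# All parameter values here are illustrative assumptions.

CAPACITY = 3        # max units per order
FIXED_COST = 4.0    # fixed cost K charged whenever an order is placed
UNIT_COST = 1.0     # variable procurement cost c per unit
HOLD = 1.0          # holding cost h per leftover unit
PENALTY = 5.0       # penalty p per unit of unmet demand
DEMANDS = [(0, 0.3), (1, 0.4), (2, 0.3)]  # (demand value, probability)
HORIZON = 4
MAX_INV = 6

def solve():
    # Backward induction: V[x] = optimal expected cost-to-go at inventory x.
    V = {x: 0.0 for x in range(MAX_INV + 1)}
    policy = []
    for _ in range(HORIZON):
        newV, act = {}, {}
        for x in range(MAX_INV + 1):
            best, best_q = float("inf"), 0
            for q in range(min(CAPACITY, MAX_INV - x) + 1):
                cost = (FIXED_COST if q > 0 else 0.0) + UNIT_COST * q
                for d, p in DEMANDS:
                    leftover = max(x + q - d, 0)
                    shortage = max(d - (x + q), 0)
                    cost += p * (HOLD * leftover + PENALTY * shortage + V[leftover])
                if cost < best:
                    best, best_q = cost, q
            newV[x], act[x] = best, best_q
        V = newV
        policy.append(act)
    return V, policy

V, policy = solve()
# policy[-1] is the first-period order quantity as a function of inventory;
# with a fixed cost present it is typically of a threshold type.
print(policy[-1])
```

With the fixed cost set to zero the policy collapses to a capacitated base-stock rule, which is one way to see why the fixed cost is what breaks classical convexity arguments.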
The second part of this dissertation focuses on a ride-hailing platform's optimal control in the face of two major challenges: (a) significant demand imbalances across the network, and (b) stochastic demand shocks at hotspot locations. To address the first challenge, which is evidenced by our analysis of New York City taxi trip data, we show how the platform's operational controls--including demand-side admission control and supply-side empty car repositioning--can significantly improve system performance. Counterintuitively, the platform can improve overall value through strategic rejection of demand in locations with ample supply capacity (a long driver queue). Responding to the second challenge, a demand shock of uncertain duration, we show how the platform can jointly deploy surge pricing and dynamic spatial matching to enhance profits in a way that is incentive compatible for the drivers. Our results provide distinctive insights into the interplay among the relevant timescales of different phenomena--including rider patience, demand shock duration and drivers' traffic delay to the hotspot--and their impact on optimal platform operations.
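The intuition behind strategic demand rejection can be sketched with a stylized accept/reject rule. Everything below (location names, values, fares) is a hypothetical illustration of the general logic, not the platform controls analyzed in the dissertation.

```python
# Stylized admission control: accepting a ride moves a driver from origin to
# destination. With V[loc] denoting the expected future earnings of a driver
# positioned at loc (hypothetical values), accepting a fare f from i to j is
# worthwhile only if f + V[j] >= V[i]; otherwise rejecting preserves value --
# the intuition for rejecting demand in supply-rich, high-value locations.

def accept_ride(fare, origin, dest, V):
    """Accept iff the immediate fare plus destination value beats staying put."""
    return fare + V[dest] >= V[origin]

V = {"downtown": 30.0, "suburb": 10.0}             # assumed location values
print(accept_ride(8.0, "downtown", "suburb", V))   # 8 + 10 < 30 -> prints False
print(accept_ride(25.0, "downtown", "suburb", V))  # 25 + 10 >= 30 -> prints True
```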
2

Algorithms for Matching Problems Under Data Accessibility Constraints

Hanguir, Oussama. January 2022
Traditionally, optimization problems in operations research have been studied in a complete-information setting: the input/data is collected and made fully accessible to the user before an algorithm is run to generate the optimal output. However, the growing magnitude of treated data and the need to make immediate decisions are increasingly shifting the focus to optimizing under incomplete-information settings. The input can be partially inaccessible to the user because it is generated continuously, contains some uncertainty, is too large to be stored on a single machine, or has a hidden structure that is costly to unveil. Many problems that provide a context for studying algorithms when the input is not entirely accessible come from the field of matching theory, where the objective is to pair clients and servers or, more generally, to group clients into disjoint sets. Examples include ride-sharing and food delivery platforms, internet advertising, combinatorial auctions, and online gaming. In this thesis, we study three novel problems from the theory of matchings, corresponding to situations where the input is hidden, spread across multiple processors, or revealed in two stages with some uncertainty. In Chapter 1, we present the necessary definitions and terminology for the concepts and problems we cover. In Chapter 2, we consider a two-stage robust optimization framework that captures matching problems where one side of the input includes some future demand uncertainty; we propose two models to capture the demand uncertainty: explicit and implicit scenarios. In Chapters 3 and 4, we turn our attention to matchings in hypergraphs. In Chapter 3, we consider the problem of learning hidden hypergraph matchings through membership queries. Finally, in Chapter 4, we study the problem of finding matchings in uniform hypergraphs in the massively parallel computation (MPC) model, where the data (e.g., vertices and edges) is distributed across machines; in each round, a machine performs local computation on its fragment of the data and then sends messages to other machines for the next round.
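As a point of reference for the hypergraph setting, the sketch below shows the basic sequential greedy that computes a maximal matching in a 3-uniform hypergraph -- the primitive that MPC algorithms emulate in rounds over distributed edge fragments. It is purely illustrative, not an algorithm from the thesis.

```python
# Greedy maximal matching in a hypergraph: scan hyperedges in order and keep
# each one whose vertices are all still unused. The result is maximal
# (no remaining hyperedge can be added) but not necessarily maximum.

def greedy_matching(hyperedges):
    """Return a maximal set of pairwise vertex-disjoint hyperedges."""
    used, matching = set(), []
    for edge in hyperedges:
        if not any(v in used for v in edge):
            matching.append(edge)
            used.update(edge)
    return matching

edges = [(1, 2, 3), (3, 4, 5), (6, 7, 8), (2, 6, 9)]
print(greedy_matching(edges))  # (3,4,5) and (2,6,9) collide with earlier picks
```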
3

Extremal Queueing Theory

Chen, Yan. January 2022
Queueing theory has often been applied to study communication and service systems such as call centers, hospital emergency departments and ride-sharing platforms. Unfortunately, queueing systems are complicated to analyze, largely because the arrival and service processes that determine them are uncertain and must be represented as stochastic processes. In practice, service providers can capture only the main characteristics of a system from partial data and limited domain knowledge. An effective engineering response is to develop tractable approximations of the queueing characteristics of interest that depend on this critical partial information. In this thesis, we contribute to developing high-quality approximations by studying tight bounds for the transient and steady-state mean waiting time given partial information. We focus on single-server and multi-server queues with unlimited waiting rooms, the first-come-first-served service discipline, and independent sequences of independent and identically distributed (i.i.d.) interarrival times and service times. We assume some partial information is known, e.g., the first two moments of the interarrival-time and service-time distributions. For the single-server GI/GI/1 model, we first study tight upper bounds for the mean and higher moments of the steady-state waiting time given the first two moments of the interarrival-time and service-time distributions. We apply the theory of Tchebycheff systems to obtain sufficient conditions for classical two-point distributions to yield the extreme values. For the tight upper bound on the transient mean waiting time, we formulate the problem as a non-convex non-linear program, derive the gradient of the transient mean waiting time over distributions with finite support, and apply classical non-linear programming theory to characterize stationary points.
We then develop and apply a stochastic variant of the conditional gradient algorithm to find a stationary point for any given service-time distribution. We also establish necessary conditions and sufficient conditions for stationary points to be three-point distributions or special two-point distributions. Our studies indicate that the tight upper bound for the steady-state mean waiting time is attained asymptotically by two-point distributions as the upper mass point of the service-time distribution increases and its probability decreases, while one mass of the interarrival-time distribution is fixed at 0. We then develop effective numerical and simulation algorithms to compute the tight upper bound. The algorithms are aided by reductions of the special queues with extremal interarrival-time and extremal service-time distributions to D/GI/1 and GI/D/1 models. Combining these reductions yields an overall representation in terms of a D/RS(D)/1 discrete-time model involving a geometric random sum of deterministic random variables, where the two deterministic random variables have different values, so that the extremal waiting times need not have a lattice distribution. Finally, we evaluate the tight upper bound and show that it offers a significant improvement over established bounds. To understand queueing performance given only partial information, we propose determining intervals of likely performance measures given that limited information. We illustrate this approach for the steady-state waiting-time distribution in the GI/GI/K queue given the first two moments of the interarrival-time and service-time distributions plus additional information about the underlying distributions, including support bounds, higher moments, and Laplace transform values.
As a theoretical basis, we apply the theory of Tchebycheff systems to determine extremal models (yielding tight upper and lower bounds) on the asymptotic decay rate of the steady-state waiting-time tail probability, as in the Kingman-Lundberg bound and large deviations asymptotics. We can then use these extremal models to indicate likely intervals of other performance measures. We illustrate by constructing such intervals of likely mean waiting times. Without extra information, the extremal models involve two-point distributions, which yield a wide range for the mean. Adding constraints on the third moment and a transform value produces three-point extremal distributions, which significantly reduce the range, yielding practical levels of accuracy.
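The waiting-time process at the heart of these bounds follows the Lindley recursion W_{n+1} = max(W_n + S_n - A_{n+1}, 0), which is straightforward to simulate. The sketch below estimates the mean wait in a D/GI/1 queue (one of the reduced models mentioned above) with a two-point service-time distribution; the parameters are illustrative, not the extremal distributions derived in the thesis.

```python
import random

def two_point(lo, hi, p_hi, rng):
    """Sample a two-point service time -- the extremal candidate shape."""
    return hi if rng.random() < p_hi else lo

def mean_wait(n_customers, seed=0):
    """Average waiting time over n customers via the Lindley recursion."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        total += w
        s = two_point(0.4, 2.4, 0.25, rng)  # E[S] = 0.9 -> traffic intensity 0.9
        a = 1.0                             # deterministic interarrivals: D/GI/1
        w = max(w + s - a, 0.0)             # Lindley recursion
    return total / n_customers

print(round(mean_wait(100_000), 2))
```

Swapping in other two- or three-point distributions with the same first two moments gives a quick numerical feel for how much the mean wait can vary under the same partial information.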
4

Essays on Applications of Dynamic Models

Al-Chanati, Motaz Rafic. January 2022
In many real-world settings, individuals face a dynamic decision problem: choices in the present have an impact on future outcomes. It is important for researchers to recognize these dynamic forces so that we can fully understand the trade-offs an individual faces and correctly estimate the parameters of interest. I study dynamic decision making in three diverse contexts: residential choice of families in New Zealand, search strategies of ridesharing drivers in Texas, and welfare participation of single mothers in Michigan. In each, I motivate the analysis using a theoretical model and bring the model to the data to estimate parameters and evaluate testable implications. In the first chapter, I ask: how do schools affect where families choose to live, and does their effect contribute to residential segregation? I study these questions using unique administrative microdata from Auckland, New Zealand, an ethnically diverse -- but segregated -- city. I develop and estimate a dynamic model of residential choice where forward-looking families choose neighborhoods based on their children's schools, local amenities, and moving costs. Previous studies typically estimate school quality valuations using a boundary discontinuity design; I leverage attendance zones in this setting to also generate reduced-form estimates using this methodology. The structural model estimates show that the valuation of school quality varies by the child's school level and the family's ethnicity; the reduced-form approach, however, cannot capture this heterogeneity. Moreover, I find that the reduced-form estimates are aligned only with white families' valuations of quality. The model estimates also show that families experience a high disutility from moving houses if it results in their child changing schools. In counterfactuals, I show that residential segregation increases as the link between housing and schools weakens.
In the second chapter, co-authored with Vinayak Iyer, we ask: what drives efficiency in ridesharing markets? In decentralized transportation markets, search and match frictions lead to inefficient outcomes. Ridesharing platforms, which act as intermediaries in traditional taxi markets, improve upon the status quo along two key dimensions: surge pricing and centralized matching. We study how and why these two features make the market more efficient, and explore how alternative pricing and matching rules can improve outcomes further. To this end, we develop a structural model of the ridesharing market with four components: (1) dynamically optimizing drivers who make entry, exit and search decisions; (2) stochastic demand; (3) a surge pricing rule; and (4) a matching technology. Relative to our benchmark model, surge pricing generates large gains for all agents, primarily during late nights; this is driven by the role surge plays in inducing drivers to enter the market. In contrast, centralized matching reduces match frictions and increases surplus for consumers, drivers, and the ridesharing platform, irrespective of the time of day. We then show that a simple, more flexible pricing rule can generate even larger welfare gains for all agents. Our results highlight how and why centralized matching and surge pricing make the market more efficient. We conclude by drawing policy implications for improving competition between taxis and ridesharing platforms. In the third chapter, co-authored with Lucas Husted, we ask: does removing families from welfare programs result in increased employment? Using detailed administrative data from Michigan, we study a policy reform in the state's TANF program that swiftly and unexpectedly removed over 10,000 families from welfare while quasi-randomly assigning time limits to over 30,000 remaining participants. We motivate our analysis using a dynamic model of welfare benefits usage.
Consistent with economic theory, removing families from welfare increases formal labor force participation by roughly 4 percentage points (20% over the control group mean), with increases in annualized earnings of roughly $500. Despite this, however, the majority of families remain formally unemployed after welfare removal, and using quantile regressions we show that even the highest-percentile wage gains fail to offset the loss in welfare benefits. The policy even affects families who are far from exhausting their time-limited benefits: under a dynamic model, families have an incentive to bank benefits for future use -- an effect we observe in the data. Overall, our findings provide evidence that, contrary to their stated goals, welfare reform measures that either remove families from welfare or make welfare harder to access may deepen poverty.
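The benefit-banking incentive can be reproduced in a stylized dynamic program. The model below is entirely our own illustration (utility, incomes, horizon and benefit values are all assumed), not the paper's estimated model: with concave utility and a finite stock of benefit-months, a family claims when income is low but banks the month when income is high.

```python
import math

BENEFIT = 1.0
INCOMES = [(0.5, 0.5), (2.5, 0.5)]  # (income, probability) each period
BETA = 0.95                          # discount factor
HORIZON = 10

def u(c):
    return math.sqrt(c)  # concave utility: benefits matter more when poor

def solve(max_months):
    # Backward induction over remaining benefit-months b.
    V = [0.0] * (max_months + 1)
    first_period_policy = {}
    for t in range(HORIZON):
        newV = [0.0] * (max_months + 1)
        for b in range(max_months + 1):
            ev = 0.0
            for y, p in INCOMES:
                skip = u(y) + BETA * V[b]
                claim = u(y + BENEFIT) + BETA * V[b - 1] if b > 0 else -1e9
                if t == HORIZON - 1:  # last backward step = first period
                    first_period_policy[(b, y)] = claim > skip
                ev += p * max(skip, claim)
            newV[b] = ev
        V = newV
    return first_period_policy

policy = solve(max_months=3)
# With scarce benefit-months, the family claims in the low-income state but
# banks the month in the high-income state, preserving it for worse times.
```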
