New Optimization Models and Methods for Classical, Infinite-Dimensional, and Online Fisher Markets

Gao, Yuan, January 2022
Fisher market models and market equilibrium computation algorithms have long been central research topics in economics, operations research, and theoretical computer science. Recently, they have found diverse applications in the design of Internet marketplaces. In this thesis, we develop tractable optimization models and algorithms for computing market equilibria in various practically relevant settings.

In Chapter 1, we study first-order methods for computing market equilibria with a finite number of buyers who have linear, quasilinear, or Leontief utilities. For linear and Leontief utilities, we show that their corresponding convex programs, whose solutions are exactly the market equilibria, exhibit strong-convexity-like structure after simple reformulations. This allows us to design the first gradient-based algorithms that achieve a linear rate of convergence for computing market equilibria. For buyers with quasilinear utility functions, we propose a new convex program capturing market equilibria, analogous to the Shmyrev convex program for linear utilities. Applying the mirror descent algorithm to this convex program yields a distributed and interpretable Proportional Response (PR) dynamics that converges to equilibrium prices and utilities. This generalizes the classical PR dynamics and its convergence guarantees, previously known for linear utilities, to the case of quasilinear utilities.
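For illustration, the classical Proportional Response dynamics for a linear Fisher market, the special case that this chapter generalizes to quasilinear utilities, can be sketched in a few lines. The function name, valuation matrix, budgets, and iteration count below are illustrative assumptions, not taken from the thesis.

    import numpy as np

    def proportional_response(V, B, iters=1000):
        """Classical PR dynamics for a linear Fisher market with unit item supplies.

        V[i, j]: buyer i's value for item j (nonnegative, each row nonzero)
        B[i]:    buyer i's budget
        Returns approximate equilibrium prices p and allocations x.
        """
        # Start by spreading each budget uniformly over the items the buyer values.
        b = np.where(V > 0, 1.0, 0.0)
        b = B[:, None] * b / b.sum(axis=1, keepdims=True)
        for _ in range(iters):
            p = b.sum(axis=0)                                        # price = total bids on the item
            x = np.divide(b, p, out=np.zeros_like(b), where=p > 0)   # allocate in proportion to bids
            u = (V * x).sum(axis=1)                                  # realized utilities
            b = B[:, None] * (V * x) / u[:, None]                    # re-bid in proportion to utility earned
        return p, x

    # Toy usage: two buyers with mirrored preferences and unit budgets.
    p, x = proportional_response(np.array([[2.0, 1.0], [1.0, 2.0]]), np.array([1.0, 1.0]))

Each round re-prices items by total bids and re-bids each budget in proportion to realized utility; for quasilinear utilities the thesis derives the analogous update from mirror descent on its Shmyrev-type convex program.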
In Chapter 2, we consider a generalization of the linear Fisher market in which there is a finite set of buyers and a measurable item space. We introduce generalizations of the Eisenberg-Gale convex program and its dual to this setting, which leads to infinite-dimensional Banach-space optimization problems. We show that these convex programs always have optimal solutions and that these optimal solutions correspond to market equilibria; in particular, a market equilibrium always exists. We also show that KKT-type optimality conditions for these convex programs imply the defining properties of market equilibria and are necessary and sufficient for a solution pair to be optimal. Then, we show that, as in the classical finite-dimensional case, a market equilibrium is Pareto optimal, envy-free, and proportional. Moreover, when the item space measure is atomless, we show that there always exists a pure equilibrium allocation, which can be viewed as a generalized fair division, that is, a Pareto optimal, envy-free, and proportional partition of the item space. This leads to generalizations of classical results on the existence and characterization of fair divisions of a measurable set. When the item space is a closed interval and buyers have piecewise linear valuations, we show that the infinite-dimensional Eisenberg-Gale-type convex program can be reformulated as a finite-dimensional convex conic program, which can be solved efficiently with off-the-shelf optimization software. Based on this conic reformulation, we also develop the first polynomial-time algorithm for finding a fair division of an interval under piecewise linear valuations. For general buyer valuations or a very large number of buyers, we propose computing market equilibria via stochastic optimization and give high-probability convergence guarantees. Finally, we show that most of the above results extend readily to the case of quasilinear utilities.

In Chapter 3, we consider an online market setting where items arrive sequentially and must be allocated to buyers irrevocably. We define an online market equilibrium as time-indexed allocations and prices that guarantee buyer optimality and market clearance in hindsight. We propose simple, scalable, and interpretable allocation and pricing dynamics termed PACE (Pacing ACcording to Estimated utilities). When items are drawn independently from an unknown distribution with possibly continuous support, we show that PACE leads to an online market equilibrium asymptotically. In particular, PACE ensures that buyers' time-averaged utilities converge to the equilibrium utilities of a static market whose item supply is the unknown distribution, and that buyers' time-averaged expenditures converge to their per-period budgets. Hence, many desirable properties of market-equilibrium-based fair division, such as envy-freeness, Pareto optimality, and the proportional-share guarantee, are also attained asymptotically in the online setting. Next, we extend the dynamics to handle quasilinear buyer utilities, which gives the first online algorithm for computing pacing equilibria in first-price auctions. Finally, numerical experiments show that the dynamics converges quickly under various metrics.
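As a rough illustration of the Chapter 3 dynamics, the sketch below implements a stylized pacing loop: each arriving item goes to the buyer with the highest paced bid, and multipliers are re-paced from time-averaged utilities. The exact PACE updates, projection bounds, and parameter names in the thesis differ; everything below is an illustrative assumption rather than the thesis's algorithm.

    import numpy as np

    def paced_allocation(value_stream, budgets, lo=0.1, hi=10.0):
        """Stylized pacing loop in the spirit of PACE (details simplified).

        value_stream: iterable of value vectors v_t, where v_t[i] is buyer i's
                      value for the item arriving at time t
        budgets:      per-period budget of each buyer
        lo, hi:       bounds keeping the pacing multipliers well behaved
        """
        budgets = np.asarray(budgets, dtype=float)
        beta = np.ones_like(budgets)       # pacing multipliers, one per buyer
        avg_util = np.zeros_like(budgets)  # time-averaged realized utilities
        for t, v in enumerate(value_stream, start=1):
            v = np.asarray(v, dtype=float)
            winner = int(np.argmax(beta * v))        # item goes to the highest paced bid
            realized = np.zeros_like(budgets)
            realized[winner] = v[winner]
            avg_util += (realized - avg_util) / t    # update running-average utilities
            # Re-pace: a buyer whose average utility is large relative to its
            # budget bids more conservatively, and vice versa.
            beta = np.clip(budgets / np.maximum(avg_util, 1e-12), lo, hi)
        return beta, avg_util

    # Toy usage: two buyers, i.i.d. uniform item values, unit per-period budgets.
    rng = np.random.default_rng(0)
    beta, util = paced_allocation((rng.uniform(size=2) for _ in range(10_000)), budgets=[1.0, 1.0])

Here the time-averaged utilities and expenditures are the quantities that, per the abstract, converge to their counterparts in the static market defined by the unknown item distribution.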

Data-driven Decision-making: New Insights on Algorithm Performance and Data Value

Mouchtaki, Omar, January 2024
With the rise of data-driven algorithms, both industrial practitioners and academics have aimed to understand how past information can be used to make better future decisions. This question is particularly challenging, as any answer necessarily depends on several parameters, such as the features of the data used (e.g., the quantity and relevance of the data), the downstream problem being solved, and the type of algorithm deployed to leverage the data. Most of the current literature analyzes the value of data by anchoring its methods in the large-data regime, making the implicit assumption that data is widely available in practice. In this work, we depart from this implicit assumption and posit that, in fact, relevant data is a scarce resource in many practical settings. For instance, data is usually aggregated across different times, product categories, and geographies, so the effective size of a dataset is orders of magnitude smaller than it may appear to be. The goal of this thesis is to bridge the gap between the theoretical understanding of data-driven decisions and practical performance by developing a problem-centric theory of data-driven decision-making in which we assess the value of data by quantifying its impact on downstream decisions. In particular, we design methodological tools tailored to the problem at hand and derive fine-grained, problem-specific guarantees for algorithms.

In the first chapter, we study the data-driven newsvendor problem under the modeling assumption that the data is independently and identically distributed. We are interested in analyzing central policies in the literature, such as Sample Average Approximation (SAA), along with optimal ones, and in characterizing the performance achievable across data sizes, both small and large. Specifically, we characterize exactly the performance of SAA and uncover novel fundamental insights on the value of data. Indeed, our analysis reveals that tens of samples are sufficient to perform very efficiently, but also that more data can lead to worse out-of-sample performance for SAA. In turn, we derive an optimal algorithm in the minimax sense, enhancing decision quality with limited data.
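For concreteness, in the classical newsvendor problem the SAA policy reduces to ordering an empirical quantile of the observed demands at the critical ratio. A minimal sketch follows; the function and cost-parameter names are illustrative, not taken from the thesis.

    import numpy as np

    def saa_newsvendor(demand_samples, underage_cost, overage_cost):
        """SAA order quantity for the newsvendor problem.

        Minimizes the empirical expected cost
            (1/n) * sum_i [ c_u * max(D_i - q, 0) + c_o * max(q - D_i, 0) ],
        whose minimizer is the empirical quantile of demand at the
        critical ratio c_u / (c_u + c_o).
        """
        demands = np.sort(np.asarray(demand_samples, dtype=float))
        ratio = underage_cost / (underage_cost + overage_cost)
        k = int(np.ceil(ratio * len(demands))) - 1   # index of the critical order statistic
        return demands[max(k, 0)]

    # Toy usage: 10 demand observations, critical ratio 0.75.
    q = saa_newsvendor([3, 7, 4, 6, 5, 9, 2, 8, 5, 6], underage_cost=3.0, overage_cost=1.0)

The first chapter's exact analysis concerns the out-of-sample cost of this empirical-quantile policy as a function of the sample size, and how it compares with minimax-optimal alternatives.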
The second chapter explores the impact of data relevance on decision quality, addressing the challenge of using historical data from varying sources that may not be fully indicative of the future. We quantify the performance of SAA in these heterogeneous environments and design rate-optimal policies in settings where SAA falters. We illustrate the versatility of our framework by analyzing several prototypical problems across various fields: the newsvendor, pricing, and ski rental problems. Our analysis shows that the type of achievable asymptotic performance varies significantly across problem classes and heterogeneity notions.

Finally, the third chapter develops a framework for contextual decision-making, examining how the relevance and quantity of past data affect policy performance. Focusing on the contextual newsvendor problem, we analyze the wide class of Weighted Empirical Risk Minimization (WERM) policies, which weigh past data according to its relevance. This class includes the SAA policy (also referred to as ERM), k-Nearest Neighbors, and kernel-based methods. While past literature focuses on upper bounds obtained via concentration inequalities, we instead take an optimization approach and isolate a structure in the newsvendor loss function that allows us to reduce the infinite-dimensional optimization problem over worst-case distributions to a simple line search. In addition to this methodological contribution, our exact analysis offers new, granular insights into the learning curve of algorithms in contextual settings.

Through these contributions, the thesis advances our understanding of data-driven decision-making, offering both theoretical foundations and practical insights for diverse operational applications.
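To make the WERM class concrete, the sketch below computes a kernel-weighted newsvendor order for a query context: past demands are weighted by how close their contexts are to the query, and the decision is the weighted empirical quantile at the critical ratio. The Gaussian kernel, bandwidth, and parameter names are illustrative assumptions; the thesis studies a general class of weighting schemes rather than this particular choice.

    import numpy as np

    def werm_newsvendor(contexts, demands, query, bandwidth, underage_cost, overage_cost):
        """Kernel-weighted ERM for the contextual newsvendor.

        contexts: array of shape (n, d) with past feature vectors
        demands:  array of shape (n,) with the corresponding demands
        query:    feature vector of the current decision instance
        Returns the weighted empirical quantile of demand at the critical ratio.
        """
        contexts = np.asarray(contexts, dtype=float)
        demands = np.asarray(demands, dtype=float)
        ratio = underage_cost / (underage_cost + overage_cost)
        # Gaussian kernel weights: closer contexts count more.
        dist = np.linalg.norm(contexts - np.asarray(query, dtype=float), axis=1)
        w = np.exp(-0.5 * (dist / bandwidth) ** 2)
        w /= w.sum()
        # Weighted quantile: smallest demand whose cumulative weight reaches the ratio.
        order = np.argsort(demands)
        cum = np.cumsum(w[order])
        idx = min(int(np.searchsorted(cum, ratio)), len(demands) - 1)
        return demands[order][idx]

Uniform weights recover SAA/ERM, and concentrating all weight on the k nearest contexts recovers k-Nearest Neighbors, the other members of the WERM class named above.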
