This thesis focuses on learning from revealed preferences, and on its implications for operations management problems, through an Inverse Problem perspective.
For the first part of the thesis, we focus on decentralized platforms facilitating many-to-many matches between the two sides of a marketplace. In the absence of direct matching, inefficient market outcomes can easily arise: popular supply agents may garner many units of demand, while other supply agents may not receive any match. A central question for the platform is how to manage congestion and improve market outcomes.
In Chapter One, we study the impact of a detail-free lever: disclosing information to agents about current competition levels. How large are the effects of this lever, and how do they shape overall market outcomes? We answer this question empirically, in partnership with the largest service marketplace in Latin America, which sells non-exclusive labor-market leads to workers. The key innovation in our approach is a structural model in which agents (workers) respond to competitors through beliefs about competition at the lead level, which in turn imply an equilibrium at the platform level under the assumption of rational expectations. In this problem, we observe agents' best responses (actions) and from them must infer their structural parameters. Identification follows from an exogenous intervention that changes agents' contextual information and the platform equilibrium. We then conduct counterfactual analyses to study the impact of signaling competition on workers' lead-purchasing decisions, the platform's revenue, and the expected number of matches. We find that signaling competition is a powerful lever for the platform to reduce congestion, redirect demand, and ultimately improve the expected number of matches in the markets we analyze.
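As a stylized illustration of this equilibrium logic (the notation here is ours, introduced only for exposition, and is not taken from the thesis), each worker best-responds to a belief about competition on a lead, and rational expectations requires that belief to match the competition the purchase decisions actually generate:
$$a_i(\ell) \;=\; \mathbf{1}\Big\{\, \mathbb{E}_{K \sim F_\ell}\big[v_i(\ell, K)\big] \;\ge\; p_\ell \,\Big\}, \qquad F_\ell \;=\; \operatorname{Law}\Big(\textstyle\sum_{j \ne i} a_j(\ell)\Big),$$
where $a_i(\ell)$ indicates whether worker $i$ purchases lead $\ell$ at price $p_\ell$, $v_i(\ell, K)$ is the value of the lead when $K$ competitors also purchase it, and $F_\ell$ is the belief about competition on that lead. The platform-level equilibrium is a fixed point of this map, and an exogenous intervention that shifts the information entering the beliefs $F_\ell$ is what allows the structural parameters to be identified.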
For the second part of the thesis, we discuss both parametric and modelling approaches to Inverse Problems. In Chapter Two, we focus on Inverse Optimization Problems in a single-agent setting. Specifically, we study offline and online contextual optimization with feedback information where, instead of observing the loss, we observe, after the fact, the optimal action that an oracle with full knowledge of the objective function would have taken. We aim to minimize regret, defined as the difference between our losses and those incurred by an all-knowing oracle. In the offline setting, the decision-maker has information from past periods available and must make a single decision, while in the online setting, the decision-maker optimizes decisions dynamically over time, facing a new set of feasible actions and contextual functions in each period. For the offline setting, we characterize the optimal minimax policy, establishing the performance that can be achieved as a function of the underlying geometry of the information induced by the data. In the online setting, we leverage this geometric characterization to optimize the cumulative regret and develop an algorithm that yields the first regret bound for this problem that is logarithmic in the time horizon. Furthermore, we show via simulation that our proposed algorithms outperform previous methods from the literature.
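To fix ideas, in illustrative notation of ours (not necessarily the thesis's), let $c_t$ denote the context and $X_t$ the feasible set in period $t$, and let $f_\theta(\cdot\,; c_t)$ be the objective with unknown parameter $\theta$. After choosing $x_t \in X_t$, the decision-maker observes the oracle action $x_t^* \in \arg\min_{x \in X_t} f_\theta(x; c_t)$ rather than the incurred loss, and the cumulative regret is
$$R_T \;=\; \sum_{t=1}^{T} \Big( f_\theta(x_t; c_t) - f_\theta(x_t^*; c_t) \Big).$$
The offline setting corresponds to making a single decision after observing past pairs $(c_s, x_s^*)$, while the online setting asks that $R_T$ grow slowly in $T$; the logarithmic bound mentioned above refers to this cumulative quantity.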
Finally, in Chapter Three, we consider data-driven methods for general Inverse Problem formulations under a statistical framework (Statistical Inverse Problems, SIPs) and demonstrate how Stochastic Gradient Descent (SGD) algorithms can be used to solve linear SIPs. We provide consistency results and finite-sample bounds for the excess risk. We illustrate the algorithm in the Functional Linear Regression setting with an empirical application to predicting illegal activity from Bitcoin wallets. We also discuss additional applications and extensions.
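A minimal sketch of this idea in the functional linear regression case, written under our own assumptions rather than as the thesis's implementation: with $Y = \langle X, \beta \rangle + \varepsilon$ and covariate functions observed on a common grid, a one-pass SGD estimator updates the slope function with the stochastic gradient of the squared loss after each observation (the function and parameter names below are illustrative).

import numpy as np

def sgd_functional_linear_regression(X, y, grid, step=lambda i: 1.0 / (i + 1) ** 0.6):
    # One-pass SGD for functional linear regression Y = <X, beta> + noise.
    #   X    : (n, m) array, each row a covariate function sampled on `grid`
    #   y    : (n,) array of scalar responses
    #   grid : (m,) array of sample points, used for the quadrature inner product
    n, m = X.shape
    w = np.gradient(grid)                 # quadrature weights so that <f, g> ~ sum(f * g * w)
    beta = np.zeros(m)                    # initial estimate of the slope function
    for i in range(n):
        pred = np.sum(X[i] * beta * w)    # current prediction <X_i, beta>
        # functional stochastic gradient step: the L2 gradient of
        # (y_i - <X_i, beta>)^2 / 2 with respect to beta is -(y_i - pred) * X_i
        beta = beta + step(i) * (y[i] - pred) * X[i]
    return beta

The decaying step size is one standard choice for ill-posed problems, where the step schedule (or an early stop) acts as an implicit regularizer; the thesis's actual algorithm, step sizes, and guarantees may differ from this sketch.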
Identifier | oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/6354-3k36 |
Date | January 2024 |
Creators | Resende Fonseca, Yuri |
Source Sets | Columbia University |
Language | English |
Type | Theses |