Algorithms for Matching Problems Under Data Accessibility Constraints

Traditionally, optimization problems in operations research have been studied in a complete-information setting: the input is collected and made fully accessible to the user before an algorithm is run to generate the optimal output. However, the growing volume of data and the need to make immediate decisions are increasingly shifting the focus to optimization under incomplete information. The input can be partially inaccessible to the user because it is generated continuously, contains uncertainty, is too large to be stored on a single machine, or has a hidden structure that is costly to unveil. Many problems that provide a context for studying algorithms with restricted access to the input come from matching theory, where the objective is to pair clients and servers or, more generally, to group clients into disjoint sets. Examples include ride-sharing and food-delivery platforms, internet advertising, combinatorial auctions, and online gaming.

In this thesis, we study three novel problems from the theory of matchings, corresponding to situations where the input is hidden, spread across multiple processors, or revealed in two stages with some uncertainty. Chapter 1 presents the necessary definitions and terminology for the concepts and problems we cover. In Chapter 2, we consider a two-stage robust optimization framework that captures matching problems where one side of the input carries future demand uncertainty. We propose two models of this uncertainty: explicit and implicit scenarios.
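In the explicit-scenario model, the uncertain demand is given as a finite list of possible realizations, and a robust solution must plan against the worst one. A minimal sketch of that worst-case evaluation (the clients, servers, and costs below are illustrative assumptions, not instance data from the thesis, and the full two-stage framework additionally involves a first-stage decision taken before the scenario is revealed):

```python
from itertools import permutations

# Toy instance -- all names and costs are assumptions for illustration.
servers = ["s1", "s2"]
scenarios = [{"a", "b"}, {"b", "c"}]  # explicit scenario set of future demands
cost = {"a": {"s1": 1, "s2": 3},      # cost[client][server]
        "b": {"s1": 2, "s2": 2},
        "c": {"s1": 4, "s2": 1}}

def cheapest_matching(demand):
    """Brute-force cheapest assignment of the demanded clients to servers."""
    clients = sorted(demand)
    return min(sum(cost[c][s] for c, s in zip(clients, assignment))
               for assignment in permutations(servers, len(clients)))

# Robust objective: the cost we must be prepared to pay in the worst scenario.
worst_case = max(cheapest_matching(sc) for sc in scenarios)
```

Brute force suffices only for toy sizes; the point is the structure of the objective, a maximum over explicitly listed scenarios of an inner matching cost.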

In Chapters 3 and 4, we turn our attention to matchings in hypergraphs. In Chapter 3, we consider the problem of learning hidden hypergraph matchings through membership queries. Finally, in Chapter 4, we study the problem of finding matchings in uniform hypergraphs in the massively parallel computation (MPC) model, where the data (e.g., vertices and edges) is distributed across machines; in each round, every machine performs local computation on its fragment of the data and then sends messages to the other machines for the next round.
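The round structure of the MPC model can be sketched as a synchronous loop over machines. The machine count, message format, and local step below are assumptions chosen for illustration, not the algorithm studied in Chapter 4:

```python
# One synchronous MPC round: every machine reads its inbox, computes on
# its local fragment of the data, and addresses messages to other
# machines, which are delivered at the start of the next round.

def mpc_round(fragments, inboxes, local_step):
    """fragments[i] -- machine i's share of the input
    inboxes[i]   -- messages machine i received last round
    local_step   -- function (fragment, inbox) -> {dest: [messages]}
    Returns the inboxes for the next round."""
    next_inboxes = [[] for _ in fragments]
    for i, fragment in enumerate(fragments):
        for dest, msgs in local_step(fragment, inboxes[i]).items():
            next_inboxes[dest].extend(msgs)
    return next_inboxes

# Example local step (hypothetical): each machine reports its maximum
# local edge weight to machine 0 -- a one-round aggregation.
fragments = [[("u", "v", 3)], [("v", "w", 7)], [("w", "x", 5)]]
inboxes = [[], [], []]

def report_max(fragment, inbox):
    return {0: [max(w for _, _, w in fragment)]}

inboxes = mpc_round(fragments, inboxes, report_max)
# machine 0's inbox now holds one weight per machine; a second round
# could reduce them to the global maximum
```

MPC algorithms are typically judged by the number of such rounds and the memory per machine, which is why the communication pattern, not the local computation, is the constrained resource.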

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/z8pt-gb09
Date: January 2022
Creators: Hanguir, Oussama
Source Sets: Columbia University
Language: English
Type: Theses
