
Essays on Fair Operations

Fairness has emerged as a concern to decision makers as crucial as efficiency, if not more important. Fair operations decisions aim at distributive justice across a variety of scenarios. In this dissertation, we study two examples of distributively fair decision making in operations research: a dynamic fair allocation problem and a subpopulation robustness assessment problem for machine learning models.

We first study a dynamic allocation problem in which 𝑇 sequentially arriving divisible resources are to be allocated among a number of agents with concave utilities. The utilities of each arriving resource to the agents are drawn from a known joint distribution, independently and identically across time, and the central planner makes immediate and irrevocable allocation decisions. Most work on dynamic resource allocation aims to maximize the utilitarian welfare, i.e., the efficiency of the allocation, which may unfairly concentrate resources on a few high-utility agents while leaving others' demands under-fulfilled. In this work, aiming to balance efficiency and fairness, we instead consider a broad collection of welfare metrics, the HΓΆlder means, which includes the Nash social welfare and the egalitarian welfare.
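To make the welfare family concrete, below is a minimal Python sketch (our own illustration, not code from the dissertation) of the HΓΆlder mean of agent utilities; the utilitarian, Nash, and egalitarian welfare arise as the cases p = 1, p β†’ 0, and p β†’ βˆ’βˆž.

import numpy as np

def holder_mean_welfare(utilities, p):
    # HΓΆlder (power) mean of positive agent utilities.
    #   p = 1      -> utilitarian welfare (arithmetic mean)
    #   p -> 0     -> Nash social welfare (geometric mean)
    #   p -> -inf  -> egalitarian welfare (minimum)
    u = np.asarray(utilities, dtype=float)
    if p == 0:                        # limiting case: geometric mean
        return float(np.exp(np.mean(np.log(u))))
    if np.isinf(p) and p < 0:         # limiting case: minimum
        return float(u.min())
    return float(np.mean(u ** p) ** (1.0 / p))

For utilities (1, 4), for example, the three cases give 2.5, 2.0, and 1.0 respectively, illustrating how smaller p places more weight on the worst-off agent.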

To this end, we first study a fluid-based policy derived from a deterministic surrogate of the underlying problem and show that, for every smooth HΓΆlder mean welfare metric, it attains an 𝑂(1) regret over the horizon length 𝑇 against the hindsight optimum, i.e., the optimal welfare if all utilities were known in advance of deciding on allocations. However, when evaluated under the non-smooth egalitarian welfare, the fluid-based policy attains a regret of order 𝛩(βˆšπ‘‡). We then propose a new policy built thereupon, called Backward Infrequent Re-solving (𝖑𝖨𝖱), which re-solves the deterministic surrogate problem at most 𝑂(log 𝑇) times. We show, under a mild regularity condition, that it attains a regret against the hindsight optimal egalitarian welfare of order 𝑂(1) when all agents have linear utilities and 𝑂(log 𝑇) otherwise. We further propose the Backward Infrequent Re-solving with Thresholding (𝖑𝖨𝖱𝖳) policy, which enhances the 𝖑𝖨𝖱 policy with thresholding adjustments and performs comparably without any regularity assumption. More specifically, we prove that the 𝖑𝖨𝖱𝖳 policy attains an 𝑂(1) regret independently of the horizon length 𝑇 when all agents have linear utilities and 𝑂(log^{2+πœ€} 𝑇) otherwise. We conclude by presenting numerical experiments that corroborate our theoretical claims and illustrate significant performance improvements over several benchmark policies.
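As a rough illustration of the deterministic surrogate behind these policies, the following sketch solves a fluid relaxation for linear utilities under the egalitarian objective, replacing random arrivals by their expected rates; the finite set of resource types and the names rho and U are simplifying assumptions of ours, not the dissertation's exact formulation.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6                              # agents, resource types (assumed)
rho = rng.dirichlet(np.ones(m))          # arrival probability of each type
U = rng.uniform(0.1, 1.0, size=(m, n))   # per-unit utility of type j to agent i

x = cp.Variable((m, n), nonneg=True)     # fraction of each type given to each agent
expected_utility = cp.sum(cp.multiply(rho[:, None] * U, x), axis=0)

# Egalitarian fluid surrogate: maximize the minimum expected per-period
# utility, splitting at most one unit of each resource type among the agents.
problem = cp.Problem(cp.Maximize(cp.min(expected_utility)),
                     [cp.sum(x, axis=1) <= 1])
problem.solve()

A fluid-based policy would allocate each arriving resource according to the optimal x for its realized type; roughly speaking, the 𝖑𝖨𝖱 policy additionally re-solves this program at 𝑂(log 𝑇) carefully chosen times, with the horizon and targets updated to reflect what has been allocated so far.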

The performance of ML models degrades when the training population differs from the population encountered in deployment. Towards assessing distributional robustness, we study the worst-case performance of a model over 𝒂𝒍𝒍 subpopulations of a given size, defined with respect to core attributes 𝑍. This notion of robustness can accommodate arbitrary (continuous) attributes 𝑍 and automatically accounts for complex intersectionality in disadvantaged groups. We develop a scalable yet principled two-stage estimation procedure that can evaluate the robustness of state-of-the-art models. We prove that our procedure enjoys several finite-sample convergence guarantees, including π’…π’Šπ’Žπ’†π’π’”π’Šπ’π’-𝒇𝒓𝒆𝒆 convergence. In contrast to overly conservative notions based on Rademacher complexities, our evaluation error depends on the dimension of 𝑍 only through the out-of-sample error in estimating the performance conditional on 𝑍. On real datasets, we demonstrate that our method certifies the robustness of a model and prevents deployment of unreliable models.
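A minimal sketch of the two-stage idea, under simplifying assumptions of ours (a generic regression model, a single sample split, and a plug-in tail average standing in for the estimator actually analyzed): first estimate the performance conditional on 𝑍, then average the worst Ξ±-fraction of the fitted conditional losses, a conditional value-at-risk capturing the worst-case subpopulation of proportion Ξ±.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def worst_subpop_risk(losses, Z, alpha=0.1, seed=0):
    # Stage 1: regress observed per-example losses on core attributes Z
    # to estimate eta(z) = E[loss | Z = z], fitting on one half of the data.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(losses))
    fit_idx, eval_idx = idx[: len(idx) // 2], idx[len(idx) // 2 :]
    model = GradientBoostingRegressor().fit(Z[fit_idx], losses[fit_idx])
    eta = model.predict(Z[eval_idx])
    # Stage 2: average the top-alpha tail of the fitted conditional losses
    # (a CVaR), a plug-in estimate of the worst-case risk over
    # subpopulations of proportion alpha defined through Z.
    k = max(1, int(np.ceil(alpha * len(eta))))
    return float(np.sort(eta)[::-1][:k].mean())

Comparing the returned value with the average loss indicates whether the model's performance collapses on some subpopulation, which is the kind of certificate used to flag unreliable models before deployment.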

Identifier oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/cmhz-z587
Date January 2024
Creators Xia, Shangzhou
Source Sets Columbia University
Language English
Detected Language English
Type Theses
