
Statistically and Computationally Efficient Resampling and Distributionally Robust Optimization with Applications

Uncertainty quantification via the construction of confidence regions has long been studied in statistics. While these existing methods are powerful and commonly used, some modern problems that require expensive model fitting, or that elicit convoluted interactions between statistical and computational noise, can challenge the effectiveness of these methods. To remedy some of these challenges, this thesis proposes novel approaches that not only guarantee statistical validity but are also computationally efficient. We study two main methodological directions: resampling-based methods in the first half of the thesis (Chapters 2 and 3) and optimization-based methods, in particular so-called distributionally robust optimization, in the second half (Chapters 4 to 6).

The first half focuses on the bootstrap, a common approach to statistical inference. This approach resamples the data and hinges on the principle of using the resampling distribution as an approximation to the sampling distribution. However, implementing the bootstrap often demands extensive resampling and model-refitting effort to wash away the Monte Carlo error, which can be computationally expensive for modern problems. Chapters 2 and 3 study bootstrap approaches that use fewer resamples while maintaining coverage validity, as well as uncertainty quantification for models subject to both statistical and Monte Carlo computation errors.

In Chapter 2, we investigate bootstrap-based construction of confidence intervals using minimal resampling. We use a “cheap” bootstrap perspective based on sample-resample independence that yields valid coverage with as few as one resample, even when the problem dimension grows closely with the data size. We validate our theoretical findings and assess our approach against other benchmarks on various large-scale or high-dimensional problems. In Chapter 3, we focus on the so-called input uncertainty problem in stochastic simulation, which refers to the propagation of statistical noise from calibrating input models into the output accuracy. Unlike most existing literature, which focuses on real-valued output quantities, we aim to construct confidence bands for the entire output distribution function, which carries more holistic information. We develop a new test statistic that generalizes the Kolmogorov-Smirnov statistic to construct confidence bands that account for input uncertainty on top of Monte Carlo errors, via an additional asymptotic component formed by a mean-zero Gaussian process. We also demonstrate how subsampling can be used to estimate the covariance function of this Gaussian process in a computationally cheap fashion.
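As a concrete illustration of the Chapter 2 idea, the sketch below implements a cheap-bootstrap-style confidence interval in Python: a point estimate, a handful of resample replications (as few as one), and a t critical value with degrees of freedom equal to the number of resamples. The interval form here is our assumption of that construction for illustration, not a verbatim statement of the thesis's procedure.

    # Minimal sketch of a cheap-bootstrap-style confidence interval (illustrative;
    # the assumed construction uses the t-quantile with B degrees of freedom and
    # the spread of the B resample estimates around the original point estimate).
    import numpy as np
    from scipy import stats

    def cheap_bootstrap_ci(data, estimator, B=1, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        psi_hat = estimator(data)                    # estimate on the original sample
        reps = np.array([
            estimator(rng.choice(data, size=len(data), replace=True))
            for _ in range(B)                        # B can be as small as 1
        ])
        s = np.sqrt(np.mean((reps - psi_hat) ** 2))  # spread of resample estimates
        t = stats.t.ppf(1 - alpha / 2, df=B)         # t critical value, B degrees of freedom
        return psi_hat - t * s, psi_hat + t * s

    # Hypothetical usage: a 95% interval for a median using a single resample.
    sample = np.random.default_rng(1).lognormal(size=2000)
    print(cheap_bootstrap_ci(sample, np.median, B=1))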
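For orientation on Chapter 3, the classical Kolmogorov-Smirnov statistic being generalized compares the empirical distribution function $\hat F_n$ with a reference distribution $F$,
\[
D_n = \sup_x \bigl|\hat F_n(x) - F(x)\bigr|,
\]
and any critical value $c_{1-\alpha}$ with $P(D_n \le c_{1-\alpha}) \ge 1-\alpha$ yields the band $\hat F_n(x) \pm c_{1-\alpha}$. The statistic in Chapter 3 plays the same role, but its limit carries an additional mean-zero Gaussian process component that captures input uncertainty on top of the Monte Carlo error; the precise form is given in that chapter.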

The second part of the thesis is devoted to optimization-based methods, in particular distributionally robust optimization (DRO). Originally built to tackle uncertainty about the underlying distribution in stochastic optimization, DRO adopts a worst-case perspective and seeks decisions that optimize under the worst-case scenario over the so-called ambiguity set that represents the distributional uncertainty. In this thesis, we turn DRO broadly into a statistical tool (still referred to as DRO) by optimizing targets of interest over the ambiguity set and transforming the coverage guarantee of the ambiguity set into confidence bounds for those targets. The flexibility of ambiguity sets advantageously allows the injection of prior distributional knowledge, which operates with less data than existing methods require.
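To make the coverage-to-confidence-bound translation concrete, a generic instance (in notation of our own choosing) reads as follows: if the ambiguity set $\mathcal{U}$ satisfies $P(P_0 \in \mathcal{U}) \ge 1-\alpha$ for the true distribution $P_0$, then for any target functional $\psi$,
\[
\min_{Q \in \mathcal{U}} \psi(Q) \;\le\; \psi(P_0) \;\le\; \max_{Q \in \mathcal{U}} \psi(Q)
\quad \text{with probability at least } 1-\alpha,
\]
so solving the two optimization problems delivers a $(1-\alpha)$-level confidence interval for $\psi(P_0)$.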

In Chapter 4, motivated by the bias-variance tradeoff and other technical complications in conventional multivariate extreme value theory, we propose a shape-constrained DRO called orthounimodality DRO (OU-DRO) as a vehicle to incorporate natural and verifiable information about the tail. We study its statistical guarantees and tractability, especially in the bivariate setting, via a new Choquet representation in convex analysis. Chapter 5 further studies a general approach that applies to higher dimensions via sample average approximation (SAA) and importance sampling. We establish convergence guarantees for the SAA optimal value of OU-DRO in any dimension under regularity conditions. We also argue that the resulting SAA problem is a linear program that can be solved by off-the-shelf algorithms.
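To illustrate the SAA-to-linear-program reduction in a generic way (with hypothetical moment constraints of our own choosing, not the orthounimodality constraints of Chapter 5), note that the worst-case expectation over reweightings of sampled support points is linear in the weights, so an off-the-shelf LP solver applies directly:

    # Generic sketch: worst-case expectation over reweighted sample points as an LP.
    # The constraints below (two moment bounds) are hypothetical stand-ins for the
    # structural constraints an actual OU-DRO formulation would impose.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    x = rng.exponential(size=500)        # sampled support points (via SAA / importance sampling)
    h = (x > 3.0).astype(float)          # target of interest: a tail probability P(X > 3)
    G = np.vstack([x, x**2])             # constraint functions: first two moments
    b = np.array([1.1, 2.2])             # hypothetical bounds on those moments

    # linprog minimizes, so negate h to maximize the worst-case expectation over weights w.
    res = linprog(c=-h, A_ub=G, b_ub=b, A_eq=np.ones((1, x.size)), b_eq=[1.0],
                  bounds=(0, None), method="highs")
    print("worst-case tail probability:", -res.fun)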

In Chapter 6, we study the connection between the out-of-sample errors of data-driven stochastic optimization and DRO via large deviations theory. We propose a special type of DRO formulation whose ambiguity set is based on a Kullback-Leibler divergence smoothed by the Wasserstein or Lévy-Prokhorov distance. We relate large deviations theory to the performance of the proposed DRO and show that it achieves nearly optimal out-of-sample performance in terms of the exponential decay rate of the generalization error. Furthermore, the proposed DRO is no harder to compute than DRO problems based on f-divergences or Wasserstein distances, which leads to a statistically optimal and computationally tractable DRO formulation.
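As a rough sketch of what such a transport-smoothed divergence ball can look like (one plausible formalization, with radius parameters $\varepsilon, \rho$ of our own choosing; the precise definition used in Chapter 6 may differ), consider
\[
\min_{x}\; \max_{Q:\, D_\varepsilon(Q,\hat P_n)\le \rho}\; \mathbb{E}_Q[h(x,\xi)],
\qquad
D_\varepsilon(Q,\hat P_n) \;=\; \inf\bigl\{\mathrm{KL}(Q' \,\|\, \hat P_n) : \mathcal{W}(Q,Q') \le \varepsilon \bigr\},
\]
where the transport distance $\mathcal{W}$ (Wasserstein or Lévy-Prokhorov) relaxes the support restriction of a plain KL ball around the empirical distribution $\hat P_n$. Large deviations theory then quantifies how fast the probability that the resulting bound fails out of sample decays exponentially in the sample size.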

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/7b7q-k493
Date: January 2024
Creators: Liu, Zhenyuan
Source Sets: Columbia University
Language: English
Detected Language: English
Type: Theses
