1

Optimization-based approaches to non-parametric extreme event estimation

Mottet, Clementine Delphine Sophie 09 October 2018 (has links)
Modeling extreme events is one of the central tasks in risk management and planning, as catastrophes and crises put human lives and financial assets at stake. A common approach to estimating the likelihood of extreme events, based on extreme value theory (EVT), studies the asymptotic behavior of the "tail" portion of the data and suggests suitable parametric distributions to fit, backed by their limiting behavior as the data size or the excess threshold grows. We explore an alternative approach to estimating extreme events, inspired by recent advances in robust optimization. Our approach represents information about tail behavior as constraints and estimates a target extremal quantity of interest (e.g., the tail probability above a given high level) by posing an optimization problem that finds a conservative estimate subject to constraints encoding beliefs about the tail distributional shape. We first study programs in which the feasible region is restricted to distribution functions with convex tail densities, a feature shared by all common parametric tail distributions. We then extend this work by generalizing the feasible region to distribution functions with monotone derivatives and bounded or infinite moments. In both cases, we study the statistical implications of the resulting optimization problems. Through investigating their optimality structures, we also show that the worst-case tail generally behaves as a linear combination of polynomially decaying tails. Numerically, we develop results that reduce these optimization problems to tractable forms amenable to linear-programming-based solution schemes.
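A minimal sketch of the optimization viewpoint described above, assuming only that the mean and second moment of the quantity of interest are known: the conservative (worst-case) tail probability is obtained by maximizing P(X >= u) over discrete distributions on a grid that satisfy those moment constraints. The thesis's programs additionally impose shape constraints such as tail convexity, which this toy linear program omits, and all numbers (grid, moments, threshold) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

grid = np.linspace(0.0, 20.0, 2001)     # hypothetical discretized support
mu, second_moment, u = 1.0, 2.5, 10.0   # assumed tail information and threshold

c = -(grid >= u).astype(float)          # maximize P(X >= u) -> minimize its negative
A_eq = np.vstack([np.ones_like(grid),   # probabilities sum to one
                  grid,                 # match the assumed mean
                  grid**2])             # match the assumed second moment
b_eq = np.array([1.0, mu, second_moment])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("conservative estimate of P(X >= u):", -res.fun)
```

With only these two moment constraints, the optimal value approaches the classical one-sided Chebyshev (Cantelli) bound as the grid is refined; adding the tail-shape constraints studied in the thesis can only tighten the conservative estimate.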
2

Data-Driven Methods for Optimization Under Uncertainty with Application to Water Allocation

Love, David Keith January 2013 (has links)
Stochastic programming is a mathematical technique for decision making under uncertainty using probabilistic statements in the problem objective and constraints. In practice, the distribution of the unknown quantities is often known only through observed or simulated data. This dissertation discusses several methods of using such data to formulate, solve, and evaluate the quality of solutions of stochastic programs. The central contribution of this dissertation is to investigate the use of techniques from simulation and statistics to enable data-driven models and methods for stochastic programming. We begin by extending the method of overlapping batches from simulation to assessing solution quality in stochastic programming. The Multiple Replications Procedure, where multiple stochastic programs are solved using independent batches of samples, has previously been used for assessing solution quality. The Overlapping Multiple Replications Procedure overlaps the batches, thus losing independence between samples but reducing the variance of the estimator without affecting its bias. We provide conditions under which the optimality gap estimators are consistent and the variance reduction benefits are obtained, and we give a computational illustration of the small-sample behavior. Our second result explores the use of phi-divergences for distributionally robust optimization, also known as ambiguous stochastic programming. The phi-divergences provide a method of measuring distance between probability distributions, are widely used in statistical inference and information theory, and have recently been proposed for formulating data-driven stochastic programs. We provide a novel classification of phi-divergences for stochastic programming and give recommendations for their use. A value-of-data condition is derived, and the asymptotic behavior of the phi-divergence constrained stochastic program is described. A decomposition-based solution method is then proposed to solve problems computationally. The final portion of this dissertation applies the phi-divergence method to a problem of water allocation in a developing region of Tucson, AZ. In this application, we integrate several sources of uncertainty into a single model, including (1) future population growth in the region, (2) the amount of water available from the Colorado River, and (3) the effects of climate variability on water demand. Estimates of the frequency and severity of future water shortages are given, and we evaluate the effectiveness of several infrastructure options.
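As a concrete instance of the phi-divergence idea mentioned above, the sketch below uses the modified chi-squared divergence, one member of the family, to compute the worst-case expected cost over distributions within a divergence ball around the empirical distribution, which is the inner problem of a distributionally robust stochastic program. The scenario costs, empirical weights, and divergence budget are all made up.

```python
import cvxpy as cp
import numpy as np

costs = np.array([3.0, 1.0, 4.0, 1.5, 5.0])   # hypothetical per-scenario costs
q_hat = np.ones(5) / 5                        # empirical scenario probabilities
rho = 0.05                                    # divergence budget (assumed)

q = cp.Variable(5, nonneg=True)
# modified chi-squared divergence between q and q_hat
chi2 = cp.sum(cp.multiply(q_hat, cp.square(q / q_hat - 1)))
prob = cp.Problem(cp.Maximize(costs @ q), [cp.sum(q) == 1, chi2 <= rho])
prob.solve()

print("nominal expected cost   :", float(costs @ q_hat))
print("worst-case expected cost:", prob.value)
```

Other members of the phi-divergence family (Kullback-Leibler, Hellinger, variation distance, and so on) change only the divergence expression above, which is one reason a classification of the divergences, as in the dissertation, is useful in practice.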
3

Distributionally robust unsupervised domain adaptation and its applications in 2D and 3D image analysis

Wang, Yibin 08 August 2023 (has links) (PDF)
Obtaining ground-truth label information from real-world data, along with uncertainty quantification, can be challenging or even infeasible. In the absence of labeled data for a certain task, unsupervised domain adaptation (UDA) techniques have achieved considerable success by learning transferable knowledge from labeled source-domain data and adapting it to unlabeled target-domain data, yet uncertainty remains a major concern under domain shift. Distributionally robust learning (DRL) is emerging as a promising technique for building reliable learning systems that are robust to distribution shifts. In this research, a distributionally robust unsupervised domain adaptation (DRUDA) method is proposed to enhance the generalization ability of machine learning models under input-space perturbations. The DRL-based UDA learning scheme is formulated as a min-max optimization problem by optimizing worst-case perturbations of the training source data. Our Wasserstein distributionally robust framework can reduce the shifts in the joint distributions across domains. The proposed DRUDA method has been tested on various benchmark datasets. In addition, a gradient mapping-guided explainable network (GMGENet) is proposed to analyze 3D medical images for extracapsular extension (ECE) identification. DRUDA-enhanced GMGENet is evaluated, and experimental results demonstrate that the proposed DRUDA successfully improves transfer performance on target domains for the 3D image analysis task. This research enhances the understanding of distributionally robust optimization in domain adaptation and is expected to advance current unsupervised machine learning techniques.
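A conceptual numpy sketch of the min-max training scheme described above, reduced to a linear logistic model with an l-infinity-bounded additive input perturbation, for which the inner maximization has a closed form. It is only meant to convey the worst-case-perturbation idea: the actual DRUDA method uses a Wasserstein ambiguity set over joint distributions and deep networks, and all data and hyperparameters here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                       # stand-in source-domain features
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))    # stand-in binary labels in {-1, +1}
w = np.zeros(10)
eps, lr, steps = 0.1, 0.05, 200                      # perturbation radius / step size (assumed)

def grad_logistic(w, X, y):
    """Gradient of the mean logistic loss for a linear model."""
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))            # sigmoid of the negative margin
    return -(X * (s * y)[:, None]).mean(axis=0)

for _ in range(steps):
    # inner maximization: for a linear model and an l-infinity budget,
    # the loss-maximizing perturbation is delta_i = -eps * y_i * sign(w)
    delta = -eps * y[:, None] * np.sign(w)[None, :]
    # outer minimization: gradient step on the worst-case (perturbed) loss
    w -= lr * grad_logistic(w, X + delta, y)

print("training accuracy on worst-case inputs:", np.mean(np.sign((X + delta) @ w) == y))
```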
4

Models, algorithms, and distributional robustness in Nash games and related problems / ナッシュゲームと関連する問題におけるモデル・アルゴリズム・分布的ロバスト性

Hori, Atsushi 23 March 2023 (has links)
Kyoto University / New system, doctoral program / Doctor of Informatics / Degree No. 24741 (Kou) / Informatics Doctorate No. 829 / 新制||情||139 (university library call number) / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / (Examination committee) Professor Nobuo Yamashita, Professor Yoshito Ohta, Professor Hiroshi Nagamochi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
5

Distributionally Robust Learning under the Wasserstein Metric

Chen, Ruidi 29 September 2019 (has links)
This dissertation develops a comprehensive statistical learning framework that is robust to (distributional) perturbations in the data, using Distributionally Robust Optimization (DRO) under the Wasserstein metric. The learning problems studied include: (i) Distributionally Robust Linear Regression (DRLR), which estimates a robustified linear regression plane by minimizing the worst-case expected absolute loss over a probabilistic ambiguity set characterized by the Wasserstein metric; (ii) Groupwise Wasserstein Grouped LASSO (GWGL), which aims at inducing sparsity at a group level when there exists a predefined grouping structure for the predictors, by defining a specially structured Wasserstein metric for DRO; (iii) optimal decision making using DRLR-informed K-Nearest Neighbors (K-NN) estimation, which selects among a set of actions the optimal one by predicting the outcome under each action using K-NN with a distance metric weighted by the DRLR solution; and (iv) Distributionally Robust Multivariate Learning, which solves a DRO problem with a multi-dimensional response/label vector, as in Multivariate Linear Regression (MLR) and Multiclass Logistic Regression (MLG), generalizing the univariate response model addressed in DRLR. A tractable DRO relaxation is derived for each problem, establishing a connection between robustness and regularization and yielding upper bounds on the prediction and estimation errors of the solution. The accuracy and robustness of the estimators are verified through a series of synthetic and real-data experiments. The experiments with real data are all associated with health informatics applications, the application area that motivated the work in this dissertation. In addition to estimation (regression and classification), this dissertation also considers outlier detection applications.
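The robustness-regularization connection mentioned above can be illustrated with a small cvxpy sketch: under an order-1 Wasserstein ball defined through the Euclidean norm on the joint (feature, response) space, the worst-case expected absolute loss of a linear model reduces to the empirical absolute loss plus a norm penalty on the extended coefficient vector. The exact penalty depends on the transport cost chosen, and the data and radius below are synthetic.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
true_beta = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ true_beta + 0.1 * rng.standard_t(df=3, size=100)   # heavy-tailed noise
eps = 0.1                                                  # Wasserstein radius (assumed)

beta = cp.Variable(5)
b0 = cp.Variable()
residuals = y - X @ beta - b0
empirical_loss = cp.sum(cp.abs(residuals)) / len(y)
penalty = eps * cp.norm(cp.hstack([beta, np.array([-1.0])]), 2)  # || (beta, -1) ||_2
cp.Problem(cp.Minimize(empirical_loss + penalty)).solve()

print("robustified coefficients:", np.round(beta.value, 3))
```

Larger radii eps shrink the coefficients more aggressively, which reflects the robustness-regularization connection the abstract describes.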
6

Risk-Averse and Distributionally Robust Optimization: Methodology and Applications

Rahimian, Hamed 11 October 2018 (has links)
No description available.
7

Optimisation stochastique avec contraintes en probabilités et applications / Chance constrained problem and its applications

Peng, Shen 17 June 2019 (has links)
Chance constrained optimization is a natural and widely used approach to obtaining profitable and reliable decisions under uncertainty, and the theory and applications of chance constrained problems remain an active topic. However, several important issues still require non-trivial effort to resolve. In view of this, this thesis systematically investigates chance constrained problems from the following perspectives. As the basis for chance constrained problems, we first review the main research results on chance constraints from three perspectives: convexity of chance constraints, reformulations and approximations for chance constraints, and distributionally robust chance constraints. For stochastic geometric programs, we consider a joint rectangular geometric chance constrained program. Assuming the stochastic parameters are elliptically distributed and pairwise independent, we derive a reformulation of the joint rectangular geometric chance constrained program. As the reformulation is not convex, we propose new convex approximations based on a variable transformation together with piecewise linear approximation methods. Our numerical results show that these approximations are asymptotically tight. When the probability distributions are not known in advance, or the reformulation of the chance constraints is hard to obtain, bounds on chance constraints can be very useful. We therefore develop four upper bounds for individual and joint chance constraints whose constraint-matrix row vectors are independent. Based on the one-sided Chebyshev, Chernoff, Bernstein, and Hoeffding inequalities, we propose deterministic approximations for chance constraints. In addition, we derive various sufficient conditions under which these approximations are convex and tractable. To reduce the computational complexity further, we reformulate the approximations as tractable convex optimization problems based on piecewise linear and tangent approximations. Finally, numerical experiments on randomly generated data are discussed in order to identify the tight deterministic approximations. In some complex systems, the distribution of the random parameters is only partially known. To deal with such distributional uncertainty given sample data, we propose a data-driven mixture-distribution-based uncertainty set, constructed from the perspective of simultaneously estimating higher-order moments. With this uncertainty set, we derive a reformulation of the data-driven robust chance constrained problem. As the reformulation is not a convex program, we propose new, tight convex approximations based on the piecewise linear approximation method under certain conditions. For the general case, we propose a DC approximation to derive an upper bound and a relaxed convex approximation to derive a lower bound on the optimal value of the original problem, and we establish the theoretical foundations for these approximations. Finally, simulation experiments show that the proposed approximations are practical and efficient. We also consider a stochastic n-player non-cooperative game. When the strategy set of each player contains a set of stochastic linear constraints, we model these constraints as a joint chance constraint, assuming for each player that the row vectors of the matrix defining the stochastic constraints are pairwise independent. We then formulate the chance constraints under normal distributions, elliptical distributions, and a distributionally robust framework, respectively. Under certain conditions, we show the existence of a Nash equilibrium for the stochastic game.
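One of the standard reformulations surveyed above, sketched with cvxpy: an individual linear chance constraint whose coefficient vector is normally distributed is equivalent to a second-order cone constraint. All problem data (mean, covariance, right-hand side, objective, and the extra budget constraint) are arbitrary placeholders.

```python
import cvxpy as cp
import numpy as np
from scipy.stats import norm

# P(a^T x <= b) >= 1 - alpha with a ~ N(mu, Sigma) is equivalent to
# mu^T x + z_{1-alpha} * || L^T x ||_2 <= b, where Sigma = L L^T and alpha <= 1/2.
rng = np.random.default_rng(3)
n, alpha = 4, 0.05
mu = rng.normal(size=n)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + np.eye(n)          # hypothetical covariance
L = np.linalg.cholesky(Sigma)
b, c = 10.0, -np.ones(n)             # placeholder data

x = cp.Variable(n, nonneg=True)
z = norm.ppf(1 - alpha)
chance = mu @ x + z * cp.norm(L.T @ x, 2) <= b
cp.Problem(cp.Minimize(c @ x), [chance, cp.sum(x) <= 5]).solve()

print("optimal x under the chance constraint:", np.round(x.value, 3))
```

Joint chance constraints, elliptical distributions, and the distributionally robust variants treated in the thesis require more work, typically the bounding and piecewise-approximation techniques described above.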
8

Applications and algorithms for two-stage robust linear optimization / Applications et algorithmes pour l'optimisation linéaire robuste en deux étapes

Costa da Silva, Marco Aurelio 13 November 2018 (has links)
The research scope of this thesis is two-stage robust linear optimization. We are interested in investigating algorithms that exploit its structure and in adding alternatives to mitigate the conservatism inherent in a robust solution. We develop algorithms that incorporate these alternatives and are customized to work with medium- or large-scale instances. In doing so, we take a holistic approach to conservatism in robust linear optimization and bring together the most recent advances in areas such as data-driven robust optimization, distributionally robust optimization, and adaptive robust optimization. We apply these algorithms to the network design/loading problem, the scheduling problem, a min-max-min combinatorial problem, and the airline fleet assignment problem, and we show how the developed algorithms improve performance compared to previous implementations.
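For intuition about the problem class, here is a deliberately naive cvxpy baseline for a tiny two-stage robust LP with interval (box) demand uncertainty: every vertex of the uncertainty set gets its own recourse variables, which is exact for this structure but scales exponentially in the number of uncertain parameters. The algorithms developed in the thesis are precisely about avoiding this kind of enumeration on medium- and large-scale instances; all problem data below are invented.

```python
import itertools
import cvxpy as cp
import numpy as np

c = np.array([1.0, 1.2])                       # first-stage capacity costs
penalty = 5.0                                  # unit cost of second-stage recourse
d_lo = np.array([2.0, 1.0])                    # lower demand bounds
d_hi = np.array([6.0, 4.0])                    # upper demand bounds

x = cp.Variable(2, nonneg=True)                # first-stage capacities
eta = cp.Variable()                            # worst-case recourse cost
constraints = []
for vertex in itertools.product(*zip(d_lo, d_hi)):   # the 2^2 demand vertices
    u = np.array(vertex)
    y = cp.Variable(2, nonneg=True)            # recourse for this demand scenario
    constraints += [y >= u - x, eta >= penalty * cp.sum(y)]

prob = cp.Problem(cp.Minimize(c @ x + eta), constraints)
prob.solve()
print("robust capacities:", np.round(x.value, 3), " worst-case cost:", round(prob.value, 3))
```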
9

[en] CONSERVATIVE-SOLUTION METHODOLOGIES FOR STOCHASTIC PROGRAMMING: A DISTRIBUTIONALLY ROBUST OPTIMIZATION APPROACH / [pt] METODOLOGIAS PARA OBTENÇÃO DE SOLUÇÕES CONSERVADORAS PARA PROGRAMAÇÃO ESTOCÁSTICA: UMA ABORDAGEM DE OTIMIZAÇÃO ROBUSTA À DISTRIBUIÇÕES

CARLOS ANDRES GAMBOA RODRIGUEZ 20 July 2021 (has links)
[en] Two-stage stochastic programming is a mathematical framework widely used in real-life applications such as power system operation planning, supply chains, logistics, inventory management, and financial planning. Since most of these problems cannot be solved analytically, decision-makers use numerical methods to obtain a near-optimal solution. Some applications rely on the implementation of non-converged, and therefore sub-optimal, solutions because of limitations on computational time or power. In this context, the existing methods provide an optimistic solution whenever convergence is not attained. Optimistic solutions often generate high disappointment levels because they consistently underestimate the actual costs in the approximate objective function. To address this issue, we have developed two conservative-solution methodologies for two-stage stochastic linear programming problems with right-hand-side uncertainty and rectangular support. When the actual data-generating probability distribution is known, we propose a DRO (Distributionally Robust Optimization) problem based on partition-adapted conditional expectations, whose complexity grows exponentially with the uncertainty dimensionality. When only historical observations of the uncertainty are available, we propose a DRO problem based on the Wasserstein metric to incorporate ambiguity over the actual data-generating probability distribution. For this latter approach, existing methods rely on dual vertex enumeration of the second-stage problem, rendering the DRO problem intractable in practical applications. In this context, we propose algorithmic schemes to address the computational complexity of both approaches. Computational experiments are presented for the farmer problem, the aircraft allocation problem, and the stochastic unit commitment problem.
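A small illustration of the Wasserstein-ball ambiguity described above, for a fixed first-stage decision: the worst-case expected recourse cost over all distributions within a given transport distance of the empirical distribution, with the worst-case support restricted to a finite candidate grid so that the problem becomes a linear program in the transport plan. The samples, cost function, grid, and radius are all made up, and the thesis works with exact reformulations rather than this discretization.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
xi_hat = rng.uniform(0, 10, size=8)          # historical observations of the uncertainty
z = np.linspace(0, 12, 61)                   # candidate support points
f = np.maximum(z - 5.0, 0.0)                 # recourse cost evaluated on the support
rho = 1.0                                    # Wasserstein radius (assumed)

D = np.abs(xi_hat[:, None] - z[None, :])     # transport costs |xi_i - z_j|
Pi = cp.Variable((len(xi_hat), len(z)), nonneg=True)          # transport plan
constraints = [cp.sum(Pi, axis=1) == 1.0 / len(xi_hat),       # marginals match the empirical law
               cp.sum(cp.multiply(D, Pi)) <= rho]             # stay inside the Wasserstein ball
worst = cp.Problem(cp.Maximize(cp.sum(Pi @ f)), constraints)
worst.solve()

print("empirical expected cost :", float(np.maximum(xi_hat - 5.0, 0.0).mean()))
print("worst-case expected cost:", worst.value)
```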
10

[en] PORTFOLIO SELECTION VIA DATA-DRIVEN DISTRIBUTIONALLY ROBUST OPTIMIZATION / [pt] SELEÇÃO DE CARTEIRAS DE ATIVOS FINANCEIROS VIA DATA-DRIVEN DISTRIBUTIONALLY ROBUST OPTIMIZATION

JOAO GABRIEL FELIZARDO S SCHLITTLER 07 January 2019 (has links)
[en] Portfolio optimization traditionally assumes knowledge of the probability distribution of returns, or at least some of its moments. However, it is well known that the probability distribution of returns changes over time, making it difficult to use purely statistical models that rely entirely on an estimated distribution. Robust optimization, on the other hand, assumes a complete lack of knowledge about the distribution of returns and therefore seeks a solution that is optimal for every possible realization within an uncertainty set for the returns. More recently, the literature has shown that distributionally robust optimization techniques make it possible to deal with ambiguity regarding the distribution of returns. These methods, however, depend on the construction of the ambiguity set, that is, the family of probability distributions to be considered. This work proposes the construction of polyhedral ambiguity sets based only on a sample of returns. In these sets, the relations between variables are determined by the data in a non-parametric way, and the sets are thus free of possible specification errors of a stochastic model. We propose an algorithm for constructing the ambiguity set and, given the set, a computationally tractable reformulation of the portfolio optimization problem. Numerical experiments show better performance of the model compared to selected benchmarks.
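To make the ambiguity-set idea concrete, the sketch below evaluates, for a fixed portfolio, the worst-case expected return over a simple polyhedral set of scenario probabilities (an l1 ball around the empirical weights intersected with the probability simplex). This toy set is only a stand-in: the thesis builds its polyhedral ambiguity sets non-parametrically from the return sample, and the returns, weights, and budget below are fabricated. The full model would then choose the portfolio weights that maximize this worst-case value.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
R = rng.normal(loc=0.05, scale=0.10, size=(30, 4))   # scenario-by-asset returns (made up)
w = np.array([0.4, 0.3, 0.2, 0.1])                   # fixed portfolio weights
q_hat = np.ones(30) / 30                             # empirical scenario probabilities
gamma = 0.02                                         # polyhedral budget (assumed)

r = R @ w                                            # per-scenario portfolio returns
q = cp.Variable(30, nonneg=True)
ambiguity = [cp.sum(q) == 1,                         # q is a probability vector
             cp.norm(q - q_hat, 1) <= gamma]         # and stays close to the empirical weights
worst = cp.Problem(cp.Minimize(r @ q), ambiguity)
worst.solve()

print("nominal expected return   :", float(r @ q_hat))
print("worst-case expected return:", worst.value)
```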
