  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Krigeage pour la conception de turbomachines : grande dimension et optimisation multi-objectif robuste / Kriging for turbomachinery design: high dimension and robust multi-objective optimization

Ribaud, Mélina 17 October 2018 (has links)
Dans le secteur de l'automobile, les turbomachines sont des machines tournantes participant au refroidissement des moteurs des voitures. Leur performance dépend de multiples paramètres géométriques qui déterminent leur forme. Cette thèse s'inscrit dans le projet ANR PEPITO réunissant industriels et académiques autour de l'optimisation de ces turbomachines. L'objectif du projet est de trouver la forme du ventilateur maximisant le rendement en certains points de fonctionnement. Dans ce but, les industriels ont développé des codes CFD (computational fluid dynamics) simulant le fonctionnement de la machine. Ces codes sont très coûteux en temps de calcul. Il est donc impossible d'utiliser directement le résultat de ces simulations pour conduire une optimisation. Par ailleurs, lors de la construction des turbomachines, on observe des perturbations sur les paramètres d'entrée. Elles sont le reflet de fluctuations des machines de production. Les écarts observés sur la forme géométrique finale de la turbomachine peuvent provoquer une perte de performance conséquente. Il est donc nécessaire de prendre en compte ces perturbations et de procéder à une optimisation robuste à ces fluctuations. Dans ce travail de thèse, nous proposons des méthodes basées sur du krigeage répondant aux deux principales problématiques liées à ce contexte de simulations coûteuses :
• Comment construire une bonne surface de réponse pour le rendement lorsqu'il y a beaucoup de paramètres géométriques ?
• Comment procéder à une optimisation du rendement efficace tout en prenant en compte les perturbations des entrées ?
Nous répondons à la première problématique en proposant plusieurs algorithmes permettant de construire un noyau de covariance pour le krigeage adapté à la grande dimension. Ce noyau est un produit tensoriel de noyaux isotropes où chacun de ces noyaux est lié à un sous-groupe de variables d'entrée. Ces algorithmes sont testés sur des cas simulés et sur une fonction réelle.
Les résultats montrent que l'utilisation de ce noyau permet d'améliorer la qualité de prédiction en grande dimension. Concernant la seconde problématique, nous proposons plusieurs stratégies itératives basées sur un co-krigeage avec dérivées pour conduire l'optimisation robuste. A chaque itération, un front de Pareto est obtenu par la minimisation de deux objectifs calculés à partir des prédictions de la fonction coûteuse. Le premier objectif représente la fonction elle-même et le second la robustesse. Cette robustesse est quantifiée par un critère estimant une variance locale et basé sur le développement de Taylor. Ces stratégies sont comparées sur deux cas tests en petite et plus grande dimension. Les résultats montrent que les meilleures stratégies permettent bien de trouver l'ensemble des solutions robustes. Enfin, les méthodes proposées sont appliquées sur les cas industriels propres au projet PEPITO. / Turbomachines are rotating machines used to cool automotive engines. Their efficiency depends on a large number of geometric parameters that describe their shape. This thesis is funded by the ANR project PEPITO, a collaboration between industry and academia. The aim of the project is to find the turbomachine shape that maximizes the efficiency. To this end, the industrial partners have developed CFD (computational fluid dynamics) codes that simulate the operation of the machine. However, these simulations are time-consuming, so they cannot be used directly to perform the optimization. In addition, on the production line the input variables are subject to perturbations caused by fluctuations of the production machinery. The resulting deviations in the final shape of the turbomachine can cause a significant loss of efficiency. These perturbations have to be taken into account in order to conduct an optimization that is robust to the fluctuations.
In this thesis, given this context of time-consuming simulations, we propose kriging-based methods that address the industrial partners' two main questions:
• How can we construct a good response surface for the efficiency when the number of input variables is high?
• How can we conduct an efficient optimization of the efficiency while taking the input perturbations into account?
To answer the first question, several algorithms are proposed that construct a covariance kernel adapted to high dimension. This kernel is a tensor product of isotropic kernels, one per subgroup of input variables. These algorithms are benchmarked on simulated cases and on a real function. The results show that this kernel improves the prediction quality in high dimension. For the second question, seven iterative strategies based on a co-kriging model are proposed to conduct the robust optimization. At each iteration, a Pareto front is obtained by minimizing two objectives computed from the kriging predictions: the first represents the function itself and the second its robustness. A criterion based on the Taylor expansion is used to estimate a local variance; this criterion quantifies the robustness. The strategies are compared on two test cases, in small and higher dimension. The results show that the best strategies successfully recover the set of robust solutions. Finally, the methods are applied to the industrial cases of the PEPITO project.
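The grouped tensor-product kernel described above can be sketched in a few lines of numpy. This is a minimal illustration only: it assumes Gaussian (squared-exponential) isotropic kernels and a fixed, known grouping of the inputs, whereas the thesis's algorithms learn the groups and estimate the ranges; the function names (`grouped_kernel`, `kriging_predict`) are ours, not the thesis's.

```python
import numpy as np

def grouped_kernel(X1, X2, groups, thetas):
    """Tensor product of isotropic Gaussian kernels, one per group of inputs.

    groups: list of index lists partitioning the input dimensions (assumed
    known here; in the thesis the grouping is constructed algorithmically).
    """
    K = np.ones((X1.shape[0], X2.shape[0]))
    for idx, theta in zip(groups, thetas):
        # squared Euclidean distance restricted to this group's coordinates
        d2 = ((X1[:, None, idx] - X2[None, :, idx]) ** 2).sum(axis=-1)
        K *= np.exp(-theta * d2)
    return K

def kriging_predict(Xtrain, ytrain, Xtest, groups, thetas, nugget=1e-10):
    """Zero-mean simple kriging prediction with the grouped kernel."""
    n = len(ytrain)
    K = grouped_kernel(Xtrain, Xtrain, groups, thetas) + nugget * np.eye(n)
    k = grouped_kernel(Xtest, Xtrain, groups, thetas)
    return k @ np.linalg.solve(K, ytrain)
```

With a tiny nugget, the predictor interpolates the training data, which gives a quick sanity check of an implementation.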
52

Essays on Approximation Algorithms for Robust Linear Optimization Problems

Lu, Brian Yin January 2016 (has links)
Solving optimization problems under uncertainty has been an important topic since the emergence of mathematical programming in the mid-20th century. George Dantzig's 1955 paper, “Linear Programming under Uncertainty”, is considered one of the ten most influential papers in Management Science [26]. The methodology introduced in Dantzig's paper is called stochastic programming, since it assumes an underlying probability distribution of the uncertain input parameters. However, stochastic programming suffers from the “curse of dimensionality”, and knowing the exact distribution of the input parameters may not be realistic. Robust optimization, on the other hand, models the uncertainty by a deterministic uncertainty set, and the goal is to optimize against the worst-case scenario from that set. Many studies in robust optimization have been conducted in recent years; we refer the reader to Ben-Tal and Nemirovski [4–6], El Ghaoui and Lebret [19], Bertsimas and Sim [15, 16], Goldfarb and Iyengar [23], and Bertsimas et al. [8] for a review. Computing an optimal adjustable (or dynamic) solution to a robust optimization problem is generally hard. This motivates us to study the hardness of approximation of the problem and to provide efficient approximation algorithms. In this dissertation, we consider adjustable robust linear optimization problems with packing and covering formulations and their approximation algorithms. In particular, we study the performance of static solutions and affine solutions as approximations for the adjustable robust problem. Chapters 2 and 3 consider the two-stage adjustable robust linear packing problem with uncertain second-stage constraint coefficients. For general convex, compact, and down-monotone uncertainty sets, the problem is often intractable since it requires computing a solution for all possible realizations of the uncertain parameters [22].
In particular, for a fairly general class of uncertainty sets, we show that the two-stage adjustable robust problem is NP-hard to approximate within a factor better than Ω(log n), where n is the number of columns of the uncertain coefficient matrix. On the other hand, a static solution is a single (here-and-now) solution that is feasible for all possible realizations of the uncertain parameters and can be computed efficiently. We study the performance of static solutions as an approximation for the adjustable robust problem and relate their optimality to a transformation of the uncertainty set. Using this transformation, we show that for a fairly general class of uncertainty sets, the static solution is optimal for the adjustable robust problem. This is surprising since the static solution is widely perceived as highly conservative. Moreover, when the static solution is not optimal, we provide an instance-based tight approximation bound related to a measure of non-convexity of the transformation of the uncertainty set. We also show that for two-stage problems, our bound is at least as good as (and in many cases significantly better than) the bound given by the symmetry of the uncertainty set [11, 12]. Moreover, our results generalize to the case where the objective coefficients and right-hand side are also uncertain. In Chapter 3, we focus on two-stage problems with a family of column-wise and constraint-wise uncertainty sets, where any constraint describing the set involves entries of only a single column or a single row. This is a fairly general class of uncertainty sets for modeling constraint coefficient uncertainty; moreover, it is the family of uncertainty sets that yields the hardness result above.
On the positive side, we show that a static solution is an O(\log n · min(\log \Gamma, \log(m+n)))-approximation for the two-stage adjustable robust problem, where m and n denote the numbers of rows and columns of the constraint matrix and \Gamma is the maximum possible ratio of upper bounds of the uncertain constraint coefficients. Therefore, for constant \Gamma, the performance bound for static solutions surprisingly matches the hardness of approximation of the adjustable problem, and in general the static solution provides nearly the best efficient approximation for the two-stage adjustable robust problem. In Chapter 4, we extend the results of Chapter 2 to a multi-stage adjustable robust linear optimization problem. In particular, we consider the case where the choice of the uncertain constraint coefficient matrix in each stage is independent of the others. Real-world decision problems are often multi-stage, and an iterative implementation of a two-stage solution may be suboptimal for the multi-stage problem. We consider the static solution for the adjustable robust problem and the transformation of the uncertainty sets introduced in Chapter 2, and show that the static solution is optimal when the transformation of the uncertainty set for each stage is convex. Chapter 5 considers a two-stage adjustable robust linear covering problem with uncertain right-hand-side parameters. As mentioned earlier, such problems are often intractable due to the astronomically many extreme points of the uncertainty set. We introduce a new approximation framework in which we consider a “simple” set that is “close” to the original uncertainty set and over which the adjustable robust problem can be solved efficiently. We show that the approximation bound is related to a geometric factor representing the Banach-Mazur distance between the two sets.
Using this framework, we obtain approximation bounds that are better than the bounds given by an affine policy in [7] for a large class of interesting uncertainty sets. For instance, we provide an m^{1/4}-approximation for the two-stage adjustable robust problem with a hypersphere uncertainty set, whereas the affine policy has an approximation ratio of O(\sqrt{m}). Moreover, our bound for the general p-norm ball is m^{\frac{p-1}{p^2}}, as opposed to the m^{1/p} given by an affine policy.
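As a toy illustration of the static (here-and-now) solution concept discussed above: when the uncertain constraint matrix is restricted to a finite scenario set, the static robust packing problem is itself a single LP, since the solution must be feasible for every scenario simultaneously. This discretized sketch uses `scipy.optimize.linprog` and deliberately omits the general convex, compact, down-monotone uncertainty sets treated in the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

def static_robust_packing(c, B_scenarios, h):
    """Static solution of max c^T x s.t. B x <= h for every B in a finite
    scenario set (a crude discretization of the uncertainty set, used only
    for illustration), with x >= 0.

    Stacking the constraints of all scenarios yields one ordinary LP.
    """
    A_ub = np.vstack(B_scenarios)
    b_ub = np.concatenate([np.asarray(h)] * len(B_scenarios))
    res = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * len(c))
    return res.x, -res.fun  # maximizer and robust objective value
```

Note that each added scenario can only shrink the feasible region, which is the source of the conservatism of static solutions that the dissertation quantifies.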
53

Um modelo de decisão para produção e comercialização de produtos agrícolas diversificáveis. / A decision model for production and commerce of diversifiable agricultural products.

Oliveira, Sydnei Marssal de 20 June 2012 (has links)
A ascensão de um grande número de pessoas em países em desenvolvimento para a classe média, no início do século XXI, aliada ao movimento político para transferência de base energética para os biocombustíveis, vem aumentando a pressão sobre os preços das commodities agrícolas e apresentando novas oportunidades e cenários administrativos para os produtores agrícolas dessas commodities, em especial aquelas que podem se diversificar em muitos subprodutos para atender diferentes mercados, como o de alimentos, químico, têxtil e de energia. Nesse novo ambiente os produtores podem se beneficiar dividindo adequadamente a produção entre os diferentes subprodutos, definindo o melhor momento para a comercialização através de estoques, e ainda controlando sua exposição ao risco através de posições no mercado de derivativos. A literatura atual pouco aborda o tema da diversificação e seu impacto nas decisões de produção e comercialização agrícola e, portanto, essa tese tem o objetivo de propor um modelo de decisão fundado na teoria de seleção de portfólios capaz de decidir a divisão da produção entre diversos subprodutos, as proporções a serem estocadas e o momento mais adequado para a comercialização e, por fim, as posições em contratos futuros para fins de proteção ou hedge. Adicionalmente essa tese busca propor que esse modelo seja capaz de lidar com incerteza em parâmetros, em especial parâmetros que provocam alto impacto nos resultados, como é o caso dos retornos previstos no futuro. Como uma terceira contribuição, esse trabalho busca ainda propor um modelo de previsão de preços mais sofisticado que possa ser aplicado a commodities agrícolas, em especial um modelo híbrido ou hierárquico, composto de dois modelos, um primeiro modelo fundado sob a teoria de processos estocásticos e do Filtro de Kalman e um segundo modelo, para refinar os resultados do primeiro modelo de previsão, baseado na teoria de redes neurais, com a finalidade de considerar variáveis exógenas.
O modelo híbrido de previsão de preços foi testado com dados reais do mercado sucroalcooleiro brasileiro e indiano, gerando resultados promissores, enquanto o modelo de decisão de parâmetros de produção, comercialização, estocagem e hedge se mostrou uma ferramenta útil para suporte à decisão após ser testado com dados reais do mercado sucroalcooleiro brasileiro e do mercado de milho, etanol e biodiesel norte-americano. / The rise of a large number of people in developing countries into the middle class at the beginning of the 21st century, combined with the political movement to shift the energy base toward biofuels, has increased pressure on the prices of agricultural commodities and presented new opportunities and management scenarios for the producers of these commodities, especially those who can diversify into many products serving different markets such as food, chemicals, textiles, and energy. In this new environment, producers can benefit by properly dividing production among the different products, choosing the best time to sell by using storage, and controlling their risk exposure through positions in the derivatives market. The current literature barely addresses diversification and its impact on agricultural production and marketing decisions, and this thesis therefore proposes a decision model, based on portfolio selection theory, able to decide the division of production among products, the proportions to be stored and the timing of sales, and finally the positions in futures contracts for hedging. Additionally, this thesis proposes that the model be capable of dealing with uncertainty in parameters, especially parameters with a high impact on the results, such as expected future returns.
As a third contribution, this work also proposes a more sophisticated price-forecasting model applicable to agricultural commodities: a hybrid, hierarchical model composed of two parts, the first based on the theory of stochastic processes and the Kalman filter, and the second, which refines the results of the first, based on neural networks in order to take exogenous variables into account. The hybrid forecasting model was tested with real data from the Brazilian and Indian sugar-ethanol markets, generating promising results, while the decision model for production, marketing, storage, and hedging parameters proved to be a useful decision-support tool after being tested with real data from the Brazilian sugar-ethanol market and the U.S. corn, ethanol, and biodiesel markets.
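The first stage of the hybrid forecaster, a stochastic-process model filtered by the Kalman filter, can be illustrated with the simplest possible case: a scalar local-level (random-walk) price model. The model form and the noise variances below are assumptions made for the sketch, not the thesis's calibrated model, and the function name `kalman_local_level` is ours.

```python
import numpy as np

def kalman_local_level(y, q, r, m0=0.0, p0=1e6):
    """Scalar Kalman filter for a local-level price model:
        level_t = level_{t-1} + w_t,  w_t ~ N(0, q)
        y_t     = level_t + v_t,      v_t ~ N(0, r)
    Returns the one-step-ahead forecasts of the level. In practice q and r
    would be estimated by maximum likelihood; here they are inputs.
    """
    m, p = m0, p0
    preds = []
    for obs in y:
        preds.append(m)           # one-step-ahead forecast before seeing obs
        p = p + q                 # time update (random-walk level)
        k = p / (p + r)           # Kalman gain
        m = m + k * (obs - m)     # measurement update
        p = (1.0 - k) * p
    return np.array(preds)
```

On a flat price series the forecasts converge to the observed level, which is a convenient correctness check; a neural network could then be trained on the filter's residuals to absorb exogenous effects, as the thesis's second stage does.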
54

Some robust optimization methods for inverse problems.

January 2009 (has links)
Wang, Yiran.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 70-73). Abstract also in Chinese.
Contents:
Chapter 1 Introduction (p.6)
  1.1 Overview of the subject (p.6)
  1.2 Motivation (p.8)
Chapter 2 Inverse Medium Scattering Problem (p.11)
  2.1 Mathematical Formulation (p.11)
    2.1.1 Absorbing Boundary Conditions (p.12)
    2.1.2 Applications (p.14)
  2.2 Preliminary Results (p.17)
    2.2.1 Weak Formulation (p.17)
    2.2.2 About the Unique Determination (p.21)
Chapter 3 Unconstrained Optimization: Steepest Descent Method (p.25)
  3.1 Recursive Linearization Method Revisited (p.25)
    3.1.1 Frechet differentiability (p.26)
    3.1.2 Initial guess (p.28)
    3.1.3 Landweber iteration (p.30)
    3.1.4 Numerical Results (p.32)
  3.2 Steepest Descent Analysis (p.35)
    3.2.1 Single Wave Case (p.36)
    3.2.2 Multiple Wave Case (p.39)
  3.3 Numerical Experiments and Discussions (p.43)
Chapter 4 Constrained Optimization: Augmented Lagrangian Method (p.51)
  4.1 Method Review (p.51)
  4.2 Problem Formulation (p.54)
  4.3 First Order Optimality Condition (p.56)
  4.4 Second Order Optimality Condition (p.60)
  4.5 Modified Algorithm (p.62)
Chapter 5 Conclusions and Future Work (p.68)
Bibliography (p.70)
55

Robustní přístupy v optimalizaci portfolia se stochastickou dominancí / Robust approaches in portfolio optimization with stochastic dominance

Kozmík, Karel January 2019 (has links)
We use the modern approach of stochastic dominance in portfolio optimization, where we require the portfolio to dominate a benchmark. Since the distribution of returns is often only estimated from data, we look for the worst-case distribution among those that differ from the empirical distribution by at most a predefined amount. First, we define in what sense a distribution is worst for first- and second-order stochastic dominance; for the second order, we use two different formulations of the worst case. We derive the robust stochastic dominance test for all the mentioned approaches and find the worst-case distribution as the optimal solution of a non-linear maximization problem. We then derive programs that maximize an objective function over the portfolio weights with robust stochastic dominance in the constraints, considering robustness either in returns or in probabilities, for both first- and second-order stochastic dominance. To the best of our knowledge, no such programs have been derived before. We apply all the derived optimization programs to real-life data, specifically to the returns of the assets in the Dow Jones Industrial Average, and analyze the problems in detail using optimal solutions of the optimization programs with multiple setups. The portfolios calculated using...
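The non-robust building block of the tests described above, an empirical second-order stochastic dominance (SSD) check, is short enough to sketch. For two equiprobable return samples of the same size, a standard criterion is that every partial sum of the sorted returns (worst outcomes first) of the candidate must weakly exceed that of the benchmark; the robust versions in the thesis then perturb the returns or probabilities, which this sketch does not attempt.

```python
import numpy as np

def ssd_dominates(x, y, tol=1e-12):
    """Empirical SSD test for equiprobable samples x, y of equal size:
    x dominates y iff, for every k, the sum of the k smallest outcomes of x
    is at least the sum of the k smallest outcomes of y.
    """
    xs, ys = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    return bool(np.all(np.cumsum(xs) >= np.cumsum(ys) - tol))
```

For example, improving the worst outcome of a sample while keeping the rest fixed yields a dominating sample, and dominance is not symmetric.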
56

Optimization under uncertainty: conic programming representations, relaxations, and approximations

Xu, Guanglin 01 August 2017 (has links)
In practice, the presence of uncertain parameters in optimization problems introduces new modeling and solvability challenges in operations research. Three main paradigms have been proposed for optimization under uncertainty: stochastic programming, robust optimization, and sensitivity analysis. In this thesis, we examine, improve, and combine the latter two paradigms in several relevant models and applications. In the second chapter, we study a two-stage adjustable robust linear optimization problem in which the right-hand sides are uncertain and belong to a compact, convex, and tractable uncertainty set. Under standard and simple assumptions, we reformulate the two-stage problem as a copositive optimization program, which in turn leads to a class of tractable semidefinite-based approximations that are at least as strong as the affine policy, a well-studied tractable approximation in the literature. We examine our approach on several examples from the literature, and the results demonstrate that our tractable approximations significantly improve on the affine policy. In particular, our approach recovers the optimal values of a class of instances of increasing size for which the affine policy admits an arbitrarily large gap. In the third chapter, we leverage robust optimization to conduct sensitivity analysis of the optimal value of a linear program (LP). In particular, we propose a framework for LP sensitivity analysis that allows simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled by a compact, convex, and tractable uncertainty set. This framework unifies and extends multiple approaches to LP sensitivity analysis in the literature and has close ties to worst-case LP and two-stage adjustable linear programming. We define the best-case and worst-case LP optimal values over the uncertainty set.
As the concept aligns well with the general spirit of robust optimization, we call our approach robust sensitivity analysis. While the best-case and worst-case optimal values are difficult to compute in general, we prove that they equal the optimal values of two separate, but related, copositive programs. We then develop tight, tractable conic relaxations that bound the best-case and worst-case optimal values, respectively, develop techniques to assess the quality of the bounds, and validate our approach computationally on several examples drawn from, or inspired by, the literature. We find that the bounds are very strong in practice and, in particular, are at least as strong as known results for specific cases from the literature. In the fourth chapter of this thesis, we study the expected optimal value of a mixed 0-1 programming problem with uncertain objective coefficients following a joint distribution. We assume that the true distribution is not known exactly, but that a set of independent samples can be observed. Using the Wasserstein metric, we construct an ambiguity set centered at the empirical distribution of the observed samples and containing, with high confidence, all distributions that could have generated them. The problem of interest is to bound the expected optimal value over the Wasserstein ambiguity set. Under standard assumptions, we reformulate the problem as a copositive program, which naturally leads to a tractable semidefinite-based approximation. We compare our approach with a moment-based approach from the literature on two applications, and the numerical results illustrate the effectiveness of our approach. Finally, we conclude the thesis with remarks on some interesting open questions in the field of optimization under uncertainty; in particular, we point out some topics that could potentially be studied using copositive programming techniques.
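A degenerate special case of the best-case/worst-case optimal values makes the idea concrete: if only the objective coefficients are uncertain and vary in a box, then for x >= 0 the inner optimization over c is attained at a box corner, so each bound is a single ordinary LP. This sketch (using `scipy.optimize.linprog`; the function name is ours) is far simpler than the copositive reformulations the chapter actually develops for general uncertainty sets.

```python
import numpy as np
from scipy.optimize import linprog

def objective_sensitivity_bounds(c_lo, c_hi, A_ub, b_ub):
    """Best- and worst-case optimal values of
        min c^T x  s.t.  A_ub x <= b_ub,  x >= 0,
    when c varies in the box [c_lo, c_hi]. Since x >= 0, the best case uses
    c_lo and the worst case uses c_hi, each a single LP.
    """
    n = len(c_lo)
    bounds = [(0, None)] * n
    best = linprog(c_lo, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    worst = linprog(c_hi, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return best.fun, worst.fun
```

For simultaneous perturbations of c and the right-hand side over a general convex set, this corner argument no longer applies, which is precisely why the chapter resorts to copositive programming.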
57

Reliable Design and Operations of Infrastructure Systems

An, Yu 03 November 2014 (has links)
Reliability has become one of the major concerns of infrastructure system operators. This dissertation is a collection of four published and working papers that address specific reliable design and operations problems in three application settings: transportation/telecommunications networks, distribution networks, and power plants. In these four projects, key random factors such as site disruption and uncertain demand are explicitly considered, and appropriate research tools, including stochastic programming, robust optimization, and variants of robust optimization, are applied to formulate the problems, within which important and challenging modelling elements (nonlinear congestion, disruption-induced demand variation, etc.) are introduced and studied. For each of the optimization models, we also develop advanced solution algorithms that solve large-scale instances within a short amount of time, and we devise comprehensive numerical experiments to derive insights. The modelling techniques and solution methods can easily be extended to study reliability issues in other applications.
58

A Robust Optimization Approach to the Self-scheduling Problem Using Semidefinite Programming

Landry, Jason Conrad January 2009 (has links)
In deregulated electricity markets, generating companies submit energy bids derived from a self-schedule. In this thesis, we propose an improved semidefinite programming-based model for the self-scheduling problem. The model provides the profit-maximizing generation quantities of a single generator over a multi-period horizon and accounts for price uncertainty using robust optimization. Within this robust framework, the price information is represented analytically as an ellipsoid, and the decision maker's risk aversion is captured by a scaling parameter; the focus of the model is thus the trade-off between profit and risk. The bounds obtained by the proposed approach are shown to be significantly better than those obtained by a mean-variance approach from the literature. We then apply the proposed model within a branch-and-bound algorithm to improve the quality of the solutions. The resulting solutions are also compared with the mean-variance approach, formulated as a mixed-integer quadratic program. The results indicate that the proposed approach produces solutions that are closer to integrality and have lower relative error than the mean-variance approach.
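The worst-case revenue over an ellipsoidal price set has a well-known closed form, which is the core of robust self-scheduling models of this kind: if prices lie in {p_bar + S u : ||u|| <= kappa}, the worst-case revenue of a schedule q is p_bar.q - kappa*||S^T q||. The sketch below evaluates that robust objective; the cost function, parameter names, and the scaling parameter kappa (playing the role of the thesis's risk-aversion parameter) are illustrative assumptions, not the thesis's SDP model.

```python
import numpy as np

def worst_case_profit(q, p_bar, sigma_half, kappa, cost):
    """Worst-case profit of generation schedule q when the price vector lies
    in the ellipsoid {p_bar + sigma_half @ u : ||u||_2 <= kappa}.

    min over the ellipsoid of p.q equals p_bar.q - kappa*||sigma_half.T @ q||,
    so the robust profit is that quantity minus the generation cost.
    """
    robust_revenue = p_bar @ q - kappa * np.linalg.norm(sigma_half.T @ q)
    return robust_revenue - cost(q)
```

Larger kappa (more risk aversion) can only lower the robust profit, which exhibits the profit/risk trade-off the model is built around.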
59

Cardinality Constrained Robust Optimization Applied to a Class of Interval Observers

McCarthy, Philip James January 2013 (has links)
Observers are used in the monitoring and control of dynamical systems to deduce the values of unmeasured states. Designing an observer requires an accurate model of the plant: if the model parameters are characterized imprecisely, the observer may not provide reliable estimates. An interval observer, which comprises an upper and a lower observer, bounds the plant's states from above and below given the range of values of the imprecisely characterized parameters, i.e., it defines an interval in which the plant's states must lie at any given instant. We propose a linear programming-based method of interval observer design for two cases: 1) only the initial conditions of the plant are uncertain; 2) the dynamical parameters are also uncertain. In the former, we optimize the transient performance of the interval observers, in the sense that the volume enclosed by the interval is minimized; in the latter, we optimize the steady-state performance, in the sense that the norm of the width of the interval at steady state is minimized. Interval observers are typically designed to characterize the widest interval that bounds the states. This thesis proposes an interval observer design method that uses additional, but still incomplete, information enabling the designer to identify tighter bounds on the uncertain parameters under certain operating conditions. The number of bounds that can be refined defines a class of systems, whose definition is independent of the specific parameters whose bounds are refined. Applying robust optimization techniques under a cardinality-constrained model of uncertainty, we design a single observer for an entire class of systems. These observers guarantee a minimum level of performance with respect to the aforementioned metrics, as we optimize the worst-case performance over a given class of systems. The robust formulation allows the designer to tune the level of uncertainty in the model.
If many of the uncertain parameter bounds can be refined, the nominal performance of the observer improves; if few or none can be refined, the observer can be designed to be more conservative.
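A minimal sketch of the upper/lower bounding idea behind an interval observer, for the first case above (uncertain initial conditions only): for a discrete-time system x+ = A x + B u with A elementwise nonnegative (which preserves the componentwise order, the discrete-time analogue of a Metzler matrix), propagating the lower and upper initial states gives guaranteed bounds on the true state. A full interval observer would also inject measured-output error through a gain, which this illustration omits.

```python
import numpy as np

def interval_simulate(A, B, u_seq, x_lo0, x_hi0):
    """Bounding trajectories for x+ = A x + B u with A >= 0 elementwise and
    uncertain initial state in [x_lo0, x_hi0] (componentwise).

    Because A is nonnegative, x_lo0 <= x0 <= x_hi0 implies the same ordering
    holds at every subsequent step.
    """
    A, B = np.asarray(A, float), np.asarray(B, float)
    lo, hi = np.asarray(x_lo0, float).copy(), np.asarray(x_hi0, float).copy()
    los, his = [lo.copy()], [hi.copy()]
    for u in u_seq:
        lo = A @ lo + B @ u   # lower framer
        hi = A @ hi + B @ u   # upper framer
        los.append(lo.copy())
        his.append(hi.copy())
    return np.array(los), np.array(his)
```

Any true trajectory started inside the initial box stays between the two framers, which is the defining property the thesis's LP-based designs then optimize (interval volume or steady-state width).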
