101

Adaptive Sampling Methods for Stochastic Optimization

Daniel Andres Vasquez Carvajal (10631270) 08 December 2022 (has links)
This dissertation investigates the use of sampling methods for solving stochastic optimization problems using iterative algorithms. Two sampling paradigms are considered: (i) adaptive sampling, where, before each iterate update, the sample size for estimating the objective function and the gradient is adaptively chosen; and (ii) retrospective approximation (RA), where iterate updates are performed using a chosen fixed sample size for as long as progress is deemed statistically significant, at which time the sample size is increased. We investigate adaptive sampling within the context of a trust-region framework for solving stochastic optimization problems in $\mathbb{R}^d$, and retrospective approximation within the broader context of solving stochastic optimization problems on a Hilbert space. In the first part of the dissertation, we propose Adaptive Sampling Trust-Region Optimization (ASTRO), a class of derivative-based stochastic trust-region (TR) algorithms developed to solve smooth stochastic unconstrained optimization problems in $\mathbb{R}^{d}$ where the objective function and its gradient are observable only through a noisy oracle or using a large dataset. Efficiency in ASTRO stems from two key aspects: (i) adaptive sampling to ensure that the objective function and its gradient are sampled only to the extent needed, so that small sample sizes are chosen when the iterates are far from a critical point and large sample sizes are chosen when iterates are near a critical point; and (ii) quasi-Newton Hessian updates using BFGS. We prove three main results for ASTRO and for general stochastic trust-region methods that estimate function and gradient values adaptively, using sample sizes that are stopping times with respect to the sigma algebra of the generated observations. The first asserts strong consistency when the adaptive sample sizes have a mild logarithmic lower bound, assuming that the oracle errors are light-tailed. The second and third results characterize the iteration and oracle complexities in terms of certain risk functions. Specifically, the second result asserts that the best achievable $\mathcal{O}(\epsilon^{-1})$ iteration complexity (of squared gradient norm) is attained when the total relative risk associated with the adaptive sample size sequence is finite; and the third result characterizes the corresponding oracle complexity in terms of the total generalized risk associated with the adaptive sample size sequence. We report encouraging numerical results in certain settings. In the second part of this dissertation, we consider the use of RA as an alternate adaptive sampling paradigm to solve smooth stochastic constrained optimization problems in infinite-dimensional Hilbert spaces. RA generates a sequence of subsampled deterministic infinite-dimensional problems that are approximately solved within a dynamic error tolerance. The bottleneck in RA becomes solving this sequence of problems efficiently. To this end, we propose a progressive subspace expansion (PSE) framework to solve smooth deterministic optimization problems in infinite-dimensional Hilbert spaces with a TR Sequential Quadratic Programming (SQP) solver. The infinite-dimensional optimization problem is discretized, and a sequence of finite-dimensional problems is solved in which the problem dimension is progressively increased.
Additionally, (i) we solve this sequence of finite-dimensional problems only to the extent necessary, i.e., we spend just enough computational work to solve each problem within a dynamic error tolerance, and (ii) we use the solution of the current optimization problem as the initial guess for the subsequent problem. We prove two main results for PSE. The first establishes convergence to a first-order critical point for a subsequence of iterates generated by the PSE TR-SQP algorithm. The second characterizes the relationship between the error tolerance and the problem dimension, and provides an oracle complexity result for the total amount of computational work incurred by PSE. This amount of computational work is closely connected to three quantities: the convergence rate of the finite-dimensional spaces to the infinite-dimensional space, the rate of increase of the cost of making oracle calls in finite-dimensional spaces, and the convergence rate of the solution method used. We also show encouraging numerical results on an optimal control problem supporting our theoretical findings.
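The adaptive-sampling rule at the heart of ASTRO ties estimation effort to algorithmic progress. The sketch below illustrates that idea only: it keeps querying a noisy gradient oracle until the standard error of the estimate is small relative to the trust-region radius, so few samples are drawn when the radius is large and many when it is small. The toy oracle, the threshold `kappa * delta`, and the radius update are illustrative assumptions, not the dissertation's actual stopping-time rules or TR machinery.

```python
import numpy as np

def noisy_grad(x, rng):
    """Noisy oracle for the gradient of f(x) = 0.5 * ||x||^2 (true grad = x)."""
    return x + rng.normal(scale=0.5, size=x.shape)

def adaptive_grad_estimate(x, delta, rng, n0=8, kappa=1.0, n_max=2**14):
    """Average oracle calls until the standard error is small relative to the
    trust-region radius delta: few samples when delta is large (far from a
    critical point), many when delta is small (near a critical point)."""
    samples = [noisy_grad(x, rng) for _ in range(n0)]
    while len(samples) < n_max:
        g = np.mean(samples, axis=0)
        se = np.linalg.norm(np.std(samples, axis=0, ddof=1)) / np.sqrt(len(samples))
        if se <= kappa * delta:        # estimation accuracy matched to the radius
            return g, len(samples)
        samples.append(noisy_grad(x, rng))
    return np.mean(samples, axis=0), len(samples)

rng = np.random.default_rng(0)
x, delta = np.ones(5), 1.0
for k in range(20):
    g, n = adaptive_grad_estimate(x, delta, rng)
    x = x - delta * g / max(np.linalg.norm(g), 1e-12)   # Cauchy-like TR step
    delta = max(0.5 * delta, 1e-3)      # crude radius update, for illustration only
    print(f"iter {k}: samples used = {n}, |x| = {np.linalg.norm(x):.4f}")
```

Running it shows the sample size climbing as the radius shrinks, which is exactly the behavior the complexity results above quantify.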
102

Optimizing a biomass supply system: consideration of pellet quality and transportation under extreme events

Aladwan, Badr S 06 August 2021 (has links)
This dissertation studies a framework in support of the biomass wood pellet supply chain. The worldwide wood pellet market is growing at a phenomenal rate. However, the economic sustainability of this business depends on how well producers manage the uncertainty associated with biomass yield and quality. In the first part of the dissertation, we propose a two-stage stochastic programming model that optimizes critical decisions (e.g., harvesting, storage, transportation, quality inspection, and production decisions) of a biomass-to-pellet supply system under biomass yield and quality uncertainty to economically produce pellets while accounting for the different pellet standards set forth by the U.S. and European markets. The study develops a hybrid algorithm that combines Sample Average Approximation with an enhanced Progressive Hedging algorithm. We propose two parallelization schemes to speed up the convergence of the overall algorithm. We use Mississippi as a testing ground to visualize and validate the algorithm's performance. Experimental results indicate that the biomass-to-pellet supply system is sensitive to the biomass quality parameters (e.g., ash and moisture contents). In the second part of the dissertation, we propose a bi-level mixed-integer linear programming model that captures important features such as the hurricane's degree, the quality of damaged timber, and price-related issues, and optimizes critical decisions (e.g., purchasing, storage, and transportation decisions) of a post-hurricane damaged timber management problem. The lack of efficient tools to manage wood market interactions in the post-hurricane situation drastically increases timber salvage loss. The overall goal is to provide an efficient decision-making tool for planning and recovering damaged timber to maximize its monetary value and mitigate its negative ecological impacts. Due to the complexity associated with solving the proposed model, we develop two exact solution methods, namely an enhanced Benders decomposition and a Benders-based branch-and-cut algorithm, to solve the model in a reasonable time frame. We use 15 coastal counties in southeast Mississippi to visualize and validate the algorithms' performance. Key managerial insights are drawn on the sensitivity of a number of critical parameters, such as the selling/purchasing prices offered by the landowners/mills and the quality level and deterioration rate of the damaged timber, on its economic recovery following a natural catastrophe.
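Sample Average Approximation, one half of the hybrid algorithm above, replaces the expectation in the stochastic program with an average over sampled scenarios and solves the resulting deterministic problem. Below is a minimal sketch on a toy newsvendor-style problem, with lognormal demand standing in for uncertain biomass yield; the actual pellet-supply model is a large two-stage mixed-integer program solved with Progressive Hedging, which this deliberately omits.

```python
import numpy as np

def saa_newsvendor(c=1.0, p=3.0, n_scenarios=2000, seed=0):
    """Sample Average Approximation for a toy newsvendor problem:
    max_q  E[p * min(q, D)] - c * q, with D a random demand.  The
    expectation is replaced by an average over sampled scenarios, and the
    resulting deterministic problem is solved by grid search for clarity."""
    rng = np.random.default_rng(seed)
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=n_scenarios)
    qs = np.linspace(0.0, demand.max(), 500)
    # Sample-average objective: one column per candidate order quantity.
    profit = p * np.minimum(qs[None, :], demand[:, None]).mean(axis=0) - c * qs
    return qs[np.argmax(profit)], profit.max()

q_star, val = saa_newsvendor()
print(f"SAA order quantity: {q_star:.2f}, estimated expected profit: {val:.2f}")
```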
103

Optimal regulating power market bidding strategies in hydropower systems

Olsson, Magnus January 2005 (has links)
Unforeseen changes in production or consumption in power systems lead to changes in grid frequency. This can cause damage to the system, or to frequency-sensitive equipment at the consumers. The system operator (SO) is responsible for balancing production and consumption in the system. The regulating market is the marketplace where the SO can sell or purchase electricity in order to balance unforeseen events. Producers acting on the regulating market must be able to change their production levels fast (within minutes) when required. Hydropower is therefore suitable for trading on the regulating market because of its flexibility in power production. This thesis describes models that hydropower owners can use to generate optimal bidding strategies when the regulating market is considered. When planning for trading on the market, the prices are not known. Therefore, the prices are considered as stochastic variables. The planning problems in this thesis are based on multi-stage stochastic optimization, where the uncertain power prices are represented by scenario trees. The scenario trees are generated by simulation of price scenarios, which is achieved using a model based on ARIMA and Markov processes. Two optimization models are presented in this thesis: a model for generating optimal bidding strategies for the regulating market, and a model for generating optimal bidding strategies for the spot market when trading on the regulating market is considered. The described models are applied in a case study with real data from the Nordic power system. The conclusions of the thesis are that the proposed approaches to modelling prices and generating bidding strategies are usable in practice, and that the models produce reasonable results when applied to real data.
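The scenario trees above start from simulated price paths. A minimal sketch of that pipeline is shown below, using a plain AR(1) process as a stand-in for the thesis's ARIMA/Markov price model, followed by a crude quantile-based reduction to a handful of representative scenarios; real scenario-tree construction is considerably more careful about preserving the price distribution.

```python
import numpy as np

def simulate_price_scenarios(n_scen=1000, horizon=24, phi=0.9, sigma=2.0,
                             mu=30.0, seed=1):
    """Simulate hourly price scenarios with an AR(1) process, a stand-in
    for the ARIMA/Markov price model described in the thesis."""
    rng = np.random.default_rng(seed)
    prices = np.empty((n_scen, horizon))
    prices[:, 0] = mu
    for t in range(1, horizon):
        noise = rng.normal(scale=sigma, size=n_scen)
        prices[:, t] = mu + phi * (prices[:, t - 1] - mu) + noise
    return prices

def reduce_to_tree(prices, n_nodes=5):
    """Crude scenario reduction: bucket scenarios by their mean price and
    keep one representative path (with its probability) per bucket."""
    means = prices.mean(axis=1)
    edges = np.quantile(means, np.linspace(0, 1, n_nodes + 1))
    reps, probs = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (means >= lo) & (means <= hi)
        if mask.any():
            reps.append(prices[mask].mean(axis=0))
            probs.append(mask.mean())
    return np.array(reps), np.array(probs)

scenarios = simulate_price_scenarios()
tree, p = reduce_to_tree(scenarios)
print("representative scenarios:", tree.shape, "probabilities:", np.round(p, 3))
```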
104

Integrated and Coordinated Relief Logistics Planning Under Uncertainty for Relief Logistics Operations

Kamyabniya, Afshin 22 September 2022 (has links)
In this thesis, we explore three critical emergency logistics problems faced by healthcare and humanitarian relief service providers for short-term post-disaster management. In the first manuscript, we investigate various integration mechanisms (fully integrated horizontal-vertical, horizontal, and vertical resource sharing mechanisms) following a natural disaster for a multi-type, multi-patient logistics network for whole-blood-derived platelets. The goal is to reduce the shortage and wastage of platelets across blood groups in the response phase of relief logistics operations. To solve the logistics model for large-scale problems, we develop a hybrid exact solution approach combining an augmented epsilon-constraint method with a Lagrangian relaxation algorithm, and demonstrate the model's applicability in a case study of an earthquake. Due to uncertainty in the number of injuries needing multi-type blood-derived platelets, we apply a robust-optimization version of the proposed model, which captures the expected performance of the system. The results show that the platelet logistics network under coordinated and integrated mechanisms controls the level of shortage and wastage better than a non-integrated network. In the second manuscript, we propose a two-stage casualty evacuation model that involves routing patients with different injury levels during wildfires. The first stage deals with field hospital selection, and the second stage determines the number of patients that can be transferred to the selected hospitals or shelters via different routes of the evacuation network. The goal of this model is to reduce the evacuation response time, which ultimately increases the number of people evacuated from evacuation assembly points under limited time windows. To solve the model for large-scale problems, we develop a two-step meta-heuristic algorithm. To consider multiple sources of uncertainty, a flexible robust approach that considers the worst-case and expected performance of the system simultaneously is applied to handle any realization of the uncertain parameters. The results show that the fully coordinated evacuation model, in which vehicles can freely pick up and off-board patients at different locations and are allowed to start their next operations without being forced to return to the departure point (the evacuation assembly points), outperforms the non-coordinated and non-integrated evacuation models in terms of the number of evacuated patients. In the third manuscript, we propose an integrated transportation and hospital capacity model to optimize the assignment of relevant medical resources to multi-level-injury patients during a mass casualty incident (MCI). We develop a finite-horizon Markov decision process (MDP) to efficiently allocate resources and hospital capacities to injured people in a dynamic fashion over a limited time horizon. We solve this model using the linear programming approach to approximate dynamic programming (ADP) and by developing a two-phase heuristic based on a column generation algorithm. The results show that better policies can be derived for allocating limited resources (i.e., vehicles) and hospital capacities to injured people compared with the benchmark. Each paper makes a worthwhile contribution to the humanitarian relief operations literature and can help relief and healthcare providers optimize resource and service logistics by applying the proposed integration and coordination mechanisms.
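The third manuscript's finite-horizon MDP is solved approximately because exact dynamic programming does not scale. For orientation, here is a minimal sketch of exact backward induction on a toy dispatch problem with a single resource type; the state, arrival model, and reward are invented for illustration, and ADP replaces the exact value table below when the state space is too large to enumerate.

```python
import numpy as np

def backward_induction(T=6, n_amb=3, arrival_p=0.6, reward=10.0):
    """Exact backward induction for a toy finite-horizon MDP: each period a
    patient arrives with probability arrival_p; if one arrives, we choose to
    dispatch a remaining ambulance (collecting `reward`) or hold it back.
    V[t, a] is the optimal expected reward from period t with a ambulances
    left; ADP approximates this table when it is too large to compute."""
    V = np.zeros((T + 1, n_amb + 1))
    for t in range(T - 1, -1, -1):
        for a in range(n_amb + 1):
            hold = V[t + 1, a]
            dispatch = reward + V[t + 1, a - 1] if a > 0 else -np.inf
            V[t, a] = arrival_p * max(dispatch, hold) + (1 - arrival_p) * hold
    return V

V = backward_induction()
print(f"optimal expected reward with 3 ambulances and 6 periods: {V[0, 3]:.2f}")
```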
105

Solution of Constrained Clustering Problems through Homotopy Tracking

Easterling, David R. 15 January 2015 (has links)
Modern machine learning methods are dependent on active optimization research to improve the set of methods available for the efficient and effective extraction of information from large datasets. This, in turn, requires an intense and rigorous study of optimization methods and their possible applications to crucial machine learning applications in order to advance the potential benefits of the field. This thesis provides a study of several modern optimization techniques and supplies a mathematical inquiry into the effectiveness of homotopy methods for attacking a fundamental machine learning problem, effective clustering under constraints. The first part of this thesis provides an empirical survey of several popular optimization algorithms, along with one cutting-edge approach. These algorithms are tested against deeply challenging real-world problems with vast numbers of local minima, and the survey compares and contrasts the benefits of each when confronted with problems of different natures. The second part of this thesis proposes a new homotopy map for use with constrained clustering problems. This thesis explores the connections between the map and the problem, providing several theorems to justify the use of the map and making use of modern homotopy tracking software to compare an optimization that employs the map with several modern approaches to solving the same problem. / Ph. D.
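For readers unfamiliar with homotopy methods: the idea is to deform an easy problem continuously into the target problem while tracking a solution along the way. The sketch below is a naive predictor-corrector tracker on a convex homotopy for a small root-finding system; it is not the constrained-clustering map proposed in the thesis, and production trackers (e.g., the HOMPACK90 family) follow the zero curve far more robustly than marching the homotopy parameter directly.

```python
import numpy as np

def F(x):
    """Target system whose zero we want: a stand-in for the first-order
    conditions of a nonconvex objective."""
    return np.array([x[0]**3 - 3*x[0] + x[1], x[1] - 0.5*x[0]])

def H(x, lam, x0):
    # Convex homotopy between the trivial system (x - x0) and F.
    return (1 - lam) * (x - x0) + lam * F(x)

def jac(f, x, eps=1e-7):
    """Forward-difference Jacobian of f at x."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - fx) / eps
    return J

def track(x0, steps=100, newton_iters=5):
    """Naive tracking: march lambda from 0 to 1 and correct with Newton's
    method at each step, starting from the previous solution."""
    x = x0.copy()
    for lam in np.linspace(0, 1, steps + 1)[1:]:
        g = lambda y: H(y, lam, x0)
        for _ in range(newton_iters):   # corrector
            x = x - np.linalg.solve(jac(g, x), g(x))
    return x

x_star = track(np.array([2.0, 0.0]))
print("zero found by homotopy tracking:", x_star, "F(x*) =", F(x_star))
```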
106

On Enabling Virtualization and Millimeter Wave Technologies in Cellular Networks

Chatterjee, Shubhajeet 15 October 2020 (has links)
Wireless network virtualization (WNV) and millimeter wave (mmW) communications are emerging as two key technologies for cellular networks. Virtualization in cellular networks enables wireless services to be decoupled from network resources (e.g., infrastructure and spectrum) so that multiple virtual networks can be built using a shared pool of network resources. At the same time, utilization of the large bandwidth available in the mmW frequency band would help to overcome ongoing spectrum scarcity issues. In this context, this dissertation presents efficient frameworks for building virtual networks in the sub-6 GHz and mmW bands. Towards developing the frameworks, we first derive a closed-form expression for the downlink rate coverage probability of a typical sub-6 GHz cellular network with known base station (BS) locations and stochastic user equipment (UE) locations and channel conditions. Then, using the closed-form expression, we develop a sub-6 GHz virtual resource allocation framework that aggregates, slices, and allocates the sub-6 GHz network resources to the virtual networks in such a way that the virtual networks' sub-6 GHz downlink coverage and rate demands are probabilistically satisfied while resource over-provisioning is minimized in the presence of uncertainty in UE locations and channel conditions. Furthermore, since sub-6 GHz resources may be insufficient to satisfy the rate coverage demands of all virtual networks, we design a prioritized sub-6 GHz virtual resource allocation scheme where virtual networks are built sequentially based on their given priorities. Up to this point, the frameworks are static: they allocate sub-6 GHz resources in the presence of uncertainty in UE locations and channel conditions, i.e., before the UE locations and channel conditions are revealed. As a result, when a slice of a BS serves its associated UEs, it can be over-satisfied (i.e., resources are left after satisfying the rate demands of all UEs) or under-satisfied (i.e., resources are insufficient to satisfy the rate demands of all UEs). At the same time, it is extremely challenging to execute the entire virtual resource allocation process in real time due to the small transmission time intervals (TTIs) of cellular technologies. Taking this into consideration, we develop an efficient scheme that performs the virtual resource allocation in two phases: a virtual network deployment phase (static) and a statistical multiplexing phase (adaptive). In the virtual network deployment phase, sub-6 GHz resources are aggregated, sliced, and allocated to the virtual networks in the presence of uncertainty in UE locations and channel conditions, without knowing which realization of UE locations and channel conditions will occur. Once the virtual networks are deployed, each of the aggregated BSs performs statistical multiplexing, i.e., allocates excess resources from the over-satisfied slices to the under-satisfied slices, according to the realized channel conditions of associated UEs. In this way, we further improve sub-6 GHz resource utilization. Next, we turn our focus to the mmW virtual resource allocation process. To compensate for the high pathloss, mmW systems typically use beamforming techniques. Directional communication in the presence of uncertainty in UE locations and channel conditions makes maintaining connectivity and performing initial access and cell discovery challenging.
To address these challenges, we develop an efficient framework for mmW virtual network deployment and UE assignment. The deployment decisions (i.e., the required set of mmW BSs and their optimal beam directions) are taken in the presence of uncertainty in UE locations and channel conditions, i.e., before the UE locations and channel conditions are revealed. Once the virtual networks are deployed, an optimal mmW link (or a fallback sub-6 GHz link) is assigned to each UE according to the realized UE locations and channel conditions. Our numerical results demonstrate the gains brought by our proposed scheme in terms of minimizing resource over-provisioning while probabilistically satisfying virtual networks' sub-6 GHz and mmW demands in the presence of uncertainty in UE locations and channel conditions. / Doctor of Philosophy / In cellular networks, mobile network operators (MNOs) have been sharing resources (e.g., infrastructure and spectrum) as a solution to extend coverage, increase capacity, and decrease expenditures. Recently, due to the advent of 5G wireless services with enormous coverage and capacity demands and potential revenue losses due to over-provisioning to serve peak demands, the motivation for sharing and virtualization has significantly increased in cellular networks. Through wireless network virtualization (WNV), wireless services can be decoupled from the network resources so that various services can efficiently share the resources. At the same time, utilization of the large bandwidth available in the millimeter wave (mmW) frequency band would help to overcome ongoing spectrum scarcity issues. However, due to the inherent features of cellular networks, i.e., the uncertainty in user equipment (UE) locations and channel conditions, enabling WNV and mmW communications in cellular networks is a challenging task. Specifically, we need to build the virtual networks in such a way that UE demands are satisfied, isolation among the virtual networks is maintained, and resource over-provisioning is minimized in the presence of uncertainty in UE locations and channel conditions. In addition, mmW channels experience higher attenuation and blockage due to their small wavelengths compared to conventional sub-6 GHz channels. To compensate for the high pathloss, mmW systems typically use beamforming techniques. Directional communication in the presence of uncertainty in UE locations and channel conditions makes maintaining connectivity and performing initial access and cell discovery challenging. Our goal is to address these challenges and develop optimization frameworks to efficiently enable virtualization and mmW technologies in cellular networks.
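"Probabilistically satisfied" above is a chance constraint: demands must be met with high probability despite random channel conditions. A minimal sketch of that style of calculation is shown below, assuming i.i.d. Gaussian per-resource-block rates as a stand-in for the closed-form rate coverage expression derived in the dissertation; the sizing rule itself is standard Gaussian quantile arithmetic.

```python
from statistics import NormalDist

def resource_blocks_needed(rate_demand, mean_rate_per_rb, std_rate_per_rb,
                           epsilon=0.05):
    """Smallest n such that P(total rate over n blocks >= rate_demand)
    >= 1 - epsilon, for i.i.d. Gaussian per-block rates."""
    z = NormalDist().inv_cdf(epsilon)   # lower-tail quantile, z < 0
    n = 1
    while True:
        mu = n * mean_rate_per_rb
        sigma = (n ** 0.5) * std_rate_per_rb
        if mu + z * sigma >= rate_demand:   # epsilon-quantile of the total rate
            return n
        n += 1

# e.g., a 10 Mb/s demand with 1 Mb/s mean and 0.5 Mb/s std per resource block
print(resource_blocks_needed(10.0, 1.0, 0.5))
```

The while-loop returns 13 blocks here, versus the 10 a deterministic (mean-value) allocation would pick: the gap is the over-provisioning the chance constraint makes explicit and the framework tries to minimize.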
107

Deterministic and stochastic models for the numerical resolution of the aircraft separation problem

Omer, Jérémy, Jean, Guy 27 February 2013 (has links)
This thesis belongs to the field of mathematical programming, applied to the separation of aircraft stabilised at the same altitude. The primary objective is to develop algorithms for the resolution of air conflicts. The expected benefit of such algorithms is to increase the capacity of the airspace in order to reduce the number of late flights and let more aircraft follow their optimal trajectory. Moreover, since meteorological forecasts and trajectory predictions are inexact, the uncertainty on the data is an important issue. The approach that is followed focuses on the deterministic problem in the first place because it is much simpler. To do this, four nonlinear and mixed-integer linear programming models, including a criterion based on fuel consumption and flight duration, are developed. Their comparison on a benchmark of scenarios shows the relevance of using an approximate linear model for the study of the problem with uncertainties. A random wind field, correlated in space and time, as well as speed measures with Gaussian errors, are then taken into account. As a first step, the deterministic problem is adapted by computing a margin from an approximate calculation of conflict probabilities and adding it to the reference separation distance. Finally, a stochastic formulation with recourse is developed. In this model, the random errors are explicitly included in order to consider the possibility of ordering recourse actions if the observed errors cause new conflicts.
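The margin-based adaptation described above turns a conflict-probability bound into an inflated separation distance that a deterministic model can enforce. A minimal one-dimensional sketch of that calculation, assuming a Gaussian cross-track error on the relative position and the standard 5 NM separation norm (the one-dimensional error model is a deliberate simplification of the thesis's wind-field and speed-error setting):

```python
from statistics import NormalDist

def conflict_probability(predicted_miss_distance, sigma, norm_sep=5.0):
    """1-D approximation: the realized miss distance is Gaussian around the
    predicted one, and a conflict occurs when it falls inside the 5 NM
    separation norm (in either direction)."""
    d = NormalDist(mu=predicted_miss_distance, sigma=sigma)
    return d.cdf(norm_sep) - d.cdf(-norm_sep)

def inflated_separation(sigma, p_max=0.05):
    """Smallest predicted miss distance whose approximate conflict
    probability stays below p_max; using it as the separation norm turns
    the stochastic requirement into a deterministic margin."""
    d = 5.0
    while conflict_probability(d, sigma) > p_max:
        d += 0.1
    return d

print(f"required predicted separation: {inflated_separation(sigma=2.0):.1f} NM")
```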
108

Balancing cost and flexibility in Supply Chain

Gaillard de Saint Germain, Etienne 17 December 2018 (has links)
This thesis develops optimization methods for Supply Chain Management and is focused on flexibility, defined as the ability to deliver a service or a product to a customer in an uncertain environment. The research was conducted through a partnership between Argon Consulting, an independent consulting firm in Supply Chain Operations, and the École des Ponts ParisTech. In this thesis, we explore three topics that are encountered by Argon Consulting and its clients and that correspond to three different levels of decision (long-term, mid-term, and short-term). When companies expand their product portfolio, they must decide in which plants to produce each item. This is a long-term decision since, once it is made, it cannot be easily changed. More than an assignment problem where one item is produced by a single plant, this problem consists in deciding whether some items should be produced in several plants, and in which ones. This is motivated by highly uncertain demand: in order to satisfy the demand, the assignment must be able to balance the workload between plants. We call this problem the multi-sourcing of production. Since it is not a repeated problem, it is essential to take risk into account when making the multi-sourcing decision. We propose a generic model that includes the technical constraints of the assignment and a risk-averse constraint based on risk measures from financial theory. We develop an algorithm and a heuristic based on standard tools from Operations Research and Stochastic Optimization to solve the multi-sourcing problem, and we test their efficiency on real datasets. Before planning the production, some macroscopic indicators must be decided at the mid-term level, such as the quantity of raw materials to order or the size of produced lots. Continuous-time inventory models are used by some companies, but these models often rely on a trade-off between holding costs and setup costs. The latter are fixed costs paid when production is launched and are hard to estimate in practice. On the other hand, at the mid-term level, the flexibility of the means of production is already fixed and companies can easily estimate the maximal number of setups. Motivated by this observation, we propose extensions of some classical continuous-time inventory models with no setup costs and with a bound on the number of setups. We use standard tools from Continuous Optimization to compute the optimal macroscopic indicators. Finally, planning the production is a short-term decision consisting in deciding which items must be produced by the assembly line during the current period. This problem belongs to the well-studied class of Lot-Sizing Problems. As for mid-term decisions, these problems often rely on a trade-off between holding and setup costs. Basing our model on these industrial considerations, we keep the same point of view (no setup cost and a bound on the number of setups) and propose a new model. Although these are short-term decisions, production decisions must take future demand into account, which remains uncertain. We solve our production planning problem using standard tools from Operations Research and Stochastic Optimization, test its efficiency on real datasets, and compare it to heuristics used by Argon Consulting's clients.
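The mid-term modeling choice above drops setup costs and instead bounds the number of setups. A toy deterministic version of that choice is sketched below, with constant demand over a finite horizon; the equal-lot policy and the sawtooth inventory profile are standard textbook facts for this simple case, not the thesis's actual stochastic model.

```python
def best_lot_policy(demand_rate, horizon, holding_cost, max_setups):
    """'No setup cost, bounded number of setups' in its simplest form:
    with constant demand and at most N setups over the horizon, equally
    spaced equal lots are optimal, and holding cost decreases in N, so
    the setup bound is used in full."""
    n = max_setups                       # use every allowed setup
    lot = demand_rate * horizon / n      # equal lot sizes
    avg_inventory = lot / 2.0            # sawtooth inventory profile
    cost = holding_cost * avg_inventory * horizon
    return n, lot, cost

n, lot, cost = best_lot_policy(demand_rate=100.0, horizon=30.0,
                               holding_cost=0.2, max_setups=6)
print(f"setups: {n}, lot size: {lot:.1f}, total holding cost: {cost:.1f}")
```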
109

Management of a distribution network considering uncertain consumption and production forecasts

Buire, Jérôme 14 December 2018 (has links)
The voltage profiles inside the network and the power flows at the transport-distribution interface are modified by the massive insertion of renewable sources in distribution grids. The system's uncertainties cannot be handled by local controllers whose parameters are tuned at the actuator installation stage. A solution widely accepted in the literature consists of a centralized optimization of the actuator references (the distributed generators' reactive powers, the reference voltage of the on-load tap changer, and the capacitor banks' reactive power). Within this framework, a supervisor computes all references at the same time and delivers them to each actuator, which requires an efficient and reliable communication system. The main contribution of the thesis is to design an alternative approach that keeps the local control structures, whose settings are updated on an hourly basis. The optimization relies on a stochastic representation of the grid that accounts for the on-load tap changer uncertainties and day-ahead forecasts of production and consumption. It is shown that every variable of the system can be represented by Gaussian variables or sums of truncated Gaussian variables. A stochastic optimization then selects the controller settings that minimize overvoltages and control efforts, without using time-consuming algorithms such as Monte Carlo methods. This work also demonstrates that an appropriate management of uncertainties avoids unnecessary and costly oversizing.
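The payoff of the Gaussian representation above is that risk checks become closed-form quantile calculations instead of Monte Carlo runs. A minimal sketch of that kind of check, assuming a single node voltage modeled as Gaussian in per unit with an invented 1.05 p.u. limit and risk budget:

```python
from statistics import NormalDist

def overvoltage_probability(mu_v, sigma_v, v_max=1.05):
    """P(V > v_max) for a node voltage modeled as a Gaussian variable,
    in the spirit of the thesis's voltage representation."""
    return 1.0 - NormalDist(mu=mu_v, sigma=sigma_v).cdf(v_max)

def tune_setpoint(sigma_v, risk=0.01, v_max=1.05):
    """Highest voltage setpoint whose overvoltage risk stays below `risk`:
    the analytic calculation that replaces Monte Carlo sampling here."""
    z = NormalDist().inv_cdf(1.0 - risk)
    return v_max - z * sigma_v

sigma = 0.01  # p.u. std induced by consumption/production forecast errors
print(f"max setpoint: {tune_setpoint(sigma):.4f} p.u., "
      f"risk at 1.04 p.u.: {overvoltage_probability(1.04, sigma):.3%}")
```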
110

Bayesian optimal design of experiments for expensive black-box functions under uncertainty

Piyush Pandita (6561242) 10 June 2019 (has links)
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions or inputs for computer simulations in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material that has great applicability. One might also be interested in accurately modeling and analyzing a simulation of a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints, which gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations under aleatory and epistemic uncertainty needs to be treated in a Bayesian way. The aim of this thesis is to extend the state of the art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
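A common computational backbone for Bayesian design of this kind is a Gaussian-process surrogate paired with an acquisition function that decides the next experiment. The sketch below runs expected improvement on a noisy one-dimensional toy function; the GP prior, kernel length-scale, and acquisition choice are illustrative assumptions rather than the specific methods developed in the thesis.

```python
import numpy as np
from statistics import NormalDist

def rbf(A, B, ls=0.15):
    """Squared-exponential kernel with unit signal variance."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and std at test points Xs from noisy observations."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, y_best):
    """EI for maximization: E[max(f - y_best, 0)] under the GP posterior."""
    nd = NormalDist()
    z = (mu - y_best) / sd
    return (mu - y_best) * np.vectorize(nd.cdf)(z) + sd * np.vectorize(nd.pdf)(z)

f = lambda x: np.sin(6 * x) * x              # "expensive" black box (toy)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 4)                     # initial design
y = f(X) + rng.normal(scale=1e-2, size=4)    # aleatory noise on observations
grid = np.linspace(0, 1, 400)
for _ in range(8):                           # sequential design loop
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next) + rng.normal(scale=1e-2))
print(f"best input: {X[np.argmax(y)]:.3f}, best observed output: {y.max():.3f}")
```

The small initial design and the epistemic uncertainty carried by the GP posterior are exactly the limited-data, noisy-observation regime the abstract describes.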
