1 |
Optimal Supply Chain Configuration for the Additive Manufacturing of Biomedical Implants
Emelogu, Adindu Ahurueze, 09 December 2016 (has links)
In this dissertation, we study two important problems related to additive manufacturing (AM). In the first part, we investigate the economic feasibility of using AM to fabricate biomedical implants at the sites of hospitals versus traditional manufacturing (TM). We propose a cost model to quantify the supply-chain-level costs associated with the production of biomedical implants using AM technology, and formulate the problem as a two-stage stochastic programming model, which determines the number of AM facilities to be established and the volume of product flow between manufacturing facilities and hospitals at minimum cost. We use the sample average approximation (SAA) approach to obtain solutions to the problem for a real-world case study of hospitals in the state of Mississippi. We find that the ratio between the unit production costs of AM and TM (ATR), demand, and product lead time are the key cost parameters that determine the economic feasibility of AM. In the second part, we investigate AM facility deployment approaches, which affect both the supply chain network cost and the extent of benefits derived from AM. We formulate the supply chain network cost as a continuous approximation model and use optimization algorithms to determine how centralized or distributed the AM facilities should be and how much raw material these facilities should order so that the total network cost is minimized. We apply the cost model to a real-world case study of hospitals in 12 states of the southeastern USA. We find that the demand for biomedical implants in the region, the fixed investment cost of AM machines, the personnel cost of operating the machines, and transportation cost are the major factors that determine the optimal AM facility deployment configuration. In the last part, we propose an enhanced sample average approximation (eSAA) technique that improves on the basic SAA method. The eSAA technique uses clustering and statistical techniques to overcome the sample-size issue inherent in basic SAA. Results from extensive numerical experiments indicate that eSAA can perform up to 699% faster than basic SAA, making it a competitive solution approach for large-scale stochastic optimization problems.
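To make the SAA step concrete, the sketch below replaces the expected recourse cost of a small two-stage facility-location problem with an average over sampled demands and enumerates the first-stage decisions. It is a minimal illustration under assumed costs and demand distributions, not the dissertation's model or data.

```python
# A minimal sample average approximation (SAA) sketch for a two-stage
# facility-location problem. All numbers are hypothetical placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_sites, n_hospitals, n_scenarios = 3, 4, 200
fixed_cost = np.array([120.0, 100.0, 150.0])               # cost to open each AM site
ship_cost = rng.uniform(1.0, 5.0, (n_sites, n_hospitals))  # unit shipping cost
demand = rng.poisson(30, (n_scenarios, n_hospitals))       # sampled implant demand

def recourse(open_sites, d):
    """Cheapest assignment of one scenario's demand to open sites
    (capacity is ignored to keep the sketch simple)."""
    open_idx = [i for i in range(n_sites) if open_sites[i]]
    if not open_idx:
        return np.inf
    return sum(d[j] * min(ship_cost[i, j] for i in open_idx)
               for j in range(n_hospitals))

best = None
for combo in itertools.product([0, 1], repeat=n_sites):
    # SAA objective: fixed cost + average recourse cost over the sample
    avg_recourse = np.mean([recourse(combo, d) for d in demand])
    total = np.dot(fixed_cost, combo) + avg_recourse
    if best is None or total < best[1]:
        best = (combo, total)

print("open sites:", best[0], "SAA cost estimate:", round(best[1], 1))
```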
|
2 |
[en] METHODOLOGY FOR THE DEFINITION OF AN INSURANCE CONTRACT OPTIMAL PARAMETERS IN THE OIL AND GAS INDUSTRY / [pt] METODOLOGIA PARA DEFINIÇÃO DOS PARÂMETROS ÓTIMOS DE UM CONTRATO DE SEGUROS NA INDÚSTRIA DE ÓLEO E GÁS
ANA PATRICIA BARROS TORRACA, 01 February 2021 (has links)
[en] Operations in oil and gas companies are naturally dangerous and susceptible
to the occurrence of accidents. The financial losses due to accident damage can be substantial. To avoid the risk of high expenses, it is usual for firms to acquire
insurance. However, setting the right parameters for an insurance contract requires
estimating the firm's risk exposure, which is still a hard task. To handle this issue,
some authors suggest uncertainty characterization models based on safety barriers
and precursor information. This approach facilitates the definition of consequences
and also acts in a more predictive way when compared to usual models based only
on historical data. Then, an optimization model is suggested, using the results
obtained with the uncertainty characterization method mentioned as one of its
inputs. As loss functions are not fully known, in order to solve the stochastic
problem, a Sample Average Approximation (SAA) approach is used. The results
obtained were compared to the situation where the company does not acquire
insurance and to two other insurance contract options. The optimization model
proposed was the one that granted greater predictability to the loss values,
presenting the smallest standard deviation. The second-best option presented a
standard deviation 102 percent greater than the one obtained with the optimized
insurance. Also, the model provided greater protection against extreme events,
a characteristic shown by smaller VaR and CVaR values, with the second-best option presenting a CVaR 41 percent greater than the optimized model's CVaR.
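The VaR and CVaR figures reported above are tail statistics that can be estimated directly from simulated losses; a minimal sketch, with hypothetical lognormal losses standing in for the thesis's barrier-based model output, is:

```python
# Minimal sketch: estimating VaR and CVaR from simulated retained losses.
import numpy as np

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=10, sigma=1.2, size=100_000)  # hypothetical losses
alpha = 0.95

var = np.quantile(losses, alpha)       # Value-at-Risk at level alpha
cvar = losses[losses >= var].mean()    # expected loss in the tail beyond VaR

print(f"VaR({alpha}) = {var:,.0f}   CVaR({alpha}) = {cvar:,.0f}")
```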
|
3 |
Systematic Design of Bulk Recycling Systems under Uncertainty
Wei, Jing, 13 May 2004 (has links)
The fast-growing waste stream of electronic and other complex consumer products is making the bulk recycling problem an important environmental protection issue. These products must be recycled because they contain hazardous materials such as lead and mercury. The focus of this thesis is the development of systematic methods for designing systems to recover mixed plastics from electronic products such as computers and televisions.
Bulk recycling systems are similar to other chemical engineering process systems. Therefore, they can be synthesized and designed using existing techniques that have been applied to distillation and reaction systems. However, the existence of various uncertainties from different sources, such as variation in component fractions and product prices, makes it crucial to design a flexible and sustainable system, and this is also a major challenge in this research. Another challenge is that plastics can be separated by different mechanisms based on different properties, but separating a mix of plastics often requires a combination of methods because the plastics can have overlapping differentiating properties. Many decisions therefore need to be made, including which methods to choose and how to connect them.
To address the problem systematically, the design-under-uncertainty problem was formulated as a stochastic Mixed Integer Nonlinear Program (sMINLP). A Sample Average Approximation (SAA) method wrapped around the Outer Approximation method was developed in this thesis to solve such problems efficiently, so that large design-under-uncertainty problems can be solved without intractable computational difficulty. To allow choosing among separation methods with different mechanisms, this research modeled various plastics separation methods, taking into account the distribution of particle properties, and unified them using a canonical partition-curve representation. Finally, an overall design method was proposed to incorporate the design of size reduction units into the separation system.
This research is the first formal development of a systematic method in this area to account for uncertainties and interactions between process steps.
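The statistical machinery behind SAA can be sketched generically: averaging the optimal values of several independently sampled problems estimates a lower bound on the true optimum, while evaluating a candidate design on a large fresh sample gives an upper bound. The toy one-dimensional problem below stands in for the much harder sMINLP; only the gap logic is the point, and all numbers are assumptions.

```python
# Generic SAA optimality-gap sketch on the toy problem min_x E[(x - xi)^2].
import numpy as np

rng = np.random.default_rng(2)
M, N, N_big = 20, 100, 100_000  # replications, sample size, evaluation sample

# Lower bound: average optimal value of M independent sampled problems.
lower_vals = []
for _ in range(M):
    xi = rng.normal(size=N)
    x_hat = xi.mean()                        # optimizer of the sampled problem
    lower_vals.append(np.mean((x_hat - xi) ** 2))
lower = np.mean(lower_vals)

# Upper bound: evaluate one candidate solution on a large fresh sample.
candidate = rng.normal(size=N).mean()
xi_big = rng.normal(size=N_big)
upper = np.mean((candidate - xi_big) ** 2)

print(f"estimated optimality gap <= {upper - lower:.4f}")
```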
|
4 |
Airline network revenue management : integrated optimization of hub location and capacity allocation / Gestion des revenus dans un réseau de compagnies aériennes : optimisation intégrée de la localisation de plateforme et du dimensionnement de capacité
Hou, Yanting, 22 November 2019 (has links)
As one of the critical problems in the aviation industry, airline network revenue management has received significant attention in recent decades. However, many issues still need to be addressed. This thesis investigates four new airline network revenue management problems. First, a network capacity allocation problem with competitive alliances is studied. In this problem, horizontal and vertical competition and deterministic demand are considered. The aim is to maximize the global alliance revenue by determining the (seat) capacities in flights for each fare class of each airline. The problem is formulated as a mixed-integer program and solved with the commercial solver CPLEX. Second, an integrated p-hub median location and (seat) capacity allocation problem is investigated to maximize the combined average-case and worst-case profits of an airline. For this problem, an uncapacitated hub is considered and uncertain demand is represented by a finite set of scenarios.
The studied problem is formulated within a two-stage stochastic programming framework, and a Genetic Algorithm (GA) is proposed to solve the problem for each scenario. Computational results show that the proposed method outperforms those in the literature that consider only the average-case profit. The third studied problem is a generalization of the second one in which the capacity of the hub to be located is limited, and disruptions that can impact airline hub capacity, such as adverse weather, are considered. Two formulations of the problem are proposed, based on (1) a scenario-based two-stage stochastic programming framework and (2) a weight-based hybrid two-stage stochastic programming-robust optimization framework. A Sample Average Approximation (SAA) method and a GA are then applied to solve them, respectively. Computational results show that the SAA is more effective than the GA. The fourth problem is also an extension of the second one, where the airline is subject to a CO2 emission limit. The problem is modeled as a scenario-based two-stage stochastic program, and an SAA method is proposed to solve it.
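The GA is described only at a high level here; a generic sketch of a GA that selects p hubs from n candidates could look like the following, where the encoding, operators, and toy profit function are all illustrative assumptions, not the thesis's implementation.

```python
# Generic genetic-algorithm sketch for selecting p hubs out of n candidates.
import numpy as np

rng = np.random.default_rng(3)
n, p, pop_size, gens = 20, 3, 40, 100
site_value = rng.uniform(0.0, 10.0, n)          # stand-in profit per hub

def profit(hubs):
    return site_value[list(hubs)].sum()          # toy objective

def random_individual():
    return frozenset(rng.choice(n, size=p, replace=False).tolist())

def crossover(a, b):
    pool = list(a | b)                           # inherit hubs from both parents
    return frozenset(rng.choice(pool, size=p, replace=False).tolist())

def mutate(ind):
    ind = list(ind)
    ind[rng.integers(p)] = int(rng.integers(n))  # swap one hub at random
    return frozenset(ind) if len(set(ind)) == p else random_individual()

pop = [random_individual() for _ in range(pop_size)]
for _ in range(gens):
    pop.sort(key=profit, reverse=True)
    elite = pop[: pop_size // 2]                 # truncation selection
    children = []
    while len(children) < pop_size - len(elite):
        i, j = rng.integers(len(elite), size=2)
        children.append(mutate(crossover(elite[i], elite[j])))
    pop = elite + children

best = max(pop, key=profit)
print("best hubs:", sorted(best), "profit:", round(float(profit(best)), 2))
```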
|
5 |
Modeling the Effects of Electric Power Disruption and Expansion on the Operations of EV Charging Stations
Kabli, Mohannad Reda A, 10 August 2018 (has links)
The projected and current adoption rates of electric vehicles are increasing. Since electric vehicles must be recharged continually, the energy needed to support them is immense and growing. Given that existing infrastructure is insufficient to supply the projected energy needs, models are necessary to help decision makers plan how best to expand the power grid to meet this need. A successful power grid expansion is one that enables charging stations to serve the electric vehicle community. Thus, plans for power expansion need to be coordinated between the power grid and charging station investors. The infrastructure for the charging stations also has to be resilient and reliable enough to absorb this increase in load, so charging stations should be included in plans for post-disruption recovery. In this work, two two-stage stochastic programming models are developed that can be used to determine a power grid expansion plan that supports the energy needs, or load, from an uncertain set of electric vehicles geographically dispersed over a region. A three-stage stochastic programming model is also presented, in which decisions are first made on the charging stations at which to install or expand uninterruptible power supply units and renewable energy sources. Then, when the disruption occurs in the second stage, repairs to the power system and charging stations take place ahead of the arrival of a panicked population, to prepare for the expected surge in power demand. Finally, as demand is revealed, managerial and operational decisions at the charging stations are made in the third stage. To solve the mathematical models, we use hybrid approaches that mainly rely on Sample Average Approximation and the Progressive Hedging algorithm. To validate the proposed model and gain key insights, we perform computational experiments using realistic data representing the Washington, DC area. Our computational results indicate the robustness of the proposed algorithm while providing a number of managerial insights to decision makers.
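Progressive Hedging, one half of the hybrid approach mentioned above, decomposes the problem by scenario and forces the scenario copies of the first-stage decision to agree through penalties and multipliers. The sketch below runs the algorithm on a toy quadratic problem whose subproblems solve in closed form; it illustrates the mechanics only, under hypothetical data, not the dissertation's implementation.

```python
# Progressive Hedging sketch on the toy problem min_x sum_s p_s * (x - d_s)^2.
import numpy as np

d = np.array([80.0, 100.0, 130.0])   # scenario "demands" (hypothetical)
prob = np.array([0.3, 0.5, 0.2])     # scenario probabilities
rho = 1.0                            # penalty parameter
w = np.zeros_like(d)                 # nonanticipativity multipliers
x_bar = prob @ d                     # start from the scenario-mean solution

for it in range(200):
    # scenario subproblem: argmin_x (x - d_s)^2 + w_s*x + (rho/2)*(x - x_bar)^2
    x = (2 * d - w + rho * x_bar) / (2 + rho)
    x_bar = prob @ x                 # implementable (averaged) decision
    w += rho * (x - x_bar)           # push scenario copies toward consensus
    if np.abs(x - x_bar).max() < 1e-9:
        break

print("PH consensus:", round(float(x_bar), 3),
      "analytic optimum:", round(float(prob @ d), 3))
```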
|
6 |
A Location-Inventory Problem for Customers with Time Constraints
E, Fan, January 2016 (has links)
In this paper, a two-stage stochastic facility location problem integrated with inventory and recourse decisions is studied and solved. This problem is inspired by an industrial supply chain design problem of a large retail chain with slow-moving products. Uncertainty is expressed by a discrete and finite set of scenarios, and recourse actions can be taken after the realization of random demands. Location, inventory, transportation, and recourse decisions are integrated into a mixed-integer program with an objective of minimizing the expected total cost. A dual heuristic procedure is studied and embedded into the sample average approximation (SAA) method. The computational experiments demonstrate that the combined SAA and dual-heuristic algorithm achieves similar solution quality in a much shorter computational time. / Thesis / Master of Applied Science (MASc)
|
7 |
Negative Selection - An Absolute Measure of Arbitrary Algorithmic Order Execution / Negativna selekcija - Apsolutna mera algoritamskog izvršenja proizvoljnog naloga
Lončar, Sanja, 18 September 2017 (has links)
Algorithmic trading is an automated process of order execution on electronic stock markets. It can be applied to a broad range of financial instruments, and it is characterized by significant investor control over the execution of orders, with the principal goal of finding the right balance between the costs and the risk of not (fully) executing an order. Because measuring execution performance indicates whether best execution has been achieved, a significant number of different benchmarks are used in practice. The most frequently used are price benchmarks, some of which are determined before trading (pre-trade benchmarks), some during the trading day (intraday benchmarks), and some after the trade (post-trade benchmarks). The two most dominant are VWAP and the Arrival Price, which, along with other pre-trade price benchmarks, is known as the Implementation Shortfall (IS).

We introduce Negative Selection as an a posteriori measure of execution algorithm performance. It is based on the concept of Optimal Placement, which represents the ideal order that could be executed in a given time window, where "ideal" means an order with the best execution price given the market conditions during that window. Negative Selection is defined as the difference between the vectors of the optimal and the executed orders, with the vectors defined as quantities of shares at specified price positions in the order book. It is equal to zero when the order is optimally executed, negative if the order is not (completely) filled, and positive if the order is executed but at an unfavorable price.

Negative Selection is intended to offer a new, alternative performance measure, one that makes it possible to find optimal trajectories and construct the optimal execution of an order.

The first chapter of the thesis includes a list of notation and an overview of definitions and theorems used throughout the thesis. Chapters 2 and 3 give a theoretical overview of concepts related to market microstructure, basic information regarding benchmarks, and the theoretical background of algorithmic trading. Original results are presented in Chapters 4 and 5. Chapter 4 includes the construction of the optimal placement and the definition and properties of Negative Selection; the results regarding the properties of Negative Selection are given in [35]. Chapter 5 contains the theoretical background for stochastic optimization, a model of optimal execution formulated as a stochastic optimization problem with respect to Negative Selection, and original work on a nonmonotone line search method [31], while numerical results are in the final, sixth chapter.
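Because Negative Selection is a vector difference over order-book price levels, it can be computed directly. The sketch below uses one consistent reading of the sign convention described above (negative entries for unfilled shares, positive entries for fills at unfavorable prices); the quantities and levels are entirely hypothetical, and the thesis's exact notation may differ.

```python
# Minimal sketch of a Negative Selection vector: executed vs. optimal
# share quantities at order-book price positions (hypothetical numbers).
import numpy as np

prices = [100.1, 100.2, 100.3]          # buy-side price levels
optimal = np.array([600, 400, 0])       # ideal fills in the time window
executed = np.array([600, 100, 200])    # what the algorithm achieved

ns = executed - optimal                 # Negative Selection vector
for price, q in zip(prices, ns):
    print(price, int(q))
# 100.1: 0 (optimal), 100.2: -300 (shares missed),
# 100.3: +200 (filled at an unfavorable price)
```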
|
8 |
An empirical analysis of scenario generation methods for stochastic optimization
Löhndorf, Nils, 17 May 2016 (has links) (PDF)
This work presents an empirical analysis of popular scenario generation methods for stochastic optimization, including quasi-Monte Carlo, moment matching, and methods based on probability metrics, as well as a new method referred to as Voronoi cell sampling. Solution quality is assessed by measuring the error that arises from using scenarios to solve a multi-dimensional newsvendor problem, for which analytical solutions are available. In addition to the expected value, the work also studies scenario quality when minimizing the expected shortfall using the conditional value-at-risk. To quickly solve problems with millions of random parameters, a reformulation of the risk-averse newsvendor problem is proposed which can be solved via Benders decomposition. The empirical analysis identifies Voronoi cell sampling as the method that provides the lowest errors, with particularly good results for heavy-tailed distributions. A controversial finding concerns evidence for the ineffectiveness of widely used methods based on minimizing probability metrics under high-dimensional randomness.
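For intuition, scenario generation of the kind compared here can be prototyped as quantization of Monte Carlo draws: Lloyd's (k-means) iterations place scenario points at cell means and assign each Voronoi cell's probability mass to its point. Treating this as a stand-in for Voronoi cell sampling is our assumption, not the paper's exact procedure.

```python
# Scenario generation by quantizing Monte Carlo draws with Lloyd's (k-means)
# iterations; assumed here as a stand-in for Voronoi-cell-style sampling.
import numpy as np

rng = np.random.default_rng(4)
draws = rng.lognormal(0.0, 0.8, size=(20_000, 3))   # heavy-tailed demand draws
k = 25                                              # number of scenarios

centers = draws[rng.choice(len(draws), size=k, replace=False)]
for _ in range(50):
    # assign each draw to its nearest center, i.e. to its Voronoi cell
    dist = ((draws[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    cell = dist.argmin(axis=1)
    # move each center to its cell mean (keep the old one if the cell is empty)
    centers = np.array([draws[cell == j].mean(axis=0) if (cell == j).any()
                        else centers[j] for j in range(k)])

probs = np.bincount(cell, minlength=k) / len(draws)  # cell probability masses
print("scenarios:", centers.shape, "total mass:", probs.sum())
```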
|
9 |
Data Science and the Ice-Cream Vendor Problem
Azasoo, Makafui, 01 August 2021 (has links)
Newsvendor problems in Operations Research determine the optimal inventory levels necessary to meet uncertain demand. This thesis examines an extended version of a single-period multi-product newsvendor problem known as the ice cream vendor problem. In the ice cream vendor problem, there are two products, ice cream and hot chocolate, which may be substituted for one another when the outside temperature is neither too hot nor too cold. In particular, the ice cream vendor problem is a data-driven extension of the conventional newsvendor problem that does not require the assumption of a specific demand distribution, thus allowing the demand for ice cream and hot chocolate, respectively, to be temperature-dependent. Using discrete event simulation, we first simulate a real-world scenario of an ice cream vendor problem via a demand whose expected value is a function of temperature. A sample average approximation technique is subsequently used to transform the stochastic newsvendor program into a feature-driven linear program based on the exogenous factors of probability of rainfall and temperature. The resulting problem is a multi-product newsvendor linear program with L1-regularization. The solution to this problem yields the expected cost to the ice cream vendor as well as the optimal order quantities for ice cream and hot chocolate, respectively.
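Stripped of the feature-driven regression and product substitution, the SAA core of such a model reduces to an empirical quantile: simulate temperature-dependent demand and order at the critical fractile. The demand model and costs below are hypothetical stand-ins, not the thesis's data.

```python
# Data-driven single-product newsvendor sketch: the SAA solution is the
# empirical quantile of sampled demand at the critical fractile.
import numpy as np

rng = np.random.default_rng(5)
temps = rng.uniform(10, 35, 5_000)                       # sampled temperatures (C)
demand = rng.poisson(np.maximum(5.0, 8 * (temps - 15)))  # ice cream demand

cu, co = 3.0, 1.0                  # underage (lost margin) and overage costs
fractile = cu / (cu + co)          # critical ratio of the newsvendor model
order = np.quantile(demand, fractile)

print(f"order quantity: {order:.0f} (critical fractile {fractile:.2f})")
```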
|
10 |
Models and Algorithms to Solve a Reliable and Congested Biomass Supply Chain Network Designing Problem under Uncertainty
Poudel, Sushil Raj, 06 May 2017 (has links)
This dissertation studies two important problems in the field of biomass supply chain networks. In the first part of the dissertation, we study the pre-disaster planning problem that seeks to strengthen the links between the multi-modal facilities of a biomass supply chain network. A mixed-integer nonlinear programming model is developed to determine the optimal locations for multi-modal facilities and bio-refineries, offer suggestions on reliability improvement at vulnerable links, set production at bio-refineries, and make transportation decisions under both normal and disrupted scenarios. The aim is to assist investors in determining which links' reliability should be improved under specific budget limitations so that the biofuel supply chain network can prevent possible losses when transportation links are disrupted by natural disasters. We used the states of Mississippi and Alabama as a testing ground for our model. As part of the numerical experimentation, some realistic hurricane scenarios are presented to determine the potential impact that pre-investing may have on improving the biomass supply chain network's reliability on vulnerable transportation links under limited budget availability. In the second part of the dissertation, we study the impact of feedstock supply uncertainty on the design and management of an inbound biomass co-firing supply chain network. A two-stage stochastic mixed-integer linear programming model is developed to determine the optimal use of multi-modal facilities, biomass storage and processing plants, and shipment routes for delivering biomass to coal plants under feedstock supply uncertainty while taking congestion into account. To represent a more realistic case, we generated a scenario tree based on the prediction errors obtained from historical and forecasted feedstock supply availability. We linearized the nonlinear problem and solved it to high quality in a time-efficient manner using a hybrid decomposition algorithm that combines a constraint generation algorithm with the sample average approximation algorithm and an enhanced progressive hedging algorithm. We used the states of Mississippi and Alabama as a testing ground for our study and conducted thorough computational experiments to test our model and draw managerial insights.
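A scenario tree built from prediction errors can be approximated with a simple bootstrap: resample historical forecast errors and add them to a point forecast to obtain equiprobable supply scenarios. Everything below (the forecast, the error sample, the scenario count) is a hypothetical placeholder, not the dissertation's data.

```python
# Sketch of scenario generation from forecast errors: bootstrap historical
# errors around a point forecast of biomass supply.
import numpy as np

rng = np.random.default_rng(6)
forecast = np.array([900.0, 950.0, 1000.0, 980.0])  # supply forecast (tons/period)
hist_errors = rng.normal(0, 60, size=200)           # stand-in for observed errors

n_scenarios = 10
scenarios = forecast + rng.choice(hist_errors, size=(n_scenarios, len(forecast)))
probs = np.full(n_scenarios, 1.0 / n_scenarios)     # equiprobable scenarios

print(scenarios.round(0))
```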
|