11 |
Best Longitudinal Adjustment of Satellite Trajectories for the Observation of Forest Fires (Blastoff): A Stochastic Programming Approach to Satellite System Design
Hoskins, Aaron Bradley, 06 May 2017 (has links)
Forest fires cause a significant amount of damage and destruction each year. Optimally dispatching resources reduces the amount of damage a forest fire can cause. Models predict the fire spread to provide the data required to optimally dispatch resources. However, the models are only as accurate as the data used to build them. Satellites are one valuable tool in the collection of data for forest fire models: they provide data on vegetation types, wind speed and direction, soil moisture content, etc. The current operating paradigm is to passively collect data when possible. However, images taken from directly overhead offer better resolution and are easier to process, so maneuvering a constellation of satellites to fly directly over the forest fire yields higher-quality data than the current operating paradigm achieves. Before launch, the location of the forest fire is unknown, so it is impossible to optimize the initial orbits for the satellites; instead, the expected cost of maneuvering to observe the forest fire determines the optimal initial orbits. A two-stage stochastic programming approach is well suited to this class of problem, where initial decisions are made under an uncertain future and subsequent decisions are made once a scenario is realized. A repeat ground track orbit provides a non-maneuvering, natural solution with a daily flyover of the forest fire, while additional maneuvers provide a second daily flyover. The additional maneuvering comes at a significant cost in fuel but provides more data collection opportunities. After data are collected, ground stations receive the data for processing. Optimally selecting the ground station locations reduces the number of ground stations that must be built and mitigates data fusion issues. However, the location of the forest fire alters the optimal ground station sites.
A two-stage stochastic programming approach optimizes the selection of ground stations to maximize the expected amount of data downloaded from a satellite. Selecting the initial orbits and ground station locations with uncertainty taken into account yields a robust system that reduces the amount of damage caused by forest fires.
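The two-stage structure described above (choose initial orbits now, pay a scenario-dependent maneuvering cost once the fire location is realized) can be sketched in a few lines. The sketch below is illustrative only: orbits and fire locations are collapsed to points on a line, and `maneuver_cost` is a hypothetical stand-in for the real astrodynamics. The first stage picks the candidate that minimizes the sample average of the second-stage cost.

```python
import random

def expected_cost(initial_orbit, scenarios, maneuver_cost):
    # Second stage: for each realized fire location, pay the cost of
    # maneuvering from the chosen initial orbit to observe it.
    return sum(maneuver_cost(initial_orbit, s) for s in scenarios) / len(scenarios)

def best_initial_orbit(candidate_orbits, scenarios, maneuver_cost):
    # First stage: pick the orbit minimizing the expected recourse cost.
    return min(candidate_orbits, key=lambda o: expected_cost(o, scenarios, maneuver_cost))

# Toy example: orbits and fires as points on a line; maneuver cost = distance.
random.seed(0)
orbits = [0.0, 0.5, 1.0]
fires = [random.random() for _ in range(1000)]   # uniformly distributed fire locations
cost = lambda o, f: abs(o - f)
print(best_initial_orbit(orbits, fires, cost))   # → 0.5, the orbit nearest the typical fire
```

With uniformly distributed fires, the middle candidate wins because it minimizes the expected distance, which is exactly the expected-cost criterion described in the abstract.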
|
12 |
[en] PARTITION-BASED METHOD FOR TWO-STAGE STOCHASTIC LINEAR PROGRAMMING PROBLEMS WITH COMPLETE RECOURSE / [pt] MÉTODO DE PARTIÇÃO PARA PROBLEMAS DE PROGRAMAÇÃO LINEAR ESTOCÁSTICA DOIS ESTÁGIOS COM RECURSO COMPLETO
CARLOS ANDRES GAMBOA RODRIGUEZ, 22 March 2018 (has links)
[pt] A parte mais difícil de modelar os problemas de tomada de decisão do mundo real é a incerteza associada à realização de eventos futuros. A programação estocástica se encarrega desse assunto; o objetivo é achar soluções que sejam factíveis para todas as possíveis realizações dos dados, otimizando o valor esperado de algumas funções das variáveis de decisão e de incerteza. A abordagem mais estudada está baseada em simulação de Monte Carlo e no método SAA (Sample Average Approximation), o qual é uma formulação do problema verdadeiro para cada realização dos dados incertos, que pertence a um conjunto finito de cenários uniformemente distribuídos. É possível provar que o valor ótimo e a solução ótima do problema SAA convergem a seus homólogos do problema verdadeiro quando o número de cenários é suficientemente grande. Embora essa abordagem seja útil, existem fatores limitantes sobre o custo computacional para obter soluções mais precisas aumentando o número de cenários; no entanto, o fato mais importante é que o problema SAA é função de cada amostra gerada e por essa razão é aleatório, o que significa que a sua solução também é incerta; para medir essa incerteza é necessário considerar o número de replicações do problema SAA a fim de estimar a dispersão da solução, aumentando assim o custo computacional. O propósito deste trabalho é apresentar uma abordagem alternativa baseada em um método de partição que permite obter cotas para estimar deterministicamente a solução do problema original, com aplicação da desigualdade de Jensen e de técnicas de otimização robusta. No final se analisa
a convergência dos algoritmos de solução propostos. / [en] The hardest part of modelling real-world decision-making problems is the uncertainty associated with the realization of future events. Stochastic programming addresses this subject: the goal is to find solutions that are feasible for all possible realizations of the unknown data while optimizing the expected value of some function of the decision variables and random variables. The most studied approach is based on Monte Carlo simulation and the Sample Average Approximation (SAA) method, which is a kind of discretization of the expected value that considers a finite set of realizations, or scenarios, uniformly distributed. It is possible to prove that the optimal value and the optimal solution of the SAA problem converge to their counterparts of the true problem when the number of scenarios is sufficiently large. Although that approach is useful, there are limiting factors in the computational cost of increasing the number of scenarios to obtain a better solution; the most important fact, however, is that the SAA problem is a function of each generated sample and for that reason is random, which means that its solution is also uncertain. To measure this uncertainty it is necessary to consider replications of the SAA problem to estimate the dispersion of the estimated solution, increasing the computational cost even further. The purpose of this work is to present an alternative approach based on robust optimization techniques and applications of Jensen's inequality to obtain bounds on the optimal solution, partitioning the support of the distribution of the unknown data (without scenario generation) and taking advantage of convexity. At the end of this work the convergence of the bounding problem and the proposed solution algorithms are analyzed.
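The Jensen-based bounding idea mentioned above can be illustrated on a one-dimensional toy. In the sketch below (illustrative only, not the thesis's partition method), `q` plays the role of a convex recourse cost: applying Jensen's inequality on each cell of a partition of the support gives a deterministic lower bound on the expectation, and refining the partition tightens it.

```python
# Jensen lower bound on E[q(xi)] for convex q, tightened by partitioning
# the support of xi: sum_k p_k * q(mean of cell k) <= E[q(xi)].
def jensen_lower_bound(q, samples, n_cells):
    samples = sorted(samples)
    cell = len(samples) // n_cells
    bound = 0.0
    for k in range(n_cells):
        chunk = samples[k*cell : (k+1)*cell if k < n_cells - 1 else len(samples)]
        mean = sum(chunk) / len(chunk)          # conditional mean on the cell
        bound += len(chunk) / len(samples) * q(mean)
    return bound

q = lambda xi: xi * xi                 # a convex recourse cost (hypothetical)
xs = [i / 999 for i in range(1000)]    # uniform grid standing in for the support
exact = sum(q(x) for x in xs) / len(xs)
b1 = jensen_lower_bound(q, xs, 1)      # classical Jensen: q(overall mean)
b4 = jensen_lower_bound(q, xs, 4)      # 4-cell partition
assert b1 <= b4 <= exact               # refinement tightens the bound from below
```

No scenarios are sampled at solve time: the bound is computed deterministically from the partition, which is the contrast with SAA that the abstract emphasizes.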
|
13 |
Models and Algorithms to Solve Electric Vehicle Charging Stations Designing and Managing Problem under Uncertainty
Quddus, Md Abdul, 14 December 2018 (has links)
This dissertation develops a framework in support of electric vehicle (EV) charging station expansion and management decisions. In the first part of the dissertation, we present a mathematical model for designing and managing electric vehicle charging stations, considering both long-term planning decisions and short-term hourly operational decisions (e.g., the number of batteries charged, discharged through Battery-to-Grid (B2G), stored, or used for Vehicle-to-Grid (V2G), and renewable and grid power usage) over a pre-specified planning horizon and under stochastic power demand. The model captures, and linearizes, the non-linear load congestion effect that grows exponentially as the electricity consumed by plugged-in EVs approaches the capacity of the charging station. The study proposes a hybrid decomposition algorithm that utilizes Sample Average Approximation and an enhanced Progressive Hedging Algorithm (PHA) inside a constraint generation algorithmic framework to efficiently solve the proposed optimization model. A case study based on a road network of Washington, D.C. is presented to visualize and validate the modeling results. Computational experiments demonstrate the effectiveness of the proposed algorithm in solving the problem in a practical amount of time. Findings of the study include that incorporating the load congestion factor encourages the opening of large-sized charging stations, increases the number of stored batteries, and that higher congestion costs call for a decrease in the opening of new charging stations. The second part of the dissertation investigates the performance of a collaborative decision model to optimize electricity flow among commercial buildings, electric vehicle charging stations, and the power grid under power demand uncertainty. A two-stage stochastic programming model is proposed to incorporate energy sharing and collaborative decisions among network entities with the aim of minimizing the overall energy network cost.
We use San Francisco, California as a testing ground to visualize and validate the modeling results. Computational experiments draw managerial insights into how different key input parameters (e.g., grid power unavailability, power collaboration restrictions) affect the overall energy network design and cost. Finally, a novel disruption prevention model is proposed for designing and managing EV charging stations with respect to both long-term planning and short-term operational decisions, over a pre-determined planning horizon and under stochastic power demand. Long-term planning decisions determine the type, location, and opening time of charging stations, while short-term operational decisions manage power resource utilization. A non-linear term is introduced into the model to prevent the evolution of excessive temperature on a power line under stochastic exogenous factors such as outside temperature and air velocity. Since the research problem is NP-hard, a Sample Average Approximation method enhanced with a scenario decomposition algorithm based on a Lagrangian decomposition scheme is proposed to obtain a good-quality solution within a reasonable computational time. As a testing ground, the road network of Washington, D.C. is considered to visualize and validate the modeling results. The results of the analysis provide a number of managerial insights to help decision makers achieve a more reliable and cost-effective electricity supply network.
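Progressive Hedging, one ingredient of the hybrid algorithm described above, can be illustrated on a toy problem whose scenario subproblem has a closed form. The sketch below is a generic PH iteration (scenario solves, averaging, multiplier update), not the enhanced PHA of the dissertation, and the demand scenarios are made up.

```python
# A generic Progressive Hedging iteration on a toy two-stage problem:
# minimize E[(x - xi)^2] over equally likely scenarios xi, whose exact
# solution is the scenario mean. Each scenario subproblem
#   min_x (x - xi)^2 + w*x + (rho/2)*(x - xbar)^2
# is solved in closed form; w are the nonanticipativity multipliers.
def progressive_hedging(scenarios, rho=1.0, iters=100):
    w = [0.0] * len(scenarios)
    xbar = sum(scenarios) / len(scenarios)
    for _ in range(iters):
        xs = [(2*xi - wi + rho*xbar) / (2 + rho) for xi, wi in zip(scenarios, w)]
        xbar = sum(xs) / len(xs)                            # implementable decision
        w = [wi + rho*(x - xbar) for wi, x in zip(w, xs)]   # multiplier update
    return xbar

demand_scenarios = [3.0, 5.0, 10.0]    # hypothetical sampled power demands
print(progressive_hedging(demand_scenarios))   # ≈ 6.0, the scenario mean
```

The multipliers force the scenario copies of the first-stage decision to agree; for a convex problem like this one the iterates converge to the nonanticipative optimum.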
|
14 |
Electric Vehicle Charging Network Design and Control Strategies
Wu, Fei, January 2016 (has links)
No description available.
|
15 |
Optimizacija problema sa stohastičkim ograničenjima tipa jednakosti – kazneni metodi sa promenljivom veličinom uzorka / Optimization of problems with stochastic equality constraints – penalty variable sample size methods
Rožnjik, Andrea, 24 January 2019 (has links)
U disertaciji je razmatran problem stohastičkog programiranja s ograničenjima tipa jednakosti, odnosno problem minimizacije s ograničenjima koja su u obliku matematičkog očekivanja. Za rešavanje posmatranog problema kreirana su dva iterativna postupka u kojima se u svakoj iteraciji računa s uzoračkim očekivanjem kao aproksimacijom matematičkog očekivanja. Oba postupka koriste prednosti postupaka s promenljivom veličinom uzorka zasnovanih na adaptivnom ažuriranju veličine uzorka. To znači da se veličina uzorka određuje na osnovu informacija u tekućoj iteraciji. Konkretno, tekuće informacije o preciznosti aproksimacije očekivanja i tačnosti aproksimacije rešenja problema definišu veličinu uzorka za narednu iteraciju. Oba iterativna postupka su zasnovana na linijskom pretraživanju, a kako je u pitanju problem s ograničenjima, i na kvadratnom kaznenom postupku prilagođenom stohastičkom okruženju. Postupci su zasnovani na istim idejama, ali s različitim pristupom. Po prvom pristupu postupak je kreiran za rešavanje SAA reformulacije problema stohastičkog programiranja, dakle za rešavanje aproksimacije originalnog problema. To znači da je uzorak definisan pre iterativnog postupka, pa je analiza konvergencije algoritma deterministička. Pokazano je da se, pod standardnim pretpostavkama, navedenim algoritmom dobija podniz iteracija čija je tačka nagomilavanja KKT tačka SAA reformulacije. Po drugom pristupu je formiran algoritam za rešavanje samog problema stohastičkog programiranja, te je analiza konvergencije stohastička. Predstavljenim algoritmom se generiše podniz iteracija čija je tačka nagomilavanja, pod standardnim pretpostavkama za stohastičku optimizaciju, skoro sigurno KKT tačka originalnog problema. Predloženi algoritmi su implementirani na istim test problemima. Rezultati numeričkog testiranja prikazuju njihovu efikasnost u rešavanju posmatranih problema u poređenju s postupcima u kojima je ažuriranje veličine uzorka zasnovano na unapred definisanoj šemi. Za meru efikasnosti je upotrebljen broj izračunavanja funkcija. Dakle, na osnovu rezultata dobijenih na skupu testiranih problema može se zaključiti da se adaptivnim ažuriranjem veličine uzorka može uštedeti u broju evaluacija funkcija kada su u pitanju i problemi s ograničenjima. Kako je posmatrani problem deterministički, a formulisani postupci su stohastički, prva tri poglavlja disertacije sadrže osnovne pojmove determinističke i stohastičke optimizacije, ali i kratak pregled definicija i teorema iz drugih oblasti potrebnih za lakše praćenje analize originalnih rezultata. Nastavak disertacije čini prikaz formiranih algoritama, analiza njihove konvergencije i numerička implementacija. / A stochastic programming problem with equality constraints is considered in the thesis. More precisely, the problem is a minimization problem with constraints in the form of mathematical expectation. We propose two iterative methods for solving the considered problem. Both procedures, in each iteration, use a sample average function instead of the mathematical expectation function and employ the advantages of the variable sample size method based on adaptively updating the sample size. That means the sample size is determined at every iteration using information from the current iteration. Concretely, the current precision of the approximation of the expectation and the quality of the approximation of the solution determine the sample size for the next iteration. Both iterative procedures are based on the line search technique as well as on the quadratic penalty method adapted to the stochastic environment, since the considered problem has constraints. The procedures rely on the same ideas, but the approach is different. In the first approach, the algorithm is created for solving an SAA reformulation of the stochastic programming problem, i.e., for solving the approximation of the original problem. That means the sample size is determined before the iterative procedure, so the convergence analysis is deterministic. We show that, under the standard assumptions, the proposed algorithm generates a subsequence whose accumulation point is a KKT point of the SAA problem. The algorithm formed by the second approach is for solving the stochastic programming problem itself, and therefore the convergence analysis is stochastic. It generates a subsequence with an accumulation point that is almost surely a KKT point of the original problem, under the standard assumptions for stochastic optimization. Both algorithms are implemented on the same test problems; numerical results show their efficiency compared with procedures in which the sample size update follows a predefined scheme for the sample size. The number of function evaluations is used as the measure of efficiency. Results on the set of tested problems suggest that adaptive sample size scheduling can reduce the number of function evaluations in the case of constrained problems, too. Since the considered problem is deterministic but the proposed procedures are stochastic, the first three chapters of the thesis contain basic notions of deterministic and stochastic optimization, as well as a short review of definitions and theorems from other fields necessary for easier tracking of the analysis of the original results. The rest of the thesis consists of the presented algorithms, their convergence analysis, and the numerical implementation.
|
16 |
Line search methods with variable sample size / Metodi linijskog pretrazivanja sa promenljivom velicinom uzorka
Krklec Jerinkić, Nataša, 17 January 2014 (has links)
The problem under consideration is an unconstrained optimization problem with the objective function in the form of a mathematical expectation. The expectation is with respect to the random variable that represents the uncertainty. Therefore, the objective function is in fact deterministic. However, finding the analytical form of that objective function can be very difficult or even impossible. This is the reason why the sample average approximation is often used. In order to obtain a reasonably good approximation of the objective function, we have to use a relatively large sample size. We assume that the sample is generated at the beginning of the optimization process and therefore we can consider this sample average objective function as deterministic. However, applying some deterministic method on that sample average function from the start can be very costly. The number of evaluations of the function under expectation is a common way of measuring the cost of an algorithm. Therefore, methods that vary the sample size throughout the optimization process have been developed. Most of them try to determine the optimal dynamics of increasing the sample size.

The main goal of this thesis is to develop a class of methods that can decrease the cost of an algorithm by decreasing the number of function evaluations. The idea is to decrease the sample size whenever it seems reasonable; roughly speaking, we do not want to impose a large precision, i.e. a large sample size, when we are far away from the solution we search for. The detailed description of the new methods is presented in Chapter 4 together with the convergence analysis. It is shown that the approximate solution is of the same quality as the one obtained by dealing with the full sample from the start.

Another important characteristic of the methods proposed here is the line search technique which is used for obtaining the subsequent iterates. The idea is to find a suitable direction and to search along it until we obtain a sufficient decrease in the function value. The sufficient decrease is determined through the line search rule. In Chapter 4, that rule is supposed to be monotone, i.e. we impose a strict decrease of the function value. In order to decrease the cost of the algorithm even more and to enlarge the set of suitable search directions, we use nonmonotone line search rules in Chapter 5. Within that chapter, these rules are modified to fit the variable sample size framework. Moreover, the conditions for global convergence and the R-linear rate are presented.

In Chapter 6, numerical results are presented. The test problems are various: some of them are academic and some are real-world problems. The academic problems give us more insight into the behavior of the algorithms, while data coming from real-world problems test the real applicability of the proposed algorithms. In the first part of that chapter, the focus is on the variable sample size techniques: different implementations of the proposed algorithm are compared to each other and to other sample schemes as well. The second part is mostly devoted to the comparison of the various line search rules combined with different search directions in the variable sample size framework. The overall numerical results show that using the variable sample size can improve the performance of the algorithms significantly, especially when the nonmonotone line search rules are used.

The first chapter of this thesis provides the background material for the subsequent chapters. In Chapter 2, basics of nonlinear optimization are presented with the focus on the line search, while Chapter 3 deals with the stochastic framework. These chapters provide a review of the relevant known results, while the rest of the thesis represents the original contribution. / U okviru ove teze posmatra se problem optimizacije bez ograničenja pri čemu je funkcija cilja u formi matematičkog očekivanja. Očekivanje se odnosi na slučajnu promenljivu koja predstavlja neizvesnost. Zbog toga je funkcija cilja, u stvari, deterministička veličina. Ipak, određivanje analitičkog oblika te funkcije cilja može biti vrlo komplikovano pa čak i nemoguće. Zbog toga se za aproksimaciju često koristi uzoračko očekivanje. Da bi se postigla dobra aproksimacija, obično je neophodan obiman uzorak. Ako pretpostavimo da se uzorak realizuje pre početka procesa optimizacije, možemo posmatrati uzoračko očekivanje kao determinističku funkciju. Međutim, primena nekog od determinističkih metoda direktno na tu funkciju može biti veoma skupa jer evaluacija funkcije pod očekivanjem često predstavlja veliki trošak i uobičajeno je da se ukupan trošak optimizacije meri po broju izračunavanja funkcije pod očekivanjem. Zbog toga su razvijeni metodi sa promenljivom veličinom uzorka. Većina njih je bazirana na određivanju optimalne dinamike uvećanja uzorka.

Glavni cilj ove teze je razvoj algoritma koji, kroz smanjenje broja izračunavanja funkcije, smanjuje ukupne troškove optimizacije. Ideja je da se veličina uzorka smanji kad god je to moguće. Grubo rečeno, izbegava se korišćenje velike preciznosti (velikog uzorka) kada smo daleko od rešenja. U četvrtom poglavlju ove teze opisana je nova klasa metoda i predstavljena je analiza konvergencije. Dokazano je da je aproksimacija rešenja koju dobijamo bar toliko dobra koliko i za metod koji radi sa celim uzorkom sve vreme.

Još jedna bitna karakteristika metoda koji su ovde razmatrani je primena linijskog pretraživanja u cilju određivanja naredne iteracije. Osnovna ideja je da se nađe odgovarajući pravac i da se duž njega vrši pretraga za dužinom koraka koja će dovoljno smanjiti vrednost funkcije. Dovoljno smanjenje je određeno pravilom linijskog pretraživanja. U četvrtom poglavlju to pravilo je monotono što znači da zahtevamo striktno smanjenje vrednosti funkcije. U cilju još većeg smanjenja troškova optimizacije kao i proširenja skupa pogodnih pravaca, u petom poglavlju koristimo nemonotona pravila linijskog pretraživanja koja su modifikovana zbog promenljive veličine uzorka. Takođe, razmatrani su uslovi za globalnu konvergenciju i R-linearnu brzinu konvergencije.

Numerički rezultati su predstavljeni u šestom poglavlju. Test problemi su različiti: neki od njih su akademski, a neki su realni. Akademski problemi su tu da nam daju bolji uvid u ponašanje algoritama. Sa druge strane, podaci koji potiču od stvarnih problema služe kao pravi test za primenljivost pomenutih algoritama. U prvom delu tog poglavlja akcenat je na načinu ažuriranja veličine uzorka. Različite varijante metoda koji su ovde predloženi porede se međusobno kao i sa drugim šemama za ažuriranje veličine uzorka. Drugi deo poglavlja pretežno je posvećen poređenju različitih pravila linijskog pretraživanja sa različitim pravcima pretraživanja u okviru promenljive veličine uzorka. Uzimajući sve postignute rezultate u obzir dolazi se do zaključka da variranje veličine uzorka može značajno popraviti učinak algoritma, posebno ako se koriste nemonotone metode linijskog pretraživanja.

U prvom poglavlju ove teze opisana je motivacija kao i osnovni pojmovi potrebni za praćenje preostalih poglavlja. U drugom poglavlju je iznet pregled osnova nelinearne optimizacije sa akcentom na metode linijskog pretraživanja, dok su u trećem poglavlju predstavljene osnove stohastičke optimizacije. Pomenuta poglavlja su tu radi pregleda dosadašnjih relevantnih rezultata dok je originalni doprinos ove teze predstavljen u poglavljima 4-6.
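A minimal version of the nonmonotone line search rule discussed above, applied to a fixed sample average function, might look as follows. The Grippo-style rule that measures sufficient decrease against the maximum of the last few function values is one common choice; all names and parameters here are illustrative.

```python
# A sketch of a nonmonotone Armijo backtracking rule: sufficient decrease
# is required relative to the max of the last m function values, which
# admits steps a strictly monotone rule would reject.
def nonmonotone_armijo(f, grad, x0, m=5, eta=1e-4, steps=50):
    x, history = x0, [f(x0)]
    for _ in range(steps):
        g = grad(x)
        d = -g                                  # steepest descent direction
        t, ref = 1.0, max(history[-m:])         # nonmonotone reference value
        while f(x + t*d) > ref + eta*t*g*d:     # note: g*d = -g**2 < 0
            t *= 0.5                            # backtrack
        x = x + t*d
        history.append(f(x))
    return x

# Fixed sample average objective: f(x) = mean((x - s)^2), minimized at the mean.
sample = [1.0, 2.0, 6.0]
f = lambda x: sum((x - s)**2 for s in sample) / len(sample)
grad = lambda x: sum(2*(x - s) for s in sample) / len(sample)
assert abs(nonmonotone_armijo(f, grad, 0.0) - 3.0) < 1e-6
```

In the variable sample size setting of the thesis, `f` and `grad` would change between iterations as the sample grows or shrinks; the fixed-sample version above isolates just the line search rule.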
|
17 |
Risk neutral and risk averse approaches to multistage stochastic programming with applications to hydrothermal operation planning problems
Tekaya, Wajdi, 14 March 2013 (has links)
The main objective of this thesis is to investigate risk neutral and risk averse approaches to multistage stochastic programming, with applications to hydrothermal operation planning problems. The purpose of hydrothermal system operation planning is to define an operation strategy which, for each stage of the planning period and given the system state at the beginning of the stage, produces generation targets for each plant. This problem can be formulated as a large-scale multistage stochastic linear programming problem. The energy rationing that took place in Brazil in the period 2001/2002 raised the question of whether a policy based on the criterion of minimizing the expected cost (i.e., the risk neutral approach) is a valid one when it comes to meeting day-to-day supply requirements while taking into account severe weather conditions that may occur. The risk averse methodology provides a suitable framework to remedy these deficiencies. This thesis attempts to provide a better understanding of the risk averse methodology from the practice perspective and suggests further possible alternatives using robust optimization techniques. The questions investigated and the contributions of this thesis are as follows.
First, we suggest a multiplicative autoregressive time series model for the energy inflows that can be embedded into the optimization problem we investigate. Then, computational aspects related to the stochastic dual dynamic programming (SDDP) algorithm are discussed. We investigate the stopping criteria of the algorithm and provide a framework for assessing the quality of the policy. The SDDP method works reasonably well when the number of state variables is relatively small, even when the number of stages is large; however, as the number of state variables increases, the convergence of the SDDP algorithm can become very slow. Afterwards, performance improvement techniques for the algorithm are discussed. We suggest a subroutine to eliminate redundant cutting planes from the description of the future cost functions, which yields a considerable speed-up. A design using high performance computing techniques is also discussed. Moreover, an analysis of the obtained policy is outlined, with a focus on specific aspects of the long term operation planning problem. In the risk neutral framework, extreme events can occur and may cause considerable social costs; these costs can translate into blackouts or forced rationing similar to what happened in the 2001/2002 crisis. Finally, issues related to the variability of the SAA problems and sensitivity to initial conditions are studied. No significant variability of the SAA problems is observed.
Second, we analyze the risk averse approach and its application to the hydrothermal operation planning problem. A review of the methodology is given and a generic description of the SDDP method for coherent risk measures is presented. A detailed study of the risk averse policy is outlined for the hydrothermal operation planning problem using different risk measures. The adaptive risk averse approach is discussed from two different perspectives: one through the mean-AV@R and the other through the mean-upper-semideviation risk measure. Computational aspects for the hydrothermal system operation planning problem of the Brazilian interconnected power system are discussed, and the contributions of the risk averse methodology compared with the risk neutral approach are presented. We have seen that the risk averse approach ensures a reduction in the high quantile values of the individual stage costs. This protection comes with an increase in the average policy value, the price of risk aversion. Furthermore, both risk averse approaches come with practically no extra computational effort and, as in the risk neutral method, there was no significant variability of the SAA problems.
Finally, a methodology that combines robust and stochastic programming approaches is investigated. In many situations, such as the operation planning problem, the uncertain parameters involved can be naturally divided into two groups: for one group the robust approach makes sense, while for the other the stochastic programming approach is more appropriate. The basic ideas are discussed in the multistage setting and a formulation with the corresponding dynamic programming equations is presented. A variant of the SDDP algorithm for solving this class of problems is suggested. The contributions of this methodology are illustrated with computational experiments on the hydrothermal operation planning problem, and a comparison with the risk neutral and risk averse approaches is presented. The worst-case-expectation approach constructs a policy that is less sensitive to unexpected demand increases, with a reasonable loss on average when compared to the risk neutral method. We also compare the suggested method with a risk averse approach based on coherent risk measures. On the one hand, the idea behind the risk averse method is to allow a trade-off between loss on average and immunity against unexpected extreme scenarios. On the other hand, the worst-case-expectation approach consists in a trade-off between a loss on average and immunity against unanticipated demand increases. In some sense, there is a certain equivalence between the policies constructed using each of these methods.
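Cut management of the kind mentioned above (eliminating redundant cutting planes from the future cost functions) can be sketched with a simple grid-based heuristic. This is an assumed illustration, not the thesis subroutine, which would typically certify redundancy exactly, for example by solving small LPs.

```python
# A sketch of redundant-cut elimination: the future cost function is the
# max of affine cuts alpha + beta*x; a cut that is never the argmax on a
# grid of trial states can be dropped without changing the polyhedral
# model at those states.
def prune_cuts(cuts, grid):
    keep = set()
    for x in grid:
        vals = [a + b*x for a, b in cuts]
        keep.add(vals.index(max(vals)))   # remember the active cut at x
    return [cuts[i] for i in sorted(keep)]

cuts = [(0.0, 1.0), (1.0, 0.0), (0.4, 0.5), (5.0, -1.0)]  # (intercept, slope)
grid = [i / 10 for i in range(51)]        # trial states in [0, 5]
pruned = prune_cuts(cuts, grid)
assert (0.4, 0.5) not in pruned           # dominated cut: never maximal
assert all(max(a + b*x for a, b in cuts) == max(a + b*x for a, b in pruned)
           for x in grid)                 # model unchanged on the grid
```

Fewer cuts means cheaper stage subproblems in every subsequent SDDP iteration, which is where the speed-up reported in the abstract comes from.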
|
18 |
[en] A STOCHASTIC APPROACH FOR OFFSHORE FLIGHT SCHEDULING OPTIMIZATION / [pt] UMA ABORDAGEM ESTOCÁSTICA PARA A OTIMIZAÇÃO DA PROGRAMAÇÃO DE VOOS OFFSHORE
YAN BARBOZA BASTOS, 23 December 2020 (has links)
[pt] A Petrobras, maior empresa de óleo e gás do Brasil e uma das maiores do mundo, possui mais de 94 por cento da sua produção proveniente de campos offshore. Na região Sudeste o transporte dos trabalhadores para as unidades marítimas de exploração e produção é realizado por modal aéreo, através de helicópteros afretados de médio a grande porte. Para atender ao grande número de voos, a Petrobras possui uma central de planejamento e programação de voos, cujo objetivo é construir escalas de
atendimento eficientes, em relação ao uso de recursos e ao nível de serviço. Um dos desafios enfrentados é gerar, manualmente, programações dos voos em situações de ruptura do atendimento, como por exemplo quando ocorre interrupção de pousos e decolagens devido a condições meteorológicas adversas (exigindo que os voos sejam programados para horários posteriores aos previamente planejados). Nessa dissertação de mestrado, é proposta uma abordagem de programação estocástica para gerar a programação de voos offshore ótima do ponto de vista do nível de serviço, reduzindo os atrasos esperados nos voos.
Considerando a característica combinatória dos problemas de agendamento, utilizou-se o método de Aproximação pela Média Amostral (SAA) para gerar os cenários do modelo de programação estocástica. Um modelo de Simulação de Eventos Discretos também foi desenvolvido para avaliar o nível de serviço
das programações de voos geradas. Os resultados numéricos indicam que a abordagem estocástica pode reduzir atrasos imprevisíveis, que causam grande impacto nos passageiros e na cadeia de suprimentos. / [en] Petrobras, the largest oil and gas company in Brazil and one of the largest in the world, has more than 94 percent of its production from offshore fields. In the Southeast region, workers are transported to offshore exploration and production units by air, using medium size to large size chartered helicopters.
To serve the large number of flights, Petrobras runs a flight planning and scheduling center whose objective is to build efficient service schedules with respect to both resource usage and service level. One of the challenges it faces is generating flight schedules manually during service disruptions, such as when landings and takeoffs are interrupted by adverse weather conditions (requiring flights to be rescheduled to later times than originally planned). In this master's thesis, a stochastic programming approach is proposed to generate the offshore flight schedule that is optimal from the service-level point of view, reducing expected flight delays. Given the combinatorial nature of scheduling problems, the Sample Average Approximation (SAA) method was used to generate the scenarios of the stochastic programming model. A Discrete Event Simulation model was also developed to evaluate the service level of the generated flight schedules. The numerical results indicate that the stochastic approach can reduce unpredictable delays, which have a major impact on passengers and the supply chain.
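The SAA step described in the abstract can be illustrated on a toy version of the problem. Everything below (the closure-time distribution, the candidate schedules, the delay model) is an invented assumption for illustration, not the thesis's actual formulation: sample N disruption scenarios, then pick the candidate schedule that minimizes the average delay over the sample.

```python
import random

def sample_disruption(rng):
    """Sample one scenario: how long (in hours) the airfield stays closed."""
    return max(0.0, rng.gauss(mu=1.0, sigma=1.0))

def expected_delay(schedule, scenarios):
    """Sample-average total delay of a schedule over closure scenarios.
    A flight scheduled at time t is delayed until the closure ends."""
    total = 0.0
    for closure in scenarios:
        total += sum(max(0.0, closure - t) for t in schedule)
    return total / len(scenarios)

def saa_best_schedule(candidates, n_scenarios=1000, seed=42):
    """Sample Average Approximation: replace the true expectation with an
    average over n_scenarios sampled disruptions, then pick the candidate
    schedule minimizing that sample average."""
    rng = random.Random(seed)
    scenarios = [sample_disruption(rng) for _ in range(n_scenarios)]
    return min(candidates, key=lambda s: expected_delay(s, scenarios))

# Two candidate daily schedules (departure times in hours after 6 a.m.):
early = [0.0, 0.5, 1.0, 1.5]   # departures packed early, exposed to closures
spread = [1.0, 2.0, 3.0, 4.0]  # departures spread later in the day
best = saa_best_schedule([early, spread])
```

Because every closure delays the early-packed schedule at least as much as the spread one, the SAA objective selects the spread schedule here; in the real problem the candidate set would come from an optimization model rather than enumeration.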
|
19 |
Stochastic optimization of staffing for multiskill call centers. Ta, Thuy Anh, 12 1900 (has links)
In this thesis, we study the staffing optimization problem in multiskill call centers, in which we aim to minimize the operating cost while delivering a high quality of service (QoS) to customers. We also introduce chance constraints, which require that the QoS targets be met with a given probability. These constraints are appropriate when performance is measured over a short time interval, since QoS measures are random variables over a given period. The proposed staffing problems are challenging in that the chance constraints have no closed form and must be approximated by simulation. In addition, the QoS functions are typically non-linear and non-convex. We consider
staffing optimization problems in different settings and study the proposed models in both theoretical and practical aspects. The methodologies developed are general, in the sense that they can be adapted and applied to other staffing/scheduling problems in queuing-based systems.
The thesis consists of three articles dealing with different challenges in modeling and solving staffing optimization problems in multiskill call centers.
The first and second articles concern a two-stage staffing optimization problem under uncertainty. The first studies a general two-stage discrete stochastic programming model and provides a theoretical guarantee of the consistency of the sample average approximation (SAA) as the sample sizes go to infinity, while the second applies the SAA approach to solve the two-stage staffing optimization problem under arrival-rate uncertainty. Both papers indicate the viability of the SAA approach in our context, from both theoretical and practical standpoints.
To be more precise, in the first article, we consider a general two-stage discrete stochastic problem with expected value constraints. We formulate
its SAA with nested sampling. We show that under some assumptions that hold in call center examples, one can obtain the optimal solutions of the original problem by solving its SAA with large enough sample sizes. Moreover, we show that the probability that the optimal solution of the sample problem is an optimal solution of the original problem, approaches one exponentially fast as we increase the sample sizes. These theoretical findings are important, not only for call center applications, but also for other decision-making problems with discrete decision variables.
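A minimal sketch of the nested-sampling SAA idea, under invented toy assumptions (a made-up capacity-style service model and a three-point arrival-rate distribution, not the article's actual queueing model): outer samples draw the uncertain arrival rate, inner samples estimate the QoS for that rate, and the sample averages replace the expectation in the constraint.

```python
import random

def sample_arrival_rate(rng):
    """Outer sample: uncertain mean arrival rate (calls per period)."""
    return rng.choice([80, 100, 120])  # three equally likely busyness regimes

def simulate_service_level(n_agents, rate, rng, capacity=10):
    """Inner sample: one noisy simulation of the fraction of calls served.
    Toy model: each agent handles up to `capacity` calls per period."""
    arrivals = max(1, int(rng.gauss(rate, rate ** 0.5)))
    served = min(arrivals, n_agents * capacity)
    return served / arrivals

def saa_expected_service_level(n_agents, rng, n_outer=200, n_inner=50):
    """Nested SAA estimate of E[service level]: average over outer
    arrival-rate scenarios of the inner simulation average."""
    total = 0.0
    for _ in range(n_outer):
        rate = sample_arrival_rate(rng)
        inner = sum(simulate_service_level(n_agents, rate, rng)
                    for _ in range(n_inner)) / n_inner
        total += inner
    return total / n_outer

def cheapest_feasible_staffing(target=0.95, max_agents=20, seed=1):
    """Smallest agent count whose SAA estimate meets the expected-value
    constraint; with a linear cost in agents this is the SAA optimum."""
    rng = random.Random(seed)
    for n in range(1, max_agents + 1):
        if saa_expected_service_level(n, rng) >= target:
            return n
    return None
```

The consistency result in the article says, roughly, that as `n_outer` and `n_inner` grow, the solution of this sampled problem coincides with the true optimum with probability approaching one exponentially fast.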
The second article concerns solution methods to solve a two-stage staffing problem under arrival rate uncertainty. It is motivated by the fact that the SAA version of the two-stage staffing problem becomes expensive to solve with a large number of scenarios, as for each scenario, one needs to use simulation to approximate the QoS constraints. We develop an algorithm that combines simulation, cut generation, cut strengthening and Benders decomposition to solve the SAA problem. We show the efficiency of the approach, especially when the number of scenarios is large.
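The cut-generation loop can be sketched in one dimension, again with an invented toy service model. With a single agent group each cut reduces to a lower bound on the staffing level, so the master problem is trivial here; the actual algorithm solves an integer master with Benders-style cuts over many skill groups.

```python
import random

def simulate_qos(n_agents, rate, rng, capacity=10, n_reps=200):
    """Monte Carlo estimate of the service level for one arrival-rate scenario
    (toy capacity model standing in for the call center simulator)."""
    total = 0.0
    for _ in range(n_reps):
        arrivals = max(1, int(rng.gauss(rate, rate ** 0.5)))
        total += min(arrivals, n_agents * capacity) / arrivals
    return total / n_reps

def cutting_plane_staffing(scenarios, target=0.9, seed=3):
    """Simulation-based cut generation: solve the relaxed master under the
    current cuts, simulate each scenario, and add a cut whenever a scenario's
    QoS constraint is violated; stop when all scenarios are satisfied."""
    rng = random.Random(3 if seed is None else seed)
    lower_bound = 1                     # master solution under current cuts
    while True:
        violated = False
        for rate in scenarios:
            if simulate_qos(lower_bound, rate, rng) < target:
                lower_bound += 1        # cut: staffing must exceed current level
                violated = True
                break
        if not violated:
            return lower_bound
```

Each violated scenario generates one cut, so the work per iteration is dominated by simulation; this is why cut strengthening and Benders decomposition matter when the number of scenarios is large.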
In the last article, we consider problems with chance constraints on the service-level measures. Our methodology in this article is motivated by the fact that QoS functions generally display "S-shaped" curves and can be well approximated by suitable sigmoid functions. Based on this idea, we develop a novel approach that combines non-linear regression, simulation, and trust-region local search to solve large-scale staffing problems efficiently. The main advantage of this approach is that the optimization procedure can be formulated as a sequence of steps alternating between simulation and solving linear programming models. Numerical results based on real-life call center examples show the practical viability of our approach.
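The sigmoid idea can be sketched as follows, with an invented QoS curve standing in for the simulator and a coarse grid search standing in for the article's non-linear regression: fit s(n) = 1 / (1 + exp(-(n - a) / b)) to simulated points, then invert the fit to read off the staffing level that meets the target.

```python
import math
import random

def noisy_qos(n_agents, rng, a=12.0, b=1.5, noise=0.02):
    """Stand-in for a simulation run: a true sigmoid QoS curve plus noise."""
    true = 1.0 / (1.0 + math.exp(-(n_agents - a) / b))
    return min(1.0, max(0.0, true + rng.gauss(0.0, noise)))

def fit_sigmoid(points):
    """Least-squares fit of s(n) = 1 / (1 + exp(-(n - a) / b)) by grid search
    (a coarse stand-in for non-linear regression, kept self-contained)."""
    best, best_err = None, float("inf")
    for a10 in range(80, 161):          # a in [8.0, 16.0], step 0.1
        for b10 in range(5, 41):        # b in [0.5, 4.0], step 0.1
            a, b = a10 / 10.0, b10 / 10.0
            err = sum((q - 1.0 / (1.0 + math.exp(-(n - a) / b))) ** 2
                      for n, q in points)
            if err < best_err:
                best, best_err = (a, b), err
    return best

def staffing_for_target(points, target=0.8):
    """Invert the fitted sigmoid: the (fractional) staffing level with
    predicted QoS equal to the target, n = a + b * ln(target / (1 - target))."""
    a, b = fit_sigmoid(points)
    return a + b * math.log(target / (1.0 - target))

rng = random.Random(0)
points = [(n, noisy_qos(n, rng)) for n in range(6, 20)]
```

Because the inverted fit is an explicit formula in the fitted parameters, each outer step of the method amounts to a batch of simulations followed by a cheap optimization, which is what makes the approach scale.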
The methodologies developed in this thesis can be applied in many other settings, e.g., staffing and scheduling problems in other queuing-based systems with other types of QoS constraints. They also raise several research directions that might be interesting to investigate, for example a scenario-clustering approach to mitigate the computational cost of the two-stage staffing models, or a distributionally robust optimization version to better handle data uncertainty.
|