
Comparative Statics Analysis of Some Operations Management Problems

Zeng, Xin 19 September 2012
We propose a novel analytic approach to the comparative statics analysis of two operations management problems: the capacity investment decision and the influenza (flu) vaccine composition decision. Our approach exploits the properties of the underlying mathematical models and links those properties to the concept of stochastic orders. The use of stochastic orders allows us to establish our main results without restriction to a specific distribution. A major strength of our approach is that it is "scalable," i.e., it applies to the capacity investment problem with any number of non-independent (i.e., demand- or resource-sharing) products and resources, and to the influenza vaccine composition problem with any number of candidate strains, without a corresponding increase in computational effort. This is unlike the approaches commonly used in the operations management literature, which typically involve a parametric analysis followed by the use of the implicit function theorem. Our main contribution is a rigorous framework for comparative statics analysis that can be applied to other problems not amenable to traditional parametric analysis. We demonstrate this approach on two problems: (1) the capacity investment decision, and (2) the influenza vaccine composition decision. A comparative statics analysis is integral to the study of these problems, as it answers important questions such as: Does the firm acquire more or less of the different resources available as demand uncertainty increases? Does the firm benefit from an increase in demand uncertainty? How does the vaccine composition change as the yield uncertainty increases?
Using our proposed approach, we establish comparative statics results on how the newsvendor's expected profit and optimal capacity decision change with demand risk and demand dependence in multi-product multi-resource newsvendor networks, and on how the societal vaccination benefit, the manufacturer's profit, and the vaccine output change with the yield risk of the strains. / Ph. D.
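As a toy illustration of this kind of comparative statics result (a drastic simplification of the multi-product, multi-resource networks treated here), the following sketch compares a single-product newsvendor under two normal demand distributions that differ only in variability; the prices, costs, and distribution parameters are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
price, cost = 10.0, 6.0                 # hypothetical unit revenue and capacity cost
ratio = (price - cost) / price          # critical fractile cu/(cu+co), zero salvage

def newsvendor(mean, sd, n=200_000):
    """Optimal capacity and simulated expected profit for normal demand."""
    q = norm.ppf(ratio, loc=mean, scale=sd)          # optimal capacity
    d = rng.normal(mean, sd, n)
    profit = price * np.minimum(d, q) - cost * q     # sales revenue minus capacity cost
    return q, profit.mean()

q_lo, p_lo = newsvendor(100, 10)   # low demand risk
q_hi, p_hi = newsvendor(100, 30)   # higher demand risk (a convex-order increase)
```

In this toy instance the critical fractile is below 1/2, so the increase in demand risk lowers both the optimal capacity and the expected profit, the direction of change that a comparative statics analysis makes precise.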

Stochastic Scheduling for a Network of MEMS Job Shops

Varadarajan, Amrusha 31 January 2007
This work is motivated by the pressing need for operational control in the fabrication of Microelectromechanical systems, or MEMS. MEMS are miniature three-dimensional integrated electromechanical systems with the ability to absorb information from the environment, process this information and suitably react to it. These devices offer tremendous advantages owing to their small size, low power consumption, low mass and high functionality, which make them very attractive in applications with stringent demands on weight, functionality and cost. While the system's "brain" (device electronics) is fabricated using traditional IC technology, the micromechanical components necessitate very intricate and sophisticated processing of silicon or other suitable substrates. A dearth of fabrication facilities with micromachining capabilities and a lengthy gestation period from design to mass fabrication and commercial acceptance of the product in the market are the factors most often implicated in hampering the growth of MEMS. These devices are highly application specific with low production volumes, and the few fabs that do possess micromachining capabilities are unable to offer a complete array of fabrication processes to cater to the needs of the MEMS R&D community. A distributed fabrication network has, therefore, emerged to serve the evolving needs of this high-investment, low-volume MEMS industry. In this environment, a central facility coordinates a network of fabrication centers with micromachining capabilities (a Network of MEMS Job Shops, or NMJS). These fabrication centers include commercial, academic and government fabs, which make their services available to the ordinary customer. Wafers are shipped from one facility to another until all processing requirements are met.
The lengthy and intricate process sequences that need to be performed over a network of capital intensive facilities are complicated by dynamic job arrivals, stochastic processing times, sequence-dependent set ups and travel between fabs. Unless the production of these novel devices is carefully optimized, the benefits of distributed fabrication could be completely overshadowed by lengthy lead times, chaotic routings and costly processing. Our goal, therefore, is to develop and validate an approach for optimal routing (assignment) and sequencing of MEMS devices in a network of stochastic job shops with the objective of minimizing the sum of completion times and the cost incurred, given a set of fabs, machines and an expected product mix. In view of our goal, we begin by modeling the stochastic NMJS problem as a two-stage stochastic program with recourse where the first-stage variables are binary and the second-stage variables are continuous. The key decision variables are binary and pertain to the assignment of jobs to machines and their sequencing for processing on the machines. The assignment variables essentially fix the route of a job as it travels through the network because these variables specify the machine on which each job-operation must be performed out of several candidate machines. Once the assignment is decided upon, sequencing of job-operations on each machine follows. The assignment and sequencing must be such that they offer the best solution (in terms of the objective) possible in light of all the processing time scenarios that can be realized. We present two approaches for solving the stochastic NMJS problem. The first approach is based on the L-shaped method (credited to van Slyke and Wets, 1969). Since the NMJS problem lacks relatively complete recourse, the first-stage solution can be infeasible to the second-stage problem in that the first stage solution may either violate the reentrant flow conditions or it may create a deadlock. 
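The deterministic-equivalent idea can be sketched on a deliberately tiny stand-in (two jobs, two machines, two scenarios, all data hypothetical); for brevity, sequencing within a machine is done per scenario by shortest processing time, whereas in the actual model sequencing is itself a first-stage decision.

```python
import itertools
import numpy as np

# Hypothetical data: proc[s, j, m] = time of job j on machine m in scenario s
proc = np.array([[[2, 4], [3, 1]],
                 [[5, 4], [2, 6]]], dtype=float)
prob = np.array([0.6, 0.4])

best = None
for assign in itertools.product([0, 1], repeat=2):   # first stage: machine per job
    cost = 0.0
    for s, p in enumerate(prob):                     # second stage, per scenario
        total = 0.0
        for m in (0, 1):                             # jobs on a machine run in SPT order
            done = 0.0
            for t in sorted(proc[s, j, m] for j in (0, 1) if assign[j] == m):
                done += t
                total += done                        # completion time of this job
        cost += p * total                            # expected sum of completion times
    if best is None or cost < best[1]:
        best = (assign, cost)
# best[0] is the assignment that is best across all scenarios jointly
```

Enumerating the binary first stage is only viable at this toy scale; the L-shaped decomposition described next replaces it with master-problem iterations and cuts.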
In order to alleviate these infeasibilities, we develop feasibility cuts which, when appended to the master problem, eliminate the infeasible solution. Alternatively, we also develop constraints to address these infeasibilities explicitly within the master problem. We show how a deadlock involving 2 or 3 machines arises if and only if a certain relationship between operations and a certain sequence amongst them exists. We generalize this argument to the case of m machines, which forms the basis for our deadlock prevention constraints. Computational results at the end of Chapter 3 compare the relative merits of a model that relies solely on feasibility cuts with models that incorporate reentrant flow and deadlock prevention constraints within the master problem. Experimental evidence reveals that the latter offer appreciable time savings over the former. Moreover, in a majority of instances we see that models that carry deadlock prevention constraints in addition to the reentrant flow constraints perform at par with or better than those that carry reentrant flow constraints alone. We next develop an optimality cut which, when appended to the master problem, eliminates the suboptimal master solution. We also present alternative optimality and feasibility cuts, obtained by modifying the disjunctive constraints in the subproblem so as to eliminate the big-H terms in it. Although any large positive number can be used as the value of H, a conservative estimate may improve computational performance. In light of this, we develop a conservative upper bound on operation completion times and use it as the value of H. Test instances have been generated using a problem generator written in JAVA.
We present computational results to evaluate the impact of a conservative estimate of big H on run time, analyze the effect of the different optimality cuts, and demonstrate the performance of the multicut method (Wets, 1981), which differs from the L-shaped method in that the number of optimality cuts it appends in each iteration equals the number of scenarios. Experimentation indicates that Model 2, which uses the standard optimality cut in conjunction with the conservative estimate of big H, almost always outperforms Model 1, which also uses the standard optimality cut but fixes big H at 1000. Model 3, which employs the alternative optimality cut with the conservative estimate of big H, requires the fewest iterations to converge to the optimum but also incurs the largest premium in computational time. This is because the alternative optimality cut adds to the complexity of the problem, appending additional variables and constraints to the master as well as the subproblems. In the case of Model 4 (the multicut method), the segregated optimality cuts accurately reflect the shape of the recourse function, resulting in fewer overall iterations, but the large number of these cuts accumulates over the iterations, making the master problem sluggish, and so this model exhibits variable performance across the datasets. These experiments reveal that a compact master problem and a conservative estimate of big H positively impact the run-time performance of a model. Finally, we develop a framework for a branch-and-bound scheme within which the L-shaped method, as applied to the NMJS problem, can be incorporated so as to further enhance its performance. Our second approach for solving the stochastic NMJS problem relies on the tight LP relaxation observed for the deterministic equivalent of the model.
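The role of a conservative big-H can be seen with a minimal, hypothetical bound (not the thesis's exact construction): no operation can finish later than if every operation ran at its worst-case scenario duration in series, and that bound is typically far below an arbitrary constant like 1000.

```python
import numpy as np

# proc[s, o]: processing time of operation o in scenario s (hypothetical data)
proc = np.array([[3.0, 2.0, 4.0],
                 [5.0, 1.0, 6.0]])

# A valid conservative upper bound on any completion time: the serial
# worst-case duration of all operations across scenarios.
H_conservative = proc.max(axis=0).sum()   # 5 + 2 + 6
H_naive = 1000.0                          # fixed big-H used in the baseline model
```

A tighter H makes the disjunctive constraints less slack, which is the mechanism behind the run-time gains reported for Model 2 over Model 1.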
We first solve the LP relaxation of the deterministic equivalent problem, and then fix certain binary assignment variables that take on a value of either 0 or 1 in the relaxation. Based on this fixing of certain assignment variables, additional logical constraints are developed that lead to the fixing of some of the sequencing variables too. Experimental results, comparing the performance of this LP heuristic procedure with CPLEX over the generated test instances, illustrate the effectiveness of the heuristic. For the largest problems (5 jobs, 10 operations/job, 12 machines, 7 workcenters, 7 scenarios) solved in this experiment, average savings of as much as 4154 seconds and 1188 seconds were recorded in comparison with Models 1 and 2, respectively. Both of these models solve the deterministic equivalent of the stochastic NMJS problem but differ in that Model 1 uses a big-H value of 1000 whereas Model 2 uses the conservative upper bound for big H developed in this work. The maximum optimality gap observed for the LP heuristic over all the data instances solved was 1.35%. The LP heuristic therefore offers a powerful alternative for solving these problems to near-optimality with a very low computational burden. We also present results pertaining to the value of the stochastic solution for various data instances. The observed savings of up to 8.8% over the mean-value approach underscores the importance of using a solution that is robust over all scenarios versus a solution that approximates the randomness through expected values. We next present a dynamic stochastic scheduling approach (DSSP) for the NMJS problem.
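The relax-and-fix idea behind the LP heuristic can be sketched on a stand-in 0-1 problem (a toy knapsack, not the NMJS formulation; the logical constraints that additionally fix sequencing variables are omitted): solve the LP relaxation, freeze every binary that is already integral, and search only over the rest.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy 0-1 problem standing in for the binary assignment variables:
# max v.x  s.t.  w.x <= W,  x in {0,1}^3   (solved as min -v.x)
v = np.array([8.0, 5.0, 4.0])
w = np.array([5.0, 4.0, 3.0])
W = 7.0

# Step 1: LP relaxation with x in [0,1]
rel = linprog(-v, A_ub=[w], b_ub=[W], bounds=[(0, 1)] * 3)

# Step 2: fix every variable that is already integral in the relaxation
fixed = {i: round(xi) for i, xi in enumerate(rel.x) if min(xi, 1 - xi) < 1e-6}

# Step 3: enumerate only the remaining free binaries
free = [i for i in range(3) if i not in fixed]
best_val = -np.inf
for bits in itertools.product([0, 1], repeat=len(free)):
    x = np.zeros(3)
    for i, bi in fixed.items():
        x[i] = bi
    for i, bi in zip(free, bits):
        x[i] = bi
    if w @ x <= W and v @ x > best_val:
        best_val = v @ x

# Brute-force optimum for the optimality-gap check
opt = max(v @ np.array(x) for x in itertools.product([0, 1], repeat=3)
          if w @ np.array(x) <= W)
gap = (opt - best_val) / opt
```

In this toy instance the heuristic returns 8 against a true optimum of 9, illustrating the kind of optimality gap (here deliberately exaggerated) that is reported as at most 1.35% for the NMJS instances.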
The premise behind this undertaking is that, in a real-life implementation faithful to the two-stage procedure, assignment (routing) and sequencing decisions will be made for all the operations of all the jobs at the outset and followed through regardless of the actual processing times realized for individual operations. However, it may be possible to refine this procedure if information on the actual processing times of completed operations is utilized, so that assignment and sequencing decisions for impending operations are adjusted based on the evolving scenario (which may be very different from the scenarios modeled) while still hedging against future uncertainty. In the DSSP approach, the stochastic programming model for the NMJS problem is solved at each decision point using the LP heuristic in a rolling-horizon fashion, while incorporating constraints that model existing shop-floor conditions and the actual processing times realized for the operations that have been completed. The implementation of the DSSP algorithm is illustrated through an example problem, and its results on two large problem instances are presented. The performance of the DSSP approach is evaluated on three fronts: first, using the LP heuristic at each decision point; second, using an optimal algorithm at each decision point; and third, against the two-stage stochastic programming approach. Results from the experimentation indicate that the DSSP approach using the LP heuristic at each decision point generates assignment and sequencing decisions superior to those of the two-stage stochastic programming approach, and provides near-optimal solutions with a very low computational burden.
For the first instance involving 40 operations, 12 machines and 3 processing time scenarios, the DSSP approach using the LP heuristic yields the same solution as the optimal algorithm with a total time savings of 71.4% and also improves upon the two-stage stochastic programming solution by 1.7%. In the second instance, the DSSP approach using the LP heuristic yields a solution with an optimality gap of 1.77% and a total time savings of 98% over the optimal algorithm. In this case, the DSSP approach with the LP heuristic improves upon the two-stage stochastic programming solution by 6.38%. We conclude by presenting a framework for the DSSP approach that extends the basic DSSP algorithm to accommodate jobs whose arrival times may not be known in advance. / Ph. D.
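The rolling-horizon flavor of DSSP can be sketched as follows. This is a deliberately myopic stand-in (greedy machine choice from point forecasts, hypothetical data) for what DSSP actually does, namely re-solving the stochastic program with the LP heuristic at each decision point; the structure of the loop, with realized durations fed back before the next decision, is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# forecast[j, m]: forecast processing time of job j on machine m (hypothetical)
forecast = np.array([[2.0, 3.0],
                     [4.0, 1.0],
                     [3.0, 3.0]])
n_jobs, n_machines = forecast.shape

loads = np.zeros(n_machines)        # realized workload accumulated on each machine
schedule = []
for j in range(n_jobs):
    # Re-optimize before each job: pick the machine minimizing the job's
    # expected completion time given the loads realized so far (a myopic
    # stand-in for re-solving the stochastic program at this decision point).
    m = int(np.argmin(loads + forecast[j]))
    realized = forecast[j, m] * rng.uniform(0.8, 1.2)   # true duration, revealed now
    loads[m] += realized
    schedule.append(m)
```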

Randomization for Efficient Nonlinear Parametric Inversion

Sariaydin, Selin 04 June 2018
Nonlinear parametric inverse problems appear in many applications in science and engineering. We focus on diffuse optical tomography (DOT) in medical imaging. DOT aims to recover an unknown image of interest, such as the absorption coefficient in tissue, in order to locate tumors in the body. Using a mathematical (forward) model to predict measurements given a parametrization of the tissue, we minimize the misfit between predicted and actual measurements up to a given noise level. The main computational bottleneck in such inverse problems is the repeated evaluation of this large-scale forward model, which corresponds to solving large linear systems for each source and frequency at each optimization step. Moreover, to efficiently compute derivative information, we need to repeatedly solve linear systems with the adjoint for each detector and frequency. As rapid advances in technology allow for large numbers of sources and detectors, these problems become computationally prohibitive. In this thesis, we introduce two methods to drastically reduce this cost. To efficiently implement Newton methods, we extend the use of simultaneous random sources, which reduces the number of linear system solves, to include simultaneous random detectors. Moreover, we combine simultaneous random sources and detectors with optimized ones, which leads to faster convergence and more accurate solutions. Reduced order models (ROMs) can drastically reduce the size of the linear systems solved in each optimization step while still solving the inverse problem accurately. However, the construction of the ROM bases still incurs a substantial cost. We propose to use randomization to drastically reduce the number of large linear solves needed for constructing the global ROM bases without degrading the accuracy of the solution to the inversion problem.
We demonstrate the efficiency of these approaches with 2-dimensional and 3-dimensional examples from DOT; however, our methods have the potential to be useful for other applications as well. / Ph. D.
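The idea of simultaneous random sources and detectors can be sketched with randomized trace estimation. The matrix below merely stands in for the detector-by-source residuals (each column would normally cost one forward solve and each row one adjoint solve); the sizes and the Rademacher probing scheme are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_det, n_src, k = 1000, 1000, 100

# Stand-in residual matrix: entry (i, j) is the mismatch at detector i, source j.
D = 0.1 * rng.standard_normal((n_det, n_src))
full = np.linalg.norm(D, "fro") ** 2        # misfit using every source and detector

# Hutchinson-style estimate with k random simultaneous sources AND detectors:
# E[(v^T D w)^2] = ||D||_F^2 for independent Rademacher vectors v and w.
Ws = rng.choice([-1.0, 1.0], size=(n_src, k))   # random source combinations
Wd = rng.choice([-1.0, 1.0], size=(n_det, k))   # random detector combinations
est = np.mean([(Wd[:, i] @ D @ Ws[:, i]) ** 2 for i in range(k)])
rel_err = abs(est - full) / full
```

Here 2k = 200 combined solves replace the 2000 per-source and per-detector solves, at the price of a stochastic error that shrinks as k grows, which is the trade the randomized methods above exploit.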

An evidential answer for the capacitated vehicle routing problem with uncertain demands

Helal, Nathalie 20 December 2017
The capacitated vehicle routing problem is an important combinatorial optimisation problem. Its objective is to find a set of routes of minimum cost, such that a fleet of vehicles initially located at a depot services the deterministic demands of a set of customers, while respecting the capacity limits of the vehicles. Still, in many real-life applications, we are faced with uncertainty on customer demands. Most of the research that has handled this situation assumed that customer demands are random variables. In this thesis, we propose to represent uncertainty on customer demands using evidence theory - an alternative uncertainty theory. To tackle the resulting optimisation problem, we extend classical stochastic programming modelling approaches. Specifically, we propose two models for this problem. The first model is an extension of the chance-constrained programming approach, which imposes minimum bounds on the belief and plausibility that the sum of the demands on each route respects the vehicle capacity. The second model extends the stochastic programming with recourse approach: it represents the uncertainty on the possible recourses (corrective actions) on each route by a belief function, and defines the cost of a route as its classical cost (without recourse) plus the worst expected cost of its recourses. Some properties of these two models are studied. A simulated annealing algorithm is adapted to solve both models and is experimentally tested.
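The belief/plausibility computation underlying the first model can be sketched on a hypothetical two-customer route with interval-valued demands; independence between customers is assumed here, so joint focal-set masses multiply.

```python
from itertools import product

# Hypothetical mass functions: each customer's demand is an interval with a mass
customer1 = [((2, 4), 0.7), ((4, 6), 0.3)]
customer2 = [((1, 3), 0.5), ((3, 5), 0.5)]
Q = 8  # vehicle capacity

bel = pl = 0.0
for (i1, m1), (i2, m2) in product(customer1, customer2):
    lo, hi = i1[0] + i2[0], i1[1] + i2[1]   # focal interval of the route's total demand
    m = m1 * m2                             # independence assumption
    if hi <= Q:          # total demand surely within capacity
        bel += m
    if lo <= Q:          # total demand possibly within capacity
        pl += m
```

A route could then be accepted, in the spirit of the chance-constrained extension, only if bel (or pl) meets a prescribed minimum bound; by construction bel never exceeds pl.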

Risk neutral and risk averse approaches to multistage stochastic programming with applications to hydrothermal operation planning problems

Tekaya, Wajdi 14 March 2013
The main objective of this thesis is to investigate risk neutral and risk averse approaches to multistage stochastic programming, with applications to hydrothermal operation planning problems. The purpose of hydrothermal system operation planning is to define an operation strategy which, for each stage of the planning period, given the system state at the beginning of the stage, produces generation targets for each plant. This problem can be formulated as a large-scale multistage stochastic linear programming problem. The energy rationing that took place in Brazil in 2001/2002 raised the question of whether a policy based on the criterion of minimizing expected cost (i.e., the risk neutral approach) is a valid one when it comes to meeting day-to-day supply requirements while taking into account severe weather conditions that may occur. The risk averse methodology provides a suitable framework to remedy these deficiencies. This thesis attempts to provide a better understanding of the risk averse methodology from a practical perspective and suggests further alternatives using robust optimization techniques. The questions investigated and the contributions of this thesis are as follows. First, we suggest a multiplicative autoregressive time series model for the energy inflows that can be embedded into the optimization problem we investigate. Then, computational aspects related to the stochastic dual dynamic programming (SDDP) algorithm are discussed. We investigate the stopping criteria of the algorithm and provide a framework for assessing the quality of the policy. The SDDP method works reasonably well when the number of state variables is relatively small while the number of stages can be large. However, as the number of state variables increases, the convergence of the SDDP algorithm can become very slow. Afterwards, performance improvement techniques for the algorithm are discussed.
We suggest a subroutine to eliminate redundant cutting planes from the description of the future cost functions, which allows a considerable speed-up. A design using high performance computing techniques is also discussed. Moreover, an analysis of the obtained policy is outlined, with a focus on specific aspects of the long-term operation planning problem. In the risk neutral framework, extreme events can occur and might cause considerable social costs. These costs can translate into blackouts or forced rationing, similar to what happened in the 2001/2002 crisis. Finally, issues related to the variability of the SAA problems and sensitivity to initial conditions are studied. No significant variability of the SAA problems is observed. Second, we analyze the risk averse approach and its application to the hydrothermal operation planning problem. A review of the methodology is given and a generic description of the SDDP method for coherent risk measures is presented. A detailed study of the risk averse policy is outlined for the hydrothermal operation planning problem using different risk measures. The adaptive risk averse approach is discussed under two different perspectives: one through the mean-AV@R and the other through the mean-upper-semideviation risk measure. Computational aspects for the hydrothermal operation planning problem of the Brazilian interconnected power system are discussed, and the contributions of the risk averse methodology compared to the risk neutral approach are presented. We have seen that the risk averse approach ensures a reduction in the high quantile values of the individual stage costs. This protection comes with an increase of the average policy value - the price of risk aversion. Furthermore, both risk averse approaches come with practically no extra computational effort and, as with the risk neutral method, no significant variability of the SAA problems was observed.
Finally, a methodology that combines robust and stochastic programming approaches is investigated. In many situations, such as the operation planning problem, the uncertain parameters involved can be naturally divided into two groups: for one group the robust approach makes sense, while for the other the stochastic programming approach is more appropriate. The basic ideas are discussed in the multistage setting, and a formulation with the corresponding dynamic programming equations is presented. A variant of the SDDP algorithm for solving this class of problems is suggested. The contributions of this methodology are illustrated with computational experiments on the hydrothermal operation planning problem, and a comparison with the risk neutral and risk averse approaches is presented. The worst-case-expectation approach constructs a policy that is less sensitive to unexpected demand increases, with a reasonable loss on average when compared to the risk neutral method. We also compare the suggested method with a risk averse approach based on coherent risk measures. On the one hand, the idea behind the risk averse method is to allow a trade-off between loss on average and immunity against unexpected extreme scenarios. On the other hand, the worst-case-expectation approach consists in a trade-off between loss on average and immunity against unanticipated demand increases. In some sense, there is a certain equivalence between the policies constructed using each of these methods.
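The mean-AV@R combination mentioned above can be illustrated on simulated stage costs (distribution and weights hypothetical); AV@R, also written CVaR, is computed here in its Rockafellar-Uryasev form as the mean of the worst alpha-fraction of outcomes.

```python
import numpy as np

rng = np.random.default_rng(2)
costs = rng.gamma(shape=2.0, scale=50.0, size=10_000)   # simulated stage costs

def avar(z, alpha=0.05):
    """Average Value-at-Risk: expected cost in the worst alpha tail."""
    q = np.quantile(z, 1 - alpha)                        # Value-at-Risk
    return q + np.mean(np.maximum(z - q, 0)) / alpha     # Rockafellar-Uryasev form

lam = 0.3                                                # risk-aversion weight
risk_neutral = costs.mean()
risk_averse = (1 - lam) * costs.mean() + lam * avar(costs)
```

The risk averse value exceeds the risk neutral one: that gap is exactly the "price of risk aversion" paid for trimming the high quantiles of the stage costs.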

Integrated Modeling of Electric Power System Operations and Electricity Market Risks with Applications

Sun, Haibin 14 November 2006
Through integrated modeling of power system operations and market risks, this thesis addresses a variety of important issues in market signal modeling, generation capacity scheduling, and electricity forward trading. The first part of the thesis addresses a central problem of transmission investment, which is to model market signals for transmission adequacy. The proposed system simulation framework, combined with the stochastic price model, provides a powerful tool for capturing the characteristics of market price dynamics and evaluating transmission investment. We advocate the use of an AC power flow formulation, since it allocates transmission losses correctly and reveals the economic incentives of voltage requirements. By incorporating reliability constraints in the market dispatch, the resulting market prices yield incentives for market participants to invest in additional transmission capacity. The second part of the thesis presents a co-optimization modeling framework that incorporates market participation and market price uncertainties into the capacity allocation decision-making problem through a stochastic programming formulation. Optimal scenario-dependent generation scheduling strategies are obtained. The third part of the thesis is devoted to analyzing the risk premium present in the electricity day-ahead forward price over the real-time spot price. This study establishes a quantitative model for incorporating transmission congestion into the analysis of the electricity day-ahead forward risk premium. Evidence from empirical studies confirms the significant statistical relationship between the day-ahead forward risk premium and the shadow price premiums on transmission flowgates.
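The risk premium analysis in the third part can be sketched on synthetic data (all series and coefficients below are hypothetical, not the thesis's empirical model): the premium is the day-ahead forward price minus the real-time spot price, regressed by OLS on a flowgate shadow price.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
congestion = rng.gamma(2.0, 5.0, T)              # synthetic flowgate shadow prices
spot = 40 + 5 * rng.standard_normal(T)           # real-time spot price
# Synthetic day-ahead price: baseline premium plus a congestion-linked component
forward = spot + 2.0 + 0.4 * congestion + rng.standard_normal(T)

premium = forward - spot                          # day-ahead forward risk premium
X = np.column_stack([np.ones(T), congestion])
beta, *_ = np.linalg.lstsq(X, premium, rcond=None)   # OLS: premium ~ shadow price
```

Since the data are generated with a congestion coefficient of 0.4, the fitted slope recovers it, mirroring the kind of statistical relationship the empirical study tests for.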

A Stochastic Programming Model for the Strategic Planning of the Oil Supply Chain

Ribas, Gabriela Pinto 06 October 2008
The oil industry is one of the most important and dynamic in Brazil. Since the oil industry is naturally integrated, appropriate strategic planning of the oil supply chain is needed, covering all of its processes, such as oil production, refining, distribution, and the marketing of refined products. Moreover, the oil industry is subject to various uncertainties regarding oil and product prices, crude oil supply, and product demand. In light of these opportunities and challenges, this dissertation develops a stochastic programming model for the strategic planning of the Brazilian oil supply chain. The model includes the refineries and their process units, the properties of crude oils and refined products, the national logistics network, and decisions on the marketing of oil and products, and it incorporates uncertainties associated with market prices, domestic oil production, and domestic demand for refined products. Based on the stochastic model, a robust model and a MinMax model were formulated in order to compare the performance and quality of the stochastic solution. The proposed models were applied to a real example with 17 refineries and 3 petrochemical plants that process 50 intermediate products for the production of 10 refined products tied to national demand, 8 oil fields, 14 natural gas producers, 1 vegetable oil producer, 13 terminals, 4 distribution bases, and 278 transportation arcs. In the analysis of results, measures such as the Expected Value of Perfect Information (EVPI) and the Value of the Stochastic Solution (VSS) were used.
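The EVPI and VSS measures can be illustrated on a tiny capacity-planning toy (all numbers hypothetical): WS is the expected value under perfect foresight, RP the here-and-now stochastic optimum, and EEV the true expected result of using the mean-value solution.

```python
import numpy as np

# Hypothetical toy: choose capacity x before demand is known
demands, prob = np.array([3.0, 9.0]), np.array([0.5, 0.5])
price, cost = 3.0, 1.0
xs = np.arange(0.0, 13.0)                          # candidate capacities

def exp_profit(x):
    return float(prob @ (price * np.minimum(x, demands) - cost * x))

RP = max(exp_profit(x) for x in xs)                # here-and-now stochastic optimum
WS = float(prob @ [max(price * min(x, d) - cost * x for x in xs) for d in demands])
d_bar = float(prob @ demands)                      # mean-value (deterministic) demand
x_ev = max(xs, key=lambda x: price * min(x, d_bar) - cost * x)
EEV = exp_profit(x_ev)                             # mean-value solution, true expectation

EVPI = WS - RP    # what perfect foresight would be worth
VSS = RP - EEV    # gain from modeling uncertainty explicitly
```

Here WS = 12, RP = 9 and EEV = 7.5, so EVPI = 3 and VSS = 1.5: a nonzero VSS is precisely the argument for the stochastic model over the mean-value approximation.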

Multi-Echelon Inventory Optimization under Supply and Demand Uncertainty

Firoozi, Mehdi 03 December 2018
Supply Chain Management (SCM) is an important part of most companies, and applying the appropriate strategy is essential for managers in competitive industries and markets.
In this context, inventory management plays a crucial role. Different inventory systems are widely used in practice, yet they are fundamentally difficult to optimize, especially in multi-echelon networks. A key challenge in managing inventory is dealing with uncertainties in supply and demand. The simultaneous decrease of customer service and increase of inventory-related costs are the most significant effects of such uncertainties. To deal with them, supply chain managers need to establish more effective and more flexible sourcing and distribution strategies. In this thesis, a framework to optimize inventory decisions in multi-echelon distribution networks under supply and demand uncertainty is proposed. In the first part of the research work, multi-echelon distribution systems subject to demand uncertainty are studied. Such distribution systems are among the most challenging inventory network topologies to analyze, and the optimal inventory and sourcing policies for these systems are not yet known. We consider a basic type of distribution network with a single product family in a periodic review setting. A two-stage mixed integer programming approach is proposed to find the optimal inventory-related decisions under a non-stationary demand pattern. The model, which is based on a Distribution Requirements Planning (DRP) approach, minimizes the expected total cost composed of the fixed allocation, inventory holding, procurement, transportation, and back-ordering costs. Alternative inventory optimization models, including the lateral transshipment strategy and multiple sourcing, are then built, and the corresponding stochastic programs are solved using the sample average approximation method. Several problem instances are generated to validate the applicability of the model and to evaluate the benefit of lateral transshipments and multiple sourcing in reducing the expected total costs of the distribution network. 
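The sample average approximation idea used above can be sketched on a deliberately simplified single-product, single-location instance. This is illustrative only: the cost parameters and the normal demand distribution are assumptions for the sketch, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative unit costs (assumed, not from the thesis):
# h = holding cost per unit of leftover stock, b = backorder cost per unit short.
h, b = 1.0, 4.0

def saa_order_quantity(demand_samples, h, b):
    """Sample average approximation: pick the order quantity minimising the
    empirical expected holding + backorder cost over the sampled scenarios.
    For this piecewise-linear cost it suffices to search the sample points."""
    candidates = np.sort(demand_samples)
    costs = [np.mean(h * np.maximum(q - demand_samples, 0)
                     + b * np.maximum(demand_samples - q, 0))
             for q in candidates]
    return candidates[int(np.argmin(costs))]

# Monte Carlo scenario sample of demand.
demand = rng.normal(100, 20, size=2000)
q_star = saa_order_quantity(demand, h, b)
```

As the scenario sample grows, the SAA minimiser approaches the demand quantile at the critical ratio b/(h + b); the same scenario-sampling principle underlies the multi-echelon stochastic programs described in the abstract.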
An empirical investigation is also conducted to validate the numerical findings, using the case of a major French retailer's distribution network. The second part of the research work focuses on the structure of the optimal inventory policy under supply disruptions. A two-stage stochastic model is proposed to solve a capacitated multi-echelon inventory optimization problem with stochastic demand as well as uncertain throughput capacity and possible inventory losses due to disruptions. The model minimizes the total cost, composed of fixed allocation, inventory holding, transportation, and backordering costs, by optimizing the inventory policy and flow decisions. Inventory is controlled according to a reorder point, order-up-to-level (s, S) policy. To deal with the uncertainties, several scenario samples are generated by the Monte Carlo method, and the corresponding sample average approximation programs are solved to obtain an adequate response policy for the inventory system under disruptions. In addition, extensive numerical experiments are conducted; the results provide insights into the impact of disruptions on the network's total cost and service level. Both parts of the research offer insights that could be valuable for practitioners, and further research directions are also provided.
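The (s, S) control rule mentioned above can be illustrated with a minimal periodic-review simulation. The parameters here are hypothetical, and the sketch assumes zero replenishment lead time and no capacity losses — far simpler than the capacitated multi-echelon model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sS(s, S, demands, init_inventory):
    """Periodic-review (s, S) policy: whenever the inventory position drops to
    or below the reorder point s, order up to the level S. Returns the
    end-of-period inventory trajectory (negative values are backorders)."""
    inv = init_inventory
    path = []
    for d in demands:
        if inv <= s:
            inv = S  # instantaneous replenishment (zero lead time assumed)
        inv -= d
        path.append(inv)
    return path

# One year of weekly Poisson demand (illustrative).
demand = rng.poisson(8, size=52)
traj = simulate_sS(s=10, S=40, demands=demand, init_inventory=40)
```

In the thesis setting, the (s, S) parameters and the flow decisions are chosen by the two-stage stochastic program rather than fixed in advance as they are here.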
179

Optimisation of heat exchanger network maintenance scheduling problems

Al Ismaili, Riham January 2018 (has links)
This thesis focuses on the challenges that arise from the scheduling of maintenance for heat exchanger networks which undergo fouling and run continuously over time. The original contributions of the current research consist of the development of novel optimisation methodologies for the scheduling of cleaning actions in heat exchanger network problems, the application of the novel solution methodology to other general maintenance scheduling problems, the development of a stochastic programming formulation using this optimisation technique, and its application to these scheduling problems under parametric uncertainty. The work presented in this thesis can be divided into three areas. To efficiently solve this non-convex heat exchanger network maintenance scheduling problem, new optimisation strategies are developed; the resulting contributions are outlined below. In the first area, a novel methodology is developed for the solution of heat exchanger network maintenance scheduling problems, stemming from a key discovery: these problems exhibit bang-bang behaviour. When integrality on the binary decision variables is relaxed, the solution tends to either the lower or the upper bound specified, obviating the need for integer programming solution techniques. These problems are therefore, in actuality, optimal control problems. To solve them suitably, a feasible path sequential mixed integer optimal control approach is proposed. This methodology is coupled with a simple heuristic approach and applied to a range of heat exchanger network case studies from crude oil refinery preheat trains. The demonstrated methodology is shown to be robust, reliable and efficient. 
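The bang-bang observation can be illustrated on a toy cleaning-assignment problem (a hypothetical example, not the thesis formulation): relax binary cleaning decisions y[i, t] ∈ {0, 1} to the interval [0, 1] and solve the resulting LP, and the optimum still lands on the bounds.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (assumed numbers): benefit[i, t] is the payoff of cleaning
# exchanger i in period t; at most one cleaning per period is allowed.
n_units, n_periods = 3, 4
benefit = np.array([[5, 1, 4, 2],
                    [3, 6, 2, 1],
                    [2, 2, 5, 4]], dtype=float)

c = -benefit.ravel()  # maximise total benefit -> minimise its negative

# One capacity row per period: sum over exchangers of y[i, t] <= 1.
A = np.zeros((n_periods, n_units * n_periods))
for t in range(n_periods):
    A[t, t::n_periods] = 1.0  # column index of y[i, t] is i*n_periods + t

res = linprog(c, A_ub=A, b_ub=np.ones(n_periods), bounds=(0, 1))
y = res.x.reshape(n_units, n_periods)  # relaxed decisions, all at 0 or 1
```

Here the relaxed optimum is integral because this toy constraint structure has integral vertices; the thesis observes analogous bound-seeking behaviour in the full dynamic, non-convex setting, which is what motivates treating the problem as an optimal control problem.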
In the second area of this thesis, the aforementioned technique is applied to the scheduling of the regeneration of membranes in reverse osmosis networks which undergo fouling and are located in desalination plants. The results show that the developed solution methodology generalises to other maintenance scheduling problems with decaying performance characteristics. In the third and final area of this thesis, a stochastic programming version of the feasible path mixed integer optimal control technique is established. This is based upon a multiple scenario approach and is applied to two heat exchanger network case studies of varying size and complexity. Results show that this methodology runs automatically, without any failures in convergence. More importantly, given the significant economic impact, it is vital that uncertainty in the data be taken into account in the heat exchanger network maintenance scheduling problem, as well as in other general maintenance scheduling problems where parameter values are uncertain.
180

Modelling the risk of underfunding in ALM models

Alwohaibi, Maram January 2017 (has links)
Asset and Liability Management (ALM) models have become well-established decision tools for pension funds. ALM problems are commonly modelled as multi-stage, in which a large terminal wealth is required, while at intermediate time periods constraints are imposed on the funding ratio, that is, the ratio of assets to liabilities. Underfunding occurs when the funding ratio is too low; a target value for the funding ratio is pre-specified by the decision maker. The risk of underfunding has usually been modelled by employing established risk measures; this controls only a single aspect of the funding ratio distribution. For example, controlling the expected shortfall below the target has limited power in controlling shortfall under worst-case scenarios. We propose ALM models in which the risk of underfunding is modelled based on the concept of Second Order Stochastic Dominance (SSD). This is a criterion for ranking random variables - in our case, funding ratios - that takes the entire distributions of interest into account and works under the widely accepted assumptions that decision makers are rational and risk averse. In the proposed SSD models, investment decisions are taken such that the resulting short-term distribution of the funding ratio is non-dominated with respect to SSD, while a constraint is imposed on the expected terminal wealth. This is done by considering progressively larger tails of the funding ratio distribution and setting target levels for them; a target distribution is thus implied. Different target distributions lead to different SSD-efficient solutions. Improved distributions of funding ratios may thus be achieved, compared to existing risk models for ALM. This is the first contribution of this thesis. Interesting results are obtained in the special case when the target distribution is deterministic, specified by one single outcome. 
In this case, we can obtain equivalent risk minimisation models, with risk defined as expected shortfall or as worst-case loss. This represents the second contribution. The third contribution is a framework for scenario generation based on the "Birth, Immigration, Death, Emigration" (BIDE) population model and the empirical copula; the scenarios are used to evaluate the proposed models and their special cases, both in-sample and out-of-sample. As an application, we consider the planning problem of a large defined benefit (DB) pension fund in Saudi Arabia.
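The SSD criterion described above has a convenient empirical form: X dominates Y in SSD exactly when the expected shortfall of X below every threshold is no larger than that of Y. A minimal check on equal-weight samples (the funding-ratio numbers below are purely illustrative):

```python
import numpy as np

def ssd_dominates(x, y):
    """Empirical second-order stochastic dominance: x SSD-dominates y iff,
    for every threshold t, E[max(t - x, 0)] <= E[max(t - y, 0)].
    For equal-weight samples both shortfall functions are piecewise linear
    with kinks at the sample points, so checking the union of sample points
    suffices."""
    thresholds = np.union1d(x, y)
    es_x = np.array([np.mean(np.maximum(t - x, 0)) for t in thresholds])
    es_y = np.array([np.mean(np.maximum(t - y, 0)) for t in thresholds])
    return bool(np.all(es_x <= es_y + 1e-12))

# Illustrative funding-ratio scenarios: shifting the whole distribution up
# produces an SSD-dominating distribution.
base = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
better = base + 0.05
```

In the proposed models this pairwise comparison is embedded as constraints of an optimisation problem rather than evaluated after the fact, but the shortfall-based characterisation is the same.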
