1 |
Designing Better Allocation Policies for Influenza Vaccine. Demirbilek, Mustafa, January 2013
Influenza has been one of the most infectious diseases for roughly 2,400 years. The most effective way to prevent influenza outbreaks and eliminate their seasonal effects is vaccination. The distribution of influenza vaccine to the various groups in the population is therefore an important decision that determines the effectiveness of vaccination for the entire population. We developed a simulation model using the Epifire C++ application [2] to simulate influenza transmission under a given vaccination strategy. Our model can generate a network configured with different degree distributions, transmission rates, numbers of nodes and edges, and infection periods, and it performs chain-binomial simulation of SIR (Susceptible-Infectious-Recovered) disease transmission. Furthermore, we integrated NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search) to optimize vaccine allocation across age groups. We calibrate our model to age-specific attack rates from the 1918 pandemic. In our simulation model, we evaluate three vaccine policies over 36 scenarios with 1,000 individuals: the policy of the Advisory Committee on Immunization Practices (ACIP), the former recommendations of the Centers for Disease Control and Prevention (CDC), and the new CDC recommendations. We record the number of infected people at the end of each run and calculate the corresponding cost and years of life lost. We observe that the optimized vaccine distribution yields fewer infected people and fewer years of life lost than the aforementioned policies in almost all cases. On the other hand, the total costs of the policies are close to one another; the former CDC policy yields a slightly lower cost than the other policies, and than our proposed allocation, in some cases.
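The chain-binomial network simulation described above can be sketched in a few lines. The following Python snippet is only a minimal illustration, not the author's Epifire-based model: the network, transmission probability, infectious period, and vaccination set are all placeholder inputs.

```python
import random

def chain_binomial_sir(adjacency, p_transmit, infectious_period, seed_nodes, vaccinated, rng):
    """One chain-binomial SIR run on a contact network.

    adjacency: dict node -> list of neighbour nodes
    p_transmit: per-contact, per-time-step transmission probability
    infectious_period: number of time steps a node stays infectious
    seed_nodes: initially infected nodes
    vaccinated: set of nodes removed from the susceptible pool
    """
    state = {v: "S" for v in adjacency}
    for v in vaccinated:
        state[v] = "R"                      # vaccinated nodes start immune
    clock = {}
    for v in seed_nodes:
        if state[v] == "S":
            state[v], clock[v] = "I", infectious_period

    while any(s == "I" for s in state.values()):
        newly_infected = []
        for v, s in state.items():
            if s != "I":
                continue
            for u in adjacency[v]:
                # chain-binomial step: each S-I contact transmits independently
                if state[u] == "S" and rng.random() < p_transmit:
                    newly_infected.append(u)
            clock[v] -= 1
            if clock[v] == 0:
                state[v] = "R"
        for u in newly_infected:
            if state[u] == "S":
                state[u], clock[u] = "I", infectious_period

    # final epidemic size: recovered nodes minus those who started immune
    return sum(1 for s in state.values() if s == "R") - len(vaccinated)

if __name__ == "__main__":
    # toy example: ring network of 1000 nodes, vaccinate every 10th node (illustrative numbers only)
    n = 1000
    adjacency = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    vaccinated = set(range(0, n, 10))
    size = chain_binomial_sir(adjacency, 0.3, 4, seed_nodes=[1],
                              vaccinated=vaccinated, rng=random.Random(0))
    print("final epidemic size:", size)
```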
|
2 |
Racionalizační projekt pracoviště svařování ohřívačů / Rationalization project of workplace for Hot-water Heater welding. Varjan, Matúš, January 2010
The aim of the thesis is to rationalize the water-heater welding area in the company Tatramat - ohrievače s.r.o. The rationalization consists of three parts. The first part deals with the arrangement of the workplaces, and the second part re-evaluates the monthly production planning. The third part describes in detail the production of one product type and, based on simulations built in the Witness simulation software, compares the times recorded in the company information system Orfert with the real production times in the operation. Each part offers optimization proposals; merging them into one unit creates an efficient, transparent, and economically valuable rationalization of the water-heater welding area.
|
3 |
Adaptive Sampling Line Search for Simulation Optimization. Ragavan, Prasanna Kumar, 08 March 2017
This thesis is concerned with the development of algorithms for simulation optimization (SO), a special case of stochastic optimization where the objective function can only be evaluated through noisy observations from a simulation. Deterministic techniques, when applied directly to simulation optimization problems, fail to converge because they cannot handle randomness, so more sophisticated algorithms are required. However, many existing algorithms dedicated to simulation optimization show poor performance in practice because they require extensive parameter tuning.
To overcome these shortfalls of existing SO algorithms, we develop ADALINE, a line-search-based algorithm that eliminates the need for user-defined parameters. ADALINE is designed to identify a local minimum on continuous and integer-ordered feasible sets. On a continuous feasible set ADALINE mimics deterministic line search algorithms, while on integer-ordered feasible sets it iterates between a line search and an enumeration procedure in its quest to identify a local minimum. ADALINE improves upon many existing SO algorithms by determining the sample size adaptively as a trade-off between the estimation error and the optimization error; that is, the algorithm expends simulation effort in proportion to the quality of the incumbent solution. We also show that ADALINE converges ``almost surely'' to the set of local minima. Finally, our numerical results suggest that ADALINE converges to a local minimum faster, outperforming other advanced SO algorithms that utilize variable sampling strategies.
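As a rough illustration of the adaptive-sampling idea (and not of ADALINE itself), the sketch below runs a derivative-free line search on a one-dimensional noisy objective and increases the sample size whenever the step length shrinks; the test function, schedules, and constants are all invented for illustration.

```python
import random

def noisy_f(x, rng):
    # toy stochastic objective: true minimum at x = 2 (illustrative, not from the thesis)
    return (x - 2.0) ** 2 + rng.gauss(0.0, 0.5)

def sample_mean(x, n, rng):
    return sum(noisy_f(x, rng) for _ in range(n)) / n

def adaptive_sampling_line_search(x0, iters=50, seed=1):
    """Minimal sketch of an adaptive-sampling line search on a 1-D problem.

    The sample size grows as the step size shrinks, so early iterations are
    cheap and later iterations are estimated more accurately. This mirrors the
    estimation/optimization trade-off described above, but it is not ADALINE.
    """
    rng = random.Random(seed)
    x, step, n = x0, 1.0, 4
    for _ in range(iters):
        h = max(step, 1e-3)
        grad = (sample_mean(x + h, n, rng) - sample_mean(x - h, n, rng)) / (2 * h)
        direction = -1.0 if grad > 0 else 1.0
        # backtracking: shrink the step until the sampled objective improves
        while step > 1e-4:
            if sample_mean(x + direction * step, n, rng) < sample_mean(x, n, rng):
                x += direction * step
                break
            step *= 0.5
            n = min(2 * n, 4096)   # smaller steps demand lower estimation error
    return x

print(adaptive_sampling_line_search(10.0))   # should land near 2
```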
To demonstrate the performance of our algorithm on a practical problem, we apply ADALINE to a surgery rescheduling problem. In the rescheduling problem, the objective is to minimize the cost of disruptions to an existing schedule shared between multiple surgical specialties while accommodating semi-urgent surgeries that require expedited intervention. The disruptions to the schedule are determined using a threshold-based heuristic, and ADALINE identifies the threshold levels for the various surgical specialties that minimize the expected total cost of disruption. A comparison of the solutions obtained using a Sample Average Approximation (SAA) approach and ADALINE is provided. We find that the adaptive sampling strategy in ADALINE identifies a better solution more quickly than SAA. / Ph. D. / This thesis is concerned with the development of algorithms for simulation optimization (SO), where the objective function does not have an analytical form and can only be estimated through noisy observations from a simulation. Deterministic techniques, when applied directly to simulation optimization problems, fail to converge because they cannot handle randomness, so more sophisticated algorithms are required. However, many existing algorithms dedicated to simulation optimization show poor performance in practice because they require extensive parameter tuning.
To overcome these shortfalls of existing SO algorithms, we develop ADALINE, a line-search-based algorithm that minimizes the need for user-defined parameters. ADALINE is designed to identify a local minimum on continuous and integer-ordered feasible sets. On continuous feasible sets ADALINE mimics deterministic line search algorithms, while on integer-ordered feasible sets it iterates between a line search and an enumeration procedure in its quest to identify a local minimum. ADALINE improves upon many existing SO algorithms by determining the sample size adaptively as a trade-off between the estimation error and the optimization error; that is, the algorithm expends simulation effort in proportion to the quality of the incumbent solution. Finally, our numerical results suggest that ADALINE converges to a local minimum faster than the best available SO algorithm for the purpose.
To demonstrate the performance of our algorithm on a practical problem, we apply ADALINE to a surgery rescheduling problem. In the rescheduling problem, the objective is to minimize the cost of disruptions to an existing schedule shared between multiple surgical specialties while accommodating semi-urgent surgeries that require expedited intervention. The disruptions to the schedule are determined using a threshold-based heuristic, and ADALINE identifies the threshold levels for the various surgical specialties that minimize the expected total cost of disruption. A comparison of the solutions obtained using traditional optimization techniques and ADALINE is provided. We find that the adaptive sampling strategy in ADALINE identifies a better solution more quickly than traditional optimization.
|
4 |
Fleet Sizing and Scheduling Model of Container Carriers between Two Ports. Elyamak, Alaa Mustapha, 01 January 2008
Globalization and containerization have changed the shipping industry, and carriers are challenged to reshape their operational planning in order to maintain their market share. The objective of this paper is to formulate a model to determine the optimal fleet size and sailing frequency that minimize total shipping and inventory (waiting) costs for a container shipping company. The proposed model assumes that container arrivals follow a Poisson process. We first consider unlimited ship capacity and propose a solution to determine the required fleet size and the optimal sailing frequency. We then extend the work to consider limited ship capacity. Furthermore, we introduce a cost component associated with outsourcing shipments due to insufficient capacity: shipments are outsourced when the number of containers at a port exceeds the available capacity. In the general case, a closed-form solution could not be derived; therefore, a simulation study is undertaken to analyze optimal fleet sizing, scheduling, and outsourcing policies under varying parameters. Our study investigates the trade-off between building capacity and outsourcing in the context of cargo shipment. The model proves to be a reliable tool to determine the optimal delay time at ports and the optimal fleet size.
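A toy version of such a simulation study can be sketched as follows; this is not the author's model, and every parameter value is a placeholder. It illustrates how shipping, waiting, and outsourcing costs trade off as the sailing interval changes.

```python
import math, random

def simulate_route(arrival_rate, ship_capacity, sailing_interval, days,
                   wait_cost, sailing_cost, outsource_cost, seed=0):
    """Toy simulation of one leg between two ports (illustrative parameters only).

    Containers arrive as a Poisson process, a ship of fixed capacity departs every
    `sailing_interval` days, and containers that do not fit are outsourced.
    Returns average daily cost, which can be compared across candidate schedules.
    """
    rng = random.Random(seed)
    queue, total_cost = 0, 0.0
    for day in range(1, days + 1):
        # Poisson arrivals via inversion (sufficient for a sketch)
        arrivals, p, threshold = 0, 1.0, math.exp(-arrival_rate)
        while True:
            p *= rng.random()
            if p < threshold:
                break
            arrivals += 1
        queue += arrivals
        total_cost += wait_cost * queue          # holding cost for waiting containers
        if day % sailing_interval == 0:          # a ship departs
            shipped = min(queue, ship_capacity)
            overflow = queue - shipped
            total_cost += sailing_cost + outsource_cost * overflow
            queue = 0                            # overflow is outsourced immediately
    return total_cost / days

# crude grid search over the sailing interval (all numbers are placeholders)
for interval in (1, 2, 3, 5, 7):
    cost = simulate_route(arrival_rate=40, ship_capacity=150, sailing_interval=interval,
                          days=5000, wait_cost=1.0, sailing_cost=500.0, outsource_cost=8.0)
    print(interval, round(cost, 1))
```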
|
5 |
SIMULATION AND OPTIMIZATION OF A CROSSDOCKING OPERATION IN A JUST-IN-TIME ENVIRONMENT. Hauser, Karina, 01 January 2002
In an ideal Just-in-Time (JIT) production environment, parts should be delivered to the workstations at the exact time they are needed and in the exact quantity required. In reality, for most components/subassemblies this is neither practical nor economical. In this study, the material flow of the crossdocking operation at the Toyota Motor Manufacturing plant in Georgetown, KY (TMMK) is simulated and analyzed. At the Georgetown plant between 80 and 120 trucks are unloaded every day, with approximately 1300 different parts being handled in the crossdocking area. The crossdocking area consists of 12 lanes, each lane corresponding to one section of the assembly line. Whereas some pallets contain parts designated for only one lane, other parts are delivered in such small quantities that they arrive as mixed pallets. These pallets have to be sorted/crossdocked into the proper lanes before they can be delivered to the workstations at the assembly line. This procedure is both time consuming and costly. In this study, the present layout of the crossdocking area at Toyota and a layout proposed by Toyota are compared via simulation with three newly designed layouts. The simulation models will test the influence of two different volumes of incoming quantities, the actual volume as it is now and one of 50% reduced volume. The models will also examine the effects of crossdocking on the performance of the system, simulating three different percentage levels of pallets that have to be crossdocked. The objectives of the initial study are twofold. First, simulations of the current system, based on data provided by Toyota, will give insight into the dynamic behavior and the material flow of the existing arrangement. These simulations will simultaneously serve to validate our modeling techniques. The second objective is to reduce the travel distances in the crossdocking area; this will reduce the workload of the team members and decrease the lead time from unloading of the truck to delivery to the assembly line. In the second phase of the project, the design will be further optimized. Starting with the best layouts from the simulation results, the lanes will be rearranged using a genetic algorithm to allow the lanes with the most crossdocking traffic to be closest together. The different crossdocking quantities and percentages of crossdocking pallets in the simulations allow a generalization of the study and the development of guidelines for layouts of other types of crossdocking operations. The simulation and optimization can be used as a basis for further studies of material flow in JIT and/or crossdocking environments.
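The lane-rearrangement step can be illustrated with a small permutation genetic algorithm. The sketch below is not the study's model; the traffic matrix and GA settings are placeholders, and the fitness is simply the crossdocking traffic between two lanes weighted by their distance in the layout.

```python
import random

def layout_cost(order, traffic):
    """Total weighted travel: traffic between two lanes times their distance in the layout."""
    position = {lane: i for i, lane in enumerate(order)}
    return sum(traffic[a][b] * abs(position[a] - position[b])
               for a in range(len(order)) for b in range(a + 1, len(order)))

def rearrange_lanes(traffic, generations=300, pop_size=60, seed=0):
    """Toy permutation GA: truncation selection, order-preserving crossover, swap mutation."""
    rng = random.Random(seed)
    n = len(traffic)
    population = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda order: layout_cost(order, traffic))
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + [lane for lane in p2 if lane not in p1[:cut]]
            if rng.random() < 0.3:                     # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda order: layout_cost(order, traffic))
    return best, layout_cost(best, traffic)

# 12 lanes with a random symmetric crossdocking-traffic matrix (placeholder data)
rng = random.Random(1)
traffic = [[0] * 12 for _ in range(12)]
for a in range(12):
    for b in range(a + 1, 12):
        traffic[a][b] = traffic[b][a] = rng.randint(0, 20)
print(rearrange_lanes(traffic))
```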
|
6 |
Interactions of Uncertainty and Optimization: Theory, Algorithms, and Applications to Chemical Site Operations. Amaran, Satyajith, 01 September 2014
This thesis explores different paradigms for incorporating uncertainty into optimization frameworks, with applications in chemical engineering and site-wide operations. First, we address the simulation optimization problem, which deals with the search for optimal input parameters to black-box stochastic simulations that are potentially expensive to evaluate. We include a comprehensive literature survey of the state of the art in the area, propose a new provably convergent trust-region-based algorithm, and discuss implementation details along with extensive computational experience, including examples from chemical engineering applications. Next, we look at the problem of long-term site-wide maintenance turnaround planning. Turnarounds involve the disruption of production for significant periods of time and may incur enormous costs in terms of maintenance manpower as well as lost sales. The problem involves (1) the simulation of profit deterioration due to wear and tear, followed by the determination of how frequently a particular turnaround should take place; and (2) the consideration of site network structure and turnaround frequencies to determine how turnarounds of different plants may be coordinated over a long-term horizon. We investigate two mixed-integer models: the first determines optimal frequencies of individual plant turnarounds, while the second maximizes long-term profit through coordination of turnarounds across the site. We then turn to more conventional methods of dealing with optimization under uncertainty and make use of a combined robust optimization and stochastic programming approach to medium-term maintenance planning in integrated chemical sites. The nature of the uncertainty considered affects two aspects of maintenance planning, one of which is most suitably addressed through a robust optimization framework, while the other is better handled with stochastic programming models. In summary, we highlight the importance of considering uncertainty in optimization, as well as the choice of approach or paradigm, through chemical engineering applications that span varied domains and time scales.
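The first turnaround question can be illustrated with a back-of-the-envelope renewal-reward calculation; the deterioration curve, costs, and downtime below are invented, not taken from the thesis.

```python
# Illustration of picking a turnaround interval by maximizing long-run average profit
# (renewal-reward reasoning; all numbers are invented for the sketch).

def average_profit(interval_months, base_rate=100.0, decay=1.5,
                   turnaround_cost=900.0, turnaround_months=1.0):
    """Long-run average profit per month if a turnaround is done every `interval_months`.

    The profit rate declines linearly with time since the last turnaround
    (base_rate - decay * t), and each turnaround costs `turnaround_cost` and
    takes `turnaround_months` of downtime.
    """
    # integral of (base_rate - decay * t) over one operating cycle
    cycle_profit = base_rate * interval_months - 0.5 * decay * interval_months ** 2
    cycle_length = interval_months + turnaround_months
    return (cycle_profit - turnaround_cost) / cycle_length

best = max(range(6, 121), key=average_profit)
print(best, round(average_profit(best), 2))
```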
|
7 |
Rychlý a částečně překládaný simulátor pro aplikačně specifické procesory / Fast and Partially Translated Simulator for Application-Specific Processors. Richtarik, Pavel, January 2018
The major objective of this work is to analyse the possibilities of using simulation in the development of application-specific instruction-set processors, to explore and compare common simulation techniques, and to use the collected information to design a new simulation tool suitable for use in processor development and optimization. This thesis presents the main requirements for the new simulator and describes the design and implementation of its key parts, with an emphasis on high performance.
|
8 |
Optimalizace procesů výrobní linky / Process Optimization of the Production Line. Prokop, Aleš, January 2008
This thesis presents simulation, and the simulation software Witness, as a decision-support instrument for a company, applied at STEKO spol. s r.o. The goal of this thesis is to create a model of a real business process in the simulation program and subsequently optimize it.
|
9 |
Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches. Eskandari, Hamidreza, 01 January 2006
In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the others. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective problem formulation into a single-objective optimization problem with a single solution. However, the drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, as is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, "black-box" objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging quickly to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments, and SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely used benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
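The Pareto-dominance bookkeeping that such MOEAs rely on is easy to sketch. The snippet below is a generic brute-force Pareto filter, not FPGA or SPGA, and the candidate objective vectors are made up.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization of every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the points not dominated by any other point (a brute-force Pareto filter)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# two conflicting objectives: improving one degrades the other along the front
candidates = [(1.0, 9.0), (2.0, 7.0), (3.0, 7.5), (4.0, 4.0), (6.0, 3.5), (5.0, 5.0)]
print(nondominated(candidates))
# -> [(1.0, 9.0), (2.0, 7.0), (4.0, 4.0), (6.0, 3.5)]
```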
|
10 |
Sampling Laws for Stochastically Constrained Simulation Optimization on Finite Sets. Hunter, Susan R., 24 October 2011
Consider the context of selecting an optimal system from among a finite set of competing systems, based on a "stochastic" objective function and subject to multiple "stochastic" constraints. In this context, we characterize the asymptotically optimal sample allocation that maximizes the rate at which the probability of false selection tends to zero in two scenarios: first in the context of general light-tailed distributions, and second in the specific context in which the objective function and constraints may be observed together as multivariate normal random variates.
In the context of general light-tailed distributions, we present the optimal allocation as the result of a concave maximization problem for which the optimal solution is the result of solving one of two nonlinear systems of equations. The first result of its kind, the optimal allocation is particularly easy to obtain in contexts where the underlying distributions are known or can be assumed, e.g., normal, Bernoulli. A consistent estimator for the optimal allocation and a corresponding sequential algorithm for implementation are provided. Various numerical examples demonstrate where and to what extent the proposed allocation differs from competing algorithms.
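The sequential structure described above (estimate, re-solve the allocation, sample, repeat) can be sketched generically. The snippet below omits the stochastic constraints and replaces the rate-optimal allocation from the concave maximization with a crude normal-theory heuristic, so it illustrates only the plug-in loop, not the thesis's allocation.

```python
import random, statistics

def sequential_allocation(simulate, k, batch=10, budget=2000, n0=20, seed=0):
    """Skeleton of a sequential plug-in sampling procedure (objective minimization).

    `simulate(i, rng)` returns one noisy objective observation of system i. After an
    initial stage, the loop repeatedly (1) estimates means and variances from all data
    so far, (2) computes an allocation from those estimates, and (3) spends the next
    batch of replications accordingly. The allocation rule below (weights ~ variance / gap^2)
    is a stand-in for the rate-optimal allocation described in the abstract.
    """
    rng = random.Random(seed)
    data = {i: [simulate(i, rng) for _ in range(n0)] for i in range(k)}
    spent = k * n0
    while spent < budget:
        means = {i: statistics.fmean(data[i]) for i in range(k)}
        variances = {i: statistics.variance(data[i]) for i in range(k)}
        best = min(means, key=means.get)
        weights = {}
        for i in range(k):
            gap = means[i] - means[best]
            weights[i] = variances[i] / max(gap, 1e-6) ** 2 if i != best else 0.0
        weights[best] = max(weights.values())          # keep refining the incumbent too
        total = sum(weights.values()) or 1.0
        for i in range(k):                              # spend the next batch
            n_i = max(1, round(batch * weights[i] / total))
            data[i].extend(simulate(i, rng) for _ in range(n_i))
            spent += n_i
    means = {i: statistics.fmean(data[i]) for i in range(k)}
    return min(means, key=means.get), {i: len(data[i]) for i in range(k)}

# toy problem: system i has true mean i/10 plus noise, so system 0 is best (placeholder setup)
print(sequential_allocation(lambda i, rng: i / 10 + rng.gauss(0, 1), k=5))
```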
In the context of multivariate normal distributions, we present an exact, asymptotically optimal allocation. This allocation is the result of a concave maximization problem in which there are at least as many constraints as there are suboptimal systems. Each constraint corresponding to a suboptimal system is a convex optimization problem. Thus the optimal allocation may easily be obtained in the context of a "small" number of systems, where the quantifier "small" depends on the available computing resources. A consistent estimator for the optimal allocation and a fully sequential algorithm, fit for implementation, are provided. The sequential algorithm performs significantly better than equal allocation in finite time across a variety of randomly generated problems.
The results presented in the general and multivariate normal context provide the first foundation of exact asymptotically optimal sampling methods in the context of "stochastically" constrained simulation optimization on finite sets. Particularly, the general optimal allocation model is likely to be most useful when correlation between the objective and constraint estimators is low, but the data are non-normal. The multivariate normal optimal allocation model is likely to be useful when the multivariate normal assumption is reasonable or the correlation is high. / Ph. D.
|