
MDRIP: A Hybrid Approach to Parallelisation of Discrete Event Simulation

Chao, Daphne (Yu Fen) January 2006
The research project reported in this thesis considers Multiple Distributed Replications in Parallel (MDRIP), a hybrid approach to the parallelisation of quantitative stochastic discrete-event simulation. Parallel Discrete-Event Simulation (PDES) generally covers distributed simulation or simulation with replicated trials. Distributed simulation requires model partitioning and synchronisation among submodels. Simulation with replicated trials can be executed on-line by applying Multiple Replications in Parallel (MRIP). MDRIP has been proposed to overcome problems related to the large size and complexity of simulated models, as well as the problem of controlling the accuracy of the final simulation results. A survey of PDES investigates the primary issues directly related to the parallelisation of DES, a secondary issue related to implementation efficiency, and statistical analysis as a supporting issue. The AKAROA2 package is an implementation that makes this supporting issue effortless. Existing solutions proposed for PDES have focused exclusively on collecting output data during simulation and analysing these data once the simulation is finished. Such off-line statistical analysis of output data offers no control over the statistical errors of the final estimates. On-line control of statistical errors during simulation has been successfully implemented in AKAROA2, an automated controller of output data analysis for simulations executed in MRIP. However, AKAROA2 cannot be applied directly to distributed simulation. This thesis reports the results of a research project aimed at employing AKAROA2 to launch multiple replications of distributed simulation models and to provide on-line sequential control of the statistical errors associated with a distributed performance measure, i.e. a performance measure which depends on output data generated by a number of submodels of a distributed simulation.
We report the changes required in the architecture of AKAROA2 to make MDRIP possible. A new MDRIP-related component of AKAROA2, a distributed simulation engine called mdrip engine, is introduced. Stochastic simulation in its MDRIP version, as implemented in AKAROA2, has been tested in a number of simulation scenarios. We discuss two specific simulation models employed in our tests: (i) a model consisting of independent queues, and (ii) a queueing network consisting of a tandem connection of queueing systems. In the first case, we look at the correctness of the ordering of messages received from the distributed submodels. In the second case, we look at the correctness of output data analysis when the analysed performance measures require data from all submodels of a given (distributed) simulation model. Our tests confirm the correctness of our mdrip engine design in the cases considered, i.e. in models in which causality errors do not occur. However, we argue that the same design principles should be applicable in the case of distributed simulation models with (potential) causality errors.
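The MRIP scheme that this thesis builds on can be illustrated with a small sketch: independent replications of the same model feed a central analyser, which stops the experiment once the estimate reaches a requested precision. This is a minimal single-process illustration of the idea only, not AKAROA2's actual protocol or API; the M/M/1 model, the 5% relative precision, and all parameter values are assumptions for the example.

```python
import math
import random

def run_replication(seed, n_obs=2000):
    """One independent replication: mean waiting time in an M/M/1
    queue with arrival rate 0.5 and service rate 1.0."""
    rng = random.Random(seed)
    arrival = depart = total_wait = 0.0
    for _ in range(n_obs):
        arrival += rng.expovariate(0.5)        # next arrival instant
        start = max(arrival, depart)           # service begins when server frees
        depart = start + rng.expovariate(1.0)  # service completion
        total_wait += start - arrival
    return total_wait / n_obs

def mrip_estimate(precision=0.05, min_reps=5, max_reps=200):
    """Keep adding replications until the relative half-width of a
    95% normal confidence interval for the mean drops below
    `precision`; a sequential stopping rule in the MRIP style."""
    estimates = []
    for seed in range(max_reps):
        estimates.append(run_replication(seed))
        n = len(estimates)
        if n < min_reps:
            continue
        mean = sum(estimates) / n
        var = sum((x - mean) ** 2 for x in estimates) / (n - 1)
        half_width = 1.96 * math.sqrt(var / n)
        if half_width < precision * abs(mean):
            return mean, half_width, n
    return mean, half_width, n
```

In MDRIP each replication would itself be a distributed simulation built from submodels, with the analyser observing a performance measure assembled from their combined output.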

Sequential Analysis of Quantiles and Probability Distributions by Replicated Simulations

Eickhoff, Mirko January 2007
Discrete event simulation is well known to be a powerful approach for investigating the behaviour of complex dynamic stochastic systems, especially when the system is not analytically tractable. The estimation of mean values has traditionally been the main goal of simulation output analysis, even though it provides limited information about the analysed system's performance. Because of its complexity, quantile analysis is applied less frequently, despite its ability to provide much deeper insight into the system of interest. A set of quantiles can be used to approximate a cumulative distribution function, providing fuller information about a given performance characteristic of the simulated system. This thesis exploits the distributed computing power of multiple computers by proposing new methods for sequential and automated analysis of quantile-based performance measures of such dynamic systems. These new methods estimate steady-state quantiles from replicated simulations, using clusters of workstations as simulation engines. A general contribution to the problem of the length of the initial transient is made by considering steady state in terms of the underlying probability distribution. Our research focuses on sequential and automated methods that guarantee a satisfactory level of confidence in the final results. The correctness of the proposed methods has been studied exhaustively by means of sequential coverage analysis. Quantile estimates are used to investigate underlying probability distributions, and we demonstrate that synchronous replications greatly assist this kind of analysis.
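The core idea of cross-replication quantile analysis can be sketched as follows: synchronous replications produce independent observations at each customer index, so an empirical quantile taken across replications at a fixed index estimates a quantile of the underlying distribution at that point of the run. The M/M/1 model and all parameters below are illustrative assumptions, not the experimental setup of the thesis.

```python
import random

def simulate_waits(seed, horizon=500):
    """One replication: the waiting time of each successive customer
    in an M/M/1 queue (arrival rate 0.8, service rate 1.0)."""
    rng = random.Random(seed)
    arrival = depart = 0.0
    waits = []
    for _ in range(horizon):
        arrival += rng.expovariate(0.8)
        start = max(arrival, depart)
        depart = start + rng.expovariate(1.0)
        waits.append(start - arrival)
    return waits

def cross_replication_quantile(q, index, n_reps=400):
    """Empirical q-quantile of the waiting time at a fixed customer
    index, taken across independent synchronous replications."""
    sample = sorted(simulate_waits(seed)[index] for seed in range(n_reps))
    k = min(int(q * n_reps), n_reps - 1)   # nearest-rank quantile
    return sample[k]
```

Tracking how such quantiles change with the index also gives a distribution-based view of the initial transient: once the quantiles stop drifting, the run has reached steady state in the sense used in the thesis.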

Insight generation in simulation studies: an empirical exploration

Gogi, Anastasia January 2016
This thesis presents empirical research that aims to explore insight generation in discrete-event simulation (DES) studies. It is often claimed that simulation is useful for generating insights; there is, however, almost no empirical evidence to support this claim. The factors of a simulation intervention that affect the occurrence of insight are not clear. A specific claim is that watching the animated display of a simulation model is more helpful for making better decisions than relying on the statistical outcomes generated from simulation runs, but again there is very limited evidence to support this. To address this dearth of evidence, two studies are implemented: a quantitative and a qualitative study. In the former, a laboratory-based experimental study is used, in which undergraduate students were placed in three separate groups and given a task to solve using a model with only animation, a model with only statistical results, or no model at all. In the qualitative study, semi-structured interviews with simulation consultants were carried out, in which participants were asked to recount examples of projects where clients changed their problem understanding and generated more effective ideas. The two separate parts of the study found different types of evidence supporting the claim that simulation generates insight. The experimental study suggests that insights are generated more rapidly from statistical results than from the use of animation.
Research outcomes from the interviews include descriptions of: the phase of a simulation study where insight emerges; the role of the different methods applied and means used in discovering and overcoming discontinuity in thinking (for instance, the role of the consultant's influence on problem understanding); how some factors of a simulation intervention are associated with the processes of uncovering and overcoming discontinuity in thinking (for example, the role of the client's team in the selection of methods used to communicate results); and the role of the model and consultant in generating new ideas. This thesis contributes to the limited existing literature by providing a more in-depth understanding of insight in the context of simulation, and empirical evidence, based on an operational definition, of the insight-enabling benefits of simulation. The findings of the study provide new insights into the factors of simulation that support fast and creative problem solving.

Examining the Impact of Experimental Design Strategies on the Predictive Accuracy of Quantile Regression Metamodels for Computer Simulations of Manufacturing Systems

January 2016
This thesis explores the impact of different experimental design strategies on the development of quantile regression based metamodels of computer simulations. The objective is to compare the resulting predictive accuracy of five experimental design strategies, each of which is used to develop metamodels of a computer simulation of a semiconductor manufacturing facility. The five examined strategies include two traditional experimental design strategies, sphere packing and I-optimal, along with three hybrid design strategies developed for this research, which combine desirable properties from each of the more traditional approaches. The three hybrid design strategies are: arbitrary, centroid clustering, and clustering hybrid. Each strategy is analyzed and compared on a common experimental design space, investigating four densities of design point placement across three different experimental regions to predict four different percentiles of the cycle time distribution of a semiconductor manufacturing facility. Results confirm that the predictive accuracy of quantile regression metamodels depends on both the location and the density of the design points placed in the experimental region. They also show that the sphere packing design strategy has the best overall performance in terms of predictive accuracy. However, the centroid clustering hybrid design strategy, developed for this research, has the best predictive accuracy for cases in which only a small number of simulation resources are available from which to develop a quantile regression metamodel. / Masters Thesis, Engineering, 2016
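A quantile regression metamodel of the kind examined here fits a linear predictor by minimising the check (pinball) loss at the chosen quantile level. The sketch below is a generic illustration using SciPy, not the thesis's actual metamodel of the semiconductor facility; the data-generating model in the test is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, q):
    """Average check (pinball) loss at quantile level q."""
    r = y - X @ beta
    return np.mean(np.where(r >= 0, q * r, (q - 1) * r))

def fit_quantile_metamodel(X, y, q):
    """Fit a linear quantile regression metamodel at level q by
    minimising the pinball loss, warm-started from least squares."""
    X1 = np.column_stack([np.ones(len(X)), X])      # add intercept column
    beta0 = np.linalg.lstsq(X1, y, rcond=None)[0]   # OLS warm start
    res = minimize(pinball_loss, beta0, args=(X1, y, q),
                   method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
    return res.x
```

Fitting, say, the 0.9 level to simulated cycle times as a function of load would approximate the 90th percentile of the cycle time distribution across the design region, which is the role such metamodels play here.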

The Effect of Heterogeneous Servers on the Service Level Predicted by Erlang-A

Griffith, Edward Shane 19 May 2011
Thousands of call centers operate in the United States, employing millions of people. Since personnel costs represent as much as 80% of the total operating expense of these centers, it is important for call center managers to determine the staffing level required to maintain the desired operating performance. Historically, queueing models have served an important role in this regard, the most commonly used being the Erlang-C model. The Erlang-C model has several assumptions, however, which must hold for the predicted performance measures to be valid. One assumption that has received significant attention from researchers is that callers have infinite patience and will not terminate a call until the service is complete, regardless of the wait time. Since this assumption is unlikely to hold in reality, researchers have suggested using Erlang-A instead. Erlang-A does consider caller patience and allows for calls to be abandoned prior to receiving service. However, the use of Erlang-A still requires an assumption that is very unlikely to hold in practice: that all agents provide service at the same rate. Discrete event simulation is used to examine the effects of agent heterogeneity on the operating performance of a call center compared to the theoretical performance measures obtained from Erlang-A. Based on the simulation results, it is concluded that variability in agent service rate does not materially affect call center performance except to the extent that the variability changes the average handle time of the call center weighted by the number of calls handled rather than weighted by agent. This is true regardless of call center size, the degree of agent heterogeneity, and the distribution shape of the agent variability. The implication for researchers is that it is unnecessary to search for an analytic solution that relaxes the Erlang-A assumption that agents provide service at the same rate.
Several implications for managers are discussed, including the reliability of using Erlang-A to determine staffing levels, the importance of considering the service rates of individual agents rather than the average handle time, and the unintended consequences of call routing schemes that route calls to faster rather than slower agents.
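The Erlang-C model that serves as the baseline here has a closed form: with offered load A = lambda/mu and c agents, the probability that a call must wait is P_wait = (A^c / (c! (1 - rho))) / (sum over k < c of A^k/k! + A^c / (c! (1 - rho))), where rho = A/c. A direct transcription (the example rates in the test are made up):

```python
from math import factorial

def erlang_c_wait_probability(arrival_rate, service_rate, agents):
    """Erlang-C probability that an arriving call has to wait
    (M/M/c queue, infinite patience, no abandonment)."""
    a = arrival_rate / service_rate        # offered load in Erlangs
    rho = a / agents                       # agent occupancy; must be < 1
    top = a ** agents / factorial(agents) / (1.0 - rho)
    bottom = sum(a ** k / factorial(k) for k in range(agents)) + top
    return top / bottom

def average_speed_of_answer(arrival_rate, service_rate, agents):
    """Mean wait implied by Erlang-C: P_wait / (c*mu - lambda)."""
    pw = erlang_c_wait_probability(arrival_rate, service_rate, agents)
    return pw / (agents * service_rate - arrival_rate)
```

Erlang-A extends this M/M/c model with exponentially distributed caller patience (M/M/c+M); it has no equally compact closed form, which is one reason simulation is used in this thesis to study the heterogeneous-agent case.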

Impact evaluation of an automatic identification technology on inventory management: A simulation approach with a focus on RFID

Petersson, Martin January 2020
Automatic identification systems are a prominent technology used in warehouses to give managers real-time information about their products and to assist warehouse employees in keeping an accurate inventory record. This kind of assistance is needed because an inaccurate inventory leads to profit loss through misplacement and other mistakes. This project cooperated with the organization Stora Enso, specifically one of their forest nurseries, to find a solution that improves their inventory management system. Their current inventory system is a manual process, which leads to mistakes that affect inventory accuracy. This thesis project evaluates automatic identification systems to determine whether the technology is a possible solution, and aims to answer the research question "What are the significant impacts of an automatic identification system on an inventory management system?". From this evaluation, radio frequency identification (RFID) is picked for further study due to its advantages. To evaluate RFID in a warehouse setting, a discrete-event simulation of the forest nursery's warehouse was created. The simulation is then used to evaluate the impact of different RFID implementations and their respective costs. The simulation results show that even a simple RFID implementation can improve inventory accuracy and remove some of the mistakes of a manual system at a relatively low direct cost. They also show that a full RFID implementation, giving full visibility of the warehouse, can almost eliminate inventory mistakes; however, the cost analysis shows that it requires a large investment.
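The accuracy comparison the simulation performs can be caricatured in a few lines of Monte Carlo: manual transactions occasionally record the wrong quantity or location, and an RFID reader catches most of those mistakes. The error and catch rates below are invented for illustration and are not Stora Enso's figures.

```python
import random

def inventory_accuracy(transactions=10_000, error_rate=0.02,
                       rfid_catch_rate=0.95, seed=1):
    """Fraction of transactions recorded correctly under a manual
    process vs. one where RFID catches most recording mistakes."""
    rng = random.Random(seed)
    manual_errors = rfid_errors = 0
    for _ in range(transactions):
        if rng.random() < error_rate:            # a recording mistake occurs
            manual_errors += 1
            if rng.random() >= rfid_catch_rate:  # RFID misses it too
                rfid_errors += 1
    return (1 - manual_errors / transactions,    # manual record accuracy
            1 - rfid_errors / transactions)      # RFID-assisted accuracy
```

A fuller model, like the discrete-event one in the thesis, would also represent where the mistakes occur (picking, placement, counting) so that the cost of each RFID reader placement can be weighed against the errors it removes.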

Current State Simulation, Scope of Improvement and Forecast Demand Analysis at AstraZeneca using Discrete Event Simulation

Kasula, Siva Sai Krishna January 2020
In this rapidly changing product demand market, pharmaceutical companies have adapted their production systems to be more flexible and agile. To meet demand, production lines need to be more efficient and effective; even a small improvement is a great achievement, as these lines are designed to produce large volumes of medicines. Testing the efficiency and effectiveness of the lines by analyzing production data is time-consuming and needs the involvement of experts from different departments, and when the production lines are subjected to change, previous analyses are no longer valid and must be repeated. Instead, this can be replaced with discrete event simulation (DES) analysis. DES is one of the key technologies for developing a production system in this Industry 4.0 era. As production systems become more and more complicated, it becomes difficult to understand and analyze the behavior of the system when changes are introduced. Simulation is the right technology for analyzing and understanding the behavior of the real system under small or big changes. The purpose of this case study is to use DES, with ExtendSim as the simulation tool, to develop at the case company a virtual model of a production system containing five production lines, in order to understand and analyze the lines, identify possible improvements, and evaluate the feasibility of the production system achieving the forecasted demand. Possible improvements are identified from the simulation results of the current state model, and a future state simulation model is developed incorporating the improvements. This future state model is then used to analyze the feasibility of the production lines for the forecasted demand. Developing the simulation model revealed that the production lines were not as efficient as the company assumed and were underutilized.
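The feasibility question, whether the lines can meet the forecasted demand, ultimately reduces to a capacity calculation that the simulation refines with queueing, changeover and breakdown effects. A back-of-envelope version (all figures invented, not AstraZeneca data):

```python
def line_utilisation(forecast_units, cycle_time_s, availability,
                     shift_seconds=8 * 3600, shifts_per_day=2, days=250):
    """Required utilisation of one production line to meet a yearly
    forecast, given an effective cycle time per unit (seconds) and
    an availability factor. All defaults are illustrative."""
    capacity = shift_seconds * shifts_per_day * days * availability / cycle_time_s
    return forecast_units / capacity

# A forecast is feasible on the line only if required utilisation <= 1.
```

A required utilisation above 1 means the forecast is infeasible on that line; the simulation's role is to show how far below 1 it must stay once variability and stoppages are included.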

Improving Patients' Experience in an Emergency Department using a Systems Engineering Approach

Khazaei, Hosein 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / The healthcare industry in the United States of America faces a big paradox: although the US is a leader in medical devices, medical practices and medical research, there is not enough satisfaction and quality in the performance of US healthcare operations. Despite the big investments and budgets associated with US healthcare, there are big threats on the operational side that reduce the quality of care. In this research study, a step-by-step systems engineering approach is applied to improve the healthcare delivery process in an Emergency Department (ED) of a hospital located in Indianapolis, Indiana. Different types of systems engineering tools and techniques are used to improve the quality of care and patient satisfaction in the ED of Eskenazi hospital. Having a simulation model helps to build a better understanding of the ED process and to learn more about its bottlenecks. The simulation model is verified and validated using different techniques, such as applying extreme and moderate conditions and comparing model results with historical data. Four different what-if scenarios are proposed and tested to identify possible improvements in length of stay (LOS); the scenarios are tested under both the regular and an increased patient arrival rate. The optimal what-if scenario can reduce the LOS by 37 minutes compared to the current ED setting. Under the increased patient arrival rate, patients may stay in the ED for up to 6 hours; with the proposed ED setting, however, patients spend only an additional 106 minutes compared to the regular patient arrival rate.
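The what-if experiments can be illustrated with a toy M/M/c model of an ED: patients arrive at random, take the first bed to free up, and LOS is waiting plus treatment time. All rates and bed counts are invented for the sketch and do not reflect Eskenazi's data or the thesis's validated model.

```python
import random

def simulate_ed_los(arrival_rate, service_rate, beds, patients=5000, seed=7):
    """Mean length of stay (wait + treatment), in hours, for a toy
    M/M/c emergency department; rates are per hour."""
    rng = random.Random(seed)
    free_at = [0.0] * beds                   # when each bed next frees up
    t = total_los = 0.0
    for _ in range(patients):
        t += rng.expovariate(arrival_rate)              # next arrival
        bed = min(range(beds), key=free_at.__getitem__) # earliest-free bed
        start = max(t, free_at[bed])                    # FIFO start of treatment
        free_at[bed] = start + rng.expovariate(service_rate)
        total_los += free_at[bed] - t
    return total_los / patients
```

Comparing the same arrival stream under two bed counts mimics a what-if scenario: the difference in mean LOS is the predicted improvement from the extra capacity.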

A Reconfigurable Simulator for Coupled Conveyors

Hayslip, Nunzio January 2006
No description available.

Developing an Object-Oriented Approach for Operations Simulation in SPEEDES

Wasadikar, Amit 01 January 2005
Using simulation techniques, the performance of any proposed system can be tested under different scenarios with a generated model. However, it is difficult to rapidly create simulation models that accurately represent the complexity of the system. In recent years, Object-Oriented Discrete-Event Simulation has emerged as a promising technology for implementing rapid simulation schemes. A number of software packages based on programming languages such as C++ and Java are available for carrying out Object-Oriented Discrete-Event Simulation. These packages establish a general framework for simulation in computer programs, but need to be further customized for the desired end-use applications. In this thesis, a generic simulation library is created for the distributed Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES). The library offers classes to model the functionality of servers, processes, resources, transporters, and decisions, and is expected to produce efficient simulation models in less time and with less coding. The class hierarchy is modeled using the Unified Modeling Language (UML). To test the library, the existing SPEEDES Space Shuttle Model is enhanced and recreated. The enhanced model is successfully validated against the original Arena model.
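The kind of class library described, generic building blocks layered on an event-scheduling core, can be sketched in miniature. This is an illustration of the object-oriented pattern only; it is not SPEEDES code and does not reflect the actual class hierarchy of the thesis.

```python
import heapq

class Simulator:
    """Event-scheduling core: a clock plus a time-ordered event list."""
    def __init__(self):
        self.clock = 0.0
        self._events = []
        self._seq = 0                       # tie-breaker for equal times
    def schedule(self, delay, action):
        heapq.heappush(self._events, (self.clock + delay, self._seq, action))
        self._seq += 1
    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._events)
            action()

class Server:
    """Reusable building block: a single server with a FIFO queue."""
    def __init__(self, sim, service_time):
        self.sim = sim
        self.service_time = service_time
        self.busy = False
        self.queued = 0
        self.completed = 0
    def arrive(self):
        if self.busy:
            self.queued += 1                # join the queue
        else:
            self.busy = True
            self.sim.schedule(self.service_time, self.depart)
    def depart(self):
        self.completed += 1
        if self.queued:
            self.queued -= 1                # start the next queued job
            self.sim.schedule(self.service_time, self.depart)
        else:
            self.busy = False
```

Classes for resources, transporters and decisions would follow the same pattern: each encapsulates its own state and schedules future events through the core, which is what makes models quick to assemble from such a library.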
