41.
Rollback Reduction Techniques Through Load Balancing in Optimistic Parallel Discrete Event Simulation. Sarkar, Falguni, 05 1900.
Discrete event simulation is an important tool for modeling and analysis. Some simulation applications, such as telecommunication network performance, VLSI logic circuit design, and battlefield simulation, require enormous amounts of computing resources. One way to satisfy this demand for computing power is to decompose the simulation system into several logical processes (LPs) and run them concurrently. In any parallel discrete event simulation (PDES) system, the events are ordered according to their time of occurrence. For the simulation to be correct, this ordering has to be preserved. There are three approaches to maintaining this ordering. In a conservative system, no LP executes an event unless it is certain that all events with earlier time-stamps have been executed. Such systems are prone to deadlock. In an optimistic system, on the other hand, simulation progresses without regard to this ordering but saves the system state regularly. Whenever a causality violation is detected, the system rolls back to a state saved earlier and restarts processing after correcting the error. In a third approach, all the LPs participate in the computation of a safe time window, and all events with time-stamps within this window are processed concurrently.
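The optimistic mechanism described above can be sketched in a few lines. This is an illustrative toy, not the author's implementation: it omits anti-messages and the re-execution of rolled-back events, and all names are invented for the example.

```python
import copy

class OptimisticLP:
    """Toy logical process: executes events optimistically and rolls back
    when an event arrives with a timestamp earlier than the local clock.
    Anti-messages and re-execution of undone events are omitted."""

    def __init__(self):
        self.clock = 0
        self.state = {"count": 0}
        self.saved = []       # (clock, state) checkpoints
        self.rollbacks = 0

    def execute(self, timestamp):
        if timestamp < self.clock:          # straggler: causality violation
            self.rollback(timestamp)
        self.saved.append((self.clock, copy.deepcopy(self.state)))
        self.clock = timestamp
        self.state["count"] += 1

    def rollback(self, timestamp):
        # discard checkpoints at or after the straggler's time, then
        # restore the most recent surviving one
        while self.saved and self.saved[-1][0] >= timestamp:
            self.saved.pop()
        if self.saved:
            self.clock, state = self.saved[-1]
            self.state = copy.deepcopy(state)
        self.rollbacks += 1

lp = OptimisticLP()
for t in [1, 3, 5, 2, 4]:    # the event at time 2 arrives late
    lp.execute(t)
print(lp.rollbacks, lp.state["count"])  # 1 3
```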
In optimistic simulation systems, there is a global virtual time (GVT), which is the minimum of the time-stamps of all the events existing in the system. The system cannot roll back to a state prior to GVT, and hence all such states can be discarded. GVT is used for memory management, load balancing, termination detection, and the committing of events. However, GVT computation introduces additional overhead.
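The GVT definition above translates directly into code. A minimal sketch, with the data layout and function names invented for illustration:

```python
def compute_gvt(lp_queues, in_transit):
    """GVT: the minimum timestamp over every unprocessed event in the LPs'
    input queues and every message still in transit (invented data layout:
    plain lists of timestamps)."""
    pending = [t for q in lp_queues for t in q] + list(in_transit)
    return min(pending) if pending else float("inf")

def fossil_collect(checkpoints, gvt):
    """States saved before GVT can never be rolled back to, so discard them."""
    return [(t, s) for (t, s) in checkpoints if t >= gvt]

# two LPs with pending events at 12, 9 and 15; one message in transit at 7
gvt = compute_gvt([[12, 9], [15]], in_transit=[7])
print(gvt)  # 7
```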
In optimistic systems, a large number of rollbacks can degrade system performance considerably. We have studied the effect of load balancing in reducing the number of rollbacks in such systems. We have designed three load balancing algorithms and implemented two of them on a network of workstations; the third algorithm has been analyzed probabilistically. The reason for choosing a network of workstations is its low cost and the availability of efficient message passing software such as PVM and MPI. All of these load balancing algorithms piggyback on the existing GVT computation algorithms and try to balance the speed of simulation across LPs. We have also designed an optimal GVT computation algorithm for hypercubes and studied its performance with respect to other GVT computation algorithms by simulating a hypercube in our network cluster.
We use the topological properties of a star network to design an algorithm for computing a safe time window for parallel discrete event simulation. We have analyzed and simulated the behavior of an open queuing network resembling such an architecture. Our algorithm also extends to hierarchical stars and to recursive window computation.
42.
Thread Safe Multi-Tier Priority Queue for Managing Pending Events in Multi-Threaded Discrete Event Simulations. DePero, Matthew Michael, 28 August 2018.
No description available.
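No abstract is available, but the title points at a well-known building block: the pending-event set shared by worker threads must behave as a thread-safe priority queue. Below is a minimal lock-based sketch of that contract; the thesis's actual multi-tier design is not described here, so everything in this example is an assumption.

```python
import heapq
import threading

class PendingEventQueue:
    """Lock-protected pending-event set: threads schedule timestamped events
    and always dequeue the earliest one. A tie-break counter keeps events
    with equal timestamps in FIFO order."""

    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()
        self._counter = 0

    def schedule(self, timestamp, event):
        with self._lock:
            heapq.heappush(self._heap, (timestamp, self._counter, event))
            self._counter += 1

    def next_event(self):
        with self._lock:
            if not self._heap:
                return None
            timestamp, _, event = heapq.heappop(self._heap)
            return timestamp, event

q = PendingEventQueue()
for t, e in [(5, "b"), (1, "a"), (9, "c")]:
    q.schedule(t, e)
print(q.next_event())  # (1, 'a')
```

A single global lock is the simplest correct design; multi-tier structures typically exist to reduce contention on exactly this lock.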
43.
Optimization approaches for designing baseball scout networks under uncertainty. Ozlu, Ahmet Oguzhan, 27 May 2016.
Major League Baseball (MLB) is a 30-team North American professional baseball league, and Minor League Baseball (MiLB) is the hierarchy of developmental professional baseball teams for MLB. Most MLB players first develop their skills in MiLB, and MLB teams employ scouts, experts who evaluate the strengths, weaknesses, and overall potential of these players. In this dissertation, we study the problem of designing a scouting network for an MLB team. We introduce the problem to the operations research literature to help teams make strategic and operational decisions when managing their scouting resources. The thesis consists of three chapters that address decisions such as how the scouts should be assigned to the available MiLB teams, how the scouts should be routed around the country, how many scouts are needed to perform the major scouting tasks, and whether there are trade-offs between the scouting objectives and, if so, what their outcomes and insights are. In the first chapter, we study the problem of assigning and scheduling minor league scouts for MLB teams. There are multiple objectives in this problem. We formulate the problem as an integer program, use decomposition and both column-generation-based and problem-specific heuristics to solve it, and evaluate policies on multiple objective dimensions based on 100 bootstrapped season schedules. Our approach can allow teams to improve operationally by finding better scout schedules, to understand quantitatively the strategic trade-offs inherent in scout assignment policies, and to select the assignment policy whose strategic and operational performance best meets their needs. In the second chapter, we study the problem under uncertainty. In reality, there are always disruptions to the schedules: players are injured, scouts become unavailable, games are delayed due to bad weather, etc.
We present a minor league baseball season simulator that generates random disruptions to the scouts' schedules and uses optimization-based heuristic models to recover the disrupted schedules. We evaluate the strategic benefits of different policies for team-to-scout assignment using the simulator. Our results demonstrate that the deterministic approach is insufficient for evaluating the benefits and costs of each policy, and that a simulation approach is also much more effective at determining the value of adding an additional scout to the network. The real scouting network design instances we solve in the first two chapters have several complexities that can make them hard to study, such as idle day constraints, varying season lengths, off days for teams in the schedule, days when some teams play and others do not, etc. In the third chapter, we analyze a simplified version of the Single Scout Problem (SSP), stripping away much of the real-world complexity that complicates SSP instances. Even for this stylized, archetypal version of SSP, we find that small instances can be computationally difficult. We show by reduction from the Minimum Cost Hamiltonian Path Problem that the archetypal version of SSP is NP-complete, even without the additional complexity introduced by real scheduling and scouting operations.
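The reduction mentioned above also explains the hardness intuitively: with the scheduling details stripped away, routing a single scout through every team at minimum travel cost is a minimum-cost Hamiltonian path. A brute-force sketch on an invented cost matrix, feasible only for tiny instances, which is exactly the point:

```python
from itertools import permutations

def min_cost_scout_route(travel):
    """Stylized single-scout problem: visit every team exactly once at
    minimum total travel cost, i.e. a minimum-cost Hamiltonian path.
    Brute force over all visiting orders: O(n!) and hopeless beyond
    a handful of teams (hypothetical cost matrix)."""
    n = len(travel)
    best = float("inf")
    for order in permutations(range(n)):
        cost = sum(travel[order[i]][order[i + 1]] for i in range(n - 1))
        best = min(best, cost)
    return best

travel = [[0, 2, 9],
          [2, 0, 4],
          [9, 4, 0]]
print(min_cost_scout_route(travel))  # 6  (route 0 -> 1 -> 2)
```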
44.
Understanding the effects of different levels of product monitoring on maintenance operations: a simulation approach. Alabdulkarim, Abdullah A., January 2013.
The move towards integrating products and services has increased significantly. As a result, business models such as Product Service Systems (PSS) have been developed. PSS emphasises the sale of the use of a product rather than the sale of the product itself. In this case, product ownership lies with the manufacturers/suppliers, and customers are provided with a capable and available product for their use. In PSS, manufacturers/suppliers are penalised for any downtime of their product according to the PSS contract. This puts pressure on the service providers (maintenance teams) to ensure the availability of their products in use, and the pressure increases when the products are scattered in remote places (customer locations). Authors have urged that different product monitoring levels be applied so that service providers can monitor their products remotely and perform maintenance accordingly. They claim that by adopting these monitoring levels, product performance will increase. Their claim is based on reasoning, however, not on experimental or empirical methods. Further experimental research is therefore required to observe the effect of such monitoring levels on the complex maintenance operations system as a whole, including, for example, product location, different types of failure, labour skills and locations, travel times, spare parts inventory, etc. In the literature, monitoring levels have been classified as Reactive, Diagnostics, and Prognostics. This research aims to better understand and evaluate the complex maintenance operations of a product in use under different product monitoring strategies using a Discrete Event Simulation (DES) approach. A discussion of the suitability of DES over other techniques is provided; DES proves suitable for giving a better understanding of the effect of product monitoring levels on the wider maintenance system.
The requirements for simulating a complex maintenance operation have been identified and documented. Two approaches are applied to gather these generic requirements. The first is a literature review identifying the requirements for modelling complex maintenance operations. This is followed by interviews with academics and industrial practitioners to capture requirements not found in the literature. As a result, a generic conceptual model is assembled. Simulation modules are built in the Witness software package to represent the different product monitoring levels (Reactive, Diagnostics, and Prognostics). These modules are then linked with resources (e.g. labour, tools, and spare parts). To ensure ease of use and rapid build of such a complex maintenance system from these modules, an Excel interface is developed, named Product Monitoring Levels Simulation (PMLS). The developed PMLS tool is then demonstrated and tested for validation purposes: three industrial case studies are presented, and different experiments are carried out to better understand the effect of different product monitoring levels on complex maintenance operations. Face-to-face validation with the case companies is conducted, followed by an expert validation workshop. This work presents a novel DES approach developed to support maintenance operations decision makers in selecting the appropriate product monitoring level for their particular operation. The approach provides numerical evidence and proves that a higher product monitoring level does not always guarantee higher product availability.
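The kind of experiment such a tool supports can be illustrated with a deliberately crude model. Nothing below comes from the thesis: the rates and the assumption that a prognostic warning halves repair downtime are invented purely to show the shape of such a comparison.

```python
import random

def simulate_availability(horizon, mtbf, repair_time):
    """Compare two monitoring levels over one failure stream: reactive
    maintenance pays the full repair time per failure, while prognostic
    monitoring is assumed (invented figure) to halve downtime because the
    repair can be prepared before the failure occurs."""
    random.seed(42)   # deterministic toy run
    downtime = {"reactive": 0.0, "prognostic": 0.0}
    t = 0.0
    while t < horizon:
        t += random.expovariate(1.0 / mtbf)      # time of the next failure
        downtime["reactive"] += repair_time
        downtime["prognostic"] += repair_time / 2
    return {k: 1 - d / horizon for k, d in downtime.items()}

avail = simulate_availability(horizon=10_000, mtbf=200, repair_time=8)
print(avail["prognostic"] > avail["reactive"])  # True
```

A real study, as the abstract stresses, must also model labour, travel, and spare parts before any such conclusion holds.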
45.
A generic framework for hybrid simulation in healthcare. Chahal, Kirandeep, January 2010.
Healthcare problems are complex; they exhibit both detail and dynamic complexity. It has been argued that Discrete Event Simulation (DES), with its ability to capture detail, is ideal for problems exhibiting the former type of complexity. On the other hand, System Dynamics (SD), with its focus on feedback and nonlinear relationships, lends itself naturally to comprehending dynamic complexity. Although both modelling paradigms provide valuable insights, neither is proficient at capturing detail and dynamic complexity to the same extent. It has been argued in the literature that a hybrid approach, wherein SD and DES are integrated symbiotically, provides a more realistic picture of complex systems with fewer assumptions and less complexity. In spite of wide recognition of healthcare as a complex multi-dimensional system, no study utilising hybrid simulation has been reported. This could be attributed to the fact that, due to fundamental differences between the paradigms, mixing the methodologies is quite challenging. To overcome these challenges, a generic theoretical framework for hybrid simulation is required; however, there is presently no generic framework that provides guidance on integrating SD and DES to form hybrid models. This research attempts to provide such a framework for hybrid simulation in the healthcare domain. On the basis of knowledge induced from the literature, three requirements for the generic framework have been established. It is argued that a framework for hybrid simulation should answer Why (why hybrid simulation is required), What (what information is exchanged between the SD and DES models), and How (how the SD and DES models interact over time to exchange information) within the context of applying hybrid simulation to different problem scenarios.
To meet these requirements, a three-phase generic framework for hybrid simulation is proposed. Each phase of the framework is mapped to an established requirement and provides guidelines for addressing that requirement. The proposed framework is evaluated theoretically, based on its ability to meet these requirements, using multiple cases, and modified accordingly. It is further evaluated empirically with a single case study comprising the Accident and Emergency department of a London district general hospital. The purpose of this empirical evaluation is to identify the limitations of the framework with regard to the implementation of hybrid models. It is realised during implementation that the modified framework has certain limitations pertaining to the exchange of information between the SD and DES models; these limitations are reflected upon and addressed in the final framework. The main contribution of this thesis is the generic framework for hybrid simulation, which has been applied within a healthcare context. Through an extensive review of the existing literature on hybrid simulation, the thesis also contributes to knowledge on multi-method approaches. A further contribution is an attempt to quantify the impact of intangible benefits of information systems in terms of tangible business process improvements. It is expected that this work will encourage those engaged in simulation (e.g., researchers, practitioners, decision makers) to realise the potential of cross-fertilisation of the two simulation paradigms.
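The What and How questions above concern the information exchanged between the SD and DES models and the timing of that exchange. One possible coupling can be sketched as follows, with every equation and rate invented for illustration: an SD stock updated by Euler integration hands its demand level to a DES side, which generates discrete arrivals at that rate.

```python
import random

def sd_step(demand, inflow, outflow_rate, dt):
    """SD side: one Euler step of d(demand)/dt = inflow - outflow_rate * demand."""
    return demand + dt * (inflow - outflow_rate * demand)

def run_hybrid(days, dt=1.0):
    """Each SD step hands the updated demand level to the DES side, which
    uses it as the arrival rate for individually generated patients
    (hypothetical coupling; real frameworks exchange richer state)."""
    random.seed(1)
    demand, arrivals = 10.0, 0
    for _ in range(days):
        demand = sd_step(demand, inflow=5.0, outflow_rate=0.4, dt=dt)
        t = 0.0                     # DES side: discrete arrivals this step
        while True:
            t += random.expovariate(demand)
            if t > dt:
                break
            arrivals += 1
    return demand, arrivals

demand, arrivals = run_hybrid(30)
print(round(demand, 2))  # 12.5 (the stock's equilibrium, 5.0 / 0.4)
```

The choice of exchange interval `dt` is itself one of the How questions the framework addresses.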
46.
Simulation modelling of distributed-shared memory multiprocessors. Marurngsith, Worawan, January 2006.
Distributed shared memory (DSM) systems have been recognised as a compelling platform for parallel computing due to their programming advantages and scalability. DSM systems allow applications to access data in a logically shared address space by abstracting away the distinction of physical memory location. Because the location of data is transparent, the sources of overhead caused by accessing distant memories are difficult to analyse. This memory locality problem has been identified as crucial to DSM performance. Many researchers have investigated the problem using simulation as a tool for conducting experiments, resulting in the progressive evolution of DSM systems. Nevertheless, both the diversity of architectural configurations and the rapid advance of DSM implementations impose two constraints on simulation model design: the simulation framework limits model extensibility, and the lack of verification applicability during a simulation run delays the verification process. This thesis studies simulation modelling techniques for memory locality analysis of various DSM systems implemented on top of a cluster of symmetric multiprocessors. The thesis presents a simulation technique to promote model extensibility and proposes a technique for verification applicability, called Specification-based Parameter Model Interaction (SPMI). The proposed techniques have been implemented in a new interpretation-driven simulator, DSiMCLUSTER, built on top of a discrete event simulation (DES) engine known as HASE. Experiments have been conducted to determine which factors are most influential on the degree of locality and to determine the possibility of maximising the stability of performance. DSiMCLUSTER has been validated against a SunFire 15K server and has achieved similar cache miss results, with an average difference of ±6% and a worst case below 15%.
These results confirm that the techniques used in developing DSiMCLUSTER contribute to achieving both (a) a highly extensible simulation framework that can keep up with the ongoing innovation of DSM architectures, and (b) verification applicability, resulting in an efficient framework for memory analysis experiments on DSM architectures.
47.
Simulation modeling for the impact of triage liaison physician on emergency department to reduce overcrowding. Yang, Jie, 03 January 2017.
Emergency department (ED) overcrowding has been a common complaint in Emergency Medicine in Canada for many years. Its adverse effects, such as prolonged waiting times, cause patient dissatisfaction and compromise safety. Previous studies indicate that adding a physician in triage (PIT) can increase accuracy and efficiency in the initial process of patient evaluation. However, the scientific evidence of the PIT impact on the ED is far from sufficient to justify widespread implementation. This research searches for solutions using PIT to identify areas of improvement in ED patient flow, based on a validated discrete-event simulation (DES) model. As an efficient decision-making tool, the DES model also helps to develop an understanding of current ED performance and to quantitatively test various design alternatives for ED operations.
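The effect of a PIT-style extra physician can be illustrated with a stylized multi-server queue. The rates are invented and the structure is far simpler than the validated model in the study; the sketch only shows why adding a server at triage reduces mean waiting.

```python
import heapq
import random

def ed_mean_wait(n_patients, arrival_rate, service_rate, n_physicians):
    """Mean time patients wait for a free physician in a simple
    multi-server queue (invented rates, single shared waiting line)."""
    random.seed(7)
    free_at = [0.0] * n_physicians      # when each physician next becomes free
    t, total_wait = 0.0, 0.0
    for _ in range(n_patients):
        t += random.expovariate(arrival_rate)        # next patient arrives
        start = max(t, heapq.heappop(free_at))       # earliest-free physician
        total_wait += start - t
        heapq.heappush(free_at, start + random.expovariate(service_rate))
    return total_wait / n_patients

base = ed_mean_wait(20_000, arrival_rate=1.8, service_rate=1.0, n_physicians=2)
with_pit = ed_mean_wait(20_000, arrival_rate=1.8, service_rate=1.0, n_physicians=3)
print(with_pit < base)  # True
```

At utilisation 0.9 (two physicians) the queue is near saturation, so the third server cuts waiting sharply; the real question the thesis asks is whether that gain survives in a full ED patient-flow model.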
48.
Investigation of the workforce effect of an assembly line using multi-objective optimization. López De La Cova Trujillo, Miguel Angel; Bertilsson, Niklas, January 2016.
The aim of industrial production has changed since the mass production of the early 20th century; today, production flexibility determines manufacturing companies' course of action. In this sense, Volvo Group Trucks Operations is interested in meeting customer demand in their assembly lines by adjusting manpower. This investigation therefore analyzes the effect of manning on the main final assembly line for thirteen-liter heavy-duty diesel engines at Volvo Group Trucks Operations in Skövde by means of discrete-event simulation. The project presents a simulation model of the assembly line. Building the model required data: on the one hand, qualitative data were collected to improve knowledge in the fields related to the project topic and to fill gaps in information at certain points of the project; on the other hand, programming the simulation model required quantitative data. Once the model was completed, results were obtained through simulation-based optimization. This optimization process tested 50,000 different workforce scenarios to find the most efficient solutions for three different sequences. Among all results, the most interesting one for Volvo is the scenario that renders 80% of today's throughput with the minimum number of workers. Consequently, as a case study, a bottleneck analysis and a worker performance analysis were performed for this scenario. Finally, a flexible and fully functional model that delivers the desired results was developed. These results provide a comparison among different manning scenarios, with throughput as the main measure of the main final assembly line's performance. Analysis of the results revealed the system's output behavior, which allows predicting optimal system output for a given number of operators.
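The workforce question above can be sketched at a much smaller scale. The task times and the naive round-robin staffing rule below are invented, not Volvo's data or the actual optimizer: throughput of a serial line is limited by its slowest station, and one can search for the smallest crew that still reaches 80% of fully staffed throughput.

```python
def line_throughput(stations, workers):
    """Invented serial line: the slowest station sets the cycle time, and a
    station with s workers is assumed to run s times faster."""
    base_times = [4.0, 3.0, 5.0, 2.0][:stations]
    # naive round-robin assignment of workers to stations
    staff = [workers // stations + (1 if i < workers % stations else 0)
             for i in range(stations)]
    if min(staff) == 0:                 # an unmanned station stops the line
        return 0.0
    return 1.0 / max(t / s for t, s in zip(base_times, staff))

full = line_throughput(4, 12)
# smallest crew that still delivers 80% of the fully staffed throughput
minimal = min(w for w in range(1, 13) if line_throughput(4, w) >= 0.8 * full)
print(minimal)  # 11
```

A real study replaces both the line model (with DES) and the staffing rule (with simulation-based optimization over assignments), but the shape of the question is the same.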
49.
A distributed simulation methodology for large-scale hybrid modelling and simulation of emergency medical services. Anagnostou, Anastasia, January 2014.
Healthcare systems are traditionally characterised by complexity and heterogeneity. With the continuous increase in size and the shrinkage of available resources, the healthcare sector faces the challenge of delivering high quality services with fewer resources. Healthcare organisations cannot be seen in isolation, since the services of one affect the performance of others. Efficient management and forward planning, not only locally but across the whole system, could help the healthcare sector overcome these challenges. An example of closely interwoven organisations within the healthcare sector is the emergency medical services (EMS). An EMS operates in a region and usually consists of one ambulance service and the accident and emergency (A&E) departments available within the coverage area; it provides, mainly, pre-hospital treatment and transport to the appropriate A&E units. The life-critical nature of EMS demands continuous systems improvement practices. Modelling and Simulation (M&S) has been used to analyse either ambulance services or A&E departments, but the size and complexity of EMS systems render conventional M&S techniques inadequate for modelling the system as a whole. This research adopts a distributed simulation approach to model all the EMS components as individual, composable simulations that can run standalone as well as federate in a distributed simulation (DS) model. Moreover, the hybrid approach connects agent-based simulation (ABS) and discrete event simulation (DES) models to accommodate the heterogeneity of the EMS components. The proposed FIELDS (Framework for Integrated EMS Large-scale Distributed Simulation) supports the re-use of existing, heterogeneous models that can be linked with the High Level Architecture (HLA) protocol for distributed simulation in order to compose large-scale simulation models.
Based on FIELDS, a prototype ABS-DES distributed simulation model of the London EMS was developed. Experiments were conducted with the model, and the system was tested in terms of performance and scalability measures to assess the feasibility of the proposed approach. The results indicate that it is feasible to develop hybrid DS models of EMS that enable holistic analysis of the system and support model re-use. The main contributions of this thesis are a distributed simulation methodology derived in the course of conducting this project, the FIELDS framework for hybrid EMS distributed simulation studies that supports re-use of existing simulation models, and a prototype distributed simulation model that can potentially be used as a tool for EMS analysis and improvement.
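The HLA time-management idea that keeps such federates causally consistent can be sketched without any HLA machinery. This toy coordinator (invented names, not the RTI API) grants each cycle's time advance only up to the smallest requested time:

```python
class Federate:
    """Toy federate: advances its local clock only when the coordinator
    grants it (a sketch of HLA-style time management, not the RTI API)."""

    def __init__(self, name, step):
        self.name, self.step, self.time = name, step, 0.0

    def request_time(self):
        return self.time + self.step    # next point it wants to reach

def coordinate(federates, until):
    """Conservative coordinator: each cycle grants only the smallest
    requested time, so no federate outruns a message it might still receive."""
    trace = []
    while min(f.time for f in federates) < until:
        grant = min(f.request_time() for f in federates)
        for f in federates:
            if f.request_time() == grant:
                f.time = grant
                trace.append((f.name, grant))
    return trace

# an ABS ambulance federate on a 2.0 step, a DES A&E federate on a 3.0 step
trace = coordinate([Federate("ambulance-ABS", 2.0), Federate("A&E-DES", 3.0)],
                   until=6.0)
print(trace)
```

The trace shows the two federates interleaving in nondecreasing time order, which is the causal-consistency guarantee HLA's time-advance services provide at scale.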
50.
IMPROVING PATIENTS' EXPERIENCE IN AN EMERGENCY DEPARTMENT USING SYSTEMS ENGINEERING APPROACH. Hosein Khazaei (7037723), 14 August 2019.
The healthcare industry in the United States of America faces a big paradox. Although the US is a leader in medical devices, medical practice, and medical research, there is not enough satisfaction with, or quality in, the performance of US healthcare operations. Despite the big investments and budgets associated with US healthcare, there are big threats on the operational side that reduce the quality of care. In this research study, a step-by-step Systems Engineering approach is applied to improve the healthcare delivery process in the Emergency Department (ED) of a hospital located in Indianapolis, Indiana. Different types of systems engineering tools and techniques are used to improve the quality of care and patient satisfaction in the ED of Eskenazi hospital. A simulation model helps to develop a better understanding of the ED process and its bottlenecks. The simulation model is verified and validated using techniques such as applying extreme and moderate conditions and comparing model results with historical data. Four different what-if scenarios are proposed and tested to find possible length-of-stay (LOS) improvements. Those scenarios are tested at both the regular and an increased patient arrival rate. The optimal what-if scenario can reduce the LOS by 37 minutes compared to the current ED setting. With the increased patient arrival rate, patients may stay in the ED for up to 6 hours; with the proposed ED setting, however, patients spend only an additional 106 minutes compared to the regular arrival rate.