1 |
Simulation of Sawmill Yard Operations Using Software Agents. Madipally, Sunil Veer Kumar. January 2011.
Bergkvist Insjön AB is a sawmill yard capable of producing 350,000 cubic metres of timber per year, which demands substantial internal resources. Sawmill operations can be classified as unloading, sorting, storage, and production of timber. Trucks arrive at random and must be unloaded and released as early as possible, since queued trucks are a problem for both the sawmill and the truck owners. The yard operates two log stackers that perform several tasks: transporting logs from the trucks to the measurement station, where the logs are sorted into classes and dropped into pockets; from the pockets to the sorted timber yard, where they are stored; and finally from there to the sawmill for processing. The main issue to be addressed is the line of trucks waiting to be unloaded, and, given the large production volume, handling of resources is a top priority; the key challenge is to unload trucks promptly while optimizing the use of internal resources. To address this problem I experimented with different ways of using the internal resources by designing three cases. In case 1, both log stackers serve the sawmill and the measurement station, with the objective of keeping both working at all times. In case 2, the work is divided between the two log stackers: one serves the sawmill and pocket_control, while the second serves the measurement station and the trucks. In case 3, a single log stacker serves all the agents; this case was designed to reduce production cost. Because such experiments cannot be performed in real time owing to operational cost, simulation was used. Preliminary investigation of the simulation results suggests that case 2 is the best option, as it reduced truck waiting time considerably compared with the other cases and showed a 50% improvement in the use of internal resources.
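For flavour, here is a minimal, hypothetical discrete-event sketch of the kind of model involved (not the thesis's actual agent model): trucks queue for log stackers, assuming exponential inter-arrival times and a fixed unload time. All parameter values are invented for illustration.

```python
import random

random.seed(1)

def simulate(num_stackers, n_trucks=200, mean_arrival=20.0, unload_time=15.0):
    """Toy multi-server queue: trucks wait for the first free log stacker.
    Times are in minutes; all values are illustrative, not from the thesis."""
    arrivals, t = [], 0.0
    for _ in range(n_trucks):
        t += random.expovariate(1.0 / mean_arrival)  # random truck arrivals
        arrivals.append(t)
    stacker_free = [0.0] * num_stackers  # time each stacker next becomes idle
    waits = []
    for arrive in arrivals:
        s = min(range(num_stackers), key=lambda k: stacker_free[k])
        start = max(arrive, stacker_free[s])  # wait if every stacker is busy
        waits.append(start - arrive)
        stacker_free[s] = start + unload_time
    return sum(waits) / len(waits)

for k in (1, 2):  # roughly case 3 (one stacker) versus cases 1-2 (two stackers)
    print(f"{k} stacker(s): mean truck wait = {simulate(k):.1f} min")
```

A task split such as case 2 would refine this by giving each stacker its own dedicated queue of work rather than a shared pool.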
|
2 |
Using Actors to Implement Sequential Simulations. April 2015.
This thesis investigates an approach based on the Actors paradigm for implementing a discrete-event simulation system and compares the results with more traditional approaches. The goal of this work is to determine whether using Actors for sequential programming is viable. If Actors are viable for this type of programming, then it follows that they would be usable for general programming. One potential advantage of using Actors instead of traditional paradigms for general programming would be the elimination of the distinction between designing for a sequential environment and designing for a concurrent/distributed one. Using Actors for general programming may also allow a single implementation to be deployed on both single-core and multi-core systems.

Most existing discussions of the Actors model focus on its strengths in distributed environments and its ability to scale with the available computing resources. The system chosen for implementation is intentionally sequential, to allow examination of the behaviour of existing Actors implementations where managing concurrency complexity is not the primary task. Multiple implementations of the simulation system were built using different languages (C++, Erlang, and Java) and different paradigms, including traditional ones and Actors. These implementations were compared quantitatively on execution time, memory usage, and code complexity.

The analysis of these comparisons indicates that for certain existing development environments, Erlang/OTP, following the Actors paradigm, produces an implementation comparable to or better than those of traditional paradigms. Further research is suggested to solidify the validity of the results presented here and to extend their applicability.
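To make the contrast concrete, here is a minimal sketch of the two styles in one program: a conventional sequential event loop acting as the dispatcher, and an actor-like object whose "messages" are timestamped events drained one at a time. This is an illustrative Python reconstruction under those assumptions, not the thesis's C++/Erlang/Java code.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    time: float
    action: callable = field(compare=False)  # heap orders by time only

class Scheduler:
    """Sequential event loop standing in for an actor runtime's dispatcher."""
    def __init__(self):
        self.queue, self.now = [], 0.0
    def post(self, ev):
        heapq.heappush(self.queue, ev)
    def run(self, until):
        while self.queue and self.queue[0].time <= until:
            ev = heapq.heappop(self.queue)
            self.now = ev.time       # virtual time advances event by event
            ev.action()

class Actor:
    """A minimal actor: sends itself timestamped messages via the scheduler."""
    def __init__(self, scheduler):
        self.scheduler = scheduler
    def send(self, delay, action):
        self.scheduler.post(Event(self.scheduler.now + delay, action))

sched = Scheduler()
ping = Actor(sched)
def tick():
    print(f"t={sched.now:.0f}: tick")
    if sched.now < 30:
        ping.send(10, tick)          # message to self, 10 time units later
ping.send(0, tick)
sched.run(until=100)
```

Because the mailbox is drained strictly in timestamp order, the same actor code could in principle be dispatched concurrently without changing the model, which is the design symmetry the thesis examines.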
|
3 |
OPTIMIZATION AND SIMULATION OF JUST-IN-TIME SUPPLY PICKUP AND DELIVERY SYSTEMS. Chuah, Keng Hoo. 01 January 2004.
A just-in-time supply pickup and delivery system (JSS) manages the logistic operations between a manufacturing plant and its suppliers by controlling the sequence, timing, and frequency of container pickups and parts deliveries, thereby coordinating internal conveyance, external conveyance, and the operation of cross-docking facilities. The system is important to just-in-time production lines that maintain small inventories. This research studies the logistics, supply chain, and production control of JSS. First, a new meta-heuristic approach (taboo search) is developed to solve a general frequency routing (GFR) problem formulated in this dissertation with five types of constraints: flow, space, load, time, and heijunka. A formulation for cross-dock routing (CDR) is also created and solved. Second, seven issues concerning the structure of JSS systems that employ the previously studied common frequency routing (CFR) problem (Chuah and Yingling, in press) are explored to understand their impacts on the operational costs of the system. Finally, a discrete-event simulation model is developed to study JSS under different types of demand variation and their impacts on the stability of inventory levels in the system. The results show that GFR routes at high frequencies do not have common frequencies in the solution; there are some common frequencies at medium frequencies and none at low frequencies, where the problem effectively reduces to a vehicle routing problem (VRP) with time windows. CDR is an extension of VRP-type problems that can be solved quickly with meta-heuristic approaches. GFR, CDR, and CFR are practical routing strategies for JSS with taboo search or other meta-heuristics as solvers. Comparing GFR and CFR solutions to the same problems shows that the impacts of CFR restrictions on cost are minimal, and in many cases so small as to make the simpler CFR routes desirable. The studies of JSS structural features on operating costs under the assumption of CFR routes yielded interesting results. First, when suppliers are clustered, routes become more efficient at mid-level, but not high or low, frequencies. Second, cost increases with the number of suppliers. Third, negotiating broad time windows with suppliers is important for cost control in JSS systems. Fourth, an increase or decrease in production volumes uniformly shifts the solution's cost-versus-frequency curve. Fifth, increased vehicle capacity is important in reducing costs at low and medium frequencies but far less important at high frequencies. Lastly, load distributions among the suppliers are not important determinants of transportation costs as long as the average loads remain the same. Finally, a one-supplier, one-part-source simulation model shows that the system's inventory level tends to be sticky to the reordering level. JSS is very stable, but it requires reliable transportation to perform well. The impact of changes in kanban levels (e.g., as might occur between route planning intervals when production rates are adjusted) is relatively long term, with dynamic after-effects on inventory levels that take a long time to dissipate. A gradual change in kanban levels may be introduced prior to the changeover to counter this effect.
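As a flavour of the solver, here is a generic tabu ("taboo") search over pairwise-swap moves on a single route, with a recency-based tabu list and an aspiration criterion. The GFR/CDR formulations add flow, space, load, time, and heijunka constraints that this toy deliberately omits; the distance matrix and parameters are invented.

```python
import random

random.seed(0)

def tabu_search(dist, iters=500, tenure=10):
    """Tabu search over visit orders with pairwise-swap moves.
    dist[i][j] is the travel cost between suppliers i and j."""
    n = len(dist)
    cost = lambda r: sum(dist[r[i]][r[(i + 1) % n]] for i in range(n))
    route = list(range(n))
    best, best_cost = route[:], cost(route)
    tabu = {}  # move -> iteration until which it stays forbidden
    for it in range(iters):
        candidates = []
        for i in range(n - 1):
            for j in range(i + 1, n):
                r = route[:]
                r[i], r[j] = r[j], r[i]
                c = cost(r)
                # aspiration: allow a tabu move if it beats the global best
                if tabu.get((i, j), -1) < it or c < best_cost:
                    candidates.append((c, r, (i, j)))
        c, route, move = min(candidates)   # best admissible neighbour,
        tabu[move] = it + tenure           # even if it worsens the cost
        if c < best_cost:
            best, best_cost = route[:], c
    return best, best_cost

pts = [(random.random(), random.random()) for _ in range(12)]
dist = [[((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5 for b in pts] for a in pts]
print(tabu_search(dist))
```

Accepting the best non-tabu neighbour even when it worsens the cost is what lets the search climb out of local optima, which is the property that makes it suitable for constrained routing problems like GFR.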
|
4 |
Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations. De Grande, Robson E. 26 July 2012.
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Because of the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the importance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed that offer sub-optimal balancing solutions, but they are limited to certain simulation aspects, specific to particular applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, to enable the development of such balancing schemes, a migration technique is employed to perform reliable, low-latency simulation load transfers. Then, a centralized balancing scheme is designed; it employs local and cluster monitoring mechanisms to observe distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load that minimizes imbalances. To overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of, and the reaction to, load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The balancing systems developed here successfully improved the use of shared resources and increased distributed simulations' performance.
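A drastically simplified sketch of a centralized reallocation policy of this general kind follows (invented numbers; the thesis's scheme monitors clusters hierarchically and accounts for communication as well as computation, none of which this toy does):

```python
def rebalance(loads, migration_cost=0.05, threshold=0.1, max_moves=100):
    """Greedy central policy: shift load from the most to the least loaded
    node until the spread is within a threshold. A small per-move cost is
    charged to the receiver to stand in for migration latency."""
    loads = dict(loads)
    moves = []
    for _ in range(max_moves):
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        gap = loads[hi] - loads[lo]
        if gap <= threshold:
            break  # imbalance is tolerable; stop migrating
        delta = gap / 2
        loads[hi] -= delta
        loads[lo] += delta + migration_cost  # the transfer itself costs work
        moves.append((hi, lo, round(delta, 3)))
    return loads, moves

clusters = {"A": 0.90, "B": 0.20, "C": 0.55, "D": 0.35}  # CPU utilizations
balanced, plan = rebalance(clusters)
print(plan)
print({k: round(v, 2) for k, v in balanced.items()})
```

The drawbacks named in the abstract are visible even here: the central `rebalance` call must see every node's load (global synchronization) and is itself a single point of failure, which motivates the distributed redistribution algorithm.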
|
5 |
Large Scale Computer Investigations of Non-Equilibrium Surface Growth for Surfaces from Parallel Discrete Event Simulations. Verma, Poonam Santosh. 08 May 2004.
The asymptotic scaling properties of conservative algorithms for parallel discrete-event simulations (e.g., spatially distributed parallel simulations of dynamic Monte Carlo for spin systems) of one-dimensional systems of size $L$ are studied, for the particular cases of one or two elements assigned to each processor element. The previously studied case of one element per processor is reviewed, and the two-elements-per-processor case is presented. The key concept is a simulated time horizon, an evolving non-equilibrium surface specific to the particular algorithm. It is shown that the flat-substrate initial condition is responsible for the existence of an initial non-scaling regime. Various methods of dealing with this non-scaling regime are documented, both the final successful method and unsuccessful attempts. The width of this time horizon relates to the desynchronization of the system of processors. Universal properties of the conservative time horizon are derived by constructing the distribution of the interface width at saturation.
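The underlying model is easy to state: each processing element (PE) carries a local virtual time, and under the conservative rule a PE may advance only when it is not ahead of its neighbours. A small sketch under those assumptions follows, with a sequential sweep standing in for the parallel update and a flat-substrate start; it tracks the width of the resulting time horizon as it grows.

```python
import random

random.seed(2)

def width(tau):
    """RMS width of the virtual time surface around its mean height."""
    m = sum(tau) / len(tau)
    return (sum((t - m) ** 2 for t in tau) / len(tau)) ** 0.5

L = 1000          # one lattice site per processing element
tau = [0.0] * L   # local virtual times: the flat-substrate initial condition
for step in range(1, 1001):
    for i in range(L):
        left, right = tau[(i - 1) % L], tau[(i + 1) % L]
        # conservative rule: PE i may advance only if it is not ahead
        # of either neighbour, so no causality violation is possible
        if tau[i] <= left and tau[i] <= right:
            tau[i] += random.expovariate(1.0)  # random time increment
    if step in (10, 100, 1000):
        print(f"t={step}: time-horizon width w = {width(tau):.2f}")
```

The growing value of `w` is exactly the desynchronization the abstract refers to: the wider the horizon, the further apart the PEs' local times, and the larger the state that must be buffered.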
|
6 |
Virtual Time-Aware Virtual Machine Systems. Yoginath, Srikanth B. 27 August 2014.
Discrete dynamic system models that track, maintain, utilize, and evolve virtual time are referred to as virtual time systems (VTS). Realizing VTS with virtual machine (VM) technology offers several benefits, including fidelity, scalability, interoperability, fault tolerance, and load balancing. VTS and VMs combine in two ways: (a) VMs within VTS, and (b) VTS over VMs. The former is prevalent in high-fidelity cyber infrastructure simulations and cyber-physical system simulations, wherein VMs form a crucial component of the VTS. The latter appears in popular Cloud computing services, where VMs are offered as computing commodities and the VTS uses them as parallel execution platforms.

Prior to our work presented here, the simulation community using VMs within VTS (specifically, cyber infrastructure simulations) had little awareness of the existence of a fundamental virtual time-ordering problem. The correctness problem went largely unnoticed and unaddressed because the effects of fair-share multiplexing of VMs on the virtual time evolution of VMs within a VTS were unrecognized. The dissertation research reported here demonstrated the latent incorrectness of existing methods, defined key correctness benchmarks, quantitatively measured the incorrectness, proposed and implemented novel algorithms to overcome it, and optimized the solutions to execute without a performance penalty. In fact, our correctness-enforcing design yields better runtime performance than the traditional (incorrect) methods.

Similarly, VTS execution over VM platforms such as Cloud computing services incurs large performance degradation, which was not known until our research uncovered the fundamental mismatch between the scheduling needs of VTS execution and those of traditional parallel workloads: VTS follows virtual-time-ordered execution, whereas conventional VM execution follows a fair-share policy. Our research quantitatively uncovered the exact cause of the poor performance of VTS on VM platforms. Consequently, we designed a novel VTS-aware hypervisor scheduler and showed significant performance gains in VTS execution over VM platforms; the proposed virtual time-aware execution methodology relieves the degradation and provides over an order of magnitude faster execution than traditional virtual time-unaware execution.
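As a cartoon of the policy mismatch (emphatically not the dissertation's hypervisor scheduler), consider CPU slices handed to VMs either in fair-share round-robin order or to the VM whose logical process has the least virtual time. Under a conservative rule, slices given to a VM that is already ahead of the slowest peer are wasted; the lookahead value and workload below are invented.

```python
import random

random.seed(3)

def run(pick, n_vms=8, slices=20000):
    """Each CPU slice goes to one VM; a conservative LP inside a VM can
    only commit work if that VM is within lookahead of the slowest peer."""
    vt = [0.0] * n_vms          # virtual time of the LP in each VM
    useful = 0
    for s in range(slices):
        i = pick(s, vt)
        if vt[i] <= min(vt) + 1.0:          # within lookahead: safe to run
            vt[i] += random.expovariate(1.0)
            useful += 1                     # this slice advanced the model
        # else: the slice is wasted busy-waiting on stragglers
    return useful / slices

fair_share = lambda s, vt: s % len(vt)        # credit/round-robin style
lvt_first  = lambda s, vt: vt.index(min(vt))  # virtual time-aware choice
print(f"fair-share utilization: {run(fair_share):.2f}")
print(f"LVT-first utilization : {run(lvt_first):.2f}")
```

The LVT-first policy never wastes a slice in this toy, because the VM chosen is by construction never ahead of the slowest peer; that is the intuition behind scheduling VMs in virtual-time order.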
|
7 |
Non-Equilibrium Surface Growth for Competitive Growth Models and Applications to Conservative Parallel Discrete Event Simulations. Verma, Poonam Santosh. 15 December 2007.
Non-equilibrium surface growth for competitive growth models in (1+1) dimensions is studied, particularly mixtures of random deposition (RD) with a correlated growth process that occurs with probability $p$. The composite mixtures are found to be in the universality class of the correlated growth process, and a nonuniversal exponent $\delta$ is identified in the scaling in $p$. The only effects of the RD admixture are dilations of the time and height scales, which result in a slowdown of the dynamics of building up the correlations. The bulk morphology is taken into account and is reflected in the surface roughening as well as the scaling behavior. It is found that the continuum equations and scaling laws for RD added, in particular, to Kardar-Parisi-Zhang (KPZ) processes are partly determined by the underlying bulk structures. Non-equilibrium surface growth analysis is also applied to a study of static and dynamic load balancing for a conservative update algorithm for Parallel Discrete Event Simulations (PDES); this load balancing is governed by the KPZ equation. For uneven load distributions in conservative PDES simulations, the simulated (virtual) time horizon (VTH) per processing element (PE) and the simulated time horizon per volume element $N_{v}$ are used to study the PEs' progress in terms of utilization. The width of these time horizons relates to the desynchronization of the system of processors and to the memory requirements of the PEs. The utilization increases when dynamic, rather than static, load balancing is performed.
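A toy version of such a competitive model is easy to write down: with probability $p$ a particle follows a correlated, KPZ-class rule (ballistic deposition is used below as a stand-in), and otherwise plain random deposition. This sketch only illustrates the setup and its parameters are invented; it is not the code behind the dissertation's results.

```python
import random

random.seed(4)

def grow(p, L=200, deposits=200 * 400):
    """Competitive growth in (1+1)D: with probability p deposit
    ballistically (correlated, KPZ class); otherwise plain random
    deposition (uncorrelated). Returns the surface width."""
    h = [0] * L
    for _ in range(deposits):
        i = random.randrange(L)
        if random.random() < p:
            # ballistic: stick at the highest of the column and neighbours
            h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
        else:
            h[i] += 1  # random deposition: columns grow independently
    m = sum(h) / L
    return (sum((x - m) ** 2 for x in h) / L) ** 0.5

for p in (0.0, 0.1, 1.0):  # pure RD never saturates; any p > 0 eventually does
    print(f"p={p}: width w = {grow(p):.1f}")
```

Any nonzero admixture of the correlated rule eventually builds lateral correlations across the substrate, which is why the composite belongs to the correlated process's universality class, with $p$ only rescaling time and height.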
|