41

SUPERVISORY CONTROL AND FAILURE DIAGNOSIS OF DISCRETE EVENT SYSTEMS: A TEMPORAL LOGIC APPROACH

Jiang, Shengbing, 01 January 2002
Discrete event systems (DESs) are systems which involve quantities that take a discrete set of values, called states, and which evolve according to the occurrence of certain discrete qualitative changes, called events. Examples of DESs include many man-made systems such as computer and communication networks, robotics and manufacturing systems, computer programs, and automated traffic systems. Supervisory control and failure diagnosis are two important problems in the study of DESs. This dissertation presents a temporal logic approach to the control and failure diagnosis of DESs. For the control of DESs, the full branching-time temporal logic CTL* is used to express control specifications. The control problem of DESs in the temporal logic setting is formulated, and the controllability of DESs is defined. By encoding the system with a CTL formula, the control problem of CTL* is reduced to the decision problem of CTL*. It is further shown that the control problem of CTL* (resp. CTL, computation tree logic) is complete for deterministic double (resp. single) exponential time. A sound and complete supervisor synthesis algorithm for the control of CTL* is provided. The special cases of control for CTL and linear-time temporal logic (LTL) are also studied, and for these, algorithms of better complexity are provided. For the failure diagnosis of DESs, LTL is used to express fault specifications. The failure diagnosis problem of DESs in the temporal logic setting is formulated, and the diagnosability of DESs is defined. The problem of testing diagnosability is reduced to that of model checking, and an algorithm for testing diagnosability and synthesizing a diagnoser is obtained. The algorithm has polynomial complexity in the number of system states and the number of fault specifications. For the diagnosis of repeated failures in DESs, different notions of repeated-failure diagnosability (K-diagnosability, [1,K]-diagnosability, and [1,1]-diagnosability) are introduced. Polynomial algorithms for checking these various notions of repeated-failure diagnosability are given, and a procedure of polynomial complexity for the on-line diagnosis of repeated failures is also presented.
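To make the diagnoser-synthesis idea concrete, here is a minimal Python sketch of the classical state-based construction: a subset construction over (state, fault-label) pairs, where each diagnoser state collects every plant state consistent with the observations so far. The dissertation's contribution is to generalize fault specifications to LTL and reduce the diagnosability test to model checking; the plant, events, and fault below are hypothetical toy inputs, not taken from the dissertation.

```python
from collections import deque

# Hypothetical plant: state -> [(event, next_state)]; 'f' is an
# unobservable fault event. All of this is illustrative only.
TRANS = {
    0: [('f', 1), ('a', 2)],   # the fault may occur silently in state 0
    1: [('c', 3)],             # after the fault, only 'c' can be observed
    2: [('a', 2)],
    3: [('c', 3)],
}
UNOBSERVABLE = {'f'}
FAULTS = {'f'}

def unobs_reach(pairs):
    """Close a set of (state, label) pairs under unobservable moves,
    flipping the label to 'F' once a fault event fires."""
    seen, stack = set(pairs), list(pairs)
    while stack:
        x, lab = stack.pop()
        for e, y in TRANS.get(x, []):
            if e in UNOBSERVABLE:
                nxt = (y, 'F' if (e in FAULTS or lab == 'F') else lab)
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return frozenset(seen)

def build_diagnoser(init_state=0):
    """Subset construction: a diagnoser state is the set of
    (plant state, fault label) pairs consistent with observations."""
    init = unobs_reach({(init_state, 'N')})
    diag, queue = {}, deque([init])
    while queue:
        q = queue.popleft()
        if q in diag:
            continue
        succ = {}
        for x, lab in q:
            for e, y in TRANS.get(x, []):
                if e not in UNOBSERVABLE:
                    succ.setdefault(e, set()).add((y, lab))
        diag[q] = {e: unobs_reach(s) for e, s in succ.items()}
        queue.extend(diag[q].values())
    return init, diag

init, diag = build_diagnoser()
for q in diag:
    labels = {lab for _, lab in q}
    verdict = 'uncertain' if len(labels) > 1 else labels.pop()
    print(sorted(q), '->', verdict)
```

On this toy plant the initial diagnoser state is uncertain (labels N and F coexist), but the first observable event ('a' or 'c') resolves it, so the fault is diagnosable after one observation.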
42

MDRIP: A Hybrid Approach to Parallelisation of Discrete Event Simulation

Chao, Daphne (Yu Fen), January 2006
The research project reported in this thesis considers Multiple Distributed Replications in Parallel (MDRIP), a hybrid approach to the parallelisation of quantitative stochastic discrete-event simulation. Parallel Discrete-Event Simulation (PDES) generally covers distributed simulation or simulation with replicated trials. Distributed simulation requires model partitioning and synchronisation among submodels. Simulation with replicated trials can be executed on-line by applying Multiple Replications in Parallel (MRIP). MDRIP has been proposed for overcoming problems related to the large size and complexity of simulated models, as well as the problem of controlling the accuracy of the final simulation results. A survey of PDES investigates several primary issues directly related to the parallelisation of DES, a secondary issue related to implementation efficiency, and statistical analysis as a supporting issue; the AKAROA2 package is an implementation that makes this statistical support effortless. Existing solutions proposed for PDES have focused exclusively on collecting output data during simulation and analysing these data once the simulation is finished. Such off-line statistical analysis of output data offers no control of the statistical errors of the final estimates. On-line control of statistical errors during simulation has been successfully implemented in AKAROA2, an automated controller of output data analysis for simulations executed in MRIP. However, AKAROA2 cannot be applied directly to distributed simulation. This thesis reports the results of a research project aimed at employing AKAROA2 to launch multiple replications of distributed simulation models and to control, on-line and sequentially, the statistical errors associated with a distributed performance measure, i.e. a performance measure which depends on output data generated by a number of submodels of a distributed simulation. We report the changes required in the architecture of AKAROA2 to make MDRIP possible; a new MDRIP-related component of AKAROA2, a distributed simulation engine (the mdrip engine), is introduced. Stochastic simulation in its MDRIP version, as implemented in AKAROA2, has been tested in a number of simulation scenarios. We discuss two specific simulation models employed in our tests: (i) a model consisting of independent queues, and (ii) a queueing network consisting of a tandem connection of queueing systems. In the first case, we look at the correctness of the ordering of messages arriving from the distributed submodels. In the second case, we look at the correctness of output data analysis when the analysed performance measures require data from all submodels of a given (distributed) simulation model. Our tests confirm the correctness of our mdrip engine design in the cases considered, i.e. in models in which causality errors do not occur. However, we argue that the same design principles should be applicable to distributed simulation models with (potential) causality errors.
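To illustrate the MRIP side of this design, the sketch below runs independent replications and applies a sequential stopping rule: keep adding replications until the relative half-width of the confidence interval for the estimated mean falls below a target precision. AKAROA2's actual machinery is considerably more careful (automated analysis of correlated output, transient-phase handling), and in MRIP proper the replications run concurrently on separate machines; the M/M/1 replication model, parameters, and stopping rule here are illustrative assumptions.

```python
import math, random, statistics

def replication(seed, n_obs=200):
    """One simulation replication: mean waiting time in an M/M/1 queue,
    a toy stand-in for one output stream of a (distributed) submodel."""
    rng = random.Random(seed)
    lam, mu = 0.8, 1.0
    t_arr = t_dep = 0.0
    waits = []
    for _ in range(n_obs):
        t_arr += rng.expovariate(lam)
        start = max(t_arr, t_dep)          # wait behind the previous customer
        t_dep = start + rng.expovariate(mu)
        waits.append(start - t_arr)
    return statistics.fmean(waits)

def mrip_control(precision=0.05, z=1.96, min_reps=10, max_reps=10000):
    """Sequential stopping: add replications until the relative CI
    half-width for the across-replication mean is below `precision`."""
    means = []
    for seed in range(max_reps):
        means.append(replication(seed))
        if len(means) >= min_reps:
            m = statistics.fmean(means)
            half = z * statistics.stdev(means) / math.sqrt(len(means))
            if half / abs(m) < precision:
                return m, half, len(means)
    return m, half, len(means)

m, half, n = mrip_control()
print(f"mean wait = {m:.3f} +/- {half:.3f} after {n} replications")
```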
43

Sequential Analysis of Quantiles and Probability Distributions by Replicated Simulations

Eickhoff, Mirko, January 2007
Discrete event simulation is well known to be a powerful approach for investigating the behaviour of complex dynamic stochastic systems, especially when the system is not analytically tractable. The estimation of mean values has traditionally been the main goal of simulation output analysis, even though it provides limited information about the analysed system's performance. Because of its complexity, quantile analysis is applied less frequently, despite its ability to provide much deeper insights into the system of interest. A set of quantiles can be used to approximate a cumulative distribution function, providing fuller information about a given performance characteristic of the simulated system. This thesis employs the distributed computing power of multiple computers, proposing new methods for the sequential and automated analysis of quantile-based performance measures of such dynamic systems. These new methods estimate steady-state quantiles from replicated simulations, using clusters of workstations as simulation engines. A general contribution to the problem of the length of the initial transient is made by considering steady state in terms of the underlying probability distribution. Our research focuses on sequential and automated methods that guarantee a satisfactory level of confidence in the final results. The correctness of the proposed methods has been studied exhaustively by means of sequential coverage analysis. Quantile estimates are used to investigate underlying probability distributions, and we demonstrate that synchronous replications greatly assist this kind of analysis.
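The core estimation step can be sketched as follows: with synchronous replications, each replication contributes one independent observation at a given checkpoint, so a quantile can be estimated from the across-replication sample and given a distribution-free confidence interval from order statistics. The exponential output model and the normal approximation to the binomial below are illustrative assumptions, not the thesis's exact procedure.

```python
import math, random

def observation(rng):
    """One steady-state observation from one replication (toy model:
    exponential response time; in the thesis this would come from a
    real simulation model at a synchronised checkpoint)."""
    return rng.expovariate(1.0)

def quantile_with_ci(samples, p, z=1.96):
    """Point estimate of the p-quantile plus a distribution-free CI
    from order statistics, using the normal approximation to the
    Binomial(n, p) count of samples below the true quantile."""
    xs = sorted(samples)
    n = len(xs)
    k = min(n - 1, max(0, math.ceil(p * n) - 1))   # 0-based order statistic
    half = z * math.sqrt(n * p * (1 - p))
    lo = max(0, math.floor(n * p - half) - 1)
    hi = min(n - 1, math.ceil(n * p + half))
    return xs[k], (xs[lo], xs[hi])

rng = random.Random(42)
samples = [observation(rng) for _ in range(1000)]  # one value per replication
for p in (0.5, 0.9, 0.99):
    est, (lo, hi) = quantile_with_ci(samples, p)
    print(f"p={p}: estimate={est:.3f}  CI=({lo:.3f}, {hi:.3f})")
```

Because the samples are independent across replications, no within-run autocorrelation analysis is needed, which is the practical advantage of the replication-based design.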
44

Using Actors to Implement Sequential Simulations

April 2015
This thesis investigates using an approach based on the Actors paradigm for implementing a discrete event simulation system and comparing the results with more traditional approaches. The goal of this work is to determine if using Actors for sequential programming is viable. If Actors are viable for this type of programming, then it follows that they would be usable for general programming. One potential advantage of using Actors instead of traditional paradigms for general programming would be the elimination of a distinction between designing for a sequential environment and a concurrent/distributed one. Using Actors for general programming may also allow for a single implementation that can be deployed on both single core and multiple core systems. Most of the existing discussions about the Actors model focus on its strengths in distributed environments and its ability to scale with the amount of available computing resources. The chosen system for implementation is intentionally sequential to allow for examination of the behaviour of existing Actors implementations where managing concurrency complexity is not the primary task. Multiple implementations of the simulation system were built using different languages (C++, Erlang, and Java) and different paradigms, including traditional ones and Actors. These different implementations were compared quantitatively, based on their execution time, memory usage, and code complexity. The analysis of these comparisons indicates that for certain existing development environments, Erlang/OTP, following the Actors paradigm, produces a comparable or better implementation than traditional paradigms. Further research is suggested to solidify the validity of the results presented in this research and to extend their applicability.
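For reference, the core of such a sequential simulator, in any paradigm, is a time-ordered event queue processed one event at a time. The minimal Python sketch below shows that structure; the thesis's implementations are in C++, Erlang, and Java, and this engine and its toy workload are illustrative only.

```python
import heapq

class Simulator:
    """Minimal sequential discrete-event engine: a time-ordered event
    queue processed one event at a time."""
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker for events scheduled at the same time

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()

# toy workload: two events that keep scheduling each other
sim = Simulator()
def ping():
    print(f"{sim.now:.1f}: ping")
    sim.schedule(2.0, pong)
def pong():
    print(f"{sim.now:.1f}: pong")
    sim.schedule(1.0, ping)
sim.schedule(0.0, ping)
sim.run(until=10.0)
```

In an Actors implementation, the event loop and the model components become message-passing actors; the comparison in the thesis asks what that reorganisation costs when no concurrency is actually needed.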
45

Insight generation in simulation studies: an empirical exploration

Gogi, Anastasia, January 2016
This thesis presents empirical research that aims to explore insight generation in discrete-event simulation (DES) studies. It is often claimed that simulation is useful for generating insights; there is, however, almost no empirical evidence to support this claim, and the factors of a simulation intervention that affect the occurrence of insight are not clear. A specific claim is that watching the animated display of a simulation model is more helpful in making better decisions than relying on the statistical outcomes generated from simulation runs; but again, there is very limited evidence to support this. To address this dearth of evidence, two studies were carried out: a quantitative and a qualitative study. In the former, a laboratory-based experimental study was used, in which undergraduate students were placed in three separate groups and given a task to solve using a model with only animation, a model with only statistical results, or no model at all. In the qualitative study, semi-structured interviews with simulation consultants were carried out, in which participants were asked to recount examples of projects where clients changed their problem understanding and generated more effective ideas. The two parts of the study found different types of evidence supporting the claim that simulation generates insight. The experimental study suggests that insights are generated more rapidly from statistical results than from the use of animation. Research outcomes from the interviews include descriptions of: the phase of a simulation study in which insight emerges; the role of the different methods applied and means used in discovering and overcoming discontinuity in thinking (for instance, the role of the consultant's influence on problem understanding); how some factors of a simulation intervention are associated with the processes of uncovering and overcoming discontinuity in thinking (for example, the role of the client's team in the selection of methods used to communicate results); and the role of the model and consultant in generating new ideas. This thesis contributes to the limited existing literature by providing a more in-depth understanding of insight in the context of simulation and, based on an operational definition, empirical evidence of the insight-enabling benefits of simulation. The findings provide new insights into the factors of simulation that support fast and creative problem solving.
46

Examining the Impact of Experimental Design Strategies on the Predictive Accuracy of Quantile Regression Metamodels for Computer Simulations of Manufacturing Systems

January 2016
This thesis explores the impact of different experimental design strategies on the development of quantile regression based metamodels of computer simulations. The objective is to compare the resulting predictive accuracy of five experimental design strategies, each of which is used to develop metamodels of a computer simulation of a semiconductor manufacturing facility. The five examined strategies include two traditional experimental design strategies, sphere packing and I-optimal, along with three hybrid design strategies, developed for this research, which combine desirable properties of the more traditional approaches. The three hybrid design strategies are: arbitrary, centroid clustering, and clustering hybrid. Each strategy is analyzed and compared on a common experimental design space, which includes the investigation of four densities of design-point placements across three different experimental regions to predict four different percentiles of the cycle time distribution of a semiconductor manufacturing facility. Results confirm that the predictive accuracy of quantile regression metamodels depends on both the location and the density of the design points placed in the experimental region. They also show that the sphere packing design strategy has the best overall performance in terms of predictive accuracy. However, the centroid clustering hybrid design strategy, developed for this research, has the best predictive accuracy for cases in which only a small number of simulation resources are available from which to develop a quantile regression metamodel. (Masters Thesis, Engineering, 2016)
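For readers unfamiliar with the metamodelling step, quantile regression fits a predictor of a chosen percentile by minimising the pinball (check) loss. The sketch below does this for a linear model with a crude subgradient-descent fitter on synthetic, right-skewed cycle-time data; the data, the fitter, and all parameters are illustrative assumptions, since the thesis fits its metamodels to simulation output at designed input points.

```python
import numpy as np

def fit_quantile_reg(x, y, q, lr=0.1, epochs=20000):
    """Linear quantile regression fitted by subgradient descent on the
    pinball (check) loss: a crude but self-contained fitter."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(X.shape[1])
    for _ in range(epochs):
        r = y - X @ beta
        # subgradient of the pinball loss: q on positive residuals, q-1 on negative
        beta += lr * X.T @ np.where(r > 0, q, q - 1.0) / len(y)
    return beta

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 0.95, 400)                 # hypothetical fab utilisation levels
y = 1.0 / (1.0 - x) + rng.exponential(0.5, 400) # synthetic right-skewed cycle times
b0, b1 = fit_quantile_reg(x, y, q=0.9)
print(f"0.9-quantile metamodel: cycle_time ~ {b0:.2f} + {b1:.2f} * utilisation")
```

The experimental-design question the thesis studies is precisely where to place the x values (the simulated scenarios) so that a fit like this predicts well across the whole region.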
47

The Effect of Heterogeneous Servers on the Service Level Predicted by Erlang-A

Griffith, Edward Shane, 19 May 2011
Thousands of call centers operate in the United States, employing millions of people. Since personnel costs represent as much as 80% of the total operating expense of these centers, it is important for call center managers to determine the staffing level required to maintain the desired operating performance. Historically, queueing models have served an important role in this regard, the most commonly used being the Erlang-C model. The Erlang-C model has several assumptions, however, which are required for the predicted performance measures to be valid. One assumption that has received significant attention from researchers is that callers have infinite patience and will not terminate a call until the service is complete, regardless of the wait time. Since this assumption is unlikely to hold in reality, researchers have suggested using Erlang-A instead. Erlang-A does consider caller patience and allows for calls to be abandoned prior to receiving service. However, the use of Erlang-A still requires an assumption that is very unlikely to hold in practice: that all agents provide service at the same rate. Discrete event simulation is used to examine the effects of agent heterogeneity on the operating performance of a call center, compared with the theoretical performance measures obtained from Erlang-A. Based on the simulation results, it is concluded that variability in agent service rate does not materially affect call center performance, except to the extent that the variability changes the average handle time of the call center weighted by the number of calls handled rather than weighted by agent. This is true regardless of call center size, the degree of agent heterogeneity, and the distribution shape of agent variability. The implication for researchers is that it is unnecessary to search for an analytic solution that relaxes the Erlang-A assumption that agents provide service at the same rate. Several implications for managers are discussed, including the reliability of using Erlang-A to determine staffing levels, the importance of considering the service rates of the agents rather than the average handle time, and the unintended consequence of call routing schemes which route calls to faster rather than slower agents.
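A minimal sketch of the kind of experiment described: an event-driven simulation of an M/M/n+M (Erlang-A) call centre in which each agent can be given an individual service rate. The arrival, service, and patience rates and the agent-selection rule below are illustrative assumptions.

```python
import heapq, random

def call_center(lam, mus, theta, horizon=50000, seed=1):
    """Call centre with Poisson arrivals (rate lam), agents with
    individual exponential service rates `mus`, and exponential caller
    patience (rate theta). Identical mus give exactly Erlang-A."""
    rng = random.Random(seed)
    ev = [(rng.expovariate(lam), 0, 'arrival', None)]
    seq = 1
    free = list(range(len(mus)))   # LIFO pick; real routing policies matter
    queue = []                     # FIFO of waiting caller ids
    status = {}                    # caller id -> 'waiting'|'served'|'abandoned'
    arrived = abandoned = next_id = 0
    while ev:
        t, _, kind, who = heapq.heappop(ev)
        if t > horizon:
            break
        if kind == 'arrival':
            arrived += 1
            cid = next_id; next_id += 1
            heapq.heappush(ev, (t + rng.expovariate(lam), seq, 'arrival', None)); seq += 1
            if free:
                i = free.pop()
                status[cid] = 'served'
                heapq.heappush(ev, (t + rng.expovariate(mus[i]), seq, 'done', i)); seq += 1
            else:
                status[cid] = 'waiting'
                queue.append(cid)
                heapq.heappush(ev, (t + rng.expovariate(theta), seq, 'abandon', cid)); seq += 1
        elif kind == 'abandon':
            if status[who] == 'waiting':   # ignore if already served
                status[who] = 'abandoned'
                abandoned += 1
        else:  # 'done': agent `who` frees up and takes the next patient caller
            while queue and status[queue[0]] != 'waiting':
                queue.pop(0)
            if queue:
                cid = queue.pop(0)
                status[cid] = 'served'
                heapq.heappush(ev, (t + rng.expovariate(mus[who]), seq, 'done', who)); seq += 1
            else:
                free.append(who)
    return abandoned / arrived

# identical agents (Erlang-A) vs heterogeneous agents with the same mean rate
print("homogeneous  :", call_center(lam=9.0, mus=[1.0]*10, theta=0.5))
print("heterogeneous:", call_center(lam=9.0, mus=[0.5]*5 + [1.5]*5, theta=0.5))
```

Note that faster agents complete more calls, so heterogeneity shifts the call-weighted average handle time, which is exactly the mechanism the dissertation identifies as the one that matters.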
48

The Impact of the User Interface on Simulation Usability and Solution Quality

Montgomery, Bruce Ray, 01 January 2011
This research outlines a study that was performed to determine the effects of user interface design variations on the usability and solution quality of complex, multivariate discrete-event simulations. Specifically, this study examined four key research questions: what are the user interface considerations for a given simulation model, what are the current best practices in user interface design for simulations, how is usability best evaluated for simulation interfaces, and specifically what are the measured effects of varying levels of usability of interface elements on simulation operations such as data entry and solution analysis. The overall goal of the study was to show the benefit of applied usability practices in simulation design, supported by experimental evidence from testing two alternative simulation user interfaces designed with varying usability. The study employed directed research in usability and simulation design to support design of an experiment that addressed the core problem of interface effects on simulation. In keeping with the study goal of demonstrating usability practices, the experimental procedures were analogous to the development processes recommended in supporting literature for usability-based design lifecycles. Steps included user and task analysis, concept and use modeling, paper prototypes of user interfaces for initial usability assessment, interface development and assessment, and user-based testing of actual interfaces with an actual simulation model. The experimental tests employed two interfaces designed with selected usability variations, each interacting with the same core simulation model. The experimental steps were followed by an analysis of quantitative and qualitative data gathered, including data entry time, interaction errors, solution quality measures, and user acceptance data. The study resulted in mixed support for the hypotheses that improvements in usability of simulation interface elements will improve data entry, solution quality, and overall simulation interactions. Evidence for data entry was mixed, for solution quality was positive to neutral, and for overall usability was very positive. As a secondary benefit, the study demonstrated application of usability-based interface design best practices and processes that could provide guidelines for increasing usability of future discrete-event simulation interface designs. Examination of the study results also provided suggestions for possible future research on the investigation topics.
49

Impact evaluation of an automatic identification technology on inventory management: A simulation approach with the focus on RFID

Petersson, Martin, January 2020
Automatic identification systems are a prominent technology used in warehouses to give managers real-time information about their products and to assist warehouse employees in keeping an accurate inventory record. This kind of assistance is needed because an inaccurate inventory leads to profit loss due to misplacement and other mistakes. This project was carried out in cooperation with Stora Enso, specifically one of their forest nurseries, to find a solution that improves their inventory management system. Their current inventory system is a manual process, which leads to mistakes that affect inventory accuracy. This thesis project evaluates automatic identification systems to determine whether the technology is a possible solution, and aims to answer the research question "What are the significant impacts an automatic identification system has on an inventory management system?". From this evaluation, one system, radio frequency identification (RFID), is picked for further study due to its advantages. To evaluate RFID in a warehouse setting, a discrete-event simulation of the forest nursery's warehouse was created. The simulation is then used to evaluate the impact of different RFID implementations and their respective costs. The simulation results show that even a simple RFID implementation can improve inventory accuracy and remove some of the mistakes of a manual system, at a relatively low direct cost. They also show that a full RFID implementation, giving full visibility of the warehouse, can almost eliminate inventory mistakes; however, the cost analysis shows that it requires a large investment.
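A toy Monte Carlo illustrating the kind of comparison such a simulation supports: recording errors occur at some rate per transaction, and an auto-ID system catches a fraction of them before they corrupt the inventory record. All probabilities are invented for illustration and are not figures from the thesis.

```python
import random

def inventory_accuracy(transactions=100000, p_error=0.02, p_catch=0.0, seed=7):
    """Fraction of transactions whose inventory record ends up correct:
    each transaction is mis-recorded with probability p_error, and an
    auto-ID system corrects a caught error with probability p_catch."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(transactions):
        if rng.random() < p_error and rng.random() >= p_catch:
            wrong += 1
    return 1 - wrong / transactions

print("manual only:", inventory_accuracy(p_catch=0.0))
print("simple RFID:", inventory_accuracy(p_catch=0.8))   # e.g. portal readers
print("full RFID  :", inventory_accuracy(p_catch=0.99))  # full visibility
```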
50

Current State Simulation Scope of Improvement and Forecast Demand Analysis at AstraZeneca using Discrete Event Simulation.

Kasula, Siva Sai Krishna, January 2020
In this rapidly changing product-demand market, pharmaceutical companies have adapted their production systems to be more flexible and agile. In order to meet demand, production lines need to be more efficient and effective, and even a small improvement is a great achievement, as these lines are designed to produce large volumes of medicines. Testing the efficiency and effectiveness of the lines by analyzing production data would be time-consuming and would require the involvement of experts from different departments; moreover, when production lines are subjected to change, previous analyses are no longer valid and need to be repeated. Instead, this work can be replaced with discrete event simulation (DES) analysis. DES is one of the key technologies for developing a production system in this Industry 4.0 era. As production systems become more and more complicated, it becomes difficult to understand and analyze the behavior of the system when changes are introduced. Simulation is the right technology for analyzing and understanding the behavior of the real system when it undergoes small or large changes. The purpose of this case study is to use DES, with ExtendSim as the simulation tool, at the case company to develop a virtual model of a production system containing five production lines, in order to understand and analyze the behavior of the lines, identify possible improvements, and evaluate the feasibility of the production system achieving the forecasted demand. Possible improvements are identified from the simulation results of the current-state model, and a future-state simulation model incorporating those improvements is developed. Furthermore, this future-state model is used to analyze the feasibility of the production lines meeting the forecasted demand. By developing the simulation model, it was identified that the production lines were not as efficient as the company assumed and were underutilized.
