  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Automatic Translation of Moore Finite State Machines into Timed Discrete Event System Supervisors / Automatic Translation of Moore FSM into TDES Supervisors

Mahmood, Hina January 2023 (has links)
In the area of Discrete Event Systems (DES), formal verification techniques are important for examining a variety of system properties, including controllability and nonblocking. Nonetheless, in practice, most software and hardware practitioners are not proficient in formal methods, which holds them back from formally representing and verifying their systems. In contrast, control engineers are typically familiar with Moore synchronous Finite State Machines (FSM) and use them to express their controllers’ behaviour. Taking this into consideration, we devise a generic and structured approach to automatically translate Moore synchronous FSM into timed DES (TDES) supervisors. In this thesis, we describe our FSM-TDES translation method, present a set of algorithms to realize the translation steps and rules, and demonstrate the application and correctness of our translation approach with the help of an example. To develop our automatic FSM-TDES translation approach, we exploit the structural similarity between the two models created by the sampled-data (SD) supervisory control theory. Building upon the SD framework, we first address a related issue: disabling the tick event in order to force an eligible prohibitable event. To do this, we introduce a new synchronization operator called the SD synchronous product (||SD), adapt the existing TDES and SD properties, and devise our ||SD setting. We formally verify the controllability and nonblocking properties of our ||SD setting by establishing logical equivalence between the existing SD setting and our ||SD setting. We present algorithms to implement our ||SD setting in the DES research tool DESpot. The formulation of the ||SD operator provides twofold benefits. First, it simplifies the design logic of TDES supervisors modelled in the SD framework.
This improves the ease of manually designing SD controllable TDES supervisors and reduces the verification time of the closed-loop system. We demonstrate these benefits by applying our ||SD setting to an example system. Second, it bridges the gap between theoretical supervisors and physical controllers with respect to event forcing. This makes our FSM-TDES translation approach relatively uncomplicated. Our automatic FSM-TDES translation approach enables designers to obtain a formal representation of their controllers without designing TDES supervisors by hand and without requiring formal methods expertise. Overall, this work should increase the adoption of the SD supervisory control theory in particular, and of formal methods in general, in industry by helping software and hardware practitioners formally represent and verify their control systems. / Dissertation / Doctor of Philosophy (PhD)
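The translation idea can be sketched informally. The following toy Python sketch (not the thesis's actual translation rules; the state names, event names, and three-step split are all invented for illustration) conveys the flavour of the approach: each Moore FSM transition is split into sampling the input, advancing on a global tick event, and then forcing the new state's Moore output.

```python
# A toy illustration of translating a Moore FSM into a TDES-like transition
# structure. This is NOT the dissertation's algorithm; it only mirrors the
# sampled-data intuition that an FSM step spans exactly one clock period.

def fsm_to_tdes(outputs, trans, tick="tick"):
    """outputs: state -> output event emitted on entering that state.
    trans: (state, input_event) -> next_state.
    Returns a deterministic map (tdes_state, event) -> tdes_state."""
    tdes = {}
    for (s, a), t in trans.items():
        latched = (s, a, "latched")      # input sampled, awaiting clock edge
        pending = (t, "pending")         # edge seen, output not yet forced
        tdes[(s, a)] = latched           # 1. sample the input event
        tdes[(latched, tick)] = pending  # 2. advance on the tick
        tdes[(pending, outputs[t])] = t  # 3. force the new state's output
    return tdes
```

For a two-state toggle FSM, the resulting map contains three TDES transitions per FSM transition, reflecting how the clock event mediates every state change.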
72

Optimization approaches for designing baseball scout networks under uncertainty

Ozlu, Ahmet Oguzhan 27 May 2016 (has links)
Major League Baseball (MLB) is a 30-team North American professional baseball league, and Minor League Baseball (MiLB) is the hierarchy of developmental professional baseball teams for MLB. Most MLB players first develop their skills in MiLB, and MLB teams employ scouts, experts who evaluate the strengths, weaknesses, and overall potential of these players. In this dissertation, we study the problem of designing a scouting network for an MLB team. We introduce the problem to the operations research literature to help teams make strategic and operational decisions when managing their scouting resources. The thesis consists of three chapters that aim to address decisions such as how scouts should be assigned to the available MiLB teams, how scouts should be routed around the country, how many scouts are needed to perform the major scouting tasks, and whether there are trade-offs between the scouting objectives and, if so, what their outcomes and insights are. In the first chapter, we study the problem of assigning and scheduling minor league scouts for MLB teams. There are multiple objectives in this problem. We formulate the problem as an integer program, use decomposition and both column-generation-based and problem-specific heuristics to solve it, and evaluate policies on multiple objective dimensions based on 100 bootstrapped season schedules. Our approach can allow teams to improve operationally by finding better scout schedules, to understand quantitatively the strategic trade-offs inherent in scout assignment policies, and to select the assignment policy whose strategic and operational performance best meets their needs. In the second chapter, we study the problem under uncertainty. In reality, we observe that there are always disruptions to the schedules: players are injured, scouts become unavailable, games are delayed due to bad weather, etc.
We present a minor league baseball season simulator that generates random disruptions to the scouts' schedules and uses optimization-based heuristic models to recover the disrupted schedules. We evaluate the strategic benefits of different policies for team-to-scout assignment using the simulator. Our results demonstrate that the deterministic approach is insufficient for evaluating the benefits and costs of each policy, and that a simulation approach is also much more effective at determining the value of adding an additional scout to the network. The real scouting network design instances we solved in the first two chapters have several detailed complexities that can make them hard to study, such as idle day constraints, varying season lengths, off days for teams in the schedule, days where some teams play and others do not, etc. In the third chapter, we analyze a simplified version of the Single Scout Problem (SSP), stripping away much of the real-world complexity that complicates SSP instances. Even for this stylized, archetypal version of SSP, we find that small instances can be computationally difficult. We show by reduction from the Minimum Cost Hamiltonian Path Problem that the archetypal version of SSP is NP-complete, even without all of the additional complexity introduced by real scheduling and scouting operations.
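As an illustration of the assignment side of the problem, a minimal greedy heuristic might assign each MiLB team to the nearest scout with remaining capacity. This is a toy baseline, not the dissertation's column-generation approach; the scout homes, team locations, and capacity are invented.

```python
import math

def greedy_assign(scout_homes, team_locs, per_scout):
    """Assign each team to the nearest scout that still has capacity.
    scout_homes / team_locs map names to (x, y) coordinates;
    per_scout caps how many teams one scout may cover."""
    load = {s: 0 for s in scout_homes}
    assignment = {}
    for team, loc in team_locs.items():
        nearest = min(
            (s for s in scout_homes if load[s] < per_scout),
            key=lambda s: math.dist(scout_homes[s], loc),
        )
        assignment[team] = nearest
        load[nearest] += 1
    return assignment
```

A heuristic like this gives a feasible starting point; the thesis's methods then weigh travel, coverage, and other objectives against each other.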
73

Understanding the effects of different levels of product monitoring on maintenance operations : a simulation approach

Alabdulkarim, Abdullah A. January 2013 (has links)
The move towards integrating products and services has increased significantly. As a result, business models such as Product Service Systems (PSS) have been developed. PSS emphasises the sale of the use of a product rather than the sale of the product itself. In this case, product ownership lies with the manufacturers/suppliers, and customers are provided with a capable and available product for their use. In PSS, manufacturers/suppliers are penalised for any downtime of their product according to the PSS contract. This has put pressure on the service providers (maintenance teams) to assure the availability of their products in use. This pressure increases as the products are scattered in remote places (customer locations). Researchers have urged that different product monitoring levels be applied to enable service providers to monitor their products remotely, allowing maintenance to be performed accordingly. They claim that by adopting these monitoring levels, product performance will increase. However, their claim is based on reasoning, not on experimental/empirical methods. Therefore, further experimental research is required to observe the effect of such monitoring levels on complex maintenance operations systems as a whole, including, for example, product location, different types of failure, labour skills and locations, travel times, spare part inventory, etc. In the literature, monitoring levels have been classified as Reactive, Diagnostics, and Prognostics. This research aims to better understand and evaluate the complex maintenance operations of a product in use under different product monitoring strategies using a Discrete Event Simulation (DES) approach. A discussion of the suitability of DES over other techniques is provided; DES has proven suitable for giving a better understanding of the effect of product monitoring levels on the wider maintenance system.
The requirements for simulating a complex maintenance operation have been identified and documented. Two approaches are applied to gather these generic requirements. The first is to identify the requirements for modelling complex maintenance operations through a literature review. This is followed by interviews with academics and industrial practitioners to elicit further requirements not captured in the literature. As a result, a generic conceptual model is assembled. Simulation modules are built in the Witness software package to represent the different product monitoring levels (Reactive, Diagnostics, and Prognostics). These modules are then linked with resources (e.g. labour, tools, and spare parts). To ensure ease of use and rapid building of such a complex maintenance system from these modules, an Excel interface is developed, named the Product Monitoring Levels Simulation (PMLS) tool. The PMLS tool is then demonstrated and tested for validation purposes. Three industrial case studies are presented, and different experiments are carried out to better understand the effect of different product monitoring levels on complex maintenance operations. Face-to-face validation with the case companies is conducted, followed by an expert validation workshop. This work presents a novel Discrete Event Simulation (DES) approach developed to support maintenance operations decision makers in selecting the appropriate product monitoring level for their particular operation. This unique approach provides numerical evidence that a higher product monitoring level does not always guarantee higher product availability.
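A deterministic toy model (an assumption-laden sketch, not the PMLS tool; all rates and times are invented) can illustrate why a prognostic monitoring level may, but need not, raise availability: it trades unplanned repairs for shorter planned interventions, and the benefit depends on the relative durations.

```python
def availability(horizon, mtbf, repair_time, lead_time=0.0, pm_time=None):
    """Deterministic toy: the machine fails every `mtbf` hours.
    Reactive (lead_time=0): each failure costs `repair_time` of downtime.
    Prognostic: a warning `lead_time` hours before failure lets us perform
    planned maintenance costing `pm_time` instead, resetting the clock."""
    t, down = 0.0, 0.0
    while t + mtbf <= horizon:
        if lead_time > 0 and pm_time is not None:
            t += mtbf - lead_time      # act on the prognostic warning
            down += pm_time
            t += pm_time
        else:
            t += mtbf                  # run to failure
            down += repair_time
            t += repair_time
    return 1 - down / horizon
```

With these invented numbers the prognostic policy wins, but a long enough planned-maintenance time would reverse the comparison, which is the kind of trade-off the thesis's experiments quantify.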
74

A generic framework for hybrid simulation in healthcare

Chahal, Kirandeep January 2010 (has links)
Healthcare problems are complex; they exhibit both detail and dynamic complexity. It has been argued that Discrete Event Simulation (DES), with its ability to capture detail, is ideal for problems exhibiting the former type of complexity. On the other hand, System Dynamics (SD), with its focus on feedback and nonlinear relationships, lends itself naturally to comprehending dynamic complexity. Although these modelling paradigms provide valuable insights, neither of them is proficient at capturing both detail and dynamic complexity to the same extent. It has been argued in the literature that a hybrid approach, wherein SD and DES are integrated symbiotically, would provide a more realistic picture of complex systems with fewer assumptions and less complexity. In spite of the wide recognition of healthcare as a complex multi-dimensional system, there has not been any reported study which utilises hybrid simulation. This could be attributed to the fact that, due to fundamental differences between the paradigms, mixing the methodologies is quite challenging. In order to overcome these challenges, a generic theoretical framework for hybrid simulation is required. However, there is presently no such generic framework providing guidance on integrating SD and DES to form hybrid models. This research has attempted to provide such a framework for hybrid simulation which can be utilised in the healthcare domain. On the basis of knowledge induced from the literature, three requirements for the generic framework have been established. It is argued that a framework for hybrid simulation should be able to answer Why (why hybrid simulation is required), What (what information is exchanged between the SD and DES models), and How (how the SD and DES models interact with each other over time to exchange information) within the context of applying hybrid simulation to different problem scenarios.
In order to meet these requirements, a three-phase generic framework for hybrid simulation has been proposed. Each phase of the framework is mapped to an established requirement and provides guidelines for addressing that requirement. The proposed framework is first evaluated theoretically, based on its ability to meet these requirements, using multiple cases, and modified accordingly. It is further evaluated empirically with a single case study comprising the Accident and Emergency department of a London district general hospital. The purpose of this empirical evaluation is to identify the limitations of the framework with regard to the implementation of hybrid models. It is realised during implementation that the modified framework has certain limitations pertaining to the exchange of information between the SD and DES models. These limitations are reflected upon and addressed in the final framework. The main contribution of this thesis is the generic framework for hybrid simulation, which has been applied within a healthcare context. Through an extensive review of the existing literature on hybrid simulation, the thesis has also contributed to knowledge of multi-method approaches. A further contribution is that this research has attempted to quantify the intangible benefits of information systems as tangible business process improvements. It is expected that this work will encourage those engaged in simulation (e.g., researchers, practitioners, decision makers) to realise the potential of cross-fertilisation of the two simulation paradigms.
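The What and How questions can be made concrete with a toy synchronisation loop (purely illustrative; the balking rule, rates, and step size are invented and are not the thesis's framework) in which an SD-style demand stock and a DES-style queue exchange information once per time step.

```python
def run_hybrid(steps, dt=1.0, inflow=5.0, capacity=4, balk_at=10):
    """Toy SD<->DES coupling, synchronised once per step:
    SD -> DES: arrivals this step are drawn from the demand stock;
    DES: the queue admits arrivals and serves `capacity` patients;
    DES -> SD: admitted patients drain the stock (a long queue
    suppresses arrivals entirely, a crude balking feedback)."""
    stock, queue, history = 20.0, 0, []
    for _ in range(steps):
        arrivals = 0 if queue >= balk_at else int(stock * 0.2)
        queue = max(0, queue + arrivals - capacity)
        stock = stock + dt * inflow - arrivals
        history.append((round(stock, 2), queue))
    return history
```

The point of the sketch is the information exchange at each synchronisation point, which is exactly what the framework's What and How phases are meant to specify.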
75

OPTIMIZATION AND SIMULATION OF JUST-IN-TIME SUPPLY PICKUP AND DELIVERY SYSTEMS

Chuah, Keng Hoo 01 January 2004 (has links)
A just-in-time supply pickup and delivery system (JSS) manages the logistic operations between a manufacturing plant and its suppliers by controlling the sequence, timing, and frequency of container pickups and parts deliveries, thereby coordinating internal conveyance, external conveyance, and the operation of cross-docking facilities. The system is important to just-in-time production lines that maintain small inventories. This research studies the logistics, supply chain, and production control of JSS. First, a new meta-heuristic approach (taboo search) is developed to solve a general frequency routing (GFR) problem that is formulated in this dissertation with five types of constraints: flow, space, load, time, and heijunka. A formulation for cross-dock routing (CDR) has also been created and solved. Second, seven issues concerning the structure of JSS systems that employ the previously studied common frequency routing (CFR) problem (Chuah and Yingling, in press) are explored to understand their impacts on the operational costs of the system. Finally, a discrete-event simulation model is developed to study JSS by examining different types of variation in demand and their impacts on the stability of inventory levels in the system. The results show that GFR routes at high frequencies do not have common frequencies in the solution. There are some common frequencies at medium frequencies and none at low frequencies, where effectively the problem is simply a vehicle routing problem (VRP) with time windows. CDR is an extension of VRP-type problems that can be solved quickly with meta-heuristic approaches. GFR, CDR, and CFR are practical routing strategies for JSS with taboo search or other meta-heuristics as solvers. By comparing GFR and CFR solutions to the same problems, it is shown that the impacts of CFR restrictions on cost are minimal and in many cases so small as to make the simpler CFR routes desirable.
The studies of JSS structural features on the operating costs of JSS systems, under the assumption of CFR routes, yielded interesting results. First, when suppliers are clustered, the routes become more efficient at mid-level, but not high or low, frequencies. Second, the cost increases with the number of suppliers. Third, negotiating broad time windows with suppliers is important for cost control in JSS systems. Fourth, an increase or decrease in production volumes uniformly shifts the solution's cost-versus-frequency curve. Fifth, increased vehicle capacity is important for reducing costs at low and medium frequencies but far less important at high frequencies. Lastly, load distributions among the suppliers are not important determinants of transportation costs as long as the average loads remain the same. Finally, a one-supplier, one-part-source simulation model shows that the system's inventory level tends to be sticky around the reordering level. JSS is very stable, but it requires reliable transportation to perform well. The impact of changes in kanban levels (e.g., as might occur between route planning intervals when production rates are adjusted) is relatively long-term, with dynamic after-effects on inventory levels that take a long time to dissipate. A gradual change in kanban levels may be introduced, prior to the changeover, to counter this effect.
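As a flavour of the solution method, a minimal taboo-search routine for a single route might look like the sketch below. This is a generic 2-swap neighbourhood search over one tour, not the dissertation's GFR/CFR solver; the tenure, iteration count, and aspiration rule are conventional defaults, not values from the thesis.

```python
import itertools

def tour_len(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search(dist, iters=30, tenure=5):
    """Toy taboo search: city 0 is the fixed depot; neighbours are swaps of
    two non-depot positions; recent moves are forbidden for `tenure` rounds
    unless they beat the best tour found so far (aspiration)."""
    n = len(dist)
    cur = list(range(n))
    best, best_len = cur[:], tour_len(cur, dist)
    tabu = {}                                  # move -> iteration it expires
    for it in range(iters):
        move, move_len = None, float("inf")
        for i, j in itertools.combinations(range(1, n), 2):
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]
            c = tour_len(cand, dist)
            if (tabu.get((i, j), -1) < it or c < best_len) and c < move_len:
                move, move_len = (i, j), c
        if move is None:
            break
        i, j = move
        cur[i], cur[j] = cur[j], cur[i]
        tabu[(i, j)] = it + tenure
        if move_len < best_len:
            best, best_len = cur[:], move_len
    return best, best_len
```

The same skeleton, with richer moves and the flow/space/load/time/heijunka constraints folded into the objective, is the kind of meta-heuristic the dissertation applies to GFR and CDR.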
76

DIAGNOSIS OF CONDITION SYSTEMS

Ashley, Jeffrey 01 January 2004 (has links)
In this dissertation, we explore the problem of fault detection and fault diagnosis for systems modeled as condition systems. A condition system is a Petri-net-based framework of components which interact with each other and the external environment through the use of condition signals. First, a system FAULT is defined as an observed behavior which does not correspond to any expected behavior, where the expected behavior is defined through condition system models. A DETECTION is the determination that the system is not behaving as expected according to the model of the system. A DIAGNOSIS of this fault localizes the subsystem that is the source of the discrepancy between observed and expected behavior. We characterize faults as a behavior relaxation of model components. We then show that detection and diagnosis can be determined in a finite number of calculations. The exact solution can be computationally involved, so we also present methods to perform rapid detection and diagnosis. We have also included a chapter on a conversion from the condition system framework into a linear-time temporal logic (LTL) framework.
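The detection/diagnosis idea can be sketched with ordinary transition maps standing in for condition system components (a large simplification of the Petri-net framework; the component models below are invented): detection flags the first observed event the model cannot produce, and diagnosis projects the trace onto each component's alphabet to localise the offender.

```python
def detect_fault(model, start, trace):
    """Return the index of the first observed event the model cannot
    produce (a detection), or None if the trace is consistent.
    model: dict (state, event) -> next state; trace: observed events."""
    state = start
    for i, ev in enumerate(trace):
        if (state, ev) not in model:
            return i          # observed behaviour leaves the expected language
        state = model[(state, ev)]
    return None

def diagnose(components, trace):
    """Localise the fault: name every component whose model rejects the
    trace projected onto that component's own event alphabet."""
    bad = []
    for name, (model, start) in components.items():
        alphabet = {ev for (_, ev) in model}
        local = [ev for ev in trace if ev in alphabet]
        if detect_fault(model, start, local) is not None:
            bad.append(name)
    return bad
```

The dissertation's methods work on the richer condition-signal semantics, but the pattern is the same: compare observations against per-component expected behavior and report the discrepant subsystem.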
77

Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations

De Grande, Robson E. 26 July 2012 (has links)
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, unmanaged background processes that arise because shared resources are not dedicated. Due to the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the relevance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed that offer sub-optimal balancing solutions, but they are limited to certain simulation aspects, specific to particular applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, in order to enable the development of such balancing schemes, a migration technique is employed to perform reliable and low-latency simulation load transfers.
Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms to observe distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load that minimizes imbalances. To overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of, and the reaction to, load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The developed balancing systems successfully improved the use of shared resources and increased the performance of distributed simulations.
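The load reallocation step can be caricatured by a greedy unit-migration loop (a toy centralised sketch, not the hierarchical schemes developed in the thesis; it assumes integer loads and a spread threshold of at least 1 so the loop terminates).

```python
def rebalance(loads, threshold=1):
    """Greedy sketch of a centralised balancer: repeatedly migrate one
    unit of simulation load from the busiest to the idlest node until
    the spread (max - min) is within `threshold` (must be >= 1).
    Returns (new_loads, list of (source, destination) migrations)."""
    loads = dict(loads)
    migrations = []
    while max(loads.values()) - min(loads.values()) > threshold:
        src = max(loads, key=loads.get)
        dst = min(loads, key=loads.get)
        loads[src] -= 1
        loads[dst] += 1
        migrations.append((src, dst))
    return loads, migrations
```

A real scheme must also weigh migration latency and communication dependencies, which is precisely what the thesis's extensions (latency awareness, oscillation prediction) address.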
78

Simulation modelling of distributed-shared memory multiprocessors

Marurngsith, Worawan January 2006 (has links)
Distributed shared memory (DSM) systems have been recognised as a compelling platform for parallel computing due to their programming advantages and scalability. DSM systems allow applications to access data in a logically shared address space by abstracting away the distinction of physical memory location. As the location of data is transparent, the sources of overhead caused by accessing distant memories are difficult to analyse. This memory locality problem has been identified as crucial to DSM performance. Many researchers have investigated the problem using simulation as a tool for conducting experiments, resulting in the progressive evolution of DSM systems. Nevertheless, both the diversity of architectural configurations and the rapid advance of DSM implementations impose constraints on simulation model design in two respects: the simulation framework's limited support for model extensibility, and the lack of verification applicability during a simulation run, which delays the verification process. This thesis studies simulation modelling techniques for memory locality analysis of various DSM systems implemented on top of a cluster of symmetric multiprocessors. The thesis presents a simulation technique to promote model extensibility and proposes a technique for verification applicability, called Specification-based Parameter Model Interaction (SPMI). The proposed techniques have been implemented in a new interpretation-driven simulator called DSiMCLUSTER, built on top of a discrete event simulation (DES) engine known as HASE. Experiments have been conducted to determine which factors are most influential on the degree of locality and to explore how the stability of performance can be maximised. DSiMCLUSTER has been validated against a SunFire 15K server, achieving cache-miss results within ±6% on average, with a worst case of less than 15% difference.
These results confirm that the techniques used in developing DSiMCLUSTER contribute to achieving both (a) a highly extensible simulation framework that can keep up with the ongoing innovation of DSM architectures, and (b) verification applicability, resulting in an efficient framework for memory analysis experiments on DSM architectures.
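The kind of cache-miss statistic used in the validation can be illustrated with a toy direct-mapped cache model (parameters invented; DSiMCLUSTER itself models far more, including locality effects across cluster nodes):

```python
def miss_rate(accesses, cache_lines, line_words=4):
    """Estimate the miss rate of a word-address trace against a
    direct-mapped cache with `cache_lines` lines of `line_words` words.
    Cold misses and conflict misses are both counted."""
    tags = [None] * cache_lines
    misses = 0
    for addr in accesses:
        line = addr // line_words          # which memory line this word is in
        idx = line % cache_lines           # direct-mapped placement
        if tags[idx] != line:
            misses += 1
            tags[idx] = line
    return misses / len(accesses)
```

Sequential traces exploit spatial locality (one miss per line), while strided traces that collide on the same index thrash the cache, which is the behaviour locality analysis aims to expose.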
79

Emergence at the Fundamental Systems Level: Existence Conditions for Iterative Specifications

Zeigler, Bernard, Muzy, Alexandre 09 November 2016 (has links)
Conditions under which compositions of component systems form a well-defined system-of-systems are here formulated at a fundamental level. Stating what defines a well-defined composition, together with sufficient conditions guaranteeing such a result, offers insight into exemplars that can be found in special cases such as differential equation and discrete event systems. For any given global state of a composition, the two requirements can be stated informally as: (1) the system can leave this state, i.e., there is at least one trajectory defined that starts from the state; and (2) the trajectory evolves over time without getting stuck at a point in time. Considered for every global state, these conditions determine whether the resultant is a well-defined system and, if so, whether it is non-deterministic or deterministic. We formulate these questions within the framework of iterative specifications for mathematical system models, which are shown to be behaviorally equivalent to the Discrete Event System Specification (DEVS) formalism. This formalization supports definitions and proofs of the aforementioned conditions. Implications are drawn at the fundamental level of existence, where the emergence of a system from an assemblage of components can be characterized. We focus on systems with feedback coupling, where existence and uniqueness of solutions is problematic.
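Condition (2), that trajectories do not get stuck at a point in time, can be illustrated operationally (a toy bounded check, not the paper's formal treatment; the function names and cutoffs are invented): iterate a model's internal transitions and test whether simulated time passes a finite horizon, which a Zeno-like accumulation of shrinking time advances fails to do.

```python
def is_legitimate(time_advance, start, next_state, horizon=100.0, max_steps=10000):
    """Bounded check of condition (2) for a closed model: does simulated
    time pass `horizon`, or does it accumulate at a point (Zeno behavior)?
    time_advance(s) gives the dwell time in state s; next_state(s) the
    successor after an internal transition."""
    t, s = 0.0, start
    for _ in range(max_steps):
        ta = time_advance(s)
        if ta == float("inf"):
            return True            # passive state: time trivially advances
        t += ta
        if t >= horizon:
            return True            # time marches past the horizon
        s = next_state(s)
    return False                   # time accumulated without progressing
```

A model whose time advances halve at every step (1/2, 1/4, 1/8, ...) never passes t = 1 no matter how many transitions occur, which is exactly the pathology the existence conditions rule out.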
80

Simulation modeling for the impact of triage liaison physician on emergency department to reduce overcrowding

Yang, Jie 03 January 2017 (has links)
Emergency department (ED) overcrowding has been a common complaint in Emergency Medicine in Canada for many years. Its adverse effects, such as prolonged waiting times, cause patient dissatisfaction and compromise patient safety. Previous studies indicate that adding a physician in triage (PIT) can increase the accuracy and efficiency of the initial patient evaluation process. However, the scientific evidence of PIT's impact on the ED is far from sufficient to justify its widespread implementation. This research searches for solutions using PIT to identify areas of improvement in ED patient flow, based upon a validated discrete-event simulation (DES) model. As an efficient decision-making tool, the DES model also helps to develop an understanding of current ED performance and to quantitatively test various design alternatives for ED operations. / February 2017
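The intuition behind PIT can be shown with a deterministic single-server sketch (all parameters invented; the thesis's validated DES model is far richer): if a triage physician is assumed to halve the initial assessment time, queueing delay can vanish once total service keeps pace with arrivals.

```python
def ed_wait(arrival_gap, triage_time, treat_time, n_patients, pit=False):
    """Toy single-server ED front end with evenly spaced arrivals.
    With a physician in triage (pit=True) the initial assessment is
    assumed to take half as long. Returns the average wait before
    a patient's assessment begins."""
    assess = triage_time / 2 if pit else triage_time
    free_at, total_wait = 0.0, 0.0
    for k in range(n_patients):
        arrive = k * arrival_gap
        start = max(arrive, free_at)       # wait if the server is busy
        total_wait += start - arrive
        free_at = start + assess + treat_time
    return total_wait / n_patients
```

With a 5-minute arrival gap, a 4-minute assessment, and 3-minute treatment, service (7 min) outpaces arrivals and waits grow; halving assessment to 2 minutes brings service to exactly 5 minutes and the queue disappears.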
