21.
Fault-Tolerant Supervisory Control. Mulahuwaish, Aos. January 2019.
In this thesis, we investigate the problem of fault tolerance in the framework of discrete-event systems (DES). We introduce our setting, and then provide a set of fault-tolerant definitions designed to capture different types of fault scenarios and to ensure that our system remains controllable and nonblocking in each scenario.
This is a passive approach that relies upon inherent redundancy in the system being controlled, and focuses on the intermittent occurrence of faults.
Our approach provides an easy method for users to add fault events to a system model and is based on user-designed supervisors and verification. As synthesis algorithms have higher complexity than verification algorithms, our approach should be applicable to larger systems than existing active fault-recovery methods that are synthesis-based. Also, modular supervisors are typically easier to understand and implement than the results of synthesis.
Finally, our approach does not require expensive (in terms of algorithm complexity) fault diagnosers to work. Diagnosers are, however, required by existing methods to know when to switch to a recovery supervisor. As a result, the response time of diagnosers is not an issue for us. Our supervisors are designed to handle the original and the faulted system.
In this thesis, we next present algorithms to verify these properties followed by complexity analyses and correctness proofs of the algorithms. Finally, examples are provided to illustrate our approach.
In the above framework, permanent faults can be modelled, but the existing method is onerous. To address this, we then introduce a new modelling approach for permanent faults that is easy to use, as well as a set of new permanent fault-tolerant definitions. These definitions are designed to capture several types of permanent fault scenarios and to ensure that our system remains controllable and nonblocking in each scenario. New definitions and scenarios were required because the previous ones were incompatible with the new permanent fault modelling approach.
We then present algorithms to verify these properties followed by complexity analyses and correctness proofs of the algorithms. An example is then provided to illustrate our approach.
Finally, we extend the above intermittent and permanent fault-tolerant approach to the timed DES setting. As before, we introduce new fault-tolerant properties and algorithms. We then provide complexity analyses and correctness proofs for the algorithms. An example is then provided to illustrate our approach. / Thesis / Doctor of Philosophy (PhD)
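The controllability and nonblocking checks central to this line of work reduce, for a single finite automaton, to reachability computations. As a rough illustration (a minimal sketch of my own, not the thesis's verification algorithms, and ignoring controllability and supervisor synchronization), a nonblocking check over an automaton with a fault event can be written as:

```python
from collections import deque

def reachable(transitions, start, forward=True):
    """BFS over a list of (source, event, target) transitions,
    either forwards or backwards."""
    seen = set(start)
    queue = deque(start)
    while queue:
        s = queue.popleft()
        for src, _evt, dst in transitions:
            a, b = (src, dst) if forward else (dst, src)
            if a == s and b not in seen:
                seen.add(b)
                queue.append(b)
    return seen

def is_nonblocking(transitions, initial, marked):
    """Nonblocking: every reachable state is coreachable, i.e. it can
    still reach some marked state."""
    reach = reachable(transitions, {initial}, forward=True)
    coreach = reachable(transitions, set(marked), forward=False)
    return reach <= coreach

# A machine where fault event 'f' has a repair path back to idle:
T = [("idle", "start", "busy"), ("busy", "done", "idle"),
     ("busy", "f", "down"), ("down", "repair", "idle")]
print(is_nonblocking(T, "idle", {"idle"}))      # True: 'down' can recover
T_bad = T[:3]                                   # drop the repair transition
print(is_nonblocking(T_bad, "idle", {"idle"}))  # False: 'down' blocks
```

The state and event names here are invented for illustration; a real check would operate on the synchronous product of the plant and supervisors.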
22.
Functional and performance analysis of discrete event network simulation tools. Musa, Ahmad S.; Awan, Irfan U. 31 March 2022.
Researchers have used the simulation technique to develop new networks and to test, modify, and optimize existing ones. The scientific community has developed a wide range of network simulators to fulfil these objectives and facilitate this creative process. However, selecting a simulator appropriate for a given purpose requires a comprehensive study of network simulators. The current literature on network simulators has limitations: only a limited number of simulators have been included in previous studies, functional and performance criteria appropriate for comparison have not been considered, and no reasonable model for selecting a suitable simulator has been presented. To overcome these limitations, we studied twenty-three existing network simulators with classifications, additional comparison parameters, system limitations, and comparisons using several criteria. / This work was supported by the Petroleum Technology Development Fund (PTDF) Nigeria with grant number PTDF/ED/PHD/MAS/179/17.
23.
Profile Driven Partitioning Of Parallel Simulation Models. Alt, Aaron J. 10 October 2014.
No description available.
24.
Autonomous finite capacity scheduling using biological control principles. Manyonge, Lawrence. January 2012.
The vast majority of research efforts in finite capacity scheduling over the past several years have focused on the generation of precise and almost exact measures for the working schedule, presupposing complete information and a deterministic environment. During execution, however, production may be subject to considerable variability, which may lead to frequent schedule interruptions. Production scheduling mechanisms are developed based on a centralised control architecture in which all of the knowledge bases and databases are modelled at the same location. This control architecture has difficulty in handling complex manufacturing systems that require knowledge and data at different locations. Adopting biological control principles refers to the process where a schedule is developed prior to the start of processing, after considering all the parameters involved at a resource, and is updated accordingly as the process executes. This research reviews best practices in gene transcription and translation control methods and adopts these principles in the development of an autonomous finite capacity scheduling control logic aimed at reducing excessive use of manual input in planning tasks. With autonomous decision-making functionality, finite capacity scheduling will, as far as practicably possible, be able to respond autonomously to schedule disruptions by deploying proactive scheduling procedures that may be used to revise or re-optimize the schedule when unexpected events occur. The novelty of this work is the ability of production resources to take decisions autonomously, in the same way that decisions are taken by autonomous entities in the process of gene transcription and translation.
The idea has been implemented by integrating simulation and modelling techniques with Taguchi analysis to investigate the contributions of finite capacity scheduling factors and to determine the ‘what if’ scenarios encountered due to the existence of variability in production processes. The control logic adopts the induction rules used in gene expression control mechanisms, as studied in biological systems. Scheduling factors are identified to that effect and are investigated to find their effects on selected performance measurements for each resource in use. How these factors are used to deal with variability in the process is a major objective of this research, since it is because of this variability that autonomous decision making becomes of interest. Although different scheduling techniques have been applied successfully in production planning and control, the results obtained from the inclusion of the autonomous finite capacity scheduling control logic have shown that significant improvement can still be achieved.
25.
Hybrid DES-based Vehicular Network Simulator with Multichannel Operations. Wang, Le. 16 April 2019.
Vehicular Ad-hoc Network (VANET) is considered to be a viable technology for inter-vehicle communications for the purpose of improving road safety and efficiency. The Enhanced Distribution Channel Access (EDCA) mechanism and multichannel operations are introduced to ensure the Quality of Service (QoS). Therefore, it is necessary to create an accurate vehicular network simulator that guarantees the vehicular communications will work as described in the protocols. A comprehensive vehicular network simulator should consider the interaction between mobility models and network protocols. In this dissertation, a novel vehicular network simulation environment, VANET Toolbox, designed using discrete-event system (DES) is presented. The APP layer DES module of the proposed simulator integrates vehicular mobility operations with message generation functions. The MAC layer DES module supports single channel and multichannel EDCA operations. The PHY layer DES module supports bit-level processing. Compared with packet-based simulators such as NS-3, the proposed PHY layer is more realistic and accurate. The EDCA scheme is evaluated and compared with the traditional Carrier-Sensing Multiple Access (CSMA) scheme, with the simulations proving that data with different priorities can coexist in the same channel. The multichannel operation for the EDCA scheme is also analyzed in this dissertation. The multichannel switching operation and coordination may cause packet dropping or increased latency to the communication. The simulations show that with heavy network traffic, multichannel communication performs better than single channel communication. From the perspective of safety-related messages, the multichannel operation is able to isolate the interference from the non-safety messages in order to achieve a better packet delivery rate and latency.
On the other hand, the non-safety messages can achieve high throughput with reasonable latency from multichannel communication under heavy-load traffic scenarios.
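At the core of any DES-based simulator like the one described above sits a time-ordered event queue. The following generic event-scheduling engine is an illustrative sketch of that pattern only, not code from VANET Toolbox; all names are my own:

```python
import heapq

class Simulator:
    """Minimal discrete-event engine: a priority queue of
    (time, seq, handler, payload) events processed in time order."""
    def __init__(self):
        self._queue, self._seq, self.now = [], 0, 0.0

    def schedule(self, delay, handler, payload=None):
        self._seq += 1  # tie-breaker keeps FIFO order at equal times
        heapq.heappush(self._queue,
                       (self.now + delay, self._seq, handler, payload))

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)

# Toy use: a node that broadcasts a beacon every 100 ms, three times.
log = []
def beacon(sim, count):
    log.append((round(sim.now, 1), count))
    if count < 3:
        sim.schedule(0.1, beacon, count + 1)

sim = Simulator()
sim.schedule(0.0, beacon, 1)
sim.run()
print(log)  # [(0.0, 1), (0.1, 2), (0.2, 3)]
```

A layered simulator in this style would register APP, MAC, and PHY handlers on the same queue, so cross-layer interactions fall out of the shared event ordering.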
26.
Where is my inhaler? : A simulation and optimization study of the Quality Control on Symbicort Turbuhaler at AstraZeneca / Var är min inhalator? : En simulerings- och optimeringsstudie på kvalitetskontrollen av Symbicort Turbuhaler vid AstraZeneca. Haddad, Shirin; Nilsson, Marie. January 2019.
Symbicort Turbuhaler is a medical device produced by the pharmaceutical company AstraZeneca for the treatment of asthma and symptoms of chronic obstructive pulmonary disease. The delivery reliability of the product is dependent on the performance of the whole supply chain, and as part of the chain the results from the Quality Control (QC) department are mandatory to release the produced batches to the market. The performance of QC is thus an important part of the supply chain. In order to reduce the risk of supply problems and market shortage, it is very important to investigate whether the performance of QC can be improved. The purpose of the thesis is to provide AstraZeneca with scientifically based data to identify sensitive parameters and readjust work procedures in order to improve the performance of QC. The goal of this thesis is to map out the flow of the QC Symbicort Turbuhaler operation and construct a model of it. The model is intended to be used to simulate and optimize different parameters, such as the inflow of batch samples, the utilization of the instrumentation and staff workload. QC is modelled in a simulation software package. The model is used to simulate and optimize different scenarios following a discrete-event simulation and an optimization technique based on evolution strategies. By reducing the number of analytical robots from 14 to 10, it is possible to maintain the existing average lead time. Through this reduction, the utilization of the robots increases while the workload decreases for some of the staff. However, it is not possible to extend the durability of the system suitability test (SST) and still achieve the existing average lead time. From the investigation of different parameters, it is found that an additional laboratory engineer at the high-performance liquid chromatography (HPLC) station has the best outcome on lead time and overall equipment effectiveness.
However, removing a laboratory engineer from the Minispice robots has the worst outcome. With the resources available today, the lead times cannot be maintained in the long run if the inflow is 35 batch samples a week or more. By adding a laboratory engineer at the HPLC station and by using an SST with a durability of 48 hours, the best outcome in terms of average lead time and the number of batch samples with a lead time of less than 10 days is obtained. / Symbicort Turbuhaler is a medical device manufactured by the pharmaceutical company AstraZeneca for the treatment of asthma and the symptoms of chronic obstructive pulmonary disease. The delivery reliability of the product depends on the performance of the entire supply chain, and as one part of that chain the results from Quality Control (QC) are mandatory for releasing a batch of the product to the market. The performance of QC is therefore an important part of the supply chain. To reduce the risk of delivery problems and product shortage on the market, it is important to investigate whether the performance of QC can be improved. The purpose of the work is to give AstraZeneca scientifically based data for identifying sensitive parameters and adjusting working methods in order to improve the performance of QC. The goal of this work is to map the flow of QC Symbicort Turbuhaler and to construct a model from that flow. The model is intended for simulating and optimizing different parameters, such as the inflow of batch samples, the utilization of instrumentation, and the workload of staff. By reducing the number of analytical robots from 14 to 10, it is possible to maintain the existing average lead time. Through this reduction the utilization of the robots increases, while the workload decreases for part of the staff. It is not possible to extend the durability of the robots' system suitability test (SST) and still achieve the existing average lead time. The investigation of different parameters indicates that an additional laboratory engineer at the high-performance liquid chromatography (HPLC) station has the best effect on lead time and production efficiency. A laboratory engineer removed from the Minispice robots has, by contrast, the worst effect. With the resources available today, the lead times cannot be maintained in the long run if the inflow is 35 batch samples per week or more. By adding a laboratory engineer at the HPLC station and using an SST with a durability of 48 hours, the best result is obtained in terms of average lead time and the number of batch samples with an individual lead time of less than 10 days.
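The abstract mentions an optimization technique based on evolution strategies. As a hedged illustration of that optimizer family (a generic (1+1)-ES with a simple success-based step-size rule, not AstraZeneca's actual model or tooling; the "lead time" function below is a made-up surrogate with its optimum at (10, 48)):

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=200, seed=0):
    """Minimal (1+1) evolution strategy: mutate the current point with
    Gaussian noise, keep the candidate if it is no worse, and adapt the
    step size (grow on success, shrink on failure)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        if fc <= fx:
            x, fx = cand, fc
            sigma *= 1.5      # success: widen the search
        else:
            sigma *= 0.85     # failure: narrow it
    return x, fx

# Hypothetical "lead time" surrogate of two tuning parameters:
def lead_time(p):
    return (p[0] - 10.0) ** 2 + ((p[1] - 48.0) / 10.0) ** 2

best, val = one_plus_one_es(lead_time, [14.0, 24.0])
print(best, val)
```

In the thesis's setting the objective would instead be an expensive discrete-event simulation run, which is why such derivative-free optimizers are a natural fit.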
27.
Discrete-Event Simulation for Hospital Resource Planning : Possibilities and Requirements. Steins, Krisjanis. January 2010.
The delivery of health care services has been under pressure due to limited funding and increasing demand. This has highlighted the need to increase not only the effectiveness but also the efficiency of health care delivery. Discrete-event simulation has been suggested as an analysis tool in health care management to support the planning of health care resources.

The overall purpose of this thesis is to investigate the possibilities and requirements for using discrete-event simulation in analyzing and planning the use of hospital resources. This is achieved by three case studies that focus on improvements in patient flow of emergency patients that require a radiology examination, intensive care unit capacity planning and operating room allocation strategies, respectively.

The first case investigates the current stage of digitization and process orientation in hospital care as a prerequisite for efficient process simulation and analysis. The study reveals an emergency-radiology patient flow process that is not very well measured and uncovers disparate information systems storing incompatible and fragmented data. These results indicate that the current degree of process orientation and the current IT infrastructure do not enable efficient use of quantitative process analysis and management tools like simulation.

In the second case the possibilities to develop generic hospital unit simulation models by building and validating a generic intensive care unit (ICU) model are explored. The results show that some of the modeling approaches described in the literature cannot replicate the actual behavior observed in all studied ICUs. It is important to identify patient groups for different admission priorities, to account for over-utilizations in the model logic, and to discover and properly model dependencies in the input data. The research shows that it is possible to develop a generic ICU simulation model that could realistically describe the performance of different real ICUs in terms of occupancy, coverage and transfers.

The value of simulation modeling in health care management is examined in the third case through the development and use of a simulation model for optimal resource allocation and patient flow in a hospital operating department. The goal of the simulation modeling in this case was to identify bottlenecks in the patient flow and to try different alternatives for allocation of operating room capacity in order to increase the utilization of operating room resources. The final model was used to evaluate four different proposed changes to operating room time allocation.
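To make the ICU capacity-planning idea concrete, the following toy model (my own illustrative sketch, not the thesis's validated generic ICU model) simulates Poisson arrivals and exponential lengths of stay against a fixed number of beds, counting admissions and transfers away from a full unit:

```python
import heapq
import random

def simulate_icu(beds, arrival_rate, mean_stay, horizon, seed=1):
    """Toy ICU occupancy model: Poisson arrivals (rate per day),
    exponential length of stay (days); a patient arriving to a
    full unit is transferred elsewhere."""
    rng = random.Random(seed)
    t = 0.0
    discharges = []          # min-heap of discharge times of occupied beds
    admitted = transferred = 0
    while True:
        t += rng.expovariate(arrival_rate)        # next arrival time
        if t >= horizon:
            break
        while discharges and discharges[0] <= t:  # free beds whose stay ended
            heapq.heappop(discharges)
        if len(discharges) < beds:
            admitted += 1
            heapq.heappush(discharges, t + rng.expovariate(1.0 / mean_stay))
        else:
            transferred += 1
    return admitted, transferred

admitted, transferred = simulate_icu(beds=8, arrival_rate=2.0,
                                     mean_stay=3.0, horizon=1000)
print(f"admitted={admitted}, transferred={transferred}")
```

The thesis's point is precisely that such a simple model is insufficient: admission priorities, over-utilization, and input-data dependencies all have to be modelled to replicate real ICU behavior.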
28.
Control of Batch Processes Based on Hierarchical Petri Nets. ONOGI, Katsuaki; KURIMOTO, Hidekazu; HASHIZUME, Susumu; ITO, Takashi; YAJIMA, Tomoyuki. 01 November 2004.
No description available.
29.
Evaluating Lean Manufacturing Proposals through Discrete Event Simulation – A Case Study at Alfa Laval. Detjens, Sönke; Flores, Erik. January 2013.
In striving for success in competitive markets, companies often turn to the Lean philosophy. However, for many companies Lean benefits are hard to substantiate, especially when their ventures have met success through traditional manufacturing approaches. Traditional Lean tools analyze current situations or help Lean implementation. Therefore, production facilities require tools that enhance the evaluation of Lean proposals in such a way that decisions are supported by quantitative data and not only by gut feeling. This thesis proposes how Discrete Event Simulation may be used as an evaluation tool in production process improvement to decide which proposal best suits Lean requirements. Theoretical and empirical studies were carried out. A literature review helped define the problem. A case study was performed at Alfa Laval to investigate, through a holistic approach, how and why this tool provided a solution to the research questions. The case study analysis was substantiated with Discrete Event Simulation models for the evaluation of current- and future-state Lean proposals. Results of this study show that Discrete Event Simulation was not designed as, and does not function as, a Lean-specific tool. The use of Discrete Event Simulation in Lean assessment applications requires the organization to understand the principles of Lean and its desired effects. However, traditional static Lean tools such as Value Stream Mapping and dynamic Discrete Event Simulation complement each other in a variety of ways. Discrete Event Simulation provides a unique ability to account for process variability and randomness. Both the measurement and the reduction of variability through simulation provide insight into Lean implementation strategies.
30.
Discrete event modelling and Simulation of an Assembly Line at GKN Driveline Köping AB. Yesilgul, Mustafa; Nasser, Firas. January 2013.
Today’s economic conditions force companies and organizations, for a variety of reasons, to work more effectively in their processes. Especially after the Second World War, owing to changing business perceptions and strong competition between companies, new terms such as productivity, flexible systems, efficiency, and lean entered the industrial engineering discipline. However, these terms also brought a new question: how are they reached? Discrete event simulation has been used as an effective method to answer this question. From this perspective, this project focuses on discrete event simulation and its role in real industrial processes. The main interest of this paper is discrete event simulation, but in this study we also try to give some detailed information about other types of simulation, such as continuous and discrete rate. This paper consists of several parts. In the beginning, the reader can find some theoretical information about simulation itself and the requirements for implementing it on real processes. Secondly, we explain the different types of simulation and the reason why we used discrete event simulation instead of continuous or discrete rate in our case study. Furthermore, one of the main aims of this research is to inform the reader about how computer support is used as a simulation tool by today’s companies. To do this, a powerful software package, ExtendSim 8, is described in detail. The reader is able to find all the information about how to create discrete event models in this software. In the case study part, the reader can find the results of the five months of work that we did between February and June at GKN Driveline Köping AB in Sweden. In these five months, we analyzed an assembly line, collected data, created a simulation model, held discussions with workers and engineers, and performed tests such as validation & verification. In this part, the reader can find all the information about the production line and the simulation model. In conclusion, the reader can find the results of the project at the end, together with a visualization of the future state. As is discussed repeatedly in the paper, validation is one of the important steps in a simulation project. Therefore, in order to assess the reliability of our simulation model, different calculations and tests were made. Last of all, some results are shown in graphs and tables in order to give better insight to the reader.