41

High performance, scalable, and expressive modeling environment to study mobile malware in large dynamic networks

Channakeshava, Karthik 18 October 2011 (has links)
Advances in computing and communication technologies are blurring the distinction between today's PCs and mobile phones. Expected growth in smartphone sales, a lack of awareness about securing these devices, and the access they provide to personal and proprietary information have resulted in a recent surge of mobile malware. In addition to traditional social-engineering vectors such as email and file sharing, malware that spreads over Bluetooth, Short Messaging Service (SMS), and Multimedia Messaging Service (MMS) messages is being used. Large-scale simulation of malware on wireless networks, under realistic device deployments, has become important for obtaining deep insights into malware dynamics and devising ways to control them. In this dissertation, we present EpiNet: an individual-based, scalable, high-performance modeling environment for simulating the spread of mobile malware over large, dynamic networks. EpiNet can be used to undertake comprehensive studies during both the planning and response phases of a malware epidemic in present and future generation wireless networks. Scalability is an important design consideration: the current EpiNet implementation scales to networks of 3-5 million devices, and case studies show that large factorial designs on million-device networks can be executed within a day on 100-node clusters. Beyond compute time, EpiNet has been designed so that analysts can easily represent a range of interventions and evaluate their efficacy. The results indicate that Bluetooth malware with a very small initial infection size will not result in a major wireless epidemic. The dynamics depend on the network structure, and activity-based mobility models or their variations can yield realistic spread dynamics. Early detection of the malware is extremely important in controlling the spread. Non-adaptive response strategies using static graph measures such as degree and betweenness are not effective. Device-based detection mechanisms provide a much better means of controlling the spread, but only when detection occurs early. Automatic signature generation can help detect newer strains of the malware, and distributing signatures through a central server results in better control of the spread. Centralized dissemination of patches must reach a large proportion of devices to be effective in slowing the spread. Non-adaptive dynamic graph measures such as vulnerability are found to be more effective. Our studies of SMS and hybrid malware show that SMS-only malware spreads slightly faster than Bluetooth-only malware and does not spread to all devices. Hybrid malware spreads orders of magnitude faster than either SMS-only or Bluetooth-only malware and can cause significant damage. Bluetooth-only malware spreads faster than SMS-only malware where the density of devices in the proximity of an infected device is higher. Hybrid malware can be much more damaging than Bluetooth-only or SMS-only malware, and mechanisms are needed to prevent such an outbreak. EpiNet provides a means to propose, implement, and evaluate response mechanisms in realistic yet safe settings. / Ph. D.
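For readers unfamiliar with individual-based malware simulation, the sketch below illustrates the general idea the abstract describes: devices meet over a sequence of proximity-contact graphs, an infected device can transmit over short range, and device-based detection removes infections. The function names, rates, and detection rule are illustrative assumptions and are not EpiNet's actual design or API.

```python
# Minimal sketch (not EpiNet): individual-based spread of a Bluetooth-like
# malware over a sequence of daily proximity-contact graphs. Rates and the
# detection rule are illustrative assumptions.
import random

def simulate_spread(daily_contacts, seed_infected,
                    p_transmit=0.05, p_detect=0.01, days=30, rng=None):
    """daily_contacts: list (one entry per day) of (u, v) proximity pairs."""
    rng = rng or random.Random(42)
    infected = set(seed_infected)   # devices currently carrying the malware
    patched = set()                 # devices where detection/patching occurred
    history = []
    for day in range(days):
        new_infections = set()
        for u, v in daily_contacts[day % len(daily_contacts)]:
            for src, dst in ((u, v), (v, u)):
                if src in infected and dst not in infected and dst not in patched:
                    if rng.random() < p_transmit:   # short-range transmission
                        new_infections.add(dst)
        infected |= new_infections
        # device-based detection: each infected device may detect and patch itself
        detected = {d for d in infected if rng.random() < p_detect}
        infected -= detected
        patched |= detected
        history.append((day, len(infected), len(patched)))
    return history

# toy usage: 100 devices, one week of random daily contact graphs, one seed device
contacts = [[(random.randrange(100), random.randrange(100)) for _ in range(200)]
            for _ in range(7)]
print(simulate_spread(contacts, seed_infected={0})[-1])
```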
42

Discrete Event Simulation of Mobility and Spatio-Temporal Spectrum Demand

Chandan, Shridhar 05 February 2014 (has links)
Realistic mobility and cellular traffic modeling is key to various wireless networking applications and has a significant impact on network performance. Planning and design, network resource allocation, and performance evaluation in cellular networks all require realistic traffic modeling. We propose a Discrete Event Simulation framework, Diamond (Discrete Event Simulation of Mobility and Spatio-Temporal Spectrum Demand), to model and analyze realistic activity-based mobility and spectrum demand patterns. The framework can be used for spatio-temporal estimation of load, for deciding the location of a new base station, for contingency planning, and for estimating the resilience of the existing infrastructure. The novelty of this framework lies in its ability to capture a variety of complex, realistic, and dynamically changing events effectively. Our initial results show that the framework can be instrumental in contingency planning and dynamic spectrum allocation. / Master of Science
43

Developing a Discrete Event Simulation Methodology to support a Six Sigma Approach for Manufacturing Organization - Case study.

Hussain, Anees, Munive-Hernandez, J. Eduardo, Campean, Felician 17 March 2019 (has links)
Yes / Competition in the manufacturing industry is growing at an accelerated rate due to the trend of globalization. This global competition urges manufacturing organizations to review and improve their processes in order to enhance and maintain their competitive advantage. One such initiative is the implementation of the Six Sigma methodology to analyze and reduce variation, thereby improving the processes of manufacturing organizations. This paper presents a Discrete Event Simulation methodology to support a Six Sigma approach for manufacturing organizations. Several approaches to implementing Six Sigma focus on improving time management and reducing cycle time; however, these efforts may fail in their effective and practical implementation to achieve the desired results. Following the proposed methodology, a Discrete Event Simulation model was built to assist decision makers in understanding the behavior of the current manufacturing process. This approach helps to systematically define, measure, and analyze the current-state process and to test different scenarios for improving performance. The paper is among the first to offer a simulation methodology to support a process improvement approach. It applies an action research strategy to develop and validate the proposed modelling methodology in a British manufacturing organization competing in global markets.
44

Rollback Reduction Techniques Through Load Balancing in Optimistic Parallel Discrete Event Simulation

Sarkar, Falguni 05 1900 (has links)
Discrete event simulation is an important tool for modeling and analysis. Some simulation applications, such as telecommunication network performance, VLSI logic circuit design, and battlefield simulation, require enormous amounts of computing resources. One way to satisfy this demand for computing power is to decompose the simulation system into several logical processes (LPs) and run them concurrently. In any parallel discrete event simulation (PDES) system, the events are ordered according to their time of occurrence; for the simulation to be correct, this ordering has to be preserved. There are three approaches to maintaining this ordering. In a conservative system, no LP executes an event unless it is certain that all events with earlier time-stamps have been executed. Such systems are prone to deadlock. In an optimistic system, on the other hand, simulation progresses disregarding this ordering and saves the system states regularly. Whenever a causality violation is detected, the system rolls back to a state saved earlier and restarts processing after correcting the error. In a third approach, all the LPs participate in the computation of a safe time-window, and all events with time-stamps within this window are processed concurrently. In optimistic simulation systems there is a global virtual time (GVT), the minimum of the time-stamps of all events existing in the system. The system cannot roll back to a state prior to GVT, so all such states can be discarded. GVT is used for memory management, load balancing, termination detection, and committing of events; however, GVT computation introduces additional overhead. In optimistic systems, a large number of rollbacks can degrade system performance considerably. We have studied the effect of load balancing in reducing the number of rollbacks in such systems. We have designed three load balancing algorithms and implemented two of them on a network of workstations; the third has been analyzed probabilistically. The reason for choosing a network of workstations is its low cost and the availability of efficient message-passing software such as PVM and MPI. All of these load balancing algorithms piggyback on the existing GVT computation algorithms and try to balance the speed of simulation across the LPs. We have also designed an optimal GVT computation algorithm for hypercubes and studied its performance against other GVT computation algorithms by simulating a hypercube in our network cluster. We use the topological properties of a star network to design an algorithm for computing a safe time-window for parallel discrete event simulation. We have analyzed and simulated the behavior of an open queuing network resembling such an architecture. Our algorithm is also extended to hierarchical stars and to recursive window computation.
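The GVT mechanism described above can be sketched in a few lines. The code below is an illustrative simplification, not the dissertation's algorithm: GVT is taken as the minimum over every LP's pending event time-stamps and the time-stamps of messages still in transit, and saved states older than GVT are fossil-collected.

```python
# Illustrative sketch of GVT computation and fossil collection in an
# optimistic PDES (simplified; not the dissertation's algorithm).
from dataclasses import dataclass, field

@dataclass
class LogicalProcess:
    name: str
    pending: list = field(default_factory=list)        # time-stamps of unprocessed events
    saved_states: list = field(default_factory=list)   # (time, state) checkpoints

    def next_event_time(self):
        return min(self.pending, default=float("inf"))

def compute_gvt(lps, in_transit_timestamps):
    """Lower bound on any time-stamp the simulation can still roll back to."""
    candidates = [lp.next_event_time() for lp in lps] + list(in_transit_timestamps)
    return min(candidates, default=float("inf"))

def fossil_collect(lps, gvt):
    """Discard saved states older than GVT, keeping the newest pre-GVT state
    so a rollback to exactly GVT remains possible."""
    for lp in lps:
        keep = [s for s in lp.saved_states if s[0] >= gvt]
        older = [s for s in lp.saved_states if s[0] < gvt]
        if older:
            keep.append(max(older, key=lambda s: s[0]))
        lp.saved_states = sorted(keep)

# toy usage
a = LogicalProcess("a", pending=[12.0], saved_states=[(1.0, "s1"), (9.0, "s2")])
b = LogicalProcess("b", pending=[10.5], saved_states=[(2.0, "t1"), (11.0, "t2")])
gvt = compute_gvt([a, b], in_transit_timestamps=[11.2])
fossil_collect([a, b], gvt)
print(gvt, a.saved_states, b.saved_states)
```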
45

Thread Safe Multi-Tier Priority Queue for Managing Pending Events in Multi-Threaded Discrete Event Simulations

DePero, Matthew Michael 28 August 2018 (has links)
No description available.
46

Optimization approaches for designing baseball scout networks under uncertainty

Ozlu, Ahmet Oguzhan 27 May 2016 (has links)
Major League Baseball (MLB) is a 30-team North American professional baseball league, and Minor League Baseball (MiLB) is the hierarchy of developmental professional baseball teams for MLB. Most MLB players first develop their skills in MiLB, and MLB teams employ scouts, experts who evaluate the strengths, weaknesses, and overall potential of these players. In this dissertation, we study the problem of designing a scouting network for an MLB team. We introduce the problem to the operations research literature to help teams make strategic and operational level decisions when managing their scouting resources. The thesis consists of three chapters that address decisions such as how the scouts should be assigned to the available MiLB teams, how the scouts should be routed around the country, how many scouts are needed to perform the major scouting tasks, whether there are trade-offs between the scouting objectives, and, if so, what the resulting outcomes and insights are. In the first chapter, we study the problem of assigning and scheduling minor league scouts for MLB teams. There are multiple objectives in this problem. We formulate the problem as an integer program, use decomposition and both column-generation-based and problem-specific heuristics to solve it, and evaluate policies on multiple objective dimensions based on 100 bootstrapped season schedules. Our approach can allow teams to improve operationally by finding better scout schedules, to understand quantitatively the strategic trade-offs inherent in scout assignment policies, and to select the assignment policy whose strategic and operational performance best meets their needs. In the second chapter, we study the problem under uncertainty. In reality we observe that there are always disruptions to the schedules: players are injured, scouts become unavailable, games are delayed due to bad weather, etc. We present a minor league baseball season simulator that generates random disruptions to the scouts' schedules and uses optimization-based heuristic models to recover the disrupted schedules. We evaluate the strategic benefits of different policies for team-to-scout assignment using the simulator. Our results demonstrate that the deterministic approach is insufficient for evaluating the benefits and costs of each policy, and that a simulation approach is also much more effective at determining the value of adding an additional scout to the network. The real scouting network design instances we solve in the first two chapters have several detailed complexities that can make them hard to study, such as idle-day constraints, varying season lengths, off days for teams in the schedule, days where some teams play and others do not, etc. In the third chapter, we analyze a simplified version of the Single Scout Problem (SSP), stripping away much of the real-world complexity that complicates SSP instances. Even for this stylized, archetypal version of SSP, we find that small instances can be computationally difficult. We show, by reduction from the Minimum Cost Hamiltonian Path Problem, that the archetypal version of SSP is NP-complete, even without all of the additional complexity introduced by real scheduling and scouting operations.
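As a rough illustration of the assignment flavour of this problem (not the dissertation's actual formulation, which also involves scheduling, routing, and multiple objectives), a toy integer program in PuLP might look like the following; the prospect values and per-scout capacity are invented for the example.

```python
# Toy scout-to-team assignment IP (illustrative only; values are assumptions).
import pulp

scouts = ["s1", "s2"]
teams = ["A", "B", "C", "D"]
value = {"A": 5, "B": 3, "C": 4, "D": 2}   # assumed prospect value per team
capacity = 2                                # assumed max teams per scout

prob = pulp.LpProblem("scout_assignment", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", (scouts, teams), cat="Binary")

# maximise total prospect value of the teams that get covered
prob += pulp.lpSum(value[t] * x[s][t] for s in scouts for t in teams)
for t in teams:                             # each team covered by at most one scout
    prob += pulp.lpSum(x[s][t] for s in scouts) <= 1
for s in scouts:                            # each scout covers at most `capacity` teams
    prob += pulp.lpSum(x[s][t] for t in teams) <= capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(s, t) for s in scouts for t in teams if x[s][t].value() == 1])
```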
47

Understanding the effects of different levels of product monitoring on maintenance operations : a simulation approach

Alabdulkarim, Abdullah A. January 2013 (has links)
The move towards integrating products and services has increased significantly. As a result, business models such as Product Service Systems (PSS) have been developed. PSS emphasises the sale of the use of a product rather than the sale of the product itself. In this case, product ownership lies with the manufacturers/suppliers, and customers are provided with a capable and available product for their use. In PSS, manufacturers/suppliers are penalised for any downtime of their product according to the PSS contract. This has put pressure on the service providers (maintenance teams) to assure the availability of their products in use, and the pressure increases as products are scattered across remote customer locations. Authors have urged that different product monitoring levels be applied to enable service providers to monitor their products remotely, allowing maintenance to be performed accordingly. They claim that by adopting these monitoring levels, product performance will increase. Their claim is based on reasoning, not on experimental/empirical methods. Therefore, further experimental research is required to observe the effect of such monitoring levels on the complex maintenance operations system as a whole, including, for example, product location, different types of failure, labour skills and locations, travel times, and spare part inventory. In the literature, monitoring levels have been classified as Reactive, Diagnostics, and Prognostics. This research aims to better understand and evaluate the complex maintenance operations of a product in use under different levels of product monitoring, using a Discrete Event Simulation (DES) approach. A discussion of the suitability of DES over other techniques is provided; DES proves suitable for giving a better understanding of the effect of product monitoring levels on the wider maintenance system. The requirements for simulating a complex maintenance operation have been identified and documented. Two approaches are applied to gather these generic requirements: the first identifies requirements for modelling complex maintenance operations from a literature review, and this is followed by interviews with academics and industrial practitioners to elicit further requirements not captured in the literature. As a result, a generic conceptual model is assimilated. Simulation modules are built in the Witness software package to represent the different product monitoring levels (Reactive, Diagnostics, and Prognostics), and these modules are then linked with resources (e.g. labour, tools, and spare parts). To ensure ease of use and rapid building of such a complex maintenance system from these modules, an Excel interface is developed, named Product Monitoring Levels Simulation (PMLS). The developed PMLS tool is demonstrated and tested for validation purposes: three industrial case studies are presented, and different experiments are carried out to better understand the effect of different product monitoring levels on complex maintenance operations. Face-to-face validation with the case companies is conducted, followed by an expert validation workshop. This work presents a novel Discrete Event Simulation (DES) approach developed to support maintenance operations decision makers in selecting the appropriate product monitoring level for their particular operation. This unique approach provides numerical evidence, showing that a higher product monitoring level does not always guarantee higher product availability.
48

A generic framework for hybrid simulation in healthcare

Chahal, Kirandeep January 2010 (has links)
Healthcare problems are complex; they exhibit both detail and dynamic complexity. It has been argued that Discrete Event Simulation (DES), with its ability to capture detail, is ideal for problems exhibiting this type of complexity. On the other hand, System Dynamics (SD), with its focus on feedback and nonlinear relationships, lends itself naturally to comprehending dynamic complexity. Although these modelling paradigms provide valuable insights, neither of them is proficient in capturing both detail and dynamic complexity to the same extent. It has been argued in the literature that a hybrid approach, wherein SD and DES are integrated symbiotically, will provide a more realistic picture of complex systems with fewer assumptions and less complexity. In spite of wide recognition of healthcare as a complex, multi-dimensional system, there has not been any reported study which utilises hybrid simulation. This could be attributed to the fact that, due to fundamental differences between the paradigms, mixing the methodologies is quite challenging. In order to overcome these challenges, a generic theoretical framework for hybrid simulation is required; however, no such generic framework presently provides guidance on integrating SD and DES to form hybrid models. This research attempts to provide such a framework for hybrid simulation which can be utilised in the healthcare domain. On the basis of knowledge induced from the literature, three requirements for the generic framework have been established. It is argued that a framework for hybrid simulation should be able to answer Why (why hybrid simulation is required), What (what information is exchanged between the SD and DES models), and How (how the SD and DES models interact with each other over time to exchange information) within the context of applying hybrid simulation to different problem scenarios. In order to meet these requirements, a three-phase generic framework for hybrid simulation is proposed. Each phase of the framework is mapped to an established requirement and provides guidelines for addressing that requirement. The proposed framework is evaluated theoretically, based on its ability to meet these requirements, using multiple cases, and modified accordingly. It is further evaluated empirically with a single case study comprising the Accident and Emergency department of a London district general hospital. The purpose of this empirical evaluation is to identify the limitations of the framework with regard to the implementation of hybrid models. It is realised during implementation that the modified framework has certain limitations pertaining to the exchange of information between the SD and DES models; these limitations are reflected upon and addressed in the final framework. The main contribution of this thesis is the generic framework for hybrid simulation, which has been applied within a healthcare context. Through an extensive review of the existing literature on hybrid simulation, the thesis also contributes to knowledge on multi-method approaches. A further contribution is an attempt to quantify the impact of the intangible benefits of information systems in terms of tangible business process improvements. It is expected that this work will encourage those engaged in simulation (e.g., researchers, practitioners, decision makers) to realise the potential of cross-fertilisation of the two simulation paradigms.
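One concrete way to picture the What and How questions is a small loop in which an SD stock-and-flow model and a DES queue exchange information at fixed synchronisation points. The sketch below is an assumed structure for illustration only; the variables, update rules, and synchronisation scheme are not taken from the thesis.

```python
# Illustrative SD-DES coupling: an SD demand model feeds an arrival rate to a
# DES queue, which feeds congestion back to the SD model each period.
import random

def sd_step(demand, congestion, dt=1.0, growth=0.02, deterrence=0.05):
    """System-dynamics update (Euler step): demand grows but is damped by the
    congestion fed back from the DES model; floor keeps the rate positive."""
    return max(0.05, demand + dt * (growth * demand - deterrence * congestion))

def des_run(arrival_rate, service_rate=1.2, horizon=100.0, rng=None):
    """Very small single-server DES: returns mean queue length (congestion)."""
    rng = rng or random.Random(0)
    t, next_arrival, next_departure = 0.0, rng.expovariate(arrival_rate), float("inf")
    queue, area = 0, 0.0
    while t < horizon:
        t_next = min(next_arrival, next_departure, horizon)
        area += queue * (t_next - t)
        t = t_next
        if t == horizon:
            break
        if next_arrival <= next_departure:          # arrival event
            queue += 1
            next_arrival = t + rng.expovariate(arrival_rate)
            if queue == 1:
                next_departure = t + rng.expovariate(service_rate)
        else:                                       # departure event
            queue -= 1
            next_departure = (t + rng.expovariate(service_rate)
                              if queue > 0 else float("inf"))
    return area / horizon

# "How": alternate the two models over synchronisation intervals, exchanging
# demand (SD -> DES) and congestion (DES -> SD).
demand, congestion = 1.0, 0.0
for period in range(5):
    congestion = des_run(arrival_rate=demand)
    demand = sd_step(demand, congestion)
    print(period, round(demand, 3), round(congestion, 3))
```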
49

Simulation modelling of distributed-shared memory multiprocessors

Marurngsith, Worawan January 2006 (has links)
Distributed shared memory (DSM) systems have been recognised as a compelling platform for parallel computing due to their programming advantages and scalability. DSM systems allow applications to access data in a logically shared address space by abstracting away the distinction of physical memory location. As the location of data is transparent, the sources of overhead caused by accessing distant memories are difficult to analyse. This memory locality problem has been identified as crucial to DSM performance. Many researchers have investigated the problem using simulation as a tool for conducting experiments, resulting in the progressive evolution of DSM systems. Nevertheless, both the diversity of architectural configurations and the rapid advance of DSM implementations impose constraints on simulation model design in two respects: the limitation of the simulation framework on model extensibility, and the lack of verification applicability during a simulation run, which delays the verification process. This thesis studies simulation modelling techniques for memory locality analysis of various DSM systems implemented on top of a cluster of symmetric multiprocessors. The thesis presents a simulation technique to promote model extensibility and proposes a technique for verification applicability called Specification-based Parameter Model Interaction (SPMI). The proposed techniques have been implemented in a new interpretation-driven simulator called DSiMCLUSTER, built on top of a discrete event simulation (DES) engine known as HASE. Experiments have been conducted to determine which factors are most influential on the degree of locality and to determine the possibility of maximising the stability of performance. DSiMCLUSTER has been validated against a SunFire 15K server, achieving cache-miss results within an average of ±6% of the measured values, with a worst case of less than 15% difference. These results confirm that the techniques used in developing DSiMCLUSTER contribute to achieving both (a) a highly extensible simulation framework that can keep up with the ongoing innovation of DSM architectures, and (b) verification applicability, resulting in an efficient framework for memory-analysis experiments on DSM architectures.
50

Simulation modeling for the impact of triage liaison physician on emergency department to reduce overcrowding

Yang, Jie 03 January 2017 (has links)
Emergency department (ED) overcrowding has been a common complaint in Emergency Medicine in Canada for many years. Its adverse effects, such as prolonged waiting times, cause patient dissatisfaction and compromise patient safety. Previous studies indicate that adding a physician in triage (PIT) can increase accuracy and efficiency in the initial process of patient evaluation. However, the scientific evidence of the impact of PIT on the ED is far from sufficient to justify its widespread implementation. This research searches for solutions using PIT to identify areas of improvement in ED patient flow, based upon a validated discrete-event simulation (DES) model. As an efficient decision-making tool, the DES model also helps to develop an understanding of current ED performance and to quantitatively test various design alternatives for ED operations. / February 2017
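A minimal SimPy sketch of the kind of model the abstract describes is shown below: patients pass through triage, and when a triage physician is present a fraction of low-acuity patients are resolved at triage instead of joining the main ED queue. The resource counts, service times, and the 20% resolution fraction are illustrative assumptions, not the thesis's validated parameters.

```python
# Illustrative SimPy sketch (assumed parameters): ED flow with and without a
# physician in triage (PIT) who can discharge some low-acuity patients early.
import random
import simpy

def patient(env, ed, waits, pit_present, rng):
    arrive = env.now
    with ed["triage_nurse"].request() as req:
        yield req
        yield env.timeout(rng.expovariate(1 / 5))        # ~5 min triage
    if pit_present and rng.random() < 0.2:               # assumed: PIT resolves 20%
        with ed["triage_physician"].request() as req:
            yield req
            yield env.timeout(rng.expovariate(1 / 10))   # brief assessment, discharge
        waits.append(env.now - arrive)
        return
    with ed["ed_physician"].request() as req:             # main ED treatment
        yield req
        yield env.timeout(rng.expovariate(1 / 30))
    waits.append(env.now - arrive)

def run_ed(pit_present, seed=1, horizon=8 * 60):
    rng = random.Random(seed)
    env = simpy.Environment()
    ed = {"triage_nurse": simpy.Resource(env, capacity=2),
          "ed_physician": simpy.Resource(env, capacity=3),
          "triage_physician": simpy.Resource(env, capacity=1)}
    waits = []

    def arrivals():
        while True:
            yield env.timeout(rng.expovariate(1 / 6))     # one arrival every ~6 min
            env.process(patient(env, ed, waits, pit_present, rng))

    env.process(arrivals())
    env.run(until=horizon)
    return sum(waits) / len(waits) if waits else float("nan")

print("mean time in ED without PIT:", round(run_ed(False), 1), "min")
print("mean time in ED with PIT:   ", round(run_ed(True), 1), "min")
```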
