71

Framework for robust design: a forecast environment using intelligent discrete event simulation

Beisecker, Elise K. 29 March 2012
The US Navy is shifting to power projection from the sea, which stresses the capabilities of its current fleet and exposes a need for a new surface connector. The design of complex systems in the presence of changing requirements, rapidly evolving technologies, and operational uncertainty continues to be a challenge. Furthermore, the design of future naval platforms must take into account the interoperability of a variety of heterogeneous systems and their role in a larger system-of-systems context. To date, methodologies to address these complex interactions and optimize the system at the macro level have lacked a clear direction and structure and have largely been conducted in an ad hoc fashion. Traditional optimization has centered around individual vehicles with little regard for the impact on the overall system. A key enabler in designing a future connector is the ability to rapidly analyze technologies and perform trade studies using a system-of-systems level approach. The objective of this work is a process that can quantitatively assess the impacts of new capabilities and vessels at the system-of-systems level. This new methodology must be able to investigate diverse, disruptive technologies acting on multiple elements within the system-of-systems architecture. Illustrated through a test case for a Medium Exploratory Connector (MEC), the method must be capable of capturing the complex interactions between elements and the architecture and must be able to assess the impacts of new systems. Following a review of current methods, six gaps were identified, including the need to break the problem into subproblems in order to incorporate a heterogeneous, interacting fleet, dynamic loading, and dynamic routing. For the robust selection of design requirements, analysis must be performed across multiple scenarios, which requires the method to include parametric scenario definition. The identified gaps are investigated and methods recommended to address them, enabling overall operational analysis across scenarios. Scenarios are fully defined by a scheduled set of demands, distances between locations, and physical characteristics that can be treated as input variables. Introducing matrix manipulation into discrete event simulations enables the abstraction of sub-processes at an object level and reduces the effort required to integrate new assets. Incorporating these linear algebra principles enables resource management for individual elements and abstraction of decision processes. Although the run time is slightly greater than with traditional if-then formulations, the gain in data-handling ability enables the abstraction of loading and routing algorithms. The loading and routing problems are abstracted, and solution options are developed and compared. Realistic loading of vessels and other assets is needed to capture the cargo delivery capability of the modeled mission. The dynamic loading algorithm is based on the traditional knapsack formulation, where a linear program is formulated using the lift and area of the connector as constraints. The schedule of demands from the scenarios provides additional constraints and the reward equation. The available cargo is distributed between cargo sources, so an assignment problem formulation is added to the linear program, requiring the cargo selected to load on a single connector to be available from a single load point.
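To make the knapsack-style loading step concrete, the sketch below sets up the connector's lift and area limits as linear constraints and maximizes a cargo reward; the cargo figures, connector limits, and reward weights are invented, and an LP relaxation solved with SciPy stands in for the thesis's actual formulation.

```python
# Minimal sketch (not the thesis's code): dynamic loading as a knapsack-style
# linear program, with the connector's lift and deck area as constraints.
# All numbers are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

# Hypothetical cargo queued at one load point: columns are weight (tons),
# deck area (m^2), and delivery reward derived from the scenario's demand schedule.
cargo = np.array([
    [12.0, 30.0, 5.0],
    [ 8.0, 18.0, 3.0],
    [20.0, 45.0, 9.0],
    [ 5.0, 10.0, 2.0],
])
max_lift, max_area = 30.0, 70.0   # assumed connector limits

# Maximize total reward subject to lift and area limits. linprog minimizes,
# so the reward is negated; x_i in [0, 1] is the fraction of item i loaded
# (an LP relaxation of the 0/1 knapsack).
res = linprog(
    c=-cargo[:, 2],
    A_ub=cargo[:, :2].T,          # one row of coefficients per constraint
    b_ub=[max_lift, max_area],
    bounds=[(0, 1)] * len(cargo),
    method="highs",
)
print("load fractions:", res.x, "total reward:", -res.fun)
```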
Dynamic routing allows a reconfigurable supply chain to maintain robust and flexible operation in response to changing customer demands and operating environment. Algorithms based on vehicle routing and computer packet routing are compared across five operational scenarios, testing the algorithms' ability to route connectors without introducing additional wait time. Predicting the wait times at interfaces based on the connectors en route, and reconsidering which interface to use upon arrival, performed consistently, especially when stochastic load times were introduced, and is expandable to large-scale applications. This algorithm selects the quickest load-unload location pairing based on the connectors routed to those locations and the interfaces selected for those connectors. A future connector could have the ability to unload at multiple locations if a single load exceeds the demand at an unload location. The capability for multiple unload locations is considered a special case in the calculation of the unload location in the routing. To determine the unload locations to visit, a traveling salesman formulation is added to the dynamic loading algorithm; balancing the cost to travel to and unload at locations against the additional cargo that could be delivered, the order and locations to visit are selected. Predicting the workload at load and unload locations to route vessels, with reconsideration to handle disturbances, can include multiple unload locations and creates a robust and flexible routing algorithm. The incorporation of matrix manipulation, dynamic loading, and dynamic routing enables the robust investigation of the design requirements for a new connector. The robust process uses shortfall, capturing the delay and lack of cargo delivered, and fuel usage as measures of performance. The design parameters for the MEC, including the number available and vessel characteristics such as speed and size, were analyzed across four ways of testing the noise space: a single scenario, a selected number of scenarios, full coverage of the noise space, and the feasible noise space. The feasible noise space is defined using uncertainty around scenarios of interest. The number available, maximum lift, maximum area, and SES speed were consistently design drivers. There was a trade-off between the number available and size, along with speed. When looking at the feasible space, the relationship between size and number available was strong enough to reverse the preference on the number available, toward fewer, larger ships. Secondary design impacts came from factors that directly affected the time per trip, such as the time between repairs and the time to repair. As the noise sampling moved from four scenarios to full coverage to the feasible space, the option to use interfaces lost importance, while the time to load at these locations and the time to unload at the beach gained importance. The change in impact can be attributed to the reduction in the number of trips needed under the feasible space: the four scenarios had higher average demand than the feasible space sampling, making loading options more important. The selection of the noise sampling had an impact on the design requirements selected for the MEC, indicating the importance of developing a method to investigate future naval assets across multiple scenarios at a system-of-systems level.
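Along the same lines, a minimal sketch of the wait-time-prediction routing rule is shown below; the interfaces, predicted queue times, and travel times are hypothetical, and the actual algorithm additionally reconsiders the choice on arrival and handles multiple unload locations.

```python
# Minimal sketch (hypothetical data): choose the quickest load-unload interface
# pairing based on predicted wait at each interface plus travel time.
from itertools import product

# Predicted wait (minutes) at each interface, e.g. summed load/unload times of
# connectors already routed there.
predicted_wait = {"seabase_crane": 25, "seabase_ramp": 10, "beach_A": 15, "beach_B": 40}
travel_time = {
    ("seabase_crane", "beach_A"): 55, ("seabase_crane", "beach_B"): 70,
    ("seabase_ramp", "beach_A"): 60, ("seabase_ramp", "beach_B"): 65,
}

def quickest_pair(loads, unloads):
    """Return the load/unload pairing with the smallest predicted total time."""
    return min(
        product(loads, unloads),
        key=lambda pair: predicted_wait[pair[0]] + travel_time[pair] + predicted_wait[pair[1]],
    )

print(quickest_pair(["seabase_crane", "seabase_ramp"], ["beach_A", "beach_B"]))
```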
72

Building a new production line : Problems, pitfalls and how to gain social sustainability

Telander, Andreas, Fahlgren, Jessica January 2015
This thesis has been performed in collaboration with Volvo Cars Engine in Skövde, Sweden and Zhangjiakou, China in order to receive a bachelor's degree in automation engineering from the University of Skövde. The project focuses on analyzing the capacity of a future production line using discrete event simulation. The production line is modeled in two different discrete event simulation software packages, FACTS Analyzer and Plant Simulation. The focus of the study is to compare the output results from the two packages in order to recommend which software to use, giving Volvo Cars Corporation a basis for further work in similar cases. The aim of the work is to verify the planned capacity of the new production line, to perform a leadership study with Chinese engineers in order to find out how they view Swedish leadership and how it can be adapted to China and the Chinese culture, and to give recommendations for future work. The results of the capacity analysis show that the targets for parts produced can be reached for both planned capacities, but also that potential constraints have been identified in the system. The results of the leadership study show that the overall approach should be slightly adapted to be better suited to the Chinese culture. The comparison of the two simulation packages suggests that FACTS Analyzer is suitable when less complex logic or systems are represented, whereas Plant Simulation is more suitable for building more complex models with more complex logic.
73

The Effects of Altering Discharge Policies to Alternate Level of Care Patient Flow

Grover, Lata 20 November 2012
Alternate Level of Care (ALC) patients are patients who stay in the acute care setting while waiting to be transferred to an ALC facility. They are not receiving the appropriate type of care and are occupying acute care resources. ALC patients occupy 5,200 patient beds every day in Canada, and 12 percent of these ALC patients die during their waiting period. This study evaluates Toronto General Hospital's (TGH) discharge policy in the General Surgery and General Internal Medicine (GIM) departments using a discrete-event simulation. For long-term care ALC patients, it was found that submitting one extra application, or maximizing the number of short-waiting-list facilities among their applications, significantly reduces the number of ALC days and the number of patients who die in hospital. Knowing whether discharge policies can decrease ALC days is significant not only to TGH but also to other health care institutions.
75

Linkage of Truck-and-shovel Operations to Short-term Mine Plans Using Discrete Event Simulation

Torkamani, Elmira Unknown Date
No description available.
76

Improving Emergency Department performance using Discrete-event and Agent-based Simulation

Kaushal, Arjun 14 February 2014
This thesis investigates the causes of the long wait times for patients in the Emergency Department (ED) of Victoria General Hospital and suggests changes for improvement. Two prominent simulation techniques have been used to replicate the ED in a simulation model: discrete-event simulation (DES) and agent-based modeling (ABM). While DES provides the basic modeling framework, ABM has been used to incorporate human behaviour in the ED. The patient flow in the ED has been divided into three phases: input, throughput, and output. Model results show that there could be multiple interventions to reduce the time taken to be seen by a doctor for the first time (also called WTBS), either in the output phase or in the input phase. The model is able to predict that a reduction in the output phase would cause a reduction in WTBS, but it is not equipped to suggest how this reduction can be achieved. To reduce WTBS by making interventions in the input phase, this research proposes a strategy called fast-track treatment (FTT), which allows the model to dynamically re-allocate resources when needed to alleviate high WTBS. Results show that FTT can reduce WTBS times by up to 40%.
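As a rough illustration of the kind of threshold-triggered re-allocation the FTT strategy describes, here is a minimal SimPy sketch; the arrival rate, consultation times, resource counts, and queue threshold are invented, and the actual model is far richer (acuity levels, agent behaviour, and the three flow phases).

```python
# Minimal sketch (invented parameters): divert low-acuity patients to a
# fast-track stream when the main queue is long, and record WTBS.
import random
import simpy

QUEUE_THRESHOLD = 5  # assumed trigger for the fast-track diversion

def patient(env, acuity, doctors, fast_track, wtbs_log):
    arrive = env.now
    use_ft = acuity == "low" and len(doctors.queue) > QUEUE_THRESHOLD
    stream = fast_track if use_ft else doctors
    with stream.request() as req:
        yield req
        wtbs_log.append(env.now - arrive)              # waiting time to be seen
        yield env.timeout(random.expovariate(1 / 20))  # assumed 20 min mean consult

def arrivals(env, doctors, fast_track, wtbs_log):
    while True:
        yield env.timeout(random.expovariate(1 / 6))   # assumed 6 min mean inter-arrival
        acuity = "low" if random.random() < 0.4 else "high"
        env.process(patient(env, acuity, doctors, fast_track, wtbs_log))

env = simpy.Environment()
doctors = simpy.Resource(env, capacity=3)      # main treatment stream
fast_track = simpy.Resource(env, capacity=1)   # fast-track stream
wtbs = []
env.process(arrivals(env, doctors, fast_track, wtbs))
env.run(until=8 * 60)                          # one simulated 8-hour shift
print("mean WTBS (min):", sum(wtbs) / max(len(wtbs), 1))
```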
77

Understanding the effects of different levels of product monitoring on maintenance operations: A simulation approach

Alabdulkarim, Abdullah A. 10 1900
The move towards integrating products and services has increased significantly. As a result, business models such as Product Service Systems (PSS) have been developed. PSS emphasises the sale of the use of a product rather than the sale of the product itself. In this case, product ownership lies with the manufacturers/suppliers, and customers are provided with a capable and available product for their use. In PSS, manufacturers/suppliers are penalised for any downtime of their product according to the PSS contract. This has placed pressure on the service providers (maintenance teams) to assure the availability of their products in use, and the pressure increases as products are scattered across remote places (customer locations). Authors have urged that different product monitoring levels be applied to enable service providers to monitor their products remotely, allowing maintenance to be performed accordingly. They claim that by adopting these monitoring levels, product performance will increase; however, this claim is based on reasoning, not on experimental or empirical methods. Therefore, further experimental research is required to observe the effect of such monitoring levels on complex maintenance operations systems as a whole, including, for example, product locations, different types of failure, labour skills and locations, travel times, and spare part inventory. In the literature, monitoring levels have been classified as Reactive, Diagnostics, and Prognostics. This research aims to better understand and evaluate the complex maintenance operations of a product in use under different product monitoring levels using a Discrete Event Simulation (DES) approach. A discussion of the suitability of DES over other techniques is provided; DES proved suitable for giving a better understanding of the effect of product monitoring levels on the wider maintenance system. The requirements for simulating a complex maintenance operation have been identified and documented. Two approaches are applied to gather these generic requirements: first, identifying the requirements for modelling complex maintenance operations through a literature review, and second, conducting interviews with academics and industrial practitioners to capture further requirements not found in the literature. As a result, a generic conceptual model is developed. Simulation modules are built in the Witness software package to represent the different product monitoring levels (Reactive, Diagnostics, and Prognostics). These modules are then linked with resources (e.g. labour, tools, and spare parts). To ensure ease of use and rapid construction of such a complex maintenance system from these modules, an Excel interface named the Product Monitoring Levels Simulation (PMLS) tool is developed. The PMLS tool is then demonstrated and tested for validation purposes: three industrial case studies are presented and different experiments are carried out to better understand the effect of different product monitoring levels on complex maintenance operations. Face-to-face validation with the case companies is conducted, followed by an expert validation workshop. This work presents a novel Discrete Event Simulation (DES) approach developed to support maintenance operations decision makers in selecting the appropriate product monitoring level for their particular operation. This unique approach provides numerical evidence that a higher product monitoring level does not always guarantee higher product availability.
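One simple way to picture why the three monitoring levels can differ in effect is through the downtime components each incurs per failure, as in the sketch below; the time values and the mapping of levels to components are assumptions for illustration, not results or logic from the study.

```python
# Minimal sketch (all durations hypothetical): per-failure downtime under the
# three monitoring levels named in the study.
TRAVEL = 8        # hours for a technician to reach the remote customer site
DIAGNOSE = 4      # on-site fault finding, avoided if the fault is known remotely
SPARES_WAIT = 24  # wait for a spare ordered only once the fault is known
REPAIR = 6        # hands-on repair time

def downtime(level):
    """Rough per-failure downtime for each monitoring level (illustrative only)."""
    if level == "reactive":      # failure noticed only when the product stops
        return TRAVEL + DIAGNOSE + SPARES_WAIT + REPAIR
    if level == "diagnostics":   # fault identified remotely; travel and spares start at failure
        return max(TRAVEL, SPARES_WAIT) + REPAIR
    if level == "prognostics":   # advance warning; travel and spares arranged beforehand
        return REPAIR
    raise ValueError(level)

for level in ("reactive", "diagnostics", "prognostics"):
    print(level, downtime(level), "hours")
```

In a full simulation these components interact with labour, tool, and spare-part availability across many product locations, which is part of why a higher monitoring level does not automatically translate into higher availability.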
78

Economic evaluation of health care technologies : a comparison of alternative decision modelling techniques

Karnon, J. D. January 2001
The focus of this thesis is the application of decision models to the economic evaluation of health care technologies. The primary objective addresses the correct choice of modelling technique, as the attributes of the chosen technique can have a significant impact on the process, as well as the results, of an evaluation. Separate decision models, a Markov process and a discrete event simulation (DES) model, are applied to a case study evaluation comparing alternative adjuvant therapies for early breast cancer. The case study models are built and analysed as stochastic models, whereby probability distributions are specified to represent the uncertainty about the true values of the model input parameters. Three secondary objectives are also specified. Firstly, the empirical application of the alternative decision models requires the specification of a 'modelling process' that is not well defined in the health economics literature. Secondly, a comparison of alternative methods for specifying probability distributions to describe the uncertainty in the model's input parameters is undertaken. The final secondary objective covers the application of methods for valuing the collection of additional information to inform the resource allocation decision. The empirical application of the two relevant modelling techniques clarifies the potential advantages derived from the increased flexibility provided by DES over Markov models. The thesis concludes that the use of DES should be strongly considered if either of the following issues appears relevant: model parameters are a function of the time spent in particular states, or the data describing the timing of events are not in the form of transition probabilities. The full description of the modelling process provides a resource for health economists wanting to use decision models. No definitive process is established, however, as there exist competing methods for various stages of the modelling process. The main conclusion from the comparison of methods for specifying probability distributions around the input parameters is that theoretically specified distributions are most likely to provide a common baseline for comparisons between evaluations; the central question that remains is which method is the most theoretically correct. The application of a value of information (VoI) analysis provides useful insights into the methods employed and leads to the identification of particular methodological issues requiring future research in this area.
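A minimal sketch of the stochastic analysis described here is shown below, using a three-state Markov cohort model whose transition probabilities are drawn from assumed Beta distributions and propagated through repeated runs; the states, distribution parameters, and cycle count are illustrative and not the case study's values.

```python
# Minimal sketch (illustrative parameters): a stochastic Markov cohort model
# where transition probabilities are sampled rather than fixed, so parameter
# uncertainty propagates to the model output.
import numpy as np

rng = np.random.default_rng(0)
STATES = ["disease-free", "recurrence", "dead"]

def sample_transition_matrix():
    p_recur = rng.beta(20, 180)   # assumed uncertainty in yearly recurrence risk
    p_die = rng.beta(30, 170)     # assumed uncertainty in yearly death risk after recurrence
    return np.array([
        [1 - p_recur, p_recur, 0.0],
        [0.0, 1 - p_die, p_die],
        [0.0, 0.0, 1.0],
    ])

def run_cohort(n_cycles=20):
    cohort = np.array([1.0, 0.0, 0.0])   # everyone starts disease-free
    P = sample_transition_matrix()
    life_years = 0.0
    for _ in range(n_cycles):
        cohort = cohort @ P
        life_years += cohort[:2].sum()   # alive states contribute one cycle each
    return life_years

samples = [run_cohort() for _ in range(1000)]
print("mean life-years:", np.mean(samples),
      "95% interval:", np.percentile(samples, [2.5, 97.5]))
```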
80

Monitoring And Checking Of Discrete Event Simulations

Ulu, Buket 01 January 2003
Discrete event simulation is a widely used technique for decision support. The results of the simulation must be reliable for critical decision-making problems; therefore, much research has concentrated on the verification and validation of simulations. In this thesis, we apply a well-known dynamic verification technique, the assertion checking method, as a validation technique. Our aim is to validate particular runs of the simulation model, rather than the model itself. As a case study, the operations of a manufacturing cell have been simulated. The cell, the METUCIM Laboratory at the Mechanical Engineering Department of METU, has a robot and a conveyor to carry the materials, two machines to manufacture the items, and a quality control station to measure the correctness of the manufactured items. This simulation is monitored and checked using the Monitoring and Checking (MaC) tool, a prototype developed at the University of Pennsylvania. The separation of low-level implementation details (pertaining to the code) from the high-level requirement specifications (pertaining to the simuland) helps keep the monitoring and checking of the simulations at an abstract level.
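The sketch below illustrates the general assertion-checking idea on a simulated event trace; the events and the requirement are hypothetical and do not reproduce the actual METUCIM requirements or the MaC tool's specification languages.

```python
# Minimal sketch (hypothetical trace and requirement): check a high-level
# requirement against an event trace emitted by a simulation run, kept
# separate from the simulation code itself.
trace = [
    (1.0, "robot_pick", "part1"),
    (2.5, "robot_place", "part1"),
    (3.0, "robot_pick", "part2"),
    (4.2, "robot_pick", "part3"),   # violates the requirement below
    (5.0, "robot_place", "part2"),
]

def check_robot_holds_at_most_one(events):
    """Requirement: the robot never picks a new part before placing the one it holds."""
    holding = None
    for t, event, item in events:
        if event == "robot_pick":
            if holding is not None:
                return f"violation at t={t}: picked {item} while holding {holding}"
            holding = item
        elif event == "robot_place":
            holding = None
    return "requirement satisfied"

print(check_robot_holds_at_most_one(trace))
```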
