  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Communication Network Performance Evaluation of a Distribution Network Power Quality Monitoring System

Chen, Ching-Fu 03 July 2001 (has links)
Power quality has a great effect on the operation of system loads. To analyze these effects and the possible economic losses due to system disturbances, there is an immediate need for a power quality monitoring system. With an effective communication system, network disturbance data can be gathered and analyzed efficiently so that outage duration and the resulting losses can be reduced. This thesis presents communication network performance simulation results for different types of communication schemes used in a power quality monitoring system. A discrete event simulation method is used to study the end-to-end delay times of different communication architectures. Based on these simulation results, system designers can choose the best option to meet their data communication requirements in power quality monitoring.
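As a rough illustration of the kind of discrete event model described here, the sketch below simulates disturbance reports from several monitors sharing a single communication channel and measures their end-to-end (queueing plus transmission) delay. The single-channel topology and all parameters are illustrative assumptions, not the architectures studied in the thesis:

```python
import heapq
import random

# Illustrative parameters (not from the thesis): 10 monitors report
# disturbances over one shared channel; delay = queueing + transmission.
random.seed(1)
NUM_MONITORS, MEAN_INTERARRIVAL, MEAN_TX_TIME, HORIZON = 10, 5.0, 0.8, 10_000.0

events = []  # (arrival_time, monitor_id) report events
for m in range(NUM_MONITORS):
    t = random.expovariate(1.0 / MEAN_INTERARRIVAL)
    while t < HORIZON:
        heapq.heappush(events, (t, m))
        t += random.expovariate(1.0 / MEAN_INTERARRIVAL)

channel_free_at = 0.0
delays = []
while events:
    arrival, m = heapq.heappop(events)          # process events in time order
    start = max(arrival, channel_free_at)       # wait if the channel is busy
    tx = random.expovariate(1.0 / MEAN_TX_TIME) # transmission time of this report
    channel_free_at = start + tx
    delays.append(channel_free_at - arrival)    # end-to-end delay of this report

print(f"mean end-to-end delay: {sum(delays) / len(delays):.3f} time units")
```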
112

Hardware acceleration for conservative parallel discrete event simulation on multi-core systems

Lynch, Elizabeth Whitaker 07 February 2011 (has links)
Multi-core architectures are becoming more common, and core counts continue to increase. Six- and eight-core chips, such as Intel Gulftown, are currently in production, and many-core chips with dozens of cores, such as the Intel Teraflops 80-core chip, are projected within the next five years. However, adding more cores often does not improve application performance. It would be desirable to take advantage of the multi-core environment to speed up parallel discrete event simulation. The current bottleneck for many parallel simulations is time synchronization. This is especially true for simulations of wireless networks and on-chip networks, which have low lookahead. Message passing is another common simulation bottleneck. To address time synchronization, we have designed hardware at a functional level that performs the time synchronization for parallel discrete event simulation asynchronously and in just a few clock cycles, eliminating the need for global communication with message passing or lock contention for shared memory. This hardware, the Global Synchronization Unit, consists of three register files, each with one entry per core, and is accessed using five new atomic instructions. To reduce the simulation overhead from message passing, we have also designed two independent pieces of hardware at a functional level, the Atomic Shared Heap and Atomic Message Passing, which can be used to perform lock-free, zero-copy message passing on a multi-core system. The impact of these specialized hardware units on the performance of parallel discrete event simulation is assessed and compared to traditional shared-memory techniques.
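For context, the sketch below shows the conventional software form of the conservative synchronization step that the Global Synchronization Unit is meant to accelerate: computing a lower bound on timestamp (LBTS) as the minimum next event time plus lookahead across all logical processes, and releasing only events at or below that bound. The event lists and names are illustrative assumptions, not the thesis's hardware design or instruction set:

```python
import heapq

# Each logical process (LP) may only execute events whose timestamp is
# <= LBTS = min over LPs of (earliest unprocessed timestamp) + lookahead.
# Static event lists for illustration; a real simulation would also
# schedule new events as it executes.
LOOKAHEAD = 0.5
lps = {
    0: [(1.0, "a"), (2.2, "b")],
    1: [(1.4, "c"), (3.0, "d")],
    2: [(0.9, "e"), (4.1, "f")],
}
for q in lps.values():
    heapq.heapify(q)

def lbts(lps, lookahead):
    # The global reduction the Global Synchronization Unit performs in a few
    # clock cycles; in software it requires a barrier or all-reduce each round.
    nexts = [q[0][0] for q in lps.values() if q]
    return (min(nexts) if nexts else float("inf")) + lookahead

while any(lps.values()):
    bound = lbts(lps, LOOKAHEAD)
    for lp_id, q in lps.items():
        while q and q[0][0] <= bound:       # safe to execute up to the bound
            ts, ev = heapq.heappop(q)
            print(f"LP{lp_id} executes {ev!r} at t={ts}")
```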
113

Framework for robust design: a forecast environment using intelligent discrete event simulation

Beisecker, Elise K. 29 March 2012 (has links)
The US Navy is shifting to power projection from the sea, which stresses the capabilities of its current fleet and exposes the need for a new surface connector. The design of complex systems in the presence of changing requirements, rapidly evolving technologies, and operational uncertainty continues to be a challenge. Furthermore, the design of future naval platforms must take into account the interoperability of a variety of heterogeneous systems and their role in a larger system-of-systems context. To date, methodologies to address these complex interactions and optimize the system at the macro level have lacked a clear direction and structure and have largely been applied in an ad hoc fashion. Traditional optimization has centered on individual vehicles with little regard for the impact on the overall system. A key enabler in designing a future connector is the ability to rapidly analyze technologies and perform trade studies using a system-of-systems level approach.

The objective of this work is a process that can quantitatively assess the impacts of new capabilities and vessels at the system-of-systems level. This methodology must be able to investigate diverse, disruptive technologies acting on multiple elements within the system-of-systems architecture. Illustrated through a test case for a Medium Exploratory Connector (MEC), the method must be capable of capturing the complex interactions between elements and the architecture and of assessing the impacts of new systems. Following a review of current methods, six gaps were identified, including the need to break the problem into subproblems in order to incorporate a heterogeneous, interacting fleet, dynamic loading, and dynamic routing. For the robust selection of design requirements, analysis must be performed across multiple scenarios, which requires the method to include parametric scenario definition. The identified gaps are investigated and methods recommended to address them, enabling overall operational analysis across scenarios. Scenarios are fully defined by a scheduled set of demands, distances between locations, and physical characteristics that can be treated as input variables.

Introducing matrix manipulation into discrete event simulations enables the abstraction of sub-processes at an object level and reduces the effort required to integrate new assets. Incorporating these linear algebra principles enables resource management for individual elements and abstraction of decision processes. Although the run time is slightly greater than that of traditional if-then formulations, the gain in data handling ability enables the abstraction of the loading and routing algorithms. The loading and routing problems are abstracted, and solution options are developed and compared. Realistic loading of vessels and other assets is needed to capture the cargo delivery capability of the modeled mission. The dynamic loading algorithm is based on the traditional knapsack formulation, where a linear program is formulated using the lift and area of the connector as constraints. The schedule of demands from the scenarios provides additional constraints and the reward equation. Available cargo is distributed among cargo sources, so an assignment problem formulation is added to the linear program, requiring the cargo selected to load on a single connector to be available from a single load point.
Dynamic routing allows a reconfigurable supply chain to maintain robust and flexible operation in response to changing customer demands and operating environment. Algorithms based on vehicle routing and computer packet routing are compared across five operational scenarios, testing the algorithms' ability to route connectors without introducing additional wait time. Predicting the wait times at interfaces based on the connectors already en route, and reconsidering which interface to use upon arrival, performed consistently, especially when stochastic load times were introduced, and is expandable to large-scale applications. This algorithm selects the quickest load-unload location pairing based on the connectors routed to those locations and the interfaces selected for those connectors. A future connector could have the ability to unload at multiple locations if a single load exceeds the demand at an unload location. The capability for multiple unload locations is treated as a special case in the calculation of the unload location during routing. To determine the unload locations to visit, a traveling salesman formulation is added to the dynamic loading algorithm. Balancing the cost to travel to and unload at locations against the additional cargo that could be delivered, the order and locations to visit are selected. Predicting the workload at load and unload locations to route vessels, with reconsideration to handle disturbances, can include multiple unload locations and creates a robust and flexible routing algorithm.

The incorporation of matrix manipulation, dynamic loading, and dynamic routing enables a robust investigation of the design requirements for a new connector. The robust process uses shortfall, capturing delay and undelivered cargo, and fuel usage as measures of performance. The design parameters for the MEC, including the number of connectors available and vessel characteristics such as speed and size, were analyzed across four ways of sampling the noise space: a single scenario, a selected number of scenarios, full coverage of the noise space, and the feasible noise space. The feasible noise space is defined using uncertainty around scenarios of interest. The number available, maximum lift, maximum area, and SES speed were consistently design drivers. There was a trade-off between the number available and size, along with speed. When looking at the feasible space, the relationship between size and number available was strong enough to reverse the preference for number available, favoring fewer, larger ships. The secondary design impacts come from factors that directly affect the time per trip, such as the time between repairs and the time to repair. As the noise sampling moved from four scenarios to full coverage to the feasible space, the option to use interfaces was displaced in importance by the time to load at those locations, and the time to unload at the beach gained importance. The change in impact can be attributed to the reduction in the number of needed trips with the feasible space; the four scenarios had higher average demand than the feasible-space sampling, making loading options more important. The selection of the noise sampling had an impact on the design requirements selected for the MEC, indicating the importance of developing a method to investigate future naval assets across multiple scenarios at a system-of-systems level.
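A minimal sketch of the dynamic loading idea, using a greedy heuristic as a stand-in for the knapsack linear program, with connector lift and deck area as the two capacity constraints. The cargo items and limits are invented for illustration and are not MEC data:

```python
# Illustrative cargo items: (name, demand_priority, weight_tons, area_m2).
cargo = [
    ("vehicle_A", 8.0, 12.0, 20.0),
    ("vehicle_B", 6.0,  9.0, 15.0),
    ("pallet_1",  3.0,  2.0,  4.0),
    ("pallet_2",  3.0,  2.0,  4.0),
    ("container", 5.0, 10.0, 12.0),
]
MAX_LIFT, MAX_AREA = 20.0, 35.0  # connector constraints (made-up values)

def load_connector(items, max_lift, max_area):
    """Greedy stand-in for the knapsack LP: take cargo by priority per unit
    of its tighter resource until lift or area would be exceeded."""
    chosen, lift, area = [], 0.0, 0.0
    ranked = sorted(items, key=lambda it: it[1] / max(it[2], it[3]), reverse=True)
    for name, prio, w, a in ranked:
        if lift + w <= max_lift and area + a <= max_area:
            chosen.append(name)
            lift, area = lift + w, area + a
    return chosen, lift, area

print(load_connector(cargo, MAX_LIFT, MAX_AREA))
```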
114

Distributed supervisory control of workflows

Deshpande, Pranav. January 2003 (has links)
Thesis (M.S.I.E.)--University of South Florida, 2003. / Includes bibliographical references. / ABSTRACT: The need for redesigning existing business processes to improve their efficiency makes it essential to adequately represent, study, and automate them. The Workflow Management Coalition (WfMC) defines a workflow as the computerized facilitation or automation of a business process, in whole or in part. It is a representation of the given process, made up of a well-defined collection of activities called tasks. Modeling and specification of a workflow involves the following steps: 1) provide a formalism for modeling and specification of the workflow, 2) specify the tasks together with the associated information, and 3) enter the applicable business rules in the form of inter-task dependencies. Earlier attempts at modeling workflows were based on a centralized control approach, which has limited applicability for the modeling and control of real-life workflows due to computational complexity. In this thesis, a distributed supervisory control approach is described and shown to be computationally tractable. The application of this approach is demonstrated with a case study.
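As a toy illustration of business rules expressed as inter-task dependencies, the sketch below enables a task only after the tasks it depends on have committed. The task names and rules are hypothetical, and a real supervisory controller would distribute this decision across supervisors rather than run it centrally:

```python
# Hypothetical inter-task dependencies: a task is enabled only once every
# task it depends on has committed.
DEPENDENCIES = {
    "approve_order": {"check_credit", "check_stock"},  # both must commit first
    "ship_order":    {"approve_order"},
}

def enabled_tasks(committed, all_tasks):
    """Return tasks whose dependencies are all satisfied and not yet done."""
    return {
        t for t in all_tasks
        if t not in committed and DEPENDENCIES.get(t, set()) <= committed
    }

tasks = {"check_credit", "check_stock", "approve_order", "ship_order"}
committed = set()
while len(committed) < len(tasks):
    ready = enabled_tasks(committed, tasks)
    if not ready:          # guard against an unsatisfiable rule set
        break
    print("enabled:", sorted(ready))
    committed |= ready     # a distributed supervisor would commit only its own tasks
```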
115

Building a new production line: Problems, pitfalls and how to gain social sustainability

Telander, Andreas, Fahlgren, Jessica January 2015 (has links)
This thesis was performed in collaboration with Volvo Cars Engine in Skövde, Sweden and Zhangjiakou, China, in order to receive a bachelor degree in automation engineering from the University of Skövde. The project focuses on analyzing the capacity of a future production line by using discrete event simulation. The production line is built in two different discrete event simulation software packages, FACTS Analyzer and Plant Simulation. The focus of the study is to compare the output results from the two packages in order to recommend which software to use in similar cases, giving Volvo Cars Corporation a basis for further work. The aim of the work is to verify the planned capacity of the new production line and to perform a leadership study with Chinese engineers in order to find out how they view Swedish leadership, how it can be adapted to China and the Chinese culture, and to give recommendations for future work. The results of the capacity analysis show that the production targets can be reached for both planned capacities, but also that potential constraints in the system have been identified. The results of the leadership study show that the overall approach should be slightly adapted to better suit the Chinese culture. The comparison of the two simulation packages suggests that FACTS Analyzer is suitable when less complex logic or systems are represented; however, when building more complex models with more complex logic, Plant Simulation is more suitable.
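For context, a capacity analysis of this kind can be sketched with a very small discrete event model of a serial line; the two-station recursion below estimates throughput under illustrative cycle times. An infinite buffer is assumed for simplicity, and none of the figures are from the Volvo Cars line:

```python
import random

# Two-station serial line, station 1 never starves, infinite buffer between
# stations; cycle times are illustrative, not production data.
random.seed(2)
N_PARTS = 50_000
MEAN_CYCLE_1, MEAN_CYCLE_2 = 1.0, 1.1   # station 2 is the intended constraint

dep1 = dep2 = 0.0
for _ in range(N_PARTS):
    dep1 = dep1 + random.expovariate(1.0 / MEAN_CYCLE_1)       # departure from station 1
    dep2 = max(dep1, dep2) + random.expovariate(1.0 / MEAN_CYCLE_2)  # wait for part, then process

print(f"estimated throughput: {N_PARTS / dep2:.3f} parts per time unit")
```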
116

A discrete event approach for model-based location tracking of inhabitants in smart homes

Danancher, Mickaël 02 December 2013 (has links) (PDF)
Life expectancy has continuously increased in most countries over the last decades and will probably continue to increase in the future. This leads to new challenges relating to the autonomy and independence of the elderly. The development of Smart Homes is one direction for facing these challenges and enabling people to live longer in a safe and comfortable environment. Making a home smart consists of placing sensors, actuators and a controller in the house in order to take the behavior of its inhabitants into account and to act on their environment to improve their safety, health and comfort. Most of these approaches are based on real-time indoor Location Tracking of the inhabitants. In this thesis, a new approach for model-based Location Tracking of an a priori unknown number of inhabitants is proposed. This approach is based on Discrete Event Systems paradigms, theory and tools. The use of Finite Automata (FA) to model the detectable motion of the inhabitants, as well as different methods to create such FA models, has been developed. Based on these models, algorithms to perform efficient Location Tracking are defined. Finally, several approaches for evaluating the relevance of the instrumentation of a Smart Home with respect to the objective of Location Tracking are proposed. The approach has also been fully implemented and tested. Throughout the thesis, the different contributions are illustrated on case studies.
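A minimal sketch of the finite automaton idea, assuming a tiny hypothetical apartment: the location estimate is the set of automaton states consistent with the sensor events observed so far, which is the essence of observer-based tracking. The layout, sensor names and transitions are invented for illustration and are not the models developed in the thesis:

```python
# States are rooms, transitions are labelled by sensor events, and the
# location estimate is the set of states consistent with the observations.
STATES = {"kitchen", "living_room", "bedroom"}
# transitions[state][event] -> set of possible next states
TRANSITIONS = {
    "kitchen":     {"motion_hall": {"living_room"}},
    "living_room": {"motion_hall": {"kitchen"}, "door_bedroom": {"bedroom"}},
    "bedroom":     {"door_bedroom": {"living_room"}},
}

def track(observed_events, initial_estimate=STATES):
    """Update the set of possible inhabitant locations after each event."""
    estimate = set(initial_estimate)
    history = [estimate]
    for event in observed_events:
        estimate = {
            nxt
            for state in estimate
            for nxt in TRANSITIONS.get(state, {}).get(event, set())
        }
        history.append(estimate)
    return history

for step, est in enumerate(track(["motion_hall", "door_bedroom"])):
    print(f"after {step} event(s): {sorted(est)}")
```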
117

The Effects of Altering Discharge Policies to Alternate Level of Care Patient Flow

Grover, Lata 20 November 2012 (has links)
Alternate Level of Care (ALC) patients are patients who remain in an acute care setting while waiting to be transferred to an ALC facility. They are not receiving the appropriate type of care and are occupying acute care resources. ALC patients occupy 5,200 patient beds every day in Canada, and 12 percent of these ALC patients die during their waiting period. This study evaluates Toronto General Hospital's (TGH) discharge policy in the General Surgery and General Internal Medicine (GIM) departments using a discrete-event simulation. For long-term care ALC patients, it was found that submitting one additional application, or maximizing the number of short-waiting-list facilities among the applications, significantly reduces the number of ALC days and the number of patients who die in hospital. Knowing whether discharge policies can decrease ALC days is significant not only to TGH but also to other health care institutions.
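A minimal sketch of the policy effect reported here, under the simplifying assumption that a patient's ALC stay ends at the first acceptance among the facilities applied to, with independent exponential waiting times; the rates are invented, not TGH data:

```python
import random

# An ALC patient who has applied to k facilities leaves when the first
# application is accepted, so ALC days behave like the minimum of k waits.
random.seed(3)
MEAN_WAIT_DAYS = 60.0   # mean wait on one facility's list (made-up value)
N_PATIENTS = 20_000

def mean_alc_days(num_applications):
    total = 0.0
    for _ in range(N_PATIENTS):
        waits = [random.expovariate(1.0 / MEAN_WAIT_DAYS)
                 for _ in range(num_applications)]
        total += min(waits)   # discharged at the first acceptance
    return total / N_PATIENTS

for k in (1, 2, 3):
    print(f"{k} application(s): mean ALC stay of about {mean_alc_days(k):.1f} days")
```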
119

Reengineering Primary Health Care for Information and Communication Technology

Leung, Gloria Unknown Date
No description available.
120

Linkage of Truck-and-shovel Operations to Short-term Mine Plans Using Discrete Event Simulation

Torkamani, Elmira Unknown Date
No description available.
