211

A Decentralized Service Based Architecture for Fault Tolerant Control

Li, Rui January 2012 (has links)
Fault Tolerant Control Systems (FTCSs) are control systems that incorporate fault tolerant control. Such systems are valued for the reliability, maintainability and survivability they bring to safe vehicle design. In some SCANIA Electronic Control Units (ECUs), the FTCS is based on a centralized fault detector to detect faults and a centralized reconfigurator to reconfigure the system for degraded performance rather than, for example, shutting down the engine completely. However, as mechatronic systems grow in size, the centralized architecture poses problems in terms of performance, complexity and engineering effort. This thesis presents a Decentralized Service Based Architecture for FTCS: a hierarchical architecture composed of a completely decentralized fault diagnoser and a completely decentralized reconfigurator. The decentralized implementation in this thesis is exemplified on part of the Exhaust Emission Control 3 (EEC3) system, one of SCANIA's ECUs. It has two main parts, a decentralized diagnostic manager (DIMA) and a service based communication framework for the interaction between DIMA and the reconfiguration. Compared to the centralized architecture, a decentralized action handler is built locally in each software module so that actions can be activated as soon as a fault is detected, yielding a fast and guaranteed response. The concept of a Service means that the dependency between modules is based solely on fault propagation, and the service communication framework reduces the complexity of the original FTCS. Each ECU can be regarded as a node in the entire communication network of the SCANIA mechatronic system, and once all nodes implement the decentralized service based architecture, a Bayesian Network can be constructed to model the FTCS under uncertainty.
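The core idea of the abstract above — a local fault detector and action handler in each software module, with service edges that exist only where faults propagate — can be sketched as follows. This is a minimal illustration, not the SCANIA implementation; all module names and thresholds are invented.

```python
# Sketch of a decentralized FTCS module: fault detection and the action
# handler live inside each module, so a degraded-mode action fires locally
# the moment a fault is detected, and only dependent services are notified.
# Module names and sensor limits below are illustrative assumptions.

class Module:
    def __init__(self, name):
        self.name = name
        self.subscribers = []   # modules whose service depends on this one
        self.degraded = False

    def provides_service_to(self, other):
        # A "service" edge exists only where a fault can propagate.
        self.subscribers.append(other)

    def on_sensor_value(self, value, limit):
        if value > limit:       # local fault detection
            self.handle_fault()

    def handle_fault(self):
        # Local action handler: degrade immediately, then notify only
        # the modules that consume this service.
        self.degraded = True
        for sub in self.subscribers:
            sub.service_degraded(self.name)

    def service_degraded(self, source):
        print(f"{self.name}: degrading, service '{source}' is faulty")
        self.degraded = True


boost = Module("boost-pressure")
egr = Module("egr-control")
boost.provides_service_to(egr)
boost.on_sensor_value(value=3.2, limit=2.5)   # fault detected locally
```

Because the notification follows only the service (fault propagation) edges, no central reconfigurator has to be consulted before the local action runs.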
212

Prioritized Reconfiguration of Interdependent Critical Infrastructure Systems

Kleppinger, David Lawrence 06 May 2010 (has links)
This dissertation examines the problem of reconfiguration for restoration in critical infrastructure systems, with regard to the prioritization of those systems and the relationships between them. The complexity of the reconfiguration problem is demonstrated, and previous efforts to solve the problem are discussed. This work provides a number of methods by which reconfiguration for restoration of an arbitrary number of prioritized interdependent critical infrastructure systems can be achieved. A method of modeling systems called Graph Trace Analysis is used to enable generic operation on various system types, and a notation for writing algorithms over Graph Trace Analysis models is presented. The algorithms described are compared with each other and with prior work when run on a model of actual electrical distribution systems. They operate in a greedy fashion, attempting to restore loads in decreasing priority order. The described algorithms are also run on example models to demonstrate the ability to reconfigure interdependent infrastructure systems, as well as systems that do not operate radially. / Ph. D.
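The greedy strategy described above — attempt loads in decreasing priority order, serving each from the first source with enough remaining capacity — can be sketched in a few lines. This is a toy model under assumed inputs, not Graph Trace Analysis itself.

```python
# Hedged sketch of greedy priority-ordered restoration: loads are tried
# in decreasing priority, each taking the first source with sufficient
# remaining capacity. Load and feeder names are illustrative.

def restore(loads, sources):
    """loads: list of (name, priority, demand); sources: {name: capacity}."""
    restored = []
    for name, priority, demand in sorted(loads, key=lambda l: -l[1]):
        for src, capacity in sources.items():
            if capacity >= demand:
                sources[src] = capacity - demand   # reserve capacity
                restored.append((name, src))
                break                              # load served; next load
    return restored

plan = restore(
    loads=[("hospital", 3, 40), ("school", 2, 30), ("mall", 1, 50)],
    sources={"feeder-A": 60, "feeder-B": 35},
)
print(plan)   # highest-priority loads are attempted first
```

A greedy pass like this is fast but can strand low-priority loads, which is why the dissertation compares several such algorithms against each other and prior work.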
213

Model-Centric Interdependent Critical Infrastructure System Recovery Analysis and Metrics

Russell, Kevin Joseph 29 June 2016 (has links)
This dissertation defines and evaluates new operations management modeling concepts for use with interdependent critical infrastructure system reconfiguration and recovery analysis. The work combines concepts from Graph Trace Analysis (GTA), Object Oriented Analysis and Design (OOA&D), the Unified Modeling Language (UML) and Physical Network Modeling, and applies them to naval ship reduced-manned Hull, Mechanical, Electrical and Damage Control (HME&DC) system design and operations management. OOA&D problem decomposition is used to derive a natural solution structure that simplifies integration and uses mission priority and mission time constraint relationships to reduce the number of system states that must be evaluated to produce a practical solution. New concepts presented include the use of dependency components and automated system model traces to structure mission-priority-based recovery analysis and mission readiness measures that can be used to automate operations management analysis. New concepts for developing power and fluid system GTA loop flow analysis convergence measures and acceleration factors are also presented. / Ph. D.
214

An integrated data- and capability-driven approach to the reconfiguration of agent-based production systems

Scrimieri, Daniele, Adalat, Omar, Afazov, S., Ratchev, S. 13 December 2022 (has links)
Industry 4.0 promotes highly automated mechanisms for setting up and operating flexible manufacturing systems, using distributed control and data-driven machine intelligence. This paper presents an approach to reconfiguring distributed production systems based on complex product requirements, combining the capabilities of the available production resources. A method is introduced both for checking the "realisability" of a product by matching required operations against capabilities, and for adapting resources. The reconfiguration is handled by a multi-agent system, which reflects the distributed nature of the production system and provides an intelligent interface to the user. This is integrated with a self-adaptation technique for learning how to improve the performance of the production system as part of a reconfiguration. The technique is based on a machine learning algorithm that generalises from past experience of adjustments. The mechanisms of the proposed approach have been evaluated on a distributed robotic manufacturing system, demonstrating their efficacy. Nevertheless, the approach is general and can be applied to other scenarios. / This work was supported by the SURE Research Projects Fund of the University of Bradford and the European Commission (grant agreement no. 314762). / Research Development Fund Publication Prize Award winner, Nov 2022
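The "realisability" check described above — every required operation must be covered by the capability set of some available resource — reduces in its simplest form to a coverage test. The sketch below is illustrative only; the paper's agent-based matching is richer than this, and the operation and resource names are assumptions.

```python
# Minimal sketch of a realisability check: a product is realisable if
# each required operation is matched by at least one resource capability.
# Operation/resource names are made up for illustration.

def realisable(required_ops, resources):
    """required_ops: set of operation names;
    resources: {resource name: set of capabilities}.
    Returns (ok, set of uncovered operations)."""
    missing = {op for op in required_ops
               if not any(op in caps for caps in resources.values())}
    return (len(missing) == 0, missing)

ok, missing = realisable(
    {"drill", "weld", "inspect"},
    {"robot-1": {"drill", "pick"}, "robot-2": {"weld"}},
)
print(ok, missing)   # "inspect" is not covered by any resource
```

When the check fails, the uncovered operations tell the multi-agent layer which resources would need to be adapted or added before the product can be produced.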
215

Adaptive Overcurrent Protection Scheme for Shipboard Power Systems

Amann, Nicholas Paul 07 August 2004 (has links)
Future naval ships will be all-electric, with an integrated power system that combines the propulsion power system with the rest of the ship's electrical distribution system. Reconfiguration of the power system will increase fight-through and survivability of ships, but will also require the systems that support the power system, such as the protection system, to be automatically updated to match current power system needs. This thesis presents an adaptive relaying scheme for shipboard power systems, to automatically modify relay settings after power system topology changes. Multiple Groups of relay settings are predetermined and stored in the digital relays that are protecting the power system. The active Group of settings is automatically determined based on the open/close status of breakers and switches. The developed protection scheme is tested on two test cases by digital simulation using CAPE software and on one case by closed-loop simulation with RTDS and SEL-351S relays.
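The selection logic described above — precomputed settings Groups keyed by breaker/switch status — can be sketched as a lookup. All breaker names, pickup values and the fallback rule below are invented for illustration; they are not from the thesis.

```python
# Sketch of adaptive relay Group selection: each anticipated topology
# (set of OPEN breakers) maps to a pre-stored settings Group, and the
# active Group is looked up from live breaker status. Names and pickup
# currents are illustrative assumptions.

# Topology (frozenset of open breakers) -> pre-stored settings Group.
SETTING_GROUPS = {
    frozenset():                   {"group": 1, "pickup_A": 800},  # normal
    frozenset({"BKR-3"}):          {"group": 2, "pickup_A": 550},  # alt feed
    frozenset({"BKR-3", "BKR-7"}): {"group": 3, "pickup_A": 300},
}

FALLBACK = {"group": 3, "pickup_A": 300}   # most conservative settings

def active_group(breaker_status):
    """breaker_status: {breaker: 'open' | 'closed'} -> active settings Group."""
    open_set = frozenset(b for b, s in breaker_status.items() if s == "open")
    # Unanticipated topologies fall back to the most conservative Group.
    return SETTING_GROUPS.get(open_set, FALLBACK)

g = active_group({"BKR-3": "open", "BKR-7": "closed"})
print(g)   # Group 2 settings apply once BKR-3 opens
```

Because the Groups are precomputed and stored in the relays themselves, switching Groups after a topology change requires no new coordination study at run time.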
216

Open Innovation Practices and Innovation Performance: A Dynamic Capabilities Approach

Ovuakporie, Oghogho D. January 2018 (has links)
The thesis will be available at the end of the embargo: 31st May 2024
217

Smart Distribution System Automation: Network Reconfiguration and Energy Management

Ding, Fei 06 February 2015 (has links)
No description available.
218

SYNTHESIS OF VIRTUAL PIPELINES ON VIRTEX-BASED FPGAs

DASASATHYAN, SRINIVASAN 11 October 2001 (has links)
No description available.
219

Wormhole Run-Time Reconfiguration: Conceptualization and VLSI Design of a High Performance Computing System

Bittner, Ray Albert Jr. 23 January 1997 (has links)
In the past, various approaches to the high performance numerical computing problem have been explored. Recently, researchers have begun to explore the possibilities of using Field Programmable Gate Arrays (FPGAs) to solve numerically intensive problems. FPGAs offer the possibility of customization to any given application, while not sacrificing applicability to a wide problem domain. Further, the implementation of data flow graphs directly in silicon makes FPGAs very attractive for these types of problems. Unfortunately, current FPGAs suffer from a number of inadequacies with respect to the task. They have lower transistor densities than ASIC solutions, and hence less potential computational power per unit area. Routing overhead generally makes an FPGA solution slower than an ASIC design. Bit-oriented computational units make them unnecessarily inefficient for implementing tasks that are generally word-oriented. And finally, in large volumes, FPGAs tend to be more expensive per unit due to their lower transistor density. To combat these problems, researchers are now exploiting the unique advantage that FPGAs exhibit over ASICs: reconfigurability. By customizing the FPGA to the task at hand, as the application executes, it is hoped that the cost-performance product of an FPGA system can be shown to be a better solution than a system implemented by a collection of custom ASICs. Such a system is called a Configurable Computing Machine (CCM). Many aspects of the design of the FPGAs available today hinder the exploration of this field. This thesis addresses many of these problems and presents the embodiment of those solutions in the Colt CCM. By offering word grain reconfiguration and the ability to partially reconfigure at computational element resolution, the Colt can offer higher effective utilization than traditional FPGAs. Further, the majority of the pins of the Colt can be used both for normal I/O and for chip reconfiguration, providing higher reconfiguration bandwidth in contrast to the low percentage of pins used for reconfiguration in traditional FPGAs. Finally, Colt uses a distributed reconfiguration mechanism called Wormhole Run-Time Reconfiguration (RTR) that allows multiple data ports to simultaneously program different sections of the chip independently. Used as the primary example of Wormhole RTR in the patent application, Colt is the first system to employ this computing paradigm. / Ph. D.
220

Register Transfer Level Simulation Acceleration via Hardware/Software Process Migration

Blumer, Aric David 16 November 2007 (has links)
The run-time reconfiguration of Field Programmable Gate Arrays (FPGAs) opens new avenues to hardware reuse. Through the use of process migration between hardware and software, an FPGA provides a parallel execution cache: busy processes can be migrated into hardware-based, parallel processors, and idle processes can be migrated out, increasing the utilization of the hardware. The application of hardware/software process migration to the acceleration of Register Transfer Level (RTL) circuit simulation is developed and analyzed. RTL code can exhibit a form of locality of reference in which recently executed processes tend to be executed again. This property is termed executive temporal locality, and it can be exploited by migration systems to accelerate RTL simulation. In this dissertation, process migration is first formally modeled using Finite State Machines (FSMs). Upon FSMs are built programs, processes, migration realms, and the migration of process state within a realm. From this model, a taxonomy of migration realms is developed. Second, process migration is applied to the RTL simulation of digital circuits. The canonical form of an RTL process is defined, and transformations of HDL code are justified and demonstrated. These transformations allow a simulator to identify the basic active units within the simulation and combine them to balance the load across a set of processors. Through the use of input monitors, executive locality of reference is identified and demonstrated on a set of six RTL designs. Finally, the implementation of a migration system is described which utilizes Virtual Machines (VMs) and Real Machines (RMs) in existing FPGAs. Empirical and algorithmic models are developed from the data collected from the implementation to evaluate the effect of optimizations and migration algorithms. / Ph. D.
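The "parallel execution cache" policy described above — keep the busiest processes resident in scarce hardware slots, evict idle ones to software — can be sketched as a ranking step. This is a toy illustration of the caching idea only; slot counts, process names and activity scores are assumptions, not the dissertation's migration system.

```python
# Toy sketch of hardware/software process migration as a cache policy:
# executive temporal locality says recently busy processes tend to run
# again, so the busiest ones are kept resident in the FPGA's limited
# hardware slots. All names and numbers are illustrative.

HW_SLOTS = 2   # assumed number of hardware processor slots

def rebalance(activity, in_hardware):
    """activity: {process: recent activity score};
    in_hardware: set of processes currently resident in hardware.
    Returns (new hardware set, processes to load, processes to evict)."""
    ranked = sorted(activity, key=activity.get, reverse=True)
    target = set(ranked[:HW_SLOTS])        # busiest processes win the slots
    migrate_in = target - in_hardware      # load into FPGA
    migrate_out = in_hardware - target     # migrate back to software
    return target, migrate_in, migrate_out

hw, load, evict = rebalance(
    {"alu_model": 90, "uart_model": 5, "cache_model": 60},
    in_hardware={"uart_model", "cache_model"},
)
print(hw, load, evict)
```

In the dissertation this decision is driven by input monitors rather than a single score, but the cache analogy is the same: migration pays off exactly when executive temporal locality holds.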
