  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Low-cost motor drive embedded fault diagnosis systems

Akin, Bilal 15 May 2009 (has links)
Electric motors are used widely in industrial manufacturing plants. Bearing faults, insulation faults, and rotor faults are the major causes of electric motor failures. Based on line current analysis, this dissertation mainly deals with low-cost incipient fault detection in inverter-fed motors. In particular, the contributions of low-order inverter harmonics to fault diagnosis, a motor-drive-embedded condition monitoring method, the analysis of motor fault signatures in noisy line current, and a few specific applications of the proposed methods are studied in detail. First, the effects of inverter harmonics on motor current fault signatures are analyzed in detail. The fault signatures introduced by the harmonics provide additional information about the motor faults and enhance the reliability of fault decisions. It is theoretically and experimentally shown that the extended fault signatures caused by the inverter harmonics are similar and comparable to those generated by the fundamental harmonic on the line current. In the next chapter, reference frame theory is proposed as a powerful tool for finding the exact magnitude and phase of specific fault signatures in real time. The faulty motors are tested experimentally, both offline using a data acquisition system and online employing the TMS320F2812 DSP, to prove the effectiveness of the proposed tool. In addition to reference frame theory, another digital signal processor (DSP)-based phase-sensitive motor fault signature detection method is presented in the following chapter. This method has a powerful line-current noise suppression capability while detecting the fault signatures. It is experimentally shown that the proposed method can determine the normalized magnitude and phase information of the fault signatures even in the presence of significant noise. Finally, a signal-processing-based fault diagnosis scheme for on-board diagnosis of rotor asymmetry at start-up and in idle mode is presented.
It is quite challenging to obtain these regular test conditions for a long enough time during daily vehicle operation. In addition, automobile vibrations cause non-uniform air-gap operation, which directly affects the inductances of the electric motor and results in a quite noisy current spectrum. The proposed method overcomes such challenges simply by testing for rotor asymmetry at zero speed.
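The reference-frame idea summarized in this abstract can be sketched as follows. This is a minimal illustration, not the dissertation's implementation; all numeric parameters (sampling rate, frequencies, amplitudes, noise level) are invented. Rotating the line current into a frame synchronous with a suspected fault frequency turns that fault signature into a DC term whose average gives its magnitude and phase.

```python
import numpy as np

fs = 10_000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)          # one second of data
f_fund, f_fault = 50.0, 44.0           # fundamental and fault frequencies, Hz
rng = np.random.default_rng(0)
i_line = (10.0 * np.cos(2 * np.pi * f_fund * t)            # fundamental
          + 0.2 * np.cos(2 * np.pi * f_fault * t + 0.3)    # fault signature
          + 0.05 * rng.standard_normal(t.size))            # measurement noise

# Rotate into the fault-frequency reference frame and average, which acts
# as a crude low-pass filter over an integer number of cycles.
phasor = 2 * np.mean(i_line * np.exp(-1j * 2 * np.pi * f_fault * t))
magnitude, phase = abs(phasor), np.angle(phasor)
```

With these synthetic values the recovered magnitude and phase are close to the injected 0.2 and 0.3 rad, despite the much larger fundamental and the added noise.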
232

Development of a computer-aided fault tree synthesis methodology for quantitative risk analysis in the chemical process industry

Wang, Yanjun 17 February 2005 (has links)
There has been growing public concern regarding the threat that industrial activities pose to people and the environment, and consequently more rigorous regulations. The investigation of almost all major accidents shows that those tragedies could have been avoided with effective risk analysis and safety management programs. High-quality risk analysis is absolutely necessary for sustainable development. As a powerful and systematic tool, fault tree analysis (FTA) has been adapted to the particular needs of chemical process quantitative risk analysis (CPQRA) and has found wide application. However, the application of FTA in the chemical process industry (CPI) remains limited. One major barrier is the manual synthesis of fault trees: it requires a thorough understanding of the process and is vulnerable to individual subjectivity, so the quality of FTA can be highly subjective and variable. The availability of a computer-based FTA methodology would greatly benefit the CPI. The primary objective of this research is to develop a computer-aided fault tree synthesis methodology for CPQRA. The central idea is to capture the cause-and-effect logic around each item of equipment directly in mini fault trees. Special fault tree models have been developed to handle special features. Fault trees created by this method are expected to be concise. A prototype computer program is provided to illustrate the methodology. Ideally, FTA can be standardized through a computer package that reads the information contained in process block diagrams and provides automatic aids to assist engineers in generating and analyzing fault trees. Another important issue with regard to QRA is the large uncertainty associated with available failure rate data. In the CPI, the ranges of observed failure rates can be quite wide. Traditional reliability studies using point values of failure rates may therefore lead to misleading conclusions.
This dissertation discusses the uncertainty in failure rate data and proposes a procedure for dealing with data uncertainty when determining the safety integrity level (SIL) of a safety instrumented system (SIS). Effort should be focused on obtaining more accurate values for those data that actually impact the estimation of the SIL. The procedure guides process hazard analysts toward a more accurate SIL estimate and avoids misleading results caused by data uncertainty.
233

UpRight fault tolerance

Clement, Allen Grogan 13 November 2012 (has links)
Experiences with computer systems indicate an inconvenient truth: computers fail, and they fail in interesting ways. Although using redundancy to protect against fail-stop failures is common practice, non-fail-stop computer and network failures occur for a variety of reasons, including power outage, disk or memory corruption, NIC malfunction, user error, operating system and application bugs or misconfiguration, and many others. The impact of these failures can be dramatic, ranging from service unavailability to stranding airplane passengers on the runway to companies closing. While high-stakes embedded systems have embraced Byzantine fault tolerant techniques, general purpose computing continues to rely on techniques that are fundamentally crash tolerant. In a general purpose environment, the current best-practice response to non-fail-stop failures can charitably be described as pragmatic: identify a root cause and add checksums to prevent that error from happening again in the future. Pragmatic responses have proven effective for patching holes and protecting against faults once they have occurred; unfortunately, the initial damage has already been done, and it is difficult to say whether the patches made to address previous faults will protect against future failures. We posit that an end-to-end solution based on Byzantine fault tolerant (BFT) state machine replication is an efficient and deployable alternative to the current ad hoc approaches favored in general purpose computing. The replicated state machine approach ensures that multiple copies of the same deterministic application execute requests in the same order and provides end-to-end assurance that independent transient failures will not lead to unavailability or incorrect responses.
An efficient and effective end-to-end solution covers faults that have already been observed as well as failures that have not yet occurred, and it provides structural confidence that developers won't have to track down yet another failure caused by some unpredicted memory, disk, or network behavior. While the promise of end-to-end failure protection is intriguing, significant technical and practical challenges currently prevent its adoption in general purpose computing environments. On the technical side, it is important that end-to-end solutions maintain the performance characteristics of deployed systems: if end-to-end solutions dramatically increase computing requirements, dramatically reduce throughput, or dramatically increase latency during normal operation, then end-to-end techniques are a non-starter. On the practical side, it is important that end-to-end approaches be both comprehensible and easy to incorporate: if the cost of end-to-end solutions is rewriting an application or trusting intricate and arcane protocols, then end-to-end solutions will not be adopted. In this thesis we show that BFT state machine replication can and should be used in deployed systems. Reaching this goal requires us to address both the technical and practical challenges mentioned above. We revisit disparate research results from the last decade and tweak, refine, and revise the core ideas so that they fit together into a coherent whole. Addressing the practical concerns requires us to simplify the process of incorporating BFT techniques into legacy applications.
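The replicated state machine idea the abstract relies on can be illustrated in a few lines. This is an invented toy, not the thesis's system: deterministic replicas execute the same requests in the same order, and a client accepts a reply only when a quorum of 2f + 1 replicas agree, which masks up to f Byzantine-faulty replies.

```python
from collections import Counter

class Replica:
    """A deterministic state machine: same requests, same order, same state."""
    def __init__(self):
        self.total = 0

    def execute(self, request):
        self.total += request          # deterministic state transition
        return self.total

f = 1                                  # tolerated Byzantine faults
replicas = [Replica() for _ in range(3 * f + 1)]
result = None
for request in [5, 3, 7]:
    replies = [r.execute(request) for r in replicas]
    replies[0] += 100                  # one replica returns a corrupt reply
    # Accept the majority reply; a quorum of 2f + 1 matching replies is
    # guaranteed because at most f replicas are faulty.
    result, votes = Counter(replies).most_common(1)[0]
    assert votes >= 2 * f + 1
```

After processing the three requests, the voted result is the correct running total even though one replica lied on every round.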
234

Observability and Economic aspects of Fault Detection and Diagnosis Using CUSUM based Multivariate Statistics

Bin Shams, Mohamed January 2010 (has links)
This project focuses on the fault observability problem and its impact on plant performance and profitability. The study has been conducted along two main directions. First, a technique has been developed to detect and diagnose faulty situations that could not be observed by previously reported methods. The technique is demonstrated on a subset of the faults typically considered for the Tennessee Eastman Process (TEP), which had been found unobservable in all previous studies. The proposed strategy combines the cumulative sum (CUSUM) of the process measurements with principal component analysis (PCA). The CUSUM is used to amplify small fault signatures under low signal-to-noise-ratio conditions, while PCA facilitates the filtering of noise in the presence of highly correlated data. Multivariate indices, namely the T² and Q statistics based on the cumulative sums of all available measurements, were used to observe these faults. The out-of-control average run length (ARL) was proposed as a statistical metric to quantify fault observability. Following fault detection, the problem of fault isolation is treated. It is shown that for the particular faults considered in the TEP problem, contribution plots are not able to properly isolate the faults under consideration. This motivates using the CUSUM-based PCA technique, previously used for detection, to unambiguously diagnose the faults. The diagnosis scheme constructs a family of CUSUM-based PCA models, one for each fault, and then tests whether the statistical thresholds of a particular fault model are exceeded, indicating the occurrence or absence of the corresponding fault. Although the CUSUM-based techniques were successful in detecting abnormal situations as well as isolating the faults, long time intervals were required for both detection and diagnosis. The potential economic impact of the resulting delays motivates the second main objective of this project.
More specifically, a methodology is developed to quantify the potential economic loss due to faults that go unobserved when standard statistical monitoring charts are used. Since most chemical and petrochemical plants are operated in closed loop, the interaction with the control system is also explicitly considered. An optimization problem is formulated to search for the optimal tradeoff between fault observability and closed-loop performance. This problem is solved in the frequency domain using approximate closed-loop transfer function models, and in the time domain using a simulation-based approach. The time-domain optimization is applied to the TEP to find the controller tuning parameters that minimize an economic cost of the process.
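The CUSUM-based PCA monitoring described in this entry can be sketched roughly as follows. The data here are an invented toy, not the Tennessee Eastman Process: cumulative sums amplify a small mean shift that is buried in noise, and the T² and Q statistics of a PCA model fitted on normal operation then flag the fault.

```python
import numpy as np

rng = np.random.default_rng(1)
n_normal, n_fault, n_vars = 200, 200, 4
data = rng.standard_normal((n_normal + n_fault, n_vars))
data[n_normal:, 0] += 0.3              # small, noise-buried fault on variable 0

# CUSUM of the deviations from the normal-operation mean: the tiny shift
# accumulates into a large, easily detectable drift.
cusum = np.cumsum(data - data[:n_normal].mean(axis=0), axis=0)

# PCA model fitted on the normal portion of the CUSUM data.
train = cusum[:n_normal]
z = (cusum - train.mean(axis=0)) / train.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z[:n_normal].T))
order = np.argsort(eigvals)[::-1]
P, lam = eigvecs[:, order[:2]], eigvals[order[:2]]   # retain 2 components

# T^2 (variation inside the model) and Q (residual) statistics per sample.
scores = z @ P
t2 = np.sum(scores ** 2 / lam, axis=1)
q = np.sum((z - scores @ P.T) ** 2, axis=1)
```

In this toy setup the combined monitoring statistics are much larger in the faulty region than under normal operation, which is the observability effect the abstract describes.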
235

FAULT LINKS: IDENTIFYING MODULE AND FAULT TYPES AND THEIR RELATIONSHIP

Michael, Inies Raphael Chemmannoor 01 January 2004 (has links)
The presented research resulted in a generic component taxonomy, a generic code-fault taxonomy, and an approach to tailoring the generic taxonomies into domain-specific as well as project-specific taxonomies. A means to identify fault links was also developed. Fault links represent relationships between the types of code faults and the types of components being developed or modified. For example, a fault link has been found to exist between Controller modules (which form a backbone for any software through their decision-making characteristics) and Control/Logic faults (such as unreachable code). The existence of such fault links can be used to guide code reviews, walkthroughs, and the testing of new code, as well as code maintenance. It can also be used to direct fault seeding. The results of these methods have been validated. Finally, we verified the usefulness of the obtained fault links through an experiment conducted with graduate students. The results were encouraging.
236

FPGA TO POWER SYSTEM THEORIZATION FOR A FAULT LOCATION AND SPECIFICATION ALGORITHM

Yeoman, Christina 01 January 2013 (has links)
Fault detection and location algorithms have allowed the power industry to evolve the power grid from the traditional model into a smart grid. This thesis implements an established algorithm for detecting faults, together with an impedance-based algorithm for locating where on the line a fault has occurred, and develops a smart algorithm in Simulink for future HDL conversion. The ways in which this implementation can be used to create a smarter grid form the fundamental basis of this research. Simulink was used to create a two-bus power system and its environment variables, and MATLAB was used to program the algorithm so that it could be implemented on an FPGA; the ways in which one can retrieve the data from a power line have been theorized. This novel approach to creating a smarter grid was theorized and designed so that real-world applications may be implemented in the future.
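A single-ended impedance-based location scheme of the kind this entry mentions can be sketched very simply. All values below are invented for illustration; the thesis's own algorithm and line data are not reproduced here. The distance to the fault is estimated from the apparent reactance seen at the measuring bus divided by the line's reactance per kilometre.

```python
import cmath

v_relay = 132e3 * cmath.exp(1j * 0.0)   # voltage phasor at the relay, V (assumed)
i_fault = 3e3 * cmath.exp(-1j * 0.7)    # fault-current phasor, A (assumed)
x_per_km = 0.4                          # line reactance, ohm/km (assumed)

z_apparent = v_relay / i_fault          # apparent impedance seen by the relay
distance_km = z_apparent.imag / x_per_km
```

Using the reactive part rather than the full impedance magnitude reduces the sensitivity of the estimate to the unknown resistance at the fault point, which is the usual motivation for this family of methods.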
237

Fault tolerant techniques for asynchronous networks on chip

Zhang, Guangda January 2016 (has links)
Advancing semiconductor technology is boosting the core count on a single chip to achieve continuously increasing performance, posing a growing demand for scalable, efficient and reliable on-chip interconnection. However, this advance also makes the electronics increasingly vulnerable to faults. Inter-core connection is increasingly provided by Networks-on-Chip (NoCs), typically using conventional synchronous designs. Scaling makes it increasingly hard to avoid problems with clock distribution, and in many chips a single synchronous domain is in any case inappropriate. Event-driven asynchronous NoCs have emerged as a promising replacement for the well-studied synchronous designs. Asynchronous NoCs have many promising advantages over synchronous ones; however, their fault tolerance has rarely been studied. Implemented in a Quasi-Delay-Insensitive (QDI) fashion, asynchronous NoCs can achieve high timing-robustness but show complicated failure scenarios in the presence of faults and behave differently from synchronous ones, posing a challenge to asynchronous circuit advocates. This research studies the impact of different faults on QDI NoC fabrics and presents thorough and systematic fault-tolerant solutions at the circuit level, providing a holistic, efficient and resilient interconnection solution for QDI NoCs. The contributions of this research include: 1) a thorough analysis of fault impact on QDI NoCs; 2) a Delay-Insensitive Redundant Check (DIRC) coding scheme protecting QDI links from transient faults; 3) a novel time-out technique detecting the fault-caused physical-layer deadlock in a QDI NoC (the adaptability of a QDI circuit to timing variation makes it vulnerable to this kind of deadlock); 4) a fine-grained recovery technique utilising a Spatial Division Multiplexing (SDM) implementation to recover the deadlocked network from a link fault.
Both unprotected and protected QDI NoCs are implemented, along with a fault simulation environment, to provide a detailed performance and fault-tolerance evaluation of these techniques. The improvements to the NoC operation, together with the costs in circuit overhead and throughput are enumerated using a typical example of QDI interconnection.
238

Increasing the availability of a service through Hot Passive Replication / Öka tillgängligheten för en tjänst genom hot passive replication

Bengtson, John, Jigin, Ola January 2015 (has links)
This bachelor thesis examines how redundancy can be used to tolerate a process crash fault on a server in a system developed for emergency situations. The goal is to increase the availability of the service the system delivers. The redundant solution uses hot passive replication with one primary replica manager and one backup replica manager. For this approach, code for updating the backup, code for establishing a new primary, and code implementing fault detection to detect a process crash has been written. After implementing the redundancy, the redundant solution was evaluated. The first part of the evaluation showed that the redundant solution can continue to deliver the service in the event of a process crash on the primary replica manager. The second part showed that the average response time for an upload request and a download request had increased by 31% compared to the non-redundant solution. The standard deviation of the response times was calculated, showing that the response time of an upload request could be considerably higher than the average. This large deviation was investigated, and the conclusion was that database insertion was the cause.
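The scheme described above can be sketched as follows. This is invented demonstration code, not the thesis implementation: the primary applies each request and synchronously forwards the state update to the backup (the "hot" part of hot passive replication), while a failure detector promotes the backup to primary when heartbeats stop arriving within the timeout.

```python
import time

class ReplicaManager:
    def __init__(self, name):
        self.name = name
        self.state = {}                 # replicated application state
        self.is_primary = False

    def handle(self, key, value, backup=None):
        """Primary path: apply the update, then keep the backup current."""
        self.state[key] = value
        if backup is not None:
            backup.apply_update(key, value)

    def apply_update(self, key, value):
        self.state[key] = value

class FailureDetector:
    """Declares the primary crashed if no heartbeat arrives within the timeout."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def primary_alive(self):
        return time.monotonic() - self.last_heartbeat < self.timeout

primary, backup = ReplicaManager("A"), ReplicaManager("B")
primary.is_primary = True
detector = FailureDetector(timeout=0.05)

primary.handle("upload-1", b"payload", backup=backup)   # normal operation
time.sleep(0.06)                        # simulated primary crash: heartbeats stop
if not detector.primary_alive():
    backup.is_primary = True            # failover: backup becomes the new primary
```

Because the backup was updated synchronously on every request, it can serve the replicated state immediately after taking over; the synchronous update on the request path is also what adds the response-time overhead the evaluation measured.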
239

Teorie a praxe vyhledávání poruch na kabelových vedeních VN v DS E.ON / Theory and practice of fault location on MV cable lines in E.ON distribution networks

Macků, Dominika January 2019 (has links)
This thesis gives an overview of faults on high-voltage cable transmission lines in the distribution system of the E.ON company. The first part introduces the reader to the topic of cable transmission lines, the types of commonly used cables, and the most commonly occurring faults. The second part explains the methods professional personnel use to locate faults in cable transmission lines. The third part analyses a group of selected faults that have occurred. The final part presents statistics on fault occurrence in the South Moravian region from 1.1.2017 until 1.1.2019.
240

Faults and their influence on the dynamic behaviour of electric vehicles

Wanner, Daniel January 2013 (has links)
The increase of electronics in road vehicles comes along with a broad variety of possibilities in terms of safety, handling and comfort for the users. A rising complexity of the vehicle subsystems and components accompanies this development and has to be managed by increased electronic control. More and more elements, such as sensors, actuators or software code, can cause a failure, either independently or by mutually influencing each other. There is a need for a structured approach to sorting these faults from a vehicle dynamics stability perspective. This thesis addresses the issue by suggesting a fault classification method and fault-tolerant control strategies. The focus is on typical faults of the electric driveline and the control system; however, mechanical and hydraulic faults are also considered. During the work, a broad failure mode and effect analysis was performed, and the faults were modeled and grouped based on their effect on the vehicle's dynamic behaviour. A method is proposed and evaluated in which faults are categorized into different levels of controllability, i.e. levels of how easy or difficult it is to control a fault, for the driver but also for a control system. Further, fault-tolerant control strategies are suggested that can handle a fault with a critical controllability level. Two strategies are proposed and evaluated, based on the control allocation method and an electric vehicle with typical faults. It is shown that the control allocation approaches give less critical trajectory deviation than no active control or a regular Electronic Stability Control algorithm. To conclude, this thesis contributes a methodology for analysing and developing fault-tolerant solutions for electric vehicles with improved traffic safety.
