321

Development of a Decision Support Tool to Test Energy Management Alarming Thresholds

Tarjan, Aaron 05 May 2011 (has links)
A novel model was developed to evaluate the use of short data sets for testing alarm thresholds as part of an energy management program. Several years of 15-minute interval data from five buildings in Jacksonville, Florida were utilized. The model aggregated the data by day type and occupancy, yielding four period types. For each building meter, daily usage by period type was tested against the threshold to determine whether an alarm would be triggered; each alarm was then assigned a reward and a cost based on the type and duration of the response. The risk-management value was converted to dollars in order to normalize energy and time on a common scale. A 5-month window was determined to be the most appropriate short data set, and it was concluded that thresholds should be set between 0.8 and 1.0 standard deviations above the average of the short window. Several recommendations for further study are also included.
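The recommended rule reduces to a simple statistical check. The following is a minimal sketch of that rule, not the author's model; the function names and the synthetic data are hypothetical, and roughly 150 daily readings are assumed to make up the 5-month window:

    import numpy as np

    def alarm_threshold(window_usage, k=0.9):
        # Threshold = mean + k * std of the short window; the thesis
        # recommends k between 0.8 and 1.0 over a 5-month window.
        return np.mean(window_usage) + k * np.std(window_usage)

    def alarm_triggered(daily_usage, window_usage, k=0.9):
        # True when a day's usage for a given period type exceeds the threshold.
        return daily_usage > alarm_threshold(window_usage, k)

    # Hypothetical example: ~5 months of daily kWh for one day-type/occupancy period
    rng = np.random.default_rng(0)
    window = rng.normal(1200.0, 150.0, size=150)
    print(alarm_triggered(1600.0, window))  # True: ~2.7 std above the mean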
322

Motion and evolution of the Chaochou Fault, Southern Taiwan

Hassler, Lauren E. 01 November 2005 (has links)
The Chaochou Fault (CCF) is both an important lithologic boundary and a significant topographic feature in the Taiwan orogenic belt. It is the geologic boundary between the Slate Belt to the east and the Western Foothills to the west. Although the fault is known to be a high-angle oblique sinistral thrust fault in places, both its kinematic history and its current role in the development of the orogen are poorly understood. Field fabric data suggest that structural orientations vary along strike, particularly in the middle segment, the suspected location of the intersection of the on-land Eurasian continent-ocean boundary and the Luzon Island Arc. Foliation/solution cleavage is oriented NE-SW in the northern and southern sections, but ESE-WNW in the middle segment. Slip lineations also reveal a change in fault motion from dip-parallel in the north to a more scattered pattern in the south. This correlates somewhat with recent GPS results, which indicate that the direction of current horizontal surface motion changes along strike from nearly perpendicular to the fault in the northern field area to oblique and nearly parallel to the fault in the southern field area. The magnitude of vertical surface motion vectors, relative to Lanyu Island, decreases to the south. Surface morphology parameters, including mountain-front sinuosity and the valley-floor-width to valley-height ratio, indicate higher activity and uplift in the north. These observations correlate well with published apatite/zircon fission-track data that indicate un-reset ages in the south and reset ages in the northern segment. Geodetic and geomorphic data indicate that the northern segment of the CCF and the Slate Belt are currently undergoing rapid uplift related to oblique arc-continent collision between the Eurasian continent and the Luzon arc. The southern segment is significantly less active, perhaps because that part of the orogen is not yet involved in direct arc-continent collision.
323

The development and testing of an automated building commissioning analysis tool (ABCAT)

Curtin, Jonathan M. 15 May 2009 (has links)
More than $18 billion worth of energy is wasted annually in the U.S. commercial building sector. Retro-commissioning services have proven successful, with relatively short payback times, but tools that support the commissioning effort in maintaining optimal energy performance in a building are not readily available. Current work in fault detection and diagnostics (FDD) of HVAC systems, given its cost, complexity and reliance on improved sensor technology, will require years to become the mainstay of building energy management. In the meantime, a simplified system is needed today: one robust and universal enough to use in most types of buildings, that addresses the main concerns of building owners by focusing on consumption deviations that significantly affect the bottom line, and that provides some assistance in remediating these problems. This thesis presents the results of the development and testing of an advanced prototype of the Automated Building Commissioning Analysis Tool (ABCAT), which detected three significant energy consumption deviations across four live building implementations. The ABCAT has also demonstrated additional functional benefits, such as tracking the savings due to retro-commissioning efforts and verifying billed utility data, in addition to its primary function of detecting significant consumption faults. Although similar attempts have been made in FDD at the whole-building level, the simplicity, flexibility and robustness of this new approach are expected to give it the characteristics desired and desperately needed by industry professionals.
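The core detection idea, comparing measured whole-building consumption against an expected baseline (e.g., a calibrated simulation) and flagging sustained deviations, can be sketched briefly. This is an illustration of the general approach, not ABCAT itself; the tolerance, persistence window and names are hypothetical:

    import numpy as np

    def detect_consumption_fault(measured, baseline, rel_tol=0.15, min_days=5):
        # Flag a sustained whole-building deviation: measured use exceeds the
        # baseline prediction by more than rel_tol for min_days consecutive days.
        over = (np.asarray(measured) - np.asarray(baseline)) / np.asarray(baseline) > rel_tol
        run = 0
        for day, flag in enumerate(over):
            run = run + 1 if flag else 0
            if run >= min_days:
                return day  # first day the fault is confirmed
        return None  # no significant sustained deviation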
324

Automated Fault Location In Smart Distribution Systems

Lotfifard, Saeed August 2011 (has links)
Fault location in distribution systems is a critical component of outage management and service restoration, and it directly impacts feeder reliability and the quality of the electricity supply. Improving fault location methods supports the Department of Energy (DOE) "Grid 2030" initiatives for grid modernization by improving the reliability indices of the network; better customer average interruption duration index (CAIDI) and system average interruption duration index (SAIDI) values are direct advantages of a suitable fault location method. As distribution systems gradually evolve into smart distribution systems, applying more accurate fault location methods based on data gathered from the various Intelligent Electronic Devices (IEDs) installed along the feeders becomes quite feasible. How this may be done, and what methodology is needed to arrive at such a solution, are questions raised and then systematically answered. To reach this goal, the following tasks are carried out: 1) Existing fault location methods in distribution systems are surveyed and their strengths and caveats are studied. 2) The characteristics of IEDs in distribution systems are studied, and their impact on fault location method selection and implementation is detailed. 3) A systematic approach for selecting the optimal fault location method is proposed and implemented to pinpoint the most promising algorithms for a given set of application requirements. 4) An enhanced fault location method based on voltage sag data gathered from IEDs along the feeder is developed. The method solves the problem of multiple fault location estimates and produces more robust results. 5) An optimal IED placement approach for the enhanced fault location method is developed, and practical considerations for its implementation are detailed.
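A common formulation of voltage-sag-based location ranks candidate fault locations by how well the sags they would produce match the measurements. The sketch below illustrates that general family, not the enhanced method developed in the thesis; all names and numbers are hypothetical:

    import numpy as np

    def locate_fault(measured_sags, candidate_sags):
        # Rank candidate fault locations by how closely the sags they would
        # produce at each IED (e.g., from short-circuit simulation) match the
        # measured per-IED sag magnitudes; return the best-matching node.
        measured = np.asarray(measured_sags)
        errors = {node: float(np.linalg.norm(measured - np.asarray(sags)))
                  for node, sags in candidate_sags.items()}
        return min(errors, key=errors.get)

    # Hypothetical example: 3 IEDs, 2 candidate fault locations (sags in p.u.)
    print(locate_fault([0.42, 0.55, 0.71],
                       {"node_12": [0.40, 0.57, 0.70],
                        "node_27": [0.60, 0.48, 0.90]}))  # -> node_12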
325

Otherworld - Giving Applications a Chance to Survive OS Kernel Crashes

Depoutovitch, Alexandre 06 January 2012 (has links)
The default behavior of all commodity operating systems today is to restart the system when a critical error is encountered in the kernel. This terminates all running applications, with an attendant loss of the non-persistent "work in progress". Our thesis is that an operating system kernel is simply a component of a larger software system, one that is logically well isolated from other components such as applications, and that it should therefore be possible to reboot the kernel without terminating everything else running on the same system. To prove this thesis, we designed and implemented a new mechanism, called Otherworld, that microreboots the operating system kernel when a critical error is encountered in the kernel, and does so without clobbering the state of the running applications. After the kernel microreboot, Otherworld attempts to resurrect the applications that were running at the time of failure. It does so by restoring the application memory spaces, open files and other resources. In the default case it then continues executing the processes from the point at which they were interrupted by the failure. Optionally, applications can register user-level recovery procedures with the kernel, in which case Otherworld passes control to these procedures after having restored their process state. Recovery procedures might check the integrity of application data and recover resources that Otherworld was not able to restore. We implemented Otherworld in Linux, but we believe the technique can be applied to all commodity operating systems. In an extensive set of experiments on real-world applications (MySQL, Apache/PHP, Joe, vi), we show that Otherworld is capable of successfully microrebooting the kernel and restoring the applications in over 97% of cases. In the default case, Otherworld adds negligible overhead to normal execution. In an enhanced mode, Otherworld can provide extra application memory protection with an overhead of between 4% and 12%.
327

Broken Bar Detection in Synchronous Machines Based Wind Energy Conversion System

Rahimian, Mina Mashhadi August 2011 (has links)
Electrical machines are subject to different types of failures. Early detection of incipient faults and prompt maintenance may prevent costly consequences. Fault diagnosis is especially important for wind turbines, because they sit atop extremely high towers and are therefore difficult to access; for offshore plants, bad weather can prevent any repair action for several weeks. Some new wind turbines use synchronous generators connected directly to the grid without power converters. Despite intensive research efforts directed at rotor fault diagnosis in induction machines, the research pertinent to damper winding failure in synchronous machines is very limited. This dissertation is concerned with an in-depth study of damper winding failure and its traceable symptoms in different machine signals and parameters. First, a model of a synchronous machine with a damper winding, based on the winding function approach, is presented. Next, simulation and experimental results are presented and discussed; a specially designed inside-out synchronous machine with a damper winding is employed for the experimental setup. Finally, a novel analytical method is developed to predict the behavior of the left-sideband amplitude for different numbers and locations of broken bars. This analysis is based on magnetic field theory and unbalanced multiphase circuit analysis. It is found that, due to the asymmetrical structure of the damper winding, the left-sideband component in the stator current spectrum of a synchronous machine during steady-state asynchronous operation is not similar to that of an induction machine with broken bars. As a result, the motor current signature analysis (MCSA) used to detect rotor failures in induction machines cannot be applied directly to detect broken damper bars in synchronous machines. Instead, a novel intelligent-systems-based approach is developed that can identify the severity of the damper winding failure. This approach can potentially be used in a non-invasive condition monitoring system to monitor the deterioration of a synchronous motor's damper winding as the number of broken bars increases over time. Other informative features, such as the speed spectrum, transient time, torque-speed curve and rotor slip, are also identified for damper winding diagnosis.
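For reference, the classic induction-machine MCSA signature that the dissertation re-examines is the left sideband at f(1 - 2s), where f is the supply frequency and s the slip. A minimal sketch of measuring that component follows; it is illustrative only (standard induction-machine MCSA, not the dissertation's method), and the names and synthetic signal are hypothetical:

    import numpy as np

    def left_sideband_amplitude(stator_current, fs, f_supply, slip):
        # Amplitude of the spectral component at f(1 - 2s), the classic
        # broken-rotor-bar signature in induction-machine current spectra.
        n = len(stator_current)
        spectrum = np.abs(np.fft.rfft(stator_current * np.hanning(n))) / n
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        f_sb = f_supply * (1.0 - 2.0 * slip)  # left sideband frequency
        return spectrum[np.argmin(np.abs(freqs - f_sb))]

    # Hypothetical example: 60 Hz supply, 2% slip -> sideband at 57.6 Hz
    t = np.arange(0, 10, 1 / 2000.0)
    i = np.sin(2 * np.pi * 60 * t) + 0.02 * np.sin(2 * np.pi * 57.6 * t)
    print(left_sideband_amplitude(i, 2000.0, 60.0, 0.02))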
328

RADIC: a powerful fault-tolerant architecture

Amancio Duarte, Angelo 28 June 2007 (has links)
Fault tolerance has become a major issue for computer engineers and software developers, because the occurrence of faults increases the cost of using a parallel computer. On the other hand, the activities performed by the fault-tolerance mechanism reduce the performance of the system from the user's point of view. This thesis presents RADIC (Redundant Array of Distributed Independent Fault Tolerance Controllers), a fault-tolerant architecture for parallel computers that is simultaneously transparent, decentralized, flexible and scalable. RADIC implements a fully distributed controller to manage faults; this controller rests on dedicated processes that share the user's resources in the parallel computer. In order to validate the operation of RADIC, we created RADICMPI, a message-passing implementation that includes the elements of the RADIC architecture and complies with the MPI-1 standard. RADICMPI served to verify the functionality of RADIC in scenarios with and without failures in the parallel computer. For the tests, we implemented a fault injector in RADICMPI in order to create the scenarios required to validate the operation of the RADIC distributed controller. We also used RADICMPI to study the practical aspects of using RADIC in a real environment. This allowed us to evaluate the operation of the architecture in practical situations, and to study the influence of the RADIC parameters on system performance. The results proved that the RADIC architecture operates correctly and that it is flexible, scalable, transparent and decentralized. Furthermore, RADIC establishes a powerful fault-tolerant architecture model for message-passing systems.
329

Multipath Fault-tolerant Routing Policies to deal with Dynamic Link Failures in High Speed Interconnection Networks

Zarza, Gonzalo Alberto 08 July 2011 (has links)
Interconnection networks communicate and link together the processing units of modern high-performance computing systems. In this context, network faults have an extremely high impact, since most routing algorithms were not designed to tolerate faults. Because of this, as few as one single link failure may stall messages in the network, leading to deadlock configurations or, even worse, preventing applications running on the computing system from finishing. In this thesis we present fault-tolerant routing policies based on the concepts of adaptability and deadlock freedom, capable of serving interconnection networks affected by a large number of link failures. Two contributions are presented: a multipath fault-tolerant routing method, and a novel, scalable deadlock avoidance technique. The first contribution is the adaptive multipath routing method Fault-tolerant Distributed Routing Balancing (FT-DRB). This method is designed to exploit the communication-path redundancy available in many network topologies, allowing interconnection networks to perform in the presence of a large number of faults. The second contribution is the scalable deadlock avoidance technique Non-blocking Adaptive Cycles (NAC), specifically designed for interconnection networks suffering from a large number of failures. This technique was designed and implemented to ensure freedom from deadlock in the proposed fault-tolerant routing method, FT-DRB.
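The underlying idea of routing around failed links can be sketched as a search that simply excludes them. This illustrates multipath fault-tolerant routing in general, not FT-DRB's actual algorithm or its deadlock-avoidance machinery; the mesh and names are hypothetical:

    from collections import deque

    def alternative_path(adjacency, src, dst, failed_links):
        # Breadth-first search for a path from src to dst that avoids failed
        # links. adjacency maps node -> list of neighbours; failed_links is a
        # set of frozenset({a, b}) pairs that must not be traversed.
        parent = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:  # reconstruct the path back to the source
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nxt in adjacency.get(node, []):
                if nxt not in parent and frozenset((node, nxt)) not in failed_links:
                    parent[nxt] = node
                    queue.append(nxt)
        return None  # no fault-free path exists

    # Hypothetical 2x2 mesh with the 0-1 link failed: route 0 -> 1 via 2 and 3
    mesh = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(alternative_path(mesh, 0, 1, {frozenset((0, 1))}))  # [0, 2, 3, 1]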
330

Enhancement of defect diagnosis based on the analysis of CMOS DUT behaviour

Arumí i Delgado, Daniel 11 July 2008 (has links)
Transistor dimensions are scaled down with every new CMOS technology. Such a high level of integration has increased the complexity of the integrated-circuit manufacturing process, giving rise to new and complex failure mechanisms. Present diagnosis methodologies, however, cannot meet the challenges posed by future technologies. Furthermore, physical failure analysis, although indispensable, is not feasible on its own, since it requires high-cost equipment, tools and qualified personnel. For this reason, a detailed understanding of defect behaviour is a key factor in developing improved diagnosis methodologies that can overcome the challenges of nanometer technologies. In this context, this thesis analyzes existing and new failure mechanisms and proposes new methodologies to improve the diagnosis of faults, focused on bridging and open faults.

IDDQ testing is a well-known technique for the diagnosis of bridging faults. However, previous works have not considered the impact of the downstream current on the diagnosis of such faults. In this thesis, the impact of the downstream current and its dependence on the power supply voltage (VDD) are analyzed and experimentally measured. Furthermore, a multiple-level IDDQ-based diagnosis technique is presented, which takes benefit from the currents generated by the different network excitations. The technique is successfully applied to real defective devices from 0.18 µm and 90 nm technologies.

As an alternative to current-based techniques, shmoo plots can also be useful for diagnosis purposes. Low voltage has traditionally been considered an advantageous condition for the detection of bridging faults. However, it is demonstrated that, in the presence of bridges connecting balanced n- and p-networks, high VDD values are also advantageous for detection, which translates directly into diagnosis applications. Experimental evidence of this fact is presented.

Regarding open faults, an experimental chip with intentionally inserted full and resistive open defects was designed and fabricated in a 0.35 µm technology. Experiments on the chip quantified the impact of the neighbouring coupling capacitances. For resistive opens, the experiments demonstrated the influence of the history effect and of the defect location on the delay. Traditionally, it has been reported that the highest delay is obtained when the resistive open is located at the beginning of the net. Nevertheless, this thesis demonstrates that this does not hold for low-resistance opens, for which the highest delay is obtained at an intermediate location; experimental measurements confirm this behaviour.

Based on the results obtained with the fabricated chip, a new methodology for the diagnosis of interconnect full open defects was developed. The FOS (Full Open Segment) method divides the interconnect line into different segments based on the topology of the faulty line. Knowing the logic state of the neighbouring lines, the floating-net voltage is predicted and compared with the experimental results obtained on the tester. The method has been successfully applied to a set of 0.18 µm defective devices.

Finally, the impact of gate tunnelling leakage currents on the behaviour of full open defects is analyzed. As technology dimensions are scaled down, the gate oxide becomes thin enough for tunnelling leakage currents to influence the behaviour of floating lines. They cause transient evolutions on the floating node until it reaches a steady state, which is technology dependent. It is experimentally demonstrated that these evolutions are on the order of seconds for a 0.18 µm technology; for future technologies, simulations show that they decrease to a few µs. Based on this effect, some full open faults present in 0.18 µm technology devices are diagnosed.
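A common first-order model for the voltage of a floating net left by a full open is the capacitive divider formed by its coupling capacitances to neighbouring lines. The sketch below illustrates that charge-sharing estimate; it is an assumption-level simplification of the kind of prediction the FOS method makes, not its exact model, and the values are hypothetical:

    def floating_net_voltage(neighbors):
        # Charge-sharing estimate: the floating net settles at the
        # capacitance-weighted average of its neighbours' voltages.
        # neighbors: list of (coupling_capacitance_fF, neighbor_voltage_V).
        total_c = sum(c for c, _ in neighbors)
        return sum(c * v for c, v in neighbors) / total_c

    # Hypothetical: two aggressors at 1.8 V (10 fF, 4 fF), one at 0 V (6 fF)
    print(floating_net_voltage([(10.0, 1.8), (4.0, 1.8), (6.0, 0.0)]))  # 1.26 V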
