221 |
A foundation for fault tolerant components / Leal, William, January 2001 (has links)
No description available.
|
222 |
New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs / Espinosa García, Jaime, 03 November 2016 (has links)
Thesis by compendium / [EN] The relevance of electronics to the safety of everyday devices has only grown, as an ever-greater share of their functionality is assigned to electronic components. This comes along with a constant demand for higher performance to meet such functionality requirements, while keeping power consumption and cost low. In this scenario, industry is struggling to provide a technology that meets all the performance, power, and price specifications, at the cost of increased vulnerability to several types of known faults and the appearance of new ones.
To address the new and growing fault types in these systems, designers have applied traditional techniques from safety-critical applications, which in general yield suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing dependability properties by enabling the hardware, firmware, and software levels to interact in the process; however, this potential has not yet been fully realized. Advances at every level are needed if flexible, robust, resilient, and cost-effective fault tolerance is to be achieved. The work presented here focuses on the hardware level, with a view toward eventual integration into a holistic approach.
The efforts in this thesis have focused on several issues: (i) introducing additional fault models, as required to adequately represent the physical effects emerging in modern manufacturing technologies; (ii) providing tools and methods to efficiently inject both the proposed and the classical fault models; (iii) analyzing the optimal method for assessing system robustness through extensive fault injection and subsequent correlation with higher-level layers, in an effort to cut development time and cost; (iv) providing new detection methodologies to cope with the challenges captured by the proposed fault models; (v) proposing mitigation strategies aimed at tackling these new threat scenarios; and (vi) devising an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way.
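As a rough illustration of objective (ii), the sketch below simulates a single-bit-flip fault-injection campaign against a toy 8-bit workload. The function names and the workload are hypothetical, chosen only to show how injected flips get classified as silent (logically masked) or corrupting; they are not the thesis's tools.

```python
def inject_bitflip(value, bit, width=8):
    """Flip one bit of an unsigned register value (classic single-event-upset model)."""
    return (value ^ (1 << bit)) & ((1 << width) - 1)

def workload(a):
    # Toy circuit under test: only the high nibble reaches the output,
    # so faults injected into the low nibble are logically masked.
    return a & 0xF0

golden = workload(0x3C)  # fault-free reference output
results = {}
for bit in range(8):  # campaign: one injection per bit position
    faulty = workload(inject_bitflip(0x3C, bit))
    results[bit] = "silent" if faulty == golden else "corrupted"
print(results)  # bits 0-3 are silent, bits 4-7 corrupt the output
```

Real campaigns sweep injection location and time over simulated or emulated hardware, but the classification step (compare against a golden run) has this shape.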
The outcomes of this thesis constitute a suite of tools and methods to help the designer of critical systems develop robust, validated, on-time designs tailored to the application.
Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146 / Compendium
|
223 |
Thwarting Electromagnetic Fault Injection Attack Utilizing Timing Attack Countermeasure / Ghodrati, Marjan, 23 January 2018 (has links)
The extent of embedded systems' role in modern life has continuously increased over the years. Moreover, embedded systems are assuming highly critical functions, with security requirements greater than ever before. Electromagnetic fault injection (EMFI) is an efficient class of physical attacks that can compromise the immunity of secure cryptographic algorithms. Despite successful EMFI attacks, the effects of electromagnetic injection on a processor are not well understood. This includes a lack of solid knowledge about how EMFI affects the circuit and deviates it from proper functionality. The effects of EM glitches on the global networks of a chip, such as the power, clock, and reset networks, are also unknown. We believe that to properly model EMFI and develop effective countermeasures, a deeper understanding of the EM effect on a chip is needed. In this thesis, we present a bottom-up analysis of EMFI effects on a RISC microprocessor. We study these effects at three levels: the wire level, the chip-network level, and the gate level, considering parameters such as EM-injection location and timing. We conclude that EMFI induces local timing errors, implying that current timing attack detection and prevention techniques can be adapted to counter EMFI. To further validate our hypothesis, we integrate a configurable timing sensor into our microprocessor and evaluate its effectiveness against EMFI. / Master of Science / In the current technology era, embedded systems play a critical role in everyone's life. They collect very precise and private information about their users, so they can become a target for attackers seeking to steal this valuable information. As a result, the security of these devices has become a serious issue.
Electromagnetic fault injection (EMFI) is an efficient class of physical attacks that can inject faults into the state of a processor and deviate it from its proper functionality. Despite its growing popularity among attackers, the limitations and capabilities of this attack are not well understood. Several detection techniques have been proposed so far, but most are either very expensive to implement or not very effective. We believe that to properly model EMFI and develop effective countermeasures, a deeper understanding of the EM effect on a chip is needed. In this research work, we perform a bottom-up analysis of EM fault injection on a RISC microprocessor, conduct a comprehensive study at the wire, chip-network, and gate levels, and finally propose a solution.
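The thesis's conclusion, that EMFI manifests as local timing errors, suggests detectors of the Razor double-sampling family. The sketch below is a simplified software model of that idea only; the function, parameters, and thresholds are illustrative, not the configurable timing sensor actually integrated in the thesis.

```python
def razor_detect(data_arrival_ns, clk_period_ns, shadow_delay_ns):
    """Razor-style timing-error detection (software model).

    The main flip-flop samples at the clock edge; a shadow latch samples
    shadow_delay_ns later. If the data arrives between the two sample
    points, the two copies disagree and an error is flagged.
    """
    main_ok = data_arrival_ns <= clk_period_ns
    shadow_ok = data_arrival_ns <= clk_period_ns + shadow_delay_ns
    return (not main_ok) and shadow_ok  # True means a timing error was caught

# Nominal path: meets timing, no flag raised
print(razor_detect(8.0, 10.0, 2.0))   # False
# An EM glitch slows the path past the edge but inside the detection window
print(razor_detect(11.5, 10.0, 2.0))  # True
# Glitches that push arrival beyond the shadow window escape this detector
print(razor_detect(13.0, 10.0, 2.0))  # False
```

The third case is why such a sensor must be configurable: the detection window bounds which induced delays are observable.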
|
224 |
Fault Injection Attacks on RSA and CSIDH / Chiu, TingHung, 16 May 2024 (has links)
Fault injection attacks are a powerful technique that intentionally induces faults during computations to leak secret information. This thesis studies fault injection attack techniques. It first categorizes various fault attack methods by fault model and fault analysis, and gives examples of fault attacks on symmetric key and public key cryptosystems. The thesis then demonstrates fault injection attacks on RSA-CRT and constant-time CSIDH. A fault attack consists of two main components: fault modeling, which examines methods for injecting faults into a target device, and fault analysis, which analyzes the resulting faulty outputs to deduce secrets in each cryptosystem. The thesis aims to provide a comprehensive survey of fault attack research, directions for further study on securing real-world cryptosystems against fault injection attacks, tests of fault injection attacks on RSA-CRT, and a demonstration and evaluation of fault injection attacks on constant-time CSIDH. / Master of Science / Fault injection attacks are attacks in which the attacker intentionally induces a fault in a device during operation in order to obtain or recover secret information. The induced fault impacts the operation and causes a faulty output, which provides information to the attacker. Many cryptographic algorithms and devices have been proven vulnerable to fault injection attacks. Cryptography is essential nowadays, as it is used to secure and protect confidential data; if a cryptosystem is broken, many of today's systems would be compromised. This thesis therefore focuses on fault injection attacks on cryptosystems. It introduces the background of fault injection attacks, categorizes them into different types, and provides examples of attacks on cryptosystems. The thesis studies how these attacks work, including how they induce a fault in the device and how the resulting faulty output is analyzed.
Specifically, I examine how these attacks affect two commonly used classes of cryptography: symmetric key cryptography and public key cryptography. Additionally, I implement fault injection attacks on RSA-CRT and Commutative Supersingular Isogeny Diffie-Hellman (CSIDH). This research aims to further the understanding of potential attack methods on different cryptosystems, and to enable the exploration of mitigations and protections in the future.
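The best-known fault attack on RSA-CRT is the classic Bellcore attack, which the RSA-CRT work here presumably builds on: if one CRT half of a signature is faulted, the gcd of the faulty verification residue with the modulus reveals a secret factor. A self-contained sketch with toy parameters (real keys use primes thousands of bits long; all values below are illustrative):

```python
import math

# Toy RSA parameters; requires Python 3.8+ for pow(x, -1, m)
p, q = 1009, 1013
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent
m = 4242                           # message representative

# CRT signing: two small exponentiations instead of one big one
dp, dq = d % (p - 1), d % (q - 1)
sp, sq = pow(m, dp, p), pow(m, dq, q)

def crt_combine(sp, sq):
    """Garner-style recombination of the two CRT halves."""
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + q * h

s = crt_combine(sp, sq)
assert pow(s, e, N) == m  # fault-free signature verifies

# Inject a fault into the mod-p half only (e.g., one flipped bit)
s_faulty = crt_combine(sp ^ 1, sq)

# Bellcore analysis: the faulty signature is still correct mod q but
# wrong mod p, so gcd(s_faulty^e - m, N) exposes the secret factor q.
recovered_q = math.gcd((pow(s_faulty, e, N) - m) % N, N)
print(recovered_q == q)  # True: one faulty signature factors N
```

This illustrates the two components the thesis separates: fault modeling (the flipped bit in `sp`) and fault analysis (the gcd computation on the faulty output).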
|
225 |
Fault Discrimination Algorithm for Busbar Differential Protection Relaying Using Partial Operating Current Characteristics / Hossain, Monir, 16 December 2016 (has links)
Differential protection is a unit protection system applied to protect a particular unit of a power system. In protection terminology, a unit is known as a zone, which is equivalent to a simple electrical node. In recent times, low-impedance current differential protection schemes based on percentage-restrained characteristics have been widely used to protect busbar systems. The main application issue with these schemes is mis-operation due to current transformer (CT) saturation during close-in external faults. Researchers have suggested various solutions to this problem; however, individually they are not sufficient to resolve all mis-operation scenarios. This thesis presents a new bus differential algorithm that defines alternative partial operating current characteristics of a differential protection zone and investigates their performance for all practical bus faults. The mathematical model of the partial operating current and the operating principle of the proposed bus differential relay are described in detail. A CT saturation detection algorithm, which includes fast and late CT saturation detection techniques, is incorporated into the relay design to increase the sensitivity of the partial-operating-current-based internal-external fault discriminator for high-impedance internal faults. The performance of the proposed relay is validated by extensive tests considering all possible fault scenarios.
|
226 |
Spatiotemporal Evolution of Pleistocene and Late Oligocene-Early Miocene Deformation in the Mecca Hills, Southernmost San Andreas Fault Zone / Moser, Amy C., 01 May 2017 (has links)
Seismogenically active faults (those that produce earthquakes) are very complex systems that constantly change through time. When an earthquake occurs, the rocks surrounding a fault (the “fault rocks”) become altered or damaged. Studying these fault rocks directly can inform what processes operated in the fault and how the fault evolved in space and time. Examining these key aspects of faults helps us understand the earthquake hazards of active fault systems.
The Mecca Hills, southern California, are a set of hills adjacent to the southernmost San Andreas Fault. Their topography is related to motion on the San Andreas Fault, which poses the largest seismic hazard in the contiguous United States. The southernmost San Andreas Fault, including the Mecca Hills study location, may be reaching the end of its earthquake cycle and is due for a major, potentially catastrophic earthquake. The seismic hazards of the region, coupled with its proximity to major populated areas (the Coachella Valley and the Los Angeles Basin), make it a critical research area for understanding fault zone evolution and the protracted history of fault development.
The goal of this thesis was to directly examine the fault rocks in the Mecca Hills to understand how San Andreas-related faults in this area have evolved and behaved through time. This study integrates a variety of field and laboratory techniques to characterize the structural, geochemical, and thermal properties of the Mecca Hills fault rocks. The results herein document two distinct phases of deformation in the rocks exposed in the Mecca Hills, one around 24 million years ago and the other in the last one million years. This more recent phase of deformation is characterized by fault block exhumation and fluid flow in the fault zones, likely related to changing dynamics of the southernmost San Andreas Fault system. The older event informs how and when these rocks came close to Earth’s surface before the San Andreas Fault initiated.
|
227 |
The Progressive Evolution of the Champlain Thrust Fault Zone: Insights from a Structural Analysis of its Architecture / Merson, Matthew, 01 January 2018 (links)
Near Burlington, Vermont, the Champlain Thrust fault placed massive Cambrian dolostones over calcareous shales of Ordovician age during the Ordovician Taconic Orogeny. Although the Champlain Thrust has been studied previously throughout the Champlain Valley, the architecture and structural evolution of its fault zone have never been systematically defined. To document these fault zone characteristics, a detailed structural analysis of multiple outcrops was completed along a 51 km transect between South Hero and Ferrisburgh, Vermont.
The Champlain Thrust fault zone is predominately within the footwall and preserves at least four distinct events that are heterogeneous in both style and slip direction. The oldest stage of structures—stage 1—comprises bedding-parallel thrust faults that record a slip direction of top-to-the-W and generated localized fault propagation folds of bedding and discontinuous cleavages. This stage defines the protolith zone and has a maximum upper boundary of 205 meters below the Champlain Thrust fault surface. Stage 2 structures define the damage zone and include two sets of subsidiary faults that form thrust duplexes, which truncate older recumbent folds of bedding planes and early bedding-parallel thrusts. Slickenlines along stage 2 faults record a change in slip direction from top-to-the-W to top-to-the-NW. The damage zone is ~197 meters thick, with its upper boundary marking the lower boundary of the fault core. The core, which is ~8 meters thick, is marked by the appearance of mylonite, phyllitic shales, fault gouge, fault breccia, and cataclasite-lined faults. In addition, stage 3 sheath folds of bedding and cleavage are preserved, as well as tight folds of stage 2 faults. Stage 3 faults include thrusts that record slip as top-to-the-NW and -SW, and coeval normal faults that record slip as top-to-the-N and -S. The Champlain Thrust surface is the youngest event, as it cuts all previous structures, and records fault reactivation with an early top-to-the-W slip direction and a later top-to-the-S slip. Axes of mullions on this surface trend to the SE and do not parallel slickenlines.
The Champlain Thrust fault zone evolved asymmetrically across its principal slip surface through the process of strain localization and fault reactivation. Strain localization is characterized by the changes in relative age, motion direction along faults, and style of structures preserved within the fault zone. Reactivation of the Champlain Thrust surface and the corresponding change in slip direction was due to the influence of pre-existing structures at depth. This study defines the architecture of the Champlain Thrust fault zone and documents the importance of comparing the structural architecture of the fault zone core, damage zone, and protolith to determine the comprehensive fault zone evolution.
|
228 |
Fault Isolation in Distributed Embedded Systems / Biteus, Jonas, January 2007 (has links)
To improve the safety, reliability, and efficiency of automotive vehicles and other technical applications, embedded systems commonly use fault diagnosis, consisting of fault detection and isolation. Since many systems are constructed as distributed embedded systems with multiple control units, it is necessary to perform global fault isolation, for example using a central unit. However, the drawbacks of such a centralized method are the need for a powerful diagnostic unit and the sensitivity to disconnection of this unit. Two alternative methods to centralized fault isolation are presented in this thesis. The first method performs global fault isolation by a distributed sequential computation. For a set of studied systems, the method gives, compared to a centralized method, a mean reduction in maximum processor load on any unit of 40% and 70% for systems consisting of four and eight units, respectively. The second method instead extends the result of the local fault isolation performed in each unit such that the results are globally correct. By only considering the components affecting each specific unit, the extended result in each agent is kept small. For a studied automotive vehicle, the second method gives, compared to a centralized method, a mean reduction in the sizes of the results and in the maximum processor load on any unit of 85% and 90%, respectively. To perform fault diagnosis, diagnostic tests are commonly used. If the additional evaluation of tests cannot improve the fault isolation of a component, then the component is ready. Since the evaluation of a test comes at a cost in, for example, computational resources, it is valuable to minimize the number of tests that have to be evaluated before readiness is achieved for all components. A strategy is presented that decides in which order to evaluate tests such that readiness is achieved with as few test evaluations as possible.
Besides knowing how fault diagnosis is performed, it is also interesting to assess the effect that fault diagnosis has on for example safety. Since fault tree analysis often is used to evaluate safety, this thesis contributes with a systematic method that includes the effect of fault diagnosis in fault trees. The safety enhancement due to the use of fault diagnosis can thereby be analyzed and quantified.
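Consistency-based fault isolation of the kind extended here is commonly formalized as computing minimal hitting sets of conflict sets (Reiter-style diagnosis). The brute-force sketch below is illustrative only, not the thesis's distributed algorithm: it merges conflicts reported by two hypothetical control units into globally minimal diagnoses.

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """Return all minimal sets of components that hit every conflict set.

    Brute force over candidate sets in increasing size, so any superset of
    an already-found diagnosis is skipped as non-minimal. Exponential, and
    fine only for toy instances.
    """
    universe = sorted(set().union(*conflicts))
    hits = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            s = set(cand)
            if any(h <= s for h in hits):
                continue  # a smaller diagnosis already covers this candidate
            if all(s & c for c in conflicts):
                hits.append(s)
    return hits

# Unit A observed a conflict implicating the pump or the valve;
# unit B observed one implicating the valve or a shared sensor.
conflicts = [{"pump", "valve"}, {"valve", "sensor"}]
print(minimal_hitting_sets(conflicts))
# Global diagnoses: the valve alone, or the pump and sensor together.
```

A distributed scheme avoids shipping all conflicts to one node, but the globally correct result it must reproduce is exactly this set of minimal diagnoses.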
|
229 |
Observability and Economic aspects of Fault Detection and Diagnosis Using CUSUM based Multivariate Statistics / Bin Shams, Mohamed, January 2010 (has links)
This project focuses on the fault observability problem and its impact on plant performance and profitability. The study has been conducted along two main directions. First, a technique has been developed to detect and diagnose faulty situations that could not be observed by previously reported methods. The technique is demonstrated on a subset of faults typically considered for the Tennessee Eastman Process (TEP), which had been found unobservable in all previous studies. The proposed strategy combines the cumulative sum (CUSUM) of the process measurements with Principal Component Analysis (PCA). The CUSUM is used to enhance faults under conditions of small fault-to-noise ratio, while the use of PCA facilitates the filtering of noise in the presence of highly correlated data. Multivariate indices, namely the T2 and Q statistics based on the cumulative sums of all available measurements, were used for observing these faults. The ARLo.c was proposed as a statistical metric to quantify fault observability.
Following fault detection, the problem of fault isolation is treated. It is shown that for the particular faults considered in the TEP problem, contribution plots are not able to properly isolate the faults under consideration. This motivates the use of the CUSUM-based PCA technique, previously used for detection, to unambiguously diagnose the faults. The diagnosis scheme is performed by constructing a family of CUSUM-based PCA models corresponding to each fault and then testing whether the statistical thresholds related to a particular faulty model are exceeded or not, hence indicating the occurrence or absence of the corresponding fault. Although the CUSUM-based techniques were found successful in detecting abnormal situations as well as isolating the faults, long time intervals were required for both detection and diagnosis. The potential economic impact of the resulting delays motivates the second main objective of this project: a methodology to quantify the potential economic loss due to unobserved faults when standard statistical monitoring charts are used.
Since most chemical and petrochemical plants are operated in closed loop, the interaction of the control system is also explicitly considered. An optimization problem is formulated to search for the optimal tradeoff between fault observability and closed-loop performance. This optimization problem is solved in the frequency domain by using approximate closed-loop transfer function models, and in the time domain using a simulation-based approach. The time-domain optimization is applied to the TEP to solve for the optimal tuning parameters of the controllers that minimize an economic cost of the process.
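A one-dimensional tabular CUSUM chart, the building block of the monitoring scheme described above, can be sketched as follows. The slack k and threshold h values are illustrative; the thesis applies cumulative sums to multivariate PCA statistics rather than to a single raw signal.

```python
def cusum_alarm(signal, target, k, h):
    """Two-sided tabular CUSUM; return the index of the first alarm, or -1.

    k is the slack (roughly half the mean shift to be detected) and h the
    decision threshold: small persistent deviations accumulate in s_hi or
    s_lo until one of them crosses h.
    """
    s_hi = s_lo = 0.0
    for i, x in enumerate(signal):
        s_hi = max(0.0, s_hi + (x - target) - k)
        s_lo = max(0.0, s_lo + (target - x) - k)
        if s_hi > h or s_lo > h:
            return i
    return -1

# 100 in-control samples at the target, then a persistent fault of +2 units
signal = [0.0] * 100 + [2.0] * 50
print(cusum_alarm(signal, target=0.0, k=0.25, h=5.0))  # 102: alarm 3 samples after onset
```

The detection delay seen here (the alarm fires a few samples after fault onset) is exactly the kind of delay whose economic cost the second part of the project quantifies.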
|
230 |
A Particle Filtering-based Framework for On-line Fault Diagnosis and Failure Prognosis / Orchard, Marcos Eduardo, 08 November 2007 (has links)
This thesis presents an on-line particle-filtering-based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the definition of a set of fault indicators, which are appropriate for monitoring purposes, the availability of real-time process measurements, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions.
The incorporation of particle-filtering (PF) techniques in the proposed scheme not only allows for the implementation of real time algorithms, but also provides a solid theoretical framework to handle the problem of fault detection and isolation (FDI), fault identification, and failure prognosis. Founded on the concept of sequential importance sampling (SIS) and Bayesian theory, PF approximates the conditional state probability distribution by a swarm of points called particles and a set of weights representing discrete probability masses. Particles can be easily generated and recursively updated in real time, given a nonlinear process dynamic model and a measurement model that relates the states of the system with the observed fault indicators.
Two autonomous modules have been considered in this research. On one hand, the fault diagnosis module uses a hybrid state-space model of the plant and a particle-filtering algorithm to (1) calculate the probability of any given fault condition in real time, (2) estimate the probability density function (pdf) of the continuous-valued states in the monitored system, and (3) provide information about type I and type II detection errors, as well as other critical statistics. Among the advantages offered by this diagnosis approach is the fact that the pdf state estimate may be used as the initial condition in prognostic modules after a particular fault mode is isolated, hence allowing swift transitions between FDI and prognostic routines.
The failure prognosis module, on the other hand, computes (in real time) the pdf of the remaining useful life (RUL) of the faulty subsystem using a particle-filtering-based algorithm. This algorithm consecutively updates the current state estimate for a nonlinear state-space model (with unknown time-varying parameters) and predicts the evolution in time of the fault indicator pdf. The outcome of the prognosis module provides information about the precision and accuracy of long-term predictions, RUL expectations, 95% confidence intervals, and other hypothesis tests for the failure condition under study. Finally, inner and outer correction loops (learning schemes) are used to periodically improve the parameters that characterize the performance of FDI and/or prognosis algorithms. Illustrative theoretical examples and data from a seeded fault test for a UH-60 planetary carrier plate are used to validate all proposed approaches.
Contributions of this research include: (1) the establishment of a general methodology for real time FDI and failure prognosis in nonlinear processes with unknown model parameters, (2) the definition of appropriate procedures to generate dependable statistics about fault conditions, and (3) a description of specific ways to utilize information from real time measurements to improve the precision and accuracy of the predictions for the state probability density function (pdf).
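The sequential importance sampling and resampling loop at the heart of this framework can be sketched as a bootstrap particle filter. The random-walk process model, noise levels, and fault-indicator trajectory below are illustrative stand-ins, not the thesis's hybrid state-space models:

```python
import math
import random

def bootstrap_filter(observations, n=500, q_sd=0.5, r_sd=0.5, seed=1):
    """Bootstrap (SIR) particle filter for x_t = x_{t-1} + w, y_t = x_t + v."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]  # initial swarm
    estimates = []
    for y in observations:
        # 1) propagate each particle through the process model
        particles = [x + rng.gauss(0.0, q_sd) for x in particles]
        # 2) weight by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((y - x) / r_sd) ** 2) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # 3) state estimate = weighted mean, then multinomial resampling
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

# A fault indicator drifting upward toward a failure threshold of 5
obs = [0.2 * t for t in range(26)]  # ramps from 0 to 5
est = bootstrap_filter(obs)
print(abs(est[-1] - 5.0) < 0.5)  # True: the particle swarm tracks the drift
```

Prognosis then amounts to continuing step (1) without step (2), propagating the weighted swarm forward in time until it crosses the failure threshold, which yields the RUL distribution.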
|