21

Investigations on CPI Centric Worst Case Execution Time Analysis

Ravindar, Archana January 2013 (has links) (PDF)
Estimating a program's worst-case execution time (WCET) is an important problem in the domain of deadline-centric real-time and embedded systems. If the WCET of a program is found to exceed the deadline, the program is either recoded or the target architecture is modified to meet the deadline. Predominantly, there exist three broad approaches to estimating WCET: static WCET analysis, hybrid measurement-based analysis, and statistical WCET analysis. Though measurement-based analyzers benefit from knowledge of run-time behavior, the amount of instrumentation required remains a concern. This thesis proposes a CPI-centric WCET analyzer that estimates WCET as the product of the worst-case instruction count (IC), estimated using static analysis, and the worst-case cycles per instruction (CPI), computed as a function of measured CPI. In many programs, IC and CPI values are observed to be correlated, and five different kinds of correlation are found. This correlation allows the WCET estimate to be tightened from the product of worst-case IC and worst-case CPI to the product of worst-case IC and the corresponding CPI. A prime advantage of viewing time in terms of CPI is that it lets us exploit program phase behavior. In many programs, CPI varies in phases during execution: within each phase the variation is homogeneous and lies within a few percent of the mean, and the coefficient of variation of CPI across phases is much greater than within a phase. Using this observation, we estimate program WCET in terms of its phases. Because of the way CPI varies within a phase in such programs, a simple probabilistic inequality, the Chebyshev inequality, can be used to bound CPI within a desired probability. In some programs that execute many paths depending on if-conditions, CPI variation is observed to be high. The thesis proposes a PC signature, a low-cost way of profiling path information, which is used to isolate points of high CPI variation and to divide a phase into smaller sub-phases of lower CPI variation. Applying the Chebyshev inequality to the sub-phases results in much tighter bounds. A phase can also be divided into smaller sub-phases based on the allowable variance of CPI within a sub-phase. The proposed technique is implemented on simulators and on a native platform. Other advantages of phases in the context of timing analysis are also presented, including parallelized WCET analysis and estimation of the remaining worst-case execution time for a particular program run.
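As a rough illustration of the bounding step described in this abstract, the sketch below applies the two-sided Chebyshev inequality to measured per-phase CPI samples and combines each bound with a statically derived worst-case instruction count. The function names, the per-phase data, and the simple sum over phases are all illustrative assumptions, not the thesis's analyzer.

```python
import math
import statistics

def chebyshev_cpi_bound(cpi_samples, exceed_prob):
    """Upper bound on CPI that holds with probability >= 1 - exceed_prob,
    using Chebyshev's inequality P(|X - mu| >= k*sigma) <= 1/k**2."""
    mu = statistics.mean(cpi_samples)
    sigma = statistics.stdev(cpi_samples)
    k = 1.0 / math.sqrt(exceed_prob)   # pick k so that 1/k**2 == exceed_prob
    return mu + k * sigma

def probabilistic_wcet(phase_cpi_samples, phase_worst_case_ic, exceed_prob=1e-3):
    """WCET estimate = sum over phases of (worst-case IC) * (probabilistic CPI bound)."""
    return sum(ic * chebyshev_cpi_bound(cpis, exceed_prob)
               for ic, cpis in zip(phase_worst_case_ic, phase_cpi_samples))

# Two illustrative phases: measured CPI samples plus statically bounded instruction counts.
cpi_samples = [[1.10, 1.12, 1.09, 1.11], [2.30, 2.35, 2.28, 2.32]]
worst_case_ic = [5_000_000, 1_200_000]
print(probabilistic_wcet(cpi_samples, worst_case_ic))  # estimated cycles
```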
22

Étude de l'application de la théorie des valeurs extrêmes pour l'estimation fiable et robuste du pire temps d'exécution probabiliste / Study of the extreme value theory applicability for reliable and robust probabilistic worst-case execution time estimates

Guet, Fabrice 13 December 2017 (has links)
Dans les systèmes informatiques temps réel, les tâches logicielles sont contraintes par le temps. Pour garantir la sûreté du système critique contrôlé par le système temps réel, il est primordial d'estimer de manière sûre le pire temps d'exécution de chaque tâche. Les performances des processeurs actuels du commerce permettent de réduire en moyenne le temps d'exécution des tâches, mais la complexité des composants d'optimisation de la plateforme rend difficile l'estimation du pire temps d'exécution. Il existe différentes approches d'estimation du pire temps d'exécution, souvent ségréguées et difficilement généralisables ou au prix de modèles coûteux. Les approches probabilistes basées mesures existantes sont vues comme étant rapides et simples à mettre en œuvre, mais souffrent d'un manque de systématisme et de confiance dans les estimations qu'elles fournissent. Les travaux de cette thèse étudient les conditions d'application de la théorie des valeurs extrêmes à une suite de mesures de temps d'exécution pour l'estimation du pire temps d'exécution probabiliste, et ont été implémentés dans l'outil diagxtrm. Les capacités et les limites de l'outil ont été étudiées grâce à diverses suites de mesures issues de systèmes temps réel différents. Enfin, des méthodes sont proposées pour déterminer les conditions de mesure propices à l'application de la théorie des valeurs extrêmes et donner davantage de confiance dans les estimations. / Software tasks are time-constrained in real-time computing systems. To ensure the safety of the critical system controlled by the real-time system, it is of paramount importance to safely estimate the worst-case execution time of each task. The optimisation components of modern commercial processors reduce task execution times on average, but at the cost of a worst-case execution time that is hard to determine. Many approaches for estimating a task's worst-case execution time exist, but they are usually segregated and hard to generalise, or they rely on very complex models. Measurement-based probabilistic timing analysis approaches are regarded as easy and fast to apply, but they suffer from a lack of systematic treatment and of confidence in their estimates. This thesis studies the conditions under which extreme value theory can be applied to a sequence of execution-time measurements to estimate the probabilistic worst-case execution time, leading to the development of the diagxtrm tool. The capabilities and limits of the tool are examined using a large panel of measurement sequences from different real-time systems. Finally, methods are provided for determining the measurement conditions that favour the application of the theory and give more confidence in the estimates.
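As a hedged illustration of this measurement-based approach (not the diagxtrm tool itself), the sketch below fits a generalised extreme value distribution to block maxima of execution-time measurements with SciPy and reads off the time exceeded with a target probability. The block size, exceedance probability, and synthetic data are assumptions.

```python
import numpy as np
from scipy.stats import genextreme

def pwcet_block_maxima(exec_times, block_size=50, exceed_prob=1e-9):
    """Fit a GEV distribution to per-block maxima of execution-time measurements
    and return the execution time exceeded with probability exceed_prob."""
    times = np.asarray(exec_times, dtype=float)
    n_blocks = len(times) // block_size
    maxima = times[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
    shape, loc, scale = genextreme.fit(maxima)          # maximum-likelihood GEV fit
    return genextreme.isf(exceed_prob, shape, loc=loc, scale=scale)

# Synthetic measurements stand in for a real trace; applying the theory for real also
# requires checking its hypotheses (stationarity, independence), as the thesis stresses.
rng = np.random.default_rng(0)
measurements = rng.gamma(shape=9.0, scale=120.0, size=5000)  # fake execution times
print(pwcet_block_maxima(measurements))
```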
23

Selective software-implemented hardware fault tolerance techniques to detect soft errors in processors with reduced overhead / Técnicas seletivas de tolerância a falhas em software com custo reduzido para detectar erros causados por falhas transientes em processadores

Chielle, Eduardo January 2016 (has links)
A utilização de técnicas de tolerância a falhas em software é uma forma de baixo custo para proteger processadores contra soft errors. Contudo, elas causam aumento no tempo de execução e utilização de memória. Em consequência disso, o consumo de energia também aumenta. Sistemas que operam com restrição de tempo ou energia podem ficar impossibilitados de utilizar tais técnicas. Por esse motivo, este trabalho propõe técnicas de tolerância a falhas em software com custos no desempenho e memória reduzidos e cobertura de falhas similar a técnicas presentes na literatura. Como detecção é menos custosa que correção, este trabalho foca em técnicas de detecção. Primeiramente, um conjunto de técnicas de dados baseadas em regras de generalização, chamada VAR, é apresentada. As técnicas são baseadas nesse conjunto generalizado de regras para permitir uma investigação exaustiva, em termos de confiabilidade e custos, de diferentes variações de técnicas. As regras definem como a técnica duplica o código e insere verificadores. Cada técnica usa um diferente conjunto de regras. Então, uma técnica de controle, chamada SETA, é introduzida. Comparando SETA com uma técnica estado-da-arte, SETA é 11.0% mais rápida e ocupa 10.3% menos posições de memória. As técnicas de dados mais promissoras são combinadas com a técnica de controle com o objetivo de proteger tanto os dados quanto o fluxo de controle da aplicação alvo. Para reduzir ainda mais os custos, métodos para aplicar seletivamente as técnicas propostas foram desenvolvidos. Para técnica de dados, em vez de proteger todos os registradores, somente um conjunto de registradores selecionados é protegido. O conjunto é selecionado com base em uma métrica que analisa o código e classifica os registradores por sua criticalidade. Para técnicas de controle, há duas abordagens: (1) remover verificadores de blocos básicos, e (2) seletivamente proteger blocos básicos. As técnicas e suas versões seletivas são avaliadas em termos de tempo de execução, tamanho do código, cobertura de falhas, e o Mean Work to Failure (MWTF), o qual é uma métrica que mede o compromisso entre cobertura de falhas e tempo de execução. Resultados mostram redução dos custos sem diminuição da cobertura de falhas, e para uma pequena redução na cobertura de falhas foi possível significativamente reduzir os custos. Por fim, uma vez que a avaliação de todas as possíveis combinações utilizando métodos seletivos toma muito tempo, este trabalho utiliza um método para extrapolar os resultados obtidos por simulação com o objetivo de encontrar os melhores parâmetros para a proteção seletiva e combinada de técnicas de dados e de controle que melhorem o compromisso entre confiabilidade e custos. / Software-based fault tolerance techniques are a low-cost way to protect processors against soft errors. However, they introduce significant overheads in execution time and code size, which consequently increase the energy consumption. Systems operating under time or energy restrictions may not be able to make use of these techniques. For this reason, this work proposes software-based fault tolerance techniques with lower overheads and fault coverage similar to that of state-of-the-art software techniques. Since detection is less costly than correction, the work focuses on software-based detection techniques. Firstly, a set of data-flow techniques called VAR is proposed. The techniques are based on general building rules that allow an exhaustive assessment, in terms of reliability and overheads, of different technique variations. The rules define how a technique duplicates the code and inserts checkers; each technique uses a different set of rules. Then, a control-flow technique called SETA (Software-only Error-detection Technique using Assertions) is introduced. Compared with a state-of-the-art technique, SETA is 11.0% faster and occupies 10.3% fewer memory positions. The most promising data-flow techniques are combined with the control-flow technique in order to protect both the data flow and the control flow of the target application. To reduce the overheads even further, methods to selectively apply the proposed software techniques have been developed. For the data-flow techniques, instead of protecting all registers, only a set of selected registers is protected; the set is selected based on a metric that analyzes the code and ranks the registers by their criticality. For the control-flow technique, two approaches are taken: (1) removing checkers from basic blocks, where all basic blocks are protected by SETA but only selected basic blocks have checkers inserted, and (2) selectively protecting basic blocks, where only a set of basic blocks is protected. The techniques and their selective versions are evaluated in terms of execution time, code size, fault coverage, and Mean Work To Failure (MWTF), a metric that measures the trade-off between fault coverage and execution time. Results show that it was possible to reduce the overheads without affecting the fault coverage, and that accepting a small reduction in fault coverage made it possible to reduce the overheads significantly. Lastly, since evaluating all the possible combinations for the selective hardening of every application takes too much time, this work uses a method to extrapolate the results obtained by simulation in order to find the parameters for the selective combination of data-flow and control-flow techniques that are most likely to improve the trade-off between reliability and overheads.
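The selective data-flow hardening idea lends itself to a small sketch: rank registers by a criticality metric and protect only the top few. The metric below (lifetime multiplied by read count) and all the numbers are hypothetical stand-ins, not the metric actually used in the thesis.

```python
def rank_registers(register_stats):
    """register_stats maps a register name to its profiled lifetime (cycles alive)
    and read count; more-critical registers come first."""
    return sorted(register_stats,
                  key=lambda r: register_stats[r]["lifetime"] * register_stats[r]["reads"],
                  reverse=True)

def select_protected(register_stats, budget):
    """Selective hardening: duplicate and check only the `budget` most critical registers."""
    return set(rank_registers(register_stats)[:budget])

stats = {"r0": {"lifetime": 900, "reads": 40},
         "r1": {"lifetime": 120, "reads": 55},
         "r2": {"lifetime": 300, "reads": 10}}
print(select_protected(stats, budget=2))  # {'r0', 'r1'} under this toy metric
```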
24

Aspect Analyzer: Ett verktyg för automatiserad exekveringstidsanalys av komponenter och aspekter / Aspect Analyzer: A Tool for Automated WCET Analysis of Aspects and Components

Uhlin, Pernilla January 2002 (has links)
The increasing complexity in the development of configurable real-time systems has given rise to new software engineering principles, such as aspect-oriented software development and component-based software development. These techniques allow encapsulation of the system's crosscutting concerns and increase the modularity of the software. The properties of a component that influence the system's performance or semantics are specified separately in entities called aspects, while the basic functionality of the property still remains in the component.

When building a real-time system, different sets of aspects and components can be combined, resulting in different configurations of the system. The temporal behavior of the system changes with the configuration, so a way to ensure the predictability of the system is needed.

This thesis presents a tool for aspect-level worst-case execution time analysis, which gives a priori information about the temporal behavior of the system before aspects are composed with components.
25

Verifikation av verktyget aspect analyzer / Aspect analyzer tool verification

Bodin, Joakim January 2003 (has links)
Rising complexity in the development of real-time systems has made it crucial to have reusable components and a more flexible way of configuring these components into a coherent system. Aspect-oriented system development (AOSD) is a technique that allows one to put a system's crosscutting concerns into "modules" called aspects. By applying AOSD in real-time and embedded system development, one can expect reductions in the complexity of system design and development.

A problem with AOSD in its current form is that it does not support predictability in the time domain. Hence, in order to use AOSD in real-time system development, we need ways of analyzing the temporal behavior of aspects, components, and the resulting system (made by weaving aspects and components). Aspect analyzer is a tool that computes the worst-case execution time (WCET) for a set of components and aspects, thus enabling predictability in the time domain for aspect-oriented real-time software.

A limitation of the aspect analyzer, until now, was that no verification had been made of whether it produces WCET values close to the measured WCET, or to the WCET computed with another analysis technique, of an aspect-oriented real-time system. Therefore, in this thesis we verify the correctness of the aspect analyzer using a number of different methods for WCET analysis. These investigations of the correctness of its output gave confidence in the automated WCET analysis. In addition, performing this verification identified the steps necessary to compute the WCET of a piece of a program when using a third-party tool, which makes it possible to write accurate input files for the aspect analyzer.
26

Towards Aspectual Component-Based Real-Time System Development

Tešanović, Aleksandra January 2003 (has links)
The increasing complexity of real-time systems and the demands for enabling their configurability and tailorability are strong motivations for applying new software engineering principles such as aspect-oriented and component-based software development. The integration of these two techniques into real-time systems development would enable: (i) efficient system configuration from the components in the component library based on the system requirements, (ii) easy tailoring of components and/or a system for a specific application by changing the behavior (code) of a component through aspect weaving, and (iii) enhanced flexibility of real-time and embedded software through the notion of system configurability and component tailorability.

In this thesis we focus on applying aspect-oriented and component-based software development to real-time system development. We propose a novel concept of aspectual component-based real-time system development (ACCORD). ACCORD introduces the following into real-time system development: (i) a design method that assumes the decomposition of the real-time system into a set of components and a set of aspects, (ii) a real-time component model denoted RTCOM that supports aspect weaving while enforcing information hiding, (iii) a method and a tool for performing worst-case execution time analysis of different configurations of aspects and components, and (iv) a new approach to modelling real-time policies as aspects.

We present a case study of the development of a configurable real-time database system, called COMET, using ACCORD principles. The COMET example shows that applying ACCORD has a positive impact on real-time system development by providing efficient configuration of the real-time system. Thus, it could be a way to improve the reusability and flexibility of real-time software and the modularization of crosscutting concerns.

In connection with the development of ACCORD, we identify criteria that a design method for component-based real-time systems needs to address. The criteria include a well-defined component model for real-time systems, aspect separation, support for system configuration, and analysis of the composed real-time system. Using the identified set of criteria, we provide an evaluation of ACCORD. In comparison with other approaches, ACCORD provides a distinct classification of crosscutting concerns in the real-time domain into different types of aspects, and provides a real-time component model that supports weaving of aspects into the code of a component, as well as a tool for temporal analysis of the woven system. / Report code: LiU-TEK-LIC-2003:23.
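To make the aspect-level WCET idea concrete, here is a toy composition: a configuration's WCET is taken as each component's base WCET plus the WCET of the advice woven at its join points. The additive model, the names, and the cycle counts are assumptions for illustration, not ACCORD's actual analysis.

```python
def configuration_wcet(components, aspect_advice_wcet, weaving):
    """components: {component: base WCET in cycles};
    aspect_advice_wcet: {aspect: WCET of its advice per join point};
    weaving: {component: [(aspect, number of join points), ...]}."""
    total = 0
    for component, base in components.items():
        woven = sum(aspect_advice_wcet[aspect] * joins
                    for aspect, joins in weaving.get(component, []))
        total += base + woven
    return total

components = {"buffer_manager": 1200, "scheduler": 800}          # illustrative cycles
aspect_advice_wcet = {"logging": 40, "concurrency_control": 150}
weaving = {"buffer_manager": [("concurrency_control", 3)],
           "scheduler": [("logging", 2)]}
print(configuration_wcet(components, aspect_advice_wcet, weaving))  # 2530
```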
27

Developing Reusable and Reconfigurable Real-Time Software using Aspects and Components

Tešanović, Aleksandra January 2006 (has links)
Our main focus in this thesis is on providing guidelines, methods, and tools for design, configuration, and analysis of configurable and reusable real-time software, developed using a combination of aspect-oriented and component-based software development. Specifically, we define a reconfigurable real-time component model (RTCOM) that describes how a real-time component, supporting aspects and enforcing information hiding, could efficiently be designed and implemented. In this context, we outline design guidelines for development of real-time systems using components and aspects, thereby facilitating static configuration of the system, which is preferred for hard real-time systems. For soft real-time systems with high availability requirements we provide a method for dynamic system reconfiguration that is especially suited for resource-constrained real-time systems and it ensures that components and aspects can be added, removed, or exchanged in a system at run-time. Satisfaction of real-time constraints is essential in the real-time domain and, for real-time systems built of aspects and components, analysis is ensured by: (i) a method for aspect-level worst-case execution time analysis; (ii) a method for formal verification of temporal properties of reconfigurable real-time components; and (iii) a method for maintaining quality of service, i.e., the specified level of performance, during normal system operation and after dynamic reconfiguration. We have implemented a tool set with which the designer can efficiently configure a real-time system to meet functional requirements and analyze it to ensure that non-functional requirements in terms of temporal constraints and available memory are satisfied. In this thesis we present a proof-of-concept implementation of a configurable embedded real-time database, called COMET. The implementation illustrates how our methods and tools can be applied, and demonstrates that the proposed solutions have a positive impact in facilitating efficient development of families of real-time systems.
29

Commande robuste avec relâchement des contraintes temps-réel / Robust control under slackened real-time constraints

Andrianiaina, Patrick 26 October 2012 (has links)
Le processus de développement des systèmes avioniques suit des réglementations de sûreté de fonctionnement très strictes, incluant l'analyse du déterminisme et de la prédictibilité temporelle des systèmes. L'approche est basée sur la séparation des étapes de conception et d'implémentation. Une des plus grandes difficultés dans l'approche actuelle se trouve dans la détermination du WCET, qui est nécessaire pour prouver la satisfaction des contraintes de temps-réel dur du système. Dans cette thèse, une méthodologie de relâchement de contraintes temps-réels pour les systèmes de commandes digital est proposée. L'objectif est de réduire le conservatisme des approches traditionnelles basées sur le pire temps d'exécution, tout en préservant la stabilité et les performances de commandes. L'approche a été appliquée au système de commande de tangage d'un avion, ce qui a permis de montrer que le relâchement des contraintes temps réels améliore l'utilisation de la puissance de calcul disponible tout en préservant la stabilité et la qualité de commande du système. / The development process of critical avionics products is governed by strict safety regulations, which include requirements on the determinism and predictability of the systems' timing. The overall approach is based on a separation of concerns between control design and implementation. One of the toughest challenges in the current approach is the determination of the WCET, which is needed to correctly size the system. In this thesis, a weakened implementation scheme for real-time feedback controllers is proposed to reduce the conservatism of traditional worst-case considerations while preserving stability and control performance. The methodology is applied to the pitch control of an aircraft, showing that relaxing the real-time constraints saves computing power while preserving the system's stability and quality of control.
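As a loose sketch of what relaxing per-period timing constraints can look like in a control loop (a simplification; the thesis's scheme and its stability analysis are far more involved), the code below keeps the loop periodic but falls back to the previous command whenever the control computation finishes past its budget. All names and parameters are illustrative.

```python
import time

def run_control_loop(read_sensors, compute_command, apply_command,
                     period_s, budget_s, steps):
    """Periodic loop that tolerates occasional computation overruns: a command
    finished within its budget is applied; a late one is simply discarded and
    the previous command is held (in a real system the overrunning job would
    typically be aborted rather than run to completion)."""
    u_prev = 0.0
    for _ in range(steps):
        t_start = time.monotonic()
        y = read_sensors()
        u = compute_command(y)
        if time.monotonic() - t_start <= budget_s:
            u_prev = u                      # fresh command, computed in time
        apply_command(u_prev)               # late result: hold the old command
        remaining = period_s - (time.monotonic() - t_start)
        if remaining > 0:
            time.sleep(remaining)
```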
