  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Towards Predictable Real-Time Performance on Multi-Core Platforms

Kim, Hyoseung 01 June 2016 (has links)
Cyber-physical systems (CPS) integrate sensing, computing, communication and actuation capabilities to monitor and control operations in the physical environment. A key requirement of such systems is the need to provide predictable real-time performance: the timing correctness of the system should be analyzable at design time with a quantitative metric and guaranteed at runtime with high assurance. This requirement of predictability is particularly important for safety-critical domains such as automobiles, aerospace, defense, manufacturing and medical devices. The work in this dissertation focuses on the challenges arising from the use of modern multi-core platforms in CPS. Even as of today, multi-core platforms are rarely used in safety-critical applications primarily due to the temporal interference caused by contention on various resources shared among processor cores, such as caches, memory buses, and I/O devices. Such interference is hard to predict and can significantly increase task execution time, e.g., up to 12× on commodity quad-core platforms. To address the problem of ensuring timing predictability on multi-core platforms, we develop novel analytical and systems techniques in this dissertation. Our proposed techniques theoretically bound the temporal interference that tasks may suffer when accessing shared resources. Our techniques also involve software primitives and algorithms for real-time operating systems and hypervisors, which significantly reduce the degree of temporal interference. Specifically, we tackle the issues of cache and memory contention, locking and synchronization, interrupt handling, and access control for computational accelerators such as general-purpose graphics processing units (GPGPUs), all of which are crucial to achieving predictable real-time performance on a modern multi-core platform.
Our solutions are readily applicable to commodity multi-core platforms, and can be used not only for developing new systems but also migrating existing applications from single-core to multi-core platforms.
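The idea of theoretically bounding interference can be illustrated with a minimal additive model. This is a hedged sketch for illustration only: the function name, constants, and the per-request delay model are assumptions, not the dissertation's actual analysis.

```python
# Hypothetical sketch: an additive worst-case bound on a task's execution
# time under memory contention. Each of the task's DRAM requests is assumed
# to be delayed by at most `per_request_delay` by every rival core.

def bounded_execution_time(wcet_alone, mem_requests, cores, per_request_delay):
    """Upper-bound execution time when (cores - 1) other cores contend."""
    interference = mem_requests * (cores - 1) * per_request_delay
    return wcet_alone + interference

# Illustrative numbers: a task with 10 ms solo WCET and 50,000 DRAM
# requests on a quad-core, 100 ns worst-case delay per request per rival.
bound = bounded_execution_time(10e-3, 50_000, 4, 100e-9)  # ≈ 0.025 s
```

Real analyses are considerably more refined (bank-level parallelism, request reordering, cache partitioning); the point here is only that interference enters the bound as an explicit, quantifiable term rather than being left unpredictable.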
22

SYSTEMATIC LITERATURE REVIEW OF SAFETY-RELATED CHALLENGES FOR AUTONOMOUS SYSTEMS IN SAFETY-CRITICAL APPLICATIONS

Ojdanic, Milos January 2019 (has links)
An increased focus on the development of autonomous safety-critical systems requires more attention to ensuring the safety of humans and the environment. The main objective of this thesis is to explore the state of the art and to identify the safety-related challenges being addressed for using autonomy in safety-critical systems. In particular, the thesis explores the nature of these challenges, the different autonomy levels they address and the types of safety measures proposed as solutions. Above all, we focus on the safety measures by degree of adaptiveness, time of being active and their ability of decision making. Collection of this information is performed by conducting a Systematic Literature Review of publications from the past 9 years. The results showed an increase in publications addressing challenges related to the use of autonomy in safety-critical systems. We managed to identify four high-level classes of safety challenges. The results also indicate that the focus of research was on finding solutions for challenges related to fully autonomous systems, as well as solutions that are independent of the level of autonomy. Furthermore, considering the number of publications, results show that non-learning solutions addressing the identified safety challenges prevail over learning ones, active over passive solutions, and decisive over supportive solutions.
23

Analysing and supporting the reliability decision-making process in computing systems with a reliability evaluation framework

Kooli, Maha 01 December 2016 (has links)
Reliability has become an important design aspect for computing systems due to aggressive technology miniaturization and uninterrupted operation, which introduce a large set of failure sources for hardware components. The hardware system can be affected by faults caused by physical manufacturing defects or environmental perturbations such as electromagnetic interference, external radiation, or high-energy neutrons from cosmic rays and alpha particles. For embedded systems and systems used in safety-critical fields such as avionics, aerospace and transportation, the presence of these faults can damage their components and lead to catastrophic failures. Investigating new methods to evaluate system reliability helps designers understand the effects of faults on the system, and thus develop reliable and dependable products. Depending on the design phase of the system, the development of reliability evaluation methods can save design costs and effort, and will positively impact product time-to-market. The main objective of this thesis is to develop new techniques to evaluate the overall reliability of a complex computing system running software. The evaluation targets faults leading to soft errors. These faults can propagate through the different structures composing the full system. They can be masked during this propagation either at the technological or at the architectural level. When a fault reaches the software layer of the system, it can corrupt data, instructions or the control flow. These errors may impact correct software execution by producing erroneous results, or may prevent the execution of the application, leading to abnormal termination or an application hang. In this thesis, the reliability of the different software components is analyzed at different levels of the system (depending on the design phase), emphasizing the role that the interaction between hardware and software plays in the overall system. Then, the reliability of the system is evaluated via a flexible, fast, and accurate evaluation framework. Finally, the reliability decision-making process in computing systems is comprehensively supported with the developed framework (methodology and tools).
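The style of evaluation described above can be sketched with a toy software-level fault-injection experiment. The workload and the single-bit fault model below are illustrative assumptions, not the thesis's actual framework.

```python
# Toy fault-injection sketch: flip one bit of a program's data to model a
# soft error, then classify the outcome against a fault-free (golden) run.

def inject_bit_flip(value, bit):
    """Flip one bit of an integer to model a single-event upset."""
    return value ^ (1 << bit)

def classify(golden, observed):
    """Compare a faulty run against the golden result."""
    return "masked" if observed == golden else "silent data corruption"

golden = sum(range(100))               # fault-free result
corrupted = inject_bit_flip(99, 3)     # upset strikes the loop bound
observed = sum(range(corrupted))
outcome = classify(golden, observed)   # -> "silent data corruption"
```

A full framework would repeat such injections over many sites and cycles, and would also detect crashes and hangs as separate outcome classes; this sketch shows only the masked-versus-corrupted comparison.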
24

Reliability analysis of neural networks in FPGAs

Libano, Fabiano Pereira January 2018 (has links)
Neural networks are becoming an attractive solution for automating vehicles in the automotive, military, and aerospace markets. All of these applications are safety-critical and, thus, must have reliability as one of the main constraints. Thanks to their low cost, low power consumption, and flexibility, Field-Programmable Gate Arrays (FPGAs) are among the most promising devices for implementing neural networks. Unfortunately, FPGAs are also known to be susceptible to faults induced by ionizing particles. In this work, we evaluate the effects of radiation-induced errors in the outputs of two neural networks (Iris Flower and MNIST), implemented in SRAM-based FPGAs. In particular, through accelerated neutron beam experiments, we notice that radiation can induce errors that modify the output of the network with or without affecting the neural network's functionality. We call the former critical errors and the latter tolerable errors. We explore aspects of the neural networks that can impact both performance and reliability, such as levels of data precision and different methods of implementation for some types of layers. Through exhaustive fault-injection campaigns, we identify the portions of the Iris Flower and MNIST implementations on FPGAs that are more likely, once corrupted, to generate a critical or a tolerable error. Based on this analysis, we propose Algorithm-Based Fault Tolerance (ABFT) strategies for certain layers in the networks, as well as a selective hardening strategy that triplicates only the most vulnerable layers of the neural network. We validate these hardening approaches with neutron radiation testing, and see that our selective hardening solution is able to mask 68% of the faults in Iris Flower at a 45% overhead, and 40% of the faults in MNIST at an 8% overhead.
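The selective triplication idea can be illustrated with a minimal majority-voting sketch. The replica outputs and the injected fault below are invented for illustration; the thesis triplicates FPGA layer implementations, not Python lists.

```python
# Minimal TMR (triple modular redundancy) sketch: a vulnerable layer is
# evaluated three times and the outputs are voted elementwise, so a fault
# corrupting one replica is masked by the other two.

def tmr_vote(a, b, c):
    """Elementwise majority vote over three replica outputs."""
    return [x if x == y or x == z else y for x, y, z in zip(a, b, c)]

good = [0.1, 0.7, 0.2]              # fault-free layer output
bad  = [0.1, 0.9, 0.2]              # one value corrupted by an upset
voted = tmr_vote(good, bad, good)   # -> [0.1, 0.7, 0.2]
```

Triplicating only the most vulnerable layers, as the abstract describes, trades full TMR's roughly 3× area cost for a much smaller overhead while still masking most critical errors.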
25

Timing Predictability in Future Multi-Core Avionics Systems

Löfwenmark, Andreas January 2017 (has links)
With more functionality added to safety-critical avionics systems, new platforms are required to offer the computational capacity needed. Multi-core platforms offer a potential that is now being explored, but they pose significant challenges with respect to predictability due to shared resources (such as memory) being accessed from several cores in parallel. Multi-core processors also suffer from higher sensitivity to permanent and transient faults due to shrinking transistor sizes. This thesis addresses several of these challenges. First, we review major contributions that assess the impact of fault tolerance on the worst-case execution time of processes running on a multi-core platform, in particular works that evaluate the timing effects using fault-injection methods. We conclude that few works address the intricate timing effects that appear when inter-core interference due to simultaneous accesses to shared resources is combined with fault tolerance techniques. We assess the applicability of the methods to COTS multi-core processors used in avionics, and identify dark spots on the research map of the joint problem of hardware reliability and timing predictability for multi-core avionics systems. Next, we argue that the memory requests issued by the real-time operating system (RTOS) must be considered in resource-monitoring systems to ensure proper execution on all cores. We also adapt and extend an existing method for worst-case response time analysis to fulfill the specific requirements of avionics systems. We relax the requirement of private memory banks to also allow cores to share memory banks.
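In its simplest single-core form, the worst-case response time analysis mentioned above is the classic fixed-point iteration; here is a minimal sketch with invented task parameters (the thesis extends this style of analysis with memory-bank interference terms, which are not modeled here).

```python
import math

# Classic response-time iteration for fixed-priority scheduling:
# R = C + sum over higher-priority tasks j of ceil(R / T_j) * C_j.

def response_time(c, higher_prio):
    """Iterate to the least fixed point (assumes total utilization < 1)."""
    r = c
    while True:
        r_next = c + sum(math.ceil(r / t) * cj for (cj, t) in higher_prio)
        if r_next == r:
            return r
        r = r_next

# Task with C = 2 and higher-priority tasks (C=1, T=4) and (C=2, T=6):
r = response_time(2, [(1, 4), (2, 6)])  # -> 6
```

The task is schedulable if the fixed point is at or below its deadline; multi-core extensions add per-request memory delays to the same recurrence.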
26

Adaptable rule checking tools for HDL

Lord, Mikael January 2009 (has links)
Today’s electronics in aviation (avionics) are more complex than ever before. With higher requirements on safety and reliability and with new SoC (System on Chip) technology, the validation and verification of designs meet new challenges. In commercial and military aircraft there are many safety-critical systems that need to be reliable. The consequences of a failure of a safety-critical system onboard a civil or military aircraft are immeasurably more serious than a glitch or a bit-flip in a consumer appliance or Internet service delivery. If possible hazards are found early in the design process, a lot of work can be saved later on. Certain structures in the code are prone to produce glitchy logic and timing problems and should be avoided. This thesis strengthens Saab Avitronics' knowledge of adaptable rule checking tools for HDL with a market analysis of the tools available. Moreover, it evaluates two of the most suitable tools, and finally it describes some of the design issues that exist when coding safety-critical systems. It is concluded that the introduction of static rule checking tools will help the validator find dangerous constructs in the code. However, it will not be possible to fully automate rule checking for safety-critical systems, because of the high requirements on reliability.
27

Components, Safety Interfaces, and Compositional Analysis

Elmqvist, Jonas January 2010 (has links)
Component-based software development has emerged as a promising approach for developing complex software systems by composing smaller, independently developed components into larger component assemblies. This approach offers means to increase software reuse and achieve higher flexibility and shorter time-to-market through the use of off-the-shelf (COTS) components. However, the use of COTS components in safety-critical systems is largely unexplored.

This thesis addresses the problems appearing in component-based development of safety-critical systems. We aim at efficient reasoning about safety at the system level while adding or replacing components. For safety-related reasoning it does not suffice to consider functioning components in their intended environments; the behaviour of components in the presence of single or multiple faults must also be considered. Our contribution is a formal component model that includes the notion of a safety interface, which describes how the component behaves with respect to violation of a given system-level property in the presence of faults in its environment. This approach also provides a link between formal analysis of components in safety-critical systems and the traditional engineering processes supported by model-based development.

We also present an algorithm for deriving safety interfaces given a particular safety property and fault modes for the component. The safety interface is then used in a method proposed for compositional reasoning about component assemblies. Instead of reasoning about the effect of faults on the composed system, we suggest analysis of fault tolerance through pairwise analysis based on safety interfaces.

The framework is demonstrated as a proof of concept in two case studies: a hydraulic system from the aerospace industry and an adaptive cruise controller from the automotive industry. The case studies show that a more efficient system-level safety analysis can be performed using the safety interfaces.
28

A Stator Turn Fault Detection Method and a Fault-Tolerant Operating Strategy for Interior PM Synchronous Motor Drives in Safety-Critical Applications

Lee, Youngkook 02 July 2007 (has links)
A stator turn fault in a safety-critical drive application must be detected at its initial stage and imperatively requires an evasive action to prevent a serious accident caused by an abrupt interruption in the drive's operation. However, this is much more challenging in the case of interior permanent magnet synchronous motor (IPMSM) drives because of the presence of the permanent magnets, which cannot be turned off at will. This work tackles the problem of increasing the stator turn fault tolerance of IPMSM drives in safety-critical applications. This objective is achieved by an on-line turn fault detection method and a simple turn fault-tolerant operating strategy. In this work, it is shown that a stator turn fault in a current-controlled, voltage source inverter-driven machine leads to a reduced fundamental positive-sequence component of the voltage references, as compared to the machine without a turn fault, for a given torque reference and rotating speed. Based on this finding, a voltage reference-based turn fault detection method is proposed. In addition, it is also revealed that adjusting the level of the rotating magnetic flux in an appropriate manner can yield a significant reduction in the propagation speed of the fault, and possibly prevent the fault from spreading to the entire winding. This would be accomplished without any hardware modification. Based on this principle, a stator turn fault-tolerant operating strategy for IPMSM drives that maintains the drive's availability is proposed. To evaluate the proposed turn fault detection method and fault-tolerant operating strategy, an electrical model and a thermal model of an IPMSM with stator turn faults are derived. All the proposed models and methods are validated through simulations and experiments on a 10 kW IPMSM drive.
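The monitored quantity, the fundamental positive-sequence component of the voltage references, can be computed with the standard Fortescue (symmetrical-component) transform. The sketch below uses illustrative phasor values and is not the thesis's controller code.

```python
import cmath

# Positive-sequence component of three phasors via the Fortescue transform:
# V1 = (Va + a*Vb + a^2*Vc) / 3, where a is the 120-degree rotation operator.

def positive_sequence(va, vb, vc):
    a = cmath.exp(2j * cmath.pi / 3)
    return (va + a * vb + a * a * vc) / 3

# A balanced three-phase set of magnitude 100 has |V1| = 100; per the
# abstract, a turn fault would show up as a drop in this magnitude for the
# same torque reference and speed.
a = cmath.exp(2j * cmath.pi / 3)
v1 = positive_sequence(100, 100 * a**2, 100 * a)
```

A detector along these lines would compare |V1| of the voltage references against the value expected for the commanded operating point and flag a sustained deficit.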
29

Concurrent Online Testing for Many Core Systems-on-Chips

Lee, Jason Daniel December 2010 (has links)
Shrinking transistor sizes have introduced new challenges and opportunities for system-on-chip (SoC) design and reliability. Smaller transistors are more susceptible to early lifetime failure and electronic wear-out, greatly reducing their reliable lifetimes. However, smaller transistors will also allow SoCs to contain hundreds of processing cores and other infrastructure components, with the potential for increased reliability through massive structural redundancy. Concurrent online testing (COLT) can provide sufficient reliability and availability to systems with this redundancy. COLT manages the process of testing a subset of processing cores while the rest of the system remains operational. This can be considered a temporary, graceful degradation of system performance that increases reliability while maintaining availability. In this dissertation, techniques to assist COLT are proposed and analyzed. The techniques described in this dissertation focus on two major aspects of COLT feasibility: recovery time and test delivery costs. To reduce the time between failure and recovery, and thereby increase system availability, an anomaly-based test triggering unit (ATTU) is proposed to initiate COLT when anomalous network behavior is detected. Previous COLT techniques have relied on initiating tests periodically. However, determining the testing period is based on a device's mean time between failures (MTBF), and calculating MTBF is exceedingly difficult and imprecise. To address the test delivery costs associated with COLT, a distributed test vector storage (DTVS) technique is proposed to eliminate the dependency of test delivery costs on core location. Previous COLT techniques have relied on a single location to store test vectors, and it has been demonstrated that centralized storage of tests scales poorly as the number of cores per SoC grows.
Assuming that the SoC organizes its processing cores with a regular topology, DTVS uses an interleaving technique to optimally distribute the test vectors across the entire chip. DTVS is analyzed both empirically and analytically, and a testing protocol using DTVS is described. COLT is only feasible if the applications running concurrently are largely unaffected. The effect of COLT on application execution time is also measured in this dissertation, and an application-aware COLT protocol is proposed and analyzed. Application interference is greatly reduced through this technique.
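The interleaved-storage idea can be sketched with a simple round-robin placement. This is a generic illustration; DTVS's actual optimal interleaving depends on the chip's topology and is not reproduced here.

```python
# Toy sketch of distributing test vectors across cores: round-robin
# interleaving so that no single core stores (or must forward) all tests,
# keeping delivery cost roughly independent of core location.

def interleave_vectors(num_vectors, num_cores):
    """Assign vector i to core i mod num_cores."""
    placement = {core: [] for core in range(num_cores)}
    for v in range(num_vectors):
        placement[v % num_cores].append(v)
    return placement

# 10 vectors over a 4-core chip: cores hold 3, 3, 2, and 2 vectors.
placement = interleave_vectors(10, 4)   # core 0 -> [0, 4, 8], etc.
```

Contrast with centralized storage, where every test must travel from one node and worst-case delivery cost grows with the distance to the farthest core.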
30

Certification of Actel Fusion according to RTCA DO-254

Lundquist, Per January 2007 (has links)
In recent years the aviation industry has been moving towards the use of programmable logic devices in airborne safety-critical systems. To be able to certify the close-to-fail-safe functionality of these programmable devices (e.g. FPGAs) to the aviation authorities, the aviation industry uses a guideline for design assurance of airborne electronic hardware named RTCA DO-254. At the same time, the PLD industry is developing ever more complex embedded system-on-chip solutions, integrating more and more functionality on a single chip.

This thesis looks at the problems that arise when trying to certify system-on-chip solutions according to RTCA DO-254. Used as an example of an embedded FPGA, the Actel Fusion FPGA chip with integrated analog and digital functionality is tested according to the verification guidance. The results show that, for the time being, the examined embedded system-on-chip FPGAs cannot be verified for use in airborne safety-critical systems.
