About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Dynamic Beamforming Optimization for Anti-Jamming and Hardware Fault Recovery

Becker, Jonathan 16 May 2014 (has links)
In recent years there has been a rapid increase in the number of wireless devices for both commercial and defense applications. Such unprecedented demand has increased device cost and complexity and also strained the spectrum utilization of wireless communication systems. This thesis addresses these issues from an antenna-system perspective by developing new techniques to dynamically optimize adaptive beamforming arrays for improved anti-jamming and reliability. Available frequency spectrum is a scarce resource, and therefore increased interference will occur as the wireless spectrum saturates. To mitigate unintentional interference, or intentional interference from a jamming source, antenna arrays are used to focus electromagnetic energy on a signal of interest while simultaneously minimizing radio frequency energy in the directions of interfering signals. The reliability of such arrays, especially in commercial satellite and defense applications, can be addressed by hardware redundancy, but at the expense of increased volume and mass as well as component and design cost. This thesis proposes the development of new models and optimization algorithms to dynamically adapt beamforming arrays to mitigate interference and increase hardware reliability. The contributions of this research are as follows. First, analytical models are developed and experimental results show that small antenna arrays can thwart interference using dynamically applied stochastic algorithms. To our knowledge, this type of in-situ optimization, in which an algorithm dynamically optimizes a beamformer inside an anechoic chamber to thwart interference sources at unknown positions, had not been done before. Second, it is shown that these algorithms can recover from hardware failures and localized faults in the array. Experiments were performed with a proof-of-concept four-antenna array. This is the first hardware demonstration of an antenna array with live hardware fault recovery adapted by stochastic algorithms in an anechoic chamber. We also compare multiple stochastic algorithms on both anti-jamming and hardware fault recovery. Third, we show that stochastic algorithms can be used to continuously track and mitigate interfering signals that move continuously in an additive white Gaussian noise wireless channel.
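As a rough illustration of the dynamically applied stochastic optimization described above, the sketch below adapts the phase weights of a four-element array with a simple (1+1) evolution strategy, nulling a jammer while preserving gain toward the signal of interest. The array geometry, signal directions, fitness weighting, and step size are invented for the example; in the actual experiments the fitness would come from live receiver measurements, not an analytical array model.

```python
import numpy as np

N = 4                        # proof-of-concept array size from the thesis
d = 0.5                      # element spacing in wavelengths (assumed)
theta_soi = 0.0              # signal-of-interest direction (assumed)
theta_jam = np.deg2rad(40)   # jammer direction (unknown to the algorithm)

def steering(theta):
    """Array response of an ideal uniform linear array toward angle theta."""
    n = np.arange(N)
    return np.exp(2j * np.pi * d * n * np.sin(theta))

def fitness(phases):
    """Reward gain toward the SOI, penalize gain toward the jammer.
    In hardware this would be replaced by measured receiver power."""
    w = np.exp(1j * phases)              # phase-only beamforming weights
    return (abs(w.conj() @ steering(theta_soi))
            - 10.0 * abs(w.conj() @ steering(theta_jam)))

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, N)
best = fitness(phases)
for _ in range(2000):                        # in-situ adaptation loop
    cand = phases + rng.normal(0, 0.1, N)    # small random perturbation
    if (f := fitness(cand)) > best:          # keep improvements only
        phases, best = cand, f
```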
2

Testing the effects of violating component axioms in validation of complex aircraft systems

Kansal, Aparna 12 January 2015 (has links)
This thesis focuses on estimating faults in complex, large-scale integrated aircraft systems, especially where they interact with, and control, the aircraft dynamics. A general assumption in the reliability of such systems is that any component-level fault will be monitored, detected, and corrected by some fault management capability. However, a reliance on fault management assumes not only that it can detect and manage all faults, but also that it can do so in sufficient time to recover from any deviation in the aircraft dynamics and flight path. Testing for system-level effects is important to ensure better reliability of aircraft systems. However, with existing methods for validation of complex aircraft systems, it is difficult and impractical to set up a finite test suite covering the testing and integration of all the components of a complex system. The difficulty lies in the cost of modelling every aspect of every component, given the large number of test cases required for sufficient coverage. Just having a good simulator, or increasing the number of test cases, is not sufficient; it is also important to know which simulation runs to conduct. For this purpose, the thesis proposes simulating faults in the system through the violation of "axiomatic conditions" of the system components, which are conditions on the functioning of these components introduced during their development. The thesis studies the effect, on the aircraft dynamics, of simulating such faults when reference models of the components representing their key functions are integrated.
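A minimal sketch of what violating a component's "axiomatic condition" in simulation might look like, assuming a hypothetical actuator-command component whose axiom bounds its output; the control law, limit value, and all names are invented for illustration and are not from the thesis.

```python
# Hypothetical component with an output-range axiom fixed during development.
class ElevatorCommandModel:
    LIMIT = 0.35  # axiom: |command| never exceeds 0.35 rad (assumed value)

    def command(self, pitch_error):
        u = 2.0 * pitch_error                        # simplified control law
        return max(-self.LIMIT, min(self.LIMIT, u))  # axiom enforced

# The same reference model with the axiom deliberately violated for testing.
class AxiomViolatingModel(ElevatorCommandModel):
    def command(self, pitch_error):
        return 2.0 * pitch_error                     # limiter removed

nominal = ElevatorCommandModel().command(0.5)    # 0.35, axiom holds
faulty = AxiomViolatingModel().command(0.5)      # 1.0, axiom violated
print(nominal, faulty)
```

A test harness would integrate each variant with the aircraft-dynamics model and compare the resulting flight-path deviation to judge the system-level effect of the violation.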
3

An Effective Traffic-Reroute Scheme with Reverse Labeling in MPLS Networks

Lin, Kai-Han 01 August 2003 (has links)
MPLS, a next-generation backbone architecture, can speed up packet forwarding to the destination by label switching. However, if no backup LSP exists when the primary LSP fails, MPLS frames cannot be forwarded to their destination. Fault recovery has therefore become an important research area in MPLS traffic engineering. The Makam and Haskin approaches are the two best known in the literature. In addition, the IETF formally defined MPLS recovery in RFC 3469 (February 2003). In this thesis, we propose a Reverse Labeling Scheme to handle fault recovery. We establish a virtual reverse LSP along the exact reverse direction of the primary path. When a link in the primary LSP fails, the LSR forwards packets back to the ingress along the virtual reverse LSP instead of the primary LSP. Building a virtual reverse LSP makes the Haskin approach practical to implement. In addition, our scheme saves network resources by making it easier for an LSR to switch from the primary LSP to the backup LSP. To solve the out-of-order packet problem in the Haskin approach, Hundessa adds buffering at every LSR; the buffer temporarily stores packets once a link failure has been detected. Adopting the basic idea of the Hundessa approach, we embed it in our Reverse Labeling Scheme and implement the result on a Linux platform, with some modifications to solve the buffering problems. Finally, we demonstrate the Reverse Labeling Scheme in several experiments, showing both a low packet loss rate and the elimination of out-of-order packets. The significant decrease in out-of-order packets further improves the efficiency of TCP flow transmission.
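A hedged sketch of the forwarding decision the scheme implies: each LSR keeps, besides its primary next hop, a label for the virtual reverse LSP pointing back toward the ingress, and swaps packets onto it when the downstream link fails. Names and structures are illustrative, not taken from the thesis's Linux implementation.

```python
class LSR:
    """Label-switching router with a primary LSP and a virtual reverse LSP."""

    def __init__(self, name, primary_next_hop, reverse_next_hop):
        self.name = name
        self.primary = primary_next_hop    # next LSR on the primary LSP
        self.reverse = reverse_next_hop    # previous LSR (virtual reverse LSP)
        self.link_up = True                # state of the downstream link

    def forward(self, packet):
        if self.link_up:
            return self.primary, packet    # normal label switching
        packet["label"] = "REVERSE"        # relabel onto the virtual reverse LSP
        return self.reverse, packet        # send back toward the ingress

lsr2 = LSR("LSR2", primary_next_hop="LSR3", reverse_next_hop="LSR1")
lsr2.link_up = False                       # simulate a downstream link failure
print(lsr2.forward({"label": "PRIMARY", "payload": "frame"}))
```

On reaching the ingress, packets would be switched onto the pre-established backup LSP; buffering at each LSR, as in the Hundessa approach, limits reordering during the switchover.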
4

A Sustainable Autonomic Architecture for Organically Reconfigurable Computing Systems

Oreifej, Rashad S. 01 January 2011 (has links)
A Sustainable Autonomic Architecture for Organically Reconfigurable Computing Systems based on SRAM Field Programmable Gate Arrays (FPGAs) is proposed, modeled analytically, simulated, prototyped, and measured. Low-level organic elements are analyzed and designed to achieve novel self-monitoring, self-diagnosis, and self-repair organic properties. A prototype of a 2-D spatial-gradient Sobel video edge-detection organic system use-case, developed on an XC4VSX35 Xilinx Virtex-4 Video Starter Kit, is presented. Experimental results demonstrate the applicability of the proposed architecture and provide the infrastructure to quantify its performance and overcome fault-handling limitations. Dynamic online autonomous functionality restoration after a malfunction, or after a functionality shift due to changing requirements, is achieved at a fine granularity by exploiting dynamic Partial Reconfiguration (PR) techniques. A Genetic Algorithm (GA)-based hardware/software platform for intrinsic evolvable hardware is designed and evaluated for digital circuit repair using a variety of well-accepted benchmarks. Dynamic bitstream compilation for enhanced mutation and crossover operators is achieved by directly manipulating the bitstream using a layered toolset. Experimental results on the edge-detector organic system prototype show complete organic online refurbishment after a hard fault. In contrast to previous toolsets requiring many milliseconds or seconds, an average of 0.47 microseconds is required to perform the genetic mutation, 4.2 microseconds for single-point conventional crossover, 3.1 microseconds for Partial Match Crossover (PMX) as well as Order Crossover (OX), 2.8 microseconds for Cycle Crossover (CX), and 1.1 milliseconds for one input-pattern intrinsic evaluation. These figures represent a performance advantage of three orders of magnitude over the JBITS software framework and more than seven orders of magnitude over the Xilinx design flow. A Combinatorial Group Testing (CGT) technique was combined with the conventional GA in what is called a CGT-pruned GA to reduce repair time and increase system availability. Results show up to a 37.6% convergence advantage for the pruned technique. Lastly, a quantitative stochastic sustainability model for reparable systems is formulated to evaluate the sustainability of FPGA-based reparable systems. This model computes at design time the resources required for refurbishment to meet mission availability and lifetime requirements in a given fault-susceptible mission. By applying this model to MCNC benchmark circuits and the Sobel edge-detector in a realistic space-mission use-case on a Xilinx Virtex-4 FPGA, we demonstrate a comprehensive model encompassing the inter-relationships between system sustainability and fault rates, utilized and redundant hardware resources, repair-policy parameters, and decaying reparability.
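A minimal sketch of the GA-driven repair loop described above, with candidate configurations reduced to plain bitstrings and intrinsic on-FPGA evaluation replaced by a software stub; the bitstream length, population size, rates, and fitness function are assumptions for illustration, not the thesis's layered toolset, and only single-point crossover (one of the several operators mentioned) is shown.

```python
import random

BITS, POP, GENS = 64, 20, 200

def evaluate(bits):
    # Stand-in for intrinsic evaluation on the FPGA: here, similarity to an
    # arbitrary "golden" configuration plays the role of test-vector scoring.
    golden = [i % 2 for i in range(BITS)]
    return sum(b == g for b, g in zip(bits, golden)) / BITS

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

def crossover(a, b):
    cut = random.randrange(1, BITS)          # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=evaluate, reverse=True)     # elitist selection
    parents = pop[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - POP // 2)]
    pop = parents + children
print("best fitness:", evaluate(max(pop, key=evaluate)))
```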
5

New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

Espinosa García, Jaime 03 November 2016 (has links)
The relevance of electronics to the safety of everyday devices keeps growing, as an ever larger share of device functionality is assigned to them. This comes along with a constant need for higher performance to fulfill those functional requirements while keeping power and cost low. In this scenario, industry struggles to provide technology that meets all performance, power, and price specifications, at the cost of increased vulnerability to several types of known faults and the appearance of new ones. To address these new and growing fault types, designers have relied on traditional techniques from safety-critical applications, which generally yield suboptimal results. Modern embedded architectures could, in fact, optimize dependability by letting the hardware, firmware, and software levels interact, but this potential has not yet been realized; advances at every level are needed if flexible, robust, resilient, and cost-effective fault tolerance is desired. The work presented here focuses on the hardware level, with an eye toward eventual integration into a holistic approach. The efforts in this thesis address several issues: (i) introducing additional fault models needed to adequately represent the physical effects emerging in modern manufacturing technologies, (ii) providing tools and methods to efficiently inject both the proposed and the classical fault models, (iii) analyzing the optimal method for assessing system robustness through extensive fault injection and subsequent correlation with higher-level layers, in an effort to cut development time and cost, (iv) providing new detection methodologies to cope with the challenges captured by the proposed fault models, (v) proposing mitigation strategies aimed at these new threat scenarios, and (vi) devising an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcome of the thesis is a suite of tools and methods that help designers of critical systems develop robust, validated, on-time designs tailored to their applications. / Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146
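As a toy illustration of the kind of fault-injection campaign items (ii) and (iii) describe, the sketch below flips single bits in an internal value of a small behavioral model and measures how often the fault propagates to the observable output; the model, bit width, and campaign size are invented for the example.

```python
import random

WIDTH, RUNS = 8, 1000

def dut(x, fault_bit=None):
    acc = 3 * x + 7                      # toy datapath under test
    if fault_bit is not None:
        acc ^= 1 << fault_bit            # inject a single bit flip
    return acc & 0x0F                    # only the low nibble is observable

failures = 0
for _ in range(RUNS):
    x = random.randrange(256)
    if dut(x) != dut(x, fault_bit=random.randrange(WIDTH)):
        failures += 1                    # fault propagated to the output

# Flips in the masked high bits never propagate, so the rate stays below 100%.
print(f"observed failure rate: {failures / RUNS:.2%}")
```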
6

Deep Learning Fault Protection Applied to Spacecraft Attitude Determination and Control

Mansell, Justin 30 July 2020 (has links)
The increasing number and complexity of spacecraft are driving a growing need for automated fault detection, isolation, and recovery. Anomalies and failures are common occurrences during space flight operations, yet most spacecraft currently possess limited ability to detect them, diagnose their underlying cause, and enact an appropriate response. This leaves ground operators to interpret extensive telemetry and resolve faults manually, something that is impractical for large constellations of satellites and difficult to do in a timely fashion for missions in deep space. A traditional hurdle for achieving autonomy has been that effective fault detection, isolation, and recovery requires appreciating the wider context of telemetry information. Advances in machine learning are finally allowing computers to succeed at such tasks. This dissertation presents an architecture based on machine learning for detecting, diagnosing, and responding to faults in a spacecraft attitude determination and control system. Unlike previous approaches, the availability of faulty examples is not assumed. In the first level of the system, one-class support vector machines are trained on nominal data to flag anomalies in telemetry. Meanwhile, a spacecraft simulator is used to model the activation of anomaly flags under different fault conditions and to train a long short-term memory neural network to convert time-dependent anomaly information into a diagnosis. Decision theory is then used to convert diagnoses into a recovery action. The overall technique is successfully validated on data from the LightSail 2 mission.
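A small sketch of the first detection level as described: a one-class SVM trained only on nominal telemetry (no faulty examples needed) that flags departures as anomalies. The two synthetic features stand in for real telemetry channels, and all numbers are invented.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Nominal telemetry: [reaction-wheel speed (rpm), body rate (rad/s)] (assumed channels)
nominal = rng.normal(loc=[2000.0, 0.01], scale=[50.0, 0.005], size=(500, 2))

clf = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(nominal)

test = np.array([[2010.0, 0.012],   # nominal-looking sample
                 [1200.0, 0.150]])  # wheel slowdown with elevated body rate
print(clf.predict(test))            # +1 = nominal, -1 = anomaly flag
```

In the full architecture, streams of such anomaly flags over time would feed the LSTM diagnosis stage.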
7

Transformer fault-recovery inrush currents in MMC-HVDC systems and mitigation strategies

Vaheeshan, Jeganathan January 2017 (has links)
The UK Government has set an ambitious target to achieve 15% of final energy consumption from renewable sources by 2020. High Voltage Direct Current (HVDC) technology is an attractive solution for integrating offshore wind farms farther from the coast, and in the near future more wind farms are likely to be connected to the UK grid using HVDC links. With the onset of this fairly new technology, new challenges are inevitable. This research addresses these challenges by examining potential problems with faster AC/DC interaction modes, especially the impact of inrush currents that occur during fault-recovery transients, and by investigating possible mitigation strategies. Initially, the relative merits of different transformer models are analysed with respect to inrush-current transient studies. The most appropriate transformer model is selected and further validated using field measurement data. A detailed electro-magnetic-transient (EMT) model of a grid-connected MMC-HVDC system is prepared in PSCAD/EMTDC to capture the key dynamics of fault-recovery transformer inrush currents. It is shown that the transformer in an MMC system can evoke inrush currents during fault recovery and cause transient interactions with the converter and the rest of the system that should not be neglected. It is shown for the first time, through a detailed dynamic analysis, that if the current sensors of the inner-current control loops are placed at the converter side of the transformer instead of the grid side, the inrush currents will mainly flow from the grid and decay faster. This is suggested as a basic remedial action to protect the converter from inrush currents. Afterwards, analytical calculations of the peak flux-linkage magnitude in each phase, following a voltage-sag recovery transient, are derived and verified. The effects of zero-sequence currents and fault resistance on the peak flux-linkage magnitude are systematically explained, and a zero-sequence-current suppression controller is proposed. A detailed study is carried out to assess the key factors that affect the maximum peak flux-linkage and magnetisation-current magnitudes, especially fault-specific factors such as fault inception angle, duration, and fault-current attenuation. Subsequently, the relative merits of a prior-art inrush-current mitigation strategy and its implementation challenges in a grid-connected MMC converter are analysed. It is shown that the feedforward-based auxiliary flux-offset compensation scheme incorporated in that strategy needs to be modified with a feedback control technique to alleviate the major drawbacks identified. Following that, eight different feedback-based control schemes are devised, and a detailed dynamic and transient analysis is carried out to find the best one. The relative merits of the identified control scheme and its implementation challenges in an MMC converter are also analysed. Finally, a detailed EMT model of an islanded MMC-HVDC system is implemented in PSCAD/EMTDC and the impacts of fault-recovery inrush currents are analysed. For that, an MMC control scheme is first devised in the synchronous reference frame and its controllers are systematically tuned. To obtain improved performance, an equivalent control scheme is derived in the stationary reference frame with Proportional-Resonant controllers and incorporated in the EMT model. Following that, two novel inrush-current mitigation strategies are proposed, supported by analytical equations, and verified.
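The mechanism behind fault-recovery inrush can be summarized with the standard single-phase flux-linkage integral (winding resistance neglected); this is textbook background, not the thesis's multi-phase derivation. With supply voltage v(t) = V_m sin(ωt + θ):

```latex
\lambda(t) = \lambda(t_0) + \int_{t_0}^{t} v(\tau)\,\mathrm{d}\tau
           = \underbrace{\lambda_{\mathrm{dc}}}_{\text{offset left by the sag}}
             \;-\; \frac{V_m}{\omega}\cos(\omega t + \theta)
```

If the DC offset left by the sag-and-recovery transient pushes the peak of λ(t) beyond the saturation knee of the core, the magnetizing current spikes; this is the inrush the thesis analyses and mitigates.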
8

Schedulability in Mixed-criticality Systems

Kahil, Rany 26 June 2019 (has links)
Real-time safety-critical systems must complete their tasks within a given time limit. Failure to successfully perform their operations, or missing a deadline, can have severe consequences such as destruction of property and/or loss of life. Examples of such systems include automotive systems, drones, and avionics, among others. Safety guarantees must be provided before these systems can be deemed usable; this is usually done through certification performed by a certification authority. Safety evaluation and certification are complicated and costly even for small systems. One answer to these difficulties is the isolation of critical functionality. Executing tasks of different criticalities on separate platforms prevents non-critical tasks from interfering with critical ones, provides a higher guarantee of safety, and simplifies the certification process by limiting it to only the critical functions. But this separation, in turn, introduces undesirable results: inefficient resource utilization and an increase in cost, weight, size, and energy consumption, which can put a system at a competitive disadvantage. To overcome the drawbacks of isolation, Mixed Criticality (MC) systems can be used. These systems allow functionalities with different criticalities to execute on the same platform. In 2007, Vestal proposed a model to represent MC systems in which tasks have multiple Worst Case Execution Times (WCETs), one for each criticality level. In addition, correctness conditions for scheduling policies were formally defined, allowing lower-criticality jobs to miss deadlines or even be dropped in cases of failure or emergency. The introduction of multiple WCETs and different correctness conditions increased the difficulty of the scheduling problem for MC systems. Conventional scheduling policies and schedulability tests proved inadequate and the need for new algorithms arose; since then, a lot of work has been done in this field. In this thesis, we contribute to the study of schedulability in MC systems. The workload of a system is represented as a set of jobs that can describe the execution over the hyper-period of tasks or over a given duration. This model allows us to study the viability of simulation-based correctness tests in MC systems. We show that simulation tests can still be used in mixed-criticality systems, but that the schedulability of the worst-case scenario is no longer sufficient to guarantee the schedulability of the system, even in the fixed-priority scheduling case. We show that scheduling policies are not predictable in general, and define the concept of weak predictability for MC systems.
We prove that a specific class of fixed-priority policies is weakly predictable and propose two simulation-based correctness tests that work for weakly predictable policies. We also demonstrate that, contrary to what was believed, testing for correctness cannot be done through only a linear number of preemptions. The majority of the related work focuses on systems with two criticality levels due to the difficulty of the problem. But for automotive and airborne systems, industrial standards define four or five criticality levels, which motivated us to propose a scheduling algorithm that schedules mixed-criticality systems with, in theory, any number of criticality levels. We show experimentally that it has higher success rates than the state of the art. We illustrate how our scheduling algorithm, or any algorithm that generates a single time-triggered table for each criticality mode, can be used as a recovery strategy to ensure the safety of the system in case of certain failures. Finally, we propose a high-level concurrency language and a model for designing an MC system with coarse-grained multi-core interference.
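For readers unfamiliar with Vestal's model, here is a hedged sketch of how a job set with per-level WCETs and a simulation-based fixed-priority correctness check could be encoded; the parameters are invented, and the check is a simplification of the thesis's weak-predictability-based tests.

```python
from dataclasses import dataclass

RANK = {"LO": 0, "HI": 1}   # two levels for the example; the model allows more

@dataclass
class Job:
    name: str
    release: int
    deadline: int
    priority: int            # lower value = higher priority
    crit: str                # the job's own criticality level
    wcet: dict               # criticality level -> execution budget (Vestal)

def schedulable(jobs, level, horizon):
    remaining = {j.name: j.wcet[level] for j in jobs}
    finish = {}
    for t in range(horizon):                       # unit-time simulation
        ready = [j for j in jobs if j.release <= t and remaining[j.name] > 0]
        if ready:
            j = min(ready, key=lambda jb: jb.priority)  # fixed-priority pick
            remaining[j.name] -= 1
            if remaining[j.name] == 0:
                finish[j.name] = t + 1
    # Vestal-style correctness: at level L, only jobs of criticality >= L
    # must meet their deadlines; lower-criticality jobs may be dropped.
    return all(finish.get(j.name, horizon + 1) <= j.deadline
               for j in jobs if RANK[j.crit] >= RANK[level])

jobs = [Job("hi_task", 0, 4, 0, "HI", {"LO": 2, "HI": 3}),
        Job("lo_task", 0, 10, 1, "LO", {"LO": 3, "HI": 3})]
print(schedulable(jobs, "LO", 16), schedulable(jobs, "HI", 16))
```

At level HI only the HI-criticality job must meet its deadline, mirroring the correctness conditions that allow lower-criticality jobs to be dropped in emergencies.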
