31

Efficient Generation of Mutants for Testing Execution Time

Abusamaan, Mohammed, Abuayyash, Mohammed January 2020 (has links)
In this thesis, we focus on testing non-functional properties. We target the Worst-Case Execution Time (WCET), which is very important for real-time tasks. The thesis applies the concept of targeted mutation, where mutations are applied to the parts of the code that are most likely to significantly affect the execution time; program slicing is used to direct the mutations to the code parts that are likely to have the strongest influence on execution time. The main contribution of the thesis is an implementation of the method and an experimental evaluation of targeted mutation testing. To evaluate the method, we used two C benchmarks from the Mälardalen University benchmark suite. The first benchmark, janne_complex, consists of 20 lines of code and 10 test cases; it contains two nested loops, where the maximum number of iterations of the inner loop depends on the current iteration of the outer loop, and its results correspond to what a flow analysis of this benchmark should produce. The second benchmark, Ludcmp, consists of 50 lines of code and 9 test cases; it solves simultaneous linear equations by LU decomposition, with two arrays for input and one array for output, and the number of equations determined by the variable n. For both benchmarks, the number of lines of code decreased after slicing, and each test case was applied ten times against each mutant to dampen the effects of caching and of other concurrently running applications, since our experiments rely on measurement-based estimation of the WCET.
The thesis confirms the relevance of the concept of targeted mutation by showing that targeted mutation (i) encourages the design of effective test suites for WCET estimation and (ii) improves the efficiency of this process by reducing the number of mutants.
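The measurement loop described above — ten runs of each test case against each mutant, with the maximum observed time kept as the WCET estimate — can be sketched in Python; the example mutant below is a hypothetical relational-operator mutation, not one generated in the thesis.

```python
import time

def estimate_wcet(func, test_cases, repetitions=10):
    """Estimate the WCET of `func` as the maximum observed execution
    time over all test cases, each run `repetitions` times (repetition
    damps cache effects and background load, as in the thesis setup)."""
    worst = 0.0
    for case in test_cases:
        for _ in range(repetitions):
            start = time.perf_counter()
            func(*case)
            worst = max(worst, time.perf_counter() - start)
    return worst

def original(n):
    return sum(range(n))          # sums 0 .. n-1

def mutant(n):
    total, i = 0, 0
    while i <= n:                 # hypothetical mutation: `<` flipped to `<=`
        total += i
        i += 1
    return total

tests = [(100,), (1000,)]
wcet_orig = estimate_wcet(original, tests)
wcet_mut = estimate_wcet(mutant, tests)
```

A timing-sensitive test suite would flag the mutant if `wcet_mut` noticeably exceeds `wcet_orig`.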
32

Enriching Enea OSE for Better Predictability Support

Ul Mustafa, Naveed January 2011 (has links)
A real-time application is designed as a set of tasks with specific timing attributes and constraints. These tasks can be categorized as periodic, sporadic or aperiodic, based on the timing attributes specified for them, which in turn define their runtime behavior. To ensure correct execution and behavior of the task set at runtime, the scheduler of the underlying operating system should take the type of each task (i.e., periodic, sporadic or aperiodic) into account. This is important so that the scheduler can schedule the task set in a predictable way and allocate CPU time to each task appropriately, so that the tasks meet their timing constraints. Enea OSE is a real-time operating system with a fixed-priority preemptive scheduling policy that is used heavily in embedded systems, such as telecommunication systems developed by Ericsson. While OSE allows priority levels to be specified for tasks and schedules them accordingly, it cannot distinguish between different types of tasks. This thesis investigates mechanisms for building a scheduler on top of OSE that can identify the three types of real-time tasks and schedule them in a more predictable way. The scheduler can also monitor the behavior of the task set at run-time and invoke violation handlers if the timing constraints of a task are violated. The scheduler is implemented on the OSE 5.5 soft kernel. It identifies periodic, aperiodic and sporadic tasks; sporadic and aperiodic tasks can be interrupt-driven or program-driven. The scheduler implements EDF and RMS as scheduling policies for periodic tasks, while sporadic and aperiodic tasks can be scheduled using a polling server or a background scheme. Schedules generated by the scheduler deviate from the expected timing behavior due to scheduling overhead; approaches to reduce this deviation are suggested as future extensions of the thesis work.
The usability of the scheduler could be increased further by extending it to support scheduling algorithms other than RMS and EDF. / CHESS
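The EDF policy the scheduler implements for periodic tasks can be reduced to a minimal single-CPU, unit-time simulation; this sketch is illustrative only and is unrelated to the OSE soft-kernel code.

```python
def edf_schedule(jobs, horizon):
    """Simulate EDF on one CPU: in each time unit, run the ready job
    with the earliest absolute deadline; report any deadline misses."""
    timeline, misses = [], set()
    for t in range(horizon):
        ready = [j for j in jobs if j["remaining"] > 0]
        if not ready:
            timeline.append("idle")
            continue
        job = min(ready, key=lambda j: j["deadline"])  # earliest deadline first
        job["remaining"] -= 1
        timeline.append(job["name"])
        if job["remaining"] == 0 and t + 1 > job["deadline"]:
            misses.add(job["name"])                    # finished too late
    return timeline, misses

# two jobs: (execution demand, absolute deadline)
jobs = [
    {"name": "A", "remaining": 2, "deadline": 4},
    {"name": "B", "remaining": 3, "deadline": 6},
]
timeline, misses = edf_schedule(jobs, 6)
# timeline: ["A", "A", "B", "B", "B", "idle"], no misses
```

A violation handler, as in the thesis, would hook into the point where a miss is recorded.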
33

Compiler optimization VS WCET : Battle of the ages / Kompilatoroptimering VS WCET

Harrius, Tova, Nordin, Max January 2022 (has links)
Optimization by a compiler can be performed with many different methods. The defence company Saab provided us with a mission: to see whether we could optimize their code with the help of the GCC compiler and its optimization flags. For this thesis we conducted a study of the optimization flags with the aim of decreasing the worst-case execution time. The first step in assembling an effective base of flags was reading the documentation for the flags. We then tested the different flags and analyzed them, and ended up with four chosen sets that we saw fit to discuss and analyze further. The results did not live up to our expectations, as we had thought the flags would reduce the execution time. In the majority of cases the flags gave a small increase in execution time; only one set, which we called the Expensive Optimization, gave us a decrease. With these results we can conclude that Saab does not need to change its existing set of optimization flags to optimize its compilation further.
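The shape of the experiment — compile the same benchmark under several GCC flag sets and time each binary — can be outlined in shell. The flag sets below are hypothetical stand-ins (the abstract names only the "Expensive Optimization" set), and the compile-and-measure step is shown as an echoed command rather than executed.

```shell
#!/bin/sh
# Hypothetical flag sets in the spirit of the thesis experiments;
# -fexpensive-optimizations and -funroll-loops are real GCC flags.
BASELINE="-O0"
SIZE="-Os"
SPEED="-O2 -funroll-loops"
EXPENSIVE="-O2 -fexpensive-optimizations"

for flags in "$BASELINE" "$SIZE" "$SPEED" "$EXPENSIVE"; do
    echo "would run: gcc $flags -o bench bench.c && time ./bench"
    # in the real study each binary is timed repeatedly and the
    # worst observed time is kept as the WCET estimate
done
```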
34

Worst-case analysis for multi-core processors with shared caches / Analyse pire cas pour processeur multi-cœurs disposant de caches partagés

Hardy, Damien 09 December 2010 (has links) (PDF)
Hard real-time systems are subject to timing constraints whose violation can have catastrophic economic, ecological, or human consequences. The validation process, which guarantees the safety of such software by ensuring that these constraints are met in every possible situation, including the worst case, relies on a priori knowledge of the worst-case execution time of each task in the software. However, obtaining this worst-case execution time is a difficult problem on current architectures, because of complex hardware mechanisms that can cause significant variability in execution time. This document focuses on the worst-case timing analysis of cache memory hierarchies, in order to determine their contribution to the worst-case execution time. Several approaches are proposed to predict and improve the worst-case execution time of tasks running on multi-core processors with a cache hierarchy in which some levels are shared between the cores.
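Multi-level shared-cache analyses of this kind typically build on the single-level LRU "must" analysis, in which an abstract cache state maps each memory block to an upper bound on its age, so that any block present in the state is guaranteed to be cached. A minimal sketch of the update and join operations, under that standard formulation (not the thesis's own multi-level extension):

```python
def must_update(state, block, ways=4):
    """Abstract LRU 'must' update: the accessed block gets age 0,
    strictly younger blocks age by one, and blocks whose age bound
    reaches the associativity `ways` can no longer be guaranteed."""
    old_age = state.get(block, ways)
    new = {}
    for b, age in state.items():
        if b == block:
            continue
        new_age = age + 1 if age < old_age else age  # only younger blocks age
        if new_age < ways:
            new[b] = new_age
    new[block] = 0
    return new

def must_join(s1, s2):
    """Join at a control-flow merge: keep only blocks guaranteed on
    both paths, with the maximal (most pessimistic) age bound."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

s = must_update({}, "a")
s = must_update(s, "b")            # {"b": 0, "a": 1}
merged = must_join(s, {"a": 0})    # {"a": 1}: only "a" is on both paths
```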
35

From Theory to Implementation of Embedded Control Applications : A Case Study

Fize, Florian January 2016 (has links)
Control applications are used in almost all scientific domains and are subject to timing constraints. Moreover, different applications can run on the same platform, which leads to even more complex timing behavior. However, timing issues are not always considered in the implementation of such applications, and this can make the system fail. In this thesis, the timing issues are considered, i.e., the problem of non-constant delay in the control of an inverted pendulum with a real-time kernel running on an ATmega328p micro-controller. The study shows that control performance is affected by this problem. In addition, the thesis reports the adaptation of an existing real-time kernel, based on an EDF (Earliest Deadline First) scheduling policy, to the architecture of the ATmega328p. Moreover, a new server-based kernel approach is implemented in this thesis, still on the same Atmel micro-controller.
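The effect of a non-constant delay on control performance can be illustrated with a toy example: a first-order plant under proportional control, discretised with either a constant or a jittery sampling period. The gains and periods below are arbitrary illustration values with no connection to the pendulum rig in the thesis.

```python
def simulate(periods, kp=15.0, setpoint=1.0, steps=30):
    """Toy first-order plant x' = u under proportional control; the
    per-step sampling period cycles through `periods`. Returns the
    final absolute tracking error."""
    x = 0.0
    for k in range(steps):
        h = periods[k % len(periods)]   # actual, possibly jittery period
        u = kp * (setpoint - x)         # proportional control law
        x += h * u                      # plant integrates the input
    return abs(setpoint - x)

err_fixed = simulate([0.1])             # constant sampling period
err_jitter = simulate([0.05, 0.15])     # same mean period, non-constant delay
# with these gains, the jittery schedule tracks noticeably worse
```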
36

Evaluation of a method for identifying timing models

Kahsu, Lidia January 2012 (has links)
In today's world, embedded systems with very large and highly configurable software, consisting of hundreds of tasks comprising many lines of code and mostly subject to real-time constraints, have replaced traditional systems. In real-time systems, the WCET of a program — the longest execution time of a specified task — is a crucial quantity. The WCET is determined by WCET analysis techniques, and the values produced should be tight and safe to ensure the proper timing behavior of a real-time system. Static WCET analysis is one technique for computing upper bounds on the execution time of programs, without actually executing the programs but relying on mathematical models of the software and hardware involved. Such mathematical models can be used to generate timing estimates at source code level when the hardware is not yet fully accessible or the code is not yet ready to compile. In this thesis, the methods for building timing models developed by the WCET group at MDH have been assessed by evaluating the accuracy of the resulting timing models for a number of hardware architectures. Furthermore, the timing model identification is extended to various hardware platforms, including advanced architectures with caches and pipelines, and to floating-point instructions, by selecting benchmarks that use floating-point operations.
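Timing model identification of this kind is essentially regression: fit per-operation cycle costs so that they reproduce measured end-to-end times. A toy least-squares sketch with two hypothetical operation classes follows; the actual MDH method and its feature set are not detailed in the abstract, so this is an illustration of the idea only.

```python
def identify_costs(counts, times):
    """Least-squares fit of two per-operation costs from measured run
    times, solving the 2x2 normal equations by Cramer's rule.
    `counts` holds (op-class-1 count, op-class-2 count) per program."""
    s11 = sum(a * a for a, m in counts)
    s12 = sum(a * m for a, m in counts)
    s22 = sum(m * m for a, m in counts)
    b1 = sum(a * t for (a, m), t in zip(counts, times))
    b2 = sum(m * t for (a, m), t in zip(counts, times))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)

# three synthetic measurements: (ALU ops, memory ops) -> cycles,
# generated from a ground-truth model of 1 and 3 cycles per op
counts = [(100, 20), (50, 80), (200, 10)]
times = [1 * a + 3 * m for a, m in counts]
c_alu, c_mem = identify_costs(counts, times)   # recovers (1.0, 3.0)
```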
37

Extraction and traceability of annotations for WCET estimation / Extraction et traçabilité d’annotations pour l’estimation de WCET

Li, Hanbing 09 October 2015 (has links)
Real-time systems have become ubiquitous and play an important role in our everyday life. For hard real-time systems, computing correct results is not the only requirement: results must also be produced within a bounded time, so knowing the worst-case execution time (WCET) is necessary to guarantee that the system meets its timing constraints. For tight WCET estimation, annotations are required. 
Annotations are usually added at the source code level, but WCET analysis is performed at the binary code level. Compiler optimization sits between these two levels and affects the structure of the code and the annotations. We propose a transformation framework that, for each optimization, traces the annotation information from the source code level down to the binary code level. The framework can transform the annotations without loss of flow information. We chose LLVM as the compiler with which to implement our framework, and we used the Mälardalen, TSVC and gcc-loops benchmarks to demonstrate the impact of our framework on compiler optimizations and annotation transformation. The experimental results show that with our framework many optimizations can be turned on while the WCET can still be estimated safely, and the estimated WCET is better (lower) than the original one. We also show that compiler optimizations are beneficial for real-time systems.
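The core idea — keeping a source-level flow annotation valid while an optimization reshapes a loop — can be sketched as follows. The optimizations and transformation rules shown are simplified illustrations, not the actual LLVM passes handled by the framework.

```python
def transform_loop_bound(bound, optimization, factor=4):
    """Trace a source-level loop-bound annotation through a compiler
    optimization so that it remains a valid bound on the transformed code."""
    if optimization == "unroll":
        # body duplicated `factor` times: ceil(bound / factor) iterations
        # of the unrolled loop remain (remainder handling elided)
        return -(-bound // factor)
    if optimization == "peel":
        return max(bound - 1, 0)   # first iteration moved out of the loop
    return bound                    # optimization leaves the bound intact

unrolled_bound = transform_loop_bound(10, "unroll")   # 3 iterations remain
```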
38

Timing Analysis in Software Development

Däumler, Martin 31 March 2008 (has links)
Rapid development processes and higher customer requirements lead to the increasing integration of software solutions in the automotive industry's products. Today, several electronic control units communicate over bus systems such as CAN and perform computation of complex algorithms. This increasingly requires controlled timing behavior. This diploma thesis investigates how the timing analysis tool SymTA/S can be used in the software development process of ZF Friedrichshafen AG. Within the scope of several scenarios, it examines the benefits of using the tool, the difficulties in using it, and the questions that cannot be answered by the timing analysis tool.
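For fixed-priority task sets of the kind such tools analyze, the textbook computation is the response-time recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated to a fixed point. A minimal sketch (this assumes the task set is schedulable, so the iteration converges; it is not SymTA/S itself):

```python
import math

def response_time(i, tasks):
    """Fixed-point response-time analysis for fixed-priority preemptive
    tasks. `tasks` is a list of (WCET, period) pairs, highest priority
    first; task i suffers interference from all tasks before it."""
    c_i = tasks[i][0]
    r = c_i
    while True:
        r_next = c_i + sum(math.ceil(r / t_j) * c_j
                           for c_j, t_j in tasks[:i])
        if r_next == r:
            return r          # converged: worst-case response time
        r = r_next

tasks = [(1, 4), (2, 6), (3, 12)]   # (C, T) in priority order
r_lowest = response_time(2, tasks)  # 10, within the period of 12
```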
39

Quality-of-Service Aware Design and Management of Embedded Mixed-Criticality Systems

Ranjbar, Behnaz 06 December 2022 (has links)
Nowadays, implementing a complex system that executes various applications with different levels of assurance is a growing trend in modern embedded real-time systems, driven by cost, timing, and power consumption requirements. Medical devices, automotive, and avionics are the most common safety-critical application domains exploiting these systems, known as Mixed-Criticality (MC) systems. MC applications are real-time, and to ensure their correctness it is essential to meet strict timing requirements as well as functional specifications. The correct design of such MC systems requires a thorough understanding of the system's functions and their importance to the system. A failure or deadline miss in functions with different criticality levels has a different impact on the system, ranging from no effect to catastrophic consequences. Failure in the execution of tasks with higher criticality levels (HC tasks) may lead to system failure and cause irreparable damage, whereas Low-Criticality (LC) tasks assist the system in carrying out its mission successfully, but their failure has less impact on the system's functionality and does not cause the system itself to fail. In order to guarantee MC system safety, tasks are analyzed under different assumptions to obtain different Worst-Case Execution Times (WCETs) corresponding to the multiple criticality levels and the operation mode of the system. If the execution time of at least one HC task exceeds its low WCET, the system switches from low-criticality mode (LO mode) to high-criticality mode (HI mode). All HC tasks then continue executing under their high WCETs to guarantee the system's safety. In this HI mode, all or some LC tasks are dropped or degraded in favor of HC tasks to ensure the HC tasks' correct execution. Determining an appropriate low WCET for each HC task is therefore crucial for designing efficient MC systems and maximizing QoS. 
However, even when the low WCETs are set correctly, dropping or degrading the LC tasks in the HI mode is undesirable, due to its negative impact on other functions or on the ability of the entire system to accomplish its mission correctly. Therefore, how to analyze task dropping in the HI mode is a significant challenge in designing efficient MC systems: the successful execution of all HC tasks must be guaranteed, to prevent catastrophic damage, while the QoS is improved. Due to the continuous rise in computational demand of MC tasks in safety-critical applications, such as autonomous driving control, designers are motivated to deploy MC applications on multi-core platforms. Although the parallel execution offered by multi-core platforms helps to improve QoS and meet real-time requirements, the high power consumption and temperature of the cores may make the system more susceptible to failures and instability, which is not desirable in MC applications. Therefore, improving the QoS while managing power consumption and guaranteeing real-time constraints is the critical issue in designing such MC systems on multi-core platforms. This thesis addresses the challenges associated with efficient MC system design. We first focus on application analysis, determining appropriate WCETs by proposing a novel approach that provides a reasonable trade-off between the number of scheduled LC tasks at design-time and the probability of mode switching at run-time, in order to improve system utilization and QoS. The approach presents an analytic scheme, based on the Chebyshev theorem, to obtain low WCETs at design-time. We also show the relationship between the low WCETs and the mode-switching probability, and formulate and solve the problem of improving resource utilization while reducing the mode-switching probability. Further, we analyze LC task dropping in the HI mode to improve QoS. 
We first propose a heuristic in which a new metric is defined that determines the number of allowable drops in the HI mode. Then, the task schedulability analysis is developed based on the new metric. Since the occurrence of the worst-case scenario at run-time is a rare event, a learning-based drop-aware task scheduling mechanism is then proposed, which carefully monitors the alterations in the behavior of MC systems at run-time to exploit the dynamic slacks for improving the QoS. Another critical design challenge is how to improve QoS using the parallel feature of multi-core platforms while managing the power consumption and temperature of these platforms. We develop a tree of possible task mapping and scheduling at design-time to cover all possible scenarios of task overrunning and reduce the LC task drop rate in the HI mode while managing the power and temperature in each scenario of task scheduling. Since the dynamic slack is generated due to the early execution of tasks at run-time, we propose an online approach to reduce the power consumption and maximum temperature by using low-power techniques like DVFS and task re-mapping, while preserving the QoS. Specifically, our approach examines multiple tasks ahead to determine the most appropriate task for the slack assignment that has the most significant effect on power consumption and temperature. However, changing the frequency and selecting a proper task for slack assignment and a suitable core for task re-mapping at run-time can be time-consuming and may cause deadline violation. Therefore, we analyze and optimize the run-time scheduler.
Table of contents: 1. Introduction 1.1. Mixed-Criticality Application Design 1.2. Mixed-Criticality Hardware Design 1.3. Certain Challenges and Questions 1.4. Thesis Key Contributions 1.4.1. Application Analysis and Modeling 1.4.2. Multi-Core Mixed-Criticality System Design 1.5. Thesis Overview 2. Preliminaries and Literature Reviews 2.1. Preliminaries 2.1.1. Mixed-Criticality Systems 2.1.2. 
Fault-Tolerance, Fault Model and Safety Requirements 2.1.3. Hardware Architectural Modeling 2.1.4. Low-Power Techniques and Power Consumption Model 2.2. Related Works 2.2.1. Mixed-Criticality Task Scheduling Mechanisms 2.2.2. QoS Improvement Methods in Mixed-Criticality Systems 2.2.3. QoS-Aware Power and Thermal Management in Multi-Core Mixed-Criticality Systems 2.3. Conclusion 3. Bounding Time in Mixed-Criticality Systems 3.1. BOT-MICS: A Design-Time WCET Adjustment Approach 3.1.1. Motivational Example 3.1.2. BOT-MICS in Detail 3.1.3. Evaluation 3.2. A Run-Time WCET Adjustment Approach 3.2.1. Motivational Example 3.2.2. ADAPTIVE in Detail 3.2.3. Evaluation 3.3. Conclusion 4. Safety- and Task-Drop-Aware Mixed-Criticality Task Scheduling 4.1. Problem Objectives and Motivational Example 4.2. FANTOM in detail 4.2.1. Safety Quantification 4.2.2. MC Tasks Utilization Bounds Definition 4.2.3. Scheduling Analysis 4.2.4. System Upper Bound Utilization 4.2.5. A General Design Time Scheduling Algorithm 4.3. Evaluation 4.3.1. Evaluation with Real-Life Benchmarks 4.3.2. Evaluation with Synthetic Task Sets 4.4. Conclusion 5. Learning-Based Drop-Aware Mixed-Criticality Task Scheduling 5.1. Motivational Example and Problem Statement 5.2. Proposed Method in Detail 5.2.1. An Overview of the Design-Time Approach 5.2.2. Run-Time Approach: Employment of SOLID 5.2.3. LIQUID Approach 5.3. Evaluation 5.3.1. Evaluation with Real-Life Benchmarks 5.3.2. Evaluation with Synthetic Task Sets 5.3.3. Investigating the Timing and Memory Overheads of ML Technique 5.4. Conclusion 6. Fault-Tolerance and Power-Aware Multi-Core Mixed-Criticality System Design 6.1. Problem Objectives and Motivational Example 6.2. Design Methodology 6.3. Tree Generation and Fault-Tolerant Scheduling and Mapping 6.3.1. Making Scheduling Tree 6.3.2. Mapping and Scheduling 6.3.3. Time Complexity Analysis 6.3.4. Memory Space Analysis 6.4. Evaluation 6.4.1. Experimental Setup 6.4.2. 
Analyzing the Tree Construction Time 6.4.3. Analyzing the Run-Time Timing Overhead 6.4.4. Peak Power Management and Thermal Distribution for Real-Life and Synthetic Applications 6.4.5. Analyzing the QoS of LC Tasks 6.4.6. Analyzing the Peak Power Consumption and Maximum Temperature 6.4.7. Effect of Varying Different Parameters on Acceptance Ratio 6.4.8. Investigating Different Approaches at Run-Time 6.5. Conclusion 7. QoS- and Power-Aware Run-Time Scheduler for Multi-Core Mixed-Criticality Systems 7.1. Research Questions, Objectives and Motivational Example 7.2. Design-Time Approach 7.3. Run-Time Mixed-Criticality Scheduler 7.3.1. Selecting the Appropriate Task to Assign Slack 7.3.2. Re-Mapping Technique 7.3.3. Run-Time Management Algorithm 7.3.4. DVFS governor in Clustered Multi-Core Platforms 7.4. Run-Time Scheduler Algorithm Optimization 7.5. Evaluation 7.5.1. Experimental Setup 7.5.2. Analyzing the Relevance Between a Core Temperature and Energy Consumption 7.5.3. The Effect of Varying Parameters of Cost Functions 7.5.4. The Optimum Number of Tasks to Look-Ahead and the Effect of Task Re-mapping 7.5.5. The Analysis of Scheduler Timings Overhead on Different Real Platforms 7.5.6. The Latency of Changing Frequency in Real Platform 7.5.7. The Effect of Latency on System Schedulability 7.5.8. The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement 7.5.9. The Analysis of the Proposed Method on Peak power, Energy and Maximum Temperature Improvement in a Multi-Core Platform Based on the ODROID-XU3 Architecture 7.5.10. Evaluation of Running Real MC Task Graph Model (Unmanned Air Vehicle) on Real Platform 7.6. Conclusion 8. Conclusion and Future Work 8.1. Conclusions 8.2. Future Work
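The design-time WCET adjustment based on the Chebyshev theorem can be illustrated with its one-sided (Cantelli) form: given the mean and standard deviation of a task's execution time, choose the low WCET so that the probability of exceeding it, and hence of a switch to HI mode, stays below a target. The exact formulation used in the thesis (BOT-MICS) is not given in this summary, so the sketch below is an assumption.

```python
import math

def low_wcet(mean, std, p_switch):
    """Pick a low WCET so that, by the one-sided Chebyshev (Cantelli)
    inequality P(X >= mean + k*std) <= 1 / (1 + k^2), the probability
    of overrunning it is at most `p_switch`; solving for k gives
    k = sqrt(1/p - 1)."""
    k = math.sqrt(1.0 / p_switch - 1.0)
    return mean + k * std

# mean 100, std 10, at most 10% mode-switch probability -> low WCET 130
wcet_lo = low_wcet(mean=100.0, std=10.0, p_switch=0.1)
```

Raising `p_switch` lowers the low WCET, admitting more LC tasks at design-time at the cost of more frequent HI-mode switches — the trade-off the thesis formulates.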
40

Quality-of-Service Aware Design and Management of Embedded Mixed-Criticality Systems

Ranjbar, Behnaz 12 April 2024 (has links)
