  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Analysis of the planar exterior Navier-Stokes problem with effects related to rotation of the obstacle / 障害物の回転効果に関連するナヴィエ-ストークス方程式の2次元外部問題の解析

Higaki, Mitsuo 23 January 2019 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Science / Degree No. Kō 21444 / Sci. Doctorate No. 4437 / Call no. 新制||理||1638 (University Library) / Kyoto University, Graduate School of Science, Division of Mathematics and Mathematical Analysis / Examination committee: Associate Professor 前川 泰則 (chief examiner), Professor 上 正明, Professor 堤 誉志雄 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / DFAM
152

Advanced Scheduling Techniques for Mixed-Criticality Systems

Mahdiani, Mitra 10 August 2022 (has links)
Typically, a real-time system consists of a controlling system (i.e., a computer) and a controlled system (i.e., the environment). Real-time systems are those whose correctness depends on two aspects: (i) the logical result of the computation and (ii) the time at which results are produced. Guaranteeing that timing constraints are met is essential for this kind of system to operate correctly. In many cases -- in so-called hard real-time systems -- missing deadlines is associated with economic loss or loss of human lives and must be avoided under all circumstances. On the other hand, there is a trend towards consolidating software functions onto fewer processors in domains such as automotive systems and avionics, with the aim of reducing cost and complexity. Hence, applications with different levels of criticality that used to run in isolation now start sharing processors. As a result, there is a need for techniques that allow designing such mixed-criticality (MC) systems -- i.e., real-time systems combining different levels of criticality -- while complying with the certification requirements of the different domains. In this research, we study the problem of scheduling MC tasks under EDF (Earliest Deadline First) and propose new approaches to improve scheduling techniques. In particular, we consider a mix of low-criticality (LO) and high-criticality (HI) tasks scheduled on one processor. While LO tasks can be modeled by a minimum inter-arrival time, a deadline, and a worst-case execution time (WCET), HI tasks are characterized by two WCET parameters: an optimistic and a conservative one. Basically, the system operates in two modes: LO and HI. In LO mode, HI tasks run for no longer than their optimistic execution budgets and are scheduled together with the LO tasks. The system switches to HI mode when one or more HI tasks run for longer than their optimistic execution budgets.
In this case, LO tasks are immediately discarded so as to accommodate the increase in HI execution demand. We propose an exact test for mixed-criticality EDF, which increases efficiency and reliability compared with existing approaches from the literature. On this basis, we further derive approximate tests with lower complexity and, hence, a reduced running time that makes them more suitable for online checks.

Contents:
1. Introduction
  1.1. Motivation
  1.2. Contributions
  1.3. Structure of this Thesis
2. Concepts, Models and Assumptions
  2.1. Real-Time Systems
    2.1.1. Tasks Models
  2.2. Scheduling Policies
    2.2.1. Feasibility versus Schedulability
    2.2.2. Schedulability Test
  2.3. Mixed-Criticality Systems
  2.4. Basic Nomenclature
  2.5. The Earliest Deadline First Algorithm
    2.5.1. EDF-VD
    2.5.2. Mixed-Criticality EDF
    2.5.3. Demand Bound Function
3. Related Work
  3.1. Uniprocessor Scheduling
    3.1.1. Uniprocessor Scheduling Based on EDF
  3.2. Multiprocessor Scheduling
    3.2.1. Multiprocessor Scheduling Based on EDF
4. Introducing Utilization Caps
  4.1. Introducing Utilization Caps
    4.1.1. Fixed utilization caps
    4.1.2. Optimized utilization caps
  4.2. Findings of this Chapter
5. Bounding Execution Demand under Mixed-Criticality EDF
  5.1. Bounding Execution Demand
  5.2. Analytical Comparison
    5.2.1. The GREEDY Algorithm
    5.2.2. The ECDF Algorithm
  5.3. Finding Valid xi
  5.4. Findings of this Chapter
6. Approximating Execution Demand Bounds
  6.1. Applying Approximation Techniques
  6.2. Devi’s Test
    6.2.1. Per-task deadline scaling
    6.2.2. Uniform deadline scaling
    6.2.3. Complexity
  6.3. Findings of this Chapter
7. Evaluation and Results
  7.1. Mixed-Criticality EDF
  7.2. Obtaining Test Data
    7.2.1. The Case Di = Ti
    7.2.2. The Case Di ≤ Ti
  7.3. Weighted schedulability
  7.4. Algorithms in this Comparison
    7.4.1. The EDF-VD and DEDF-VD Algorithms
    7.4.2. The GREEDY algorithm
    7.4.3. The ECDF algorithm
  7.5. Evaluation of Utilization Caps
    7.5.1. 10 tasks per task set
    7.5.2. 20 tasks per task set
    7.5.3. 50 tasks per task set
    7.5.4. Comparison of runtime
  7.6. Evaluation of Execution Demand Bounds
    7.6.1. Comparison for sets of 10 tasks
    7.6.2. Comparison for sets of 20 tasks
  7.7. Evaluation of Approximation Techniques
    7.7.1. Schedulability curves
    7.7.2. Weighted schedulability
    7.7.3. Comparison of runtime
  7.8. Summary
8. Conclusion and Future Work
  8.1. Outlook/Future Perspectives
Bibliography
A. Introduction
  A.1. Multiple Levels of Criticality
    A.1.1. Ordered mode switches
    A.1.2. Unordered mode switches
B. Evaluation and Results
  B.1. Uniform Distribution for Task Periods
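The LO/HI mode protocol described in this abstract can be sketched in a few lines of Python. The task names, budgets, and the simplified drop-all-LO policy below are illustrative assumptions, not the thesis's exact algorithm:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    crit: str        # "LO" or "HI"
    wcet_lo: float   # optimistic execution budget
    wcet_hi: float   # conservative execution budget (== wcet_lo for LO tasks)

def active_tasks(tasks, observed_exec):
    """Apply the mode-switch rule: the system starts in LO mode; if any
    HI task's observed execution time exceeds its optimistic budget, the
    system switches to HI mode and all LO tasks are dropped."""
    hi_mode = any(t.crit == "HI" and observed_exec[t.name] > t.wcet_lo
                  for t in tasks)
    if hi_mode:
        return {t.name for t in tasks if t.crit == "HI"}
    return {t.name for t in tasks}

tasks = [Task("ctrl", "HI", 2.0, 5.0), Task("log", "LO", 1.0, 1.0)]
# All tasks stay within optimistic budgets: LO mode, everyone runs.
assert active_tasks(tasks, {"ctrl": 1.8, "log": 0.9}) == {"ctrl", "log"}
# "ctrl" overruns its optimistic budget: HI mode, the LO task is dropped.
assert active_tasks(tasks, {"ctrl": 2.5, "log": 0.9}) == {"ctrl"}
```

A schedulability test for such a system must show that HI tasks meet their deadlines under conservative budgets in HI mode while all tasks meet them under optimistic budgets in LO mode.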
153

Link Criticality Characterization for Network Optimization : An approach to reduce packet loss rate in packet-switched networks

Zareafifi, Farhad January 2019 (has links)
Network technologies are continuously advancing and attracting ever-growing interest from industry and society. Network users expect better experience and performance every day; consequently, network operators need to improve the quality of their services. One way to achieve this goal entails over-provisioning the network resources, which is not economically efficient as it imposes unnecessary costs. Another way is to employ Traffic Engineering (TE) solutions that optimally utilize the underlying resources by managing traffic distribution in the network. In this thesis, we consider packet-switched networks (PSNs), which allow messages to be split across multiple packets as in today’s Internet. Traffic engineering in PSNs is a well-known topic, yet current solutions fail to make efficient use of the network resources. The goal of the TE process is to compute a traffic distribution that optimizes a given objective function while satisfying the network capacity constraints (e.g., not overflowing a link's capacity with an excessive amount of traffic). A critical aspect of TE tools is the ability to capture the impact of routing a certain amount of traffic through a certain link, also referred to as the link criticality function. Today’s TE tools rely on simplistic link criticality functions that are inaccurate in capturing the network-wide performance of the computed traffic distribution. A good link criticality function allows TE tools to distribute the traffic in a way that achieves close-to-optimal network performance, e.g., in terms of packet loss and possibly packet latency. In this thesis, we embark upon the study of link criticality functions and introduce four criticality functions: 1) LeakyCap, 2) LeakyReLU, 3) SoftCap, and 4) Softplus.
We compare and evaluate these four functions against the traditional link criticality function defined by Fortz and Thorup, which aims at capturing the performance degradation of a link given its utilization. To assess the proposed link criticality functions, we designed 57 network scenarios and showed how the link criticality functions affect network performance in terms of packet loss. We used different topologies and considered both constant and bursty traffic. Based on our results, the most reliable and effective link criticality function for determining traffic distribution rates is Softplus: it outperformed the Fortz function in 79% of the experiments and was comparable in the remaining 21% of the cases.
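To give a concrete sense of the idea, a softplus-style criticality function grows smoothly with link utilization and, unlike a hard cap, remains differentiable and keeps penalizing past full capacity. The sharpness constant below is an invented illustration, not necessarily the parameterization evaluated in the thesis:

```python
import math

def softplus_criticality(load, capacity, sharpness=10.0):
    """Smooth penalty for routing `load` units of traffic over a link of
    `capacity`. Near zero at low utilization, rising steeply as the load
    approaches and exceeds capacity; being differentiable everywhere suits
    gradient-based traffic-engineering optimizers."""
    u = load / capacity  # link utilization
    return math.log1p(math.exp(sharpness * (u - 1.0))) / sharpness

# A lightly loaded link costs almost nothing; an overloaded one is
# penalized roughly in proportion to the excess utilization.
assert softplus_criticality(10, 100) < 0.01
assert softplus_criticality(150, 100) > 0.4
```

Summing such a per-link cost over all links yields a network-wide objective that a TE tool can minimize when splitting traffic across paths.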
154

Probing quantum criticality in heavy fermion CeCoIn5

Khansili, Akash January 2023 (has links)
Understanding the low-temperature properties of strongly correlated materials requires accurate measurement of the physical properties of these systems. Specific heat and nuclear spin-lattice relaxation are two such properties that allow investigation of the electronic behavior of a system. In this thesis, nanocalorimetry is used to measure specific heat, but also as the basis for a new experimental approach developed to disentangle the different contributions to the specific heat at low temperatures. The technique, which we call Thermal Impedance Spectroscopy (TISP), allows independent measurement of the electronic and nuclear specific heat at low temperatures based on the frequency response of the calorimeter-sample assembly. The method also enables simultaneous measurement of the nuclear spin-lattice relaxation time (T1). The nuclear spin-lattice relaxation, as 1/T1T, and the electronic specific heat, as C/T, probe the same quantity, the electronic density of states. By comparing these properties in strongly correlated systems, we can gain insight into the electronic interactions. Metallic indium is studied using thermal impedance spectroscopy from 0.3 K to 7 K at magnetic fields up to 35 T, and the magnetic field dependence of the nuclear spin-lattice relaxation rate is measured. Indium is a simple metallic system, for which the nuclear spin-lattice relaxation is expected to behave similarly to the electronic specific heat. The results agree with the expectations for a simple metal and with Nuclear Magnetic Resonance (NMR) measurements, demonstrating the effectiveness of the new technique. The heavy-fermion superconductor CeCoIn5 is studied using thermal impedance spectroscopy and ac-calorimetry. This material is located near a quantum critical point (QCP) bordering antiferromagnetism, as evidenced by doping studies, yet the nature of its quantum criticality and unconventional superconductivity remains elusive. Contrasting specific heat and nuclear spin-lattice relaxation in this correlated system helps reveal the character of its quantum criticality. The quantum criticality in CeCoIn5 is also studied using X-ray Absorption Spectroscopy (XAS) across the superconducting transition and X-ray Magnetic Circular Dichroism (XMCD) at 0.1 K and 6 T. This element-specific probe, zooming in on the cerium, indicates two things: a mixed valence of Ce in the superconducting state, and a very small magnetic moment that implies resonance-bond-like antiferromagnetic local ordering in the system.
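The complementarity of 1/T1T and C/T mentioned in this abstract can be made explicit with two textbook Fermi-liquid relations (standard results, not taken from this thesis): the Sommerfeld coefficient is linear in the density of states at the Fermi level, while the Korringa law is quadratic in it.

```latex
% Electronic specific heat (Sommerfeld coefficient):
\frac{C_e}{T} = \gamma = \frac{\pi^2}{3}\, k_B^2\, N(E_F)
% Nuclear spin-lattice relaxation (Korringa law, simple metal;
% A_{hf} denotes the hyperfine coupling):
\frac{1}{T_1 T} \propto A_{hf}^2\, N(E_F)^2
```

Comparing the two therefore tests whether a single quasiparticle density of states accounts for both probes; deviations from a constant ratio signal correlations beyond the simple metallic picture.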
155

Critical Words Cache Memory

Gieske, Edmund Joseph 28 August 2008 (has links)
No description available.
156

An Embedded Multi-Core Platform for Mixed-Criticality Systems : Study and Analysis of Virtualization Techniques

Zaki, Youssef January 2016 (has links)
The common availability of multiple processors in modern CPU devices and the need to reduce the cost of embedded systems have created a drive for integrating functionalities from different parts of a system into a single Multi-Processor System-on-Chip (MPSoC) device. As a result, system resources are shared among the critical and non-critical components of the system, which results in a mixed-criticality system (MCS). An example of an MCS is the combination of an airbag control unit with the infotainment system of a car; in such a case, both components must be certified unless an isolation mechanism is implemented that prevents the non-critical subsystems from interfering with the critical ones. This isolation can be achieved via spatial and temporal partitioning of system resources, such as static mapping of CPUs to critical tasks, memory and IO virtualization, and time-domain multiplexing of applications. System isolation is currently achievable through virtualization techniques and is commonly used in data centers and personal computers. Recently, virtualization solutions have been emerging for embedded systems in order to cope with increased design complexity and stringent non-functional requirements, and to facilitate the certification process of MCSs. The performance, safety, security, and robustness achieved in a virtualized system depend on the virtualization architecture and hardware platform. This thesis performs state-of-the-art research in the field of mixed-criticality embedded systems with a focus on virtualization of embedded systems. To this end, a deep study of virtualization architectures and open-source virtualization solutions is conducted in order to understand the consequences of using this technology in MCSs. The work is concluded with a design and implementation of a mixed-criticality embedded system that leverages the hardware capabilities of the target device (Zynq-7000 All Programmable SoC) and contributes to the Living Lab WP7 of the EMC2 project.
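The temporal partitioning mentioned above can be sketched as a static time-division schedule: a repeating major frame is split into fixed windows, each dedicated to one partition, so a misbehaving guest cannot steal CPU time from another. The frame lengths and partition names below are invented for illustration and are unrelated to the actual EMC2 implementation:

```python
def partition_at(t_ms, schedule):
    """Return the name of the partition that owns the CPU at time t_ms,
    given a repeating major frame described as (name, window_ms) pairs."""
    major = sum(length for _, length in schedule)  # major frame length
    offset = t_ms % major
    for name, length in schedule:
        if offset < length:
            return name
        offset -= length

# Hypothetical 10 ms major frame: 6 ms critical, 4 ms infotainment.
schedule = [("critical", 6), ("infotainment", 4)]
assert partition_at(0, schedule) == "critical"
assert partition_at(7, schedule) == "infotainment"
assert partition_at(16, schedule) == "infotainment"
```

In a hypervisor, the same idea is enforced by a tick-driven dispatcher; combined with static CPU pinning and memory/IO virtualization, it gives the spatial and temporal isolation that certification of mixed-criticality systems relies on.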
157

Långsiktig underhållsplan / Long-term maintenance plan

Cumbo, Aleksandar, Mahmod, Mokarrm January 2021 (has links)
AstraZeneca's PET BFS at the Snäckviken site wants to implement a machine status list similar to the one implemented for PET TBH/PS, in order to get an overview of the entire area and to map and prioritize future investments and decisions. PET BFS (IN/PP) needs a machine status list because some equipment is outdated, and the inhalation and injection lines lack a detailed basis for producing an investment plan. The purpose of creating the machine status list is to find out whether the service life of the equipment can be extended, or whether and when the equipment needs to be renovated, upgraded, or replaced. The aim of the report is to compile a machine status list/criticality matrix of all equipment within PET BFS, covering both inhalation and injection. The machine status list must contain the current status of the equipment, including service life, spare-part status, and maintenance costs for each piece of equipment. It will help PET BFS create an overall picture of each line/area, which means ensuring the operational reliability of all equipment, identifying critical components, producing an investment basis, and establishing consensus on the maintenance strategy within the business. It must be possible for the maintenance/service engineers and other participants/resources to keep the list updated. The critical components were identified for IN06, IN07, IN08, PO06, and PO07 and listed in the machine status list. The status of the critical components was determined through contact with suppliers. The risk analysis could not be performed in full due to missing information, lack of access to SAP, and time constraints. Data on the proportion of obsolete components show that a large share of the components examined are obsolete, while the majority of these obsolete components have replacements.
158

The F_N method for a bare critical cylinder

Southers, Jack Daniel January 1982 (has links)
The F_N method, originated by C. E. Siewert, is developed for a bare, axially infinite critical cylinder. The full-range completeness and orthogonality properties of the singular eigenfunctions are used to derive an expression for the emerging angular flux, which is represented by a power series. The resulting equations are reduced to matrix form and solved by computer. Examples of the results of this method for different parameters are presented, and comparisons with other models are made. A fourth-order approximation was found sufficient to achieve up to four-digit agreement with benchmark values. / Master of Science
159

Quality-of-Service Aware Design and Management of Embedded Mixed-Criticality Systems

Ranjbar, Behnaz 12 April 2024 (has links)
Nowadays, implementing a complex system that executes various applications with different levels of assurance is a growing trend in modern embedded real-time systems, driven by cost, timing, and power consumption requirements. Medical devices, automotive, and avionics are the most common safety-critical domains exploiting such systems, known as Mixed-Criticality (MC) systems. MC applications are real-time, and to ensure their correctness it is essential to meet strict timing requirements as well as functional specifications. The correct design of such MC systems requires a thorough understanding of the system's functions and their importance to the system. A failure or deadline miss has a different impact depending on the criticality level of the function, ranging from no effect to catastrophic consequences. Failure in the execution of tasks with higher criticality levels (HC tasks) may lead to system failure and cause irreparable damage, while Low-Criticality (LC) tasks assist the system in carrying out its mission successfully, so their failure has less impact on the system's functionality and does not cause the system itself to fail. In order to guarantee MC system safety, tasks are analyzed under different assumptions to obtain different Worst-Case Execution Times (WCETs) corresponding to the multiple criticality levels and the operating mode of the system. If the execution time of at least one HC task exceeds its low WCET, the system switches from low-criticality mode (LO mode) to high-criticality mode (HI mode). All HC tasks then continue executing under their high WCETs to guarantee the system's safety. In this HI mode, all or some LC tasks are dropped or degraded in favor of HC tasks to ensure the HC tasks' correct execution. Determining an appropriate low WCET for each HC task is crucial in designing efficient MC systems and ensuring QoS maximization. However, even when the low WCETs are set correctly, dropping or degrading LC tasks in HI mode is undesirable because of its negative impact on other functions, or on the entire system's ability to accomplish its mission correctly. Therefore, how to analyze LC task dropping in HI mode is a significant challenge in designing efficient MC systems: all HC tasks must be guaranteed to execute successfully, preventing catastrophic damage, while the QoS is improved. Due to the continuous rise in computational demand of MC tasks in safety-critical applications, such as the control of autonomous driving, designers are motivated to deploy MC applications on multi-core platforms. Although the parallel execution capability of multi-core platforms helps improve QoS and ensures timeliness, high power consumption and high core temperatures may make the system more susceptible to failures and instability, which is not desirable in MC applications. Therefore, improving QoS while managing power consumption and guaranteeing real-time constraints is the central issue in designing such MC systems on multi-core platforms. This thesis addresses the challenges associated with efficient MC system design. We first focus on application analysis and on determining an appropriate WCET, proposing a novel approach that provides a reasonable trade-off between the number of LC tasks scheduled at design time and the probability of mode switching at run time, improving system utilization and QoS. The approach presents an analytic scheme, based on the Chebyshev theorem, to obtain low WCETs at design time. We also show the relationship between the low WCETs and the mode-switching probability, and formulate and solve the problem of improving resource utilization while reducing the mode-switching probability. Further, we analyze LC task dropping in HI mode to improve QoS.
We first propose a heuristic in which a new metric determines the number of allowable drops in HI mode; the task schedulability analysis is then developed based on this metric. Since the worst-case scenario rarely occurs at run time, a learning-based drop-aware task scheduling mechanism is then proposed, which carefully monitors alterations in the behavior of the MC system at run time and exploits dynamic slack to improve QoS. Another critical design challenge is how to improve QoS using the parallelism of multi-core platforms while managing their power consumption and temperature. We develop a tree of possible task mappings and schedules at design time to cover all possible task-overrun scenarios and reduce the LC task drop rate in HI mode, while managing power and temperature in each scheduling scenario. Since dynamic slack is generated by early task completion at run time, we propose an online approach that reduces power consumption and maximum temperature through low-power techniques such as DVFS and task re-mapping while preserving QoS. Specifically, our approach examines multiple tasks ahead to determine the task whose slack assignment has the greatest effect on power consumption and temperature. However, changing the frequency and selecting a suitable task for slack assignment and a suitable core for task re-mapping at run time can be time-consuming and may cause deadline violations. Therefore, we analyze and optimize the run-time scheduler.

Contents:
1. Introduction
  1.1. Mixed-Criticality Application Design
  1.2. Mixed-Criticality Hardware Design
  1.3. Certain Challenges and Questions
  1.4. Thesis Key Contributions
    1.4.1. Application Analysis and Modeling
    1.4.2. Multi-Core Mixed-Criticality System Design
  1.5. Thesis Overview
2. Preliminaries and Literature Reviews
  2.1. Preliminaries
    2.1.1. Mixed-Criticality Systems
    2.1.2. Fault-Tolerance, Fault Model and Safety Requirements
    2.1.3. Hardware Architectural Modeling
    2.1.4. Low-Power Techniques and Power Consumption Model
  2.2. Related Works
    2.2.1. Mixed-Criticality Task Scheduling Mechanisms
    2.2.2. QoS Improvement Methods in Mixed-Criticality Systems
    2.2.3. QoS-Aware Power and Thermal Management in Multi-Core Mixed-Criticality Systems
  2.3. Conclusion
3. Bounding Time in Mixed-Criticality Systems
  3.1. BOT-MICS: A Design-Time WCET Adjustment Approach
    3.1.1. Motivational Example
    3.1.2. BOT-MICS in Detail
    3.1.3. Evaluation
  3.2. A Run-Time WCET Adjustment Approach
    3.2.1. Motivational Example
    3.2.2. ADAPTIVE in Detail
    3.2.3. Evaluation
  3.3. Conclusion
4. Safety- and Task-Drop-Aware Mixed-Criticality Task Scheduling
  4.1. Problem Objectives and Motivational Example
  4.2. FANTOM in Detail
    4.2.1. Safety Quantification
    4.2.2. MC Tasks Utilization Bounds Definition
    4.2.3. Scheduling Analysis
    4.2.4. System Upper Bound Utilization
    4.2.5. A General Design Time Scheduling Algorithm
  4.3. Evaluation
    4.3.1. Evaluation with Real-Life Benchmarks
    4.3.2. Evaluation with Synthetic Task Sets
  4.4. Conclusion
5. Learning-Based Drop-Aware Mixed-Criticality Task Scheduling
  5.1. Motivational Example and Problem Statement
  5.2. Proposed Method in Detail
    5.2.1. An Overview of the Design-Time Approach
    5.2.2. Run-Time Approach: Employment of SOLID
    5.2.3. LIQUID Approach
  5.3. Evaluation
    5.3.1. Evaluation with Real-Life Benchmarks
    5.3.2. Evaluation with Synthetic Task Sets
    5.3.3. Investigating the Timing and Memory Overheads of ML Technique
  5.4. Conclusion
6. Fault-Tolerance and Power-Aware Multi-Core Mixed-Criticality System Design
  6.1. Problem Objectives and Motivational Example
  6.2. Design Methodology
  6.3. Tree Generation and Fault-Tolerant Scheduling and Mapping
    6.3.1. Making Scheduling Tree
    6.3.2. Mapping and Scheduling
    6.3.3. Time Complexity Analysis
    6.3.4. Memory Space Analysis
  6.4. Evaluation
    6.4.1. Experimental Setup
    6.4.2. Analyzing the Tree Construction Time
    6.4.3. Analyzing the Run-Time Timing Overhead
    6.4.4. Peak Power Management and Thermal Distribution for Real-Life and Synthetic Applications
    6.4.5. Analyzing the QoS of LC Tasks
    6.4.6. Analyzing the Peak Power Consumption and Maximum Temperature
    6.4.7. Effect of Varying Different Parameters on Acceptance Ratio
    6.4.8. Investigating Different Approaches at Run-Time
  6.5. Conclusion
7. QoS- and Power-Aware Run-Time Scheduler for Multi-Core Mixed-Criticality Systems
  7.1. Research Questions, Objectives and Motivational Example
  7.2. Design-Time Approach
  7.3. Run-Time Mixed-Criticality Scheduler
    7.3.1. Selecting the Appropriate Task to Assign Slack
    7.3.2. Re-Mapping Technique
    7.3.3. Run-Time Management Algorithm
    7.3.4. DVFS Governor in Clustered Multi-Core Platforms
  7.4. Run-Time Scheduler Algorithm Optimization
  7.5. Evaluation
    7.5.1. Experimental Setup
    7.5.2. Analyzing the Relevance Between a Core Temperature and Energy Consumption
    7.5.3. The Effect of Varying Parameters of Cost Functions
    7.5.4. The Optimum Number of Tasks to Look-Ahead and the Effect of Task Re-mapping
    7.5.5. The Analysis of Scheduler Timings Overhead on Different Real Platforms
    7.5.6. The Latency of Changing Frequency in Real Platform
    7.5.7. The Effect of Latency on System Schedulability
    7.5.8. The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement
    7.5.9. The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement in a Multi-Core Platform Based on the ODROID-XU3 Architecture
    7.5.10. Evaluation of Running Real MC Task Graph Model (Unmanned Air Vehicle) on Real Platform
  7.6. Conclusion
8. Conclusion and Future Work
  8.1. Conclusions
  8.2. Future Work
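The Chebyshev-based WCET budgeting idea described in this abstract can be illustrated with a sketch; the thesis's actual derivation may differ. Given the mean and variance of observed execution times, Cantelli's one-sided inequality P(X >= mu + k*sigma) <= 1/(1 + k^2) yields a low WCET that bounds the per-job mode-switch probability without assuming a distribution:

```python
import math
import statistics

def low_wcet(samples, p_switch):
    """Smallest budget c such that Cantelli's one-sided inequality
    guarantees P(exec_time >= c) <= p_switch:
        P(X >= mu + k*sigma) <= 1/(1 + k^2)  =>  k = sqrt(1/p - 1).
    Distribution-free, hence typically conservative."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    k = math.sqrt(1.0 / p_switch - 1.0)
    return mu + k * sigma

# Hypothetical execution-time measurements (ms) for one HC task.
samples = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1]
# The budget exceeds the empirical mean, and shrinking the allowed
# mode-switch probability pushes the budget up.
assert low_wcet(samples, 0.05) > statistics.mean(samples)
assert low_wcet(samples, 0.01) > low_wcet(samples, 0.10)
```

This exposes the trade-off the thesis optimizes: a larger low WCET reduces the mode-switching probability but leaves less utilization for scheduling LC tasks at design time.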
160

Critical Being(s) : An interview study about critical self-reflection in upper secondary, ESL classrooms

Sandström, Abigail January 2024 (has links)
This study aims to find out how a selection of Swedish upper-secondary teachers engage their ESL students in critical self-reflection, and what the motivations behind these choices are. Specifically, the study aims to provide support for the hypothesis that teachers actively avoid social learning in connection with critical self-reflection. Through an interview process and analysis, it was found that the interviewed teachers did, in fact, actively avoid social methods and activities when teaching critical self-reflection, for a variety of reasons. The main motivations for opting for solitary, individual critical self-reflection were age, group dynamics, student dispositions, and language proficiency.
