161
Quality-of-Service Aware Design and Management of Embedded Mixed-Criticality Systems
Ranjbar, Behnaz, 06 December 2022
Nowadays, implementing a complex system that executes various applications with different levels of assurance is a growing trend in modern embedded real-time systems, driven by cost, timing, and power consumption requirements. Medical devices, automotive, and avionics industries are the most common safety-critical domains exploiting these systems, known as Mixed-Criticality (MC) systems. MC applications are real-time, and to ensure their correctness, it is essential to meet strict timing requirements as well as functional specifications. The correct design of such MC systems requires a thorough understanding of the system's functions and their importance to the system. A failure or deadline miss in functions with different criticality levels has a different impact on the system, ranging from no effect to catastrophic consequences. Failure in the execution of tasks with higher criticality levels (HC tasks) may lead to system failure and cause irreparable damage, whereas Low-Criticality (LC) tasks assist the system in carrying out its mission successfully, but their failure has less impact on the system's functionality and does not cause the system itself to fail.
In order to guarantee MC system safety, tasks are analyzed under different assumptions to obtain multiple Worst-Case Execution Times (WCETs), corresponding to the system's criticality levels and operation modes. If the execution time of at least one HC task exceeds its low WCET, the system switches from low-criticality mode (LO mode) to high-criticality mode (HI mode). All HC tasks then continue executing under their high WCET budgets to guarantee the system's safety. In this HI mode, all or some LC tasks are dropped or degraded in favor of HC tasks to ensure the correct execution of HC tasks.
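The LO-to-HI switching rule just described can be captured in a few lines. The following is a minimal illustrative sketch; the task fields and function names are assumptions for illustration, not the thesis's implementation:

```python
# A minimal sketch of the LO -> HI mode-switch rule for MC systems.
# Task structure and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str      # "HC" or "LC"
    wcet_lo: float        # optimistic WCET budget used in LO mode
    wcet_hi: float        # pessimistic WCET budget used in HI mode

def on_budget_check(task: Task, elapsed: float, mode: str) -> str:
    """Switch LO -> HI the first time an HC task exceeds its low WCET."""
    if mode == "LO" and task.criticality == "HC" and elapsed > task.wcet_lo:
        return "HI"   # HC tasks are now budgeted with wcet_hi;
                      # LC tasks may be dropped or degraded in this mode
    return mode

print(on_budget_check(Task("ctrl", "HC", 2.0, 5.0), elapsed=2.3, mode="LO"))  # HI
```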
Determining an appropriate low WCET for each HC task is crucial for designing efficient MC systems and maximizing QoS. However, even when the low WCETs are set correctly, dropping or degrading the LC tasks in the HI mode is not recommended, because of its negative impact on other functions or on the system's ability to accomplish its mission correctly. Therefore, how to analyze task dropping in the HI mode is a significant challenge in designing efficient MC systems: the analysis must guarantee the successful execution of all HC tasks, to prevent catastrophic damage, while improving the QoS.
Due to the continuous rise in computational demand of MC tasks in safety-critical applications, such as autonomous driving control, designers are motivated to deploy MC applications on multi-core platforms. Although the parallel execution offered by multi-core platforms helps improve QoS and meet real-time requirements, the high power consumption and temperature of the cores may make the system more susceptible to failures and instability, which is not desirable in MC applications. Therefore, improving QoS while managing power consumption and guaranteeing real-time constraints is the critical issue in designing such MC systems on multi-core platforms.
This thesis addresses the challenges associated with efficient MC system design. We first focus on application analysis, proposing a novel approach to determine appropriate low WCETs that provides a reasonable trade-off between the number of LC tasks scheduled at design-time and the probability of mode switching at run-time, thereby improving system utilization and QoS. The approach presents an analytic scheme, based on the Chebyshev theorem, to obtain low WCETs at design-time. We also show the relationship between the low WCETs and the mode switching probability, and formulate and solve the problem of improving resource utilization while reducing the mode switching probability. Further, we analyze LC task dropping in the HI mode to improve QoS. We first propose a heuristic with a new metric that determines the number of allowable drops in the HI mode, and develop the task schedulability analysis based on this metric. Since the occurrence of the worst-case scenario at run-time is a rare event, we then propose a learning-based drop-aware task scheduling mechanism that carefully monitors alterations in the behavior of the MC system at run-time and exploits dynamic slack to improve the QoS.
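To illustrate how a Chebyshev-style bound can relate a candidate low WCET to a mode-switch probability, here is a minimal sketch using the one-sided Chebyshev (Cantelli) inequality. This is an assumption-based illustration, not the thesis's BOT-MICS formulation, and the sample values are invented:

```python
# Bound P(execution time > wcet_lo) via Cantelli's inequality:
# P(X - m >= d) <= s^2 / (s^2 + d^2) for d > 0, where m, s^2 are the
# sample mean and variance. Illustrative only.
import statistics

def overrun_probability_bound(samples: list[float], wcet_lo: float) -> float:
    """Upper bound on the probability of exceeding the candidate low WCET."""
    m = statistics.mean(samples)
    s2 = statistics.variance(samples)
    if wcet_lo <= m:
        return 1.0                    # bound is vacuous at or below the mean
    d = wcet_lo - m
    return s2 / (s2 + d * d)

times = [3.1, 2.8, 3.4, 3.0, 2.9, 3.6, 3.2]   # measured execution times (ms)
print(overrun_probability_bound(times, wcet_lo=4.0))
```

Raising the candidate low WCET lowers this bound (fewer mode switches) but reserves more utilization at design-time, which is exactly the trade-off described above.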
Another critical design challenge is how to improve QoS by using the parallelism of multi-core platforms while managing their power consumption and temperature. We develop a tree of possible task mappings and schedules at design-time to cover all possible scenarios of task overruns and to reduce the LC task drop rate in the HI mode, while managing power and temperature in each scheduling scenario. Since dynamic slack is generated when tasks finish early at run-time, we propose an online approach that reduces power consumption and maximum temperature by using low-power techniques such as DVFS and task re-mapping, while preserving the QoS. Specifically, our approach examines multiple tasks ahead to determine the most appropriate task for slack assignment, i.e., the one with the most significant effect on power consumption and temperature. However, changing the frequency, selecting a proper task for slack assignment, and choosing a suitable core for task re-mapping at run-time can be time-consuming and may cause deadline violations. Therefore, we analyze and optimize the run-time scheduler.

1. Introduction
1.1. Mixed-Criticality Application Design
1.2. Mixed-Criticality Hardware Design
1.3. Certain Challenges and Questions
1.4. Thesis Key Contributions
1.4.1. Application Analysis and Modeling
1.4.2. Multi-Core Mixed-Criticality System Design
1.5. Thesis Overview
2. Preliminaries and Literature Reviews
2.1. Preliminaries
2.1.1. Mixed-Criticality Systems
2.1.2. Fault-Tolerance, Fault Model and Safety Requirements
2.1.3. Hardware Architectural Modeling
2.1.4. Low-Power Techniques and Power Consumption Model
2.2. Related Works
2.2.1. Mixed-Criticality Task Scheduling Mechanisms
2.2.2. QoS Improvement Methods in Mixed-Criticality Systems
2.2.3. QoS-Aware Power and Thermal Management in Multi-Core Mixed-Criticality Systems
2.3. Conclusion
3. Bounding Time in Mixed-Criticality Systems
3.1. BOT-MICS: A Design-Time WCET Adjustment Approach
3.1.1. Motivational Example
3.1.2. BOT-MICS in Detail
3.1.3. Evaluation
3.2. A Run-Time WCET Adjustment Approach
3.2.1. Motivational Example
3.2.2. ADAPTIVE in Detail
3.2.3. Evaluation
3.3. Conclusion
4. Safety- and Task-Drop-Aware Mixed-Criticality Task Scheduling
4.1. Problem Objectives and Motivational Example
4.2. FANTOM in detail
4.2.1. Safety Quantification
4.2.2. MC Tasks Utilization Bounds Definition
4.2.3. Scheduling Analysis
4.2.4. System Upper Bound Utilization
4.2.5. A General Design Time Scheduling Algorithm
4.3. Evaluation
4.3.1. Evaluation with Real-Life Benchmarks
4.3.2. Evaluation with Synthetic Task Sets
4.4. Conclusion
5. Learning-Based Drop-Aware Mixed-Criticality Task Scheduling
5.1. Motivational Example and Problem Statement
5.2. Proposed Method in Detail
5.2.1. An Overview of the Design-Time Approach
5.2.2. Run-Time Approach: Employment of SOLID
5.2.3. LIQUID Approach
5.3. Evaluation
5.3.1. Evaluation with Real-Life Benchmarks
5.3.2. Evaluation with Synthetic Task Sets
5.3.3. Investigating the Timing and Memory Overheads of ML Technique
5.4. Conclusion
6. Fault-Tolerance and Power-Aware Multi-Core Mixed-Criticality System Design
6.1. Problem Objectives and Motivational Example
6.2. Design Methodology
6.3. Tree Generation and Fault-Tolerant Scheduling and Mapping
6.3.1. Making Scheduling Tree
6.3.2. Mapping and Scheduling
6.3.3. Time Complexity Analysis
6.3.4. Memory Space Analysis
6.4. Evaluation
6.4.1. Experimental Setup
6.4.2. Analyzing the Tree Construction Time
6.4.3. Analyzing the Run-Time Timing Overhead
6.4.4. Peak Power Management and Thermal Distribution for Real-Life and Synthetic Applications
6.4.5. Analyzing the QoS of LC Tasks
6.4.6. Analyzing the Peak Power Consumption and Maximum Temperature
6.4.7. Effect of Varying Different Parameters on Acceptance Ratio
6.4.8. Investigating Different Approaches at Run-Time
6.5. Conclusion
7. QoS- and Power-Aware Run-Time Scheduler for Multi-Core Mixed-Criticality Systems
7.1. Research Questions, Objectives and Motivational Example
7.2. Design-Time Approach
7.3. Run-Time Mixed-Criticality Scheduler
7.3.1. Selecting the Appropriate Task to Assign Slack
7.3.2. Re-Mapping Technique
7.3.3. Run-Time Management Algorithm
7.3.4. DVFS governor in Clustered Multi-Core Platforms
7.4. Run-Time Scheduler Algorithm Optimization
7.5. Evaluation
7.5.1. Experimental Setup
7.5.2. Analyzing the Relevance Between a Core Temperature and Energy Consumption
7.5.3. The Effect of Varying Parameters of Cost Functions
7.5.4. The Optimum Number of Tasks to Look-Ahead and the Effect of Task Re-mapping
7.5.5. The Analysis of Scheduler Timings Overhead on Different Real Platforms
7.5.6. The Latency of Changing Frequency in Real Platform
7.5.7. The Effect of Latency on System Schedulability
7.5.8. The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement
7.5.9. The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement in a Multi-Core Platform Based on the ODROID-XU3 Architecture
7.5.10. Evaluation of Running Real MC Task Graph Model (Unmanned Air Vehicle) on Real Platform
7.6. Conclusion
8. Conclusion and Future Work
8.1. Conclusions
8.2. Future Work
162
Physics of Aftershocks in the South Iceland Seismic Zone: Insights into the earthquake process from statistics and numerical modelling of aftershock sequences
Lindman, Mattias, January 2009
In seismology, an important goal is to attain a better understanding of the earthquake process. In this study of the physics of aftershock generation, I couple statistical analysis with modelling of physical processes in the postseismic period. I present a theoretical formulation for the distribution of interevent times for aftershock sequences obeying the empirically well established Omori law. As opposed to claims by other authors, this work demonstrates that the duration of the time interval between two successive earthquakes cannot be used to identify whether or not they belong to the same aftershock sequence or occur as a result of the same underlying process. This implies that a proper understanding of earthquake interevent time distributions is necessary before conclusions regarding the physics of the earthquake process are drawn. In a discussion of self-organised criticality (SOC) in relation to empirical laws in seismology, I find that Omori's law for aftershocks cannot be used as evidence for the theory of SOC. Instead, I consider the occurrence of aftershocks in accordance with Omori's law to be the result of a physical process that can be modelled and understood. I analyse characteristic features in the spatiotemporal distribution of aftershocks in the south Iceland seismic zone, following the two M6.5 earthquakes of June 2000 and an M4.5 earthquake of September 1999. These features include an initially constant aftershock rate, whose duration is longer following a larger main shock, and a subsequent power law decay that is interrupted by distinct and temporary deviations in terms of rate increases and decreases. Based on pore pressure diffusion modelling, I interpret these features in terms of main shock initiated diffusion processes. I conclude that thorough data analysis and physics-based modelling are essential components in attempts to improve our understanding of processes governing the occurrence of earthquakes.
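The Omori law referred to above is commonly written in its modified form, n(t) = K / (c + t)^p, for the aftershock rate at time t after the main shock. The sketch below evaluates it with illustrative parameters, not values fitted to the Iceland sequences studied in the thesis:

```python
# Modified Omori law n(t) = K / (c + t)^p: aftershock rate decays roughly
# as a power law in time after the main shock. K, c, p are illustrative.
def omori_rate(t: float, K: float = 100.0, c: float = 0.1, p: float = 1.1) -> float:
    """Aftershock rate (events/day) at t days after the main shock."""
    return K / (c + t) ** p

for t in (0.1, 1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} d   rate = {omori_rate(t):8.2f} events/day")
```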
163
Risk assessment of technology-induced errors in health care
Chio, Tien-Sung (David), 02 May 2016
This study demonstrates that hybrid methods can be used for measuring the risk severity of technology-induced errors (TIEs) that result from the use of health information technology (HIT).
The objectives of this research study include:
1. Developing an integrated conceptual risk assessment model to measure the risk severity of technology-induced errors.
2. Analyzing the criticality and risk thresholds associated with the contributing factors of TIEs.
3. Developing a computer-based simulation model that could be used to undertake various simulations of TIE problems and validate the results.
Using data from published papers describing three sample problems related to usability and technology-induced errors, hybrid methods were developed for assessing the risk severity and thresholds under various simulated conditions.
A risk assessment model (RAM) and its corresponding steps were developed. A computer-based simulation of risk assessment using the model was also developed, and several runs of the simulation were carried out. The model was tested and found to be valid.
Based on assumptions and published statistics obtained from publicly available databases, we measured the risk severity and analyzed its criticality to classify the risks of contributing factors into four different classes. The simulation results validated the efficiency and efficacy of the proposed methods on the sample problems.
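A four-class risk classification like the one mentioned above can be sketched as a simple scoring step. The class names and thresholds below are assumptions for illustration, not the study's values:

```python
# Illustrative risk-classification step: score a contributing factor by
# probability x severity and bin it into four classes. Thresholds and
# class labels are assumed, not taken from the study.
def risk_class(probability: float, severity: float) -> str:
    """Classify risk (probability and severity both normalized to [0, 1])."""
    score = probability * severity
    if score < 0.05:
        return "Class IV (negligible)"
    if score < 0.15:
        return "Class III (marginal)"
    if score < 0.35:
        return "Class II (critical)"
    return "Class I (catastrophic)"

print(risk_class(probability=0.4, severity=0.6))   # -> Class II (critical)
```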
164
Edge criticality in secure graph domination
De Villiers, Anton Pierre, 12 1900
Thesis (PhD)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: The domination number of a graph is the cardinality of a smallest subset of its vertex set with the property that each vertex of the graph is in the subset or adjacent to a vertex in the subset. This graph parameter has been studied extensively since its introduction during the early 1960s and finds application in the generic setting where the vertices of the graph denote physical entities that are typically geographically dispersed and have to be monitored efficiently, while the graph edges model links between these entities which enable guards, stationed at the vertices, to monitor adjacent entities.
In the above application, the guards remain stationary at the entities. In 2005, this constraint was, however, relaxed by the introduction of a new domination-related parameter, called the secure domination number. In this relaxed, dynamic setting, each unoccupied entity is defended by a guard stationed at an adjacent entity who can travel along an edge to the unoccupied entity in order to resolve a security threat that may occur there, after which the resulting configuration of guards at the entities is again required to be a dominating set of the graph. The secure domination number of a graph is the smallest number of guards that can be placed on its vertices so as to satisfy these requirements.
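Both definitions above translate directly into a brute-force check. The following is a minimal sketch, assuming a plain adjacency-dict graph representation; it is illustrative only and not one of the algorithms developed in the dissertation:

```python
# Check the domination and secure domination conditions on a small graph.
def is_dominating(graph: dict, guards: set) -> bool:
    """Every vertex is a guard or adjacent to at least one guard."""
    return all(v in guards or guards & set(graph[v]) for v in graph)

def is_secure_dominating(graph: dict, guards: set) -> bool:
    """Dominating, and every unguarded vertex can be defended by moving one
    adjacent guard onto it while the new configuration stays dominating."""
    if not is_dominating(graph, guards):
        return False
    for v in set(graph) - guards:
        defenders = guards & set(graph[v])
        if not any(is_dominating(graph, (guards - {u}) | {v}) for u in defenders):
            return False
    return True

# On the 5-cycle, {1, 3} is dominating, but a threat at vertex 2 cannot be
# resolved without breaking domination, so the set is not secure dominating.
c5 = {1: [2, 5], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 1]}
print(is_dominating(c5, {1, 3}))          # True
print(is_secure_dominating(c5, {1, 3}))   # False
```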
In this generalised setting, the notion of edge removal is important, because one might seek the cost, in terms of the additional number of guards required, of protecting the complex of entities modelled by the graph if a number of edges in the graph were to fail (i.e. a number of links were to be eliminated from the complex, thereby disqualifying guards from moving along such disabled links).
A comprehensive survey of the literature on secure graph domination is conducted in this dissertation. Descriptions of related, generalised graph protection parameters are also given. The classes of graphs with secure domination number 1, 2 or 3 are characterised and a result on the number of defenders in any minimum secure dominating set of a graph without end-vertices is presented, after which it is shown that the decision problem associated with computing the secure domination number of an arbitrary graph is NP-complete.

Two exponential-time algorithms and a binary programming problem formulation are presented for computing the secure domination number of an arbitrary graph, while a linear algorithm is put forward for computing the secure domination number of an arbitrary tree. The practical efficiencies of these algorithms are compared in the context of small graphs.
The smallest and largest increase in the secure domination number of a graph are also considered when a fixed number of edges are removed from the graph. Two novel cost functions are introduced for this purpose. General bounds on these two cost functions are established, and exact values of or tighter bounds on the cost functions are determined for various infinite classes of special graphs.

Threshold information is finally established in respect of the number of possible edge removals from a graph before increasing its secure domination number. The notions of criticality and stability are introduced and studied in this respect, focussing on the smallest number of arbitrary edges whose deletion necessarily increases the secure domination number of the resulting graph, and the largest number of arbitrary edges whose deletion necessarily does not increase the secure domination number of the resulting graph.
165
Magnetothermal properties near quantum criticality in the itinerant metamagnet Sr₃Ru₂O₇
Rost, Andreas W., January 2009
The search for novel quantum states is a fundamental theme in condensed matter physics. The almost boundless number of possible materials and complexity of the theory of electrons in solids make this both an experimentally and theoretically exciting and challenging research field. Particularly, the concept of quantum criticality resulted in a range of discoveries of novel quantum phases, which can become thermodynamically stable in the vicinity of a second order phase transition at zero temperature due to the existence of quantum critical fluctuations. One of the materials in which a novel quantum phase is believed to form close to a proposed quantum critical point is Sr₃Ru₂O₇. In this quasi-two-dimensional metal, the critical end point of a line of metamagnetic first order phase transitions can be suppressed towards zero temperature, theoretically leading to a quantum critical end point. Before reaching absolute zero, one experimentally observes the formation of an anomalous phase region, which has unusual ‘nematic-like’ transport properties. In this thesis magnetocaloric effect and specific heat measurements are used to systematically study the entropy of Sr₃Ru₂O₇ as a function of both magnetic field and temperature. It is shown that the boundaries of the anomalous phase region are consistent with true thermodynamic equilibrium phase transitions, separating the novel quantum phase from the surrounding ‘normal’ states. The anomalous phase is found to have a higher entropy than the low and high field states as well as a temperature dependence of the specific heat which deviates from standard Fermi liquid predictions. Furthermore, it is shown that the entropy in the surrounding ‘normal’ states increases significantly towards the metamagnetic region. In combination with data from other experiments it is concluded that these changes in entropy are most likely caused by many body effects related to the underlying quantum phase transition.
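As background, the reason magnetocaloric and specific heat measurements give access to the entropy as a function of field and temperature is a standard thermodynamic identity (textbook material, not a result specific to this thesis): the field dependence of the entropy is tied to measurable quantities via a Maxwell relation and the adiabatic magnetocaloric effect,

\[
\left(\frac{\partial S}{\partial H}\right)_{T} = \left(\frac{\partial M}{\partial T}\right)_{H},
\qquad
\left(\frac{\partial T}{\partial H}\right)_{S} = -\,\frac{T}{C_{H}}\left(\frac{\partial M}{\partial T}\right)_{H},
\]

where M is the magnetization and C_H the specific heat at constant field; integrating such relations over field and temperature yields entropy maps of the kind described above.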
166
Quantum Magnetism, Nonequilibrium Dynamics and Quantum Simulation of Correlated Quantum Systems
Manmana, Salvatore Rosario, 03 June 2015
No description available.
167
Un modèle à criticalité auto-régulée de la magnétosphère terrestre / A self-regulated criticality model of the Earth's magnetosphere
Vallières-Nollet, Michel-André, January 2009
Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
168
Un modèle d'Ising Curie-Weiss de criticalité auto-organisée / A Curie-Weiss model of self-organized criticality
Gorny, Matthias, 08 June 2015
In their famous 1987 article, Per Bak, Chao Tang and Kurt Wiesenfeld showed that certain complex systems, composed of a large number of dynamically interacting elements, are naturally attracted by critical points, without any external intervention. This phenomenon, called self-organized criticality, can be observed empirically or simulated on a computer in various models. However the mathematical analysis of these models turns out to be extremely difficult. Even models whose definition seems simple, such as the models describing the dynamics of a sandpile, are not well understood mathematically. The goal of this thesis is to design a model exhibiting self-organized criticality, which is as simple as possible, and which is amenable to a rigorous mathematical analysis. To this end, we modify the generalized Ising Curie-Weiss model by implementing an automatic control of the inverse temperature. For a class of symmetric distributions whose density satisfies some integrability conditions, we prove that the sum Sn of the random variables behaves as in the typical critical generalized Ising Curie-Weiss model: the fluctuations are of order n^(3/4) and the limiting law is C exp(-lambda*x^4) dx where C and lambda are suitable positive constants. Our study led us to generalize this model in several directions: the multidimensional case, more general interacting functions, extension to self-interactions leading to fluctuations of order n^(5/6). We also study dynamic models whose invariant distribution is the law of our Curie-Weiss model of self-organized criticality.
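For reference, the fluctuation result quoted above can be written as a convergence in distribution (a restatement of the abstract's claim, with the same constants C, lambda > 0):

\[
\frac{S_n}{n^{3/4}} \;\xrightarrow{\;\mathcal{D}\;}\; \mu(dx) = C\, e^{-\lambda x^{4}}\, dx,
\qquad n \to \infty,
\]

in contrast with the Gaussian fluctuations of order n^(1/2) that hold away from criticality.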
169
Algoritmos de otimização e criticalidade auto-organizada / Optimization algorithms and self-organized criticality
Castro, Paulo Alexandre de, 22 April 2002
In order to understand how things work, man has formulated scientific theories. New methods and techniques have been created not only to increase our understanding of the subject but also to develop and even expand those theories. In this thesis, we study several techniques (here called algorithms) designed with the objective of obtaining the ground states of some spin systems and eventually revealing possible properties of critical self-organization. In the second chapter, we introduce four fundamental optimization algorithms: simulated annealing, genetic algorithms, extremal optimization (EO) and Bak-Sneppen (BS). In the third chapter we present the concept of self-organized criticality (SOC), using as an example the sandpile model. To understand the importance of self-organized criticality, we show many other situations where the phenomenon can be observed. In the fourth chapter, we introduce the p-states chiral clock model. This will be our test or toy system.
For the one-dimensional case, we first determined the corresponding transfer-matrix and then proved the nonexistence of phase transitions at finite temperatures by using the Perron-Frobenius theorem. We calculate the ground state phase diagrams both analytically and numerically in the cases of p = 2, 3, 4, 5 and 6. We also present a brief study of the number of local minima for the cases p = 3 and 4 of the chiral clock model. Finally, in the fifth chapter, we propose a Bak-Sneppen dynamics with noise (BSN) as a new technique of optimization to treat discrete systems. The noise is directly introduced into the spin configuration space. Consequently, the fitness now takes values in a continuous but small interval around its original (discrete) value. The results of this dynamics indicate the presence of self-organized criticality, which becomes evident with the power law scaling of the spatial and temporal correlations. We also study the EO algorithm and found numerical confirmation that it does not show a critical behavior, since it has an infinite space range and an exponential decay of the avalanches. At the end, we compare the efficiency of the three dynamics (EO, BSD and BSN) for the chiral clock model, concerning their abilities to find the system's ground state.
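As an illustration of the classic Bak-Sneppen (BS) dynamics named in chapter 2, here is a minimal sketch of the one-dimensional model: repeatedly replace the least-fit site and its two neighbours with fresh random fitness values. Parameters are illustrative, and this is not the BSN/BSD variants developed in the thesis:

```python
# One-dimensional Bak-Sneppen dynamics with periodic boundary conditions.
import random

def bak_sneppen(n_sites: int = 100, n_steps: int = 10_000, seed: int = 1):
    random.seed(seed)
    fitness = [random.random() for _ in range(n_sites)]
    for _ in range(n_steps):
        worst = min(range(n_sites), key=fitness.__getitem__)
        # Replace the minimum and its two neighbours (wrap-around via
        # Python's negative indexing and the modulo on the right neighbour).
        for i in (worst - 1, worst, (worst + 1) % n_sites):
            fitness[i] = random.random()
    return fitness

final = bak_sneppen()
# After a transient, most fitness values self-organize above a critical
# threshold (known from the literature to be roughly 2/3 in one dimension).
print(min(final), sum(final) / len(final))
```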
170
Évaluation quantitative de séquences d'événements en sûreté de fonctionnement à l'aide de la théorie des langages probabilistes / Quantitative assessment of event sequences in dependability studies, based on probabilistic languages theory
Ionescu, Dorina-Romina, 21 November 2016
Dependability studies are often based on the assumption that events (failures and repairs) are independent, and on the analysis of cut-sets, which describe the subsets of components causing a system failure. In the case of dynamic systems, where the order of event occurrence has a direct impact on the dysfunctional behaviour, it is important to use event sequences instead of cut-sets for dependability assessment. In the first part, a formal framework is proposed. It helps in determining the sequences of events that describe the evolution of the system and in assessing them, using the theory of probabilistic languages and the theory of Markov/semi-Markov processes. The assessment integrates the calculation of the occurrence probability of the event sequences and their criticality (cost and length). For the assessment of complex systems with multiple operating/failure modes, a modular approach based on composition operators (choice and concatenation) is proposed. Evaluation of the probability of a global sequence of events is performed from local Markov/semi-Markov models for each mode of the system. The different contributions are applied to two case studies of growing complexity.
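As a minimal illustration of the concatenation/choice idea described above, in a discrete-time Markov model the probability of an event sequence is the product of its transition probabilities (concatenation), and the probability of a choice between alternative sequences is the sum of their probabilities. The states and numbers below are assumptions, not the thesis's case studies:

```python
# Score event sequences against a discrete-time Markov model.
P = {
    "OK":       {"degraded": 0.05, "OK": 0.95},
    "degraded": {"failed": 0.20, "repaired": 0.70, "degraded": 0.10},
    "repaired": {"OK": 1.0},
    "failed":   {"failed": 1.0},
}

def sequence_probability(states: list) -> float:
    """Concatenation: multiply transition probabilities along the path."""
    prob = 1.0
    for a, b in zip(states, states[1:]):
        prob *= P[a].get(b, 0.0)
    return prob

seq1 = ["OK", "degraded", "failed"]            # a critical sequence
seq2 = ["OK", "degraded", "repaired", "OK"]    # a recovered sequence
print(sequence_probability(seq1))              # 0.05 * 0.20 = 0.01
print(sequence_probability(seq2))              # 0.05 * 0.70 * 1.0 = 0.035
# Choice: probability of either sequence occurring is the sum of the two.
print(sequence_probability(seq1) + sequence_probability(seq2))
```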