11 |
Optimizing scoped and immortal memory management in real-time Java. Hamza, Hamza. January 2013 (has links)
The Real-Time Specification for Java (RTSJ) introduces a new memory management model which avoids interfering with the garbage collection process and achieves more deterministic behaviour. In addition to the heap, two types of memory areas are provided: immortal and scoped. The research presented in this thesis aims to optimize the use of the scoped and immortal memory model in RTSJ applications. Firstly, it provides an empirical study of the impact of scoped memory on execution time and memory consumption with different data objects allocated in scoped memory areas, and highlights characteristics of the scoped memory model in one RTSJ implementation (SUN RTS 2.2). Secondly, a new RTSJ case study which integrates scoped and immortal memory techniques to apply different memory models is presented. A simulation tool for real-time Java applications is developed; it is the first in the literature to show the scoped and immortal memory consumption of an RTSJ application over a period of time. The simulation tool helps developers choose the most appropriate scoped memory model by monitoring memory consumption and application execution time, and demonstrates that a developer can compare scoped memory design models and select the one with the smallest memory footprint. Results showed that the design model with a higher number of scopes achieved the smallest memory footprint; however, the number of scopes per se does not always guarantee a satisfactory footprint, and choosing the right objects/threads to allocate into scopes is an important factor to consider. Recommendations and guidelines for developing RTSJ applications which use a scoped memory model are also provided. Finally, monitoring scoped and immortal memory at runtime may help in catching possible memory leaks. The case study with the simulation tool showed a space overhead incurred by immortal memory. In this research, dynamic code slicing is also employed as a debugging technique to explore constant increases in immortal memory, and two programming design patterns are presented for decreasing immortal memory overheads generated by specific data structures. Experimental results showed a significant decrease in immortal memory consumption at runtime.
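To make the memory model above concrete, here is a minimal sketch of how scoped and immortal memory are typically exercised in RTSJ code, assuming an RTSJ-compliant JVM (such as the SUN RTS mentioned above) provides the javax.realtime classes; the class names are standard RTSJ, but the sizes and the printed metrics are illustrative only.

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;

public class ScopedVsImmortalDemo {
    public static void main(String[] args) {
        // Scoped memory must be entered from a real-time thread.
        new RealtimeThread() {
            @Override
            public void run() {
                // Scoped area: everything allocated inside enter() is reclaimed
                // in one step when the last thread leaves the scope.
                final LTMemory scope = new LTMemory(16 * 1024, 64 * 1024);
                scope.enter(new Runnable() {
                    public void run() {
                        byte[] temporary = new byte[8 * 1024];   // freed on scope exit
                        System.out.println("allocated " + temporary.length
                                + " bytes; scope used: " + scope.memoryConsumed());
                    }
                });

                // Immortal area: allocations are never reclaimed, so steady growth
                // over time is the leak symptom that runtime monitoring looks for.
                final ImmortalMemory immortal = ImmortalMemory.instance();
                immortal.executeInArea(new Runnable() {
                    public void run() {
                        System.out.println("immortal used: " + immortal.memoryConsumed());
                    }
                });
            }
        }.start();
    }
}
```

Sampling memoryConsumed() on each area over time, as this sketch hints at, is essentially the kind of data a memory-consumption simulation or monitor would plot for different scoped memory designs.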
|
12 |
Towards Predictable Real-Time Performance on Multi-Core Platforms. Kim, Hyoseung. 01 June 2016 (has links)
Cyber-physical systems (CPS) integrate sensing, computing, communication and actuation capabilities to monitor and control operations in the physical environment. A key requirement of such systems is the need to provide predictable real-time performance: the timing correctness of the system should be analyzable at design time with a quantitative metric and guaranteed at runtime with high assurance. This requirement of predictability is particularly important for safety-critical domains such as automobiles, aerospace, defense, manufacturing and medical devices. The work in this dissertation focuses on the challenges arising from the use of modern multi-core platforms in CPS. Even today, multi-core platforms are rarely used in safety-critical applications, primarily due to the temporal interference caused by contention on resources shared among processor cores, such as caches, memory buses, and I/O devices. Such interference is hard to predict and can significantly increase task execution time, e.g., by up to 12x on commodity quad-core platforms. To address the problem of ensuring timing predictability on multi-core platforms, we develop novel analytical and systems techniques in this dissertation. Our proposed techniques theoretically bound the temporal interference that tasks may suffer when accessing shared resources. They also include software primitives and algorithms for real-time operating systems and hypervisors that significantly reduce the degree of temporal interference. Specifically, we tackle the issues of cache and memory contention, locking and synchronization, interrupt handling, and access control for computational accelerators such as general-purpose graphics processing units (GPGPUs), all of which are crucial to achieving predictable real-time performance on a modern multi-core platform. Our solutions are readily applicable to commodity multi-core platforms, and can be used not only for developing new systems but also for migrating existing applications from single-core to multi-core platforms.
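As a rough illustration of what "theoretically bounding temporal interference" involves (a generic, textbook-style formulation given here for orientation, not this dissertation's specific analysis), the worst-case response time of a task on one core can be computed iteratively with an explicit term for shared-resource delay:

\[
R_i^{(k+1)} \;=\; C_i \;+\; B_i \;+\; \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j \;+\; I_i^{\mathrm{mem}}\!\left(R_i^{(k)}\right)
\]

where \(C_i\) is the task's worst-case execution time measured in isolation, \(B_i\) its blocking time due to locks, \(hp(i)\) the set of higher-priority tasks on the same core with periods \(T_j\), and \(I_i^{\mathrm{mem}}(\cdot)\) an upper bound on the extra delay caused by contention on shared caches, memory banks and buses during an interval of the given length. The iteration starts at \(R_i^{(0)} = C_i\) and stops at a fixed point; the task is deemed schedulable if that fixed point does not exceed its deadline.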
|
13 |
Analysing and supporting the reliability decision-making process in computing systems with a reliability evaluation framework / Analyser et supporter le processus de prise de décision dans la fiabilité des systèmes informatiques avec un framework d'évaluation de fiabilité. Kooli, Maha. 01 December 2016 (has links)
Reliability has become an important design aspect for computing systems due to aggressive technology miniaturization and uninterrupted operation, which introduce a large set of failure sources for hardware components. The hardware can be affected by faults caused by physical manufacturing defects or by environmental perturbations such as electromagnetic interference, external radiation, or high-energy neutrons from cosmic rays and alpha particles. For embedded systems and systems used in safety-critical fields such as avionics, aerospace and transportation, the presence of these faults can damage components and lead to catastrophic failures. Investigating new methods to evaluate system reliability helps designers understand the effects of faults on the system, and thus develop reliable and dependable products. Depending on the design phase of the system, the development of reliability evaluation methods can reduce design costs and effort, and will positively impact product time-to-market.
The main objective of this thesis is to develop new techniques to evaluate the overall reliability of a complex computing system running software. The evaluation targets faults leading to soft errors. These faults can propagate through the different structures composing the full system, and can be masked during this propagation either at the technological or at the architectural level. When a fault reaches the software layer of the system, it can corrupt data, instructions or the control flow. These errors may impact correct software execution by producing erroneous results or by preventing the execution of the application, leading to abnormal termination or an application hang. In this thesis, the reliability of the different software components is analyzed at different levels of the system (depending on the design phase), emphasizing the role that the interaction between hardware and software plays in the overall system. The reliability of the system is then evaluated via a flexible, fast, and accurate evaluation framework. Finally, the reliability decision-making process in computing systems is comprehensively supported with the developed framework (methodology and tools).
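A minimal sketch of the kind of single-bit-flip experiment that such evaluation frameworks automate follows; it is purely illustrative (the thesis's framework, its fault models and its tooling are not reproduced here), but it shows the usual classification of outcomes into masked faults and silent data corruptions.

```java
import java.util.Random;

public class BitFlipInjection {
    // Workload under test: count elements above a threshold. Low-order bit flips are
    // often logically masked, while high-order flips change the outcome.
    static int workload(long[] data) {
        int count = 0;
        for (long v : data) if (v > 4) count++;
        return count;
    }

    public static void main(String[] args) {
        long[] golden = {3, 1, 4, 1, 5, 9, 2, 6};
        int expected = workload(golden.clone());        // golden (fault-free) result

        Random rng = new Random(42);
        int masked = 0, silentCorruptions = 0, crashes = 0, runs = 10_000;

        for (int i = 0; i < runs; i++) {
            long[] faulty = golden.clone();
            // Inject a single bit flip at a random position of a random data word.
            faulty[rng.nextInt(faulty.length)] ^= 1L << rng.nextInt(64);
            try {
                if (workload(faulty) == expected) masked++;   // fault logically masked
                else silentCorruptions++;                     // silent data corruption (SDC)
            } catch (RuntimeException e) {
                crashes++;   // cannot occur for this toy workload, but real campaigns
                             // also classify crashes, hangs and detected (trapped) errors
            }
        }
        System.out.printf("masked=%d  SDC=%d  crash=%d%n", masked, silentCorruptions, crashes);
    }
}
```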
|
14 |
Avaliação comparativa do impacto do emprego de técnicas de programação defensiva na segurança de sistemas críticos. / Comparative evaluation of the impact of the use of defensive programming techniques on the safety of critical systems. Secall, Jorge Martins. 26 February 2007 (has links)
Aiming at reducing commercial systems' time to market, standardized hardware, such as microcontrollers and embedded microprocessors, has been broadly employed in critical applications, transferring to the software issues that once relied exclusively on the hardware design. Defensive programming techniques are preventive mechanisms against hardware and software faults. In order to verify the safety of critical application systems, fault injection techniques were developed, allowing fault tolerance mechanisms to be tested under conditions quite close to the actual operational environment. The introduction of defensive programming techniques increases the safety of critical application systems, yet the surveyed literature contains no quantitative evaluation of defensive programming techniques. This thesis describes an experimental work towards such a relative quantitative evaluation, organized in a few stages. First, some defensive programming techniques are presented, characterized and selected as the evaluation target. Next, fault injection techniques are described and one of them is selected as the testing vehicle of the experimental work. From this point on, the defensive programming techniques are assessed using the chosen fault injection technique. The result is a relative quantitative evaluation of the efficacy of some defensive programming techniques regarding the capacity of critical application systems to tolerate unsafe faults. Finally, directions for further work are presented. The railway environment, where the author works, was employed as a case study; however, the reasoning and the conclusions of this thesis apply to any mission-critical system.
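One concrete example of the family of techniques referred to above (illustrative only, and not necessarily one of the specific techniques evaluated in the thesis) is storing a safety-relevant variable together with its bitwise complement and checking the pair before every use, so that a memory corruption is detected and forces a safe reaction instead of silently propagating to an unsafe output.

```java
/** A defensively stored integer: the value and its bitwise complement must always agree. */
final class GuardedInt {
    private int value;
    private int complement;

    GuardedInt(int initial) { set(initial); }

    void set(int v) {
        value = v;
        complement = ~v;
    }

    /** Returns the value, or fails safely if the redundant copies disagree. */
    int get() {
        if (value != ~complement) {
            // A detected corruption drives the system to a safe state rather than
            // letting a wrong value reach an actuator (fail-safe behaviour).
            throw new IllegalStateException("memory corruption detected");
        }
        return value;
    }
}

public class DefensiveDemo {
    public static void main(String[] args) {
        GuardedInt speedLimit = new GuardedInt(80);
        System.out.println("limit = " + speedLimit.get());   // normal use: prints 80
        // A fault-injection campaign would now flip bits in 'value' or 'complement'
        // and count how many corruptions are caught by get() versus how many lead
        // to an undetected, unsafe output.
    }
}
```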
|
15 |
Avaliação comparativa do impacto do emprego de técnicas de programação defensiva na segurança de sistemas críticos. / Comparative evaluation of the impact of the use of defensive programming techniques on the safety of critical systems. Jorge Martins Secall. 26 February 2007 (has links)
Aiming at reducing commercial systems' time to market, standardized hardware, such as microcontrollers and embedded microprocessors, has been broadly employed in critical applications, transferring to the software issues that once relied exclusively on the hardware design. Defensive programming techniques are preventive mechanisms against hardware and software faults. In order to verify the safety of critical application systems, fault injection techniques were developed, allowing fault tolerance mechanisms to be tested under conditions quite close to the actual operational environment. The introduction of defensive programming techniques increases the safety of critical application systems, yet the surveyed literature contains no quantitative evaluation of defensive programming techniques. This thesis describes an experimental work towards such a relative quantitative evaluation, organized in a few stages. First, some defensive programming techniques are presented, characterized and selected as the evaluation target. Next, fault injection techniques are described and one of them is selected as the testing vehicle of the experimental work. From this point on, the defensive programming techniques are assessed using the chosen fault injection technique. The result is a relative quantitative evaluation of the efficacy of some defensive programming techniques regarding the capacity of critical application systems to tolerate unsafe faults. Finally, directions for further work are presented. The railway environment, where the author works, was employed as a case study; however, the reasoning and the conclusions of this thesis apply to any mission-critical system.
|
16 |
Lessons from Listening: The Aid Effectiveness Agenda: A Critical Systems Heuristics analysis of the Grand Bargain and Paris Declaration for Aid Effectiveness from the perspective of implementers and local practitioners. Devadoss, Ruth. January 2018 (has links)
Wide debates over the last 15 years have questioned the impact of global initiatives like the Paris Declaration on Aid Effectiveness 2005 and, more recently, the Grand Bargain 2017 on any real improvements to the development effectiveness agenda. Many also ask to what extent the initiatives consider the concerns and views of practitioners, the stakeholders who implement the objectives and who have valuable experience, contextual insights, specific skill-sets and innovative ideas on how to address complex problems (Sjöstedt 2013). The breadth of literature surrounding the initiatives seems to reflect this, collectively calling for improvements in four common theme areas: greater collaboration, partnership and coordination between actors; instilled mutual accountability and shared responsibility; simplified administrative requirements for implementers; and greater participation and inclusion of stakeholder voices throughout processes. Questions that ask 'who are the actors and decision-makers?' and 'who ought they be?' can highlight gaps between an ideal situation and the reality, and are characteristic of a Critical Systems Heuristics (CSH) approach to analysing sources of influence in a typical system, or in this case, a global initiative. This paper therefore analyses the voices of aid and development practitioners who are actively working in the sector, and compares their responses to the four themes from the literature. The research was conducted over three months from May to July 2018 and interviewed nineteen participants from a wide variety of development and humanitarian backgrounds and levels. The main findings of the research are summarised as follows:
- Definitions of 'effectiveness' vary and depend on underlying political influences.
- Global initiatives like the Paris Declaration and Grand Bargain have had minimal visible impact on changing systems at the implementation level.
- The role of global initiatives is nevertheless still important, as forums for promoting discussion, defining boundaries and unifying debates.
- Power imbalances and hierarchies within the development sector are structurally embedded, and addressing this is crucial to improving effectiveness.
- Real improvements to the effectiveness agenda require innovative, participative and evidence-based learning, and systems that accept and address the concerns of implementers.
|
17 |
Components, Safety Interfaces, and Compositional Analysis. Elmqvist, Jonas. January 2010 (has links)
Component-based software development has emerged as a promising approach for developing complex software systems by composing smaller, independently developed components into larger component assemblies. This approach offers means to increase software reuse and achieve higher flexibility and shorter time-to-market through the use of off-the-shelf components (COTS). However, the use of COTS in safety-critical systems is largely unexplored.
This thesis addresses the problems appearing in component-based development of safety-critical systems. We aim at efficient reasoning about safety at system level while adding or replacing components. For safety-related reasoning it does not suffice to consider functioning components in their intended environments; the behaviour of components in the presence of single or multiple faults must also be considered. Our contribution is a formal component model that includes the notion of a safety interface, which describes how the component behaves with respect to violation of a given system-level property in the presence of faults in its environment. This approach also provides a link between formal analysis of components in safety-critical systems and the traditional engineering processes supported by model-based development.
We also present an algorithm for deriving safety interfaces given a particular safety property and fault modes for the component. The safety interface is then used in a method proposed for compositional reasoning about component assemblies: instead of reasoning about the effect of faults on the composed system, we suggest analysing fault tolerance through pairwise analysis based on safety interfaces.
The framework is demonstrated as a proof of concept in two case studies: a hydraulic system from the aerospace industry and an adaptive cruise controller from the automotive industry. The case studies show that a more efficient system-level safety analysis can be performed using safety interfaces.
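A deliberately simplified sketch of the pairwise idea follows. The formal component model and derivation algorithm in the thesis are much richer than this; the types, names and fault-mode strings below are invented for illustration only. The point is that each component publishes, for a given system-level property, which environment faults it tolerates, and the assembly is then checked pair by pair rather than by analysing the composed state space.

```java
import java.util.List;
import java.util.Set;

/** Schematic stand-in for a safety interface: for one system-level property, a component
 *  declares its own fault modes and the environment fault modes it still tolerates. */
record SafetyInterface(String component, Set<String> faultModes, Set<String> toleratedEnvFaults) {}

public class PairwiseCheck {
    /** Flags every pair (a, b) where a fault mode of b is not declared tolerated by a. */
    static void check(List<SafetyInterface> assembly) {
        for (SafetyInterface a : assembly) {
            for (SafetyInterface b : assembly) {
                if (a == b) continue;
                for (String fault : b.faultModes()) {
                    if (!a.toleratedEnvFaults().contains(fault)) {
                        System.out.printf("%s is not shown safe when %s exhibits %s%n",
                                a.component(), b.component(), fault);
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        check(List.of(
            new SafetyInterface("PressureSensor", Set.of("stuck-at"), Set.of()),
            new SafetyInterface("Controller", Set.of("omission"), Set.of("stuck-at"))
        ));
        // Flags the PressureSensor/Controller pair for the case where the Controller
        // omits its output, directing further (compositional) analysis to that pair.
    }
}
```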
|
18 |
Concurrent Online Testing for Many Core Systems-on-Chips. Lee, Jason Daniel. December 2010
Shrinking transistor sizes have introduced new challenges and opportunities for system-on-chip (SoC) design and reliability. Smaller transistors are more susceptible to early lifetime failure and electronic wear-out, greatly reducing their reliable lifetimes. However, smaller transistors will also allow SoCs to contain hundreds of processing cores and other infrastructure components, with the potential for increased reliability through massive structural redundancy. Concurrent online testing (COLT) can provide sufficient reliability and availability to systems with this redundancy. COLT manages the process of testing a subset of processing cores while the rest of the system remains operational. This can be considered a temporary, graceful degradation of system performance that increases reliability while maintaining availability.
In this dissertation, techniques to assist COLT are proposed and analyzed. They focus on two major aspects of COLT feasibility: recovery time and test delivery costs. To reduce the time between failure and recovery, and thereby increase system availability, an anomaly-based test triggering unit (ATTU) is proposed to initiate COLT when anomalous network behavior is detected. Previous COLT techniques have relied on initiating tests periodically; however, determining the testing period depends on a device's mean time between failures (MTBF), and calculating MTBF is exceedingly difficult and imprecise.
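A toy sketch of the anomaly-triggered idea is given below. The actual ATTU is a unit that watches on-chip network behavior; the statistic, baseline model and threshold here are invented for illustration, and only show why triggering on observed drift avoids the need for an MTBF-derived test period.

```java
/** Toy anomaly detector: trigger a COLT test of a core when a monitored statistic
 *  (e.g., retransmission rate on its network-on-chip link) drifts far from its
 *  running baseline. The statistic and threshold are illustrative only. */
public class AnomalyTrigger {
    private double mean = 0.0, varSum = 0.0;
    private long samples = 0;
    private final double threshold;          // distance from baseline, in standard deviations

    AnomalyTrigger(double threshold) { this.threshold = threshold; }

    /** Returns true if the new observation should trigger a COLT test of the core. */
    boolean observe(double value) {
        samples++;
        double delta = value - mean;          // Welford's running mean/variance update
        mean += delta / samples;
        varSum += delta * (value - mean);
        if (samples < 30) return false;       // wait until the baseline is stable
        double stdDev = Math.sqrt(varSum / (samples - 1));
        return Math.abs(value - mean) > threshold * stdDev;
    }

    public static void main(String[] args) {
        AnomalyTrigger attu = new AnomalyTrigger(4.0);
        for (int i = 0; i < 100; i++) attu.observe(0.01 + 0.001 * (i % 5)); // normal traffic
        System.out.println("anomalous sample triggers test: " + attu.observe(0.25)); // true
    }
}
```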
To address the test delivery costs associated with COLT, a distributed test vector storage (DTVS) technique is proposed to eliminate the dependency of test delivery costs on core location. Previous COLT techniques have relied on a single location to store test vectors, and it has been demonstrated that centralized storage of tests scales poorly as the number of cores per SoC grows. Assuming that the SoC organizes its processing cores with a regular topology, DTVS uses an interleaving technique to optimally distribute the test vectors across the entire chip. DTVS is analyzed both empirically and analytically, and a testing protocol using DTVS is described.
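The following small sketch illustrates the interleaving idea behind DTVS under the stated assumption of a regular topology (here a 2D mesh). The mapping function and the hop-count cost model are illustrative rather than the dissertation's exact scheme, but they show why spreading vectors round-robin over the cores makes delivery cost roughly independent of where the core under test sits, unlike a centralized store.

```java
/** Illustrative DTVS-style mapping: vector v is stored on core (v mod N) of an
 *  R x C mesh, and delivery cost is counted in Manhattan hops on that mesh. */
public class DtvsSketch {
    static final int ROWS = 4, COLS = 4, CORES = ROWS * COLS;

    static int storageCore(int vectorId) { return vectorId % CORES; }   // interleaving

    static int hops(int coreA, int coreB) {
        int dx = Math.abs(coreA % COLS - coreB % COLS);
        int dy = Math.abs(coreA / COLS - coreB / COLS);
        return dx + dy;                                   // Manhattan distance in the mesh
    }

    /** Total hop count to deliver a test set of 'vectors' vectors to the core under test. */
    static int deliveryCost(int coreUnderTest, int vectors) {
        int cost = 0;
        for (int v = 0; v < vectors; v++) cost += hops(storageCore(v), coreUnderTest);
        return cost;
    }

    public static void main(String[] args) {
        // With interleaved storage the cost varies little with core location, whereas a
        // centralized store at core 0 makes far-away cores pay much more.
        for (int core : new int[]{0, 5, 15}) {
            System.out.printf("core %2d: distributed=%d  centralized=%d%n",
                    core, deliveryCost(core, 160), 160 * hops(0, core));
        }
    }
}
```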
COLT is only feasible if the applications running concurrently are largely unaffected. The effect of COLT on application execution time is also measured in this dissertation, and an application-aware COLT protocol is proposed and analyzed. Application interference is greatly reduced through this technique.
|
19 |
Modelling the current state and potential use of knowledge management in higher education institutions. Jack, Gillian. January 2004 (has links)
This research explores the development of a framework for evaluating the readiness of a university to engage with knowledge management. Many universities are evolving from traditional bureaucratic, hierarchical structures to become more flexible, adaptable, commercially viable and competitive, and knowledge management is becoming increasingly important in this respect. An overview of knowledge management clarifies the concept, and a critical review of current frameworks and models identifies gaps and weaknesses, specifically in relation to empirical testing, theoretical underpinning and a holistic approach. The framework developed here addresses those gaps and weaknesses and draws on organisational management, strategy, structure and culture, and systems thinking to ensure a holistic approach. These key elements provide the basis upon which a knowledge management framework is developed. A Soft Systems Methodological approach with a critical dimension underpins this research because enquiry into organisational problem situations is complex and unstructured, based on human activity and social systems. The framework is innovative and offers contributions to knowledge because it:
- is a new development within the domain of knowledge management (it is intended to help evaluate the readiness of universities to engage in knowledge management);
- provides a new application of critical systems thinking (critical systems thinking is applied to knowledge management);
- uses a new synthesis (it was developed using a synthesis of soft systems principles, knowledge management concepts, and organisational theory);
- enables organisations to consider their situations in new ways (by enabling self-critique of KM readiness);
- offers new insights into the domain of knowledge management by means of the comprehensive and substantial literature review that informed its development.
|
20 |
Correlações quânticas em sistemas críticos / Quantum correlations in critical systems. Nascimento, Andesson Brito. 24 July 2015 (has links)
Correlations are ubiquitous in nature and have long played an extremely important role in human life. For example, in economics, correlations between price and demand are extremely important for a businessman (or even a government) making decisions about investment policy. In biology, genetic correlations are central to tracking individual characteristics. The relationship between income distribution and crime rate is just one example from the social sciences. Mathematically, a correlation is a number that describes the degree of relationship between two variables. In the classical domain, this number can be computed within the information theory developed by Shannon in 1948. Turning to the subject of the present work, the discussion regarding the quantum nature of correlations has permeated physics since Einstein, Podolsky and Rosen published their famous article criticizing quantum mechanics. Since then, the so-called quantum correlations have been shown to be a very important tool in the study of many-body physics.
Another feature of many-body physics is that certain systems, under certain conditions, exhibit what we call a quantum phase transition. Such transitions are analogous to classical transitions, but are governed by purely quantum fluctuations and, as such, may occur at zero temperature, unlike the former, which are driven by thermal fluctuations. One of the main phenomena characterizing these transitions is that the correlation length (defined between two subsystems of the global system) increases dramatically at the transition point, so that the correlations become long-ranged: such subsystems can be strongly correlated even when separated by a large distance.
The main goal of the present work is the study of quantum correlations, specifically entanglement and the local quantum uncertainty (LQU), in systems presenting one or more quantum phase transitions. Specifically, we consider three models of spin chains: 1) the XY and XYT models, which describe chains of spin-1/2 particles, the second including three-spin interactions while the first takes into account only pairwise interactions; and 2) a model describing a chain of bosonic spins, i.e. particles with spin 1. As a general conclusion, quantum correlations provide a very powerful tool for the study of critical phenomena, offering, among other things, a means to identify a quantum phase transition.
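For orientation, one common convention for the transverse-field XY chain (signs and normalisation vary between references, so this exact form is an assumption) and the closed form of the local quantum uncertainty when subsystem A is a single spin-1/2 are:

\[
H_{XY} \;=\; -\sum_{j=1}^{N}\left[\frac{1+\gamma}{2}\,\sigma^{x}_{j}\sigma^{x}_{j+1} \;+\; \frac{1-\gamma}{2}\,\sigma^{y}_{j}\sigma^{y}_{j+1} \;+\; \lambda\,\sigma^{z}_{j}\right],
\]

\[
\mathcal{U}_{A}(\rho_{AB}) \;=\; 1 - \lambda_{\max}(W_{AB}),
\qquad
(W_{AB})_{ij} \;=\; \operatorname{Tr}\!\left[\sqrt{\rho_{AB}}\,\big(\sigma^{A}_{i}\otimes \mathbb{1}_{B}\big)\,\sqrt{\rho_{AB}}\,\big(\sigma^{A}_{j}\otimes \mathbb{1}_{B}\big)\right].
\]

Here \(\gamma\) is the anisotropy, \(\lambda\) the transverse field (with the Ising-like critical point at \(\lambda = 1\) in this convention), \(\sigma^{A}_{i}\) are the Pauli matrices acting on subsystem A, and \(\lambda_{\max}\) denotes the largest eigenvalue of the \(3\times 3\) matrix \(W_{AB}\). The LQU is computed on the reduced two-spin state \(\rho_{AB}\) of the chain and, like entanglement, can be tracked across the transition.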
|