291 |
Eine cache-optimale Implementierung der Finite-Elemente-Methode / A cache-optimal implementation of the finite element method
Günther, Frank. January 2004 (has links) (PDF)
München, Techn. Universität, Diss., 2004.
|
292 |
Μείωση της κατανάλωσης ισχύος σε διασυνδετικά μέσα εντός ολοκληρωμένου χρησιμοποιώντας τεχνικές φιλτραρίσματος / Reduction of power consumption in on-chip interconnection networks with filtering techniques
Οικονόμου, Ιωάννης, 23 January 2012 (has links)
Advances in CMOS technology are enabling the design of inexpensive, multicore, shared-memory, embedded processors. However, supporting cache coherence in a scalable fashion in these architectures requires considerable effort. Snoop protocols provide an easy-to-design solution, but they are greedy bandwidth and power consumers, and their scalability is limited over a broadcast bus. Directory protocols, especially distributed ones, remedy the bandwidth overhead but require hard-to-design directory controllers that consume precious on-chip storage, area, and power, rendering that solution unattractive for embedded multicores.
In this work we advocate a scalable coherence solution based on simple broadcast snooping protocols but over a scalable hierarchical point-to-point network. To dramatically cut down on broadcasts we propose Temporal Filtering, a solution based on Bloom filters, a storage-efficient memory structure. In contrast to previous approaches, Temporal Filters (TFs) are equipped with a unique characteristic: the ability to self-clean their contents in concert with the caches, but without communicating with them. Both TFs and caches decay their contents based on coherence activity, guaranteeing the correctness of coherence filtering. In this way, we overcome the problem of entry removal in Bloom filters without the need for extra counters, messages, or extra signals as in previous work and, more importantly, without requiring changes in the underlying cache snoop protocols. As a result, our solution utilizes frugal single-bit structures that can be easily integrated into network switches.
For our evaluation we use GEMS to model an 8- and 16-core CMP with private L1/L2 caches of various sizes, and the SPLASH-2 suite. TFs prove able to reduce network messages by up to 74.7% (arithmetic average). In addition, TFs also offer leakage-saving opportunities, since cache decay is applied in the private caches.
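The self-cleaning filter described in the abstract above can be sketched as a small Bloom-filter variant. This is a hypothetical illustration, not the thesis's design (which uses frugal single-bit structures): here each bucket records its last insertion time, and an entry stops matching once a fixed decay interval elapses, mirroring how a filter that decays in concert with the caches can safely drop stale addresses without any removal messages.

```python
class TemporalBloomFilter:
    """Sketch of a Bloom filter whose entries expire after a fixed
    decay interval, so stale addresses stop matching without any
    explicit removal messages (hypothetical illustration)."""

    def __init__(self, size=256, num_hashes=3, decay_interval=1000):
        self.size = size
        self.num_hashes = num_hashes
        self.decay_interval = decay_interval
        # Store the insertion time per bucket instead of a plain bit.
        self.last_set = [None] * size

    def _buckets(self, address):
        # Derive k bucket indices from the address with salted hashes.
        return [hash((address, i)) % self.size for i in range(self.num_hashes)]

    def insert(self, address, now):
        for b in self._buckets(address):
            self.last_set[b] = now

    def might_contain(self, address, now):
        # A hit requires every bucket to be set and still fresh;
        # expired buckets behave as if they had been cleared.
        return all(
            self.last_set[b] is not None
            and now - self.last_set[b] < self.decay_interval
            for b in self._buckets(address)
        )

# A switch would forward a snoop only when the filter might contain
# the address; after the decay interval the broadcast is filtered.
f = TemporalBloomFilter()
f.insert(0xDEAD, now=0)
fresh = f.might_contain(0xDEAD, now=500)    # True: still within the interval
stale = f.might_contain(0xDEAD, now=2000)   # False: entry has decayed
```

The key property of the real mechanism, preserved in this toy version, is that expiry happens purely locally, with no coordination traffic.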
|
293 |
Αποτίμηση αρχιτεκτονικών ιεραρχίας μνήμης επεξεργαστή για κατανάλωση ισχύος / Evaluation of processor memory-hierarchy architectures for power consumption
Ζουμπούλογλου, Παρασκευάς-Πάρις, 09 July 2013 (has links)
Cache memory is an important factor in processor performance. At the same time, however, it is one of the on-chip components in which a significant share of the power is consumed. This thesis analyzes the power consumption of the different levels of the processor's cache hierarchy and presents techniques that reduce it while keeping the performance of the computing system as stable as possible. The techniques were evaluated with the help of SimpleScalar, a simulator of superscalar processor architectures, and HP's CACTI tool, which models various characteristics (access time, dynamic power consumption, etc.) of the processor's cache and main memory.
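The kind of evaluation the abstract describes can be illustrated with a back-of-the-envelope dynamic-energy model. All numbers below are invented placeholders; in the thesis the per-access figures come from CACTI and the access counts from SimpleScalar.

```python
# Per-access dynamic energies in nanojoules. These are assumed values
# for illustration; a real study would take them from CACTI output.
energy_per_access_nj = {"L1": 0.5, "L2": 2.0, "DRAM": 30.0}

def dynamic_energy_nj(l1_accesses, l1_misses, l2_misses):
    """Every reference touches L1; L1 misses also access L2; L2 misses
    go to main memory. Returns total dynamic energy in nJ."""
    return (l1_accesses * energy_per_access_nj["L1"]
            + l1_misses * energy_per_access_nj["L2"]
            + l2_misses * energy_per_access_nj["DRAM"])

# Example: 1M references, 5% L1 miss rate, 20% of those also miss in L2.
total = dynamic_energy_nj(1_000_000, 50_000, 10_000)
print(f"{total / 1e9:.4f} J")  # prints 0.0009 J
```

Techniques that cut miss rates or per-access energy show up directly as a smaller total in such a model, which is why simulator statistics and a memory model are combined in the evaluation.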
|
294 |
Couples de spin-orbite en vue d'applications aux mémoires cache / Spin-orbit torques for cache memory applications
Hamelin, Claire, 28 October 2016 (has links)
Replacing the DRAM and SRAM technologies used in cache memories is a challenge for the microelectronics industry, which faces demands for miniaturization and for reducing the amplitude and duration of the currents used to write and read data. Magnetic random access memory (MRAM) is a candidate for a future generation of memories, and the discovery of spin-orbit torques (SOT) has opened the way to a combination of the two technologies called SOT-MRAM. These memories are very promising because they combine non-volatility and good reliability, but many technical and theoretical challenges remain. The objective of this thesis is to study magnetization switching by spin-orbit torque with sub-nanosecond current pulses and to reduce the SOT writing currents, as preliminary work toward a proof of concept of a SOT-MRAM written with ultra-short, relatively low-amplitude current pulses. To this end, we studied Ta-CoFeB-MgO-based memory cells and verified how the critical current depends on pulse duration and on an external magnetic field. On a SOT-MRAM-type cell, we then demonstrated ultrafast writing with current pulses shorter than one nanosecond (down to 400 ps). Next, we addressed reducing the SOT-MRAM write current with the help of an electric field, demonstrating that the electric field modulates the magnetic anisotropy: lowering the anisotropy during a current pulse through the tantalum track reduces the critical current density needed to switch the CoFeB magnetization by SOT. These results are very encouraging for the development of SOT-MRAM and motivate further study. The magnetization switching mechanism appears to be nucleation followed by propagation of magnetic domain walls, a hypothesis based on physical trends observed in the experiments as well as on numerical simulations.
|
295 |
Uma abordagem colaborativa de cache em redes ad hoc / A collaborative cache approach for ad hoc networks
Caetano, Marcos Fagundes, January 2008 (has links)
Dissertação (mestrado)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2008.
The advance of wireless technologies has enabled the emergence of ad hoc networks. In such an infrastructure-less environment, devices scattered across a region can establish communication with one another dynamically and in real time, creating topologies that allow packets to be routed among the network members. However, some limitations inherent to the technology cause problems that degrade network throughput. According to Gupta et al. [28], the larger the number of nodes in a network, the smaller its throughput. In this context, the traditional cache model is not a good option: the penalty imposed on the network after a local cache miss is high and overloads both the intermediate nodes that participate in routing and the network server.
With the aim of reducing this penalty, several works implement the concept of collaborative caching, a policy that consists of trying to obtain the information, after a local miss, from the nearest neighboring nodes. Its use, however, can be considered limited: collaborative cache policies merely make the local information stored in each client's cache available to the other members of the network, and no global policy for managing this information is proposed. The objective of this work is to propose a collaborative cache mechanism that allows information sharing among the nodes of a network so as to decrease the workload on both the server and the network. With a global cache area shared by a group of nodes, it is possible to reduce the average response time and the average number of hops when fetching data over the network. To validate the proposal, a model was implemented using the GloMoSim [50] ad hoc network simulator. The experimental results show a 57.77% reduction in the number of requests submitted to the server for groups of 8 nodes, and a 72.95% reduction for groups of 16 nodes. A roughly 16-fold reduction in the average time taken to answer a request (round-trip time) was also observed.
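The lookup order the abstract describes (local cache, then the group's shared area, then the server) can be sketched as follows. The function and object names and the toy request trace are illustrative, not taken from the dissertation.

```python
def lookup(key, local_cache, group_caches, server):
    """Resolve a request: local hit, else a hit in any group member's
    cache, else fall through to the server. Returns (value, source)."""
    if key in local_cache:
        return local_cache[key], "local"
    for neighbor in group_caches:          # collaborative step
        if key in neighbor:
            value = neighbor[key]
            local_cache[key] = value       # keep a copy for future local hits
            return value, "group"
    value = server[key]                    # last resort: full-cost fetch
    local_cache[key] = value
    return value, "server"

server = {f"obj{i}": i for i in range(10)}
node_a, node_b = {}, {}
group = [node_b]

_, src1 = lookup("obj3", node_a, group, server)   # miss everywhere: "server"
node_b["obj5"] = 5
_, src2 = lookup("obj5", node_a, group, server)   # neighbor hit: "group"
_, src3 = lookup("obj5", node_a, group, server)   # now cached: "local"
```

Counting how often `source` is `"server"` in such a trace is the same measurement, in miniature, as the server-request reduction the experiments report.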
|
296 |
EVALUATING EXOTIC SPECIES ASSEMBLAGES ACROSS A CHRONOSEQUENCE OF RESTORED FLOODPLAIN FORESTS
McLane, Craig Russell, 01 December 2009 (has links)
Exotic plant species pose a great risk to restoration success in post-agricultural bottomlands, but little information exists on their dynamics during early succession of actively restored sites. Compositional trends of exotic plants may be similar to those published for natives in other systems, with an early peak in herbaceous richness followed by a decline as woody species establish. I established 16 sites in an 18-year chronosequence (1991-2008) of restored forests, with an additional four mature sites for comparison, within the Cypress Creek NWR, Illinois. Within each site, I identified all vascular plant species and quantified soil texture, total soil C, total soil N, and canopy openness at three strata (1.5m, 1.25m, & 0.75m). Trends in exotic assemblages were significantly correlated with canopy openness at all strata (all p < 0.0001). Richness of exotic herbaceous species and native herbaceous species were related to stand age consistent with a non-linear Weibull regression model (R2 = 0.543, p = 0.005; R2 = 0.483, p = 0.013, respectively). Average percent herbaceous species cover also showed a similar reduction in overall abundance for both native and exotic plants but followed an exponential decay model (R2 = 0.3777, p = 0.0039; R2 = 0.3003, p = 0.0124, respectively). Woody native richness over time conformed to a logistic model (R2 = 0.404, p = 0.012). Woody exotic plants exhibited no discernible relationship with stand age, although they were present in sites of all ages. My results indicate that herbaceous exotic species exhibit successional trends similar to natives and therefore may not pose a lasting threat to restoration projects in these floodplain forests. In contrast, woody exotic species can establish earlier or later in succession, persist under closed canopy conditions, and may pose a lasting threat. Thus, bottomland restorations and mature forests are quite vulnerable to exotic plants even after canopy closure.
|
297 |
Resource and thermal management in 3D-stacked multi-/many-core systems
Zhang, Tiansheng, 10 March 2017 (has links)
Continuous semiconductor technology scaling and the rapid increase in computational needs have stimulated the emergence of multi-/many-core processors. While up to hundreds of cores can be placed on a single chip, the performance capacity of the cores cannot be fully exploited due to high latencies of interconnects and memory, high power consumption, and low manufacturing yield in traditional (2D) chips. 3D stacking is an emerging technology that aims to overcome these limitations of 2D designs by stacking processor dies over each other and using through-silicon-vias (TSVs) for on-chip communication, and thus, provides a large amount of on-chip resources and shortens communication latency. These benefits, however, are limited by challenges in high power densities and temperatures.
3D stacking also enables integrating heterogeneous technologies into a single chip. One example of heterogeneous integration is building many-core systems with silicon-photonic network-on-chip (PNoC), which reduces on-chip communication latency significantly and provides higher bandwidth compared to electrical links. However, silicon-photonic links are vulnerable to on-chip thermal and process variations. These variations can be countered by actively tuning the temperatures of optical devices through micro-heaters, but at the cost of substantial power overhead.
This thesis claims that unearthing the energy efficiency potential of 3D-stacked systems requires intelligent and application-aware resource management. Specifically, the thesis improves energy efficiency of 3D-stacked systems via three major components of computing systems: cache, memory, and on-chip communication. We analyze characteristics of workloads in computation, memory usage, and communication, and present techniques that leverage these characteristics for energy-efficient computing.
This thesis introduces 3D cache resource pooling, a cache design that allows for flexible heterogeneity in cache configuration across a 3D-stacked system and improves cache utilization and system energy efficiency. We also demonstrate the impact of resource pooling on a real prototype 3D system with scratchpad memory.
At the main memory level, we claim that utilizing heterogeneous memory modules and memory object level management significantly helps with energy efficiency. This thesis proposes a memory management scheme at a finer granularity: memory object level, and a page allocation policy to leverage the heterogeneity of available memory modules and cater to the diverse memory requirements of workloads.
On the on-chip communication side, we introduce an approach to limit the power overhead of PNoC in (3D) many-core systems through cross-layer thermal management. Our proposed thermally-aware workload allocation policies coupled with an adaptive thermal tuning policy minimize the required thermal tuning power for PNoC, and in this way, help broader integration of PNoC. The thesis also introduces techniques in placement and floorplanning of optical devices to reduce optical loss and, thus, laser source power consumption.
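The memory-object-level management idea for heterogeneous memory modules can be illustrated with a toy placement policy. The module names, capacities, and example objects below are all invented for illustration, not taken from the thesis.

```python
def place_objects(objects, fast_capacity):
    """Greedy sketch: rank memory objects by access intensity and fill
    the fast (assumed small, low-latency) module first, spilling the
    rest to a dense, lower-power module.
    `objects` maps name -> (size_bytes, accesses_per_ms)."""
    placement = {}
    used = 0
    ranked = sorted(objects.items(),
                    key=lambda kv: kv[1][1] / kv[1][0],  # accesses per byte
                    reverse=True)
    for name, (size, _) in ranked:
        if used + size <= fast_capacity:
            placement[name] = "fast"
            used += size
        else:
            placement[name] = "dense"
    return placement

# Hypothetical objects: (size in bytes, accesses per millisecond).
objs = {"hot_index": (64, 800), "frame_buf": (512, 100), "log": (256, 1)}
print(place_objects(objs, fast_capacity=600))
# prints {'hot_index': 'fast', 'frame_buf': 'fast', 'log': 'dense'}
```

Per-object (rather than per-page) accounting is what lets a policy like this match each object's access pattern to the module that serves it most efficiently.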
|
298 |
Memory consistency directed cache coherence protocols for scalable multiprocessors
Elver, Marco Iskender, January 2016 (has links)
The memory consistency model, which formally specifies the behavior of the memory system, is used by programmers to reason about parallel programs. From a hardware design perspective, weaker consistency models permit various optimizations in a multiprocessor system: this thesis focuses on designing and optimizing the cache coherence protocol for a given target memory consistency model. Traditional directory coherence protocols are designed to be compatible with the strictest memory consistency model, sequential consistency (SC). When they are used for chip multiprocessors (CMPs) that provide more relaxed memory consistency models, such protocols turn out to be unnecessarily strict. Usually, this comes at the cost of scalability, in terms of per-core storage due to sharer tracking, which poses a problem with increasing number of cores in today’s CMPs, most of which no longer are sequentially consistent. The recent convergence towards programming language based relaxed memory consistency models has sparked renewed interest in lazy cache coherence protocols. These protocols exploit synchronization information by enforcing coherence only at synchronization boundaries via self-invalidation. As a result, such protocols do not require sharer tracking which benefits scalability. On the downside, such protocols are only readily applicable to a restricted set of consistency models, such as Release Consistency (RC), which expose synchronization information explicitly. In particular, existing architectures with stricter consistency models (such as x86) cannot readily make use of lazy coherence protocols without either: adapting the protocol to satisfy the stricter consistency model; or changing the architecture’s consistency model to (a variant of) RC, typically at the expense of backward compatibility. The first part of this thesis explores both these options, with a focus on a practical approach satisfying backward compatibility. 
Because of the wide adoption of Total Store Order (TSO) and its variants in x86 and SPARC processors, and existing parallel programs written for these architectures, we first propose TSO-CC, a lazy cache coherence protocol for the TSO memory consistency model. TSO-CC does not track sharers and instead relies on self-invalidation and detection of potential acquires (in the absence of explicit synchronization) using per-cache-line timestamps to efficiently and lazily satisfy the TSO memory consistency model. Our results show that TSO-CC achieves, on average, performance comparable to a MESI directory protocol, while TSO-CC's storage overhead per cache line scales logarithmically with increasing core count. Next, we propose an approach for the x86-64 architecture, which is a compromise between retaining the original consistency model and using a more storage-efficient lazy coherence protocol. First, we propose a mechanism to convey synchronization information via a simple ISA extension, while retaining backward compatibility with legacy codes and older microarchitectures. Second, we propose RC3 (based on TSO-CC), a scalable cache coherence protocol for RCtso, the resulting memory consistency model. RC3 does not track sharers and relies on self-invalidation on acquires. To satisfy RCtso efficiently, the protocol reduces self-invalidations transitively using per-L1 timestamps only. RC3 outperforms a conventional lazy RC protocol by 12%, achieving performance comparable to a MESI directory protocol for RC-optimized programs. RC3's storage overhead per cache line scales logarithmically with increasing core count and reduces on-chip coherence storage overheads by 45% compared to TSO-CC. Finally, it is imperative that hardware adheres to the promised memory consistency model. Indeed, consistency-directed coherence protocols cannot be verified against conventional coherence definitions (e.g. SWMR), and few existing verification methodologies apply.
Furthermore, as the full consistency model is used as a specification, their interaction with other components (e.g. pipeline) of a system must not be neglected in the verification process. Therefore, verifying a system with such protocols in the context of interacting components is even more important than before. One common way to do this is via executing tests, where specific threads of instruction sequences are generated and their executions are checked for adherence to the consistency model. It would be extremely beneficial to execute such tests under simulation, i.e. when the functional design implementation of the hardware is being prototyped. Most prior verification methodologies, however, target post-silicon environments, which when used for simulation-based memory consistency verification would be too slow. We propose McVerSi, a test generation framework for fast memory consistency verification of a full-system design implementation under simulation. Our primary contribution is a Genetic Programming (GP) based approach to memory consistency test generation, which relies on a novel crossover function that prioritizes memory operations contributing to non-determinism, thereby increasing the probability of uncovering memory consistency bugs. To guide tests towards exercising as much logic as possible, the simulator’s reported coverage is used as the fitness function. Furthermore, we increase test throughput by making the test workload simulation-aware. We evaluate our proposed framework using the Gem5 cycle accurate simulator in full-system mode with Ruby (with configurations that use Gem5’s MESI protocol, and our proposed TSO-CC together with an out-of-order pipeline). We discover 2 new bugs in the MESI protocol due to the faulty interaction of the pipeline and the cache coherence protocol, highlighting that even conventional protocols should be verified rigorously in the context of a full-system. 
Crucially, these bugs would not have been discovered through individual verification of the pipeline or the coherence protocol. We study 11 bugs in total. Our GP-based test generation approach finds all bugs consistently, therefore providing much higher guarantees compared to alternative approaches (pseudo-random test generation and litmus tests).
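The self-invalidation idea underlying these lazy protocols can be sketched in a few lines. This is a hypothetical, greatly simplified illustration rather than TSO-CC itself: each cached line remembers the epoch in which it was filled, and a potential acquire advances the epoch and drops stale lines locally, so no sharer tracking or invalidation messages are needed.

```python
class LazyCache:
    """Toy self-invalidating cache: lines remember the epoch in which
    they were filled, and a potential acquire invalidates every line
    fetched before the current epoch (simplified illustration)."""

    def __init__(self, memory):
        self.memory = memory        # shared backing store
        self.lines = {}             # addr -> (value, fill_epoch)
        self.epoch = 0

    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = (self.memory[addr], self.epoch)
        return self.lines[addr][0]

    def acquire(self):
        # Synchronization boundary: advance the epoch and self-invalidate
        # everything fetched in earlier epochs, so later reads refetch.
        self.epoch += 1
        self.lines = {a: (v, e) for a, (v, e) in self.lines.items()
                      if e == self.epoch}

memory = {"x": 1}
c = LazyCache(memory)
v1 = c.read("x")        # fetches 1 from memory
memory["x"] = 2         # another core writes x
v2 = c.read("x")        # still 1: stale reads are allowed before sync
c.acquire()             # self-invalidate at the synchronization point
v3 = c.read("x")        # refetches and observes 2
```

The real protocols refine this with timestamps so that unmodified lines need not be refetched, but the correctness argument is the same: stale values are only visible until the next synchronization point.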
|
299 |
Geocaching v CHKO Moravský kras / Geocaching in the Moravian Karst PLA
Najtová, Simona, January 2015 (has links)
Najtová, S.: The analysis of geocaching in the Moravian Karst PLA. Diploma thesis. Brno, 2015. The thesis describes geocaching as a tool and a supplementary tourism activity. The theoretical part defines the basic principles of the game, its rules, the types of geocaches, player equipment, and the perspectives from which geocaching can be viewed; it also covers the game's history in the Czech Republic and abroad. The research was carried out in the Moravian Karst Protected Landscape Area. The thesis analyzes geocaching in the selected location, focusing for example on the development of caches, attendance, logging, and the difficulty of the terrain or of finding the caches. Since the area is a protected landscape area, the negative impacts of geocaching on the environment are also examined. The findings can be used to estimate the future development of geocaching in the Moravian Karst and to prevent negative impacts on the environment.
|
300 |
Geocaching a Ingress jako podpora cestovního ruchu ve vybrané lokalitě / Geocaching and Ingress as support of tourism in a selected locality
Lančaričová, Aneta, January 2016 (has links)
This thesis introduces the game Geocaching and an alternative game, Ingress. Geocaching is currently a very popular tourist activity, and the growing number of players around the world increases the associated environmental hazards. The theoretical part describes the game and the risks associated with it. The thesis also presents the alternative game Ingress, which is environmentally friendly thanks to its virtual character. The thesis proposes new Geocaching and Ingress routes in the close surroundings of the village of Radešín in order to increase the attractiveness of the place. A questionnaire survey, evaluated in the final section, was designed to determine awareness of Geocaching and Ingress and their impact on the environment.
|