1 |
Software Transactional Memory Building Blocks. Riegel, Torvald, 15 August 2013
Exploiting thread-level parallelism has become a part of mainstream programming in recent years. Many approaches to parallelization require threads executing in parallel to also synchronize occasionally (i.e., coordinate concurrent accesses to shared state). Transactional Memory (TM) is a programming abstraction that provides the concept of database transactions in the context of programming languages such as C/C++. This allows programmers to only declare which pieces of a program synchronize, without requiring them to actually implement synchronization and tune its performance, which in turn makes TM typically easier to use than other abstractions such as locks.
I have investigated and implemented the building blocks that are required for a high-performance, practical, and realistic TM. They host several novel algorithms and optimizations for TM implementations, both for current hardware and future hardware extensions for TM, and are being used in or have influenced commercial TM implementations such as the TM support in GCC.
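As an illustration of the programming model described above, the following minimal C++ sketch uses GCC's language-level TM support (compile with -fgnu-tm): the programmer only marks the region that must execute atomically, and the TM runtime handles conflict detection and retries. The Account type and transfer function are illustrative placeholders, not code from the thesis or from GCC itself.

    // Minimal sketch, assuming GCC with -fgnu-tm; Account and transfer are
    // illustrative placeholders.
    struct Account { long balance; };

    void transfer(Account* from, Account* to, long amount) {
        // The region is declared transactional; no lock is named or acquired.
        __transaction_atomic {
            from->balance -= amount;
            to->balance   += amount;
        }
    }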
|
2 |
Performance Study of Concurrent Search Trees and Hash Algorithms on Multiprocessors Systems. Demuynck, Marie-Anne, 05 1900
This study examines the performance of concurrent algorithms for B-trees and linear hashing. B-trees are widely used as an access method for large, single-key database files stored in lexicographic order on secondary storage devices. Linear hashing is a fast and reliable hash algorithm suitable for accessing records stored unordered in buckets. This dissertation presents performance results on implementations of concurrent Blink-tree and linear hashing algorithms, using lock-based, partitioned, and distributed methods on the Sequent Symmetry shared-memory multiprocessor system and on a network of distributed processors created with PVM (Parallel Virtual Machine) software. Initial experiments, which started with empty data structures, show good results for the partitioned implementations and for lock-based linear hashing, but poor ones for lock-based Blink-trees. A subsequent test, which started with loaded data structures, shows similar results, but with much improved performance for locked Blink-trees. The data also highlighted the high cost of split operations, which reached up to 70% of the total insert time.
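For readers unfamiliar with linear hashing, the sketch below shows its standard bucket-addressing rule (Litwin's scheme) in C++: a split pointer decides whether a key is addressed with the current hash function or with the next, finer one. This is a generic illustration under assumed parameter names (N, level, next) and is not taken from the implementations measured in the study.

    // Minimal sketch of linear-hashing bucket addressing; N, level, and next
    // (the split pointer) are assumed names, not the dissertation's identifiers.
    #include <cstddef>
    #include <functional>

    struct LinearHashState {
        std::size_t N;      // initial number of buckets
        std::size_t level;  // completed doubling rounds
        std::size_t next;   // next bucket to be split
    };

    std::size_t bucket_for(const LinearHashState& s, std::size_t key) {
        std::size_t h = std::hash<std::size_t>{}(key);
        std::size_t b = h % (s.N << s.level);         // h mod (N * 2^level)
        if (b < s.next)                               // bucket already split this round
            b = h % (s.N << (s.level + 1));           // use the finer hash function
        return b;
    }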
|
3 |
Parallel model checking for multiprocessor architecture. Tacla Saad, Rodrigo, 20 December 2011
In this thesis, we propose and study new algorithms and data structures for model checking finite-state, concurrent systems. We focus on techniques that target shared-memory, multi-core architectures, which are a current trend in computer architectures. In this context, we present new algorithms and data structures for exhaustive parallel model checking that are as efficient as possible, but also "friendly" with respect to the work-sharing policies used for state space generation (e.g., a work-stealing strategy): at no point do we impose a restriction on the way work is shared among the processors. This applies both to the construction of the state space and to the detection of cycles in parallel, the latter being one of the key performance points for the evaluation of more complex formulas. Alongside the definition of enumerative model checking algorithms for many-core architectures, we also study probabilistic verification algorithms. By probabilistic, we mean that, during the exploration of a system, any given reachable state has a high probability of being checked by the algorithm. Probabilistic verification trades a small probability of missing some states for large savings in memory usage; consequently, it becomes possible to analyze part of the state space of a system when there is not enough memory available to represent the entire state space exactly. It is therefore a strategy for coping with the combinatorial state-space explosion problem.
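One common way to realize this kind of probabilistic state storage is bitstate-style hashing, where a Bloom filter replaces the exact visited set: with small probability a state may be wrongly reported as already seen, which is exactly the memory-for-exhaustiveness trade-off described above. The single-threaded C++ sketch below only illustrates that idea under assumed names (BloomVisited, test_and_set); it ignores the concurrent sharing aspects addressed in the thesis.

    // Minimal sketch of a Bloom-filter visited set; not the thesis's data
    // structure, and not thread-safe.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    class BloomVisited {
        std::vector<bool> bits_;
    public:
        explicit BloomVisited(std::size_t nbits) : bits_(nbits, false) {}

        // Returns true if the state was (probably) seen before; otherwise
        // records it and returns false.
        bool test_and_set(std::uint64_t state_hash) {
            std::size_t h1 = state_hash % bits_.size();
            std::size_t h2 = (state_hash * 0x9E3779B97F4A7C15ULL) % bits_.size();
            bool seen = bits_[h1] && bits_[h2];
            bits_[h1] = true;
            bits_[h2] = true;
            return seen;
        }
    };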
|
4 |
Framework for Real-time collaboration on extensive Data Types using Strong Eventual Consistency. Masson, Constantin, 12 1900
Real-time collaboration is a special case of collaboration where users work on the same artefact simultaneously and are aware of each other's changes in real time. Shared data should remain available and consistent while dealing with its physically distributed aspect. Strong Consistency is one approach that enforces a total order of operations using mechanisms such as locking; this, however, introduces a bottleneck. In the last decade, algorithms for concurrency control have been studied to keep all replicas convergent without locking or synchronization. Operational Transformation and Conflict-free Replicated Data Types (CRDTs) are widely used to achieve this purpose. However, the complexity of these strategies makes them hard to integrate into large software, such as modeling editors, especially for complex data types like graphs. Current implementations only integrate linear data, such as text. In this thesis, we present CollabServer, a framework to build collaborative environments. It features a CRDT implementation for complex data types such as graphs and gives the possibility to build other data structures.
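To make the CRDT idea concrete, the following C++ sketch shows one of the simplest state-based CRDTs, a grow-only set (G-Set): merging two replica states is a set union, which is commutative, associative, and idempotent, so replicas converge regardless of the order in which they exchange states. This is only an assumed illustration of the principle; CollabServer's CRDTs for graphs are considerably more involved.

    // Minimal state-based CRDT sketch (grow-only set); illustrative only,
    // not CollabServer's implementation.
    #include <set>

    template <typename T>
    class GSet {
        std::set<T> elems_;
    public:
        void add(const T& value) { elems_.insert(value); }
        bool contains(const T& value) const { return elems_.count(value) > 0; }

        // Merge the state received from another replica (set union).
        void merge(const GSet& other) {
            elems_.insert(other.elems_.begin(), other.elems_.end());
        }
    };

For example, if replica A adds "x" and replica B adds "y", merging in either direction leaves both replicas holding {"x", "y"}.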
|