221

Relativistic Causal Ordering: A Memory Model for Scalable Concurrent Data Structures

Triplett, Josh 01 January 2012 (has links)
High-performance programs and systems require concurrency to take full advantage of available hardware. However, the available concurrent programming models force a difficult choice between simple models, such as mutual exclusion, that produce little to no concurrency, and complex models, such as Read-Copy Update, that can scale to all available resources. Simple concurrent programming models enforce atomicity and causality, and this enforcement limits concurrency. Scalable concurrent programming models expose the weakly ordered hardware memory model, requiring careful and explicit enforcement of causality to preserve correctness, as demonstrated in this dissertation through the manual construction of a scalable hash-table item-move algorithm. Recent research on "relativistic programming" aims to standardize the programming model of Read-Copy Update, but thus far these efforts have lacked a generalized memory ordering model, requiring data-structure-specific reasoning to preserve causality. I propose a new memory ordering model, "relativistic causal ordering", which combines the scalability of relativistic programming and Read-Copy Update with the simplicity of reader atomicity and automatic enforcement of causality. Programs written for the relativistic model translate to scalable concurrent programs for weakly ordered hardware via a mechanical process of inserting barrier operations according to well-defined rules. To demonstrate the relativistic causal ordering model, I walk through the straightforward construction of a novel concurrent hash-table resize algorithm, including the translation of this algorithm from the relativistic model to a hardware memory model, and show through benchmarks that the resulting algorithm scales far better than those based on mutual exclusion.
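
As an illustration of the kind of barrier-insertion rules described above, the following minimal C11 sketch (my own, not the dissertation's algorithm) publishes a hash-bucket node with a release store and traverses with acquire loads, so a reader on weakly ordered hardware always observes a fully initialized node:

```c
/* Sketch of mechanical barrier insertion: initialize, then publish
 * with release; read with acquire so causality holds on weak hardware. */
#include <stdatomic.h>
#include <stdlib.h>

struct node {
    int key, value;
    struct node *_Atomic next;
};

static struct node *_Atomic bucket_head = NULL;

/* Writer: the release store is the "barrier operation" a mechanical
 * translation from the relativistic model would insert here. */
void bucket_insert(int key, int value)
{
    struct node *n = malloc(sizeof *n);
    n->key = key;
    n->value = value;
    atomic_store_explicit(&n->next,
        atomic_load_explicit(&bucket_head, memory_order_relaxed),
        memory_order_relaxed);
    atomic_store_explicit(&bucket_head, n, memory_order_release);
}

/* Reader: acquire loads guarantee the node's fields are seen as
 * initialized -- the causal ordering the relativistic model promises. */
int bucket_lookup(int key, int *value_out)
{
    for (struct node *n = atomic_load_explicit(&bucket_head,
                                               memory_order_acquire);
         n != NULL;
         n = atomic_load_explicit(&n->next, memory_order_acquire)) {
        if (n->key == key) { *value_out = n->value; return 1; }
    }
    return 0;
}

int main(void)
{
    int v;
    bucket_insert(1, 42);
    return bucket_lookup(1, &v) ? 0 : 1;
}
```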
222

Bounded Model Checking Using Java PathFinder

Dudka, Vendula January 2008 (has links)
This thesis deals with the application of the bounded model checking method to the self-healing assurance of concurrency-related problems. The self-healing effort currently targets the Java programming language, so the thesis concentrates mainly on the Java PathFinder model checker, which is built for handling Java programs. The verification method is implemented as a Record&Replay trace strategy for navigating through the state space, with bounded model checking performed from the state reached via that strategy. Java PathFinder was extended with new modules and interfaces in order to perform bounded model checking for self-healing assurance. Bounded model checking is thus applied in the neighbourhood of the self-healing action.
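
The two-phase idea (replay a recorded trace to reach a state, then run a bounded search from it) can be sketched generically; the following toy C program, illustrative only and unrelated to JPF's actual implementation, replays scheduling choices over a trivial transition system and then runs a depth-limited DFS:

```c
#include <stdio.h>
#include <stdbool.h>

typedef int State;                 /* toy stand-in for a program state */

/* Toy transition system: from s, the "scheduler" can pick s+1 or 2*s. */
static int successors(State s, State out[2]) { out[0] = s + 1; out[1] = 2 * s; return 2; }
static bool property_violated(State s) { return s == 11; }   /* the "bug" */

/* Phase 1: Record&Replay -- follow the recorded scheduling choices. */
static State replay(State s, const int *trace, int n)
{
    State succ[2];
    for (int i = 0; i < n; i++) { successors(s, succ); s = succ[trace[i]]; }
    return s;
}

/* Phase 2: bounded model checking (depth-limited DFS) from that state. */
static bool bmc(State s, int bound)
{
    if (property_violated(s)) return true;    /* counterexample found */
    if (bound == 0) return false;             /* bound exhausted */
    State succ[2];
    int n = successors(s, succ);
    for (int i = 0; i < n; i++)
        if (bmc(succ[i], bound - 1)) return true;
    return false;
}

int main(void)
{
    int trace[] = {0, 1, 1};                  /* recorded choices: +1, *2, *2 */
    State s = replay(1, trace, 3);            /* reaches state 8 */
    printf("violation within bound: %s\n", bmc(s, 3) ? "yes" : "no");
}
```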
223

Enriching Web Applications Efficiently with Real-Time Collaboration Capabilities

Heinrich, Matthias 26 September 2014 (has links)
Web applications offering real-time collaboration support (e.g. Google Docs) allow geographically dispersed users to edit the very same document simultaneously, which is appealing to end-users mainly because of two application characteristics. On the one hand, the provided real-time capabilities supersede traditional document merging and document locking techniques that distract users from the content creation process. On the other hand, web applications free end-users from lengthy setup procedures and allow for instant application access. However, implementing collaborative web applications is a time-consuming and complex endeavor, since offering real-time collaboration support requires two specific collaboration services. First, a concurrency control service has to ensure that documents are synchronized in real-time and that emerging editing conflicts (e.g. if two users change the very same word concurrently) are resolved automatically. Second, a workspace awareness service has to inform the local user about actions and activities of other participants (e.g. who joined the session or where other participants are working). Implementing and integrating these two collaboration services is largely inefficient due to (1) the lack of necessary collaboration functionality in existing libraries, (2) incompatibilities of collaboration frameworks with widespread web development approaches, and (3) the need for massive source code changes to anchor collaboration support. Therefore, we propose a Generic Collaboration Infrastructure (GCI) that supports the efficient development of web-based groupware in various ways. First, the GCI provides reusable concurrency control functionality and generic workspace awareness support. Second, the GCI exposes numerous interfaces to consume these collaboration services in a flexible manner and without requiring invasive source code changes. And third, the GCI is linked to a development methodology that efficiently guides developers through the development of web-based groupware. To demonstrate the improved development efficiency induced by the GCI, we conducted three user studies encompassing developers and end-users. We show that development efficiency can be increased in terms of development time when adopting the GCI. Moreover, we also demonstrate that the implemented collaborative web applications satisfy end-user needs with respect to established software quality characteristics (e.g. usability, reliability, etc.).
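
A concurrency control service of the kind the GCI provides can be illustrated with the classic operational-transformation rule for concurrent insertions; this C sketch (illustrative, not the GCI's actual API) shows how two replicas transform each other's operations and converge:

```c
#include <stdio.h>

struct ins { int pos; char ch; int site; };

/* Transform op `a` against concurrent op `b` so that `a` can be
 * applied after `b`; site ids break position ties deterministically. */
struct ins transform(struct ins a, struct ins b)
{
    if (b.pos < a.pos || (b.pos == a.pos && b.site < a.site))
        a.pos += 1;               /* b's insertion shifted a's target */
    return a;
}

int main(void)
{
    /* Document "ab": site 0 inserts 'x' at 1 while site 1 concurrently
     * inserts 'y' at 1. After transformation, both replicas apply the
     * remote op at an adjusted position and end with "axyb". */
    struct ins a = {1, 'x', 0}, b = {1, 'y', 1};
    struct ins a2 = transform(a, b);   /* applied at site 1 after b */
    struct ins b2 = transform(b, a);   /* applied at site 0 after a */
    printf("a at site1 -> pos %d; b at site0 -> pos %d\n", a2.pos, b2.pos);
}
```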
224

Data Race Detection for Parallel Programs Using a Virtual Platform

Haverås, Daniel January 2018 (has links)
Data races are highly destructive bugs found in concurrent programs. Because of unordered thread interleavings, data races can randomly appear and disappear during the debugging process, which makes them difficult to find and reproduce. A data race exists when multiple threads or processes concurrently access a shared memory address, with at least one of the accesses being a write. Such a scenario can cause data corruption, memory leaks, crashes, or incorrect execution. It is therefore important that data races are absent from production software. This thesis explores dynamic data race detection in programs running on Ericsson’s System Virtualization Platform (SVP), a SystemC/TLM-2.0-based virtual platform used for running software on simulated hardware. SVP is a bit-accurate simulator of Ericsson Many-Core Architecture (EMCA) hardware, enabling software and hardware to be developed in parallel, as well as providing unique insight into software execution. This latter property of SVP has been utilized to implement SVPracer, a proof-of-concept dynamic data race detector. SVPracer is based on a happens-before algorithm similar to Google’s ThreadSanitizer v2, but is significantly different in implementation, as it relies entirely on instrumenting binary code at runtime without requiring code modification at build time. A set of test programs exhibiting various data races were written and compiled for EOS, the operating system (OS) running on EMCA Digital Signal Processors (DSPs). Similar programs were created for Linux using POSIX APIs, to compare SVPracer against ThreadSanitizer v2. Both SVPracer and ThreadSanitizer v2 correctly detect the data races present in the respective test programs. Further work must be done in SVPracer to eliminate some false positive results caused by missing support for some OS functionality, such as semaphores. Still, the present state of SVPracer is sufficient proof that dynamic data race detection is possible using a virtual platform. Future work could involve exploring other data race detection algorithms as well as implementing deadlock/livelock detection in virtual platforms.
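
The class of bug SVPracer targets is easy to exhibit; the following C/pthreads program (a generic example, not one of the thesis's EOS test programs) contains a textbook data race that a happens-before detector such as ThreadSanitizer flags:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;               /* shared, unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;                     /* racy read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* The two threads' accesses are unordered by happens-before, so
     * the final value is nondeterministic -- usually below 200000. */
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```

Compiling with `gcc -fsanitize=thread -pthread` reports the race on `counter` at runtime.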
225

Efficient Compiler and Runtime Support for Serializability and Strong Semantics on Commodity Hardware

Sengupta, Aritra 07 September 2017 (has links)
No description available.
226

Revisiting Monitors

Renan Almeida de Miranda Santos 13 August 2020 (has links)
Most current programming languages do not restrict the use of the concurrency primitives they provide, leaving it to the programmer to detect data races. In this dissertation, we revisit the monitor model, which guards against data races by guaranteeing that accesses to shared variables occur only inside monitors, and show that this concept can be implemented in a programming language with referential semantics, given appropriate typing rules. We describe the Aria programming language, designed with native monitors according to these rules. Through the discussion of classic concurrency problems, we evaluate the use of Aria monitors for synchronization at different levels of granularity and extend the language with new features to address the limitations of monitors regarding performance and expressiveness.
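
The monitor discipline the dissertation revisits can be approximated in C with POSIX primitives (Aria makes this a native, type-checked construct); in this sketch, the shared variable is reachable only through entry procedures that hold the monitor lock, so races on it are excluded by construction:

```c
#include <pthread.h>
#include <stdbool.h>

struct slot_monitor {
    pthread_mutex_t lock;
    pthread_cond_t  changed;
    bool full;
    int  item;                    /* the guarded shared variable */
};

#define SLOT_MONITOR_INIT \
    { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false, 0 }

void slot_put(struct slot_monitor *m, int v)
{
    pthread_mutex_lock(&m->lock);          /* enter monitor */
    while (m->full)
        pthread_cond_wait(&m->changed, &m->lock);
    m->item = v;
    m->full = true;
    pthread_cond_broadcast(&m->changed);
    pthread_mutex_unlock(&m->lock);        /* leave monitor */
}

int slot_get(struct slot_monitor *m)
{
    pthread_mutex_lock(&m->lock);
    while (!m->full)
        pthread_cond_wait(&m->changed, &m->lock);
    int v = m->item;
    m->full = false;
    pthread_cond_broadcast(&m->changed);
    pthread_mutex_unlock(&m->lock);
    return v;
}

int main(void)
{
    struct slot_monitor m = SLOT_MONITOR_INIT;
    slot_put(&m, 7);
    return slot_get(&m) == 7 ? 0 : 1;
}
```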
227

EventManager: A Tool for Analysing Concurrent Programs

Anna Leticia Alegria P de Oliveira 10 October 2022 (has links)
Students learning concurrent programming often struggle with tests due to the non-deterministic nature of thread scheduling. It is in general hard to test specific scenarios and harder yet to repeat a given scenario for further tests after changes to the code. In this thesis, we present EventManager: a tool we developed that allows users to instrument their programs, marking events in the code and specifying valid event sequences using a domain-specific language (DSL). This language restricts thread scheduling to obey the allowed sequences for these events. We describe the implementation of EventManager for applications based on POSIX threads, and we investigate the tool as applied to solutions of classical concurrency problems to verify the expressiveness of the created language.
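
EventManager's core mechanism can be sketched as follows; this C/pthreads toy is my own simplification (the real tool uses a DSL over event sequences), blocking each marked event until it matches the next entry in an allowed sequence and thereby forcing one specific interleaving for a test:

```c
#include <pthread.h>
#include <string.h>
#include <stdio.h>

static const char *allowed[] = { "producer_put", "consumer_get" };
static int next_idx = 0;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

/* Instrumentation point the user inserts into their program. */
static void event(const char *name)
{
    pthread_mutex_lock(&mu);
    while (strcmp(allowed[next_idx], name) != 0)
        pthread_cond_wait(&cv, &mu);       /* not our turn yet */
    next_idx++;                            /* consume this event */
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&mu);
}

static void *producer(void *arg) { (void)arg; event("producer_put"); return NULL; }
static void *consumer(void *arg) { (void)arg; event("consumer_get"); return NULL; }

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);  /* started first, but must wait */
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    puts("events obeyed the allowed sequence");
}
```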
228

Extracting Parallelism from Legacy Sequential Code Using Transactional Memory

Saad Ibrahim, Mohamed Mohamed 26 July 2016 (has links)
Increasing the number of processors has become the mainstream for modern chip design approaches. However, most applications are designed or written for single-core processors, so they do not benefit from the numerous underlying computation resources. Moreover, there exists a large base of legacy software which requires an immense effort and cost of rewriting and re-engineering to be made parallel. In the past decades, there has been a growing interest in automatic parallelization. This is to relieve programmers from the painful and error-prone manual parallelization process, and to cope with the architectural trend of multi-core and many-core CPUs. Automatic parallelization techniques vary in properties such as: the level of parallelism (e.g., instructions, loops, traces, tasks); the need for custom hardware support; using optimistic execution or relying on conservative decisions; operating online, offline, or both; and the level of source code exposure. Transactional Memory (TM) has emerged as a powerful concurrency control abstraction. TM simplifies parallel programming to the level of coarse-grained locking while achieving fine-grained locking performance. This dissertation exploits TM as an optimistic execution approach for transforming a sequential application into a parallel one. The design and implementation of two frameworks that support automatic parallelization, Lerna and HydraVM, are proposed, along with a number of algorithmic optimizations to make the parallelization effective. HydraVM is a virtual machine that automatically extracts parallelism from legacy sequential code (at the bytecode level) through a set of techniques including code profiling, data dependency analysis, and execution analysis. HydraVM is built by extending the Jikes RVM and modifying its baseline compiler. Correctness of the program is preserved by exploiting Software Transactional Memory (STM) to manage concurrent and out-of-order memory accesses. Our experiments show that HydraVM achieves speedup between 2×-5× on a set of benchmark applications. Lerna is a compiler framework that automatically and transparently detects and extracts parallelism from sequential code through a set of techniques including code profiling, instrumentation, and adaptive execution. Lerna is cross-platform and independent of the programming language. The parallel execution exploits memory transactions to manage concurrent and out-of-order memory accesses. This scheme makes Lerna very effective for sequential applications with data sharing. This thesis introduces the general conditions for embedding any transactional memory algorithm into Lerna. In addition, the ordered versions of four state-of-the-art algorithms have been integrated and evaluated using multiple benchmarks, including the RSTM micro-benchmarks, STAMP, and PARSEC. Lerna showed great results, with an average 2.7× (and up to 18×) speedup over the original (sequential) code. While prior research shows that transactions must commit in program order to preserve program semantics, enforcing that ordering imposes scalability constraints at large numbers of cores. In this dissertation, we eliminate the need to commit transactions sequentially without affecting program consistency. This is achieved by building a cooperation mechanism in which transactions can safely forward some changes. This approach eliminates some of the false conflicts and increases the concurrency level of the parallel application. This thesis proposes a set of commit-order algorithms that follow the aforementioned approach. Interestingly, using the proposed commit-order algorithms, the peak gain over the sequential non-instrumented execution is 10× in the RSTM micro-benchmarks and 16.5× in STAMP. Another main contribution is to enhance the concurrency and performance of TM in general, and its usage for parallelization in particular, by extending TM primitives. The extended TM primitives extract the embedded low-level application semantics without affecting the TM abstraction. Furthermore, as the proposed extensions capture common code patterns, they can be handled automatically through the compilation process; in this work, that was done by modifying the GCC compiler to support our TM extensions. Results showed speedups of up to 4× on different applications, including micro-benchmarks and STAMP. Our final contribution is supporting commit order through Hardware Transactional Memory (HTM). The HTM contention manager cannot be modified because it is implemented inside the hardware. Given this constraint, we exploit HTM to reduce the transactional execution overhead by proposing two novel commit-order algorithms and a hybrid reduced-hardware algorithm. The use of HTM improves performance by up to 20%. / Ph. D.
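
The transactional execution model underlying Lerna and HydraVM can be sketched with GCC's built-in transactional memory support (compile with `gcc -fgnu-tm -pthread`); this toy parallel loop, illustrative only and with commit ordering elided, wraps each iteration's shared-memory updates in a transaction:

```c
#include <pthread.h>
#include <stdio.h>

#define N 4
static long hist[10];                 /* shared state */

static void *chunk(void *arg)
{
    long id = (long)arg;
    /* Iterations run speculatively; conflicting updates are detected
     * and retried by the TM runtime instead of serialized by a lock. */
    for (long i = id; i < 1000; i += N) {
        __transaction_atomic {
            hist[i % 10]++;
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, chunk, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    long sum = 0;
    for (int i = 0; i < 10; i++) sum += hist[i];
    printf("sum = %ld (expected 1000)\n", sum);
}
```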
229

A comparative study of transaction management services in multidatabase heterogeneous systems

Renaud, Karen Vera 04 1900 (has links)
Multidatabases are being actively researched as a relatively new area, many aspects of which are not yet fully understood; transaction management in multidatabase systems, in particular, still has many unresolved problems. The problem areas which this dissertation addresses are the classification of multidatabase systems, global concurrency control, correctness criteria in a multidatabase environment, global deadlock detection, atomic commitment, and crash recovery. A core body of research addressing these problems was identified and studied. The dissertation contributes to the multidatabase transaction management topic by introducing an alternative classification method for such multiple database systems, assessing existing research into transaction management schemes and, based on this assessment, proposing a transaction processing model founded on the optimal properties of transaction management identified during the course of this research. / Computing / M. Sc. (Computer Science)
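
Of the problem areas listed, atomic commitment is the most protocol-like; a minimal two-phase-commit sketch in C (a generic textbook rendering, not the dissertation's proposed model; the participant callbacks are hypothetical) looks like this:

```c
#include <stdbool.h>
#include <stdio.h>

enum vote { VOTE_COMMIT, VOTE_ABORT };

struct participant {
    const char *name;
    enum vote (*prepare)(void);   /* phase 1: "can you commit?" */
};

/* Phase 2: every site learns the same global decision, so the global
 * transaction is atomic: all local subtransactions commit, or none do. */
bool two_phase_commit(struct participant *p, int n)
{
    bool all_yes = true;
    for (int i = 0; i < n; i++)
        if (p[i].prepare() == VOTE_ABORT)
            all_yes = false;
    for (int i = 0; i < n; i++)
        printf("%s: %s\n", p[i].name, all_yes ? "COMMIT" : "ABORT");
    return all_yes;
}

static enum vote ready(void) { return VOTE_COMMIT; }

int main(void)
{
    struct participant sites[] = { {"db1", ready}, {"db2", ready} };
    return two_phase_commit(sites, 2) ? 0 : 1;
}
```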
230

Automatic Reasoning Techniques for Non-Serializable Data-Intensive Applications

Kaki, Gowtham 14 August 2019
The performance bottlenecks in modern data-intensive applications have induced database implementors to forsake high-level abstractions and trade off simplicity and ease of reasoning for performance. Among the first casualties of this trade-off are the well-known ACID guarantees, which simplify reasoning about concurrent database transactions. ACID semantics have become increasingly obsolete in practice due to serializable isolation, an integral aspect of ACID, being exorbitantly expensive. Databases, including the popular commercial offerings, default to weaker levels of isolation where the effects of concurrent transactions are visible to each other. Such weak isolation guarantees, however, are extremely hard to reason about, and have led to serious safety violations in real applications. The problem is further complicated in a distributed setting with asynchronous state replication, where high availability and low latency requirements compel large-scale web applications to embrace weaker forms of consistency (e.g., eventual consistency) besides weak isolation. Given the serious practical implications of safety violations in data-intensive applications, there is a pressing need to extend the state of the art in program verification to reach non-serializable data-intensive applications operating in a weakly-consistent distributed setting.
This thesis sets out to do just that. It introduces new language abstractions, program logics, reasoning methods, and automated verification and synthesis techniques that collectively allow programmers to reason about non-serializable data-intensive applications in the same way as their serializable counterparts. The contributions made are broadly threefold. Firstly, the thesis introduces a uniform formal model to reason about weakly isolated (non-serializable) transactions on a sequentially consistent (SC) relational database machine. A reasoning method that relates the semantics of weak isolation to the semantics of the database program is presented, and an automation technique, implemented in a tool called ACIDifier, is also described. The second contribution of this thesis is a relaxation of the machine model from sequential consistency to a specifiable level of weak consistency, and a generalization of the data model from relational to schema-less or key-value. A specification language to express weak consistency semantics at the machine level is described, and a bounded verification technique, implemented in a tool called Q9, is presented that bridges the gap between consistency specifications and program semantics, thus allowing high-level safety properties to be verified under arbitrary consistency levels. The final contribution of the thesis is a programming model inspired by version control systems that guarantees correct-by-construction replicated data types (RDTs) for building complex distributed applications with arbitrarily-structured replicated state. A technique based on decomposing inductively-defined data types into characteristic relations is presented, which is used to reason about the semantics of the data type under state replication, and eventually derive its correct-by-construction replicated variant automatically. An implementation of the programming model, called Quark, on top of a content-addressable storage is described, and the practicality of the programming model is demonstrated with the help of various case studies.
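
The convergence property that such derived RDTs guarantee can be illustrated with the simplest replicated data type; this C sketch of a grow-only counter (a standard CRDT example, not Quark's actual derivation output) increments only the local replica's slot and merges states by pointwise maximum, so all replicas converge regardless of message order:

```c
#include <stdio.h>

#define REPLICAS 3

struct gcounter { long slot[REPLICAS]; };

void gcounter_inc(struct gcounter *c, int replica_id)
{
    c->slot[replica_id]++;            /* only the local slot is written */
}

long gcounter_value(const struct gcounter *c)
{
    long sum = 0;
    for (int i = 0; i < REPLICAS; i++) sum += c->slot[i];
    return sum;
}

/* Join of two states: commutative, associative, and idempotent,
 * which is what makes replicas converge under weak consistency. */
void gcounter_merge(struct gcounter *into, const struct gcounter *from)
{
    for (int i = 0; i < REPLICAS; i++)
        if (from->slot[i] > into->slot[i])
            into->slot[i] = from->slot[i];
}

int main(void)
{
    struct gcounter a = {{0}}, b = {{0}};
    gcounter_inc(&a, 0); gcounter_inc(&a, 0);   /* replica 0: +2 */
    gcounter_inc(&b, 1);                        /* replica 1: +1 */
    gcounter_merge(&a, &b); gcounter_merge(&b, &a);
    printf("a=%ld b=%ld\n", gcounter_value(&a), gcounter_value(&b)); /* 3 3 */
}
```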
