  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Concurrency Optimization for Integrative Network Analysis

Barnes, Robert Otto II 12 June 2013 (has links)
Virginia Tech's Computational Bioinformatics and Bio-imaging Laboratory (CBIL) is exploring integrative network analysis techniques to identify subnetworks or genetic pathways that contribute to various cancers. Chen et al. developed a bagging Markov random field (BMRF)-based approach which examines gene expression data with prior biological information to reliably identify significant genes and proteins. Random resampling with replacement (bootstrapping, or bagging) is essential for confident results but is computationally demanding, as multiple iterations of the network identification (by simulated annealing) are required. The MATLAB implementation is computationally demanding, employs limited concurrency, and is thus time-prohibitive. Using strong software development discipline, we optimize BMRF with algorithmic, compiler, and concurrency techniques (including Nvidia GPUs) to reduce the wall-clock time needed for analysis of large-scale genomic data. In particular, we decompose the BMRF algorithm into functional blocks, implement the algorithm in C/C++, and further explore the C/C++ implementation with concurrency optimization. Experiments with simulated and real data demonstrate that a significant speedup of BMRF can be achieved by exploiting concurrency opportunities. We believe the experience gained in this research will help pave the way for computationally efficient algorithms that leverage concurrency, enabling researchers to efficiently analyze the larger-scale data sets essential for furthering cancer research. / Master of Science
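The bagging loop described above (many independent bootstrap resamples, each driving a costly network-identification step) parallelizes naturally. Below is a minimal, hedged Python sketch of the pattern; the toy sample-mean statistic stands in for the simulated-annealing step, and all names are illustrative rather than CBIL's actual code:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def bootstrap_replicate(data, seed):
    """One bagging iteration: resample with replacement, then score.
    The 'network identification' step is replaced here by a toy
    statistic (the sample mean) purely for illustration."""
    rng = random.Random(seed)
    sample = rng.choices(data, k=len(data))  # resample with replacement
    return sum(sample) / len(sample)

def bagged_estimate(data, n_boot=100, workers=4):
    """Run the bootstrap iterations concurrently and aggregate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda s: bootstrap_replicate(data, s),
                               range(n_boot)))
    return sum(scores) / len(scores)
```

In the thesis the per-iteration work runs on GPUs and in C/C++; threads merely stand in for that concurrency here.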
42

Effective fault localization techniques for concurrent software

Park, Sang Min 12 January 2015 (has links)
Multicore and Internet cloud systems have been widely adopted in recent years and have resulted in the increased development of concurrent programs. However, concurrency bugs are still difficult to test and debug for at least two reasons. Concurrent programs have large interleaving space, and concurrency bugs involve complex interactions among multiple threads. Existing testing solutions for concurrency bugs have focused on exposing concurrency bugs in the large interleaving space, but they often do not provide debugging information for developers to understand the bugs. To address the problem, this thesis proposes techniques that help developers in debugging concurrency bugs, particularly for locating the root causes and for understanding them, and presents a set of empirical user studies that evaluates the techniques. First, this thesis introduces a dynamic fault-localization technique, called Falcon, that locates single-variable concurrency bugs as memory-access patterns. Falcon uses dynamic pattern detection and statistical fault localization to report a ranked list of memory-access patterns for root causes of concurrency bugs. The overall Falcon approach is effective: in an empirical evaluation, we show that Falcon almost always ranks the program fragments corresponding to the root cause of the concurrency bug as "most suspicious". In principle, such a ranking can save a developer's time by allowing him or her to quickly home in on the problematic code, rather than having to sort through many reports. Others have shown that single- and multi-variable bugs cover a high fraction of all concurrency bugs that have been documented in a variety of major open-source packages; thus, being able to detect both is important. Because Falcon is limited to detecting single-variable bugs, we extend the Falcon technique to handle both single-variable and multi-variable bugs, using a unified technique, called Unicorn. 
Unicorn uses online memory monitoring and offline memory pattern combination to handle multi-variable concurrency bugs. The overall Unicorn approach is effective in ranking memory-access patterns for single- and multi-variable concurrency bugs. To further assist developers in understanding concurrency bugs, this thesis presents a fault-explanation technique, called Griffin, that provides more context of the root cause than Unicorn. Griffin reconstructs the root cause of the concurrency bugs by grouping suspicious memory accesses, finding suspicious method locations, and presenting calling stacks along with the buggy interleavings. By providing additional context, the overall Griffin approach can provide more information at a higher-level to the developer, allowing him or her to more readily diagnose complex bugs that may cross file or module boundaries. Finally, this thesis presents a set of empirical user studies that investigates the effectiveness of the presented techniques. In particular, the studies compare the effectiveness between a state-of-the-art debugging technique and our debugging techniques, Unicorn and Griffin. Among our findings, the user study shows that while the techniques are indistinguishable when the fault is relatively simple, Griffin is most effective for more complex faults. This observation further suggests that there may be a need for a spectrum of tools or interfaces that depend on the complexity of the underlying fault or even the background of the user.
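The abstract does not give Falcon's exact suspiciousness metric, but the ranking idea can be sketched with a standard spectrum-based formula such as Ochiai: patterns observed mostly in failing runs rank highest. The pattern names below are invented for illustration:

```python
import math

def ochiai(failed_with, passed_with, total_failed):
    """Ochiai suspiciousness: patterns seen often in failing runs and
    rarely in passing runs score highest."""
    denom = math.sqrt(total_failed * (failed_with + passed_with))
    return failed_with / denom if denom else 0.0

def rank_patterns(observations, total_failed):
    """observations: {pattern: (runs_failed_with, runs_passed_with)}.
    Returns patterns sorted from most to least suspicious."""
    scored = [(ochiai(f, p, total_failed), pat)
              for pat, (f, p) in observations.items()]
    return [pat for _, pat in sorted(scored, reverse=True)]
```

A pattern seen in 9 of 10 failing runs but only 1 passing run would top the list, which is the "most suspicious" ranking behaviour the evaluation describes.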
43

On bisimulation and model-checking for concurrent systems with partial order semantics

Gutierrez, Julian January 2011 (has links)
In concurrency theory—the branch of (theoretical) computer science that studies the logical and mathematical foundations of parallel computation—there are two main formal ways of modelling the behaviour of systems where multiple actions or events can happen independently and at the same time: either with interleaving or with partial order semantics. On the one hand, the interleaving semantics approach proposes to reduce concurrency to the nondeterministic, sequential computation of the events the system can perform independently. On the other hand, partial order semantics represent concurrency explicitly by means of an independence relation on the set of events that the system can execute in parallel; following this approach, the so-called ‘true concurrency’ approach, independence or concurrency is a primitive notion rather than a derived concept as in the interleaving framework. Using interleaving or partial order semantics is, however, more than a matter of taste. In fact, choosing one kind of semantics over the other can have important implications—both from theoretical and practical viewpoints—as making such a choice can raise different issues, some of which we investigate here. More specifically, this thesis studies concurrent systems with partial order semantics and focuses on their bisimulation and model-checking problems; the theories and techniques herein apply, in a uniform way, to different classes of Petri nets, event structures, and transition systems with independence (TSI) models. Among the results of this work are: first, a number of mu-calculi (in this case, fixpoint extensions of modal logic) that, in certain classes of systems, induce exactly the same identifications as some of the standard bisimulation equivalences used in concurrency. 
Secondly, the introduction of (infinite) higher-order logic games for bisimulation and for model-checking, where the players of the games are given (local) monadic second-order power on the sets of elements they are allowed to play. And, finally, the formalization of a new order-theoretic concurrent game model that provides a uniform approach to bisimulation and model-checking and bridges some mathematical concepts in order theory with the more operational world of games. In particular, we show that in all cases the logic games for bisimulation and model-checking developed in this thesis are sound and complete, and therefore, also determined—even when considering models of infinite state systems; moreover, these logic games are decidable in the finite case and underpin novel decision procedures for systems verification. Since the mu-calculi and (infinite) logic games studied here generalise well-known fixpoint modal logics as well as game-theoretic decision procedures for analysing concurrent systems with interleaving semantics, this thesis provides some of the groundwork for the design of a logic-based, game-theoretic framework for studying, in a uniform manner, several concurrent systems regardless of whether they have an interleaving or a partial order semantics.
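The distinction between interleaving and partial order semantics can be made concrete with Mazurkiewicz traces: two interleavings denote the same partial-order behaviour when one can be transformed into the other by swapping adjacent independent events. A naive fixpoint sketch (adequate for tiny alphabets, not an efficient normal-form algorithm):

```python
def canonical(trace, independent):
    """Normalize a word over events: repeatedly swap adjacent independent
    events into lexicographic order. Two interleavings of the same
    partial-order behaviour reach the same fixpoint here."""
    t = list(trace)
    changed = True
    while changed:
        changed = False
        for i in range(len(t) - 1):
            a, b = t[i], t[i + 1]
            if (a, b) in independent and a > b:
                t[i], t[i + 1] = b, a  # independent events commute
                changed = True
    return t

# Illustrative alphabet: 'a' and 'b' are independent, 'c' depends on both.
I = {("a", "b"), ("b", "a")}
```

Here `"abc"` and `"bac"` are the same trace (a and b commute), while `"acb"` is genuinely different, which is exactly the identification an interleaving semantics would miss.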
44

Concurrency model for the Majo language : An analysis of graph based concurrency

Fält, Markus January 2018 (has links)
Today most computers have powerful multi-core processors that can perform many calculations simultaneously. However, writing programs that take full advantage of the processors in modern-day computers can be a challenge, due to the difficulty of managing resources shared between parallel processing threads. This report documents the development of the Majo language, which aims to solve these problems by using abstractions to make parallel programming easier. The model for the abstractions divides the program into what are called nodes. One node represents one thread of execution, and nodes are connected to each other by thread-safe communication channels. All communication channels are first-in-first-out queues, and the nodes communicate by pushing and popping values from these queues. The performance of the language was measured and compared to other languages such as Python, Ruby and JavaScript. The tests were based on timing how long it took to generate the Mandelbrot set as well as to sort a list of integers. The language's scalability was also tested by measuring how much the execution time decreased as more parallel threads were added. The results from these tests showed that the developed prototype of the language had some unforeseen bugs that slowed down execution more than expected in some tests; however, the scalability test gave encouraging results. For future development, the language's execution time should be improved by fixing the relevant bugs, and a more generalized model for concurrency should be developed.
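The node-and-channel model described above can be sketched in a few lines: each node is a thread, and nodes exchange values only through thread-safe FIFO queues. This is an illustrative sketch of the idea, not the Majo runtime:

```python
import threading
import queue

def node(body):
    """One Majo-style node: a thread of execution that communicates
    only through thread-safe FIFO channels."""
    t = threading.Thread(target=body)
    t.start()
    return t

def pipeline(data):
    a_to_b = queue.Queue()   # FIFO channel between the two nodes
    results = queue.Queue()

    def producer():
        for x in data:
            a_to_b.put(x * x)   # push values onto the channel
        a_to_b.put(None)        # end-of-stream marker

    def consumer():
        total = 0
        while (x := a_to_b.get()) is not None:  # pop until the marker
            total += x
        results.put(total)

    threads = [node(producer), node(consumer)]
    for t in threads:
        t.join()
    return results.get()
```

Because all shared state flows through the queues, neither node ever touches the other's variables, which is precisely the resource-management problem the abstraction is meant to remove.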
45

Inlined Reference Monitors : Certification, Concurrency and Tree Based Monitoring

Lundblad, Andreas January 2013 (has links)
Reference monitor inlining is a technique for enforcing security policies by injecting security checks into the untrusted software in a style similar to aspect-oriented programming. The intention is that the injected code enforces compliance with the policy (security), without adding behavior (conservativity) or affecting existing policy compliant behavior (transparency). This thesis consists of four papers which cover a range of topics including formalization of monitor inlining correctness properties, certification of inlined monitors, limitations in multithreaded settings and extensions using data-flow monitoring. The first paper addresses the problem of having a potentially complex program rewriter as part of the trusted computing base. By means of proof-carrying code we show how the inliner can be replaced by a relatively simple proof-checker. This technique also enables the use of monitor inlining for quality assurance at development time, while minimizing the need for post-shipping code rewrites. The second paper focuses on the issues associated with monitor inlining in a concurrent setting. Specifically, it discusses the problem of maintaining transparency when introducing locks for synchronizing monitor state reads and updates. Due to Java's relaxed memory model, it turns out to be impossible for a monitor to be entirely transparent without sacrificing the security property. To accommodate this, the paper proposes a set of new correctness properties shown to be realistic and realizable. The third paper also focuses on problems due to concurrency and identifies a class of race-free policies that precisely characterizes the set of inlineable policies. This is done by showing that inlining of a policy outside this class is either not secure or not transparent, and by exhibiting a concrete algorithm for inlining of policies inside the class which is secure, conservative, and transparent. 
The paper also discusses how certification in the style of proof-carrying code could be supported in multithreaded Java programs. The fourth paper formalizes a new type of data-centric runtime monitoring which combines monitor inlining with taint tracking. As opposed to ordinary techniques, which focus on monitoring linear flows of events, the approach presented here relies on tree-shaped traces. The paper describes how the approach can be efficiently implemented and presents a denotational semantics for a simple "while" language, illustrating how the theoretical foundations are to be used in a practical setting. Each paper is concluded by a practical evaluation of the theoretical results, based on a prototype implementation and case studies on real-world applications and policies.
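The core of an inlined monitor is a security automaton consulted before each security-relevant event, with a lock serializing reads and updates of the monitor state (the synchronization issue the second paper examines). A toy sketch, with an invented open-before-read policy:

```python
import threading

class InlinedMonitor:
    """Sketch of an inlined reference monitor: a security automaton whose
    transition function is consulted, under a lock, before each
    security-relevant event. The policy below is illustrative only."""
    def __init__(self, transitions, start):
        self.state = start
        self.transitions = transitions
        self.lock = threading.Lock()  # serializes state reads/updates

    def check(self, event):
        with self.lock:
            nxt = self.transitions.get((self.state, event))
            if nxt is None:
                raise PermissionError(f"policy violation: {event}")
            self.state = nxt

# Toy policy: a file may be read only after it has been opened.
policy = InlinedMonitor({("closed", "open"): "opened",
                         ("opened", "read"): "opened",
                         ("opened", "close"): "closed"}, "closed")
```

The injected `check` calls are what the inliner weaves into the untrusted program; the thesis's point is that the lock shown here is exactly what threatens transparency under Java's memory model.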
46

Advanced Concepts in Asynchronous Exception Handling

Krischer, Roy January 2010 (has links)
Asynchronous exception handling is a useful and sometimes necessary alternative form of communication among threads. This thesis examines and classifies general concepts related to asynchrony, asynchronous propagation control, and how asynchronous exception handling affects control flow. The work covers four advanced topics affecting asynchronous exception-handling in a multi-threaded environment. The first topic is concerned with the non-determinism that asynchronous exceptions introduce into a program's control-flow because exceptions can be propagated at virtually any point during execution. The concept of asynchronous propagation control, which restricts the set of exceptions that can be propagated, is examined in depth. Combining it with a restriction of asynchrony that permits propagation of asynchronous exceptions only at certain well-defined (poll) points can re-establish sufficient determinism to verify a program's correctness, but introduces overhead, as well as a delay between the delivery of an asynchronous exception and its propagation. It also disturbs a programmer's intuition about asynchronous propagation in the program, and requires the use of programming idioms to avoid errors. The second topic demonstrates how a combined model of full and restricted asynchrony can be safely employed, and thus, allow for a more intuitive use of asynchronous propagation control, as well as potentially improve performance. The third topic focuses on the delay of propagation that is introduced when a thread is blocked, i.e., on concurrency constructs that provide mutual exclusion or synchronization. An approach is presented to transparently unblock threads so propagation of asynchronous termination and resumption exceptions can begin immediately. The approach does not require additional syntax, simplifies certain programming situations, and can improve performance. 
The fourth topic explores usability issues affecting the understanding of (asynchronous) exception handling as a language feature. To overcome these issues, tools and language features are presented that help in understanding exception handling code by providing additional run-time information, as well as assist in testing. For all topics, the necessary extensions to the syntax/semantics of the language are discussed; where applicable, a prototypical implementation is presented, with examples that demonstrate the benefits of the new approaches.
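Restricting asynchrony to poll points, as discussed above, can be sketched as a pending-exception slot that one thread fills and the target thread drains only at well-defined points. Names here are illustrative:

```python
import threading

class AsyncRaise:
    """Restricted asynchrony, sketched: an exception delivered to a thread
    is only *propagated* when that thread reaches a poll point, which is
    the source of the delivery/propagation delay discussed above."""
    def __init__(self):
        self.pending = None
        self.lock = threading.Lock()

    def deliver(self, exc):
        """Called by another thread: the exception becomes pending."""
        with self.lock:
            self.pending = exc

    def poll(self):
        """A well-defined propagation point in the target thread."""
        with self.lock:
            exc, self.pending = self.pending, None
        if exc is not None:
            raise exc
```

The cost is visible even in the sketch: between `deliver` and the next `poll` the target thread keeps running, and a thread blocked on a lock never polls at all, which is the unblocking problem the third topic addresses.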
47

Orchestration and atomicity

Kitchin, David Wilson 11 September 2013 (has links)
This dissertation presents the concurrent programming language Ora, an extension of the Orc orchestration language with the capability to execute transactions. A new formal definition of transactions is given, in terms of two complementary properties: atomicity and coatomicity. These properties are described in terms of a partial order of events, rather than as properties of a totally ordered program trace. Atomicity and coatomicity are ensured in Ora programs by a novel algorithm for multiversion concurrency control.
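Multiversion concurrency control, which the dissertation uses to ensure atomicity and coatomicity, can be sketched as a store where writers append timestamped versions and readers see a consistent snapshot. This toy sketch illustrates the versioning idea only, not Ora's actual algorithm:

```python
import itertools

class MVStore:
    """Minimal multiversion store: writers append timestamped versions;
    a reader sees the newest version no later than its snapshot, so its
    reads are consistent even while writers keep appending."""
    def __init__(self):
        self.versions = {}               # key -> [(ts, value), ...]
        self.clock = itertools.count(1)  # monotone logical timestamps

    def write(self, key, value):
        ts = next(self.clock)
        self.versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot_read(self, key, snapshot_ts):
        candidates = [(ts, v) for ts, v in self.versions.get(key, [])
                      if ts <= snapshot_ts]
        return max(candidates)[1] if candidates else None
```

Because old versions are never overwritten, a transaction reading at its start timestamp observes an atomic snapshot regardless of the total order in which writes actually happened.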
48

Estimating Network Features and Associated Measures of Uncertainty and Their Incorporation in Network Generation and Analysis

Goyal, Ravi 19 November 2012 (has links)
The efficacy of interventions to control HIV spread depends upon many features of the communities where they are implemented, including not only prevalence, incidence, and per contact risk of transmission, but also properties of the sexual or transmission network. For this reason, HIV epidemic models have to take into account network properties including degree distribution and mixing patterns. The use of sampled data to estimate properties of a network is a common practice; however, current network generation methods do not account for the uncertainty in the estimates due to sampling. In chapter 1, we present a framework for constructing collections of networks using sampled data collected from ego-centric surveys. The constructed networks not only target estimates for density, degree distributions and mixing frequencies, but also incorporate the uncertainty due to sampling. Our method is applied to the National Longitudinal Study of Adolescent Health and considers two sampling procedures. We demonstrate how a collection of networks constructed using the proposed methods is useful in investigating variation in unobserved network topology, and therefore also insightful for studying processes that operate on networks. In chapter 2, we focus on the degree to which the impact of concurrency on HIV incidence in a community may be overshadowed by differences in unobserved, but local, network properties. Our results demonstrate that even after controlling for cumulative ego-centric properties, i.e., degree distribution and concurrency, other network properties, which include degree mixing and clustering, can be very influential on the size of the potential epidemic. In chapter 3, we demonstrate the need to incorporate information about degree mixing patterns in such modeling. 
We present a procedure to construct collections of bipartite networks, given point estimates for degree distribution, that either makes use of information on the degree mixing matrix or assumes that no such information is available. These methods permit a demonstration of the differences between these two network collections, even when degree sequence is fixed. Methods are also developed to estimate degree mixing patterns, given a point estimate for the degree distribution.
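Generating a network that targets an estimated degree sequence, the starting point of the framework above, is often done with a configuration-model construction: give each node as many edge "stubs" as its degree and pair the stubs at random; repeating this over resampled degree estimates is what propagates the sampling uncertainty into the network collection. A minimal sketch (self-loops and multi-edges kept for simplicity; this is a generic construction, not the thesis's exact procedure):

```python
import random

def configuration_graph(degrees, rng):
    """Configuration-model sketch: node i contributes degrees[i] stubs,
    and stubs are paired uniformly at random, so the realized graph
    matches the target degree sequence exactly."""
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2:
        raise ValueError("degree sequence must have even sum")
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))  # consecutive stubs pair up
```

Drawing a fresh `degrees` vector from the sampling distribution of the estimates before each call yields a collection of networks rather than a single point-estimate network.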
49

Computational process networks : a model and framework for high-throughput signal processing

Allen, Gregory Eugene 16 June 2011 (has links)
Many signal and image processing systems for high-throughput, high-performance applications require concurrent implementations in order to realize desired performance. Developing software for concurrent systems is widely acknowledged to be difficult, with common industry practice leaving the burden of preventing concurrency problems on the programmer. The Kahn Process Network model provides the mathematically provable property of determinism of a program result regardless of the execution order of its processes, including concurrent execution. This model is also natural for describing streams of data samples in a signal processing system, where processes transform streams from one data type to another. However, a Kahn Process Network may require infinite memory to execute. I present the dynamic distributed deadlock detection and resolution (D4R) algorithm, which permits execution of Process Networks in bounded memory if it is possible. It detects local deadlocks in a Process Network, determines whether the deadlock can be resolved and, if so, identifies the process that must take action to resolve the deadlock. I propose the Computational Process Network (CPN) model which is based on the formalisms of Kahn’s PN model, but with enhancements that are designed to make it efficiently implementable. These enhancements include multi-token transactions to reduce execution overhead, multi-channel queues for multi-dimensional synchronous data, zero-copy semantics, and consumer and producer firing thresholds for queues. Firing thresholds enable memoryless computation of sliding window algorithms, which are common in signal processing systems. I show that the Computational Process Network model preserves the formal properties of Process Networks, while reducing the operations required to implement sliding window algorithms on continuous streams of data. 
I also present a high-throughput software framework that implements the Computational Process Network model using C++, and which maps naturally onto distributed targets. This framework uses POSIX threads, and can exploit parallelism in both multi-core and distributed systems. Finally, I present case studies to exercise this framework and demonstrate its performance and utility. The final case study is a three-dimensional circular convolution sonar beamformer and replica correlator, which demonstrates the high throughput and scalability of a real-time signal processing algorithm using the CPN model and framework.
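A consumer firing threshold can be sketched as a node that fires only once enough tokens are buffered, then slides its window by one; a real CPN implementation avoids the copying this toy version does (that is the point of its zero-copy semantics). The moving-sum filter is an invented example:

```python
import collections
import queue

def sliding_window_node(inq, outq, threshold=3):
    """CPN-style consumer firing threshold, sketched: the node fires only
    when `threshold` tokens are available, emits one result over the full
    window, then the window slides by one token."""
    window = collections.deque(maxlen=threshold)
    while True:
        token = inq.get()
        if token is None:            # end-of-stream marker
            outq.put(None)
            return
        window.append(token)         # deque drops the oldest token itself
        if len(window) == threshold:  # firing threshold reached
            outq.put(sum(window))     # e.g. a moving-sum filter
```

Fed the stream 1..5 with a threshold of 3, the node emits the window sums 6, 9 and 12: a sliding-window computation with no state outside the queue and the fixed-size window, which is what "memoryless" refers to above.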
50

Θέματα ορθότητας ταυτόχρονων αλγορίθμων σε δομές ευρετηρίου / Correctness issues of concurrent algorithms on index structures

Θεοδωρόπουλος, Κωνσταντίνος 16 May 2007 (has links)
This thesis studies concurrent algorithms on (secondary-memory) index structures, as well as their correctness models. The index structures we study are B-trees, suitably modified to maximize efficiency in a concurrent environment: in addition to the ordinary pointers of the B-tree, pointers between nodes at the same level have been added, forming chains. We begin with some phenomena that arise when several processes execute in parallel. These phenomena occur when processes simultaneously access common data on which their execution depends; we show how processes can be synchronized so that such unwanted phenomena do not occur. We then review some basic concurrent algorithms that solve fundamental problems in concurrency theory, such as the producer-consumer problem. Next we move to the field of algorithms for index structures. The structure we examine is the B-tree, the structure of choice for database designers organizing large amounts of data on the system's secondary memory unit; support for concurrency in this structure is essential for multi-user databases and other useful applications. The efficiency of these concurrent algorithms is measured by the amount of memory to which each algorithm requires exclusive access in order to synchronize with the others. We review various algorithms and present one of our own. Finally, we examine the notion of correctness for concurrent algorithms. We review various correctness models and criteria, and add a criterion of our own that permits far greater simplicity in algorithm design than previous criteria. The contribution of this thesis is twofold: 1) the development of a new algorithm that achieves higher performance than previous ones thanks to the new synchronization scheme it proposes, in which synchronization is achieved not through global mutual exclusion (the standard practice) but through relative, case-by-case mutual exclusion; and 2) the development of a new correctness criterion that allows simplicity and clarity in the design of highly efficient concurrent algorithms, taking into consideration the states that can arise in an area of shared memory and offering a way to block the system's transition into those states.
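The same-level chains described above are the key idea of Lehman-Yao style B-link trees: each node carries a high key and a right-sibling pointer, so a reader that lands on a node whose key range has shrunk (e.g. due to a concurrent split) simply chases the chain instead of holding locks. A leaf-only sketch (the thesis's own algorithm and correctness criterion are not reproduced here):

```python
class Node:
    """Leaf of a B-link-style structure: sorted keys plus a high key and
    a right-sibling link forming the same-level chain."""
    def __init__(self, keys, high_key, right=None):
        self.keys = keys
        self.high_key = high_key  # upper bound of this node's key range
        self.right = right        # next node in the level chain

def link_search(node, key):
    """Search that recovers from landing on the 'wrong' node by
    following right-sibling links until the key range fits."""
    while node is not None and key > node.high_key:
        node = node.right         # chase the same-level chain
    return node is not None and key in node.keys
```

A writer splitting a node first links the new sibling into the chain and only then updates the parent, so a concurrent reader is never lost: at worst it takes one extra hop to the right.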
