51

Virtual files : a framework for experimental design

Ross, George D. M. January 1983 (has links)
The increasing power and decreasing cost of computers has resulted in them being applied in an ever widening area. In the world of Computer Aided Design it is now practicable to involve the machine in the earlier stages where a design is still speculative, as well as in the later stages where the computer's calculating ability becomes paramount. Research on database systems has not followed this trend, concentrating instead on commercial applications, with the result that there are very few systems targeted at the early stages of the design process. In this thesis we consider the design and implementation of the file manager for such a system, first of all from the point of view of a single designer working on an entire design, and then from the point of view of a team of designers, each working on a separate aspect of a design. We consider the functionality required of the type of system we are proposing, defining the terminology of experiments to describe it. Having ascertained our requirements we survey current database technology in order to determine to what extent it meets our requirements. We consider traditional concurrency control methods and conclude that they are incompatible with our requirements. We consider current data models and conclude that, with the exception of the persistent programming model, they are not appropriate in the context required, while the implementation of the persistent programming model provides transactions on data structures but not experiments. The implementation of experiments is considered. We examine a number of potential methods, deciding on differential files as the one most likely both to meet our requirements and to have the lowest overheads. Measurements conducted on both a preliminary and a full-scale implementation confirm that this is the case. There are, nevertheless, further gains in convenience and performance to be obtained by exploiting the capabilities of the hardware to the full; we discuss these in relation to virtual memory systems, with particular reference to the VAX/VMS environment. Turning to the case where several designers are each working on a (nearly) distinct part of a design, we consider how to detect conflicts between experiments. Basing our approach on optimistic concurrency control methods, we show how read and write sets may be used to determine those areas of the database where conflicts might arise. As an aside, we show how the methods we propose can be used in an alternative approach to optimistic concurrency control, giving a reduction in system overheads for certain applications. We consider implementation techniques, concluding that a differential files approach has significant advantages in maintaining write sets, while a two-level bitmap may be used to maintain read sets efficiently.
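The differential-file mechanism the abstract settles on can be pictured with a short sketch. This is a minimal illustration of the general idea rather than the thesis's VAX/VMS implementation; the page-keyed dictionary and the method names are hypothetical.

```python
# A minimal sketch of a differential file: an experiment's writes go to a small
# side file, reads fall through to the unchanged base file, and the experiment
# can be folded into the base or abandoned cheaply.

class DifferentialFile:
    def __init__(self, base):
        self.base = base      # base file, e.g. a dict mapping page id -> contents
        self.diff = {}        # only the pages changed by the current experiment

    def read(self, page):
        # Prefer the experimental version of a page, if one exists.
        return self.diff.get(page, self.base.get(page))

    def write(self, page, data):
        # Writes never touch the base file.
        self.diff[page] = data

    def commit(self):
        # Accept the experiment: fold its changes into the base file.
        self.base.update(self.diff)
        self.diff.clear()

    def abandon(self):
        # Reject the experiment: simply discard the differential file.
        self.diff.clear()

design = DifferentialFile(base={"page0": "original layout"})
design.write("page0", "speculative layout")
print(design.read("page0"))   # speculative layout
design.abandon()
print(design.read("page0"))   # original layout
```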
52

Inlined Reference Monitors : Certification, Concurrency and Tree Based Monitoring

Lundblad, Andreas January 2013 (has links)
Reference monitor inlining is a technique for enforcing security policies by injecting security checks into untrusted software, in a style similar to aspect-oriented programming. The intention is that the injected code enforces compliance with the policy (security) without adding behavior (conservativity) or affecting existing policy-compliant behavior (transparency). This thesis consists of four papers which cover a range of topics, including formalization of monitor inlining correctness properties, certification of inlined monitors, limitations in multithreaded settings, and extensions using data-flow monitoring. The first paper addresses the problem of having a potentially complex program rewriter as part of the trusted computing base. By means of proof-carrying code we show how the inliner can be replaced by a relatively simple proof-checker. This technique also enables the use of monitor inlining for quality assurance at development time, while minimizing the need for post-shipping code rewrites. The second paper focuses on the issues associated with monitor inlining in a concurrent setting. Specifically, it discusses the problem of maintaining transparency when introducing locks for synchronizing monitor state reads and updates. Due to Java's relaxed memory model, it turns out to be impossible for a monitor to be entirely transparent without sacrificing the security property. To accommodate this, the paper proposes a set of new correctness properties shown to be realistic and realizable. The third paper also focuses on problems due to concurrency and identifies a class of race-free policies that precisely characterizes the set of inlineable policies. This is done by showing that inlining of a policy outside this class is either not secure or not transparent, and by exhibiting a concrete algorithm for inlining of policies inside the class which is secure, conservative, and transparent. The paper also discusses how certification in the style of proof-carrying code could be supported in multithreaded Java programs. The fourth paper formalizes a new type of data-centric runtime monitoring which combines monitor inlining with taint tracking. As opposed to ordinary techniques which focus on monitoring linear flows of events, the approach presented here relies on tree-shaped traces. The paper describes how the approach can be efficiently implemented and presents a denotational semantics for a simple "while" language, illustrating how the theoretical foundations are to be used in a practical setting. Each paper is concluded by a practical evaluation of the theoretical results, based on a prototype implementation and case studies on real-world applications and policies.
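As a rough illustration of the inlining idea only (in Python rather than the Java bytecode setting of the thesis; the policy, the decorator, and all names are invented for the example), guard code consulting a small security-automaton state can be woven in front of each security-relevant operation:

```python
import threading

class SecurityError(Exception):
    pass

_monitor_lock = threading.Lock()   # serialises monitor-state reads and updates
_has_read_file = False             # state of a tiny security automaton

def guarded(event):
    """Decorator standing in for the checks an inliner would weave in."""
    def wrap(fn):
        def inlined(*args, **kwargs):
            global _has_read_file
            with _monitor_lock:
                if event == "read":
                    _has_read_file = True
                elif event == "send" and _has_read_file:
                    raise SecurityError("policy violation: send after file read")
            return fn(*args, **kwargs)
        return inlined
    return wrap

@guarded("read")
def read_file(path):
    return "contents of " + path     # stand-in for the untrusted operation

@guarded("send")
def send(data):
    return len(data)                 # stand-in for a network send

read_file("notes.txt")
try:
    send("hello")
except SecurityError as err:
    print(err)                       # the inlined check blocks the call
```

The lock around the monitor state hints at the transparency problem the second and third papers analyse: the synchronization needed for security can itself perturb the program's concurrent behaviour.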
53

Advanced Concepts in Asynchronous Exception Handling

Krischer, Roy January 2010 (has links)
Asynchronous exception handling is a useful and sometimes necessary alternative form of communication among threads. This thesis examines and classifies general concepts related to asynchrony, asynchronous propagation control, and how asynchronous exception handling affects control flow. The work covers four advanced topics affecting asynchronous exception-handling in a multi-threaded environment. The first topic is concerned with the non-determinism that asynchronous exceptions introduce into a program's control-flow because exceptions can be propagated at virtually any point during execution. The concept of asynchronous propagation control, which restricts the set of exceptions that can be propagated, is examined in depth. Combining it with a restriction of asynchrony that permits propagation of asynchronous exceptions only at certain well-defined (poll) points can re-establish sufficient determinism to verify a program's correctness, but introduces overhead, as well as a delay between the delivery of an asynchronous exception and its propagation. It also disturbs a programmer's intuition about asynchronous propagation in the program, and requires the use of programming idioms to avoid errors. The second topic demonstrates how a combined model of full and restricted asynchrony can be safely employed, and thus, allow for a more intuitive use of asynchronous propagation control, as well as potentially improve performance. The third topic focuses on the delay of propagation that is introduced when a thread is blocked, i.e., on concurrency constructs that provide mutual exclusion or synchronization. An approach is presented to transparently unblock threads so propagation of asynchronous termination and resumption exceptions can begin immediately. The approach does not require additional syntax, simplifies certain programming situations, and can improve performance. The fourth topic explores usability issues affecting the understanding of (asynchronous) exception handling as a language feature. To overcome these issues, tools and language features are presented that help in understanding exception handling code by providing additional run-time information, as well as assist in testing. For all topics, the necessary extensions to the syntax/semantics of the language are discussed; where applicable, a prototypical implementation is presented, with examples that demonstrate the benefits of the new approaches.
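The restricted-asynchrony model with poll points can be sketched as follows. This is an illustrative approximation in Python, not the thesis's language extension; the class and method names are invented.

```python
import threading

class Cancelled(Exception):
    pass

class AsyncRaiser:
    """Holds at most one pending asynchronous exception for a victim thread."""
    def __init__(self):
        self._pending = None
        self._lock = threading.Lock()

    def raise_in(self, exc):
        # Called from any other thread, at any time (delivery).
        with self._lock:
            self._pending = exc

    def poll(self):
        # Called by the victim only at well-defined poll points (propagation).
        with self._lock:
            exc, self._pending = self._pending, None
        if exc is not None:
            raise exc

raiser = AsyncRaiser()

def worker():
    try:
        while True:
            raiser.poll()            # the only place propagation can occur
            pass                     # ... one unit of work ...
    except Cancelled:
        print("worker cancelled at a poll point")

victim = threading.Thread(target=worker)
victim.start()
raiser.raise_in(Cancelled())         # delivery is immediate, propagation is delayed
victim.join()
```

The gap between `raise_in` and the next `poll` is exactly the delivery-to-propagation delay the thesis discusses, and the need to sprinkle `poll()` calls is the programming idiom it seeks to relax.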
54

Asynchronous Backup and Initialization of a Database Server for Replicated Database Systems

Bhalla, Subhash, Madnick, Stuart E. 14 April 2003 (has links)
A possibility of a temporary disconnection of database service exists in many computing environments. It is a common need to permit a participating site to lag behind and re-initialize to full recovery. It is also necessary that active transactions view a globally consistent system state for ongoing operations. We present an algorithm for on-the-fly backup and site initialization. The technique is non-blocking in the sense that failure and recovery procedures do not interfere with ordinary transactions. As a result, the system can tolerate disconnection of services and reconnection of disconnected services without incurring high overheads.
55

Spatially Induced Independence and Concurrency within Presheaves of Labelled Transition Systems

Fortier-Garceau, Simon January 2015 (has links)
In this thesis, we demonstrate how presheaves of labelled transition systems (LTS) acquire a very natural form of spatially induced independence on their actions when we allow a minimal amount of gluing on selected transitions within such systems. This gluing condition is characterized in the new model of LTS-adapted presheaf, and we also make use of the new model of asynchronous labelled transition system with equivalence (ALTSE) to characterize independence on actions. As such, our main result, the Theorem of Spatially Induced Independence, establishes functors from the categories of LTS-adapted presheaves to the categories of ALTSE-valued presheaves; it is a result that extends a proposition of Malcolm [SSTS] in the context of LTS-valued sheaves on complete Heyting algebras.
56

A Distributed Component-based Software Framework for Laboratory Automation Systems

January 2012 (has links)
abstract: Laboratory automation systems have seen many technological advances in recent times. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and need to be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components in the architecture communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive and thread-safe. The thesis also proposes a framework that has been developed to implement the ideas of the architecture. The framework is used to develop software that is scalable, distributed, responsive and thread-safe. The framework currently has components to control commonly used laboratory automation devices such as mechanical stages and cameras, and to perform common laboratory automation tasks such as imaging. / Dissertation/Thesis / Thesis Presentation / M.S. Computer Science 2012
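A rough sketch of the architectural idea follows; it is not the framework itself, and the `Component` class, the message kinds, and the device names are all hypothetical.

```python
import queue
import threading

class Component(threading.Thread):
    """One fairly autonomous component with its own thread and message queue."""
    def __init__(self, name):
        super().__init__()
        self.name = name
        self.inbox = queue.Queue()    # thread-safe: the only way in
        self.handlers = {}

    def on(self, kind, handler):
        self.handlers[kind] = handler

    def send(self, kind, payload=None):
        self.inbox.put((kind, payload))   # asynchronous: returns immediately

    def run(self):
        while True:
            kind, payload = self.inbox.get()
            if kind == "stop":
                break
            self.handlers[kind](payload)

# Hypothetical devices: a stage component asks a camera component to image a well.
camera = Component("camera")
camera.on("acquire", lambda well: print("imaging", well))
stage = Component("stage")
stage.on("move", lambda pos: camera.send("acquire", "well " + pos))

for c in (camera, stage):
    c.start()
stage.send("move", "A1")
for c in (stage, camera):             # shut down in dependency order
    c.send("stop")
    c.join()
```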
57

Θέματα ορθότητας ταυτόχρονων αλγορίθμων σε δομές ευρετηρίου / Correctness issues of concurrent algorithms on index structures

Θεοδωρόπουλος, Κωνσταντίνος 16 May 2007 (has links)
This thesis studies concurrent algorithms on (secondary-memory) index structures, together with their correctness models. The index structures we study are B-trees, suitably modified to maximize efficiency in a concurrent environment: in addition to the ordinary B-tree pointers, pointers linking nodes at the same level of the tree have been added, thus forming chains. We begin by examining some phenomena that arise when processes execute concurrently. These phenomena stem from processes simultaneously accessing shared data on which their execution depends, and we show how processes can be synchronised so that such unwanted phenomena do not occur. We then review some basic concurrent algorithms that represent solutions to fundamental problems in concurrency theory, such as the producer-consumer problem. Next we move to the field of algorithms for index structures. The structure we examine is the B-tree, the standard choice of database designers for organizing large volumes of data in a system's secondary memory unit; support for concurrency in this structure is essential for multi-user databases and other applications. The efficiency of these concurrent algorithms is measured by the amount of memory to which each algorithm requires exclusive access in order to synchronise with the others. We review various algorithms and present one of our own. Finally, we examine the notion of correctness for concurrent algorithms. We review various correctness models and criteria, and add a criterion of our own that permits far greater simplicity in algorithm design than previous criteria. The contributions of this thesis are: 1) a new algorithm that achieves higher performance than previous ones thanks to the new synchronisation scheme it proposes; specifically, synchronisation is achieved not through global mutual exclusion (the standard practice so far) but through relative, case-by-case mutual exclusion; and 2) a new correctness criterion that allows simplicity and clarity in the design of highly efficient concurrent algorithms; the criterion takes into account the intermediate states that can arise in an area of shared memory and offers a way to rule out transitions of the system into those states.
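The same-level chains are what let a search cope with concurrent splits. The following is an illustrative sketch of link-following search in the spirit of B-link trees, not the algorithms developed in the thesis; the node layout and the high-key convention are simplified.

```python
# If a concurrent split has moved a key beyond this node's high key, the reader
# follows the right-sibling link instead of restarting from the root.

class Node:
    def __init__(self, keys, children=None, right=None, high_key=None):
        self.keys = keys            # sorted separator or data keys
        self.children = children    # None for leaf nodes
        self.right = right          # same-level sibling link (the "chain")
        self.high_key = high_key    # upper bound of keys reachable below

def search(node, key):
    while True:
        # Chase right links past nodes whose range no longer covers the key.
        while node.high_key is not None and key > node.high_key:
            node = node.right
        if node.children is None:            # leaf level
            return key in node.keys
        # Descend into the child whose range covers the key.
        i = 0
        while i < len(node.keys) and key > node.keys[i]:
            i += 1
        node = node.children[i]

# Two leaves produced by a split, linked left to right.
right_leaf = Node(keys=[40, 50])
left_leaf = Node(keys=[10, 20], right=right_leaf, high_key=30)
root = Node(keys=[30], children=[left_leaf, right_leaf])

print(search(root, 40))        # True: found by normal descent
print(search(left_leaf, 40))   # True: found by following the sibling link
```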
58

Concurrency model for the Majo language : An analysis of graph based concurrency

Fält, Markus January 2018 (has links)
Today most computers have powerful multi-core processors that can perform many calculations simultaneously. However, writing programs that take full advantage of the processors in modern-day computers can be a challenge, because of the difficulty of managing shared resources between parallel processing threads. This report documents the development of the Majo language, which aims to solve these problems by using abstractions to make parallel programming easier. The model behind the abstractions divides the program into what are called nodes. One node represents one thread of execution, and nodes are connected to each other by thread-safe communication channels. All communication channels are first-in-first-out queues, and the nodes communicate by pushing and popping values from these queues. The performance of the language was measured and compared to other languages such as Python, Ruby and JavaScript. The tests were based on timing how long it took to generate the Mandelbrot set as well as to sort a list of integers. The language's scalability was also tested by measuring how much the execution time decreased as more parallel threads were added. The results from these tests showed that the developed prototype of the language had some unforeseen bugs that slowed down execution more than expected in some tests. However, the scalability test gave encouraging results. For future development, the language's execution time should be improved by fixing the relevant bugs, and a more generalized model for concurrency should be developed.
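The node-and-channel model can be approximated in a few lines. This is a sketch of the described model, not Majo syntax; Python's `queue.Queue` stands in for Majo's thread-safe FIFO channels, and the node names are invented.

```python
import queue
import threading

def node(body):
    """Run one node (= one thread of execution) executing `body`."""
    t = threading.Thread(target=body)
    t.start()
    return t

numbers = queue.Queue()   # channel: generator -> squarer
squares = queue.Queue()   # channel: squarer -> collector
DONE = object()           # end-of-stream marker

def generator():
    for i in range(5):
        numbers.put(i)        # push onto the outgoing channel
    numbers.put(DONE)

def squarer():
    while (x := numbers.get()) is not DONE:   # pop from the incoming channel
        squares.put(x * x)
    squares.put(DONE)

def collector():
    while (x := squares.get()) is not DONE:
        print(x)              # prints 0, 1, 4, 9, 16

threads = [node(generator), node(squarer), node(collector)]
for t in threads:
    t.join()
```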
59

Constraint Solving for Diagnosing Concurrency Bugs

Khoshnood, Sepideh 28 May 2015 (has links)
Programmers often have to spend a significant amount of time inspecting the software code and execution traces to identify the root cause of a software bug. For a multithreaded program, debugging is even more challenging due to the subtle interactions between concurrent threads and the often astronomical number of possible interleavings. In this work, we propose a logical constraint-based symbolic analysis method to aid in the diagnosis of concurrency bugs and find their root causes, which can later be used to recommend repairs. In our method, the diagnosis process is formulated as a set of constraint solving problems. By leveraging the power of satisfiability (SAT) solvers and a bounded model checker, we perform a semantic analysis of the sequential computation as well as the thread interactions. The analysis is ideally suited for handling software with small to medium code size but complex concurrency control, such as device drivers, synchronization protocols, and concurrent data structures. We have implemented our method in a software tool and demonstrated its effectiveness in diagnosing subtle concurrency bugs in multithreaded C programs. / Master of Science
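To give a flavour of the formulation, here is a toy sketch using the Z3 SMT solver's Python API (the `z3-solver` package); this is not the thesis's tool or encoding. The four events of two racy counter increments get integer positions in a global interleaving, and the solver is asked for a schedule in which an update is lost.

```python
from z3 import Ints, Solver, Distinct, And, sat

# Events: each thread reads the shared counter, then writes it back incremented.
r1, w1, r2, w2 = Ints("r1 w1 r2 w2")
events = {"r1": r1, "w1": w1, "r2": r2, "w2": w2}

s = Solver()
s.add(Distinct(r1, w1, r2, w2))   # the events form one global interleaving
s.add(r1 < w1, r2 < w2)           # program order within each thread
s.add(And(r2 < w1, r1 < w2))      # both reads precede the other thread's write,
                                  # so one increment is lost

if s.check() == sat:
    m = s.model()
    schedule = sorted(events, key=lambda e: m[events[e]].as_long())
    print("buggy interleaving:", schedule)   # a concrete diagnosis of the race
else:
    print("no interleaving exposes the lost update")
```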
60

Optimizing Distributed Transactions: Speculative Client Execution, Certified Serializability, and High Performance Run-Time

Pandey, Utkarsh 01 September 2016 (has links)
On-line services already form an important part of modern life, with an immense potential for growth. Most of these services are supported by transactional systems, which in many cases are backed by database management systems (DBMSs). Many on-line services use replication to ensure high availability, fault tolerance and scalability. Replicated systems typically consist of different nodes running the service, co-ordinated by a distributed algorithm which aims to drive all the nodes along the same sequence of states by providing a total order over their operations. Thus, optimizing both local DBMS operations (through concurrency control) and the distributed algorithm driving the replicated service can enhance the performance of on-line services. Deferred Update Replication (DUR) is a well-known approach to designing scalable replicated systems. In this method, the database is fully replicated on each distributed node. User threads perform transactions locally and optimistically before a total order is reached. DUR-based systems find their best usage when remote transactions rarely conflict. Even in such scenarios, transactions may abort due to local contention on nodes. A generally adopted method to alleviate the local contention is to invoke a local certification phase to check if a transaction conflicts with other local transactions already completed. If so, the given transaction is aborted locally without burdening the ordering layer. However, this approach still results in many local aborts, which significantly degrades performance. The first main contribution of this thesis is PXDUR, a DUR-based transactional system which enhances the performance of DUR-based systems by alleviating local contention and increasing the transaction commit rate. PXDUR alleviates local contention by allowing speculative forwarding of shared objects from locally committed transactions awaiting total order to running transactions. PXDUR allows transactions running in parallel to use speculative forwarding, thereby enabling the system to utilize highly parallel multi-core platforms. PXDUR also enhances performance by optimizing the transaction commit process: it allows committing transactions to skip read-set validation when it is safe to do so. PXDUR achieves performance gains of an order of magnitude over its closest competitors under favorable conditions. Transactions also form an important part of centralized DBMSs, which tend to support multi-threaded access in order to utilize highly parallel hardware platforms. Applications can be wrapped in transactions which then access the DBMS under the rules of concurrency control. This allows users to develop applications that run on DBMSs without worrying about synchronization. Serializability is the de-facto standard form of isolation required by transactions for many applications. The existing methods employed by DBMSs to enforce serializability rely on explicit fine-grained locking. This eager-locking approach is pessimistic and can be too conservative for many applications, and it can severely limit the performance of DBMSs, especially in scenarios with moderate to high contention. This leads to the second major contribution of this thesis: TSAsR, an adaptive transaction processing framework which can be applied to DBMSs to improve performance. TSAsR allows the DBMS's internal synchronization to be more relaxed and enforces serializability through the processing of external meta-data in an optimistic manner.
It does not require any changes in the application code and achieves orders-of-magnitude performance improvements for high and moderate contention cases. Replicated transaction processing systems require a distributed algorithm to keep the system consistent by ensuring that each node executes the same sequence of deterministic commands. These algorithms generally employ State Machine Replication (SMR). Enhancing the performance of such algorithms is a potential way to increase the performance of distributed systems. However, the adoption of new SMR algorithms in production settings is limited by the huge verification cost involved in proving their correctness. There are frameworks that allow easy specification of SMR algorithms and subsequent verification; however, algorithms implemented in such frameworks give poor performance. This leads to the third major contribution of this thesis: Verified JPaxos, a JPaxos-based runtime system that can be integrated with an easy-to-verify I/O automaton based on the Multipaxos protocol. Multipaxos is specified in Higher Order Logic (HOL) for ease of verification, and the specification is used to generate executable code representing the Multipaxos state changes (an I/O automaton). The runtime drives the HOL-generated code and interacts with the service and network to create a fully functional replicated Multipaxos system. The runtime inherits its design from JPaxos, along with some optimizations. It achieves significant improvement over a state-of-the-art SMR verification framework while remaining comparable in performance to non-verified systems. / Master of Science
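The local certification step that PXDUR builds on can be pictured with a small sketch: a transaction's read set is validated against the write sets of transactions that committed locally after it began. This is a generic illustration of read-set validation, not PXDUR's actual mechanism; the version counter and the object identifiers are hypothetical.

```python
# Locally committed transactions, recorded as (commit_version, write_set).
committed = []

def certify(start_version, read_set):
    """Return True if the transaction may proceed to the ordering layer."""
    for version, write_set in committed:
        if version > start_version and write_set & read_set:
            return False          # a conflicting local commit happened meanwhile
    return True                   # no local conflict: submit for total order

# Two local commits after some transactions have already started.
committed.append((1, {"x"}))
committed.append((2, {"y", "z"}))

print(certify(start_version=0, read_set={"a", "b"}))   # True: no overlap
print(certify(start_version=1, read_set={"y"}))        # False: y written at version 2
print(certify(start_version=2, read_set={"y"}))        # True: that write was already visible
```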
