201 |
Functional and Imperative Object-Oriented Programming in Theory and Practice: A Study of Online Discussions in the Programming Community. Jernlund, Per; Stenberg, Martin. January 2019 (has links)
Functional programming (FP) has progressively become more prevalent, and techniques from the FP paradigm have been implemented in many imperative object-oriented programming (OOP) languages. However, there is no indication that OOP is going out of style. Nevertheless, the increased popularity of FP has sparked new discussions across the Internet between the FP and OOP communities regarding a multitude of related aspects. These discussions could provide insights into the questions and challenges faced by programmers today. This thesis investigates these online discussions on a small, contemporary scale in order to identify the most discussed aspects of FP and OOP. Once identified, the statements and claims made by various discussion participants were selected and compared to literature relating to those aspects and to the theory behind the paradigms, in order to determine whether there were any discrepancies between practitioners and theory. This was done to investigate whether practitioners had different ideas, in the form of best practices, that could influence theory. The most discussed aspect within FP and OOP was immutability and state, relating primarily to concurrency and performance. This thesis presents a selection of representative quotes that illustrate the different points of view held by groups in the community and then addresses those claims by investigating what is said in the literature. It was shown that there were no direct discrepancies between the practitioners and the theory.
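As a concrete illustration of the immutability-and-state debate (an editorial sketch, not drawn from the thesis itself), the following Java fragment contrasts a mutable object, whose concurrent callers must synchronize, with an immutable FP-style variant whose instances can be shared freely across threads:

```java
// Mutable state: concurrent callers must synchronize to avoid lost updates.
class MutableAccount {
    private long balance;
    synchronized void deposit(long amount) { balance += amount; }
    synchronized long balance() { return balance; }
}

// Immutable state in the FP style: "updates" return new values, so
// instances can be shared between threads without any locking.
final class ImmutableAccount {
    private final long balance;
    ImmutableAccount(long balance) { this.balance = balance; }
    ImmutableAccount deposit(long amount) { return new ImmutableAccount(balance + amount); }
    long balance() { return balance; }
}
```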
|
202 |
Halbordnungsbasierte Verfeinerung zur Verifikation verteilter Algorithmen. Peuker, Sibylle. 03 July 2001 (links)
The topic of this PhD thesis is the stepwise refinement of distributed algorithms. Stepwise refinement starts with a simple algorithm with certain desired properties. This algorithm is refined step by step such that the desired properties are preserved in each refinement step. The result is a complex distributed algorithm which satisfies concrete implementation requirements and which still has the desired properties. We propose a new property-preserving notion of refinement which is based on the causal ordering of actions of an algorithm. We call this notion transition refinement, and we define it first for elementary Petri nets. Furthermore, we discuss proof criteria. Then, we define and discuss the simultaneous refinement of several transitions. For modelling complex distributed algorithms, we use algebraic Petri nets instead of elementary Petri nets. We define transition refinement for algebraic Petri nets, and we show its relationship to simultaneous transition refinement in elementary Petri nets. Transition refinement is particularly suitable for refinement steps in which synchronous communication between agents is replaced by asynchronous message passing. We show this by means of a complex distributed algorithm for determining the minimal spanning tree of a weighted graph. We prove the correctness of this algorithm in several steps. Some of these steps are transition refinements; for other steps, well-known notions of refinement are sufficient. Therefore, we also carry over a well-known notion of refinement into our formal model.
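For readers unfamiliar with the underlying model, the standard enabledness and firing rule for elementary Petri nets, on which notions like transition refinement are built, can be stated as follows (textbook material, not the thesis's own definitions):

```latex
% Elementary Petri net semantics: M is a marking (a set of places),
% ^\bullet t the preset and t^\bullet the postset of transition t.
% t is enabled at M iff its preset is marked and its fresh postset is empty:
M[t\rangle \;\iff\; {}^{\bullet}t \subseteq M \;\wedge\; (t^{\bullet} \setminus {}^{\bullet}t) \cap M = \emptyset
% Firing t at M yields the successor marking M':
M[t\rangle M' \;\iff\; M[t\rangle \;\wedge\; M' = (M \setminus {}^{\bullet}t) \cup t^{\bullet}
```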
|
203 |
Enriching Web Applications Efficiently with Real-Time Collaboration Capabilities. Heinrich, Matthias. 26 September 2014 (links) (PDF)
Web applications offering real-time collaboration support (e.g. Google Docs) allow geographically dispersed users to edit the very same document simultaneously, which is appealing to end-users mainly because of two application characteristics. On the one hand, the provided real-time capabilities supersede traditional document merging and document locking techniques that distract users from the content creation process. On the other hand, web applications free end-users from lengthy setup procedures and allow for instant application access. However, implementing collaborative web applications is a time-consuming and complex endeavor, since offering real-time collaboration support requires two specific collaboration services. First, a concurrency control service has to ensure that documents are synchronized in real-time and that emerging editing conflicts (e.g. if two users change the very same word concurrently) are resolved automatically. Second, a workspace awareness service has to inform the local user about the actions and activities of other participants (e.g. who joined the session, or where other participants are working). Implementing and integrating these two collaboration services is largely inefficient due to (1) the lack of necessary collaboration functionality in existing libraries, (2) incompatibilities of collaboration frameworks with widespread web development approaches, as well as (3) the need for massive source code changes to anchor collaboration support. Therefore, we propose a Generic Collaboration Infrastructure (GCI) that supports the efficient development of web-based groupware in various ways. First, the GCI provides reusable concurrency control functionality and generic workspace awareness support. Second, the GCI exposes numerous interfaces to consume these collaboration services in a flexible manner and without requiring invasive source code changes. And third, the GCI is linked to a development methodology that efficiently guides developers through the development of web-based groupware. To demonstrate the improved development efficiency induced by the GCI, we conducted three user studies encompassing developers and end-users. We show that development efficiency can be increased in terms of development time when adopting the GCI. Moreover, we also demonstrate that the implemented collaborative web applications satisfy end-user needs with respect to established software quality characteristics (e.g. usability, reliability, etc.). The studies analyzed development time, the amount of source code required, and usability.
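To give a flavor of the concurrency control service such an infrastructure must provide (a minimal sketch of the classic operational-transformation rule for concurrent text insertions; the names and signatures are illustrative, not the GCI's actual API):

```java
// Minimal operational transformation (OT) for concurrent inserts into a
// shared text buffer: transform operation `a` against a concurrent `b`
// so that both sites converge to the same document state.
record Insert(int pos, char ch, int siteId) {}

final class Ot {
    static Insert transform(Insert a, Insert b) {
        if (a.pos() < b.pos()) return a;                             // unaffected
        if (a.pos() == b.pos() && a.siteId() < b.siteId()) return a; // tie-break by site
        return new Insert(a.pos() + 1, a.ch(), a.siteId());          // shift right past b
    }
}
```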
|
204 |
Operating system transactions. Porter, Donald E. 26 January 2011 (links)
Applications must be able to synchronize accesses to operating system (OS)
resources in order to ensure correctness in the face of concurrency
and system failures. This thesis proposes system transactions,
with which the programmer
specifies atomic updates to heterogeneous system resources and the OS
guarantees atomicity, consistency, isolation, and durability (ACID).
This thesis provides a model for system transactions as a concurrency control mechanism.
System transactions efficiently and cleanly solve long-standing
concurrency problems that are difficult to address with other
techniques.
For example, malicious users can exploit
race conditions between distinct system calls in privileged applications,
gaining administrative access to a system.
Programmers can eliminate these vulnerabilities by closing these
race conditions with system transactions.
Similarly, failed software installations can leave a system unusable.
System transactions can roll back an unsuccessful software installation
without disturbing concurrent, independent updates to the file system.
This thesis describes the design and implementation of TxOS,
a variant of Linux 2.6.22 that implements
system transactions. The thesis contributes new implementation
techniques that yield fast, serializable transactions
with strong isolation and fairness between system transactions and
non-transactional activity.
Using system transactions,
programmers can build
applications with better performance or stronger correctness guarantees
from simpler code. For instance, wrapping an installation of
OpenSSH in a system transaction guarantees that a failed installation
will be rolled back completely. These atomicity properties are
provided by the OS, requiring no modification to the installer itself
and adding only 10% performance overhead. The prototype implementation of system transactions also
minimizes non-transactional overheads. For instance, a non-transactional
compilation of Linux incurs negligible (less than 2%) overhead on TxOS.
Finally, this thesis describes OLF, a new algorithm
for optimistic, lock-free linked lists.
OLF addresses key limitations of prior algorithms, which
sacrifice functionality for performance.
Prior lock-free list algorithms can
safely insert or delete
a single item, but cannot atomically compose multiple operations
(e.g., atomically move an item between two lists).
OLF provides
both arbitrary composition of list operations and performance scalability
close to previous lock-free list designs.
OLF also removes previous requirements for dynamic memory allocation and garbage collection
of list nodes, making it suitable for low-level system software, such as the Linux kernel.
We replace the lists
in the Linux kernel's directory cache, which
currently requires a coarse-grained
lock to ensure invariants across multiple lists, with OLF lists.
OLF lists in the Linux kernel improve performance of a filesystem metadata microbenchmark
by 3x over unmodified Linux at 8 CPUs.
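For background (a textbook sketch in Java, not the OLF algorithm or the kernel's C implementation), the single-item, CAS-based insertion that prior lock-free lists support looks like the following; atomically composing two such operations, e.g. moving an item between lists, is exactly what a single compare-and-swap cannot express:

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber-style lock-free insertion at the head of a singly linked list.
// Illustrates the single-item operations prior algorithms support; OLF's
// composable multi-list operations are beyond this sketch.
final class LockFreeList<T> {
    private static final class Node<T> {
        final T item; final Node<T> next;
        Node(T item, Node<T> next) { this.item = item; this.next = next; }
    }
    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    void push(T item) {
        Node<T> h;
        do {
            h = head.get();                                      // snapshot head
        } while (!head.compareAndSet(h, new Node<>(item, h)));   // retry on contention
    }
}
```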
The TxOS prototype demonstrates that a mature
OS running on commodity hardware can provide system transactions at a
reasonable performance cost.
As a practical OS abstraction for application developers,
system transactions facilitate
writing correct application code in the presence of
concurrency and system failures.
The OLF algorithm demonstrates that application developers can have both
the functionality of locks and the performance scalability of a lock-free linked list.
|
205 |
Supporting Selective Formalism in CSP++ with Process-Specific Storage. Gumtie, Alicia. 14 September 2012 (links)
Communicating Sequential Processes (CSP) is a formal language whose primary purpose is to model and verify concurrent systems. The CSP++ toolset was created to embody the concept of selective formalism by making machine-readable CSPm specifications both executable (through the automatic synthesis of C++ source) and extensible (by allowing the integration of C++ user-coded functions). However, these user-coded functions were limited by their inability to share data with each other, which meant that their application was constrained to solving simple problems in isolation. We extend CSP++ by providing user-coded functions in the same CSP process with safe access to a shared storage area, similar in concept and API to Pthreads' thread-local storage, enabling cooperation between them and granting them the ability to undertake more complex tasks without breaking the formalism of the underlying specification. This feature's utility is demonstrated in our
line-following robot case study.
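The process-specific storage described above is analogous in concept to thread-local storage; as a rough Java analogy (illustrative only, not the CSP++ C++ API), functions running in the same thread share a private buffer that is invisible to other threads:

```java
// Java analogy for process-specific storage: data shared by functions
// running in the same thread, invisible to other threads. CSP++'s actual
// API is C++ and modelled on Pthreads' thread-local storage.
final class ProcessStorageDemo {
    private static final ThreadLocal<StringBuilder> LOCAL =
            ThreadLocal.withInitial(StringBuilder::new);

    static void stepOne() { LOCAL.get().append("sensor-reading;"); }
    static void stepTwo() { System.out.println(LOCAL.get()); } // sees stepOne's data

    public static void main(String[] args) {
        Runnable proc = () -> { stepOne(); stepTwo(); };
        new Thread(proc).start(); // each thread gets an independent buffer
        new Thread(proc).start();
    }
}
```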
|
206 |
Type Systems for Distributed Programs: Components and Sessions. Dardha, Ornela. 19 May 2014 (links) (PDF)
Modern software systems, in particular distributed ones, are everywhere around us and are at the basis of our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance. Their complexity makes the verification of such properties a very challenging task. It is natural to expect that these systems are reliable and, above all, usable. (i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing the communication patterns of a program at runtime. (ii) In order to be usable, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of this Ph.D. work was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, like deadlock or livelock freedom, in a concurrent setting. The main contributions of this dissertation are twofold. (i) On the components side: we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. (ii) On the communication side: we study advanced safety properties related to communication in complex distributed systems, like deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of the types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand the expressive power of concurrent calculi with structured communication primitives and how they stand with respect to the standard typed concurrent calculi, namely (variants of) the typed π-calculus. Then, we show how to derive in the session π-calculus basic properties, like type safety, and complex ones, like progress, by encoding.
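As a standard illustration of the session-typed communication studied here (textbook notation, not the dissertation's own development), a session type and its dual can be written as:

```latex
% A session type for a service that receives an integer, sends back a
% boolean, and terminates; \overline{S} is the client's dual type.
S = \,?\mathit{int}.\,!\mathit{bool}.\,\mathbf{end}
\qquad
\overline{S} = \,!\mathit{int}.\,?\mathit{bool}.\,\mathbf{end}
% Duality swaps input (?) and output (!) pointwise:
\overline{?T.S} = \,!T.\overline{S} \qquad
\overline{!T.S} = \,?T.\overline{S} \qquad
\overline{\mathbf{end}} = \mathbf{end}
```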
|
207 |
Types for Correct Concurrent API Usage. Beckman, Nels E. 01 December 2010 (links)
This thesis represents an attempt to improve the state of the art in our ability to understand and check object protocols, with a particular emphasis on concurrent programs. Object protocols are the patterns of use imposed on clients of APIs in object-oriented programs. We show through an empirical study of open-source object-oriented programs that object protocols are quite common. We then present "Sync-or-Swim," a methodology and suite of accompanying tools for checking at compile-time that object protocols are used and implemented correctly. This methodology is based upon the existing access permissions method of alias control, which is here extended to be sound in the face of shared-memory concurrency. The analysis is formalized as a type system for an object-oriented calculus, and then proven to be free from false negatives using a proof of type safety. The type system is extended with parametric polymorphism, or "generics," in order to increase its ability to check commonly occurring patterns. An implementation of the approach, a static analysis for programs written in the Java programming language, is presented. This implementation was used to perform a series of case studies whose goal was to evaluate the ease of use, expressiveness and ability to verify commonly occurring patterns. These case studies are presented. Next, an approach and an associated tool for inferring access permission annotations is presented. This inference tool can reduce the burden of using our protocol-checking approach by automatically inferring the required typing annotations. This inference is built upon a system of probabilistic constraints, which allows the easy encoding of heuristics. Finally, an optimization of software transactional memory runtimes is presented. This optimization is enabled by the typing annotations required to use the concurrent protocol checker and can remove some of the overhead typically associated with transactional memory systems. As a result of the work presented in this thesis, it is possible to guarantee the absence of certain API usage errors even in concurrent programs, and to do so with a low burden on programmers. By adhering to such an approach, programmers can produce more reliable software.
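To give a flavor of protocol checking (the annotation names below are illustrative, not necessarily those used by Sync-or-Swim), an API protocol such as "a connection must be opened before use" can be expressed with typestate-style pre- and post-conditions on each method:

```java
// Hypothetical typestate/access-permission annotations in the spirit of
// Sync-or-Swim; the actual annotation names may differ.
@interface Requires { String value(); }  // required typestate on entry
@interface Ensures  { String value(); }  // guaranteed typestate on exit

interface Connection {
    @Requires("CLOSED") @Ensures("OPEN")   void open();
    @Requires("OPEN")   @Ensures("OPEN")   void send(byte[] data);
    @Requires("OPEN")   @Ensures("CLOSED") void close();
}
// A checker enforcing such protocols rejects, at compile time, a client
// that calls send() on a connection it may already have closed, even when
// the connection is aliased across threads (via access permissions).
```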
|
208 |
Concurrency Issues in Programmable Brick Languages. Munden, Gilliad E. January 2000 (links) (PDF)
No description available.
|
209 |
Concurrency Analysis and Mining Techniques for APIs. Santhiar, Anirudh. January 2017 (links) (PDF)
Software components expose Application Programming Interfaces (APIs) as a means to access their functionality and facilitate reuse. Developers use the APIs supplied by programming languages to access the core data structures and algorithms that are part of the language framework, and the APIs of third-party libraries for specialized tasks. Thus, APIs play a central role in mediating a developer's interaction with software, and the interaction between different software components. However, APIs are often large, complex and hard to navigate. They may have hundreds of classes and methods, with incomplete or obsolete documentation. They may encapsulate concurrency behaviour that the developer is unaware of. Finding the right functionality in a large API, using APIs correctly, and maintaining software that uses a constantly evolving API are challenges that every developer runs into. In this thesis, we design automated techniques to address two problems pertaining to APIs: (1) concurrency analysis of APIs, and (2) API mining. Specifically, we consider the concurrency analysis of asynchronous APIs, and the mining of math APIs to infer the functional behaviour of API methods.
The problem of concurrency bugs such as race conditions and deadlocks has been well studied for multi-threaded programs. However, developers have been eschewing a pure multi-threaded programming model in favour of asynchronous programming models supported by asynchronous APIs. Asynchronous programs and multi-threaded programs have different semantics, due to which existing techniques to analyze the latter cannot detect bugs present in programs that use asynchronous APIs. This thesis addresses the problem of concurrency analysis of programs that use asynchronous APIs in an end-to-end fashion. We give operational semantics for important classes of asynchronous and event-driven systems. The semantics are designed by carefully studying real software and serve to clarify subtleties in scheduling. We use the semantics to inform the design of novel algorithms to find races and deadlocks. We implement the algorithms in tools, and show their effectiveness by finding serious bugs in popular open-source software.
To begin with, we consider APIs for asynchronous event-driven systems supporting programmatic event loops. Here, event handlers can spin event loops programmatically in addition to the runtime's default event loop. This concurrency idiom is supported by important classes of APIs, including GUI, web browser, and OS APIs. Programs that use these APIs are prone to interference between a handler that is spinning an event loop and another handler that runs inside the loop. We present the first happens-before based race detection technique for such programs. Next, we consider the asynchronous programming model of modern languages like C#. In spite of providing primitives for the disciplined use of asynchrony, C# programs can deadlock because of the incorrect use of blocking APIs along with non-blocking (asynchronous) APIs. We present the first deadlock detection technique for asynchronous C# programs. We formulate necessary conditions for deadlock using a novel program representation that represents procedures and continuations, the control flow between them, and the threads on which they may be scheduled. We design a static analysis to construct the program representation and use it to identify deadlocks. Our ideas have resulted in research tools with practical impact. SparseRacer, our tool to detect races, found 13 previously unknown use-after-free bugs in KDE Linux applications. DeadWait, our deadlock detector, found 43 previously unknown deadlocks in asynchronous C# libraries. Developers have fixed 43 of these races and deadlocks, indicating that our techniques are useful in practice to detect bugs that developers consider worth fixing.
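The deadlock pattern at issue arises when code blocks a thread on the result of an asynchronous operation whose continuation must run on that same thread. A Java analogue of the C# pattern (an editorial sketch, not DeadWait's input language) is shown below:

```java
import java.util.concurrent.*;

// Java analogue of the blocking-on-async deadlock found in C# programs:
// the only thread of the "event loop" blocks on a future that can only
// complete on that very thread.
public class AsyncDeadlock {
    public static void main(String[] args) throws Exception {
        ExecutorService loop = Executors.newSingleThreadExecutor();
        Future<String> outer = loop.submit(() -> {
            Future<String> inner = loop.submit(() -> "done");
            return inner.get(); // deadlock: `inner` needs the thread we are blocking
        });
        System.out.println(outer.get()); // never prints
    }
}
```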
Using large APIs effectively entails finding the right functionality and calling the methods that implement it correctly, possibly composing many API elements. Automatically inferring the information required to do this is a challenge that has attracted the attention of the research community. In response, the community has introduced many techniques to mine APIs and produce information ranging from usage examples and patterns, to protocols governing the API method calling sequences. We show how to mine unit tests to match API methods to their functional behaviour, for the specific but important class of math APIs.
Math APIs are at the heart of many application domains ranging from machine learning to scientific computations, and are supplied by many competing libraries. In contrast to obtaining usage examples or identifying correct call sequences, the challenge in this domain is to infer the API methods required to perform a particular mathematical computation, and to compose them correctly. We let developers specify mathematical computations naturally, as a math expression in the notation of interpreted languages (such as Matlab). Our unit test mining technique maps subexpressions to math API methods such that the method's functional behaviour matches the subexpression's executable semantics, as defined by the interpreter. We apply our technique, called MathFinder, to math API discovery and migration, and validate it in a user study. Developers who used MathFinder finished their programming tasks twice as fast as their counterparts who used the usual techniques like web and code search, and IDE code completion. We also demonstrate the use of MathFinder to assist in the migration of Weka, a popular machine learning library, to a different linear algebra library.
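The core idea of matching an API method to a subexpression's executable semantics can be sketched as follows (a simplification under assumed names, not MathFinder's implementation): evaluate both the interpreted expression and a candidate method on sampled inputs and keep the candidates whose outputs agree.

```java
import java.util.function.DoubleUnaryOperator;
import java.util.Random;

// Simplified behavioral matching in the spirit of MathFinder: a candidate
// API method "implements" a math expression if their outputs agree on
// sampled inputs, up to a floating-point tolerance.
final class BehavioralMatch {
    static boolean matches(DoubleUnaryOperator spec,      // interpreter semantics
                           DoubleUnaryOperator candidate, // API method under test
                           int samples) {
        Random rng = new Random(42);
        for (int i = 0; i < samples; i++) {
            double x = rng.nextDouble() * 100;
            if (Math.abs(spec.applyAsDouble(x) - candidate.applyAsDouble(x)) > 1e-9)
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Does Math.log10 implement the expression "log(x)/log(10)"?
        System.out.println(matches(x -> Math.log(x) / Math.log(10), Math::log10, 1000));
    }
}
```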
|
210 |
[en] A PROGRAMMING INTERFACE FOR OVERLOAD CONTROL IN STAGED EVENT BASED ARCHITECTURES / [pt] UMA INTERFACE DE PROGRAMAÇÃO PARA CONTROLE DE SOBRECARGA EM ARQUITETURAS BASEADAS EM ESTÁGIOS. BRENO RIBA DA COSTA CRUZ. 22 February 2016 (links)
[en] Specific scheduling policies can be appropriate for overload control in different application scenarios. However, these policies are often difficult to implement, leading developers to reprogram entire systems in order to adapt them to a particular policy. Through the study of various scheduling policies, we propose an interface model that allows the programmer to integrate new policies and monitoring schemes into the same application in a Staged Event-Driven Architecture. We describe the implementation of the proposed interface and the results of its use in implementing a set of scheduling policies for two applications with different load profiles.
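A minimal sketch of the kind of interface model proposed (the names are illustrative, not the thesis's actual API): a stage exposes its queue state to a pluggable policy that decides admission, so the overload-control strategy can be swapped without reprogramming the stage itself.

```java
import java.util.concurrent.*;

// Illustrative overload-control hook for a SEDA-style stage; the names
// are hypothetical, not the interface proposed in the thesis.
interface OverloadPolicy {
    boolean admit(int queueLength, int capacity); // e.g. drop, throttle, degrade
}

final class Stage {
    private final BlockingQueue<Runnable> queue;
    private final int capacity;
    private final OverloadPolicy policy;

    Stage(int capacity, OverloadPolicy policy) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.capacity = capacity;
        this.policy = policy;
    }

    boolean enqueue(Runnable event) {
        if (!policy.admit(queue.size(), capacity))
            return false;             // shed load under the current policy
        return queue.offer(event);    // non-blocking; false if queue is full
    }
}
// Example policy: reject new events once the queue is 90% full.
// Stage s = new Stage(1000, (len, cap) -> len < 0.9 * cap);
```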
|