541

Automatic Reasoning Techniques for Non-Serializable Data-Intensive Applications

Gowtham Kaki (7022108) 14 August 2019 (has links)
The performance bottlenecks in modern data-intensive applications have induced database implementors to forsake high-level abstractions and trade off simplicity and ease of reasoning for performance. Among the first casualties of this trade-off are the well-known ACID guarantees, which simplify reasoning about concurrent database transactions. ACID semantics have become increasingly obsolete in practice because serializable isolation, an integral aspect of ACID, is exorbitantly expensive. Databases, including the popular commercial offerings, default to weaker levels of isolation, where the effects of concurrent transactions are visible to each other. Such weak isolation guarantees, however, are extremely hard to reason about, and have led to serious safety violations in real applications. The problem is further complicated in a distributed setting with asynchronous state replication, where high-availability and low-latency requirements compel large-scale web applications to embrace weaker forms of consistency (e.g., eventual consistency) besides weak isolation. Given the serious practical implications of safety violations in data-intensive applications, there is a pressing need to extend the state of the art in program verification to reach non-serializable data-intensive applications operating in a weakly-consistent distributed setting.

This thesis sets out to do just that. It introduces new language abstractions, program logics, reasoning methods, and automated verification and synthesis techniques that collectively allow programmers to reason about non-serializable data-intensive applications in the same way as their serializable counterparts. The contributions made are broadly threefold. First, the thesis introduces a uniform formal model for reasoning about weakly isolated (non-serializable) transactions on a sequentially consistent (SC) relational database machine. A reasoning method that relates the semantics of weak isolation to the semantics of the database program is presented, and an automation technique, implemented in a tool called ACIDifier, is also described. The second contribution is a relaxation of the machine model from sequential consistency to a specifiable level of weak consistency, and a generalization of the data model from relational to schema-less or key-value. A specification language for expressing weak consistency semantics at the machine level is described, and a bounded verification technique, implemented in a tool called Q9, is presented that bridges the gap between consistency specifications and program semantics, thus allowing high-level safety properties to be verified under arbitrary consistency levels. The final contribution is a programming model inspired by version control systems that guarantees correct-by-construction replicated data types (RDTs) for building complex distributed applications with arbitrarily-structured replicated state. A technique based on decomposing inductively-defined data types into characteristic relations is presented, which is used to reason about the semantics of a data type under state replication and eventually to derive its correct-by-construction replicated variant automatically. An implementation of the programming model, called Quark, on top of a content-addressable store is described, and the practicality of the programming model is demonstrated with the help of various case studies.
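To make the characteristic-relations idea concrete, here is a minimal sketch (mine, not the thesis's; Quark derives such merges automatically, and for richer inductive types) of a three-way merge for a set RDT, whose single characteristic relation is membership. The merge rule keeps what survives in both branches, plus additions from either side relative to the lowest common ancestor.

```python
def merge_sets(lca: set, a: set, b: set) -> set:
    """Three-way merge of a set RDT via its membership relation.

    lca : membership at the lowest common ancestor version
    a, b: membership in the two concurrently edited branches
    """
    kept = lca & a & b              # survived deletion in both branches
    added = (a - lca) | (b - lca)   # fresh additions on either side
    return kept | added

# Example: starting from {1, 2, 3}, one branch deletes 3 and adds 4,
# the other deletes 1 and adds 5; both deletions and both additions win.
assert merge_sets({1, 2, 3}, {1, 2, 4}, {2, 3, 5}) == {2, 4, 5}
```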
542

A tangible programming environment model informed by principles of perception and meaning

Smith, Andrew Cyrus 09 1900 (has links)
It is a fundamental Human-Computer Interaction problem to design a tangible programming environment that can be used by multiple persons yet also individualised. This problem has its origin in the phenomenon that the meaning an object holds can vary across individuals; the Semiotics research domain studies the meaning objects hold. This research investigated a solution in which the user designs aspects of the environment after it has been made operational, when the development team is no longer available to implement the user's design requirements. Also considered is how objects can be positioned so that the collection of objects is interpreted as a program. I therefore explored how principles of the relative positioning of objects, as researched in the domains of Psychology and Art, could be applied to tangible programming environments. This study applied the Gestalt principle of perceptual grouping by proximity to the design of tangible programming environments, to determine whether a tangible programming environment is possible in which the relative positions of personally meaningful objects define the program. I did this by applying the Design Science Research methodology, with five iterations and evaluations involving children. The outcome is a model of a tangible programming environment that includes Gestalt principles and Semiotic theory: Semiotic theory explains that the user can choose a physical representation of a program element that carries personal meaning, whereas the Gestalt principle of grouping by proximity predicts that objects can be arranged to appear as if linked to each other. / School of Computing / Ph. D. (Computer Science)
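As an illustration of the proximity principle at work (my sketch, not from the thesis), the grouping step can be approximated by clustering object positions with a distance threshold: objects closer together than the threshold read as one program group.

```python
# Hypothetical sketch: grouping tangible objects by proximity.
# Objects closer than `threshold` are treated as one program group,
# approximating the Gestalt principle of grouping by proximity.
from itertools import combinations

def group_by_proximity(positions: dict, threshold: float) -> list:
    """positions maps object id -> (x, y); returns groups as sets of ids."""
    parent = {o: o for o in positions}   # union-find forest

    def find(o):
        while parent[o] != o:
            parent[o] = parent[parent[o]]   # path halving
            o = parent[o]
        return o

    for p, q in combinations(positions, 2):
        (x1, y1), (x2, y2) = positions[p], positions[q]
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < threshold:
            parent[find(p)] = find(q)       # p and q belong together

    groups = {}
    for o in positions:
        groups.setdefault(find(o), set()).add(o)
    return list(groups.values())

# Tokens A and B sit close together; C is far away -> two perceived groups.
print(group_by_proximity({"A": (0, 0), "B": (1, 0), "C": (10, 10)}, threshold=3.0))
```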
543

Preserving the separation of concerns while composing aspects with reflective AOP

Marot, Antoine 07 October 2011 (has links)
Aspect-oriented programming (AOP) is a programming paradigm to localize and modularize the concerns that tend to be tangled and scattered across traditional programming modules, like functions or classes. Such concerns are known as crosscutting concerns, and aspect-oriented languages propose to encapsulate them in modules called aspects. Because each crosscutting concern implemented in an aspect is separated from the other concerns, AOP improves the reusability, readability, and maintainability of code.

While it improves separation of concerns, AOP suffers from well-known composition issues. Aspects developed in isolation may interact with each other in ways that were not expected by the programmers, and therefore lead to a program that does not meet its requirements. Without appropriate tools, undesired aspect interactions must be identified by reading code in order to gain global knowledge of the program and understand where and how aspects interact. Then, if the aspect language does not offer the needed support, these interactions must be resolved by invasively changing the code of the conflicting aspects to make them work together. Neither of these solutions is acceptable, since global knowledge as well as invasive, composition-specific modifications are exactly what separation of concerns seeks to avoid.

In this dissertation we show that the existing approaches to composing aspects are not entirely satisfying either with respect to separation of concerns. These approaches either rely on global knowledge and invasive modifications, which is problematic, or lack genericity and/or expressivity, which means that code reading and code modification may still be required for the aspect interactions they cannot handle.

To properly detect and resolve aspect interactions we propose a novel approach that is based on AOP itself. Since aspect composition is a concern that, by definition, crosscuts the aspects, it makes sense to expect that a technique for improving the separation of crosscutting concerns, such as AOP, is well-suited for the task. The resulting mechanism is based on reflection principles and is called reflective AOP.

The main difference between "regular" AOP and reflective AOP lies in the parts of the system they address. While traditional AOP aims at modularizing the concerns that crosscut the base system, reflective AOP offers the possibility to handle the concerns that crosscut the aspects themselves. This is achieved by incorporating new kinds of joinpoints, pointcuts and advice into the aspect language. These new elements, which form what we call a meta joinpoint model, are dedicated to the aspect level and enable programmers to reason about and act upon the semantics of aspects at runtime. As validated on numerous examples of aspect composition, a well-designed and principled meta joinpoint model makes it possible to deal with both the detection and the resolution of composition issues in a way that preserves the separation-of-concerns principle. These examples are illustrated using Phase, our prototype reflective AOP language. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
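A rough sketch of the meta-joinpoint idea (mine, in Python; Phase's actual joinpoint model and syntax differ): the execution of an aspect's advice is itself exposed as a joinpoint, so a meta-level advice can observe, and in principle reorder or veto, the base-level aspects without editing their code.

```python
# Hypothetical sketch of reflective AOP: advice execution is a meta
# joinpoint that meta advisors can intercept. All names are illustrative.

class Aspect:
    meta_advisors = []          # advice over advice: the meta joinpoint model

    def __init__(self, name, advice):
        self.name, self.advice = name, advice

    def run(self, joinpoint, proceed):
        # Expose this advice execution as a meta joinpoint before running it.
        for meta in Aspect.meta_advisors:
            proceed = meta(self, joinpoint, proceed)
        return self.advice(joinpoint, proceed)

def weave(aspects):
    """Wrap a function so each aspect advises calls to it, in list order."""
    def decorator(fn):
        def call(*args):
            chain = lambda: fn(*args)
            for asp in reversed(aspects):
                chain = (lambda a, nxt: lambda: a.run(fn.__name__, nxt))(asp, chain)
            return chain()
        return call
    return decorator

# Two independently written aspects that may silently interact...
log_aspect = Aspect("logging", lambda jp, go: (print(f"log: {jp}"), go())[1])
cache_aspect = Aspect("caching", lambda jp, go: go())

# ...and a meta advice that observes every advice execution at runtime.
def observe(aspect, joinpoint, proceed):
    print(f"meta: advice '{aspect.name}' about to run at {joinpoint}")
    return proceed

Aspect.meta_advisors.append(observe)

@weave([cache_aspect, log_aspect])
def area(r):
    return 3.14159 * r * r

print(area(2.0))
```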
544

Programování na základní škole a rozvoj algoritmického myšlení žáků / Programming in elementary school and development of students' algorithmic thinking

Milichovská, Lucie January 2020 (has links)
This thesis deals with the development of algorithmic thinking and the teaching of programming in elementary school. It focuses on available approaches and tools suitable for classes. The practical part of the thesis focuses on the children's programming language Scratch, one of the tools designed for teaching. The main goal is to create a comprehensive collection of tasks that develop the algorithmic thinking of pupils aged 9 to 10. The tasks grow gradually more complex, so pupils need no previous programming experience; they are also designed to be solvable without the assistance of a teacher. All the tasks were tested with pupils in the given age range. The collection of tasks is also made available as a web presentation for ease of further use.
545

Language Constructs for Safe Parallel Programming on Multi-Cores

Östlund, Johan January 2016 (has links)
The last decade has seen the transition from single-core processors to multi-cores and many-cores. This move has by and large shifted the responsibility for keeping up with ever-increasing expectations on performance from chip manufacturers to programmers. In the single-core era, improvements in hardware capacity could immediately be leveraged by an application: a faster machine meant a faster program. In the age of the multi-cores, this is no longer the case. Programs must be written in specific ways to utilize available parallel hardware resources.

Programming-language support for concurrent and parallel programming is poor in most popular object-oriented programming languages. Shared memory with threads and locks is the most common concurrency model provided. Threads and locks are hard to understand, error-prone, and inflexible; they break encapsulation, the very foundation of the object-oriented approach. This makes it hard to break large complex problems into smaller pieces which can be solved independently and composed to make a whole. Ubiquitous parallelism and object-orientation, seemingly, do not match.

Actors, or active objects, have been proposed as a concurrency model better suited to object-oriented programming than threads and locks. Asynchronous message passing between actors, each with a logical thread of control, preserves encapsulation, as objects themselves decide when messages are executed. Unfortunately, most implementations of active objects do not prevent the sharing of mutable objects across actors. Sharing, whether on purpose or by accident, exposes objects to multiple threads of control, destroying object encapsulation.

In this thesis we show techniques for compiler-enforced isolation of active objects, while allowing sharing and zero-copy communication of mutable data in the cases where it is safe to do so. We also show how the same techniques that enforce isolation can be utilized internally to an active object to allow data-race-free parallel message processing and data-race-free structured parallel computations. This overcomes the coarse-grained nature of active-object parallelism without compromising safety. / UPMARC
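A minimal sketch (mine, not from the thesis) of the active-object baseline the work improves on: each actor processes messages on its own logical thread, and mutable payloads are deep-copied at the boundary so no state is shared. The thesis's contribution is precisely to let a compiler prove when such copies are unnecessary.

```python
# Hypothetical sketch of active objects with copy-based isolation.
import threading, queue, copy

class ActiveObject:
    def __init__(self):
        self._inbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            method, args, reply = self._inbox.get()
            reply.put(method(self, *args))    # messages run one at a time

    def send(self, method, *args):
        reply = queue.Queue(maxsize=1)
        # Copy mutable arguments so the sender keeps no alias into this actor.
        self._inbox.put((method, copy.deepcopy(args), reply))
        return reply                          # future-like handle

class Counter(ActiveObject):
    def __init__(self):
        super().__init__()
        self.total = 0

    def add(self, xs):
        self.total += sum(xs)
        return self.total

c = Counter()
data = [1, 2, 3]
fut = c.send(Counter.add, data)
data.append(99)      # mutating after send cannot race: the actor got a copy
print(fut.get())     # -> 6
```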
546

Extensible automated constraint modelling via refinement of abstract problem specifications

Akgün, Özgür January 2014 (has links)
Constraint Programming (CP) is a powerful technique for solving large-scale combinatorial (optimisation) problems. Solving a given problem with constraints proceeds in two phases: modelling and solving. Effective modelling has a huge impact on the performance of the solving process. This thesis presents a framework in which users are not required to make modelling decisions; concrete CP models are automatically generated from a high-level problem specification. In this framework, modelling decisions are encoded as generic rewrite rules applicable to many different problems. First, modelling decisions are divided into two broad categories. This categorisation guides the automation of each kind of modelling decision, and also leads us to the architecture of the automated modelling tool. Second, a domain-specific declarative rewrite-rule language is introduced. Thanks to the rule language, automated modelling transformations and the core system are decoupled, which greatly increases the extensibility and maintainability of the rewrite-rule database. The database of rules represents the modelling knowledge acquired after analysis of expert models; it must be easily extensible to best benefit from the active research on constraint modelling. Third, the automated modelling system Conjure is implemented as a realisation of these ideas; having an implementation enables empirical testing of the quality of the generated models. The ease with which rewrite rules can be encoded to produce good models is shown. Furthermore, thanks to the generality of the system, one needs to add only a very small number of rules to encode many transformations. Finally, the work is evaluated by comparing the generated models to expert models found in the literature for a wide variety of benchmark problems. This evaluation confirms the hypothesis that expert models can be automatically generated starting from high-level problem specifications. A method of automatically identifying good models is also presented. In summary, this thesis presents a framework enabling the automatic generation of efficient constraint models from problem specifications. It provides a pleasant environment for both problem owners and modelling experts. Problem owners are presented with a fully automated constraint solution process once they have a precise description of their problem. Modelling experts can now encode their precious modelling expertise as rewrite rules instead of merely modelling a single problem, resulting in reusable constraint-modelling knowledge.
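A toy illustration of the refinement idea (my sketch; Conjure's rule language and the models it produces are far richer): one generic rewrite rule refines an abstract "set of int" decision variable into a 0/1 occurrence-vector representation, independently of any particular problem.

```python
# Hypothetical sketch of refinement-based modelling: a generic rewrite
# rule turns the abstract domain "set of {lo..hi}" into a concrete
# 0/1 occurrence-vector representation, for any problem that uses a set.

def occurrence_rule(var):
    """Rewrite rule: set-of-int variable -> list of 0/1 occurrence variables."""
    if var["domain"] != "set_of_int":
        return None                       # rule does not apply
    lo, hi = var["range"]
    return {
        "name": var["name"],
        "domain": "bool_vector",
        "vars": [f'{var["name"]}_occ[{v}]' for v in range(lo, hi + 1)],
        # cardinality constraint carried over from the abstract domain
        "constraints": [f'sum({var["name"]}_occ) = {var["cardinality"]}'],
    }

# Abstract specification fragment: "find S : set (size 3) of int(1..5)"
abstract = {"name": "S", "domain": "set_of_int", "range": (1, 5), "cardinality": 3}

concrete = occurrence_rule(abstract)
print(concrete["vars"])         # S_occ[1] .. S_occ[5]
print(concrete["constraints"])  # ['sum(S_occ) = 3']
```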
547

Locating Potential Aspect Interference Using Clustering Analysis

Bennett, Brian Todd 01 May 2015 (has links)
Software design continues to evolve from the structured programming paradigm of the 1970s and 1980s and the object-oriented programming (OOP) paradigm of the 1980s and 1990s. The functional decomposition design methodology used in these paradigms reduced the prominence of non-functional requirements, which resulted in scattered and tangled code to address non-functional elements. Aspect-oriented programming (AOP) allowed the removal of crosscutting concerns scattered throughout class code into single modules known as aspects. Aspectization resulted in increased modularity in class code, but introduced new types of problems that did not exist in OOP. One such problem is aspect interference, in which aspects meddle with the data flow or control flow of a program. Research has developed various solutions for detecting and addressing aspect interference using formal design and specification methods, and programming techniques that specify aspect precedence. Such explicit specifications require practitioners to have a complete understanding of possible aspect interference in an AOP system under development. However, as system size increases, understanding of possible aspect interference can decrease. Therefore, practitioners need a way to increase their understanding of possible aspect interference within a program. This study used clustering analysis to locate potential aspect interference within an aspect-oriented program under development, using k-means partitional clustering. Vector space models, using two newly defined metrics, interference potential (IP) and interference causality potential (ICP), and an existing metric, coupling on advice execution (CAE), provided input to the clustering algorithms. Resulting clusters were analyzed via an internal strategy using the R-Squared, Dunn, Davies-Bouldin, and SD indexes. The process was evaluated on both a smaller-scale AOP system (AspectTetris) and a larger-scale AOP system (AJHotDraw). By seeding potential interference problems into these programs and comparing results using visualizations, this study found that clustering analysis provided a viable way of detecting interference problems in aspect-oriented software. The ICP model was best at detecting interference problems, while the IP model produced results that were more sporadic. The CAE clustering models were not effective in pinpointing potential aspect interference problems. This was the first known study to use clustering analysis techniques specifically for locating aspect interference.
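A minimal sketch of the clustering step (synthetic data; the metric values, cluster count, and seeding are placeholders, not the study's): each piece of advice becomes a vector of interference metrics, and k-means partitions the population so that outlying clusters can be inspected by hand.

```python
# Hypothetical sketch: k-means over made-up IP/ICP metric vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
# 18 "ordinary" advices with low interference metrics, plus 2 seeded
# interference problems with conspicuously high IP/ICP values.
X = np.vstack([
    rng.normal(loc=[1.0, 1.0], scale=0.3, size=(18, 2)),
    np.array([[4.0, 5.0], [4.5, 4.0]]),
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
print("Davies-Bouldin index:", davies_bouldin_score(X, km.labels_))
# The small cluster is the candidate set of potentially interfering advices.
```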
548

LF : a language for reliable embedded systems

Van Riet, F. A. 11 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2001. / ENGLISH ABSTRACT: Computer-aided verification techniques, such as model checking, are often considered essential to produce highly reliable software systems. Modern model checkers generally require models to be written in CSP-like notations. Unfortunately, such systems are usually implemented using conventional imperative programming languages. Translating the one paradigm into the other is a difficult and error-prone process. If one were to program in a process-oriented language from the outset, the chasm between implementation and model could be bridged more readily. This would lead to more accurate models and ultimately more reliable software. This thesis covers the definition of a process-oriented language targeted specifically towards embedded systems, and the implementation of a suitable compiler and run-time system. The language, LF, is for the most part an extension of the language Joyce, which was defined by Brinch Hansen. Both LF and Joyce have features which I believe make them easier to use than other CSP-based languages such as occam. An example of this is a selective communication primitive which allows for both input and output guards, which is not supported in occam. The efficiency of the implementation is important. The language was therefore designed to be expressive, but constructs which are expensive to implement were avoided. Security, however, was the overriding consideration in the design of the language and run-time system. The compiler produces native code. Most other CSP-derived languages are either interpreted or execute as tasks on host operating systems, arguably because most implementations of CSP and derivations thereof are for academic purposes only. LF is intended to be an implementation language. The performance of the implementation is evaluated in terms of practical metrics such as the time needed to complete communication operations and the average time needed to service an interrupt.
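A minimal sketch (mine, not LF's syntax or runtime) of the selective-communication feature the abstract highlights: a select over alternatives in which both input and output guards may appear, unlike occam's input-only alternation. Channels are modelled as one-slot queues, and the readiness polling is illustrative rather than atomic, as a real runtime would require.

```python
# Hypothetical sketch of selective communication with input AND output guards.
import queue, random

def select(alternatives):
    """alternatives: ('in', chan, handler) or ('out', chan, value, handler).
    Blocks until some guard is ready, then fires exactly one alternative."""
    while True:
        ready = []
        for alt in alternatives:
            if alt[0] == "in" and not alt[1].empty():
                ready.append(alt)          # input guard: a value is waiting
            elif alt[0] == "out" and not alt[1].full():
                ready.append(alt)          # output guard: channel can accept
        if ready:
            alt = random.choice(ready)     # nondeterministic choice among ready
            if alt[0] == "in":
                return alt[2](alt[1].get_nowait())
            alt[1].put_nowait(alt[2])
            return alt[3]()

a, b = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
a.put("ping")                              # make the input guard ready
# Both guards are ready here, so either branch may fire.
print(select([
    ("in",  a, lambda v: f"received {v}"),
    ("out", b, "pong", lambda: "sent pong"),
]))
```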
549

Termination of higher-order rewriting systems based on the notion of computability closure

Blanqui, Frédéric 13 July 2012 (has links) (PDF)
In this document, we show how the notion of computability introduced by W. W. Tait and extended by Girard to polymorphic types can be used, and easily extended, to prove the termination of various kinds of rewrite relations, including rewriting with matching on defined symbols, higher-order matching, and class rewriting modulo certain equational theories. We also show that the notion of computability closure gives rise to a well-founded relation that includes J.-P. Jouannaud and A. Rubio's higher-order extension of N. Dershowitz's recursive path ordering.
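For reference, the base notion the abstract builds on, Tait-style computability (reducibility), is standardly defined by induction on types; the thesis's computability closure extends this to rewriting. A sketch of the textbook definition, not of the thesis's extension:

```latex
% Standard Tait-style computability predicate, by induction on types:
% computable at base type = strongly normalizing; computable at an
% arrow type = maps computable arguments to computable results.
\[
\begin{aligned}
\mathsf{Comp}_{\iota}(t) &\iff t \in \mathcal{SN} \\
\mathsf{Comp}_{A \to B}(t) &\iff \forall u,\; \mathsf{Comp}_{A}(u)
  \implies \mathsf{Comp}_{B}(t\,u)
\end{aligned}
\]
```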
550

Towards a methodology dedicated to the orchestration of communicating entities

Drey, Zoé 22 September 2010 (has links) (PDF)
The technologies omnipresent in our environment embed software elements that facilitate their use. These technologies offer a vast laboratory of experimentation for research, and in particular for applied computer science. They are an obvious medium for delivering services to people in their daily lives. These services concern various application areas, each serving specific objectives: comfort, security, access to information, or personal assistance. Since the applications offering these services are intimately tied to users' needs, it is essential that they adapt easily to those needs. One way to meet this challenge is to give the user tools to program his or her own applications. Our contribution consists not only in defining such a tool, in the form of a visual language parameterized by an application area, but also in proposing a methodology whose objective is to guide a user in programming applications with this language. This methodology is dedicated to the orchestration of communicating entities, which represent the technologies deployed in our environments. Our approach, combining a methodology with a language, is accessible to a novice programmer and sufficiently expressive to address various application areas. To increase the user's confidence in the development of his or her applications, we extend the methodology with a development approach driven by the verification of a few properties. This verification is made possible by the formally defined semantics of the language.
