131

Recurrent sets for non-termination and safety of programs

Bakhirkin, Alexey January 2016 (has links)
Termination and non-termination are a pair of fundamental program properties. Arguably, the majority of code is required to terminate, e.g., dispatch routines of drivers or other event-driven code, GPU programs, etc., and the existence of non-terminating executions is a serious bug. Such a bug may manifest by freezing a device or an entire system, or by causing a multi-region cloud service disruption. Thus, proving termination is an interesting problem in the process of establishing correctness, and proving non-termination is a complementary problem that is interesting for debugging. This work considers a sub-problem of proving non-termination: the problem of finding recurrent sets. A recurrent set is a way to compactly represent the set of non-terminating executions of a program; it is a set of states from which an execution of the program cannot or may not escape (there exist multiple definitions that differ in modalities). A recurrent set acts as a part of a non-termination proof: if we find a non-empty recurrent set and are able to show its reachability from an initial state, then we prove the existence of a non-terminating execution. Most of this work is devoted to automated static analyses that find recurrent sets in imperative programs. We follow the general framework of abstract interpretation and go all the way from trace semantics of programs to practical analyses that compute abstract representations of recurrent sets. In particular, we present two novel analyses. The first is based on abstract pre-condition computation (backward analysis) and trace partitioning and focuses on numeric programs (although with some modifications it may be applicable to non-numeric ones); on popular benchmarks, it performs comparably to state-of-the-art tools. The second analysis is based on abstract post-condition computation (forward analysis) and is readily applicable to non-numeric (e.g., heap-manipulating) programs, which we demonstrate by tackling examples from the domain of shape analysis with 3-valued logic. As it turns out, recurrent sets can be used in establishing other properties as well; for example, they are used in CTL model checking of programs. As part of this work, we were also able to apply recurrent sets in the process of establishing sufficient pre-conditions for safety.
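To make the notion of a recurrent set concrete, here is a small illustrative sketch in Python (a hypothetical example, not taken from the thesis): for the loop "while x >= 0: x = x + y", the set of states where x >= 0 and y >= 0 is recurrent, because every such state satisfies the loop guard and the loop body maps it back into the set.

    # Hypothetical illustration: candidate recurrent set G = {(x, y) | x >= 0 and y >= 0}
    # for the loop "while x >= 0: x = x + y".

    def guard(x, y):
        return x >= 0                      # loop condition

    def body(x, y):
        return x + y, y                    # one iteration of the loop body

    def in_candidate_set(x, y):
        return x >= 0 and y >= 0           # candidate recurrent set G

    def closed_under_body(samples):
        # Sanity-check on sample states: every sampled state in G satisfies the
        # guard and steps back into G, so an execution that enters G never leaves it.
        return all(guard(x, y) and in_candidate_set(*body(x, y))
                   for (x, y) in samples if in_candidate_set(x, y))

    print(closed_under_body([(0, 0), (1, 2), (5, 0), (-1, 3)]))  # True

Showing that G is reachable from some initial state (e.g., x = 1, y = 2) then witnesses a non-terminating execution, which is exactly the role a recurrent set plays in a non-termination proof.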
132

Heuristically guided constraint satisfaction for AI planning

Judge, Mark January 2015 (has links)
Standard Planning Domain Definition Language (PDDL) planning problem definitions may be reformulated and solved using alternative techniques. One such paradigm, Constraint Programming (CP), has successfully been used in various planner architectures in recent years. The efficacy of a given constraint reformulation depends on the encoding method, the search technique(s) employed, and the consequent amount of propagation. Despite advances in this area, constraint-based planners have failed to match the performance of other approaches. One reason for this is that the structural information that is implicit in the domain/problem instance description is lost in the reformulation process. Hence, to achieve better performance, a constraint-based planner should have an appropriate encoding and a search procedure informed by the structure of the original problem. This thesis describes a system that aims to improve planner performance by employing such methods. The planner uses a table-constraint encoding together with a new variable/value ordering heuristic; this ordered, goal-oriented guidance reduces the search space and better directs the search focus. The system also makes use of novel meta-variable techniques for goal locking and resource assignment, which improve propagation and prune the search space. This CP-based planning architecture is shown to perform well on a range of problem instances taken from recent International Planning Competitions (IPCs).
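As a rough, generic illustration of heuristic variable/value ordering in a backtracking constraint solver (a Python sketch with invented toy domains and constraints, not the planner or encoding described above), the solver below picks the most constrained variable first and tries values that also appear in the goal before other values:

    # Generic sketch of heuristic variable/value ordering in a backtracking CSP solver.

    def backtrack(assignment, domains, constraints, goal_values):
        if len(assignment) == len(domains):
            return assignment
        # Variable ordering: pick the unassigned variable with the fewest remaining values.
        var = min((v for v in domains if v not in assignment),
                  key=lambda v: len(domains[v]))
        # Value ordering: try values that also appear in the goal first.
        for val in sorted(domains[var], key=lambda x: x not in goal_values):
            assignment[var] = val
            if all(c(assignment) for c in constraints):
                result = backtrack(assignment, domains, constraints, goal_values)
                if result is not None:
                    return result
            del assignment[var]
        return None

    domains = {"a": {1, 2, 3}, "b": {1, 2}, "c": {2, 3}}
    constraints = [lambda asg: len(set(asg.values())) == len(asg)]   # all-different
    print(backtrack({}, domains, constraints, goal_values={2, 3}))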
133

Incremental model-to-text transformation

Ogunyomi, Babajide J. January 2016 (has links)
Model-driven engineering (MDE) promotes the use of abstractions to simplify the development of complex software systems. Through several model management tasks (e.g., model verification, refactoring, model transformation), many software development tasks can be automated. For example, model-to-text (M2T) transformations are used to produce textual development artefacts (e.g., documentation, configuration scripts, code) from underlying source models. Despite the importance of M2T transformation, contemporary M2T languages lack support for developing transformations that scale. As MDE is applied to systems of increasing size and complexity, the lack of scalable M2T transformations and other model management tasks hinders industrial adoption. This is largely due to the fact that model management tools do not support efficient propagation of changes from models to other development artefacts. As such, the re-synchronisation of generated textual artefacts with underlying system models can take a considerable amount of time to execute due to redundant re-computation. This thesis investigates scalability in the context of M2T transformation and proposes two novel techniques that enable efficient incremental change propagation from models to generated textual artefacts. In contrast to the existing incremental M2T transformation technique, which relies on model differencing, our techniques employ a fundamentally different approach to incremental change propagation: they use a form of runtime analysis that identifies the impact of source model changes on generated textual artefacts. The structures produced by this runtime analysis are used to perform efficient incremental (scalable) transformations. This claim is supported by the results of an empirical evaluation, which show that the techniques proposed in this thesis can attain an average reduction of 60% in transformation execution time compared to non-incremental (batch) transformation.
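The core idea of a runtime-analysis-based incremental M2T transformation can be sketched roughly as follows (a minimal Python illustration with invented structures, not the actual technique or tool from the thesis): while a template runs, record which model properties it reads; when the model later changes, regenerate only the artefacts whose recorded trace overlaps the changed properties.

    # Minimal sketch: property-access tracing during generation drives incremental regeneration.

    class TracingModel:
        def __init__(self, data):
            self.data = data
            self.trace = set()

        def get(self, element, prop):
            self.trace.add((element, prop))      # record every property read
            return self.data[element][prop]

    def generate(model, element):
        model.trace.clear()
        text = f"class {model.get(element, 'name')} has {model.get(element, 'attrs')} attributes"
        return text, set(model.trace)

    model = TracingModel({"Order": {"name": "Order", "attrs": 3}})
    text, trace = generate(model, "Order")

    changed = {("Order", "attrs")}               # a later model change
    needs_regeneration = bool(trace & changed)   # True: rerun only this template
    print(needs_regeneration)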
134

Towards scalable model indexing

Barmpis, Konstantinos January 2016 (has links)
Model-Driven Engineering (MDE) is a software engineering discipline promoting models as first-class artefacts of the software lifecycle. It offers increased productivity, consistency, maintainability and reuse by using these models to generate other necessary products, such as program code or documentation. As such, persisting, accessing, manipulating, transforming and querying such models needs to be efficient in order to maintain the various benefits MDE can offer. Scalability is often identified as a bottleneck for potential adopters of MDE, as large-scale models need to be handled seamlessly, without causing disproportionate losses in performance or limiting the ability of multiple stakeholders to work simultaneously on the same collection of large models. This work identifies the primary scalability concerns of MDE and tackles those related to the querying of large collections of models in collaborative modeling environments; it presents a novel approach whereby information contained in such models can be efficiently retrieved, orthogonally to the formats in which the models are persisted. This approach, coined model indexing, leverages file-based version control systems for storing models, while allowing developers to efficiently query models without needing to retrieve them from remote locations or load them into memory beforehand. Empirical evidence gathered during the course of the research project is then detailed, providing confidence that such novel tools and technologies can mitigate these specific scalability concerns; the results obtained are promising, offering large improvements in the execution time of certain classes of queries, which can be further optimized by use of caching and database indexing techniques. The architecture of the approach is also empirically validated, by virtue of integration with various state-of-the-art modeling and model management tools, and so is the correctness of the various algorithms used in this approach.
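A much-simplified sketch of what a model index does (illustrative Python with hypothetical structures, not the actual implementation): elements are indexed by type and attribute values as models are committed, so that later queries are answered from the index rather than by loading and traversing the model files.

    from collections import defaultdict

    index = defaultdict(list)

    def index_element(file, elem_id, elem_type, attrs):
        # Record where elements of each type, and with each attribute value, live.
        index[("type", elem_type)].append((file, elem_id))
        for key, value in attrs.items():
            index[("attr", key, value)].append((file, elem_id))

    index_element("orders.xmi", "e1", "Class", {"name": "Order", "abstract": False})
    index_element("orders.xmi", "e2", "Class", {"name": "Item", "abstract": True})

    # "Find all abstract classes", answered from the index without touching orders.xmi:
    hits = set(index[("type", "Class")]) & set(index[("attr", "abstract", True)])
    print(hits)   # {('orders.xmi', 'e2')}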
135

Engineering an adaptive and socially-aware feedback acquisition

Almaliki, Malik Rajab January 2015 (has links)
Self-adaptive software systems are characterised by their ability to monitor changes in their dynamic environment and react to these changes when needed. Adaptation is driven by changes in the internal state of the system and in its external environment. Social Adaptation is a kind of adaptation that gives users' feedback a primary role in shaping adaptation decisions: it infers and employs users' collective judgement on the alternative behaviours of a system as the main driver in tailoring adaptation decisions. Users' collective judgement is determined through individual users' feedback collected during the lifetime of the software. Social Adaptation still lacks systematic and efficient engineering mechanisms for the acquisition of users' feedback. The goal of this thesis is to devise an engineering method for a systematic and adaptive acquisition of users' feedback. Given the various contextual information that could influence how feedback should be collected from users, this thesis views the acquisition process itself as an adaptive process, whose goal is to optimize the quality of obtained feedback without affecting users' experience. In order to achieve this goal, several empirical studies with software engineering experts and end-users have been conducted. These helped gain insights into how the role of users' feedback is perceived by software experts and how users behave and react to feedback acquisition. The outcomes of the empirical studies are then exploited to achieve the aim of the thesis. The findings suggest that users' behaviour towards feedback acquisition varies greatly, and that an adaptive feedback acquisition is needed to cater for these differences and to improve users' satisfaction, feedback quality and software success. To tackle this problem, the concept of Persona is employed to help software engineers understand the various users' behaviours and improve their ability to design feedback acquisition techniques more efficiently. The personas are developed from a mixture of the qualitative and quantitative studies conducted throughout this thesis. In addition, this thesis proposes PAFA, a Persona-based method for the systematic design of Adaptive Feedback Acquisition, and reports on its evaluation. Finally, this thesis also contributes to the knowledge of the software engineering community on developing systematic ways for feedback engineering, which are hoped to lead to better-quality feedback and a maintained user experience.
136

An intelligent fault tolerant multi-agent framework for automated node monitoring and software deployment

Manzoor, Umar January 2011 (has links)
Computer networks today are far more complex than in the 1980s, and managing such networks is a challenging job for the network management team. With the ever-growing complexity of computer networks and the limitations of the available assistance software/tools, it has become difficult, hectic and time-consuming for the network management team to execute tasks such as traffic monitoring, node monitoring, performance monitoring and software deployment over the network. To address these issues, researchers as well as leading IT companies have moved towards a new paradigm called Autonomic Computing, whose main aim is the development of self-managing systems. An autonomic system makes decisions autonomously, constantly optimizes its status and adapts itself to changing conditions. This research proposes a new autonomic framework based on the multi-agent paradigm for autonomous network management. In this study, we particularly focused on monitoring node activities and on software deployment; the aims were 1) to minimize the human interaction required to perform these tasks, which optimizes task processing time and reduces human resource requirements, and 2) to overcome some of the major problems related to these tasks (such as autonomous monitoring and autonomous installation of any type/kind of software). The proposed framework is fully autonomous, has an effective mechanism for achieving the said tasks and is based on a layered architecture. Once initialized with given rules/domain knowledge, it accomplishes the task(s) autonomously, without human interaction/intervention. It uses mobile agents for task execution, and faults/failures can affect the performance of the system; therefore, to make the system robust, a fault tolerance mechanism is incorporated at different levels of the system. The framework is implemented in Java using the Java Agent Development (JADE) framework and supports platform independence; however, it has been tested and evaluated only in a Microsoft Windows environment. In this research, the major challenges faced were 1) capturing unknown malicious applications running over the network, 2) developing a generic approach that works for any type/kind of software set, 3) automatically generating the events required in the software deployment process, and 4) developing an efficient approach for transferring application setups over the network. The first challenge was related to monitoring node activities, and was addressed by analyzing application content (i.e., text, images and video) using text analysis and image processing algorithms. A domain-specific ontology was developed and populated with the content of known malicious applications for categorization purposes. The concepts extracted in the content analysis phase were mapped to domain-specific ontology concepts and assigned a score, and the application was assigned the ontology class (if any) with the highest score. The other challenges were related to software deployment and were addressed by launching the application setup autonomously; for each step, the window content (i.e., text and controls) was extracted, filtered using a text processing algorithm and classified using a rule-based classifier. After classification, the appropriate window event was generated autonomously. The reason for using a rule-based classifier is that the software deployment process is standardized and every installer follows the same standard.
Furthermore, an exponential file transfer algorithm was incorporated in the framework to transfer application setups smartly and efficiently over the network. We ran this system on an experimental basis on the university campus, in seven labs each equipped with 20-300 PCs running Microsoft Windows (any version). For the automated node monitoring evaluation, one hundred volunteers were initially selected for experimentation in these labs, and all of them were told about the system. After the initial experimentation, we announced the system on the university blackboard and on the walls/doors of the labs, and opened the labs to all users. The announcement clearly stated: "Your activities will be monitored and the collected data will be used only for educational/research purpose". Activities were monitored for one month and the monitored data was stored in a database for analysis. For the software deployment evaluation, some popular software packages (such as Microsoft Office, Adobe Reader and Firefox) were deployed. The proposed framework has been tested on different scenarios, and the results show that the overall performance of the proposed approach, in terms of efficiency and time, is far better than that of existing approaches/frameworks.
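The window-classification step of the deployment agent can be illustrated with a small sketch (hypothetical keyword rules in Python, not the actual classifier): the text extracted from the current installer window is matched against simple rules, and the matching rule determines which window event to generate next.

    # Rough sketch: rule-based classification of installer window text into UI events.

    RULES = [
        ({"license", "agreement", "accept"}, "click_accept"),
        ({"destination", "folder", "browse"}, "click_next"),
        ({"ready", "install"},               "click_install"),
        ({"completed", "finish"},            "click_finish"),
    ]

    def classify_window(window_text):
        words = set(window_text.lower().split())
        for keywords, event in RULES:
            if keywords & words:          # any rule keyword present in the window text
                return event
        return "click_next"               # default action for unrecognised windows

    print(classify_window("License Agreement - please Accept the terms"))   # click_accept
    print(classify_window("Installation Completed. Click Finish to exit"))  # click_finish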
137

Preferential description logics : reasoning in the presence of inconsistencies

Deane, Graham January 2016 (has links)
The Web Ontology Languages define a rich machine-readable language for knowledge representation on the World Wide Web. The current generation is based on Description Logics, a family of decidable logics that offer low complexity of reasoning. Due to the principle of explosion (from a contradiction, anything follows), inconsistencies prevent meaningful inference by classical reasoners. However, locating and repairing inconsistencies can be resource intensive. This thesis presents an inconsistency-tolerant semantics for the Description Logic ALC, called Preferential ALC (p-ALC). A p-ALC knowledge base is composed of defeasible and non-defeasible axioms. The defeasible ABox and TBox are labelled with weights that reflect a relative measure of confidence in each axiom and provide a mechanism to "arbitrate" between inconsistencies. Entailment is defined through the notion of preferred interpretations, which minimise the total weight of the unsatisfied axioms. We introduce a tableau algorithm for p-ALC in which the open branches correspond to preferred interpretations. We prove that the algorithm is sound and complete, and terminates for any input knowledge base, and we show how it can be used to compute p-ALC entailment by refutation. The proposed p-ALC differs from existing semantics, designed for classical reasoners, that obtain inferences from inconsistent knowledge bases: for instance, the paraconsistent and repair semantics lack a mechanism for "arbitration" of inconsistency, and the mechanism included in possibilistic logic results in a logic with a weak consequence relation. We present an implementation of the algorithm using answer set programming (ASP) that is solved incrementally and exploits the optimisation of answer sets to identify preferred interpretations of p-ALC. The well-defined semantics of ASP enabled us to prove that the implementation is correct. The implementation is also verified empirically, and its performance is evaluated against a set of synthetically generated knowledge bases into which inconsistency is introduced.
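The idea of preferred interpretations can be illustrated on a toy propositional example (a brute-force Python sketch, not Description Logic and not the tableau or ASP implementation described above): interpretations satisfying the non-defeasible facts are ranked by the total weight of the defeasible axioms they violate, and those with minimal weight are the preferred ones.

    from itertools import product

    # Defeasible axioms as (weight, constraint over a truth assignment).
    axioms = [
        (3, lambda m: (not m["bird"]) or m["flies"]),          # "birds fly", weight 3
        (5, lambda m: (not m["penguin"]) or not m["flies"]),   # "penguins do not fly", weight 5
    ]
    # Non-defeasible facts: Tweety is a bird and a penguin (jointly inconsistent with both axioms).
    facts = [lambda m: m["bird"], lambda m: m["penguin"]]

    atoms = ["bird", "penguin", "flies"]
    models = []
    for values in product([True, False], repeat=len(atoms)):
        m = dict(zip(atoms, values))
        if all(f(m) for f in facts):
            models.append(m)

    def violation_weight(m):
        return sum(w for w, axiom in axioms if not axiom(m))

    best = min(violation_weight(m) for m in models)
    preferred = [m for m in models if violation_weight(m) == best]
    print(preferred)   # the heavier axiom wins: 'flies' is False in the preferred interpretation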
138

A model driven approach for software reliability prediction

Rodrigues, Genaina Nunes January 2008 (has links)
Software reliability, one of the major software quality attributes, quantitatively expresses the continuity of correct service delivery. In current practice, reliability models are typically measurement-based and mostly employed in isolation at a late stage of the software development process, after architectural decisions have been made that cannot easily be reversed. Early software reliability prediction models are often insufficiently formal to be analyzable and are not usually connected to the target system. We postulate that it is possible to overcome these issues by supporting software reliability engineering from requirements to deployment using scenario specifications. We contribute a novel reliability prediction technique that takes into account the component structure exhibited in the scenarios and the concurrent nature of component-based systems by extending scenario specifications to model (1) the probability of component failure, and (2) scenario transition probabilities. These scenarios are subsequently transformed into enhanced behaviour models to compute the system reliability. Additionally, we enable the integration between reliability and development models through profiles that extend the core Unified Modelling Language (UML). By means of a reliability profile, the architecture of a component-based system can express both method invocations and deployment relationships between the application components in one environment. To facilitate reliability prediction, and to determine the impact of concurrency on system reliability, we have extended the Labelled Transition System Analyser (LTSA) tool, implementing a plugin for reliability analysis. Finally, we evaluate our analysis technique with a case study focusing on Condor, a distributed job scheduler and resource management system. The purpose of the case study is to evaluate the efficacy of our analysis technique and to compare it with other reliability techniques.
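At its simplest, the kind of computation such a prediction performs can be sketched as follows (illustrative numbers and a deliberately simplified Python model, not the scenario-to-behaviour-model transformation of the thesis): system reliability is estimated as the probability-weighted combination of the reliabilities of the components each scenario exercises.

    # Back-of-the-envelope sketch with made-up figures.

    component_reliability = {"Client": 0.999, "Scheduler": 0.995, "Worker": 0.990}

    # Each scenario: (probability of the scenario being taken, components it exercises).
    scenarios = [
        (0.7, ["Client", "Scheduler", "Worker"]),   # normal job submission
        (0.3, ["Client", "Scheduler"]),             # status query only
    ]

    def scenario_reliability(components):
        r = 1.0
        for c in components:
            r *= component_reliability[c]           # all exercised components must succeed
        return r

    system_reliability = sum(p * scenario_reliability(cs) for p, cs in scenarios)
    print(round(system_reliability, 4))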
139

From B to SPARK : an extension to a formal design environment

Storey, A. C. January 1995 (has links)
The B Method is a complete formal development process for mathematically transforming software systems from specification into code. This thesis provides the reader with an overview of that method, including a description of the development notation (the Abstract Machine Notation) and its application to a case study. The main part of the thesis considers ways in which the B Method may be used and extended so as to provide a platform for the generation of SPARK code. SPARK is a subset of Ada recommended for implementing safety-critical software. Details are given of how this translation process is achieved and how it is implemented by a set of rule bases which can be executed by a theorem-proving environment called the B-Tool. The current tool support for the B Method (the B Toolkit) contains a translator from B into C code. The translation process from B to C is compared with the B to SPARK translation process outlined in this thesis, and conclusions are drawn as to how user confidence is increased by translation into a programming language which is type-rich, supports the structuring of programs and may be statically analysed.
140

Improving implicit parallelism

Calderon, Jose Manuel January 2015 (has links)
We propose a new technique for exploiting the inherent parallelism in lazy functional programs. Known as implicit parallelism, the goal of writing a sequential program and having the compiler improve its performance by determining what can be executed in parallel has been studied for many years. Our technique abandons the idea that a compiler should accomplish this feat in ‘one shot’ with static analysis and instead allows the compiler to improve upon the static analysis using iterative feedback. We demonstrate that iterative feedback can be relatively simple when the source language is a lazy, purely functional programming language. We present three main contributions to the field: the automatic derivation of parallel strategies from a demand on a structure, and two new methods of feedback-directed auto-parallelisation. The first method treats the runtime of the program as a black box and uses the ‘wall-clock’ time as a fitness function to guide a heuristic search on bitstrings representing the parallel setting of the program. The second feedback approach is profile-directed: it allows the compiler to use profile data gathered by the runtime system as the program executes, and so to determine which threads are not worth the overhead of creating. Our results show that feedback-directed compilation can be a good source of refinement for static analysis techniques that struggle to account for the cost of a computation. This lifts the burden of ‘is this parallelism worthwhile?’ away from the static phase of compilation and onto the runtime, which is better equipped to answer the question.
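The first, black-box feedback method can be sketched as a simple hill-climbing loop (a Python sketch with a stand-in timing function; the real system compiles and times the actual program): each bit of the bitstring switches one candidate parallel site on or off, and the measured run time is the fitness.

    import random

    def run_program(bits):
        # Stand-in for "compile with this parallelism setting, run it, and
        # return the measured wall-clock time" (illustrative cost model only).
        useful = [True, False, True, False]        # pretend only sites 0 and 2 pay off
        return sum(0.8 if b and u else (1.0 if not b else 1.3)
                   for b, u in zip(bits, useful))

    def hill_climb(n_sites, iterations=50):
        best = [random.choice([True, False]) for _ in range(n_sites)]
        best_time = run_program(best)
        for _ in range(iterations):
            candidate = best[:]
            i = random.randrange(n_sites)
            candidate[i] = not candidate[i]        # flip one parallel site on/off
            t = run_program(candidate)
            if t < best_time:                      # keep the setting if it "ran" faster
                best, best_time = candidate, t
        return best, best_time

    print(hill_climb(4))                           # tends towards [True, False, True, False]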
