121

An extensible static analysis framework for automated analysis, validation and performance improvement of model management programs

Wei, Ran January 2016 (has links)
Model Driven Engineering (MDE) is a state-of-the-art software engineering approach which adopts models as first-class artefacts. In MDE, modelling tools and task-specific model management languages are used to reason about the system under development and to (automatically) produce software artefacts such as working code and documentation. Existing tools that provide state-of-the-art model management languages lack support for automatic static analysis for error detection (especially when models defined in various modelling technologies are involved within a multi-step MDE development process) and for performance optimisation (especially when very large models are involved in model management operations). This thesis investigates the hypothesis that static analysis of model management programs in the context of MDE can help with the detection of potential runtime errors and can also be used to achieve automated performance optimisation of such programs. To assess the validity of this hypothesis, a static analysis framework for the Epsilon family of model management languages is designed and implemented. The static analysis framework is evaluated in terms of its support for analysis of task-specific model management programs involving models defined in different modelling technologies, and its ability to improve the performance of model management programs operating on large models.
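To make the kind of check such a framework performs concrete, here is a minimal Python sketch, with invented names and a toy metamodel rather than the thesis's actual Epsilon implementation: it resolves property navigations in a model management program against the metamodel, flagging undefined features before the program is ever run against a (potentially very large) model.

```python
from dataclasses import dataclass, field

@dataclass
class MetaClass:
    name: str
    features: set[str] = field(default_factory=set)

@dataclass
class PropertyCall:
    variable_type: str   # statically inferred type of the receiver
    feature: str         # the property being navigated, e.g. `c.superTypes`
    line: int

def check_property_calls(calls: list[PropertyCall],
                         metamodel: dict[str, MetaClass]) -> list[str]:
    """Report navigations to features the metamodel does not define."""
    errors = []
    for call in calls:
        mclass = metamodel.get(call.variable_type)
        if mclass is None:
            errors.append(f"line {call.line}: unknown type '{call.variable_type}'")
        elif call.feature not in mclass.features:
            errors.append(f"line {call.line}: '{call.variable_type}' "
                          f"has no feature '{call.feature}'")
    return errors

metamodel = {"Class": MetaClass("Class", {"name", "superTypes", "operations"})}
calls = [PropertyCall("Class", "superTypes", 3),
         PropertyCall("Class", "subTypes", 7)]   # misspelt feature, caught statically
print(check_property_calls(calls, metamodel))
```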
122

Model checking of state-rich formalisms (by linking to combination of state-based formalism and process algebra)

Ye, Kangfeng January 2016 (has links)
Computer-based systems are becoming more and more complex, and assuring the dependability of these systems as their complexity grows is a grand challenge, especially for high-integrity and safety-critical systems that require extremely high dependability. Circus is a formal language designed to tackle this problem by providing precision preservation and correctness assurance. It is a combination of Z, CSP, refinement calculus and Dijkstra's guarded commands. A main objective of Circus is to provide a calculational style of refinement that differentiates it from other integrated formal methods. Looseness, which is introduced by constants and uninitialised state space in Circus, and nondeterminism, which is introduced by disjunctive operations and CSP operators, make model checking Circus more difficult than model checking CSP or Z alone. Current approaches have a number of disadvantages, such as loss of nondeterminism and divergence information, deterioration of abstraction, and a lack of appropriate tools to support automation. In this thesis, we present a new approach to model-check state-rich formalisms by linking them to a combination of a state-based formalism and a process algebra. Specifically, the approach illustrated in this thesis is to model-check Circus by linking it to CSP || B. We can then use ProB, a model checker for B, Event-B and CSP || B, among others, to check the resultant CSP || B model. A formal link from Circus to CSP || B is defined in our work. Our solution first rewrites Circus models so that all interactions between the state part and the behavioural part occur only through schema expressions, then translates the state part to B and the behavioural part to CSP. In addition, since the semantics of Circus is based on Hoare and He's Unifying Theories of Programming (UTP), in order to prove the soundness of our link we also give a UTP semantics to CSP || B; because both ends of the link then have their semantics defined in UTP, they are comparable. Furthermore, to support an automatic translation process, a translator was developed; it supports almost all constructs defined in the link, though with some limitations. Finally, three case studies illustrate the usability of our model checking solution as well as its limitations. The bounded reactive buffer is a typical Circus example: by our model checking approach, basic properties such as deadlock freedom and divergence freedom were verified for both the specification and the implementation with a small buffer size, and the implementation was verified to be a refinement of the specification in terms of traces and failures. In the Electronic Shelf Edge Label (ESEL) case study, we demonstrate how to use Circus to model different development stages of a system, from the specification to two more specific systems. We verified basic properties and sequential refinements of the three models as well as three application-related properties; again, only systems with a limited number of ESELs were verified. Finally, we present the steam boiler case study, a real industrial control-system problem. Though our solution cannot model check the steam boiler model completely due to its large state space, it still proves its benefits: through our model checking approach we found a substantial number of errors in the original Circus solution, and with the counterexamples produced during animation and model checking we corrected all of them.
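For reference, the traces and failures refinement checks mentioned in the buffer case study are the standard CSP notions (textbook definitions, not specific to this thesis):

```latex
% Traces refinement: every behaviour of the implementation is allowed by the spec.
Spec \sqsubseteq_T Impl \iff traces(Impl) \subseteq traces(Spec)

% Failures refinement additionally bounds the nondeterminism the implementation
% may exhibit (a failure is a trace paired with a set of refusable events).
Spec \sqsubseteq_F Impl \iff traces(Impl) \subseteq traces(Spec)
                        \land failures(Impl) \subseteq failures(Spec)
```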
123

Enhancing legacy software system analysis by combining behavioural and semantic information sources

Cutting, David January 2016 (has links)
Computer software is, by its very nature, highly complex and invisible, yet subject to near-continual pressure to change. Over time the development process has become more mature and less risky, in large part due to the concept of software traceability: the ability to relate software components back to their initial requirements and to each other. Such traceability aids tasks such as maintenance by facilitating the prediction of “ripple effects” that may result from a change, and aids comprehension of software structures in general. Many organisations, however, have large amounts of software for which little or no documentation exists; the original developers are no longer available, and yet this software still underpins critical functions. Such “legacy software” can therefore represent a high risk when changes are required. Consequently, large amounts of effort go into attempting to comprehend and understand legacy software. The most common way to accomplish this, given that reading the code directly is hugely time consuming and near-impossible, is to reverse engineer the code, usually into a representative projection such as a UML class diagram. Although a wide range of tools and approaches exist, there was no empirical way to compare them or validate new developments, so a need was identified to define and create the Reverse Engineering to Design Benchmark (RED-BM), which was then applied to a number of industrial tools. The measured performance of these tools varies from 8.8% to 100%, demonstrating both the effectiveness of the benchmark and the questionable performance of several tools. In addition to the structural relationships detectable through static reverse engineering, other sources of information are available with the potential to reveal other types of relationships, such as semantic links. One such source is the mining of source code repositories, which can be analysed to find components within a software system that have, historically, commonly been changed together during the evolution of the system, and from the strength of that co-change to infer a semantic link. An approach was implemented to mine such semantic relationships from repositories, and relationships were found beyond those expressed by static reverse engineering, including groups of relationships potentially suitable for clustering. To allow for the general use of multiple information sources to build traceability links between software components, a uniform approach was defined and illustrated, including rules and formulas for combining sources. The uniform approach was implemented in the field of predictive change impact analysis using reverse engineering and repository mining as information sources. This implementation, the Java Code Relationship Analysis (jcRA) package, was then evaluated against an industry-standard tool, JRipples. Depending on the target, the combined approach is able to outperform JRipples in detecting potential impacts, albeit at the risk of over-matching (a high number of false positives and high overall class coverage on some targets).
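The co-change mining idea admits a compact illustration. The following Python sketch is a simplified stand-in rather than the jcRA implementation; the commit data and confidence threshold are invented. It counts how often pairs of files change together and infers a semantic link whenever that count is high relative to how often each file changes at all.

```python
from collections import Counter
from itertools import combinations

def co_change_links(commits: list[list[str]], min_confidence: float = 0.6):
    """Yield (a, b, confidence) where changing `a` historically implied `b`."""
    file_changes = Counter()   # how often each file was changed
    pair_changes = Counter()   # how often each pair was changed together
    for files in commits:
        file_changes.update(set(files))
        for a, b in combinations(sorted(set(files)), 2):
            pair_changes[(a, b)] += 1
    for (a, b), together in pair_changes.items():
        for src, dst in ((a, b), (b, a)):   # the relation is directional
            confidence = together / file_changes[src]
            if confidence >= min_confidence:
                yield src, dst, confidence

commits = [["Order.java", "Invoice.java"],
           ["Order.java", "Invoice.java", "Util.java"],
           ["Util.java"]]
for src, dst, conf in co_change_links(commits):
    print(f"{src} -> {dst} (confidence {conf:.2f})")
```

Thresholds of this kind are the usual way such mining approaches trade recall against the over-matching noted above.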
124

Domain-specific research idea creation

Jing, D. January 2016 (has links)
The research presented in this thesis is aimed at providing an idea creation method supported by Creative Computing and related techniques from software engineering. On one hand, whether autonomous idea creation is possible remains a controversial topic; on the other hand, idea creation has become extremely important in our lives. Hence, this study strives to contribute to automatic ideation, in which computers, instead of human beings, generate creative ideas. A creative idea is an idea that is not only useful but also novel and surprising. To achieve this goal, the study concentrates on creativity in ideas, which has received little attention in both the research and industrial communities. This thesis therefore proposes a novel idea creation approach with three phases: Knowledge Extraction/Reuse, Idea Elements Generation and Evolving into Creative Ideas. In particular, the thesis provides four original contributions. The first is a novel approach to creating ideas for specific domains, based on simulated human and machine ideation processes and Creative Computing. The second is a set of knowledge extraction rules, based on abstraction and ontology techniques, for extracting useful and reusable domain knowledge from relevant text data. The third consists of reasoning rules based on identified atomic reasoning operations, employing ontology, description logics, inference techniques, creativity principles and the atomic operations themselves. The fourth is an inference engine, implementing the main component of the Idea Elements Generation phase and the reasoning rules, that can be used in multiple applications for different domains. Combined with the designed creativity evaluation metrics, with their identified creativity elements, sub-elements and corresponding algorithms, the proposed idea creation approach provides core techniques to support every phase of creating new ideas. A prototype system with three applications was developed following the proposed approach as a proof of concept for the contributions of the study.
125

Runtime quantitative verification of self-adaptive systems

Gerasimou, Simos January 2016 (has links)
Software systems used in mission- and business-critical applications in domains including defence, healthcare, and finance must comply with strict dependability, performance, and other Quality-of-Service (QoS) requirements. Self-adaptive systems achieve this compliance under changing environmental conditions, evolving requirements and system failures by using closed-loop control to modify their behaviour and structure in response to these events. Runtime quantitative verification (RQV) is a mathematically based approach that implements the closed-loop control of self-adaptive systems. Using runtime observations of a system and its environment, RQV updates stochastic models whose formal analysis underpins the adaptation decisions made within the control loop. The approach can identify and, under certain conditions, predict violations of QoS requirements, and can drive self-adaptation in ways guaranteed to restore or maintain compliance with these requirements. Despite its merits, RQV has significant computation and memory overheads, which restrict its applicability to small systems and to adaptations affecting only the configuration parameters of the system. In this thesis, we introduce RQV variants that improve the efficiency and scalability of the approach and extend its applicability to larger and more complex self-adaptive software systems, and to adaptations that modify the structure of a system. First, we integrate RQV with established efficiency-improvement techniques from other software engineering areas: we use caching of recent analysis results, limited lookahead to precompute suitable adaptations for potential future changes, and nearly-optimal reconfiguration to eliminate the need for an exhaustive analysis of the entire reconfiguration space. Second, we introduce an RQV variant that incorporates evolutionary algorithms into the RQV process, facilitating efficient search through large reconfiguration spaces and enabling adaptations that include structural changes. Third, we propose an RQV-driven approach that decentralises the control loops in distributed self-adaptive systems. Finally, we devise an RQV-based methodology for the engineering of trustworthy self-adaptive systems. We evaluate the proposed RQV variants using prototype self-adaptive systems from several application domains, including an embedded system for unmanned underwater vehicles and a foreign-exchange service-based system. Our results, subject to the adaptation scenarios used in the evaluation, demonstrate the effectiveness and generality of the new RQV variants.
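As an illustration of the caching variant only, here is a minimal Python sketch; the reliability formula is an invented toy stand-in for the stochastic-model analysis that RQV actually performs.

```python
from functools import lru_cache

@lru_cache(maxsize=256)                  # cache of recent analysis results
def verify(failure_rate: float, workload: int) -> bool:
    """Stand-in for an expensive probabilistic model-checking run deciding
    whether a QoS requirement (here: P(violation) < 0.01) holds."""
    p_violation = 1 - (1 - failure_rate) ** workload   # toy reliability model
    return p_violation < 0.01

def on_observation(raw_rate: float, workload: int) -> bool:
    # Quantise noisy runtime observations so nearby environment states hit
    # the same cache entry instead of triggering a fresh analysis.
    return verify(round(raw_rate, 4), workload)

print(on_observation(0.000104, 50))   # analysed once...
print(on_observation(0.000096, 50))   # ...served from the cache thereafter
```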
126

Denotational semantics of mobility in Unifying Theories of Programming (UTP)

Ekembe Ngondi, Gerard January 2016 (has links)
UTP promotes the unification of programming theories and has been used successfully to give denotational semantics to imperative programming, the CSP process algebra, and the Circus family of programming languages, amongst others. In this thesis, we present an extension of UTP-CSP (the UTP semantics for CSP) with the concept of mobility. Mobility is concerned with the movement of an entity from one location (the source) to another (the target). We deal with two forms of mobility:
• Channel mobility, concerned with the movement of links between processes, which models networks with a dynamic topology; and
• Strong process mobility, which requires suspending a running process first, then moving both its code and its state upon suspension, and finally resuming the process on the target upon reception.
Concerning channel mobility:
• We model channels as concrete entities in CSP, and show that this does not affect the underlying CSP semantics.
• A requirement is that a process may not own a channel prior to receiving it. In CSP, the set of channels owned by a process (called its interface) is static by definition. We argue that making the interface variable introduces a paradox. We resolve this by introducing a new concept, the capability of a process, and show how it relates to the interface. We then define channel mobility as the operation that changes the interface of a process, but not its capability. We also provide a functional link between static CSP and its mobile version.
Concerning strong mobility, we provide:
• The first extension of CSP with jump features, using the concept of continuations.
• A novel semantics for the generic interrupt (a parallel-based interrupt operator), using the concept of Bulk Synchronous Parallelism. We then define strong mobility as a specific interrupt operator in which the interrupt routine migrates the suspended program.
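As background for readers, the UTP theory of designs that underlies these semantics is defined as follows (standard Hoare-and-He material, not the thesis's mobile extension):

```latex
% A design P |- Q: if the program has started (ok) in a state satisfying the
% precondition P, then it terminates (ok') in a state satisfying the
% postcondition Q.
P \vdash Q \;\;\widehat{=}\;\; (ok \land P) \Rightarrow (ok' \land Q)
```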
127

A new framework and learning tool to enhance the usability of software

Almansour, Fahad January 2017 (has links)
Due to technological developments, apps (mobile applications) and web-based applications are now used daily by millions of people worldwide. Accordingly, such applications need to be usable by all groups of users, regardless of individual attributes. Software usability is thus a fundamental metric that needs to be evaluated in order to assess software efficiency, effectiveness, learnability and user satisfaction. Consequently, a new approach is required that both educates novice software developers in software evaluation methods and promotes the use of usability evaluation methods to create usable products. This research devised a development framework and learning tool in order to enhance overall awareness and assessment practice. Furthermore, the research also focuses on Usability Evaluation Methods (UEMs) with the objective of providing novice developers with support when making decisions pertaining to the use of learning resources. The proposed development framework and its associated learning resources are titled dEv (Design Evaluation), devised to address the three key challenges identified in the literature review and reinforced by the studies. These three challenges are: (i) the involvement of users in the initial phases of the development process, (ii) the mindset and perspectives of novice developers with regard to various issues arising from having too few UEMs available or too many, and (iii) the general lack of knowledge and awareness concerning the importance and value of UEMs. The learning tool was created in line with the investigation studies, feedback and novice developers' requirements in the initial stages of the development process. An iterative experimental approach was adopted, incorporating interviews and survey-based questionnaires and geared towards analysing the framework, the learning tool and their various effects. Two subsequent studies were carried out to test the approach adopted and provide insight into its results. The studies also reported on the approach's ability to affect novice developers' use of assessment methods and to overcome a number of the difficulties associated with UEM application. The suggested approach makes two distinct contributions: primarily, the integration of software evaluation and software development in the dEv framework, which encourages professionals to evaluate across all phases of development; secondly, the enhancement of developer awareness and insight with regard to evaluation techniques and their application.
128

Error detection and recovery in software development

Lopez, Tamara January 2016 (has links)
Software rarely works as intended when it is first written. Software engineering research has long been concerned with assessing why software fails and who is to blame, or why a piece of software is flawed and how to prevent such faults in the future. Errors are examined in the context of bugs, elements of source code that produce undesirable, unexpected and unintended deviations in behaviour. Though error is a prevalent, mature topic within software engineering, error detection and recovery are less well understood. This research uses rich qualitative methods to study error detection and recovery in professional software development practice. It has considered conceptual representations of error in software engineering research and trade literature. Using ethnographic principles, it has gathered accounts given by professional developers in interviews and in video-recorded paired interaction. Developers performing a range of tasks were observed, and findings were compared to theories of human error formed in psychology and safety science. Three empirical studies investigated error from the perspective of developers, reconstructing the view they hold when errors arise, to build a catalogue of active encounters with error in conceptual design, at the desk and after the fact. Analyses were structured to consider development holistically over time, rather than in terms of discrete tasks. By placing emphasis on “local rationality”, analytical focus was redirected from outcomes toward factors that influence performance. The resultant observations are assembled in an account of error handling in software development as personal and situated (in time and the developer’s environment), with implications for the changing nature of expertise.
129

Type inference in flexible model-driven engineering

Zolotas, Athanasios January 2016 (has links)
Model-driven Engineering (MDE) is an approach to software development that promises increased productivity and product quality. Domain models that conform to metamodels, both of which are core artefacts in MDE approaches, are manipulated to perform different development processes using specific MDE tools. However, domain experts, who have detailed domain knowledge, typically lack the technical expertise to transfer this knowledge using MDE tools. Flexible, or bottom-up, Model-driven Engineering is an emerging approach to domain and systems modelling that tackles this challenge by promoting the use of simple drawing tools to increase the involvement of domain experts in MDE processes. In this approach, no metamodel is created upfront; instead, the process starts with the definition of example models from which a draft metamodel is inferred. When complete knowledge of the domain has been acquired, a final metamodel is devised and a transition to traditional MDE approaches is possible. However, the lack of a metamodel that encodes the semantics of conforming models, and of tools that impose these semantics, brings some drawbacks, among them models with nodes that are unintentionally left untyped. In this thesis we propose approaches that use algorithms from three different research areas (classification, constraint programming and graph similarity) to help infer the types of such untyped nodes. We evaluate the proposed approaches on a number of randomly generated example models from 10 different domains, with results suggesting that the approaches could be used for type inference in either an automatic or a semi-automatic fashion.
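To make the classification route concrete, the sketch below trains a standard classifier on the nodes that the domain expert did type and predicts types for untyped nodes from simple structural features; the features, data and choice of decision tree are illustrative assumptions, not the thesis's algorithms.

```python
from sklearn.tree import DecisionTreeClassifier

# Feature vector per node: [number of attributes, number of contained nodes,
# number of incoming edges], a deliberately simple structural encoding.
typed_features = [[3, 0, 2],            # nodes the domain expert did type
                  [3, 1, 1],
                  [0, 4, 0],
                  [1, 5, 0]]
typed_labels = ["Task", "Task", "Process", "Process"]

classifier = DecisionTreeClassifier().fit(typed_features, typed_labels)

untyped_features = [[3, 0, 3]]          # a node left untyped in the drawing
print(classifier.predict(untyped_features))   # -> ['Task'] for this toy data
```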
130

An activity-centric approach to configuration work in distributed interaction

Houben, Steven January 2016 (has links)
The widespread introduction of new types of computing devices, such as smartphones, tablet computers, large interactive displays and even wearable devices, has led to setups in which users interact with a rich ecology of devices. These new device ecologies have the potential to introduce a whole new set of cross-device and cross-user interactions, as well as to support seamless distributed workspaces that facilitate coordination and communication with other users. Because of the distributed nature of this paradigm, there is an intrinsic difficulty and overhead in managing and using these kinds of complex device ecologies, which I refer to as configuration work: the effort required to set up, manage, communicate, understand and use information, applications and services that are distributed over all devices in use and people involved. Because current devices and their software are still document- and application-centric, they fail to capture and support the rich activities and contexts in which they are being used. This leaves users without a stable concept for cross-device information management, forcing them to perform a large amount of manual configuration work. In this dissertation, I explore an activity-centric approach to configuration work in distributed interaction. The central goal of this dissertation is to develop and apply concepts and ideas from Activity-Centric Computing to distributed interaction. Using the triangulation approach, I explore these concepts on a conceptual, empirical and technological level and present a framework and use cases for designing activity-centric configurations in multi-device information systems. The dissertation makes two major contributions. First, I introduce the term configuration work as an abstract analytical unit that describes and captures the problems and challenges of distributed interaction. Using both empirical data and related work, I argue that configuration work is composed of curation work, task resumption lag, mobility work, physical handling and articulation work. Using configuration work as a problem description, I operationalize Activity Theory and Activity-Centric Computing to mitigate and reduce configuration work in distributed interaction. By allowing users to interact with computational representations of their real-world activities, creating complex multi-user device ecologies and switching between cross-device information configurations become more efficient and more effective, with better support for users’ mental model of a multi-user and multi-device environment. Using activity configuration as a central concept, I introduce a framework that describes how digital representations of human activity can be distributed, fragmented and used across multiple devices and users. Second, I present a technical infrastructure and four applications that apply the concepts of activity configuration. The infrastructure is a general-purpose platform for the design, development and deployment of distributed activity-centric systems; it simplifies the development of such systems by wrapping complex distributed computing processes and services in high-level activity system abstractions. Using this infrastructure and conceptual framework, I describe four fully working applications that explore multi-device interactions in two specific domains: office work and hospital work. The systems are evaluated and tested with end-users in a number of lab and field studies.
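A minimal sketch of the central idea, with invented names rather than the dissertation's actual infrastructure API: an activity object captures its cross-device configuration so that it can be suspended and later resumed on a different device ecology, sparing the user the manual configuration work.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    # device id -> resources (documents, services) that device should show
    configuration: dict[str, list[str]] = field(default_factory=dict)

class ActivityManager:
    def __init__(self):
        self.suspended: dict[str, Activity] = {}

    def suspend(self, activity: Activity) -> None:
        """Capture the cross-device configuration for later resumption."""
        self.suspended[activity.name] = activity

    def resume(self, name: str, device_map: dict[str, str]) -> Activity:
        """Resume an activity on a new device ecology, remapping old
        device ids to the devices currently at hand."""
        old = self.suspended.pop(name)
        remapped = {device_map.get(dev, dev): res
                    for dev, res in old.configuration.items()}
        return Activity(old.name, remapped)

manager = ActivityManager()
manager.suspend(Activity("ward-round", {"tablet-1": ["patient-42-chart"],
                                        "wall-display": ["vitals-dashboard"]}))
resumed = manager.resume("ward-round", {"tablet-1": "tablet-7"})
print(resumed.configuration)
# {'tablet-7': ['patient-42-chart'], 'wall-display': ['vitals-dashboard']}
```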
