  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

New memetic algorithms with applications in bioinformatics

Πεταλάς, Γιάννης 20 September 2010 (has links)
232

Inference techniques in business intelligence

Τασουλής, Δημήτρης 20 September 2010 (has links)
233

Enriched coalgebraic modal logic

Wilkinson, Toby January 2013 (has links)
We formalise the notion of enriched coalgebraic modal logic, and determine conditions on the category V (over which we enrich) that allow an enriched logical connection to be extended to a framework for enriched coalgebraic modal logic. Our framework uses V-functors L: A → A and T: X → X, where L determines the modalities of the resulting modal logics, and T determines the coalgebras that provide the semantics. We introduce the V-category Mod(A, α) of models for an L-algebra (A, α), and show that the forgetful V-functor from Mod(A, α) to X creates conical colimits. The concepts of bisimulation, simulation, and behavioural metrics (behavioural approximations) are generalised to a notion of behavioural questions that can be asked of pairs of states in a model. These behavioural questions are shown to arise through choosing the category V to be constructed through enrichment over a commutative unital quantale (Q, ⊗, I) in the style of Lawvere (1973). Corresponding generalisations of logical equivalence and expressivity are also introduced, and expressivity of an L-algebra (A, α) is shown to have an abstract category-theoretic characterisation in terms of the existence of a so-called behavioural skeleton in the category Mod(A, α). In the resulting framework every model carries the means to compare the behaviour of its states, and we argue that this implies a class of systems is not fully defined until it is specified how states are to be compared or related.
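The behavioural-metric idea can be made concrete with a small sketch. The following is purely illustrative and not the thesis's construction: states of a toy deterministic Moore automaton are compared by iterating the metric functional from the everywhere-zero relation up to its least fixpoint, a "behavioural question" valued in the quantale ([0, 1], max). The automaton and all names are invented for the example.

```python
# Illustrative behavioural metric on a finite deterministic Moore automaton:
#   d(x, y) = max(|o(x) - o(y)|, max_a d(delta(x, a), delta(y, a)))
# computed by fixpoint iteration from the zero distance upwards.

def behavioural_distance(states, alphabet, output, delta, iters=50):
    d = {(x, y): 0.0 for x in states for y in states}
    for _ in range(iters):
        d = {(x, y): max(abs(output[x] - output[y]),
                         max(d[delta[x, a], delta[y, a]] for a in alphabet))
             for x in states for y in states}
    return d

# Toy automaton: p loops on output 0 forever; q reaches the output-1 state r.
states, alphabet = ['p', 'q', 'r'], ['a']
output = {'p': 0.0, 'q': 0.0, 'r': 1.0}
delta = {('p', 'a'): 'p', ('q', 'a'): 'r', ('r', 'a'): 'r'}
dist = behavioural_distance(states, alphabet, output, delta)
```

Although p and q produce the same output now, their future behaviour differs, so dist[('p', 'q')] is 1.0; bisimilar states get distance 0, recovering bisimilarity as the special case where the only distances are 0 and 1.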
234

Behavioural properties and dynamic software update for concurrent programmes

Anderson, Gabrielle January 2013 (has links)
Software maintenance is a major part of the development cycle. The traditional methodology for rolling out an update to existing programs is to shut down the system, modify the binary, and restart the program. Downtime has significant disadvantages. In response to such concerns, researchers and practitioners have investigated how to perform updates on running programs whilst maintaining various desired properties. In a multi-threaded setting this is further complicated by the interleaving of different threads' actions. In this thesis we investigate how to prove that safety and liveness are preserved when updating a program. We present two possible approaches; the main intuition behind each is to find quiescent points where updates are safe. The first approach requires global synchronisation, and is more generally applicable, but can delay updates indefinitely. The second restricts the class of programs that can be updated, but permits updates without global synchronisation and guarantees that updates are applied. We provide full proofs of all relevant properties.
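The first approach, quiescence via global synchronisation, can be sketched in miniature (this is an invented illustration, not the thesis's calculus): worker threads park at a barrier between tasks, and a pending update is installed only while every thread is quiescent at the barrier.

```python
# Hedged sketch: dynamic software update at a globally synchronised
# quiescent point. All threads must reach the barrier between tasks;
# the barrier action then swaps the implementation atomically.
import threading

N_WORKERS = 3
current_impl = [lambda x: x + 1]      # boxed so workers observe the swap
pending_update = [None]
results = []
results_lock = threading.Lock()

def maybe_update():
    # Runs exactly once per barrier trip, while all workers are parked:
    # this is the quiescent point where the update is safe to apply.
    if pending_update[0] is not None:
        current_impl[0] = pending_update[0]
        pending_update[0] = None

barrier = threading.Barrier(N_WORKERS, action=maybe_update)

def worker(task):
    barrier.wait()                    # global synchronisation before the task
    with results_lock:
        results.append(current_impl[0](task))

pending_update[0] = lambda x: x * 10  # request an update before the round
threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every worker uses the new implementation, since the swap happened while all were quiescent. The downside the abstract notes is visible here: if any thread never reaches the barrier, the update is delayed indefinitely.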
235

Towards a systematic process for modelling complex systems in Event-B

Alkhammash, Eman January 2014 (has links)
Formal methods are mathematical techniques used for developing large systems. The complexity of growing systems poses an increasing challenge in the task of formal development and requires a significant improvement of formal techniques and tool support. Event-B is a formal method used for modelling and reasoning about systems. The Rodin platform is an open tool that supports Event-B specification and verification. This research aims to address some challenges in modelling complex systems. The main challenges addressed in this thesis cover three aspects: the first focuses on providing a way to manage the complexity of large systems; the second on bridging the gap between the requirements and the formal models; and the third on supporting the reuse of models and their proofs. To address the first challenge, we simplify the task of formal development of large systems using a compositional technique. The compositional technique divides the system into smaller parts starting from the requirements, followed by a construction of the specification of each part in isolation, and finally composes these parts together to model the overall behaviour of the system. We classify the requirements into two categories: the first consists of sets of requirements, each of which describes a particular component of the system; the second comprises the composition requirements that show how components interact with each other. The first category is used to construct an Event-B specification of each component separately from the other components. The second category is used to show the interaction of the separate models using the composition technique. To address the second and third challenges, we propose two techniques in this thesis. The first supports the construction of a formal model from informal requirements with the aim of retaining traceability to requirements in models. This approach makes use of the UML-B and atomicity decomposition (AD) approaches. UML-B provides UML graphical notation that enables the development of an Event-B formal model, while the AD approach provides a graphical notation to illustrate refinement structures and assists in the organisation of refinement levels. The second technique supports the reusability of Event-B formal models and their respective proof obligations. This approach adopts the generic instantiation and composition approaches to form a new methodology for reusing existing Event-B models in the development of other models. Generic instantiation is used to create an instance of a pattern consisting of a refinement chain in a way that preserves proofs, while composition is used to enable the integration of several sub-models into a larger model. FreeRTOS (a real-time operating system) was selected as a case study to identify and address the above-mentioned problems in the formal development of complex systems.
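As a hedged illustration of the decomposition idea (the machine, variable, and event names below are invented, and are not taken from the thesis or its FreeRTOS case study), each component can be specified as its own Event-B machine, with composed behaviour arising from synchronising a shared event:

```
MACHINE Producer
VARIABLES sent
INVARIANTS
  inv1: sent ∈ ℕ
EVENTS
  transfer ≙ BEGIN sent ≔ sent + 1 END
END

MACHINE Consumer
VARIABLES handled
INVARIANTS
  inv1: handled ∈ ℕ
EVENTS
  transfer ≙ BEGIN handled ≔ handled + 1 END
END
```

Under shared-event composition the two `transfer` events are fused, so the composed machine updates `sent` and `handled` in a single atomic step, while each machine's proof obligations are discharged separately.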
236

The modular compilation of effects

Day, Laurence E. January 2017 (has links)
The introduction of new features to a programming language often requires that its compiler goes to the effort of ensuring they are introduced in a manner that does not interfere with the existing code base. Engineers frequently find themselves changing code that has already been designed, implemented and (ideally) proved correct, which is bad practice from a software engineering point of view. This thesis addresses the issue of constructing a compiler for a source language that is modular in the computational features that it supports. Utilising a minimal language that allows us to demonstrate the underlying techniques, we go on to introduce a significant range of effectful features in a modular manner, showing that their syntax can be compiled independently, and that source languages containing multiple features can be compiled by making use of a fold. In the event that new features necessitate changes in the underlying representation of either the source language or that of the compiler, we show that our framework is capable of incorporating these changes with minimal disruption. Finally, we show how the framework we have developed can be used to define both modular evaluators and modular virtual machines.
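The fold-based compilation scheme can be sketched in miniature, here in Python rather than the functional setting the thesis works in; the instruction set and feature names are invented for illustration. Each language feature contributes its own compilation algebra, and a language with several features is compiled by folding with the union of their algebras.

```python
# Hedged sketch: modular compilation as a fold over syntax. Expressions are
# tagged tuples; each feature supplies an algebra (tag -> code generator),
# and features compose by merging algebras.

def fold(algebra, expr):
    tag, *args = expr
    children = [fold(algebra, a) if isinstance(a, tuple) else a for a in args]
    return algebra[tag](*children)

# Arithmetic feature, with its compilation algebra defined in isolation.
arith_alg = {
    'lit': lambda n: [('PUSH', n)],
    'add': lambda c1, c2: c1 + c2 + [('ADD',)],
}
# Exception feature, compiled independently of arithmetic.
exc_alg = {
    'throw': lambda: [('THROW',)],
    'catch': lambda c1, c2: [('MARK', c2)] + c1 + [('UNMARK',)],
}
# A source language with both features = the union of the two algebras.
lang_alg = {**arith_alg, **exc_alg}

code = fold(lang_alg, ('add', ('lit', 1), ('catch', ('throw',), ('lit', 2))))
```

Adding a new feature means adding a new algebra and merging it in; the `fold` function and the existing algebras are untouched, which is the modularity property the abstract describes.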
237

Reinforcement learning hyper-heuristics for optimisation

Alanazi, Fawaz January 2017 (has links)
Hyper-heuristics are search algorithms which operate on a set of heuristics with the goal of solving a wide range of optimisation problems. It has been observed that different heuristics perform differently on different optimisation problems. A hyper-heuristic combines a set of predefined heuristics and applies a machine learning technique to predict which heuristic is the most suitable to apply at a given point in time while solving a given problem. A variety of machine learning techniques have been proposed in the literature. Most of the existing techniques are reinforcement learning mechanisms that interact with the search environment with the goal of adapting the selection of heuristics during the search process. The literature on the theoretical foundation of reinforcement learning hyper-heuristics is almost nonexistent. This work provides theoretical analyses of reinforcement learning hyper-heuristics. The goal is to shed light on their learning capabilities and limitations; this improves our understanding of these hyper-heuristics and aids the design of better ones. It is revealed that the commonly used additive reinforcement learning mechanism, under a mild assumption, asymptotically chooses heuristics uniformly at random. This thesis also poses the problem of identifying the most suitable heuristic with a given error probability. We show a general lower bound on the time that "every" reinforcement learning hyper-heuristic needs to identify the most suitable heuristic with a given error probability. The results reveal a general limitation of learning achieved by this computational approach. Following our theoretical analysis, several reusable and easy-to-implement reinforcement learning hyper-heuristics are proposed in this thesis. The proposed hyper-heuristics are evaluated on well-known combinatorial optimisation problems. One of them outperformed a state-of-the-art algorithm on several benchmark problems of the well-known CHeSC 2011.
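An additive reinforcement learning hyper-heuristic of the kind analysed here can be sketched as follows; the toy OneMax problem, reward scheme, and score bounds are illustrative choices, not the thesis's exact setup. Each heuristic carries a score that is incremented on improvement and decremented otherwise, and the highest-scoring heuristic is applied next.

```python
# Hedged sketch: additive reinforcement learning hyper-heuristic on OneMax
# (maximise the number of 1-bits), with two low-level mutation heuristics.
import random

random.seed(1)
n = 30

def flip(s, k):
    s = s[:]
    for i in random.sample(range(n), k):
        s[i] = 1 - s[i]
    return s

heuristics = {'one_flip': lambda s: flip(s, 1),
              'two_flip': lambda s: flip(s, 2)}
scores = {h: 1 for h in heuristics}       # additive scores clamped to [1, 10]
x = [0] * n

for _ in range(3000):
    best = max(scores.values())           # greedy choice, random tie-breaking
    h = random.choice([k for k, v in scores.items() if v == best])
    y = heuristics[h](x)
    if sum(y) > sum(x):
        x, scores[h] = y, min(scores[h] + 1, 10)   # reward on improvement
    else:
        scores[h] = max(scores[h] - 1, 1)          # punish otherwise
```

Logging the score trajectory of such a loop is one way to observe the phenomenon the thesis proves for additive schemes: as improvements become rare, scores drift together and the selection tends towards uniform random choice.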
238

A software engineering approach for agent-based modelling and simulation of public goods games

Vu, Tuong Manh January 2017 (has links)
In Agent-based Modelling and Simulation (ABMS), a system is modelled as a collection of agents, which are autonomous decision-making units with diverse characteristics. The interaction of the individual behaviours of the agents results in the global behaviour of the system. Because ABMS offers a methodology to create an artificial society in which actors with their behaviour can be designed and the results of their interaction can be observed, it has gained attention in social sciences such as Economics, Ecology, Social Psychology, and Sociology. In Economics, ABMS has been used to model many strategic situations. One popular strategic situation is the Public Goods Game (PGG). In the PGG, participants secretly choose how many of their private money units to put into a public pot. Social scientists can conduct laboratory experiments of PGGs to study human behaviour in strategic situations. Research findings from these laboratory studies have inspired studies using computational agents and vice versa. However, there is a lack of guidelines regarding the detailed development process and the modelling of agent behaviour for agent-based models of PGGs. We believe that this has contributed to ABMS of PGG not having been used to its full potential. This thesis aims to leverage that potential, focusing on the development methodology of ABMS and the modelling of agent behaviour. We construct a development framework with incorporated software engineering techniques and then tailor it to ABMS of PGG. The framework, named ABOOMS (Agent-Based Object-Oriented Modelling and Simulation), uses the Unified Modelling Language (UML) as a standard specification language and includes a simulation development lifecycle, a step-by-step development guideline, and a short guide for modelling agent behaviour with statecharts. It utilises software engineering methods to provide a structured approach to identifying agent interactions and designing the simulation architecture and agent behaviour. After applying the ABOOMS framework to three case studies, the framework demonstrates flexibility in development with two different modelling principles (Keep-It-Simple-Stupid vs. Keep-It-Descriptive-Stupid), capability to support complex psychological mechanisms, and the ability to model dynamic behaviours in both discrete and continuous time. Additionally, the thesis develops an agent-based model of a PGG in a continuous-time setting; to the best of our knowledge, such agent-based models do not exist. During the development, a new social preference, Generous Conditional Cooperators, was introduced to better explain the behavioural dynamics in continuous-time PGGs. Experimentation with the agent-based model generated dynamics that are not present in the discrete-time setting. It is therefore important to study both discrete- and continuous-time PGGs, with laboratory experiments and with ABMS; our new framework allows the latter to be done in a structured way. With the ABOOMS framework, economists can develop PGG simulation models in a structured way and communicate them with a formal model specification. The thesis also showed that further investigation of behaviour in continuous-time PGGs is needed. In future work, the framework can be tested with variations of the PGG or other related strategic interactions.
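A discrete-round PGG with conditionally cooperating agents can be sketched as below; the agents, parameters, and update rule are invented for illustration and are not the ABOOMS case-study models or the thesis's Generous Conditional Cooperator preference.

```python
# Hedged sketch: a public goods game with conditional cooperators who match
# a fraction of the average contribution they observed in the previous round.
N, ENDOWMENT, ROUNDS = 4, 20.0, 10

class ConditionalCooperator:
    def __init__(self, matching_rate):
        self.matching_rate = matching_rate
        self.contribution = ENDOWMENT / 2          # initial contribution

    def decide(self, last_avg):
        # Contribute matching_rate * (last round's average), capped by endowment.
        self.contribution = min(ENDOWMENT, self.matching_rate * last_avg)

agents = [ConditionalCooperator(r) for r in (0.6, 0.8, 1.0, 1.2)]
last_avg = sum(a.contribution for a in agents) / N
avg_history = []
for _ in range(ROUNDS):
    for a in agents:
        a.decide(last_avg)
    last_avg = sum(a.contribution for a in agents) / N
    avg_history.append(last_avg)
```

With an average matching rate below 1 the familiar decay of cooperation emerges round by round. In a continuous-time variant, `decide` would be triggered by asynchronous events rather than lockstep rounds, which is precisely where different dynamics can appear.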
239

Regression test selection by exclusion

Ngah, Amir January 2012 (has links)
This thesis addresses research in the area of regression testing. Software systems change and evolve over time. Each time a system is changed, regression tests have to be run to validate these changes. An important issue in regression testing is how to minimise the rerunning of the existing test cases of the original program on the modified program. One technique for tackling this issue is regression test selection. The aim of this research is to significantly reduce the number of test cases that need to be run after changes have been made. Specifically, this thesis focuses on developing a model for regression test selection using the decomposition slicing technique. Decomposition slicing provides a technique that is capable of identifying the unchanged parts of the system. The model of regression test selection based on decomposition slicing and exclusion of test cases was developed in this thesis. The model is called Regression Test Selection by Exclusion (ReTSE) and has four main phases: Program Analysis, Comparison, Exclusion, and Optimisation. The validity of the ReTSE model is explored through the application of a number of case studies. The case studies tackle all types of modification, such as changing, deleting, and adding statements, and cover both single modifications and combinations of modification types. The application of the proposed model has shown that significant reductions in the number of test cases can be achieved. The evaluation of the model based on an existing framework, and comparison with another model, has also shown promising results. The case studies have limited themselves to relatively small programs, and the next step is to apply the model to larger systems with more complex changes to ascertain whether it scales up. While some parts of the model have been automated, tools will be required for the rest when carrying out the larger case studies.
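The selection-by-exclusion idea can be sketched with a much simpler stand-in for decomposition slicing, namely statement coverage (the test names and line numbers are hypothetical): a test is excluded when everything it exercises lies in the unchanged part of the program.

```python
# Hedged sketch of selection by exclusion: tests whose covered statements are
# disjoint from the modified statements are excluded; the rest are selected.
# In ReTSE the "unchanged part" is identified by decomposition slicing, not
# by raw statement coverage as here.
coverage = {                  # hypothetical test -> covered statement numbers
    't1': {1, 2, 3},
    't2': {4, 5},
    't3': {2, 6, 7},
}
changed = {2, 7}              # statements modified in the new version

excluded = {t for t, lines in coverage.items() if not (lines & changed)}
selected = set(coverage) - excluded
```

Here `t2` touches only unchanged statements and is excluded, while `t1` and `t3` must be rerun. Slicing makes the exclusion safer than plain coverage, because it also accounts for statements whose behaviour is affected indirectly.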
240

Automatic construction of conceptual models to support early stages of software development : a semantic object model approach

Chioasca, Erol-Valeriu January 2015 (has links)
The earliest stage of software development almost always involves converting requirements descriptions written in natural language (NLRs) into initial conceptual models, represented in some formal notation. This stage is time-consuming and demanding, as initial models are often constructed manually, requiring human modellers to have appropriate modelling knowledge and skills. Furthermore, this stage is critical, as errors made in initial models are costly to correct if left undetected until later stages. Consequently, automated tool support is desirable at this stage. There are many approaches that support the modelling process in the early stages of software development. The majority employ linguistic-driven analysis to extract essential information from input NLRs in order to create different types of conceptual models. However, the main difficulty to overcome is the ambiguous and incomplete nature of NLRs. Semantic-driven approaches have the potential to address these difficulties; however, current state-of-the-art methods have not been designed to address the incomplete nature of NLRs. This thesis presents a semantic-driven automatic model construction approach which addresses the limitations of current semantic-driven NLR transformation approaches. Central to this approach is a set of primitive conceptual patterns called Semantic Object Models (SOMs), which superimpose a layer of semantics and structure on top of NLRs. These patterns serve as intermediate models that bridge the gap between NLRs and their initial conceptual models. The proposed approach first translates a given NLR into a set of individual SOM instances (SOMi) and then composes them into a knowledge representation network called a Semantic Object Network (SON). The proposed approach is embodied in a software tool called TRAM. The validation results show that the proposed semantic-driven approach aids users in creating improved conceptual models.
Moreover, practical evaluation of TRAM indicates that the proposed approach performs better than its peers and has the potential for use in real-world software development.
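A toy version of the NLR-to-SOM translation step might look as follows; the naive subject-verb-object pattern, the field names, and the example sentence are all invented, and TRAM's actual conceptual patterns and NLP pipeline are far richer than this.

```python
# Hedged sketch: extracting a toy "SOM instance" from one requirement
# sentence using a naive subject-verb-object regular expression.
import re

PATTERN = re.compile(r'^(?:The|An|A)?\s*(\w+)\s+(\w+s)\s+(?:the|an|a)?\s*(\w+)')

def extract_som(sentence):
    m = PATTERN.match(sentence.strip())
    if not m:
        return None                      # sentence doesn't fit this pattern
    agent, verb, obj = m.groups()
    return {'type': 'SOM-instance', 'agent': agent,
            'action': verb, 'object': obj}

som = extract_som("The librarian registers a borrower")
```

Composing many such instances, linked by shared agents and objects, would give a crude analogue of the Semantic Object Network from which an initial conceptual model is derived.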
