141

A machine with class : a framework for object generation, integration and language authentication (FROGILA)

Ogunshile, Emmanuel Kayode Akinshola January 2011
The object technology model is constantly evolving to address the software crisis. This novel idea, which informed and currently guides the design style of most modern scalable software systems, has fostered a strong belief that object-oriented technology is the ultimate answer to the software crisis, i.e. that applying an object-oriented development method will eventually lead to quality code. It is important to emphasise that object-orientedness does not make testing obsolete. In fact, some aspects of its very nature introduce new problems into the production of correct programs and their testing, owing to paradigmatic features like encapsulation, inheritance, polymorphism and dynamic binding, as this research work shows. Most testing research has centred on procedure-oriented software, and worthwhile methods of testing have been developed as a result. However, these cannot be applied directly to object-oriented software, because the architectures of such systems differ on many key issues. In this thesis, we investigate and review the problems introduced by the features of the object technology model, and then show why traditional structured software testing techniques are insufficient for testing object-oriented software by comparing the fundamental differences in their architectures. Also, by reviewing Weyuker's test adequacy axioms, we show that program-based testing and specification-based testing are orthogonal and complementary. Thus, a software testing methodology based solely on one of these approaches cannot adequately cover all the essential paths of the system under test or satisfactorily guarantee correctness in practice. We argue that a new method is required which integrates the benefits of the two approaches and builds upon their individual strengths to create a more meaningful, practical and reliable solution. To this end, this thesis introduces and discusses a new automaton-based framework formalism for object-oriented classes, called the Class-Machine, and a test method based on this formalism. Here, the notion of a class, i.e. the idea behind classification in object-oriented languages, is embodied within a machine framework. The Class-Machine model represents a polymorphic abstraction for heterogeneous families of Object-Machines that model a real-life problem in a given domain; these Object-Machines are instances of different concrete machine types. The Class-Machine has an extensible machine implementation as well as an extensible machine interface. Thus, the Class-Machine is introduced as a formal framework for generating autonomous Object-Machines (an Object-Machine Generator) that share common Generic Class-Machine States and Specific Object-Machine States. The states of these Object-Machines are manipulated by a set of processing functions (Class-Machine Methods and Object-Machine Methods) that must satisfy a set of preconditions before they are allowed to modify the state(s) of the Object-Machines. The Class-Machine model can also be viewed as a platform for integrating a society of communicating Object-Machines.

To verify and completely test systems that adhere to the Class-Machine framework, a novel testing method is proposed: the fault-finders (f²), a distributed family of software checkers specifically designed to crawl through a Class-Machine implementation, look for a particular type of fault and report the location of that fault in the program (i.e. the class under test). Given this information, we can statistically show the distribution of faults in an object-oriented system and then provide a probabilistic assertion of the number and type of faults that remain undetected after testing is completed. To address the problems caused by the encapsulation mechanism, this thesis introduces and discusses another novel framework formalism that has complete visibility of all the encapsulated methods and the memory states of the instance and class variables of a given Object-Machine or Class-Machine system under test: the Class Machine Friend Function (CMƒƒ). To further illustrate the fundamental theoretical ideas and paradigmatic features inherent in the proposed Class-Machine model, this thesis considers four different Class-Machine case studies. Finally, to show that the Class-Machine's theoretical purity does not militate against practical concerns, the novel object-oriented specification, verification, debugging and testing approaches proposed in this thesis are exemplified in an automated testing tool, the Class-Machine Testing Tool (CMTT).
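The core of the Class-Machine abstraction, shared generic state, per-instance state, and processing functions guarded by preconditions, can be illustrated with a minimal sketch. The following Python toy is an assumption-laden illustration of the idea only; the decorator, class names and bank-account example are inventions of this sketch, not the thesis's own notation or tooling.

```python
# Minimal sketch (not from the thesis): an "Object-Machine" whose methods
# may only modify state when their preconditions hold.

class PreconditionError(Exception):
    """Raised when a machine method is invoked in a state that violates
    its precondition."""

def precondition(check):
    """Guard a processing function: run it only if `check` holds for the
    current machine state and arguments."""
    def wrap(method):
        def guarded(self, *args, **kwargs):
            if not check(self, *args, **kwargs):
                raise PreconditionError(method.__name__)
            return method(self, *args, **kwargs)
        return guarded
    return wrap

class BankAccountMachine:
    # Generic Class-Machine state, shared by every Object-Machine instance.
    interest_rate = 0.02

    def __init__(self):
        # Specific Object-Machine state, private to this instance.
        self.balance = 0

    @precondition(lambda self, amount: amount > 0)
    def deposit(self, amount):
        self.balance += amount

    @precondition(lambda self, amount: 0 < amount <= self.balance)
    def withdraw(self, amount):
        self.balance -= amount

# The "Object-Machine Generator" role corresponds here to plain
# instantiation: each call yields an autonomous machine.
account = BankAccountMachine()
account.deposit(100)
account.withdraw(30)     # precondition satisfied
# account.withdraw(500)  # would raise PreconditionError
```

A checker in the spirit of the fault-finders could then exercise each guarded method across boundary states and report which preconditions admit or reject the wrong inputs.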
142

Search-based generation of human readable test data and its impact on human oracle costs

Afshan, Sheeva January 2013
The frequent non-availability of an automated oracle makes software testing a tedious manual task that relies on the expensive effort of a human oracle. Despite this, the literature on automated test data generation has mainly focused on achieving structural code coverage, without simultaneously considering the reduction of human oracle cost. One source of human oracle cost is the unreadability of machine-generated test inputs, which can result in test scenarios that are hard to comprehend and time-consuming to verify. This is particularly apparent for string inputs consisting of arbitrary sequences of characters that are dissimilar to the values a human tester would normally generate. The key objectives of this research are to investigate the impact of a seeded search-based test data generation approach on human oracle costs, and to propose a novel technique that can generate human-readable test inputs for string data types. The first contribution of this thesis is the result of an empirical study in which human subjects were invited to manually evaluate test inputs generated using seeded and unseeded search-based approaches for 14 open source case studies. For 9 of the case studies, manual evaluation was significantly less time-consuming for inputs produced using the seeded approach, while the accuracy of test input evaluation was also significantly improved in 2 cases. The second contribution is a novel technique in which a natural language model is incorporated into the search-based process with the aim of improving the human readability of generated strings. A human study was performed in which test inputs generated using the technique for 17 open source case studies were evaluated manually by human subjects. For 10 of the case studies, manual evaluation was significantly less time-consuming for inputs produced using the language model. In addition, the results revealed that the accuracy of test input evaluation was significantly enhanced for 3 of the case studies.
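The language-model idea can be sketched concretely. The toy below is a hedged illustration rather than the thesis's actual model: a character-bigram model trained on an assumed seed corpus scores candidate strings by their average log-probability, and a search-based generator could fold such a score into its fitness so that, among equally covering inputs, more natural-looking strings are preferred.

```python
# Illustrative sketch (corpus, scoring and floor penalty are assumptions):
# score candidate strings with a character-bigram language model.
import math
from collections import Counter

def train_bigram_model(corpus):
    """Estimate character-bigram log-probabilities P(b | a) from seed strings."""
    pairs, firsts = Counter(), Counter()
    for word in corpus:
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += 1
            firsts[a] += 1
    return {pair: math.log(n / firsts[pair[0]]) for pair, n in pairs.items()}

def readability_score(candidate, model, floor=-10.0):
    """Average log-probability of the candidate; higher reads more naturally.
    Unseen bigrams receive the floor penalty."""
    if len(candidate) < 2:
        return floor
    logps = [model.get((a, b), floor) for a, b in zip(candidate, candidate[1:])]
    return sum(logps) / len(logps)

model = train_bigram_model(["alice", "bob", "charlie", "database", "testing"])
print(readability_score("alibase", model))  # pronounceable: scores higher
print(readability_score("xqzvkw", model))   # arbitrary characters: scores lower
```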
143

Computation of ripple effect measures for software

Black, Sue January 2001
No description available.
144

Inductive logic programming using bounded hypothesis space

Athakravi, Duangtida January 2015
Inductive Logic Programming (ILP) systems apply inductive learning to a learning task by deriving a hypothesis that explains the given examples. Applying ILP systems to real applications poses many challenges: they require a large search space, noise is present in the learning task, and in domains such as software engineering hypotheses are required to satisfy domain-specific syntactic constraints. ILP systems use language biases to define the hypothesis space, and learning can be seen as a search within the defined hypothesis space. Past systems apply search heuristics to traverse a large hypothesis space. This is unsuitable for systems implemented using Answer Set Programming (ASP), for which scalability is a constraint: the hypothesis space must be grounded by the ASP solver before the learning task can be solved, making such systems unable to handle large learning tasks. This work explores how to learn using bounded hypothesis spaces and iterative refinement. Hypotheses that explain all examples are learnt by refining smaller partial hypotheses, which improves the scalability of ASP-based systems because the learning task is split into multiple smaller, manageable refinement tasks. The thesis presents how syntactic integrity constraints on the hypothesis space can be used to strengthen the hypothesis selection criteria, removing hypotheses with undesirable structure. The notion of constraint-driven bias is introduced, where hypotheses are required to be acceptable with respect to given meta-level integrity constraints. Building upon the ILP system ASPAL, the system RASPAL, which learns through iterative hypothesis refinement, is implemented. RASPAL's algorithm is proven, under certain assumptions, to be complete and consistent. Both systems have been applied to a case study in learning users' behaviours from data collected from their mobile usage, demonstrating their capability for learning with noise and the difference in their efficiency. Constraint-driven bias has been implemented for both systems and applied to a task in specification revision, and to learning stratified programs.
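A rough, runnable illustration of learning over a bounded hypothesis space is given below. It is a propositional toy under invented names and representations, not RASPAL's ASP-based machinery: each pass considers rule bodies one condition larger than the last, so smaller candidate hypotheses are effectively refined until one is complete and consistent.

```python
# Toy sketch (names and representation are assumptions): search rule
# bodies of size 1, 2, ... up to a bound, returning the smallest body
# that covers all positive examples and no negative ones.
from itertools import combinations

CONDITIONS = {
    "has_wings": lambda x: x["wings"],
    "lays_eggs": lambda x: x["eggs"],
    "has_fur":   lambda x: x["fur"],
}

def covers(body, example):
    return all(CONDITIONS[c](example) for c in body)

def learn(positives, negatives, max_bound=3):
    for bound in range(1, max_bound + 1):          # widen the bound stepwise
        for body in combinations(CONDITIONS, bound):
            if all(covers(body, p) for p in positives) and \
               not any(covers(body, n) for n in negatives):
                return frozenset(body)             # smallest adequate hypothesis
    return None

birds = [{"wings": True, "eggs": True, "fur": False}]
bats  = [{"wings": True, "eggs": False, "fur": True}]
print(learn(birds, bats))  # frozenset({'lays_eggs'})
```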
145

An integrated fault tolerance framework for service oriented computing

Hall, Stephen January 2010
No description available.
146

Distributed, shared and persistent objects : a model for distributed object-oriented programming

Wang, Xu January 1995
No description available.
147

The consistent representation of scientific knowledge : investigations into the ontology of karyotypes and mitochondria

Warrender, Jennifer Denise January 2015
Ontologies are widely used in the life sciences to model scientific knowledge. The engineering of these ontologies is well-studied and there are a variety of methodologies and techniques, some of which have been re-purposed from software engineering. However, due to the complex nature of bio-ontologies, they are not resistant to errors and mistakes. This is especially true for more expressive and/or larger ontologies. To improve on this issue, we explore a variety of software engineering techniques re-purposed to aid ontology engineering. This exploration is driven by the construction of two light-weight ontologies, the Mitochondrial Disease Ontology and the Karyotype Ontology. These ontologies have specific and useful computational goals, as well as providing exemplars for our methodology. This thesis discusses the modelling decisions undertaken, as well as the overall success of each ontological model. Due to the added knowledge-capture steps required for the mitochondrial knowledge, the Karyotype Ontology is more fully developed than the Mitochondrial Disease Ontology. Specifically, this thesis explores the use of a pattern-driven and programmatic approach to bio-medical ontology engineering. During the engineering of our biomedical ontologies, we found that many components of each model were similar in their logical and textual definitions. This was especially true for the Karyotype Ontology. In software engineering, a common technique to avoid replication is to abstract through the use of patterns, so we utilised localised patterns to model these highly repetitive parts. There are a variety of possible tools for encoding these patterns, but we found ontology development using Graphical User Interface (GUI) tools to be time-consuming, due to the manual GUI interaction needed whenever the ontology was updated. With the development of Tawny-OWL, a programmatic tool for ontology construction, we are able to overcome this issue, with the added benefit of using a single syntax to express both the simple and the patternised parts of the ontology. Lastly, we briefly discuss how other methodologies and tools from software engineering, namely unit tests, diffing, version control and Continuous Integration (CI), were re-purposed, and how they aided the engineering of our two domain ontologies. Together, this knowledge increases our understanding of ontology engineering techniques. By re-purposing software engineering methodologies, we have aided the construction, quality and maintainability of two novel ontologies, and have demonstrated their applicability more generally.
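The localised-pattern idea can be made concrete with a small sketch. The thesis itself uses Tawny-OWL, a Clojure-based tool; the plain-Python toy below only illustrates the principle under assumed names, expanding one pattern into many class frames so the repetition lives in data rather than in hand-maintained GUI edits.

```python
# Illustrative sketch (naming scheme and frame syntax are assumptions):
# expand a single karyotype-style pattern into many class definitions.

def chromosome_pattern(number):
    """Emit one textual class frame for a numbered human chromosome."""
    name = f"HumanChromosome{number}"
    return (
        f"Class: {name}\n"
        f"  SubClassOf: HumanChromosome\n"
        f"  SubClassOf: hasNumber value {number}\n"
    )

# One pattern, 22 autosome classes; updating the pattern updates them all.
for n in range(1, 23):
    print(chromosome_pattern(n))
```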
148

Reliable massively parallel symbolic computing : fault tolerance for a distributed Haskell

Stewart, Robert January 2013
As the number of cores in manycore systems grows exponentially, the number of failures is also predicted to grow exponentially. Hence massively parallel computations must be able to tolerate faults. Moreover, new approaches to language design and system architecture are needed to address the resilience of massively parallel heterogeneous architectures. Symbolic computation has underpinned key advances in mathematics and computer science, for example in number theory, cryptography and coding theory. Computer algebra software systems facilitate symbolic mathematics. Developing these at scale has its own distinctive set of challenges, as symbolic algorithms tend to employ complex, irregular data and control structures. SymGridParII is a middleware for parallel symbolic computing on massively parallel High Performance Computing platforms. A key element of SymGridParII is a domain-specific language (DSL) called Haskell Distributed Parallel Haskell (HdpH). It is explicitly designed for scalable distributed-memory parallelism, and employs work stealing to load balance dynamically generated irregular task sizes. To investigate providing scalable fault-tolerant symbolic computation, we design, implement and evaluate a reliable version of HdpH, called HdpH-RS. Its reliable scheduler detects and handles faults, using task replication as the key recovery strategy, and supports load balancing with a fault-tolerant work stealing protocol. The reliable scheduler is invoked through two fault tolerance primitives for implicit and explicit work placement, and 10 fault-tolerant parallel skeletons that encapsulate common parallel programming patterns. The user is oblivious to many failures; they are instead handled by the scheduler. An operational semantics describes small-step reductions on states, and a simple abstract machine for scheduling transitions and task evaluation is presented. It defines the semantics of supervised futures, and the transition rules for recovering tasks in the presence of failure. The transition rules are demonstrated with a fault-free execution and three executions that recover from faults. The fault-tolerant work stealing protocol has been abstracted into a Promela model, and the SPIN model checker is used to exhaustively search the states of this automaton to validate a key resiliency property of the protocol: that an initially empty supervised future on the supervisor node will eventually be full in the presence of all possible combinations of failures. The performance of HdpH-RS is measured using five benchmarks. Supervised scheduling achieves a speedup of 757 with explicit task placement and 340 with lazy work stealing when executing Summatory Liouville on up to 1400 cores of an HPC architecture. Moreover, supervision overheads remain consistently low when scaling up to 1400 cores, and low recovery overheads are observed in the presence of frequent failure when lazy on-demand work stealing is used. A Chaos Monkey mechanism has been developed for stress-testing resiliency with random failure combinations; all unit tests pass in the presence of random failure, terminating with the expected results.
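The supervised-future recovery strategy can be illustrated with a toy simulation. The sketch below is an assumption-laden Python analogue, not HdpH-RS itself (which is a Haskell DSL): a supervisor keeps a replica of each task and re-schedules it onto a fresh node until the future is filled, mirroring the resiliency property checked with SPIN.

```python
# Toy simulation (all names are assumptions): an initially empty supervised
# future is eventually filled despite node failures, via task replication.
import random

class SupervisedFuture:
    def __init__(self, task):
        self.task = task       # replica of the task, kept by the supervisor
        self.result = None     # "empty" until a node returns a value

    def filled(self):
        return self.result is not None

def unreliable_node(task, failure_rate=0.5):
    """Run the task, or return None to simulate the node dying mid-flight."""
    return None if random.random() < failure_rate else task()

def supervise(future):
    """Replicate the task onto fresh nodes until the future is full."""
    attempts = 0
    while not future.filled():
        attempts += 1
        outcome = unreliable_node(future.task)
        if outcome is not None:
            future.result = outcome
    return attempts

f = SupervisedFuture(lambda: sum(range(10)))
print(supervise(f), "attempt(s); result =", f.result)
```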
149

Types, categories, actions

Revell, Timothy January 2016
This thesis explores relational parametricity using fibrations. We present a complementary view of Reynolds's relational parametricity using the relations fibration. This approach allows us to uncover some of the hidden categorical structure present in Reynolds's original definitions and results, leading to new insights in the study of parametricity. In a similar vein, we provide an alternative parametric model of System F using group actions, which differs in some novel ways from the standard relational model. We then alter the type system, leading to a general categorical framework for type systems with dimension types. We develop some informative models of this type theory, including a model based on group actions that captures invariance under scaling.
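As a standard illustration of the relational reading of System F (a textbook example, not a result specific to this thesis), Reynolds's abstraction theorem instantiated at the type of the polymorphic identity yields a free theorem:

```latex
% For parametric g : \forall X.\, X \to X and any relation R \subseteq A \times B:
\forall R \subseteq A \times B.\quad (a, b) \in R \;\Longrightarrow\; (g_A\, a,\; g_B\, b) \in R
% Taking R to be the graph of an arbitrary f : A \to B gives
g_B \circ f = f \circ g_A
% so g must act as the identity at every type.
```

Roughly, in a group-action model, invariance under the group's action plays the role that relation preservation plays here, which is what makes properties such as invariance under scaling expressible for dimension types.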
150

Reconfigurable software communication architecture : design implementation

Mourikas, George January 2011
No description available.
