11

Termination analysis of higher-order functional programs

Sereni, Damien January 2006
No description available.
12

An ontological approach to model software quality assurance knowledge domain

Bajnaid, Nada O. January 2013
Software Quality Assurance (SQA) has become one of the most important objectives of software development and maintenance activities, and as a result the field of Software Engineering (SE) has developed standards related to SQA. Despite the effort made to improve consistency and coherency among standards, there is still no single standard that embraces the whole SQA knowledge area. To contribute to this effort, this thesis presents an ontological model to describe and define the SQA knowledge area. International standards (SWEBOK, IEEE, and ISO) were the main sources of the terminology and semantic relations of the developed SQA conceptual model. A formal ontology was implemented using OWL, the semantic web open standard language. To avoid contradictory information, the developed ontology was validated for consistency. Clarity and completeness were evaluated using an assessment questionnaire, and application-based ontology evaluation was used to measure practical aspects of ontology deployment. Based on the results and findings of the ontology evaluation process, an enhanced version of the SQA ontology was developed. The ultimate goal was to develop an ontology that faithfully models the SQA discipline as practised in the software development life cycle.
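For illustration only, the sketch below shows how a couple of SQA concepts and a semantic relation might be encoded in OWL using the owlready2 Python library and checked for consistency with a reasoner; the class and property names are invented here, not taken from the thesis's ontology.

```python
# Minimal sketch (not the thesis's actual ontology) of encoding a few SQA
# concepts and one relation in OWL via owlready2; names are illustrative.
from owlready2 import get_ontology, Thing, ObjectProperty, sync_reasoner

onto = get_ontology("http://example.org/sqa.owl")

with onto:
    class SQAActivity(Thing):        # e.g. review, audit, testing
        pass
    class QualityStandard(Thing):    # e.g. an ISO or IEEE standard
        pass
    class conformsTo(ObjectProperty):  # activity -> standard relation
        domain = [SQAActivity]
        range = [QualityStandard]

# A reasoner (HermiT, bundled with owlready2; needs a Java runtime) can
# then check the ontology for logical consistency, as the thesis does
# before evaluating clarity and completeness.
sync_reasoner()
```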
13

Stability of test criteria and fault hierarchies in software testing

Kapoor, Kalpesh January 2004
Software testing is an important activity for verifying and validating a system. A test criterion is a set of rules used to assess the system under test, and its effectiveness is measured in terms of its ability to reveal faults. Two key issues in software testing are: (a) the effectiveness of test criteria in detecting faults, and (b) the minimisation of test effort. These issues are studied empirically and formally within the framework of fault domains and test hypotheses. Typically, for a given test criterion, more than one test set may satisfy the criterion for a specification and implementation. A new notion of the stability of test criteria is defined to assess the variation in effectiveness across the test sets that satisfy a given criterion. Stability is evaluated experimentally for various types of coverage, such as condition coverage, decision condition coverage, full-predicate coverage, modified condition decision coverage and reinforced condition decision coverage. Fault detection effectiveness is also studied using a formal framework, which is applied to identify the conditions for the detection of various fault classes. It is shown that the number of test cases needed in a test set to detect all hypothesised faults in an implementation depends on the complexity of the specification. One of the main difficulties with the fault-based testing approach is the large number of possible faults in a fault domain, or that can be generated on the basis of a test hypothesis. To overcome this problem, various conditions for establishing fault hierarchies, which help to identify stronger faults, are described. Here, the objective is to establish relationships between faults such that the detection of one fault guarantees the detection of another. The analysis of fault hierarchies is also shown to be useful in validating the coupling hypothesis, which states that if a test technique can detect single faults in an implementation, it can also detect the presence of multiple faults. The results obtained from the empirical and formal analysis provide insight into earlier contradictory results regarding the effectiveness of test criteria. The formal framework helps in classifying specifications and implementations in order to evaluate the effort required for testing, and the concept of a fault hierarchy is useful in reducing test effort.
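As a rough illustration of the stability notion defined above (not the thesis's formal definition), the Python sketch below measures how fault-detection effectiveness varies across several test sets that all satisfy the same criterion; the fault model and detection predicate are stand-ins.

```python
# Stability, illustratively: effectiveness spread across test sets that
# all satisfy the same criterion. A small spread suggests stability.
import statistics

def effectiveness(test_set, faults, detects):
    """Fraction of hypothesised faults revealed by the given test set."""
    killed = {f for f in faults if any(detects(t, f) for t in test_set)}
    return len(killed) / len(faults)

def stability(satisfying_sets, faults, detects):
    """Mean and spread of effectiveness over criterion-satisfying sets."""
    scores = [effectiveness(ts, faults, detects) for ts in satisfying_sets]
    return statistics.mean(scores), statistics.pstdev(scores)

# Toy demo: a fault is "detected" when a test value exceeds its threshold.
faults = [2, 5, 8]
detects = lambda t, f: t > f
print(stability([[3, 9], [6, 7], [1, 9]], faults, detects))
```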
14

Contract representation for validation and run time monitoring

Solaiman, Ellis January 2004
Organisations are increasingly using the Internet to offer their own services and to utilise the services of others. This naturally leads to resource sharing across organisational boundaries. Nevertheless, organisations will require their interactions with other organisations to be strictly controlled. In the paper-based world, business interactions, information exchange and sharing have been conducted under the control of contracts that the organisations sign. The world of electronic business needs electronic equivalents of these contract-based business management practices. This thesis examines how a 'conventional' contract can be converted into its electronic equivalent and how it can be used to control business interactions taking place through computer messages. To implement a contract electronically, a conventional text contract needs to be described in a mathematically precise notation so that the description can be subjected to rigorous analysis and freed from the ambiguities that the original human-oriented text is likely to contain. Furthermore, a suitable run-time infrastructure is required for monitoring the executable version of the contract. To address these issues, this thesis describes how standard conventional contracts can be converted into Finite State Machines (FSMs). It illustrates how to map the rights and obligations extracted from the clauses of the contract into the states, transition and output functions, and input and output symbols of an FSM. The thesis then develops a list of correctness properties that a typical executable business contract should satisfy. A contract model should be validated against safety properties, which specify situations that the contract must not get into (such as deadlocks and unreachable states), and liveness properties, which detail qualities that are desirable for the contract to have (such as responsiveness and accessibility). The FSM description can then be subjected to model checking; this is demonstrated with the aid of examples using the Promela language and the Spin validator. Subsequently, the FSM representation can be used to ensure that the clauses stipulated in the contract are observed when the contract is executed. The requirements of a suitable run-time infrastructure for monitoring contract compliance are discussed and a prototype middleware implementation is presented.
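A minimal sketch of the FSM view of a contract described above: states capture where the interaction stands, and transitions encode which messages (rights and obligations exercised) are permitted next, so a run-time monitor can flag non-compliant messages. The toy purchase contract below is invented for illustration.

```python
# Contract-as-FSM sketch: transitions map (state, message) to next state;
# any message with no transition is a contract violation at run time.
class ContractFSM:
    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions  # (state, message) -> next state

    def observe(self, message):
        """Run-time monitoring: accept compliant messages, flag violations."""
        key = (self.state, message)
        if key not in self.transitions:
            raise RuntimeError(
                f"contract violation: {message!r} not allowed in {self.state!r}")
        self.state = self.transitions[key]

# Toy purchase contract: buyer may order, seller is then obliged to
# invoice, buyer is then obliged to pay.
fsm = ContractFSM("start", {
    ("start", "order"): "ordered",
    ("ordered", "invoice"): "invoiced",
    ("invoiced", "payment"): "complete",
})
fsm.observe("order")
fsm.observe("invoice")  # an out-of-order "payment" earlier would be flagged
```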
15

Agent-based trust and reputation in the context of inaccurate information sources

Teacy, W. T. Luke January 2006
Trust is a prevalent concept in human society that, in essence, concerns our reliance on the actions of other entities within our environment. For example, we may rely on our car starting to get to work on time, and on our fellow drivers, so that we may get there safely. For similar reasons, trust is becoming increasingly important in computing, as systems such as the Grid require the integration of computing resources across organisational boundaries. In this context, the reliability of resources in one organisation cannot be assumed from the point of view of another, as certain resources may fail more often than others. For this reason, we argue that software systems must be able to assess the reliability of different resources, so that they may choose which of them to rely on. With this in mind, our goal is to develop mechanisms, or models, to aid decision making by an autonomous agent (the truster) when the consequences of its decisions depend on the actions of other agents (the trustees). To achieve this, we have developed a probabilistic framework for assessing trust based on a trustee's past behaviour, which we have instantiated through the creation of two novel trust models (TRAVOS and TRAVOS-C). These facilitate decision making in two different contexts with regard to trustee behaviour. First, using TRAVOS, a truster can make decisions in contexts where a trustee can only act in one of two ways: either it can cooperate, acting to the truster's advantage, or it can defect, thereby acting against the truster's interests. Second, using TRAVOS-C, a truster can make decisions about trustees that can act in a continuous range of ways, for example taking into account the delivery time of a service. These models share an ability to account for observations of a trustee's behaviour, made either directly by the truster or by a third party (a reputation source). In the latter case, both models can cope with third-party information that is unreliable, either because the sender is lying or because it has a different world view. In addition, TRAVOS-C can assess a trustee for which there is little or no direct or reported experience, using information about other agents that share characteristics with the trustee. This is achieved using a probabilistic mechanism that automatically accounts for the amount of correlation observed between agents' behaviour in a truster's environment.
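The core probabilistic idea for the binary cooperate/defect setting can be sketched as follows: the interaction history is summarised by a beta distribution whose mean serves as the trust value. This is a simplified illustration of the approach the abstract describes, with a uniform Beta(1, 1) prior assumed here for concreteness.

```python
# Trust as the mean of a beta distribution over cooperate/defect outcomes.
# The (1, 1) prior is an assumption made for this illustration.
def trust(cooperations, defections):
    alpha = cooperations + 1
    beta = defections + 1
    return alpha / (alpha + beta)  # expected probability of cooperation

print(trust(0, 0))  # 0.5  -> no evidence, maximal uncertainty
print(trust(8, 2))  # 0.75 -> mostly cooperative history
```

As more interactions are observed, the distribution concentrates, so the same mean carries more confidence; this is what lets reported (third-party) observations be weighted against direct ones.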
16

Use of program and data-specific heuristics for automatic software test data generation

Alshraideh, Mohammad January 2007
The application of heuristic search techniques, such as genetic algorithms, to the problem of automatically generating software test data has been of growing interest to many researchers in recent years. This thesis develops heuristics for test data search for a class of test data generation problems that previously could not be solved because of the lack of an informative cost function. Prior to this thesis, work on applying search techniques to structural test data generation was largely limited to numeric test data, which in particular left open the problem of generating string test data. Some potential string cost functions and corresponding search operators are presented in this thesis. For string equality, an adaptation of the binary Hamming distance is considered, together with two new string-specific match cost functions. New cost functions for string ordering are also defined. For string equality, a version of the edit distance cost function with fine-grained costs based on the difference in character ordinal values was found to be the most effective in an empirical study. A second problem tackled in this thesis is that of generating test data for programs whose coverage criterion cost function is locally constant. This arises because the computation performed by many programs leads to a loss of information; the use of flag variables, for example, can lead to information loss. Consequently, conventional instrumentation added to a program receives constant or almost constant input, so the search receives very little guidance and will often fail to find test data. The approach adopted in this thesis is to exploit the structure and behaviour of the computation from the input values to the test goal, the usual instrumentation point. The new technique depends on introducing program data-state scarcity as an additional search goal. The search is guided by a new fitness function made up of two parts: one depending on the branch distance of the test goal, the other on the diversity of the data-states produced during execution of the program under test. In addition to the program data-state, the program's operations, in the form of program-specific search operators, can be used to aid the generation of test data. The use of program-specific operators is demonstrated for strings, and an empirical investigation showed a fivefold increase in performance. This technique can also be generalised to other data types. An empirical investigation of the use of program-specific search operators combined with a data-state scarcity search for flag problems showed a threefold increase in performance.
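An edit distance cost function with ordinal-based substitution costs might look roughly like the sketch below; the normalisation constant (128) and exact cost scheme are illustrative assumptions, not the thesis's precise definition.

```python
# Levenshtein-style distance where substitution cost grows with the gap
# between character ordinals, giving the search gradient even between
# unequal strings (unlike a flat 0/1 mismatch cost).
def ordinal_edit_distance(s, t, indel_cost=1.0):
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # 128 is an assumed normalisation constant for illustration.
            sub = abs(ord(s[i - 1]) - ord(t[j - 1])) / 128.0
            d[i][j] = min(d[i - 1][j] + indel_cost,
                          d[i][j - 1] + indel_cost,
                          d[i - 1][j - 1] + sub)
    return d[m][n]

print(ordinal_edit_distance("cat", "cau"))  # tiny: 'u' is adjacent to 't'
print(ordinal_edit_distance("cat", "caz"))  # larger ordinal gap, larger cost
```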
17

Test-driven development of embedded control systems : application in an automotive collision prevention system

Dohmke, Thomas January 2008
With test-driven development (TDD), new code is not written until an automated test has failed, and duplications of functions, tests, or simply code fragments are always removed. TDD can lead to a better design and a higher quality of the developed system, but to date it has mainly been applied to the development of traditional software systems such as payroll applications. This thesis describes the novel application of TDD to the development of embedded control systems, using an automotive safety system for preventing collisions as an example. The basic prerequisite for test-driven development is the availability of an automated testing framework, as tests are executed very often. Such testing frameworks have been developed for nearly all programming languages, but not for the graphical, signal-driven language Simulink. Simulink is commonly used in the automotive industry and can be considered state-of-the-art for the design and development of embedded control systems in the automotive, aerospace and other industries. The thesis therefore introduces a novel automated testing framework for Simulink. This framework forms the basis for the test-driven development process by integrating the analysis, design and testing of embedded control systems into this process. The thesis then shows the application of TDD to a collision prevention system. The system architecture is derived from the requirements of the system, and four software components are identified, which represent problem areas particular to the realisation of control systems: logical combinations, experimental problems, mathematical algorithms, and control theory. For each of these problems, a concept for systematically deriving test cases from the requirements is presented. Moreover, two conventional approaches to designing the controller are introduced and compared in terms of their stability and performance. The effectiveness of the collision prevention system is assessed in trials on a driving simulator. These trials show that the system leads to a significant reduction in the accident rate for rear-end collisions. In addition, experiments with prototype vehicles on test tracks and field tests are presented to verify the system's functional requirements within a system testing approach. Finally, the new test-driven development process for embedded control systems is evaluated in comparison to traditional development processes.
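As a generic illustration of the TDD cycle (in Python rather than Simulink), the tests below would be written first and fail until the production function is implemented; the time_to_collision function is an invented example, not a component of the thesis's system.

```python
# TDD red-green sketch: the test class is written first and fails, then
# the production function is written to make it pass.
import unittest

def time_to_collision(distance_m, closing_speed_ms):
    """Production code, written only after the tests below had failed."""
    if closing_speed_ms <= 0:
        return float("inf")  # vehicles are not closing in
    return distance_m / closing_speed_ms

class TimeToCollisionTest(unittest.TestCase):
    def test_closing_vehicles(self):
        self.assertAlmostEqual(time_to_collision(30.0, 10.0), 3.0)

    def test_opening_vehicles_never_collide(self):
        self.assertEqual(time_to_collision(30.0, -5.0), float("inf"))

if __name__ == "__main__":
    unittest.main()
```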
18

Software measurement for functional programming

Ryder, Chris January 2004
This thesis presents an investigation into the usefulness of software measurement techniques, also known as software metrics, for software written in functional programming languages such as Haskell. Statistical analysis is performed on a selection of metrics for Haskell programs, some taken from the world of imperative languages. An attempt is made to assess the utility of various metrics in predicting likely places that bugs may occur in practice by correlating bug fixes with metric values within the change histories of a number of case study programs. This work also examines mechanisms for visualising the results of the metrics and shows some proof of concept implementations for Haskell programs, and notes the usefulness of such tools in other software engineering processes such as refactoring.
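The kind of correlation study described might be sketched as follows, with invented data: for each function in a program's change history, pair a metric value with the number of bug-fixing changes that touched it, then rank-correlate the two.

```python
# Rank-correlating metric values with bug-fix counts; data is invented.
from scipy.stats import spearmanr

metric_values = [3, 14, 7, 21, 2, 9]  # e.g. a size metric per function
bug_fixes     = [0,  4, 1,  6, 0, 2]  # bug-fix changes touching each one

rho, p_value = spearmanr(metric_values, bug_fixes)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```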
19

Study of complex applications and analysis of their parallelisation potential on shared and shared/distributed memory architectures

Μουτσουρούφης, Γεώργιος 26 September 2007
The use of benchmarks to measure the effectiveness and performance of multiprocessor computer systems, and of the software systems that support parallel execution of applications on such platforms, is a valid and widespread method. Indeed, towards the standardisation of such programs, large and internationally recognised research teams have proposed and parallelised collections of benchmarks as the result of many years of experience and research. The best-known benchmark suites are SPEC, NAS and SPLASH. Although the codes of these benchmark suites are at the disposal of the scientific community, they cannot fully cover the needs of research teams worldwide working on the research and development of software systems that support parallel applications on multiprocessor platforms. This thesis studies, analyses and parallelises two complex sequential applications that will be used as benchmarks for the evaluation of parallel application support systems. The platforms supporting parallel applications are developed at the High Performance Information Systems Laboratory (HPCLab) in Patras. The thesis presents and analyses methods for optimising applications. It covers improvements on uniprocessor systems, usually achieved by changing code to better exploit the system's resources, and the adjustment or replacement of sequential algorithms for parallel SMP system architectures. We then optimise and parallelise two applications that the HPCLab parallel systems group will use to evaluate the software it develops in the context of its research activities. The first application comes from the field of medicine and concerns the computation of the radiation dose required for the radiotherapy of tumours. The second comes from the field of molecular chemistry and concerns the computation of the motion of gas molecules enclosed in masses of solid bodies. Finally, we present measurements of the optimised and parallelised applications and the improvements they achieve. The effort to optimise and adapt these and further applications to existing and new architectures will continue, with the goal of acquiring the know-how required to optimise and parallelise our own applications/benchmarks, which we will use to evaluate the platforms we develop to support parallel and concurrent processing.
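A generic sketch of the kind of shared-memory parallelisation discussed, in Python rather than the applications' actual implementation language: an independent per-element computation (a stand-in for a per-molecule update) is farmed out across processor cores. The workload function is invented, not taken from either application.

```python
# Farming an independent per-element computation across cores.
from multiprocessing import Pool

def update_molecule(position):
    # Stand-in for an expensive per-molecule computation.
    x, y, z = position
    return (x * 0.99, y * 0.99, z * 0.99)

if __name__ == "__main__":
    positions = [(i * 1.0, i * 2.0, i * 3.0) for i in range(100_000)]
    with Pool() as pool:  # one worker process per core by default
        new_positions = pool.map(update_molecule, positions, chunksize=1_000)
    print(len(new_positions))
```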
20

Improving fault coverage and minimising the cost of fault identification when testing from finite state machines

Guo, Qiang January 2006
Software needs to be adequately tested in order to increase confidence that the system being developed is reliable. However, testing is a complicated and expensive process. Formal specification-based models such as finite state machines have been widely used in system modelling and testing. In this PhD thesis, we primarily investigate fault detection and identification when testing from finite state machines. The research in this thesis comprises three topics: the construction of multiple Unique Input/Output (UIO) sequences using Metaheuristic Optimisation Techniques (MOTs), improved fault coverage through robust Unique Input/Output Circuit (UIOC) sequences, and fault diagnosis when testing from finite state machines. In the study of UIO construction, a model is proposed in which a fitness function is defined to guide the search for input sequences that are potentially UIOs. In the study of improved fault coverage, a new type of UIOC is defined and, building on the Rural Chinese Postman Algorithm (RCPA), a new approach is proposed for the construction of more robust test sequences. In the study of fault diagnosis, heuristics are defined that attempt to cause failures to be observed in shorter test sequences, which helps to reduce the cost of fault isolation and identification. The proposed approaches and techniques were evaluated on a set of case studies, providing experimental evidence for their efficacy.
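The UIO notion underlying the search can be sketched as follows: an input sequence is a UIO for a state if the output it produces from that state differs from the output it produces from every other state, and a fitness function can reward sequences that distinguish the state from many others. The FSM encoding and fitness definition below are illustrative, not the thesis's.

```python
# Mealy-machine sketch: fsm maps (state, input) -> (next state, output).
def run(fsm, state, inputs):
    """Apply an input sequence; return the output sequence produced."""
    outputs = []
    for x in inputs:
        state, out = fsm[(state, x)]
        outputs.append(out)
    return tuple(outputs)

def uio_fitness(fsm, states, s, inputs):
    """Fraction of other states distinguished from s; 1.0 means a UIO."""
    target = run(fsm, s, inputs)
    others = [t for t in states if t != s]
    return sum(run(fsm, t, inputs) != target for t in others) / len(others)

fsm = {("s1", "a"): ("s2", 0), ("s2", "a"): ("s1", 1),
       ("s1", "b"): ("s1", 1), ("s2", "b"): ("s2", 0)}
print(uio_fitness(fsm, ["s1", "s2"], "s1", ["a"]))  # 1.0: 'a' is a UIO for s1
```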
