31

Effective Randomized Concurrency Testing with Partial Order Methods

Yuan, Xinhao January 2020 (has links)
Modern software systems are pervasively concurrent, both to utilize parallel hardware and to perform asynchronous tasks. Ensuring the correctness of concurrent programs, however, remains challenging for large, real-world systems. Because the concurrent events of a system can interleave arbitrarily, unexpected interleavings may drive the system into undefined states, resulting in denial of service, performance degradation, inconsistent data, security issues, etc. To detect such concurrency errors, concurrency testing repeatedly explores the interleavings of a system to find those that induce errors. Traditional systematic testing, however, suffers from the intractable number of interleavings arising from the complexity of real-world systems. Moreover, each iteration of systematic testing adjusts the explored interleaving with a minimal change that swaps the ordering of two events. Such exploration may waste time in large homogeneous sub-spaces that all lead to the same testing result. Thus, on real-world systems, systematic testing often performs poorly, failing to reveal even simple errors within a limited time budget. Randomized testing, on the other hand, samples interleavings of the system to quickly surface simple errors with substantial probability, but it too may explore equivalent interleavings that do not affect the testing results. Such redundancy weakens both the probabilistic guarantees and the performance of randomized testing in finding errors. Towards effective concurrency testing, this thesis combines partial order semantics with randomized testing to find errors with strong probabilistic guarantees. First, we propose partial order sampling (POS), a new randomized testing framework that samples interleavings of a concurrent program with a novel partial order method. It effectively and simultaneously explores the orderings of all events of the program, and it has a high probability of manifesting any error caused by unexpected interleavings. We formally prove that our approach provides exponentially better probabilistic guarantees of sampling any partial order of the program than state-of-the-art approaches. Our evaluation over 32 known concurrency errors in public benchmarks shows that our framework performed 2.6 times better than state-of-the-art approaches at finding the errors. Second, we describe Morpheus, a new practical concurrency testing tool that applies POS to high-level distributed systems in Erlang. Morpheus leverages dynamic analysis to identify and predict the critical events to reorder during testing, significantly improving the exploration effectiveness of POS. We performed a case study applying Morpheus to four popular distributed systems in Erlang, including Mnesia, the database system in the standard Erlang distribution, and RabbitMQ, the message broker service. Morpheus found 11 previously unknown errors leading to unexpected crashes, deadlocks, and inconsistent states, demonstrating the effectiveness and practicality of our approaches.
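The core idea of priority-based partial order sampling can be sketched in a few lines: every enabled event carries an independent random priority, the highest-priority event runs next, and events that race with it receive fresh priorities so one random draw cannot bias later, independent ordering choices. The Python sketch below illustrates that idea only and is not the thesis's implementation; the event names, the `successors` map, and the `races` predicate are invented stand-ins for a real scheduler's view of the program.

```python
import random

def pos_run(initial, successors, races):
    """One randomized schedule in the style of partial order sampling:
    enabled events get independent uniform priorities, the highest-priority
    event executes next, and events racing with it are re-prioritized."""
    priority = {e: random.random() for e in initial}
    enabled, trace = set(initial), []
    while enabled:
        e = max(enabled, key=priority.__getitem__)  # highest priority runs
        trace.append(e)
        enabled.remove(e)
        for n in successors.get(e, []):             # events that e unblocks
            priority[n] = random.random()
            enabled.add(n)
        for r in enabled:
            if races(e, r):                         # racing event: fresh draw
                priority[r] = random.random()
    return trace

# Two hypothetical threads of two events each; events of different threads
# "race" when they touch the same shared variable (a stand-in semantics).
successors = {"t1.write_x": ["t1.read_y"], "t2.write_y": ["t2.read_x"]}
var = lambda ev: ev.split("_")[-1]
races = lambda a, b: a.split(".")[0] != b.split(".")[0] and var(a) == var(b)
print(pos_run(["t1.write_x", "t2.write_y"], successors, races))
```

Re-drawing priorities for racing events is what distinguishes this style of sampling from a naive random walk, which is biased against orderings that require many coordinated choices.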
32

Predictive software design measures

Love, Randall James 11 June 2009 (has links)
This research develops a set of predictive measures enabling software testers and designers to identify and target potential problem areas for additional and/or enhanced testing. Predictions are available as early in the design process as requirements allocation and as late as code walk-throughs. These predictions are based on characteristics of the design artifacts prior to coding. Prediction equations are formed at established points in the software development process called milestones. Four areas of predictive measurement are examined at each design milestone for candidate predictive metrics. These areas are: internal complexity, information flow, defect categorization, and the change in design. Prediction equations are created from the set of candidate predictive metrics at each milestone. The most promising of the prediction equations are selected and evaluated. The single "best" prediction equation is selected at each design milestone. The resulting predictions are promising in terms of ranking areas of the software design by the number of predicted defects. Predictions of the actual number of defects are less accurate. / Master of Science
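As an illustration of how such prediction equations can be formed, the sketch below fits a linear equation from per-module design metrics to defect counts and then ranks modules by predicted defects, the use the abstract reports as most promising. The metric names and all numbers are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical milestone data: one row per design module, with candidate
# metrics measured from design artifacts before coding.
metrics = np.array([
    # fan_in, fan_out, internal_complexity, design_changes
    [ 3,  5, 12, 1],
    [10,  2, 30, 4],
    [ 1,  1,  4, 0],
    [ 7,  9, 22, 3],
    [ 4,  6, 15, 2],
])
defects = np.array([2, 9, 0, 6, 3])   # defects later found per module

# Fit a linear prediction equation: defects ~ w . metrics + b
X = np.hstack([metrics, np.ones((len(metrics), 1))])
w, *_ = np.linalg.lstsq(X, defects, rcond=None)

predicted = X @ w
ranking = np.argsort(-predicted)      # modules ranked by predicted defects
print("inspect modules in this order:", ranking)
```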
33

A decision support system framework for testing and evaluating software in organisations

Sekgweleo, Tefo Gordon January 2018 (has links)
Thesis (DPhil (Informatics))--Cape Peninsula University of Technology, 2018. / Increasingly, organisations in South Africa and across the world rely on software for various reasons, such as competitiveness and sustainability. The software is either developed in-house or purchased off the shelf. Irrespective of how it was acquired, software encounters challenges from the implementation stage through to the support and use stages. These challenges sometimes hinder, and are prohibitive to, the processes and activities that the software is intended to enable and support. The majority of the challenges encountered with software are attributed to the fact that it was not tested, or not appropriately tested, before implementation. Some of the challenges have been costly to many organisations, particularly in South Africa. As a result, some organisations have been hampered in their efforts toward growth, competitiveness and sustainability. The challenges stem from the fact that there are no testing tools and methods that can be easily customised for an organisation's purposes. As a result, some organisations adopt multiple tools and methods for the same testing purposes, which has not solved the problem, as the challenges persist among South African organisations. This study was undertaken in response to the challenges stated above. The aim was to develop a decision support system framework that can be used for software testing by any organisation, owing to its flexibility for customisation. The interpretivist and inductive approaches were employed. Qualitative methods and the case study design were applied. Three South African organisations, a private organisation, a public organisation, and a small to medium enterprise (SME), were used as cases in this study. A set of criteria was used to select the organisations. The analysis of the data was guided by two sociotechnical theories, actor network theory (ANT) and diffusion of innovation (DOI). The theories were applied complementarily because of their different focuses. Actor network theory focuses on actors, which are both human and non-human, on the heterogeneity of networks, and on the relationships between the actors within networks. This includes the interactions that happen at different moments, as translated within the heterogeneous networks. Thus, ANT was employed to examine and gain a better understanding of the factors that influence software testing in organisations. DOI focuses on how new ideas are diffused in an environment, with particular focus on the innovation-decision process, which comprises five stages: knowledge, persuasion, decision, implementation and confirmation. Findings from the data analysis of the three cases were further interpreted, and based on the interpretation, a decision support system framework was developed. The framework is intended to be of interest to software developers, software project managers and other stakeholders and, most importantly, to guide software testers in their task of testing software. Thus, this research is intended to be of interest and benefit to organisations and academia through its theoretical, practical and methodological contributions, as detailed in chapter seven (the conclusion). In conclusion, even though this research is rigorous, comprehensive and holistic, there is room for future studies. Future research could address the measurement of software testing. Also, sociotechnical theories such as structuration theory and the technology acceptance model should be considered in the analysis of such studies.
34

Identifying Testing Requirements for Modified Software

Apiwattanapong, Taweesup 09 July 2007 (has links)
Throughout its lifetime, software must be changed for many reasons, such as bug fixing, performance tuning, and code restructuring. Testing modified software is the main activity performed to gain confidence that changes behave as they are intended and do not have adverse effects on the rest of the software. A fundamental problem of testing evolving software is determining whether test suites adequately exercise changes and, if not, providing suitable guidance for generating new test inputs that target the modified behavior. Existing techniques evaluate the adequacy of test suites based only on control- and data-flow testing criteria. They do not consider the effects of changes on program states and, thus, are not sufficiently strict to guarantee that the modified behavior is exercised. Also, because of the lack of this guarantee, these techniques can provide only limited guidance for generating new test inputs. This research has developed techniques that will assist testers in testing evolving software and provide confidence in the quality of modified versions. In particular, this research has developed a technique to identify testing requirements that ensure that the test cases satisfying them will result in different program states at preselected parts of the software. This research has also developed supporting techniques for identifying testing requirements. Such techniques include (1) a differencing technique, which computes differences and correspondences between two software versions and (2) two dynamic-impact-analysis techniques, which identify parts of software that are likely affected by changes with respect to a set of executions.
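The flavor of the supporting dynamic-impact-analysis techniques can be conveyed with a small sketch: given per-test execution traces and a set of changed methods, select the tests that exercise a change and flag the methods executed from the first change onward as potentially affected. This is an illustrative simplification, not the thesis's actual algorithms, and the trace data is invented.

```python
# Minimal dynamic impact analysis over recorded execution traces.
def affected_entities(traces, changed):
    """traces: {test_name: [method, ...]} in execution order.
    changed: set of modified method names."""
    impacted_tests = set()
    affected = set(changed)
    for test, trace in traces.items():
        hits = [i for i, m in enumerate(trace) if m in changed]
        if hits:
            impacted_tests.add(test)
            # methods executed at or after the first changed method
            affected.update(trace[hits[0]:])
    return impacted_tests, affected

traces = {
    "test_login":  ["init", "hash_pw", "check_pw", "audit"],
    "test_signup": ["init", "validate", "store"],
}
tests, methods = affected_entities(traces, changed={"check_pw"})
print(tests)    # {'test_login'}
print(methods)  # {'check_pw', 'audit'}
```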
35

Efficient specification-based testing using incremental techniques

Uzuncaova, Engin 10 October 2012 (has links)
As software systems grow in complexity, the need for efficient automated techniques for design, testing and verification becomes more and more critical. Specification-based testing provides an effective approach for checking the correctness of software in general. Constraint-based analysis using specifications enables checking various rich properties by automating the generation of test inputs. However, as specifications get more complex, existing analyses face a scalability problem due to state explosion. This dissertation introduces a novel approach to analyze declarative specifications incrementally; presents a constraint prioritization and partitioning methodology to enable efficient incremental analyses; defines a suite of optimizations to improve the analyses further; introduces a novel approach for testing software product lines; and provides an experimental evaluation that shows the feasibility and scalability of the approach. The key insight behind the incremental technique is declarative slicing, which is a new class of optimizations. The optimizations are inspired by traditional program slicing for imperative languages but are applicable to analyzable declarative languages in general, and Alloy in particular. We introduce a novel algorithm for slicing declarative models. Given an Alloy model, our fully automatic tool, Kato, partitions the model into a base slice and a derived slice using constraint prioritization. As opposed to the conventional use of the Alloy Analyzer, where models are analyzed as a whole, we perform analysis incrementally, i.e., in several steps. A satisfying solution to the base slice is systematically extended to generate a solution for the entire model, while unsatisfiability of the base implies unsatisfiability of the entire model. We show how our incremental technique enables different analysis tools and solvers to be used in synergy to further optimize our approach. Compared to the conventional use of the Alloy Analyzer, this means even greater overall performance gains in solving declarative models. Incremental analyses have a natural application in the software product line domain. A product line is a family of programs built from features that are increments in program functionality. Given properties of features as first-order logic formulas, we automatically generate test inputs for each product in a product line. We show how to map a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experiments using different data structure product lines show that our approach can provide an order of magnitude speed-up over conventional techniques.
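A toy illustration of the incremental idea, assuming a simple boolean model rather than Alloy: solve the base slice first, then try to extend each base solution to satisfy the derived slice. If the base slice is unsatisfiable, the whole model is, and no further work is needed.

```python
from itertools import product

base_vars    = ["a", "b"]          # variables constrained by the base slice
derived_vars = ["c"]               # variables only the derived slice touches

base_slice    = lambda s: s["a"] != s["b"]              # base constraints
derived_slice = lambda s: s["c"] == (s["a"] or s["b"])  # derived constraints

def solve(vars_, constraint, partial=None):
    """Enumerate assignments to vars_ that satisfy the constraint,
    extending an optional partial assignment."""
    for bits in product([False, True], repeat=len(vars_)):
        s = dict(partial or {}, **dict(zip(vars_, bits)))
        if constraint(s):
            yield s

# Incremental analysis: base solutions are found first, then extended.
for base_sol in solve(base_vars, base_slice):
    for full_sol in solve(derived_vars, derived_slice, base_sol):
        print(full_sol)   # a solution of the whole model, found in two steps
```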
36

MIST: towards a minimum set of test cases

Feng, Xin, 馮昕 January 2002 (has links)
abstract / toc / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
37

Design structure and iterative release analysis of scientific software

Zulkarnine, Ahmed Tahsin January 2012 (has links)
One of the main objectives of software development in scientific computing is efficiency. Because such software is focused on a highly specialized application domain, important software quality attributes, e.g., usability and extensibility, may not be among the primary objectives. In this research, we have studied the design structures and iterative releases of scientific research software using the Design Structure Matrix (DSM). We implemented a DSM partitioning algorithm using the Compressed Row Storage (CRS) sparse matrix data structure, and its timing was better than that obtained with the widely used C++ library Boost. Secondly, we computed several architectural complexity metrics and compared the releases and total release costs of a number of open-source scientific research software systems. One important finding is the absence of circular dependencies in the studied software, which we attribute to the strong emphasis on the computational performance of the code. The iterative release analysis indicates that there might be a correspondence between the "clustering coefficient" and the "release rework cost" of the software. / x, 87 leaves : ill. ; 29 cm
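A minimal sketch of DSM partitioning over a CRS/CSR-encoded dependency matrix, using a toy four-task matrix rather than anything from the thesis: repeatedly peel off the tasks whose predecessors have all been placed, which yields a level ordering. In the absence of circular dependencies, as observed in the studied software, every task is eventually placed.

```python
# CSR encoding of "row i depends on the columns listed for it".
row_ptr = [0, 0, 1, 3, 4]      # 4 tasks; task 0 has no dependencies
col_idx = [0, 0, 1, 2]         # task1<-0, task2<-0,1, task3<-2
n = len(row_ptr) - 1

# Recover each task's dependency set from the CSR arrays.
deps = [set(col_idx[row_ptr[i]:row_ptr[i + 1]]) for i in range(n)]

levels, placed = [], set()
while len(placed) < n:
    level = [i for i in range(n) if i not in placed and deps[i] <= placed]
    if not level:
        # A cycle would require "tearing" in DSM terminology.
        raise ValueError("circular dependency: matrix needs tearing")
    levels.append(level)
    placed.update(level)
print(levels)   # [[0], [1], [2], [3]]
```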
38

Software testing tools and productivity

Moschoglou, Georgios Moschos January 1996 (has links)
Testing statistics state that testing consumes more than half of a programmer's professional life, although few programmers like testing, fewer still like test design, and only 5% of their education is devoted to testing. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions - testing software using no tool and testing software using a command-line based testing tool - in terms of the length of time and the number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the Computer Science Department. The second experiment compares three conditions - testing software using no tool, testing software using a command-line based testing tool, and testing software using an interactive GUI tool with added functionality - in terms of the length of time and the number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department. / Department of Computer Science
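For readers unfamiliar with the adequacy criterion used in both experiments, the sketch below measures statement coverage for a toy function using Python's standard trace module. It is illustrative only; the experiments themselves used dedicated testing tools.

```python
import trace

def classify(n):
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 9)    # first "test case"
tracer.runfunc(classify, 10)   # second "test case"

# counts maps (filename, line number) -> execution count
counts = tracer.results().counts
fname = classify.__code__.co_filename
covered = sorted(line for (f, line), c in counts.items()
                 if f == fname and c > 0)
print("lines executed by the test suite:", covered)
```

Adding test cases (e.g., classify(15) and classify(7)) grows the covered set toward full statement coverage, which is exactly what the experiments asked students to achieve.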
39

Property preserving development and testing for CSP-CASL

Kahsai, Temesghen January 2009 (has links)
This thesis describes a theoretical study and an industrial application in the area of formal systems development, verification and formal testing using the specification language CSP-CASL. The latter is a comprehensive specification language that allows systems to be described in a combined algebraic / process-algebraic notation. To this end it integrates the process algebra CSP and the algebraic specification language CASL. In this thesis we propose various formal development notions for CSP-CASL capable of capturing the informal vertical and horizontal software development that we typically find in industrial applications. We provide proof techniques for such development notions and verification methodologies to prove interesting properties of reactive systems. We also propose a theoretical framework for formal testing from CSP-CASL specifications. Here, we present a conformance relation between a physical system and a CSP-CASL specification. In particular, we study the relationship between CSP-CASL development notions and the implemented system. The proposed theoretical notions of formal system development, property verification and formal testing for CSP-CASL have been successfully applied to two industrial applications: an electronic payment system called EP2 and the starting system of the Rolls-Royce BR725 jet engine control software.
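A toy sketch of one simple conformance relation, trace inclusion, standing in for CSP-CASL's richer relation purely for illustration: every trace observed from the physical system must be a prefix of a trace allowed by the specification.

```python
def prefixes(t):
    """All prefixes of an event sequence, including the empty trace."""
    return [tuple(t[:i]) for i in range(len(t) + 1)]

def conforms(observed_traces, spec_traces):
    """Trace-inclusion conformance: observed behaviour must stay within
    the prefix-closed set of traces the specification allows."""
    allowed = {p for t in spec_traces for p in prefixes(t)}
    return all(tuple(t) in allowed for t in observed_traces)

spec = [("insert_card", "enter_pin", "pay")]
print(conforms([("insert_card", "enter_pin")], spec))   # True
print(conforms([("insert_card", "pay")], spec))         # False
```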
40

Development of a tool to test computer protocols

Myburgh, W. D 04 1900 (has links)
Thesis (MSc) -- Stellenbosch University, 2003. / ENGLISH ABSTRACT: Software testing tools simplify and automate the menial work associated with testing. Moreover, for complex concurrent software such as computer protocols, testing tools allow testing on an abstract level that is independent of specific implementations. Standard conformance testing methodologies and a number of testing tools are commercially available, but detailed descriptions of the implementation of such testing tools are not widely available. This thesis investigates the development of a tool for automated protocol testing in the ETH Oberon development environment. The need to develop a protocol testing tool that automates the execution of specified test cases was identified in collaboration with a local company that develops protocols in the programming language Oberon. Oberon is a strongly typed, secure language that supports modularisation and promotes a readable programming style. The required tool should translate specified test cases into executable test code supported by a runtime environment. A test case consists of a sequence of input actions to which the software under test is expected to respond by executing observable output actions. A number of issues are considered, the first of which concerns the representation of test case specifications. For this, a notation was used that is essentially a subset of the test specification language TTCN-3, as standardised by the European Telecommunications Standards Institute. The second issue is the format of executable test cases and a suitable runtime environment. A translator was developed that generates executable Oberon code from specified test cases. The compiled test code is supported by a runtime library, which is part of the tool. Due to the concurrent nature of a protocol environment, the concurrent processes in the runtime environment are identified. Since ETH Oberon supports multitasking in a limited sense, test cases are executed as cooperating background tasks. The third issue concerns the interaction between an executing test case and a system under test. It is addressed by an implementation-dependent interface that maps specified test interactions onto real interactions as required by the test context in which an implementation under test operates. A supporting protocol for remotely accessing the service boundary of an implementation under test, together with the underlying protocol service providers, is part of a test context. The ETH Oberon system provides a platform that simplifies the implementation of protocol test systems, due to its size and simple task mechanism. Operating system functionality considered essential is pointed out in general terms, since other systems could be used to support such testing tools. In conclusion, directions for future work are proposed. / AFRIKAANSE OPSOMMING (translated): Software testing tools simplify and automate the menial work associated with testing. A testing tool further allows complex concurrent software, such as computer protocols, to be tested at an abstract level that is independent of specific implementations. Standard methods for conformance testing exist and a number of testing tools are commercially available. Detailed descriptions of the implementation of such tools are, however, not generally available. This thesis investigates the development of a tool for the automated testing of protocols in the ETH Oberon development environment. The need to develop a protocol testing tool that automates the execution of specified test cases was identified in consultation with a local company that develops protocols in the Oberon programming language. Oberon is a strongly typed language that supports modularisation and promotes a readable programming style. The testing tool must translate specified test cases into executable test code supported by a runtime environment. A test case consists of a sequence of input actions to which the software under test is expected to respond by executing observable output actions. A number of issues are addressed, the first of which concerns the representation of test case specifications. For this, a notation was used that is in essence a subset of the test specification language TTCN-3. TTCN-3 has been standardised by the European Telecommunications Standards Institute. The second issue is the format of executable test cases and a suitable runtime environment. A translator was developed that generates executable Oberon code from specified test cases. The translated test code is supported by a library of runtime functions, which is part of the tool. Because a protocol environment consists of concurrent processes, different types of concurrent processes are identified in a protocol testing tool. Since ETH Oberon is a limited multitasking system, test cases are translated into finite automata that are executed as cooperating background tasks. The third issue concerns the interaction between an executing test case and the system being tested. It is addressed by an interface that maps specified interactions onto real interactions as required by the context in which an implementation under test executes. A supporting protocol for remotely reaching the service boundary of the implementation, and other underlying protocol services, are part of a test context. The ETH Oberon system helps to simplify the implementation of protocol testing tools, owing to the system's size and its simple task handler. The essential functionality of operating systems is highlighted in general terms, because other systems could be used to support testing tools. Finally, proposals for follow-up work are made.
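A minimal sketch of executing a test case of the kind described, with a hypothetical adapter API rather than the thesis's Oberon runtime: each input action is sent to the implementation under test, and the observable output actions that follow are checked against the expectation.

```python
def run_test_case(steps, send, receive, timeout=1.0):
    """steps: [(input_action, [expected_output, ...]), ...]
    send/receive: adapter functions mapping abstract test interactions
    onto real interactions with the implementation under test."""
    for stimulus, expected in steps:
        send(stimulus)
        for exp in expected:
            observed = receive(timeout)
            if observed != exp:
                return f"FAIL: sent {stimulus!r}, expected {exp!r}, got {observed!r}"
    return "PASS"

# A stub standing in for a real protocol implementation and its adapter.
outbox = []
def send(msg):
    outbox.extend({"connect": ["ack"], "data": ["ack", "done"]}[msg])
def receive(timeout):
    return outbox.pop(0) if outbox else None

print(run_test_case([("connect", ["ack"]), ("data", ["ack", "done"])],
                    send, receive))   # PASS
```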
