31

Model selection and testing for an automated constraint modelling toolchain

Hussain, Bilal Syed January 2017 (has links)
Constraint Programming (CP) is a powerful technique for solving a variety of combinatorial problems. Automated modelling using a refinement-based approach abstracts over modelling decisions in CP by allowing users to specify their problem in a high-level specification language such as ESSENCE. This refinement process produces many models resulting from different choices, each with its own strengths. A parameterised specification represents a problem class, where the parameters define the instance of the class we wish to solve. Since each model has different performance characteristics, choosing the right model is crucial for solving an instance effectively. This thesis presents a method to generate instances automatically for the purpose of choosing a subset of the available models with superior performance across the instance space. The second contribution of this thesis is a framework to automate the testing of a toolchain for automated modelling. This framework includes a generator of test cases that covers all aspects of the ESSENCE specification language, and it utilises our first contribution, instance generation, to produce parameterised specifications. It can detect errors such as inconsistencies in the model produced during the refinement process. Once we have identified a specification that causes an error, this thesis presents our third contribution: a method for reducing the specification to a much simpler form that still exhibits a similar error. Additionally, this process can generate a set of complementary specifications, including specifications that do not cause the error, to help pinpoint the root cause.
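
To illustrate the first contribution, the sketch below races candidate models over automatically generated instances and keeps every model that wins somewhere, a crude proxy for selecting a subset with superior performance across the instance space. It is an illustration only, not the thesis's toolchain; `generate_instance` and `solve` are hypothetical helpers standing in for the ESSENCE pipeline.

```python
import time

def race_models(models, generate_instance, solve, n_instances=20, timeout=60.0):
    """Race candidate models over generated instances; keep every model
    that is fastest on at least one instance (a crude proxy for the
    subset with superior performance across the instance space)."""
    winners = set()
    for _ in range(n_instances):
        instance = generate_instance()                 # sample the class parameters
        timings = {}
        for name, model in models.items():
            start = time.monotonic()
            solve(model, instance, timeout=timeout)    # capped solver run
            timings[name] = time.monotonic() - start
        winners.add(min(timings, key=timings.get))     # fastest model on this instance
    return winners
```
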
32

Structure and Feedback in Cloud Service API Fuzzing

Atlidakis, Evangelos January 2021 (has links)
Over the last decade, we have witnessed an explosion in cloud services for hosting software applications (Software-as-a-Service), for building distributed services (Platform-as-a-Service), and for providing general computing infrastructure (Infrastructure-as-a-Service). Today, most cloud services are programmatically accessed through Application Programming Interfaces (APIs) that follow the REpresentational State Transfer (REST) software architectural style, and cloud service developers use interface-description languages to describe and document their services. My thesis is that we can leverage the structured usage of cloud services through REST APIs and feedback obtained during interaction with such services in order to build systems that test cloud services in an automatic, efficient, and learning-based way through their APIs. In this dissertation, I introduce stateful REST API fuzzing and describe its implementation in RESTler: the first stateful REST API fuzzing system. Stateful means that RESTler attempts to explore latent service states that are reachable only with sequences of multiple interdependent API requests. I then describe how stateful REST API fuzzing can be extended with active property checkers that test for violations of desirable REST API security properties. Finally, I introduce Pythia, a new fuzzing system that augments stateful REST API fuzzing with coverage-guided feedback and learning-based mutations.
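
A toy illustration of the stateful aspect: the second request below is only meaningful in the service state created by the first, so a fuzzer must infer and respect that dependency. The endpoints and response fields are hypothetical, not RESTler's actual inputs; flagging 5xx responses as bug candidates mirrors the general approach described here.

```python
import requests

BASE = "http://localhost:8080"  # hypothetical cloud service under test

def fuzz_sequence(fuzz_values):
    """Execute interdependent request pairs: create a resource, then
    exercise state reachable only through the id it returns."""
    for value in fuzz_values:
        created = requests.post(f"{BASE}/blog/posts", json={"body": value})
        if created.status_code != 201:
            continue                                  # creation failed; no state to explore
        post_id = created.json().get("id")            # producer-consumer dependency
        deleted = requests.delete(f"{BASE}/blog/posts/{post_id}")
        if deleted.status_code >= 500:                # unhandled error: bug candidate
            print(f"bug candidate: DELETE after POST({value!r}) -> {deleted.status_code}")
```
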
33

Processor and postprocessor for a plane frame analysis program on the IBM PC

Ghabra, Fawwaz I. 15 November 2013 (has links)
In this thesis, a PROCESSOR and a POSTPROCESSOR are developed for a plane frame analysis computer program on the IBM PC. The PROCESSOR reads the data prepared by a PREPROCESSOR and solves for the unknown joint displacements using the matrix displacement method. The POSTPROCESSOR uses the results of the PROCESSOR to obtain the required responses of the structure. A chapter on testing procedures is also provided. / Master of Science
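
The matrix displacement method reduces the PROCESSOR's job to a linear solve: assemble the reduced global stiffness matrix K and joint load vector F, then recover the unknown joint displacements u from K u = F. A minimal sketch with made-up numbers (the thesis targets the IBM PC, not Python):

```python
import numpy as np

# Reduced global stiffness matrix (free degrees of freedom only) and
# joint load vector; the values are illustrative, not from the thesis.
K = np.array([[12.0, -6.0],
              [-6.0,  4.0]])
F = np.array([10.0, 0.0])

u = np.linalg.solve(K, F)      # unknown joint displacements
print("displacements:", u)
print("residual:", K @ u - F)  # ~0 confirms the solve; a postprocessor
                               # would recover member forces from u
```
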
34

AdaTAD - a debugger for the Ada multi-task environment

Fainter, Robert Gaffney January 1985 (has links)
In a society that is increasingly dependent upon computing machinery, the issues associated with the correct functioning of that machinery are of crucial interest. The consequences of erroneous behavior of computers are dire, with the worst case scenario being, conceivably, global thermonuclear war. Therefore, the development of procedures and tools which can be used to increase confidence in the correctness of the software that controls the world's computers is of vital importance. The Department of Defense (DoD) is in the process of adopting a standard computer language for the development of software. This language is called Ada. One of the major features of Ada is that it supports concurrent programming via its "task" compilation unit. There are not, however, any automated tools to aid in locating errors in the tasks. The design for such a tool is presented. The tool is named AdaTAD and is a debugger for programs written in Ada. The features of AdaTAD are specific to the problems of concurrent programming, and its requirements are derived from the literature. AdaTAD is, however, a unique tool, designed using Ada as a program description language. When AdaTAD is implemented in Ada, it becomes portable among all environments which support the Ada language. This offers the advantage that a single debugger is portable to many different machine architectures, so separate debuggers are not necessary for each implementation of Ada. Moreover, since AdaTAD is designed to allow debugging of tasks, AdaTAD also supports debugging in a distributed environment. That means that, if the tasks of a user's program are running on different computers in a distributed environment, the user is still able to use AdaTAD to debug the tasks as a single program. This feature is unique among automated debuggers. After the design is presented, several examples are offered to explain the operation of AdaTAD and to show that AdaTAD is useful in revealing the location of errors specific to concurrent programming. / Ph. D.
35

The application of structure and code metrics to large scale systems

Canning, James Thomas January 1985 (has links)
This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to the application of code metrics such as lines of code, McCabe's Cyclomatic number, and Halstead's software science variables. This research, however, also investigates the relationship of four structure metrics (Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure, and McClure's Invocation complexity) to various observed measures of complexity such as ERRORS, CHANGES, and CODING TIME. These metrics are referred to as structure measures since they measure control flow and data flow interfaces between system components. Spearman correlations between the metrics revealed that the code metrics were similar measures of system complexity, while the structure metrics were typically measuring different dimensions of software. Furthermore, correlating the metrics to observed measures of complexity indicated that the Information Flow metric and the Invocation measure typically performed as well as the three code metrics when project factors and subsystem factors were taken into consideration. However, it was generally true that no single metric was able to satisfactorily identify the variations in the data for a single observed measure of complexity. Trends between many of the metrics and the observed data were identified when individual components were grouped together. Code metrics typically formed groups of increasing complexity which corresponded to increases in the mean values of the observed data. The strength of the Information Flow metric and the Invocation measure is their ability to form a group containing highly complex components which was found to be populated by outliers in the observed data. / Ph. D.
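
Of the structure metrics named above, Henry's Information Flow measure is the most mechanical to reproduce: a common formulation (assumed here) scores each component as length × (fan-in × fan-out)², where fan-in and fan-out count flows of information into and out of it. A toy computation:

```python
def information_flow(length, fan_in, fan_out):
    """Henry-Kafura style information flow complexity:
    length * (fan_in * fan_out) ** 2."""
    return length * (fan_in * fan_out) ** 2

# (lines of code, fan-in, fan-out) per component; illustrative values only.
components = {"parse": (120, 3, 2), "report": (45, 1, 6), "main": (30, 1, 4)}
for name, (loc, fan_in, fan_out) in components.items():
    print(f"{name}: {information_flow(loc, fan_in, fan_out)}")
```
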
36

A HASP monitor for IBM systems under OS and HASP

Owens, Kathryn J. 03 June 2011 (has links)
This thesis describes the design, development, implementation, and output results of a software monitor program which measures job turnaround time on an IBM 360 system under OS/MFT and HASP. This program is designed to be used in conjunction with other monitors and accounting data to measure the performance of the System/360. In this thesis, relevant HASP logic is summarized, followed by design specifications of the monitor, solutions to design problems, and a full description of the monitor's program logic. Actual results obtained by the monitor are included. / Ball State University, Muncie, IN 47306
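
The quantity the monitor measures is simple: turnaround is completion time minus submission time for each job. A toy computation (the monitor itself was System/360 software, not Python):

```python
from datetime import datetime

# (job id, submitted, completed); illustrative records, not HASP data.
jobs = [("JOB001", "09:00:05", "09:02:45"),
        ("JOB002", "09:00:30", "09:10:10")]
FMT = "%H:%M:%S"
for job_id, submitted, completed in jobs:
    turnaround = datetime.strptime(completed, FMT) - datetime.strptime(submitted, FMT)
    print(job_id, "turnaround:", turnaround)
```
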
37

Program analysis to support quality assurance techniques for web applications

Halfond, William G. J. 20 January 2010 (has links)
As web applications occupy an increasingly important role in the day-to-day lives of millions of people, testing and analysis techniques that ensure that these applications function with a high level of quality are becoming even more essential. However, many software quality assurance techniques are not directly applicable to modern web applications. Certain characteristics, such as the use of HTTP and generated object programs, can make it difficult to identify software abstractions used by traditional quality assurance techniques. More generally, many of these abstractions are implemented differently in web applications, and the lack of techniques to identify them complicates the application of existing quality assurance techniques to web applications. This dissertation describes the development of program analysis techniques for modern web applications and shows that these techniques can be used to improve quality assurance. The first part of the research focuses on the development of a suite of program analysis techniques that identifies useful abstractions in web applications. The second part of the research evaluates whether these program analysis techniques can be used to successfully adapt traditional quality assurance techniques to web applications, improve existing web application quality assurance techniques, and develop new techniques focused on web application-specific issues. The work in quality assurance techniques focuses on improving three different areas: generating test inputs, verifying interface invocations, and detecting vulnerabilities. The evaluations of the resulting techniques show that the use of the program analyses results in significant improvements in existing quality assurance techniques and facilitates the development of new useful techniques.
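
One of the abstractions mentioned, interface invocations, can be illustrated simply: an analysis computes the parameter sets each component accepts, and each invocation is verified against them. The components and parameter sets below are hypothetical, not from this dissertation's analyses.

```python
# Hypothetical accepted interfaces: component -> accepted parameter sets.
ACCEPTED = {
    "login.jsp": [{"user", "password"}, {"token"}],
    "search.jsp": [{"query"}, {"query", "page"}],
}

def verify_invocation(component, params):
    """Check one interface invocation against the accepted interfaces
    a web-application interface analysis would compute."""
    return any(params == interface for interface in ACCEPTED.get(component, []))

print(verify_invocation("login.jsp", {"user", "password"}))  # True
print(verify_invocation("search.jsp", {"page"}))             # False: 'query' missing
```
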
38

The implications of deviating from software testing processes : a case study of a software development company in Cape Town, South Africa

Roems, Raphael January 2017 (has links)
Thesis (MTech (Business Information Systems))--Cape Peninsula University of Technology, 2017. / Ensuring that predetermined quality standards are met is a challenge for software development companies and for the software development industry at large. The software testing process is an important part of the larger software development process and is done to ensure that software functionality meets user requirements and that software defects are detected and fixed before users receive the developed software. Software testing has progressed to the point where formal processes, dedicated software testing resources, and defect management software are in use at software development organisations. The research determined the implications that the case study software development organisation could face when deviating from its software testing processes, with a focus on the functions performed by the software tester role. The analytical dimensions of the duality of structure framework, based on Structuration Theory, were used as a lens to understand and interpret the socio-technical processes associated with software development at the case study organisation. Results include the identification of software testing processes, resources and tools, together with the formal software development processes and methodologies in use. Critical e-commerce website functionality and software development resource costs were identified, as were the tangible and intangible costs arising from software defects. Recommendations include prioritising critical functionality for test execution on the organisation's e-commerce website platform, and undertaking risk management in scenarios with time constraints on software testing, balancing risk against quality, features, budget and schedule. Numerous process improvements were recommended to help the organisation prevent deviations from prescribed testing processes. As a research contribution, a guideline was developed to illustrate the relationships between the specific research areas and their impact on software project delivery.
39

Combining over- and under-approximating program analyses for automatic software testing

Csallner, Christoph 07 July 2008 (has links)
This dissertation attacks the well-known problem of path-imprecision in static program analysis. Our starting point is an existing static program analysis that over-approximates the execution paths of the analyzed program. We then make this over-approximating program analysis more precise for automatic testing in an object-oriented programming language. We achieve this by combining the over-approximating program analysis with usage-observing and under-approximating analyses. More specifically, we make the following contributions. We present a technique to eliminate language-level unsound bug warnings produced by an execution-path-over-approximating analysis for object-oriented programs that is based on the weakest precondition calculus. Our technique post-processes the results of the over-approximating analysis by solving the produced constraint systems and generating and executing concrete test-cases that satisfy the given constraint systems. Only test-cases that confirm the results of the over-approximating static analysis are presented to the user. This technique has the important side-benefit of making the results of a weakest-precondition based static analysis easier to understand for human consumers. We show examples from our experiments that visually demonstrate the difference between hundreds of complicated constraints and a simple corresponding JUnit test-case. Besides eliminating language-level unsound bug warnings, we present an additional technique that also addresses user-level unsound bug warnings. This technique pre-processes the testee with a dynamic analysis that takes advantage of actual user data. It annotates the testee with the knowledge obtained from this pre-processing step and thereby provides guidance for the over-approximating analysis. We also present an improvement to dynamic invariant detection for object-oriented programming languages. Previous approaches do not take behavioral subtyping into account and therefore may produce inconsistent results, which can throw off automated analyses such as the ones we are performing for bug-finding. Finally, we address the problem of unwanted dependencies between test-cases caused by global state. We present two techniques for efficiently re-initializing global state between test-case executions and discuss their trade-offs. We have implemented the above techniques in the JCrasher, Check 'n' Crash, and DSD-Crasher tools and present initial experience in using them for automated bug finding in real-world Java programs.
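
The filtering step described above can be caricatured in a few lines: execute a concrete input that satisfies the static analysis' constraint system and report the warning only if the predicted failure actually occurs. This is a Python sketch of the idea only; the actual tools generate and run JUnit test cases against Java testees.

```python
def confirm_warning(concrete_input, testee):
    """Run the testee on an input satisfying the constraint system;
    keep the warning only if the predicted failure is observed."""
    try:
        testee(**concrete_input)
    except Exception as exc:   # predicted failure observed
        return f"confirmed: {type(exc).__name__} on {concrete_input}"
    return None                # unconfirmed; suppressed as likely spurious

# Hypothetical testee: the static analysis predicts a crash at denominator == 0.
def ratio(numerator, denominator):
    return numerator / denominator

print(confirm_warning({"numerator": 1, "denominator": 0}, ratio))
```
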
40

Uma contribuição ao teste baseado em modelo no contexto de aplicações móveis / A contribution to model-based testing in the context of mobile applications

Farto, Guilherme de Cleva 08 March 2016 (has links)
Due to the increasing number and diversity of users, new testing approaches are necessary to reduce the presence of faults and ensure better quality in mobile applications. The particularities of this class of software require that traditional testing techniques be revisited and new approaches proposed. The event-driven nature and functionalities of mobile applications demand tests that can be performed automatically. Model-Based Testing (MBT) is a valid and promising approach that favors the use of a defined process, as well as mechanisms and formal techniques, for the testing of mobile applications. This dissertation investigates the adoption of MBT along with the Event Sequence Graph (ESG) modeling technique to test Android applications. Initially, we evaluate MBT supported by ESG and the Robotium tool. Based on the results and challenges identified, we propose a specific approach founded on the reuse of test models to (i) reduce the manual effort demanded by the concretization of test cases and (ii) test distinct characteristics inherent to the mobility context. A supporting tool was designed and implemented to automate the proposed approach. Finally, we conducted an experimental study in an industrial environment to evaluate the proposed approach and tool with respect to their effectiveness in reducing the effort required for concretization, as well as their fault detection capability in Android applications.
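
In ESG models, nodes are events and edges are admissible event sequences, so test cases are entry-to-exit paths that together cover every edge. A minimal sketch of edge-coverage generation under that reading (a toy graph, not the dissertation's tool or models):

```python
# Toy Event Sequence Graph: '[' and ']' are the entry and exit pseudo-events.
ESG = {
    "[": ["login"],
    "login": ["browse", "]"],
    "browse": ["addToCart", "]"],
    "addToCart": ["browse", "]"],
}

def edge_covering_sequences(esg):
    """Greedily emit entry-to-exit event sequences until every edge
    has been exercised at least once (fine for this toy graph; a real
    generator would bound walk length on arbitrary graphs)."""
    uncovered = {(src, dst) for src, outs in esg.items() for dst in outs}
    sequences = []
    while uncovered:
        node, path = "[", []
        while node != "]":
            outs = esg[node]
            nxt = next((dst for dst in outs if (node, dst) in uncovered), outs[0])
            uncovered.discard((node, nxt))
            path.append(nxt)
            node = nxt
        sequences.append(path[:-1])  # drop the exit marker
    return sequences

print(edge_covering_sequences(ESG))
```
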
