11

Efficiency determination of automated techniques for GUI testing

Jönsson, Tim January 2014 (has links)
Efficiency as a term in software testing is not well defined in the research community. In industry, and specifically the test tool industry, it has become a sales pitch without meaning. GUI testing in its manual form is a time-consuming task that testers can find repetitive and tedious, and using human testers to perform a task where focus is hard to keep often ends in defects going unnoticed. The purpose of this thesis is to collect knowledge on efficiency in software testing, focusing on efficiency in GUI testing to keep the scope manageable. Part of the purpose is also to test the hypothesis that automated GUI testing is more efficient than traditional, manual GUI testing. To reach this purpose, case study research was chosen as the main research method. Within the case study, a theoretical study was performed to gain knowledge on the subject. To gather data for analysis, a semi-experimental research approach was used in which one automated GUI testing technique, Capture & Replay, was tested against a more traditional approach to GUI testing. The results of the case study give a definition of efficiency in software testing, as well as three measurements of efficiency: defect detection, repeatability of test cases, and time spent on human interaction. The results also include the findings from the semi-experimental approach, in which the testing tools Squish and TestComplete were used alongside a manual testing approach. The main conclusion drawn in this work is that an automated approach to GUI testing can become more efficient than a manual approach in the long run, when efficiency is determined in terms of defect detection, repeatability, and time.
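The three measurements named in this abstract are simple enough to compute from logged test runs. The following Python sketch is purely illustrative (the TestRun record and all numbers are hypothetical, not data from the thesis); it shows how a manual run and a Capture & Replay run might be summarised on defect detection, repeatability, and human-interaction time:

from dataclasses import dataclass

@dataclass
class TestRun:
    defects_found: int       # defects detected by the run
    repeatable_cases: int    # test cases that can be re-run unchanged
    total_cases: int
    human_minutes: float     # time a tester spent interacting with the run

def efficiency_summary(run: TestRun) -> dict:
    # Summarise a run on the three measurements named in the abstract.
    return {
        "defect_detection": run.defects_found,
        "repeatability": run.repeatable_cases / run.total_cases,
        "human_minutes": run.human_minutes,
    }

# Hypothetical numbers purely for illustration.
manual = TestRun(defects_found=5, repeatable_cases=10, total_cases=40, human_minutes=300)
automated = TestRun(defects_found=6, repeatable_cases=38, total_cases=40, human_minutes=45)

for name, run in [("manual", manual), ("capture & replay", automated)]:
    print(name, efficiency_summary(run))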
12

Automatizované testování aplikace pro IP televizi / Automatized testing of application for IP television

Oudrnický, Jan January 2011 (has links)
The main goal of this thesis is to analyze the current state of testing of the nangu.tv system and to design a system for automated testing of this application and its components. The thesis builds up the theoretical framework needed to better understand the area and defines the specifics of the application and its development. The requirements for the automated testing system are then specified, together with the selection of tools for its development. The following steps also propose changes to the testing process. The thesis presents the steps needed to include the designed system in the company's current testing process. The main purpose of this work is to lay the foundation for a future implementation of automated testing in the testing process at Alnair a.s., so as to improve the level of the company's testing process.
13

NBAP message construction using QuickCheck

Jernberg, Daniel, Granberg, Andreas January 2007 (has links)
Traffic and Feature SW is a department based in Kista. At this department the main processor software (MPSW) in Ericsson's Radio Base Stations is tested prior to the integration of new releases. Traffic and Feature SW, also called MPSW in this thesis, works closely with another Ericsson department, RBS System I&V, which tests the same software but for complete RBS nodes. MPSW uses black- and grey-box scripted testing in regression suites that are executed on a daily basis. These regression suites are separated into different subsets of functionality that map to the capabilities of the Radio Base Station software. The authors of this thesis have implemented automated test cases for a subset of the Radio Base Station software using an automated software tool called QuickCheck. These test cases were successfully integrated into the test framework and were able to find several issues with the main processor software and its subsystems in the Radio Base Station. The commissioner of this thesis has plans to further integrate the QuickCheck tool into the test framework, possibly automating test cases for several subsets of the Radio Base Station software. The authors have therefore analysed and put forward metrics that compare testing of the Radio Base Station software using QuickCheck with the conventional regression test cases. These metrics cover areas such as the cost related to, and the inherent capabilities of, QuickCheck. The evaluation of these metrics was performed by the authors to give the commissioner decisive information. The evaluations showed that QuickCheck was able to generate complex message structures in complex sequences. QuickCheck was also able to shrink both the content of these messages and the length of the message sequences in order to provide a minimal counterexample when a fault was discovered. The only metric QuickCheck failed to support was inherent functionality for handling statistics from executions.
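The generate-and-shrink behaviour described here is the core of property-based testing. Below is a sketch of the same idea using Python's hypothesis library as an analogous tool (not the QuickCheck tool used in the thesis); the message fields and the encode/decode pair are simplified, hypothetical stand-ins rather than real NBAP structures:

from hypothesis import given, strategies as st

# Generate "messages" as dicts of a few fields, and sequences of them.
message = st.fixed_dictionaries({
    "procedure_code": st.integers(min_value=0, max_value=255),
    "cell_id": st.integers(min_value=0, max_value=2**28 - 1),
    "payload": st.binary(max_size=64),
})

def encode(msg):   # toy encoder standing in for the system under test
    return (msg["procedure_code"].to_bytes(1, "big")
            + msg["cell_id"].to_bytes(4, "big") + msg["payload"])

def decode(data):  # toy decoder
    return {
        "procedure_code": data[0],
        "cell_id": int.from_bytes(data[1:5], "big"),
        "payload": data[5:],
    }

@given(st.lists(message, max_size=20))
def test_encode_decode_roundtrip(msgs):
    # Property: decoding an encoded message yields the original message.
    # On failure, hypothesis shrinks both the message contents and the length
    # of the sequence, mirroring the minimal-counterexample behaviour above.
    for msg in msgs:
        assert decode(encode(msg)) == msg

if __name__ == "__main__":
    test_encode_decode_roundtrip()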
14

A Mutation-based Framework for Automated Testing of Timeliness

Nilsson, Robert January 2006 (has links)
A problem when testing timeliness of event-triggered real-time systems is that response times depend on the execution order of concurrent tasks. Conventional testing methods ignore task interleaving and timing and thus do not help determine which execution orders need to be exercised to gain confidence in temporal correctness. This thesis presents and evaluates a framework for testing of timeliness that is based on mutation testing theory. The framework includes two complementary approaches for mutation-based test case generation, testing criteria for timeliness, and tools for automating the test case generation process. A scheme for automated test case execution is also defined. The testing framework assumes that a structured notation is used to model the real-time applications and their execution environment. This real-time system model is subsequently mutated by operators that mimic potential errors that may lead to timeliness failures. Each mutated model is automatically analyzed to generate test cases that target execution orders that are likely to lead to timeliness failures. The validation of the theory and methods in the proposed testing framework is done iteratively through case-studies, experiments and proof-of-concept implementations. This research indicates that an adapted form of mutation-based testing can be used for effective and automated testing of timeliness and, thus, for increasing the confidence level in real-time systems that are designed according to the event-triggered paradigm.
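As a toy illustration of the mutation idea only (not the thesis's model notation or analysis), the sketch below applies an execution-time mutation operator to a small fixed-priority task set and uses classical response-time analysis to check whether the mutant can miss a deadline, i.e. whether it represents a timeliness failure a test case would need to exercise; all task parameters are invented for the example:

import math
from copy import deepcopy

# Tasks sorted by priority (index 0 = highest); C = worst-case execution time,
# T = period, with the deadline assumed equal to the period.
tasks = [
    {"name": "sensor", "C": 2, "T": 10},
    {"name": "control", "C": 4, "T": 20},
    {"name": "logger", "C": 5, "T": 40},
]

def response_time(i, tasks):
    # Classical fixed-priority response-time iteration for task i.
    R = tasks[i]["C"]
    while True:
        interference = sum(math.ceil(R / tasks[j]["T"]) * tasks[j]["C"] for j in range(i))
        R_next = tasks[i]["C"] + interference
        if R_next == R:
            return R
        if R_next > tasks[i]["T"]:   # already past the deadline, stop iterating
            return R_next
        R = R_next

def misses_deadline(tasks):
    return any(response_time(i, tasks) > t["T"] for i, t in enumerate(tasks))

def mutate_wcet(tasks, index, delta):
    # Mutation operator mimicking an execution-time error: inflate one task's WCET.
    mutant = deepcopy(tasks)
    mutant[index]["C"] += delta
    return mutant

print("original schedulable:", not misses_deadline(tasks))                  # True
print("mutant schedulable:", not misses_deadline(mutate_wcet(tasks, 0, 5))) # False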
15

Finding Malformed HTML Outputs and Unhandled Execution Errors of ASP.NET Applications

Ozkinaci, Mehmet Erdal 01 May 2011 (has links) (PDF)
As dynamic web applications become widespread in nearly every area, ASP.NET is one of the popular development platforms in this domain. Errors in these web applications can reduce the credibility of the site and cause the possible loss of a number of clients; testing these applications therefore becomes significant. We present an automated tool to test ASP.NET web applications against execution errors and HTML errors that cause inaccurate or incomplete information to be displayed. Our tool, called Mamoste, adapts the concolic testing technique, which interleaves concrete and symbolic executions to generate test inputs dynamically. Mamoste also considers page events as inputs, which cannot be handled with concolic testing. We have performed experiments on a subset of a heavily used ASP.NET application of a government office. We found 366 HTML errors and a faulty component which is used in almost every ASP.NET page in this application. In addition, Mamoste discovered that a common user control is misused in several generated pages.
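Of the two error classes above, the HTML-error class can be illustrated with a very small well-formedness check. The sketch below is not Mamoste (which generates inputs concolically and drives real ASP.NET pages); it only shows the kind of oracle that flags malformed output, using Python's standard html.parser and an invented sample page:

from html.parser import HTMLParser

VOID_TAGS = {"br", "hr", "img", "input", "meta", "link"}

class TagBalanceChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.errors = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if not self.stack or self.stack[-1] != tag:
            self.errors.append(f"unexpected </{tag}>")
        else:
            self.stack.pop()

    def close(self):
        super().close()
        self.errors += [f"unclosed <{tag}>" for tag in self.stack]

def html_errors(page_output: str):
    checker = TagBalanceChecker()
    checker.feed(page_output)
    checker.close()
    return checker.errors

print(html_errors("<html><body><div><p>hello</div></body></html>"))
# reports the mismatched </div> plus the tags left unclosed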
16

Real-Time Test Oracles using Event Monitoring

Nilsson Holmgren, Sebastian January 2005 (has links)
To gain confidence that a dynamic real-time system behaves correctly, we test it. Automated verification and validation can be used to conduct testing of such systems in an effective and economic way.
An event monitor can be used as part of a test oracle to monitor the system being tested. The test oracle can use the data (i.e., the streams of events) derived from the tested system to determine whether an executed test case gave a positive or negative result. To do this, the test oracle compares the streams of events received from the event monitor with the event expressions derived from the formal specification, and decides whether the executed test case has responded positively or negatively. Any deviation between observed behaviour and accepted behaviour should be reported by the test oracle as a negative result. If the executed test case gave a negative result, the monitor part should signal this to the reporter part of the test oracle.
This work aims to investigate how the event expressions can be derived from the formal specification and, in particular, how the event specification language Solicitor can be used to represent these event expressions.
We also discuss the need for parameterized event types in Solicitor, and in any other event specification languages used in event monitoring, and show that support for parameterized event types is a significant requirement for such languages.
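A stripped-down oracle in this spirit is sketched below. The "event expression" is reduced to an ordered list of (event type, parameter predicate) pairs, a stand-in for illustration only and not Solicitor's notation; the event names and timing bounds are likewise invented:

from typing import Callable, Iterable

Event = dict            # e.g. {"type": "brake_applied", "latency_ms": 12}
Expected = list[tuple[str, Callable[[Event], bool]]]

def oracle(observed: Iterable[Event], expected: Expected) -> tuple[bool, str]:
    # Return (verdict, reason); the verdict is False on any deviation.
    remaining = list(expected)
    for event in observed:
        if not remaining:
            return False, f"unexpected trailing event {event['type']}"
        etype, predicate = remaining[0]
        if event["type"] != etype:
            return False, f"expected {etype}, observed {event['type']}"
        if not predicate(event):
            return False, f"{etype} violated its parameter constraint"
        remaining.pop(0)
    if remaining:
        return False, f"missing expected event {remaining[0][0]}"
    return True, "observed behaviour matches accepted behaviour"

# Example: the second event violates its timing constraint, so the oracle
# reports a negative result to the reporter part of the test framework.
stream = [{"type": "sensor_read", "latency_ms": 3},
          {"type": "brake_applied", "latency_ms": 55}]
spec = [("sensor_read", lambda e: e["latency_ms"] <= 5),
        ("brake_applied", lambda e: e["latency_ms"] <= 20)]
print(oracle(stream, spec))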
17

Closing the Defect Reduction Gap between Software Inspection and Test-Driven Development: Applying Mutation Analysis to Iterative, Test-First Programming

Wilkerson, Jerod W. January 2008 (has links)
The main objective of this dissertation is to assist in reducing the chaotic state of the software engineering discipline by providing insights into both the effectiveness of software defect reduction methods and ways these methods can be improved. The dissertation is divided into two main parts. The first is a quasi-experiment comparing the software defect rates and initial development costs of two methods of software defect reduction: software inspection and test-driven development (TDD). Participants, consisting of computer science students at the University of Arizona, were divided into four treatment groups and were asked to complete the same programming assignment using either TDD, software inspection, both, or neither. Resulting defect counts and initial development costs were compared across groups. The study found that software inspection is more effective than TDD at reducing defects, but that it also has a higher initial cost of development. The study establishes the existence of a defect-reduction gap between software inspection and TDD and highlights the need to improve TDD because of its other benefits. The second part of the dissertation explores a method of applying mutation analysis to TDD to reduce the defect-reduction gap between the two methods and to make TDD more reliable and predictable. A new change impact analysis algorithm (CHA-AS), based on CHA, is presented and evaluated for applications of software change impact analysis where a predetermined set of program entry points is not available or not known. An estimated average-case complexity analysis indicates that the algorithm's time and space complexity is linear in the size of the program under analysis, and a simulation experiment indicates that the algorithm can capitalize on the iterative nature of TDD to produce cost savings in mutation analysis applied to TDD projects. The algorithm should also be useful for other change impact analysis situations with undefined program entry points, such as code library and framework development. An enhanced TDD method is proposed that incorporates mutation analysis, and a set of future research directions is proposed for developing tools to support mutation-analysis-enhanced TDD and to continue improving the TDD method.
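For readers unfamiliar with mutation analysis itself, the sketch below shows the basic kill/survive loop on a trivial function: syntactic mutants are generated, the unit tests are re-run, and the proportion of killed mutants is reported. It illustrates only the general technique, not the CHA-AS algorithm or the dissertation's tooling, and the function and tests are invented:

import ast

SOURCE = """
def remaining_budget(budget, unit_cost, quantity):
    return budget - unit_cost * quantity
"""

def unit_tests(fn):
    # A small TDD-style suite; any failure or crash counts as a kill.
    try:
        assert fn(100, 5, 4) == 80
        assert fn(50, 0, 10) == 50
        return True
    except Exception:
        return False

class SwapOperator(ast.NodeTransformer):
    # Mutation operator: rewrite the n-th binary operator ('-' -> '+', others -> '/').
    def __init__(self, target_index):
        self.target_index, self.seen = target_index, -1

    def visit_BinOp(self, node):
        self.generic_visit(node)
        self.seen += 1
        if self.seen == self.target_index:
            node.op = ast.Add() if isinstance(node.op, ast.Sub) else ast.Div()
        return node

def compile_fn(tree):
    namespace = {}
    exec(compile(ast.fix_missing_locations(tree), "<mutant>", "exec"), namespace)
    return namespace["remaining_budget"]

num_binops = sum(isinstance(n, ast.BinOp) for n in ast.walk(ast.parse(SOURCE)))
killed = sum(not unit_tests(compile_fn(SwapOperator(i).visit(ast.parse(SOURCE))))
             for i in range(num_binops))
print(f"mutation score: {killed}/{num_binops} mutants killed")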
18

Benefits of continuous delivery for Sigma IT Consulting

Sigfast, Martin, Olsson, Fredrik January 2018 (has links)
Manual deployment and testing of code can be both time-consuming and error-prone. Many repetitive manual steps can lead to critical tests or necessary configuration being forgotten or skipped due to time constraints, resulting in software that does not work as intended when deployed to production. The purpose of this report is to examine whether continuous delivery (CD) could lead to any positive effects, and whether there are any obstacles to CD, in an Episerver project at Sigma ITC. The study was done by implementing a CD pipeline for a project similar to a real project at Sigma, then letting the developers work with it and interviewing them about their current workflow, their attitude towards the different steps involved, and whether they saw any problems with CD for their project. Even if the developers were, in general, positive towards CD, they had some reservations about it in their current projects. The main obstacles the developers saw were the time and cost, which would affect the customer, and some uncertainty around the complexity of testing Episerver. The results show that there can be positive effects of CD even if the project type is not optimal for reaping all the CD benefits; it all depends on the people involved seeing value in testing and on the questions around testing in Episerver being straightened out.
19

Model selection and testing for an automated constraint modelling toolchain

Hussain, Bilal Syed January 2017 (has links)
Constraint Programming (CP) is a powerful technique for solving a variety of combinatorial problems. Automated modelling using a refinement-based approach abstracts over modelling decisions in CP by allowing users to specify their problem in a high-level specification language such as ESSENCE. This refinement process produces many models resulting from the different choices that can be selected, each with their own strengths. A parameterised specification represents a problem class, where the parameters of the class define the instance of the class we wish to solve. Since each model has different performance characteristics, the model chosen is crucial for solving the instance effectively. This thesis presents a method to generate instances automatically for the purpose of choosing a subset of the available models that have superior performance across the instance space. The second contribution of this thesis is a framework to automate the testing of a toolchain for automated modelling. This process includes a generator of test cases that covers all aspects of the ESSENCE specification language, and it utilises our first contribution, namely instance generation, to generate parameterised specifications. The framework can detect errors such as inconsistencies in the model produced during the refinement process. Once we have identified a specification that causes an error, this thesis presents our third contribution: a method for reducing the specification to a much simpler form that still exhibits a similar error. Additionally, this process can generate a set of complementary specifications, including specifications that do not cause the error, to help pinpoint the root cause.
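The reduction step in the third contribution can be pictured as a greedy loop that keeps deleting parts of a failing specification while an oracle confirms the error persists. The sketch below is only a generic illustration of that idea, not the actual toolchain: the line-level granularity, the still_fails oracle, and the ESSENCE-like fragment are all invented for the example:

def reduce_spec(spec_lines, still_fails):
    # Return a smaller list of lines that still triggers the error.
    reduced = list(spec_lines)
    changed = True
    while changed:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if still_fails(candidate):        # error preserved without line i
                reduced = candidate
                changed = True
                break
    return reduced

# Toy oracle: pretend the refinement bug is triggered whenever the spec
# still contains a 'matrix indexed by' declaration together with a 'such that'.
def still_fails(lines):
    text = "\n".join(lines)
    return "matrix indexed by" in text and "such that" in text

spec = [
    "language ESSENCE 1.3",
    "given n : int(1..10)",
    "find x : matrix indexed by [int(1..n)] of bool",
    "find y : set of int(1..n)",
    "such that forAll i : int(1..n) . x[i] -> (i in y)",
]
print(reduce_spec(spec, still_fails))
# keeps only the 'matrix indexed by' and 'such that' lines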
20

Fuzz testing for design assurance levels

Gustafsson, Marcus, Holm, Oscar January 2017 (has links)
With safety-critical software, it is important that the application is safe and stable. While such software can be quality-tested manually, automated testing has the potential to catch errors that manual testing will not. In addition, there is the possibility of saving time and cost by automating the testing process. This matters for avionics components, as much time and cost is spent testing and ensuring that the software does not crash or misbehave. This research paper focuses on exploring the usefulness of combining automated testing with fuzz testing. It also focuses on how to fuzz test applications classified into design assurance levels (DAL).
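A minimal mutational fuzz harness of the kind explored here can be sketched in a few lines. The parse_record function below is a hypothetical stand-in for a component under test, and the crash list is a placeholder for whatever evidence a DAL process would actually require:

import random

def parse_record(data: bytes):
    # Toy parser: 1-byte length prefix followed by that many payload bytes.
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated record")      # a defect the fuzzer can trigger
    return payload

def mutate(seed: bytes, rng: random.Random) -> bytes:
    data = bytearray(seed)
    for _ in range(rng.randint(1, 3)):            # flip a few random bytes
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 1000) -> list:
    rng = random.Random(0)                        # fixed seed for repeatability
    failures = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            parse_record(candidate)
        except Exception as exc:                  # any unhandled exception is a finding
            failures.append((candidate, repr(exc)))
    return failures

found = fuzz(b"\x04ABCD")
print(f"{len(found)} crashing inputs, e.g. {found[0] if found else None}")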
