21

Suitability of the Requirements Abstraction Model (RAM) Requirements for High Level System Testing

Muhammad, Naeem January 2007 (has links)
In market-driven requirements engineering, requirements are elicited from various internal and external sources, such as engineers, marketing teams, and customers. This results in a collection of requirements at multiple levels of abstraction. The Requirements Abstraction Model (RAM) is a Market-Driven Requirements Engineering (MDRE) model that helps in managing requirements by organizing them at four levels of abstraction: product, feature, function, and component. The model is adaptable and can be tailored to the needs of an organization; for example, the number of abstraction levels can be changed. Software requirements are an important source of information when developing high-level tests (acceptance and system-level tests). To place a requirement at a suitable level, workup activities (abstracting a requirement upwards or breaking it down) can be performed on it, and these activities can affect the test cases designed from the requirement. Organizations willing to adopt the RAM need to know how suitable RAM requirements are for designing high-level tests. This master thesis analyzes requirements at the product, feature, function, and component levels to evaluate their suitability for supporting the creation of high-level system tests. The analysis involves designing test cases from requirements at each level and evaluating how much of the information needed in the test cases is available in the RAM requirements. Test cases are graded on a scale from 1 to 5 according to the level of detail they contain, with 5 for well detailed and 1 for very incomplete. Twenty requirements were selected for this document analysis, five from each level (product, feature, function, and component). Organizations can use the results of this study when deciding whether to adopt the RAM. Decomposition of the tests developed from the requirements is another area explored in this study. Test decomposition involves dividing tests into sub-tests; its benefits include better resource utilization, meeting time-to-market goals, and better test prioritization. This study explores how tests designed from RAM requirements support test decomposition and thereby help realize these benefits.
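As a rough illustration of the study's setup, the sketch below models RAM's four abstraction levels and the 1-to-5 detail grading of test cases derived from requirements. All class and function names are hypothetical; the thesis does not prescribe any implementation.

```python
from dataclasses import dataclass
from enum import Enum

class AbstractionLevel(Enum):
    # The four RAM levels named in the abstract.
    PRODUCT = "product"
    FEATURE = "feature"
    FUNCTION = "function"
    COMPONENT = "component"

@dataclass
class Requirement:
    identifier: str
    text: str
    level: AbstractionLevel

@dataclass
class TestCase:
    requirement: Requirement
    description: str
    detail_grade: int  # 5 = well detailed .. 1 = very incomplete

def average_grade_by_level(test_cases):
    """Average detail grade of test cases, grouped by the RAM level
    of their source requirement."""
    grades = {}
    for tc in test_cases:
        grades.setdefault(tc.requirement.level, []).append(tc.detail_grade)
    return {level: sum(g) / len(g) for level, g in grades.items()}
```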
22

A Mix Testing Process Integrating Two Manual Testing Approaches: Exploratory Testing and Test Case Based Testing

Shah, Syed Muhammad Ali, Alvi, Usman Sattar January 2010 (has links)
Software testing is a key phase in the software development lifecycle. Testing aims at the discovery and detection of faults, which can be attained using manual or automated testing approaches. This thesis is mainly concerned with manual test approaches. The most commonly used manual testing approaches in the software industry are Exploratory Testing (ET) and Test Case Based Testing (TCBT). TCBT is primarily used by software testers to formalize and guide their testing tasks and to set the theoretical principles for testing. ET, on the other hand, is simultaneous learning, test design, and test execution. Software testing might benefit from an intelligent combination of these approaches; however, there is no evidence of a formal process that accommodates both in combination. This thesis presents a process for Mix Testing (MT) based on the strengths and weaknesses of both approaches, identified through a systematic literature review and interviews with testers in a software organization. The new process is defined by mapping the weaknesses of one approach to the strengths of the other. Static validation of the MT process through interviews in the software organization suggested that MT can resolve the problems of both test approaches to some extent. Furthermore, MT was validated by conducting an experiment in an industrial setting. The analysis of the experimental results indicated that MT has better defect detection than TCBT but less than ET. In addition, the results indicate that MT provides functionality coverage equal to that of ET and TCBT.
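The weakness-to-strength mapping that defines the MT process could be represented as simply as the table below; the entries are illustrative examples of commonly cited ET and TCBT traits, not the mapping from the thesis.

```python
# Hypothetical examples of the mapping MT is built from: a weakness of
# one approach paired with the compensating strength of the other.
ET_WEAKNESS_TO_TCBT_STRENGTH = {
    "failing sessions are hard to reproduce": "scripted steps make runs repeatable",
    "coverage is difficult to track": "test cases trace directly to requirements",
    "results depend on individual tester skill": "test design is reviewable up front",
}

TCBT_WEAKNESS_TO_ET_STRENGTH = {
    "rigid scripts miss unexpected defects": "simultaneous learning adapts the search",
    "all design effort is spent before execution": "design and execution happen together",
}
```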
23

Automated Selective Test Case Generation Methods for Real-Time Systems

Nilsson, Robert January 2000 (has links)
This work aims to investigate the state of the art in test case generation for real-time systems, to analyze existing methods, and to propose future research directions in this area. We believe that a combination of design for testability, automation, and sensible test case selection is the key to verifying modern real-time systems. Existing methods for system-level test case generation for real-time systems are presented, classified, and evaluated against a real-time system model. A defining property of real-time systems is that timeliness is crucial for their correctness. Our system model of the testing target adopts the event-triggered design paradigm for maximum flexibility. This paradigm results in target systems that are harder to test than their time-triggered counterparts, but the model improves testability by adopting previously proposed constraints on application behavior. This work investigates how time constraints can be tested using current methods and reveals problems relating to test case generation for verifying such constraints. Further, approaches for automating the test case generation process are investigated, with special attention to methods aimed at real-time systems. We also note a need for special test coverage criteria for concurrent and real-time systems in order to select test cases that increase confidence in such systems, and we analyze some existing criteria from the perspective of our target model. The results of this dissertation are a classification of methods for generating test cases for real-time systems, an identification of contradictory terminology, and an increased body of knowledge about problems and open issues in this area. We conclude that the test case generation process often neglects the internal behavior of the tested system, the properties of its execution environment, and the effects of these on timeliness. Further, we note that most of the surveyed articles on testing methods incorporate automatic test case generation in some form, but few consider the issues of automated execution of test cases. Four high-level future research directions are proposed that aim to remedy one or more of the identified problems.
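To make the notion of testing a time constraint concrete, here is a minimal sketch of a timeliness check: a test passes only if the response arrives within its deadline. The harness and the 5 ms deadline are assumptions for illustration; real-time testing would control event injection and use the target system's clock.

```python
import time

def run_with_deadline(task, deadline_s):
    """Run one task and check the timeliness constraint:
    the result must be produced within the deadline."""
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    return result, elapsed, elapsed <= deadline_s

# A hypothetical test case: fail if the task misses its 5 ms deadline.
result, elapsed, on_time = run_with_deadline(lambda: sum(range(1000)), 0.005)
assert on_time, f"deadline missed: {elapsed * 1000:.2f} ms"
```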
24

Exploring the use of call stack depth limits to reduce regression testing costs

Bogren, Patrik, Kristola, Isak January 2021 (has links)
Regression testing is performed after existing source code has been modified, to verify that no new faults have been introduced by the changes. Test case selection can reduce the effort of regression testing by selecting a smaller subset of the test suite for later execution. Several criteria and objectives can be used as constraints to be satisfied by the selection process. One common criterion is function coverage, which can be represented by a coverage matrix that maps test cases to methods under test. Generating and evaluating these matrices can be very time consuming for large matrices, since their complexity increases exponentially with the number of tests included. To the best of our knowledge, no techniques for reducing coverage matrix size have been proposed. This thesis develops a matrix-reduction technique based on analysis of call stack data and studies the effects of limiting the call stack depth in terms of coverage accuracy, matrix size, and generation cost. Using a tool that instruments Java projects through Java's instrumentation API, we collect coverage information on open-source Java projects for varying call stack depth limits. Our results show that the stack depth limit can be significantly reduced while retaining high coverage, and that matrix size can be decreased by up to 50%. The metric we used to indicate the difficulty of splitting up the matrix closely resembled the curve for coverage. However, we did not see any significant differences in execution time for lower depth limits.
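A minimal sketch of the core idea, assuming traces are recorded as call stacks per test: truncating each stack at a depth limit shrinks the coverage matrix at some cost in accuracy. The data layout and names are illustrative, not taken from the thesis's tool.

```python
def build_coverage_matrix(test_traces, depth_limit):
    """Build a test-to-method coverage matrix from recorded call stacks,
    keeping only frames within the depth limit. `test_traces` maps a test
    name to a list of call stacks (outermost frame first)."""
    methods = sorted({m for stacks in test_traces.values()
                        for stack in stacks
                        for m in stack[:depth_limit]})
    matrix = {}
    for test, stacks in test_traces.items():
        covered = {m for stack in stacks for m in stack[:depth_limit]}
        matrix[test] = [m in covered for m in methods]
    return methods, matrix

traces = {
    "testA": [["main", "parse", "validate"]],
    "testB": [["main", "render", "draw", "clip"]],
}
# With depth_limit=2 the deeper frames (validate, draw, clip) drop out,
# shrinking the matrix at some cost in coverage accuracy.
methods, matrix = build_coverage_matrix(traces, depth_limit=2)
```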
25

Generate Test Selection Statistics With Automated Selective Mutation

Gamini, Devi charan January 2020 (has links)
Context. Software systems are constantly updated to fix faults and to improve and introduce features. Software testing is the most commonly used method for validating the quality of software systems, and agile processes help to automate the testing process. Regression testing is the main strategy used, but growing codebases make it increasingly time consuming; making regression testing time efficient for continuous integration is therefore the new goal. Objectives. This thesis focuses on correlating code packages to test packages by automating mutation to inject errors into C code. Running regression tests against the mutated code establishes the correlations, and the correlation data for a particular modified code package can then be used for test selection. This method is more effective than traditional test selection. To reduce mutation costs, the selective mutation method was chosen. Demonstrating a proof of concept supports the proposed hypothesis. Methods. An experiment answers the research questions; testing the hypothesis on open-source C programs evaluates efficiency. Using this correlation method, testers can reduce testing cycles regardless of the test environment. Results. Experimenting with sample programs using automated selective mutation, the efficiency of correlating tests to code packages was 93.4%. Conclusions. This research concludes that automated mutation to obtain test selection statistics can be adopted. Although it is difficult for mutants to fail every test case, assuming that this method works with 93.4% efficient test failure on average, it can reduce the test suite size to 5% for a particular modified code package.
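A sketch of the correlation step under stated assumptions: `run_suite` is a hypothetical function that builds the mutated program and returns the names of failing tests; the thesis's actual tooling is not shown in the abstract.

```python
def correlate_tests_to_packages(mutants_by_package, run_suite):
    """Correlate test cases to code packages by injecting mutants into
    each package and recording which tests fail against the mutated
    build. `run_suite(mutant)` returns the set of failing test names."""
    correlation = {}
    for package, mutants in mutants_by_package.items():
        failing = set()
        for mutant in mutants:
            failing |= run_suite(mutant)
        correlation[package] = failing
    return correlation

# When a package is later modified, only its correlated tests are selected:
# selected_tests = correlation["pkg/io"]
```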
26

Understanding Test-Artifact Quality in Software Engineering

Tran, Huynh Khanh Vi January 2022 (has links)
Context: The core of software testing is test artifacts, i.e., test cases, test suites, test scripts, test code, test specifications, and natural language tests. Hence, the quality of test artifacts can negatively or positively impact the reliability of the software testing process. Several empirical studies and secondary studies have investigated test artifact quality. Nevertheless, little is known about how practitioners themselves perceive test artifact quality, and the evidence on test artifact quality in the literature has not been synthesized in one place. Objective: This thesis aims to identify and synthesize the knowledge on test artifact quality from both academia and industry. Hence, our objectives are: (1) to understand practitioners' perspectives on test artifact quality; (2) to investigate how test artifact quality has been characterized in the literature; (3) to increase the reliability of the research method for conducting systematic literature reviews (SLR) in software engineering. Method: In this thesis, we conducted an interview-based exploratory study and a tertiary study to achieve the first two objectives. We used the tertiary study as a case and referred to related observations from other researchers to achieve the last objective. Results: We provide two quality models based on the findings of the interview-based and tertiary studies. The two models were synthesized and combined to provide a broader view of test artifact quality. Context information that can be used to characterize the environment in which test artifact quality is investigated was also aggregated from these studies' findings. Based on our experience constructing and validating automated search results using a quality gold standard (QGS) in the tertiary study, we provide recommendations for QGS construction and propose an extension to the current search validation approach. Conclusions: The context information and the combined quality model provide a comprehensive view of test artifact quality. Researchers can use the quality model to develop guidelines, templates for designing new test artifacts, or assessment tools for evaluating existing test artifacts. The model can also serve as a guideline for practitioners searching for test artifact quality information, i.e., definitions of the quality attributes and measurements. For future work, we aim to investigate how to improve relevant test artifact quality attributes that are challenging to deal with.
27

Enhancing System Reliability using Abstraction and Efficient Logical Computation

Kutsuna, Takuro 24 September 2015 (has links)
Kyoto University / Doctor of Informatics (Dissertation No. 甲第19335号), Graduate School of Informatics, Department of Intelligence Science and Technology / Examiners: Prof. Akihiro Yamamoto, Prof. Hisashi Kashima, Prof. Atsushi Igarashi
28

Test Case Selection in Continuous Integration Using Reinforcement Learning with Linear Function Approximator

Salman, Younus January 2023 (has links)
Continuous Integration (CI) has become an essential practice in software development, allowing teams to integrate code changes frequently and detect issues early. However, selecting the right test cases for CI remains a challenge, as it requires balancing the need for thorough testing with the minimization of execution time and resources. This study proposes a practical and lightweight approach that leverages Reinforcement Learning with a linear function approximator for test case selection in CI. Several models are created, each focusing on a different feature set. The proposed method aims to optimize the selection of test cases by learning from past CI outcomes (both the historical data of the test cases and the coverage data of the source code) and by dynamically adapting the models to new test cases and modified source code. Through experimentation and comparison between the models, the study demonstrates which feature set is optimal and efficient. The results indicate that Reinforcement Learning with a linear function approximator using coverage information can effectively assist in selecting test cases in CI, leading to enhanced software quality and development efficiency.
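For readers unfamiliar with linear function approximation, the sketch below shows the general scheme: each test case is described by a feature vector, scored by a learned weight vector, and the weights are updated from CI outcomes. The features, learning rate, and reward definition are assumptions, not the thesis's exact design.

```python
import numpy as np

class LinearTestSelector:
    """Linear function approximator scoring test cases for selection."""

    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)
        self.lr = lr

    def score(self, features):
        # Estimated value of running this test: w . x
        return float(self.w @ features)

    def update(self, features, reward):
        # Gradient step pulling the estimate toward the observed reward,
        # e.g. reward = 1.0 if the test failed (found a fault), else 0.0.
        error = reward - self.score(features)
        self.w += self.lr * error * features

# Usage: rank tests by score, run the top-k, then update from CI outcomes.
selector = LinearTestSelector(n_features=3)
x = np.array([0.8, 0.1, 1.0])  # e.g. recent-failure rate, age, coverage overlap
selector.update(x, reward=1.0)
```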
29

A comparison between random testing and adaptive random testing

Johansson, Nicklas, Aareskjold, Ola January 2023 (has links)
Software testing is essential for quality assurance, and automated techniques such as random testing and adaptive random testing are cost-effective compared to others. Adaptive random testing seeks to enhance random testing, and there is a conception that adaptive random testing should always replace random testing. Our research question investigates this conception by addressing a gap in the literature: a comparison between the two techniques in terms of two key metrics, defect detection efficiency and test case generation time. Defect detection efficiency is the number of defects detected divided by the number of defects in the system, multiplied by one hundred. Test case generation time is the time it takes to generate all of the test case inputs. These metrics were chosen as measurements of the techniques' effectiveness and efficiency, respectively. To address the research question, we employ a quantitative experiment comparing the performance of random testing and adaptive random testing solely on these two metrics. The comparison is performed by implementing and testing both algorithms on eight error-seeded numerical programs and measuring the results. Adaptive random testing had an average defect detection efficiency of 21.59% and an average test case generation time of 35.37 ms, while random testing had an average defect detection efficiency of 22.28% and an average test case generation time of 0.26 ms. These results might contribute to disproving the conception that adaptive random testing should always replace random testing, as random testing evidently performed better on both measured metrics.
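The two metrics and the extra work ART performs per generated input can be sketched as follows. The ART variant shown is the classic fixed-size-candidate-set scheme for a one-dimensional numeric input; the abstract does not state which variant the thesis implemented.

```python
import random

def defect_detection_efficiency(defects_found, defects_in_system):
    """DDE as defined above: found / present * 100."""
    return defects_found / defects_in_system * 100

def adaptive_random_inputs(n, candidates_per_round=10, lo=0.0, hi=1.0):
    """Fixed-size-candidate-set ART: each round, generate several random
    candidates and keep the one farthest from all previously chosen
    inputs, spreading test inputs across the input domain."""
    chosen = [random.uniform(lo, hi)]
    while len(chosen) < n:
        candidates = [random.uniform(lo, hi) for _ in range(candidates_per_round)]
        best = max(candidates, key=lambda c: min(abs(c - t) for t in chosen))
        chosen.append(best)
    return chosen

# The extra distance computations explain ART's much higher generation
# time (35.37 ms vs 0.26 ms on average in this study).
inputs = adaptive_random_inputs(100)
```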
30

Quantitative Analysis of Exploration Schedules for Symbolic Execution

Kaiser, Christoph January 2017 (has links)
Due to the complexity of software, manual testing is not enough to cover all of its relevant behaviours. A different approach to this problem is Symbolic Execution, a software testing technique that tests all possible inputs of a program in the hope of finding all bugs. Due to the often exponential increase in possible program paths, Symbolic Execution usually cannot exhaustively test a program. To nevertheless cover the most important or error-prone areas of a program, search strategies that prioritize these areas are used. Such search strategies navigate the program execution tree, analysing which paths seem interesting enough to execute and which to prune. These strategies are typically grouped into two categories: general-purpose searchers, which have no specific target beyond covering the whole program, and targeted searchers, which can be directed towards specific areas of interest. To analyse how different search strategies in Symbolic Execution affect the finding of errors, and how they can be combined to improve the overall outcome, the experiments conducted consist of several different searchers and combinations of them, each run on the same set of test targets. This set of test targets contains, amongst others, one of the most heavily tested sets of open-source tools, the GNU Coreutils. With these, the different strategies are compared in distinct categories such as the total number of errors found and the percentage of covered code. The results of this thesis show the potential of targeted searchers, exemplified by an implementation of the Pathscore-Relevance strategy. Further, the results obtained from the conducted experiments endorse the use of combinations of search strategies. It is also shown that even simple combinations of strategies can be highly effective; for example, interleaving strategies can provide good results even if the underlying searchers do not perform well by themselves.
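As an example of how simple an effective combination can be, the sketch below interleaves two searchers round-robin, so each state selection is delegated to the next strategy in turn (KLEE ships a similar interleaved searcher). The interfaces here are illustrative, not KLEE's actual API.

```python
import random
from itertools import cycle

class RandomStateSearcher:
    """Pick a pending execution state uniformly at random."""
    def select_state(self, states):
        return random.choice(states)

class DepthFirstSearcher:
    """Pick the most recently added (deepest) state."""
    def select_state(self, states):
        return states[-1]

class InterleavingSearcher:
    """Round-robin combination: each selection is delegated to the
    next underlying searcher, so no single heuristic dominates."""
    def __init__(self, searchers):
        self._turn = cycle(searchers)

    def select_state(self, states):
        return next(self._turn).select_state(states)

searcher = InterleavingSearcher([RandomStateSearcher(), DepthFirstSearcher()])
# Example: alternating picks over a list of hypothetical state objects.
states = ["s0", "s1", "s2"]
first = searcher.select_state(states)   # random choice
second = searcher.select_state(states)  # deepest state ("s2")
```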
