1 |
TRACEABILITY OF REQUIREMENTS IN SCRUM SOFTWARE DEVELOPMENT PROCESS. Kodali, Manvisha January 2015 (has links)
Incomplete and incorrect requirements might lead to sub-optimal software products which do not satisfy customers’ needs and expectations. Software verification and validation is one way to ensure that the software product meets the customers’ expectations while delivering the correct functionality. In this direction, the establishment and maintenance of traceability links between requirements and test cases has been identified as a promising technique towards more efficient software verification and validation. Over the last decades, several methodologies supporting traceability have been proposed, most of which realize traceability by implicitly exploiting existing documents and relations. Nevertheless, parts of the industry are reluctant to implement traceability within their software development processes due to the intrinsic overhead it brings. This is especially true for light-weight, code-centric software development processes, such as Scrum, which focus on coding activities and try to minimize administrative overhead. In fact, the resulting lack of documentation ends up hampering the establishment of the trace links by which traceability is realized. In this thesis, we propose a methodology which integrates traceability within a Scrum development process while minimizing the development effort and administrative overhead. More precisely, we i) investigate the state of the art of traceability in a Scrum development process, ii) propose a methodology for supporting traceability in Scrum, and iii) evaluate this methodology on an industrial case study provided by Westermo.
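To illustrate the kind of requirement-to-test-case trace links the abstract refers to, the following minimal sketch links Scrum user stories to test cases and reports stories without coverage. It is not taken from the thesis; the story IDs, test names, and data structure are hypothetical.

```python
# Hypothetical sketch: requirement-to-test-case trace links in a Scrum backlog.
# Story IDs, test names, and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class UserStory:
    story_id: str
    summary: str
    test_cases: list = field(default_factory=list)  # IDs of linked test cases

backlog = [
    UserStory("US-12", "Operator can reset a switch port", ["TC-34", "TC-35"]),
    UserStory("US-13", "Firmware upgrade is logged", []),  # no trace link yet
]

def untested_stories(stories):
    """Return user stories without any linked test case."""
    return [s for s in stories if not s.test_cases]

for story in untested_stories(backlog):
    print(f"{story.story_id} ({story.summary}) has no linked test cases")
```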
|
2 |
TEST CASES REDUCTION IN SOFTWARE PRODUCT LINE USING REGRESSION TESTING. 28 March 2012 (has links)
Application engineering is the field in which software organizations develop products from a predefined software product line. The time and cost allotted to come up with a new product variant are limited, and a lack of systematic support in testing leads to redundancy. Redundancy in this context appears as test cases that do not contribute towards fault detection, and it increases the testing effort. This thesis proposes a framework to reduce the testing effort by avoiding testing redundancy. Feature Model diagrams have been constructed from the assumed specification requirements. These Feature Model diagrams have been used to derive test models such as Object Model diagrams and State Chart diagrams. Unit testing and system testing have been performed on the test models to obtain test cases, which have been stored in a repository. Regression testing has been applied to these test cases to classify them into Reusable, Re-testable, and Obsolete.
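The classification step can be pictured with a small sketch. The rule below (Obsolete if the exercised feature was removed, Re-testable if it changed, Reusable otherwise) and the feature names are assumptions for illustration, not the thesis's actual framework.

```python
# Hypothetical sketch of classifying existing test cases for a new product variant.
def classify(test_cases, changed_features, removed_features):
    """Map each test case ID to Reusable, Re-testable, or Obsolete
    based on the feature it exercises (illustrative rule only)."""
    result = {}
    for tc_id, feature in test_cases.items():
        if feature in removed_features:
            result[tc_id] = "Obsolete"
        elif feature in changed_features:
            result[tc_id] = "Re-testable"
        else:
            result[tc_id] = "Reusable"
    return result

test_cases = {"TC-1": "payment", "TC-2": "legacy-export", "TC-3": "login"}
print(classify(test_cases, changed_features={"payment"}, removed_features={"legacy-export"}))
# {'TC-1': 'Re-testable', 'TC-2': 'Obsolete', 'TC-3': 'Reusable'}
```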
|
3 |
ucsCNL: A controlled natural language for use case specifications. HORI, Érica Aguiar Andrade 31 January 2010 (has links)
Most companies use free natural language to document software, from its requirements to the use cases and tests used to verify the final product. Since the analysis, design, implementation, and testing phases of a system depend essentially on this documentation, the quality of these texts must be ensured from the start. However, texts written in natural language are not always precise, owing to (lexical and structural) ambiguity, and can therefore allow different interpretations. One way to minimize this problem is to use a Controlled Natural Language: a subset of a natural language that uses a vocabulary restricted to a particular domain, together with grammatical rules that guide sentence construction, reducing semantic ambiguity and promoting standardization and precision of the texts. This work, in the area of Software Testing, presents ucsCNL (Use Case Specification CNL), a Controlled Natural Language for writing use case specifications in the mobile device domain. ucsCNL was integrated into TaRGeT (Test and Requirements Generation Tool), a tool for automatically generating functional test cases from use case scenarios written in English. ucsCNL provides an environment for generating clearer test cases with reduced ambiguity, directly improving test quality and tester productivity. ucsCNL is already in use and has achieved satisfactory results.
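To convey the general idea of a controlled-language check (this is not ucsCNL itself; the verb list and sentence pattern below are invented for the example), a use case step can be validated against a restricted vocabulary and a fixed actor-verb-object pattern:

```python
# Hypothetical sketch of a controlled-language check for use case steps.
# ucsCNL's real vocabulary and grammar rules are not reproduced here.
import re

ALLOWED_VERBS = {"selects", "presses", "displays", "sends", "receives"}
STEP_PATTERN = re.compile(r"^The (user|phone) (\w+) (.+)\.$")

def check_step(step: str) -> list:
    """Return a list of problems found in a single use case step."""
    match = STEP_PATTERN.match(step)
    if not match:
        return [f"step does not follow the 'The <actor> <verb> <object>.' pattern: {step!r}"]
    verb = match.group(2)
    if verb not in ALLOWED_VERBS:
        return [f"verb {verb!r} is not in the controlled vocabulary"]
    return []

print(check_step("The user selects the Messages menu."))     # []
print(check_step("The user might open the Messages menu."))  # vocabulary violation
```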
|
4 |
Understanding Test Case Design: An Exploratory Survey of the Testers’ Routine and Behavior. Esber, Jameel January 2022 (has links)
Testing is an important component of every software project, since it enables the delivery of reliable solutions that meet the needs of end-users. Valuable testing is reflected in the results of test cases, which may provide insight into the presence of software system flaws. Testers create test cases because they allow them to check and ensure that all user requirements, and the situations users can go through, are fully covered. In addition, test cases enable design problems to be found early, allowing testers to find solutions as soon as possible. The main goal of this thesis is to understand the routine and behavior of human testers when performing testing and to gain a better understanding of the software testing field. This thesis also aims to discover the challenges testers face when testing. The report presents the results of a combined qualitative and quantitative survey answered by 38 experienced software testers and developers. The survey explores testers' cognitive processes when performing testing by investigating the knowledge they bring, the activities they perform, and the challenges they face in their routine. By analyzing the survey's results, we identified several main themes (related to knowledge, activities, and challenges) and gained more knowledge about the problem-solving process cycle, from understanding the test goal and planning the test strategy to executing tests and checking the test results. We report a more refined test design model. The results of this thesis suggest that testers use several sources of knowledge in their routine when creating and executing test cases, such as documentation, code, and their own experience. In addition, we found that the main activities of testers are related to specific tasks such as comprehending the software requirements, learning as much as possible about the software, and discussing the results with the development team or other testers to get feedback about the outcomes. Finally, testers face many challenges in their routine when understanding, planning, executing, and checking tests: e.g., incomplete or ambiguous requirements, complex or highly configurable scenarios that are hard to test, lack of time and hard deadlines, and unstable environments.
|
5 |
Fit Refactoring-Improving the Quality of Fit Acceptance Test. Liu, Xu 28 June 2007 (has links)
No description available.
|
6 |
Enhancing Software Security through Modeling Attacker Profiles. Hussein, Nesrin 21 September 2018 (has links)
No description available.
|
7 |
Creating, Validating, and Using Synthetic Power Flow Cases: A Statistical Approach to Power System Analysis. January 2019 (has links)
abstract: Synthetic power system test cases offer a wealth of new data for research and development purposes, as well as an avenue through which new kinds of analyses and questions can be examined. This work provides both a methodology for creating and validating synthetic test cases and a few use cases showing how access to synthetic data enables otherwise impossible analyses.
First, the question of how synthetic cases may be generated in an automatic manner, and how synthetic samples should be validated to assess whether they are sufficiently "real", is considered. Transmission and distribution levels are treated separately, due to the different nature of the two systems. Distribution systems are constructed by sampling distributions observed in a dataset from the Netherlands. For transmission systems, only first-order statistics, such as generator limits or line ratings, are sampled statistically. The task of constructing an optimal power flow case from the sample sets is left to an optimization problem built on top of the optimal power flow formulation.
Secondly, attention is turned to some examples where synthetic models are used to inform analysis and modeling tasks. Co-simulation of transmission and multiple distribution systems is considered, where distribution feeders are allowed to couple transmission substations. Next, a distribution power flow method is parametrized to better account for losses. Numerical values for the parametrization can be statistically supported thanks to the ability to generate thousands of feeders on command. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
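As a rough illustration of the statistical sampling step for first-order quantities (the distribution families and parameter values below are assumptions for the example; the dissertation's actual fitting procedure and datasets are not reproduced), line ratings and generator limits can be drawn from fitted distributions and handed to a downstream optimization:

```python
# Hypothetical sketch: sampling first-order statistics for a synthetic transmission case.
# Distribution choices and parameters are illustrative, not fitted to real data.
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_case(n_lines: int, n_gens: int) -> dict:
    """Draw line ratings (MVA) and generator limits (MW) from assumed distributions."""
    line_ratings = rng.lognormal(mean=5.0, sigma=0.5, size=n_lines)
    gen_limits = rng.gamma(shape=2.0, scale=150.0, size=n_gens)
    return {"line_ratings_mva": line_ratings, "gen_pmax_mw": gen_limits}

case = sample_case(n_lines=300, n_gens=40)
print(round(case["line_ratings_mva"].mean(), 1), round(case["gen_pmax_mw"].max(), 1))
# Feasibility of the assembled case (e.g., via an optimal power flow) is checked downstream.
```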
|
8 |
Increasing Security and Trust in HDL IP through Evolutionary Computing. King, Bayley 23 August 2022 (has links)
No description available.
|
9 |
A Quality Criteria Based Evaluation of Topic Models. Sathi, Veer Reddy, Ramanujapura, Jai Simha January 2016 (has links)
Context. Software testing is the process in which a particular software product or system is executed in order to find bugs or issues that may otherwise degrade its performance. Software testing is usually done based on pre-defined test cases. A test case can be defined as a set of terms or conditions used by software testers to determine whether a particular system under test operates as it is supposed to. However, in numerous situations there can be so many test cases that executing each and every one is practically impossible, as there may be many constraints. This forces testers to prioritize the functions to be tested. This is where the ability of topic models can be exploited. Topic models are unsupervised machine learning algorithms that can explore large corpora of data and classify them by identifying the hidden thematic structure in those corpora. Using topic models for test case prioritization can save a lot of time and resources. Objectives. In our study, we provide an overview of the research that has been done in relation to topic models. We want to uncover the various quality criteria, evaluation methods, and metrics that can be used to evaluate topic models. Furthermore, we would also like to compare the performance of two topic models that are optimized for different quality criteria on a particular interpretability task, and thereby determine the topic model that produces the best results for that task. Methods. A systematic mapping study was performed to gain an overview of the previous research on the evaluation of topic models. The mapping study focused on identifying quality criteria, evaluation methods, and metrics that have been used to evaluate topic models. The results of the mapping study were then used to identify the most used quality criteria. The evaluation methods related to those criteria were then used to generate two optimized topic models. An experiment was conducted in which the topics generated by those two topic models were provided to a group of 20 subjects. The task was designed to evaluate the interpretability of the generated topics. The performance of the two topic models was then compared using Precision, Recall, and F-measure. Results. Based on the results obtained from the mapping study, Latent Dirichlet Allocation (LDA) was found to be the most widely used topic model. Two LDA topic models were created, one optimized for the quality criterion Generalizability (TG) and one for Interpretability (TI), using the Perplexity and Point-wise Mutual Information (PMI) measures respectively. For the selected metrics, TI showed better performance than TG in Precision and F-measure, while the performance of the two was comparable in Recall. The total run time of TI was also found to be significantly higher than that of TG: 46 hours and 35 minutes for TI, versus 3 hours and 30 minutes for TG. Conclusions. Looking at the F-measure, it can be concluded that the interpretability topic model (TI) performs better than the generalizability topic model (TG). However, while TI performed better in precision, recall was comparable. Furthermore, the computational cost to create TI is significantly higher than for TG.
Hence, we conclude that the selection of the topic model optimization should be based on the aim of the task the model is used for. If the task requires high interpretability of the model and precision is important, such as for the prioritization of test cases based on content, then TI would be the right choice, provided time is not a limiting factor. However, if the task aims at generating topics that provide a basic understanding of the concepts (i.e., interpretability is not a high priority), then TG is the most suitable choice, making it better suited to time-critical tasks.
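The PMI measure used to optimize for interpretability can be sketched as follows. This is a toy, from-scratch illustration under assumed document co-occurrence probabilities; the thesis's exact coherence formulation and reference corpora are not reproduced.

```python
# Hypothetical sketch: average pairwise PMI of a topic's top words over a toy corpus.
import math
from itertools import combinations

def topic_pmi(top_words, documents):
    """Average PMI over word pairs, estimated from document co-occurrence."""
    n_docs = len(documents)
    doc_sets = [set(d) for d in documents]

    def p(*words):
        # Fraction of documents containing all the given words.
        return sum(all(w in ds for w in words) for ds in doc_sets) / n_docs

    scores = []
    for w1, w2 in combinations(top_words, 2):
        joint = p(w1, w2)
        if joint > 0:
            scores.append(math.log(joint / (p(w1) * p(w2))))
    return sum(scores) / len(scores) if scores else float("nan")

docs = [["test", "case", "priority"], ["test", "bug"], ["topic", "model", "test"]]
print(round(topic_pmi(["case", "priority"], docs), 3))  # words that co-occur score > 0
```

Higher average PMI indicates that a topic's top words tend to co-occur, which is used as a proxy for how interpretable the topic is to a human reader.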
|
10 |
Testing in Software Product Lines. Odia, Osaretin Edwin January 2007 (has links)
This thesis presents research aimed at investigating the different activities involved in the software product line testing process and possible improvements towards developing high-quality software product lines at reduced cost and time. The research was performed using the systematic review procedures of Kitchenham. The reviews carried out in this research cover several areas relating to software product line testing. The reasons for performing a systematic review in this research are to summarize the existing evidence covering testing in a software product line context, to identify gaps in current research, and to suggest areas for further research. The contribution of this thesis is research aimed at revealing the different activities, issues, and challenges in software product line testing. The research into the different activities in software product lines led to the proposed SPLIT model for software product line testing. The model helps to clarify the steps and activities involved in the software product line testing process and provides an easy-to-follow map for testers and managers in software product line development organizations. The results concern mainly how testing in software product lines can be improved towards achieving software product line goals. The basic contribution is the proposed model for product line testing and an investigation into, and possible improvements in, issues related to software product line testing activities. / The main purpose of the research as presented in this thesis is to present a clear picture of testing in the context of software product lines, which is quite different from testing a single product. The focus of this thesis is specifically the different steps and activities involved in software product line testing and possible improvements in software product line testing activities and issues, towards achieving the goal of developing high-quality software product lines at reduced cost and time. For software product lines to achieve their goals, there should be a comprehensive set of testing activities in software product line development. The development activities, from performing analyses and creating designs to integrating programs in a software product line context, as well as component testing and tool support for software product line testing, should be taken into consideration.
|