1

On Test Design

Eldh, Sigrid January 2011 (has links)
Testing is the dominating method for quality assurance of industrial software. Despite its importance and the vast amount of resources invested, there are surprisingly limited efforts spent on testing research, and the few industrially applicable results that emerge are rarely adopted by industry. At the same time, the software industry is in dire need of better support for testing its software within the limited time available. Our aim is to provide a better understanding of how test cases are created and applied, and what factors really impact the quality of the actual test. The plethora of test design techniques (TDTs) available makes decisions on how to test a difficult choice. Which techniques should be chosen and where in the software should they be applied? Are there any particular benefits of using a specific TDT? Which techniques are effective? Which can you automate? What is the most beneficial way to do a systematic test of a system? This thesis attempts to answer some of these questions by providing a set of guidelines for test design, including concrete suggestions for how to improve testing of industrial software systems, thereby contributing to an improved overall system quality. The guidelines are based on ten studies on the understanding and use of TDTs. The studies have been performed in a variety of system domains and consider several different aspects of software test. For example, we have investigated some of the common mistakes in creating test cases that can lead to poor and costly testing. We have also compared the effectiveness of different TDTs for different types of systems. One of the key factors for these comparisons is a profound understanding of faults and their propagation in different systems. Furthermore, we introduce a taxonomy for TDTs based on their effectiveness (fault finding ability), efficiency (fault finding rate), and applicability. 
Our goal is to provide an improved basis for making well-founded decisions regarding software testing, together with a better understanding of the complex process of test design and test case writing. Our guidelines are expected to lead to improvements in testing of complex industrial software, as well as to higher product quality and shorter time to market.
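The choice among test design techniques (TDTs) discussed above can be made concrete with a small sketch. The example below is illustrative only and not taken from the thesis: it applies two classic TDTs, equivalence partitioning and boundary value analysis, to a hypothetical `valid_age` check (valid range assumed to be 18–65), showing how each technique yields a different set of test inputs.

```python
# Illustrative sketch of two classic test design techniques (TDTs),
# applied to a hypothetical validator accepting ages 18..65 inclusive.
# The function and its bounds are assumptions made for this example.

def valid_age(age: int) -> bool:
    """Accept ages in the closed interval [18, 65]."""
    return 18 <= age <= 65

def equivalence_partition_inputs(low: int, high: int) -> list[int]:
    # One representative per partition: below, inside, and above the valid range.
    return [low - 10, (low + high) // 2, high + 10]

def boundary_value_inputs(low: int, high: int) -> list[int]:
    # Values at and immediately around each boundary of the valid range.
    return [low - 1, low, low + 1, high - 1, high, high + 1]

ep_cases = equivalence_partition_inputs(18, 65)
bva_cases = boundary_value_inputs(18, 65)

ep_results = {age: valid_age(age) for age in ep_cases}
bva_results = {age: valid_age(age) for age in bva_cases}
```

Even on this toy example the two techniques probe different risks: equivalence partitioning checks that each behavioural class is handled, while boundary value analysis targets the off-by-one faults that cluster at range edges.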
2

Metrics in Software Test Planning and Test Design Processes

Afzal, Wasif January 2007 (has links)
Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measurement of these attributes helps to make the characteristics of, and relationships between, the attributes clearer, which in turn supports informed decision making. The field of software engineering is affected by infrequent, incomplete, and inconsistent measurements. Software testing is an integral part of software development and provides opportunities for measuring process attributes; such measurement gives management better insight into the software testing process. The aim of this thesis is to investigate the metric support for the software test planning and test design processes. The study comprises an extensive literature study and follows a methodical approach consisting of two steps. The first step analyses the key phases in the software testing life cycle, the inputs required for starting the software test planning and design processes, and the metrics indicating the end of the software test planning and test design processes. After establishing a basic understanding of the related concepts, the second step identifies the attributes of the software test planning and test design processes, including metric support for each identified attribute. The results of the literature survey showed that there are a number of different measurable attributes for the software test planning and test design processes. The study partitioned these attributes into multiple categories for each process, and for each attribute, different existing measurements were studied. A consolidation of these measurements is presented in this thesis, intended to give management an opportunity to consider improvements to these processes.
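Two of the simplest process metrics in this spirit can be sketched in a few lines. The metric names and formulas below are illustrative assumptions, not the specific measurements catalogued by the thesis: test-case preparation progress and requirements coverage during test design.

```python
# A minimal sketch (names and formulas are illustrative assumptions, not
# the thesis's consolidated metrics) of two simple metrics for the test
# planning and test design processes.

def preparation_progress(designed: int, planned: int) -> float:
    """Fraction of planned test cases that have been designed so far."""
    if planned <= 0:
        raise ValueError("planned must be positive")
    return designed / planned

def requirements_coverage(traced: set[str], all_reqs: set[str]) -> float:
    """Fraction of requirements traced to at least one designed test case."""
    if not all_reqs:
        raise ValueError("requirement set must be non-empty")
    return len(traced & all_reqs) / len(all_reqs)

# Hypothetical snapshot of a test design effort.
progress = preparation_progress(designed=30, planned=40)
coverage = requirements_coverage({"R1", "R2"}, {"R1", "R2", "R3", "R4"})
```

Tracked over time, even such coarse ratios can signal whether test design is keeping pace with the plan or leaving requirements untested.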
3

Optimization of the Test Design Process

Glodenytė, Romualda 28 June 2010 (has links)
This master's thesis examines the entire testing process: its management, problem solving, and improvement, with the optimization of the test design process explained in detail. The optimization methods best matching the company's needs are selected, analysed in greater detail, and their practical application is described; examples, their analysis, and their practical value are presented. The aim is to analyse optimization methods for the test design process and to apply them in the company's practical work. The thesis consists of eight parts: introduction, testing process analysis, testing process planning and management, testing process problems and improvement, test design process optimization, conclusions, list of references, and appendixes.
4

Towards more effective testing of communications-critical large scale systems

Nabulsi, Mohammad January 2014 (has links)
None of today’s large scale systems could function without the reliable availability of a varied range of network communications capabilities. Whilst software, hardware, and communications technologies have been advancing throughout the past two decades, the methods commonly used by industry for testing large scale systems which incorporate critical communications interfaces have not kept pace. This thesis argues for the need for a specifically tailored framework to achieve effective testing of communications-critical large scale systems (CCLSS). The thesis first discusses how generic test approaches lead to inefficient and costly test activities in industry. It then presents the form and features of an alternative CCLSS domain-specific test framework, develops its ideas further into a detailed and structured test approach for one of its layers, and provides a detailed example of how the framework can be applied using a real-life case study. The thesis concludes with a qualitative as well as a simulation-based evaluation of the framework’s benefits observed during the case study, and an evaluation by expert external participants considering whether similar benefits can be realised if the framework is adopted for testing other comparable systems. Requirements data from a second CCLSS is included in the evaluation by external participants as a second, smaller case study.
5

Rethinking Vocabulary Size Tests: Frequency Versus Item Difficulty

Hashimoto, Brett James 01 June 2016 (has links)
For decades, vocabulary size tests have been built on the idea that if a test-taker knows enough words at a given level of frequency, based on a list from a corpus, they will also know other words of approximately that frequency as well as all words that are more frequent. However, many vocabulary size tests are based on corpora that are up to 70 years old and may be ill-suited for these tests. Based on these potentially problematic areas, the following research questions were asked. First, to what degree would a vocabulary size test based on a large, contemporary corpus be reliable and valid? Second, would it be more reliable and valid than previously designed vocabulary size tests? Third, do words across 1,000-word frequency bands vary in their item difficulty? To answer these research questions, 403 ESL learners took the Vocabulary of American English Size Test (VAST), based on a word list generated from the Corpus of Contemporary American English (COCA). This thesis shows that the COCA word list may be better suited for measuring vocabulary size than the lists used in previous vocabulary size assessments. As a 450-million-word corpus, COCA far surpasses any corpus used in previously designed vocabulary size tests in terms of size, balance, and representativeness. The vocabulary size test built from the COCA list was both highly valid and highly reliable according to a Rasch-based analysis: Rasch person reliability and separation were calculated to be 0.96 and 4.62, respectively. However, the most significant finding of this thesis is that frequency ranking in a word list is not as good a predictor of item difficulty in a vocabulary size assessment as researchers had perhaps previously assumed.
A Pearson correlation between frequency ranking in the COCA list and item difficulty for 501 items taken from the 5,000 most frequent words was 0.474 (r^2 = 0.225), meaning that frequency rank accounted for only 22.5% of the variability in item difficulty. The correlation decreased greatly when item difficulty was correlated against 1,000-word bands, to a weak r = 0.306 (r^2 = 0.094), meaning that 1,000-word frequency bands account for only 9.4% of the variance. Because frequency is not a highly accurate predictor of item difficulty, it is important to reconsider how vocabulary size tests are designed.
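The relationship quoted above between r and variance explained (r = 0.474 giving r^2 ≈ 0.225) can be checked with a few lines of code. The data in the sketch below are synthetic, not the thesis's item statistics; only the reported r value is taken from the abstract.

```python
# A small sketch verifying the r / variance-explained relationship cited
# above; the sample data are invented, only r_reported comes from the text.
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Plain Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# r^2 is the proportion of variance in item difficulty explained by
# frequency rank; an r of 0.474 explains only about 22.5% of the variance.
r_reported = 0.474
variance_explained = r_reported ** 2
```

Squaring the correlation is what turns a seemingly respectable r into the modest "22.5% of the variability" figure: the remaining three quarters of item-difficulty variance is unexplained by frequency rank.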
6

Quality of Test Design in Test Driven Development

Čaušević, Adnan January 2013 (has links)
One of the most emphasised software testing activities in an Agile environment is the use of the Test Driven Development (TDD) approach. TDD is a development activity in which test cases are created by developers before writing the code, for the purpose of guiding the actual development process. In other words, test cases created when following TDD can be considered a by-product of software development. However, TDD is not fully adopted by industry, as indicated by respondents from our industrial survey who pointed out that TDD is the most preferred but least practised activity. Our further research identified seven potentially limiting factors for the industrial adoption of TDD, of which one prominent factor was a lack of developers’ testing skills. We subsequently defined and categorised appropriate quality attributes which describe the quality of test case design when following TDD. Through a number of empirical studies, we clearly established the effect of “positive test bias”, where participants focused mainly on the functionality while generating test cases. In other words, there were fewer “negative test cases” exercising the system beyond the specified functionality, which is an important requirement for high-reliability systems. On average, in our studies, around 70% of the test cases created by participants were positive while only 30% were negative. However, when measuring the defect-detecting ability of those sets of test cases, the opposite ratio was observed: the defect-detecting ability of negative test cases was above 70%, while positive test cases contributed only 30%. We propose the TDDHQ concept as an approach for achieving higher-quality testing in TDD by using combinations of quality improvement aspects and test design techniques, to facilitate greater consideration of unspecified requirements during development and thus minimise the impact of the positive test bias potentially inherent in TDD.
In this way, developers do not focus only on verifying functionality; they can also improve security, robustness, performance, and many other quality aspects of the given software product. An additional empirical study evaluating this method showed a noticeable improvement in the quality of test cases created by developers using the TDDHQ concept. Our research findings are expected to pave the way for further enhancements to the way TDD is performed, eventually resulting in better adoption by industry.
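The positive/negative distinction above can be illustrated with a toy example. The function and both test cases below are invented for this sketch and are not taken from the thesis's studies.

```python
# A toy illustration of the positive vs. negative test case distinction.
# safe_divide and both tests are hypothetical examples for this sketch.

def safe_divide(a: float, b: float) -> float:
    """Divide a by b; a zero divisor is outside the specified inputs."""
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b

# Positive test case: exercises the specified functionality (the happy path).
def test_divides_valid_operands() -> None:
    assert safe_divide(10, 4) == 2.5

# Negative test case: exercises the system beyond the specified
# functionality, here with an invalid divisor.
def test_rejects_zero_divisor() -> None:
    try:
        safe_divide(1, 0)
    except ZeroDivisionError:
        return
    raise AssertionError("expected ZeroDivisionError")

test_divides_valid_operands()
test_rejects_zero_divisor()
```

A developer focused only on the first kind of test would never learn how the function behaves on invalid input, which is precisely the positive test bias the thesis measures.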
7

Usage of third party components in heterogeneous systems: An empirical study

Raavi, Jaya Krishna January 2016 (has links)
Context: The development of complex systems of systems leads to high development cost, uncontrollable software quality, and low productivity; component-based software development was therefore adopted to improve development effort and cost. Heterogeneous systems are systems of systems that consist of functionally independent sub-systems, with at least one sub-system exhibiting heterogeneity with respect to the others. The context of this study is the usage of third-party components in heterogeneous systems. Objectives: This study investigates the usage of third-party components in heterogeneous systems in order to (1) identify different types of third-party components; (2) identify challenges faced while integrating third-party components into heterogeneous systems; (3) investigate differences in the test design for various third-party components; and (4) identify what practitioners learn from various third-party components. Methods: We conducted a systematic literature review (SLR), following Kitchenham's SLR guidelines, to identify the third-party components used, the challenges faced while integrating them, and the test design techniques applied. Qualitative interviews were conducted to complement and supplement the findings from the SLR and to provide further guidelines for practitioners using third-party components. The studies obtained from the SLR were analysed against the quality criteria using narrative analysis; the interview data were analysed using thematic analysis. Results: 31 primary studies were obtained from the SLR. Three types of third-party components, 12 challenges, and 6 test design techniques were identified from the SLR. From the analysis of the interviews, a total of 21 challenges were identified, which complemented the SLR results.
In addition, the interviews investigated the test design techniques used for testing heterogeneous systems containing third-party components, and yielded 10 recommendations for practitioners using different types of third-party components in product development. Conclusions: According to the interview and SLR results, commercial off-the-shelf (COTS) components and open source software (OSS) were the third-party components mainly used in heterogeneous systems, rather than in-house software. 21 challenges were identified from the SLR and interview results. The test design for heterogeneous systems containing different third-party components varies, due to the unavailability of source code, the dependencies between subsystems, and the competence available for each component. From the analysis of the obtained results, the author also proposes guidelines for practitioners based on the type of third-party components used for product development.
8

STEP: Planning, Generation and Selection of On-line Self-Test for Embedded Processors

Moraes, Marcelo de Souza January 2006 (has links)
Processor-based embedded systems have been widely used in safety-critical applications. In such applications, which range from car brake control to space missions, the whole system must operate reliably over long periods, even in unknown, hostile, or unstable environments. In non-critical applications, system reliability is not a prime requirement, but the final user still expects an error-free product with stable behaviour. Hence the importance of on-line self-testing in current embedded systems. Self-testing is becoming an important challenge due to the increasing complexity of systems combined with their strong constraints. In real-time applications the problem becomes even more complex, since, besides meeting system constraints, one must take the application's timing requirements into consideration. Among the on-line self-testing techniques studied, Software-Based Self-Test (SBST) has distinguished itself by its effectiveness, low cost, and small impact on system constraints and requirements. This work proposes a methodology for the design and implementation of on-line self-test in embedded processors, considering real-time applications. The methodology, called STEP (Self-Test for Embedded Processors), is based on the SBST technique and encompasses the planning, generation, and selection of test routines for the target processor.
The proposed method guarantees periodic self-test execution, at the smallest period allowed by the real-time application, and ensures that all embedded system constraints are met. Furthermore, the provided solution achieves high test quality while helping to optimize the cost of the final system. The methodology is applied to different architectures of Java processors to demonstrate its efficiency. Finally, this work presents a tool that automates the design and implementation of on-line self-test for the studied processors by implementing the STEP methodology.
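The planning-and-selection idea can be sketched abstractly. In the toy model below, every routine name, duration, and coverage figure is invented, and the greedy heuristic is a stand-in, not the thesis's actual STEP algorithm: real SBST routines are processor-specific machine code, and the real method must also guarantee the periodic schedule. The sketch only shows the flavour of choosing test routines to fit the idle time available in each real-time period.

```python
# A toy sketch of SBST test-routine selection under a real-time budget.
# All routines, durations, and coverage values are invented, and the
# greedy heuristic is illustrative only, not the STEP methodology itself.
from dataclasses import dataclass

@dataclass
class TestRoutine:
    name: str
    duration_us: int       # execution time of the self-test routine
    fault_coverage: float  # fraction of modelled faults it detects

def select_routines(routines: list[TestRoutine], slack_us: int) -> list[TestRoutine]:
    """Greedily pick routines by coverage per microsecond within the budget."""
    chosen: list[TestRoutine] = []
    used = 0
    ranked = sorted(routines,
                    key=lambda r: r.fault_coverage / r.duration_us,
                    reverse=True)
    for r in ranked:
        if used + r.duration_us <= slack_us:
            chosen.append(r)
            used += r.duration_us
    return chosen

routines = [
    TestRoutine("alu", 40, 0.50),
    TestRoutine("regfile", 30, 0.30),
    TestRoutine("branch", 50, 0.35),
]
# Suppose each real-time period leaves 80 microseconds of idle time.
plan = select_routines(routines, slack_us=80)
```

The tension the sketch captures is the one the abstract describes: the shorter the idle time per period, the fewer routines fit, so selection must trade fault coverage against the application's timing requirements.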
9

Contributions to Kernel Equating

Andersson, Björn January 2014 (has links)
The statistical practice of equating is needed when scores on different versions of the same standardized test are to be compared. This thesis constitutes four contributions to the observed-score equating framework kernel equating. Paper I introduces the open source R package kequate, which enables the equating of observed scores using the kernel method of test equating in all common equating designs. The package is designed for ease of use and integrates well with other packages. The equating methods non-equivalent groups with covariates and item response theory observed-score kernel equating are currently not available in any other software package. In Paper II an alternative bandwidth selection method for the kernel method of test equating is proposed. The new method is designed for use with non-smooth data, such as when the observed data are used directly, without pre-smoothing. In previously used bandwidth selection methods, the variability from the bandwidth selection was disregarded when calculating the asymptotic standard errors. Here, the bandwidth selection is accounted for and updated asymptotic standard error derivations are provided. Item response theory observed-score kernel equating for the non-equivalent groups with anchor test design is introduced in Paper III. Multivariate observed-score kernel equating functions are defined and their asymptotic covariance matrices are derived. An empirical example in the form of a standardized achievement test is used, and the item response theory methods are compared to previously used log-linear methods. In Paper IV, Wald tests for equating differences in item response theory observed-score kernel equating are conducted using the results from Paper III. Simulations are performed to evaluate the empirical significance level and power under different settings, showing that the Wald test is more powerful than the Hommel multiple hypothesis testing method.
Data from a psychometric licensure test and a standardized achievement test are used to exemplify the hypothesis testing procedure. The results show that using the Wald test can lead to different conclusions from those obtained using the Hommel procedure.
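The shape of a Wald test for an equating difference can be sketched in its simplest form. The thesis works with multivariate equating functions and full covariance matrices; the one-dimensional version below, with invented numbers, only illustrates the general statistic W = (e1 − e2)² / Var(e1 − e2), which is asymptotically chi-square with one degree of freedom.

```python
# A minimal scalar sketch of a Wald test for the difference between two
# equating functions at a single score point. The thesis's actual tests
# are multivariate; all numeric values here are invented for illustration.
import math

def wald_statistic(e1: float, e2: float, var_diff: float) -> float:
    """W = (e1 - e2)^2 / Var(e1 - e2), asymptotically chi-square, 1 df."""
    if var_diff <= 0:
        raise ValueError("variance must be positive")
    return (e1 - e2) ** 2 / var_diff

CHI2_CRIT_1DF_05 = 3.841  # 95th percentile of the chi-square(1) distribution

# Hypothetical equated scores at one score point under two equating
# methods, and the (invented) variance of their difference.
w = wald_statistic(e1=23.4, e2=22.1, var_diff=0.25)
reject = w > CHI2_CRIT_1DF_05
```

In the multivariate case the squared difference becomes a quadratic form d'Σ⁻¹d with the covariance matrix Σ from Paper III, and the degrees of freedom equal the number of score points tested.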
10

Clare W. Graves' Levels of Psychological Existence: A Test Design

Hurlbut, Marilyn Anne 08 1900 (has links)
The purpose of this study was to develop a test which would reveal a person's primary level of existence according to Clare W. Graves' model of adult psychosocial behavior. The sub-purposes of this study were (1) to translate Graves' theoretical levels of existence into discrete components of attitude and behavior which could then be assessed with a written test instrument, (2) to create such a written instrument, and (3) to test the instrument for reliability and validity. The general conclusion of this study is that the Levels of Existence test meets the standards of reliability and validity accepted within psychometrics sufficiently to recommend that it be revised with respect to certain details as specified in the study, and that further research be undertaken.
