About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Test Data Extraction and Comparison with Test Data Generation

Raza, Ali 01 August 2011 (has links)
Testing an integrated information system that relies on data from multiple sources can be a challenge, particularly when the data is confidential. This thesis describes a novel test data extraction approach, called semantic-based test data extraction for integrated systems (iSTDE), that solves many of the problems associated with creating realistic test data for integrated information systems containing confidential data. iSTDE reads a consistent cross-section of data from the production databases, manipulates that data to obscure individual identities while still preserving the overall semantic data characteristics that are critical to thorough system testing, and then moves that test data to an external test environment. This thesis also presents a theoretical study that compares test data extraction with a competing technique, test data generation. Specifically, this thesis (a) describes a comparison method that includes a comprehensive list of characteristics essential for testing database applications, organized into seven different areas, (b) presents an analysis of the relative strengths and weaknesses of the different test data creation techniques, and (c) reports a number of specific conclusions that will help testers make appropriate choices.
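To make the extract-and-obscure idea concrete, here is a minimal, hypothetical sketch of one iSTDE-style step: deterministic pseudonymization that hides identities while keeping foreign-key relationships consistent across tables. The table names, fields and helper functions are illustrative assumptions, not details from the thesis.

```python
# Hypothetical sketch: deterministic pseudonymization that hides identities
# but keeps referential integrity across extracted tables.
import hashlib
import hmac

SECRET_KEY = b"test-environment-only"  # never reuse production secrets


def pseudonym(value: str, field: str) -> str:
    """Map a confidential value to a stable pseudonym.

    The same (field, value) pair always yields the same output, so
    foreign-key relationships between extracted tables stay consistent.
    """
    digest = hmac.new(SECRET_KEY, f"{field}:{value}".encode(), hashlib.sha256)
    return digest.hexdigest()[:12]


def mask_row(row: dict, confidential_fields: set) -> dict:
    """Obscure identifying fields; keep the rest so semantic characteristics
    (distributions, nulls, code values) survive the masking."""
    return {
        k: pseudonym(str(v), k) if k in confidential_fields else v
        for k, v in row.items()
    }


patients = [{"patient_id": "P001", "name": "Jane Doe", "age": 54}]
visits = [{"visit_id": 1, "patient_id": "P001", "icd_code": "E11.9"}]

masked_patients = [mask_row(r, {"patient_id", "name"}) for r in patients]
masked_visits = [mask_row(r, {"patient_id"}) for r in visits]

# Rows from different tables still join on the same pseudonym for P001.
assert masked_patients[0]["patient_id"] == masked_visits[0]["patient_id"]
```

Because the mapping is deterministic, rows extracted from different production tables still join on the same pseudonym, which is one of the semantic characteristics the abstract says must survive the masking.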
52

Wellbore stability analysis based on a new true-triaxial failure criterion

Al-Ajmi, Adel January 2006 (has links)
A main aspect of wellbore stability analysis is the selection of an appropriate rock failure criterion. The criterion most commonly used for brittle failure of rocks is the Mohr-Coulomb criterion. This criterion involves only the maximum and minimum principal stresses, σ1 and σ3, and therefore assumes that the intermediate stress σ2 has no influence on rock strength. When the Mohr-Coulomb criterion was developed, it was justified by experimental evidence from conventional triaxial tests (σ1 > σ2 = σ3). Based on triaxial failure mechanics, the Mohr-Coulomb criterion has been extensively used to represent rock failure under the polyaxial stress state (σ1 > σ2 > σ3). In contrast to the predictions of the Mohr-Coulomb criterion, however, much evidence has accumulated to suggest that σ2 does indeed have a strengthening effect. In this research, I have shown that the Mohr-Coulomb failure criterion represents only the triaxial stress state (σ2 = σ3 or σ2 = σ1), a special case that will only occasionally be encountered in situ. Accordingly, I developed a new true-triaxial failure criterion called the Mogi-Coulomb criterion. This failure criterion is a linear failure envelope in the Mogi domain (τ_oct–σ_m,2 space) whose parameters can be directly related to the Coulomb strength parameters, cohesion and friction angle. This linear failure criterion has been justified by experimental evidence from triaxial tests as well as polyaxial tests; it is a natural extension of the classical Coulomb criterion into three dimensions. Because the Mohr-Coulomb criterion represents rock failure only under triaxial stress states, it is expected to be too conservative in predicting wellbore instability. To overcome this problem, I have developed a new 3D analytical model to estimate the mud pressure required to avoid shear failure at the wall of vertical, horizontal and deviated boreholes. This was achieved by using linear elasticity theory to calculate the stresses, and the fully polyaxial Mogi-Coulomb criterion to predict failure. The solution is obtained in closed form for vertical wellbores, for all stress regimes. For deviated or horizontal wellbores, Mathcad programs have been written to evaluate the solution. These solutions have been applied to several field cases available in the literature, and in each case the new model appears consistent with the field experience.
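For reference, the linear envelope described above has a standard explicit form (as published by Al-Ajmi and Zimmerman, 2005), with c the cohesion and φ the friction angle:

```latex
% Mogi-Coulomb criterion: a linear envelope in tau_oct -- sigma_{m,2} space
\tau_{\mathrm{oct}} = a + b\,\sigma_{m,2},
\quad\text{where}\quad
\tau_{\mathrm{oct}} = \frac{1}{3}\sqrt{(\sigma_1-\sigma_2)^2
  + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2},
\qquad
\sigma_{m,2} = \frac{\sigma_1 + \sigma_3}{2},
\qquad
a = \frac{2\sqrt{2}}{3}\,c\cos\varphi,
\qquad
b = \frac{2\sqrt{2}}{3}\,\sin\varphi .
```

Setting σ2 = σ3 recovers the classical Coulomb criterion, which is the sense in which the Mogi-Coulomb envelope is its natural three-dimensional extension.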
53

Improving the transparency and predictability of environmental risk assessments of pharmaceuticals

Ågerstrand, Marlene January 2010 (has links)
The risk assessment process and the subsequent risk management measures need to be constantly evaluated, updated and improved. This thesis contributes to that work by considering, and suggesting improvements regarding, aspects like user-friendliness, transparency, accuracy, consistency, data reporting, data selection and data evaluation. The first paper in this thesis reports on an empirical investigation of the motivations, intentions and expectations underlying the development and implementation of a voluntary, industry-owned environmental classification system for pharmaceuticals. The results show that the purpose of the classification system is to provide information; no other risk reduction measures are aimed for. The second paper reports on an evaluation of the accuracy and consistency of the environmental risk assessments conducted within the classification system. The results show that the guideline recommendations were not followed in several cases, and consequently alternative risk ratios could be determined for six of the 36 pharmaceutical substances selected for evaluation in this study. When additional data from the open scientific literature were included, the risk ratio was altered for more than one-third of the risk assessments. Seven of the 36 substances were assessed and classified by more than one risk assessor; in two of these seven cases, different producers classified the same substance into different classification categories. The third paper addresses the question of whether non-standard ecotoxicity data could be used systematically in environmental risk assessments of pharmaceuticals. Four different evaluation methods were used to evaluate nine non-standard studies. The evaluation results from the different methods varied at a surprisingly high rate, and the evaluation of the non-standard data concluded that the reliability of the data was generally low.
54

PV Module Performance Under Real-world Test Conditions - A Data Analytics Approach

Hu, Yang 12 June 2014 (has links)
No description available.
55

Software test case generation from system models and specification : use of the UML diagrams and high level Petri nets models for developing software test cases

Alhroob, Aysh Menoer January 2010 (has links)
The main part of software testing lies in the generation of test cases suitable for software system testing. The quality of the test cases plays a major role in reducing the time of software system testing and subsequently reduces the cost. Test cases generated at the model design stage are used to detect faults before implementation; this early detection offers more flexibility to correct faults in early stages rather than later ones. Generating tests that cover both static and dynamic software system model specifications is one of the challenges in software testing. The static and dynamic specifications can be represented efficiently by the Unified Modelling Language (UML) class diagram and sequence diagram. The work in this thesis shows that High Level Petri Nets (HLPN) can represent both of them in one model. Using a proper model to represent the software specifications is essential for generating proper test cases. The research presented in this thesis introduces novel and automated test case generation techniques that can be used within software system design testing. Furthermore, this research introduces an efficient automated technique to generate a formal software system model (HLPN) from semi-formal models (UML diagrams). The work in this thesis consists of four stages: (1) generating test cases from the class diagram and Object Constraint Language (OCL) that can be used for testing the software system's static specifications (the structure); (2) combining the class diagram, sequence diagram and OCL to generate test cases able to cover both static and dynamic specifications; (3) generating HLPN automatically from single or multiple sequence diagrams; (4) generating test cases from HLPN. The test cases generated in this work cover both the structural and behavioural aspects of the software system model. In the first two phases of this work, the class diagram and sequence diagram are decomposed into nodes (edges), which are linked by a Classes Hierarchy Table (CHu) and an Edges Relationships Table (ERT). The linking process is based on the relationships between classes and edges. The relationships of the software system components are controlled by a consistency-checking technique, and the detection of these relationships has been automated. The test cases were generated based on these interrelationships, reduced to a minimum number, and the best test case was selected at every stage; the degree of similarity between test cases is used to discard similar test cases and avoid redundancy. The transformation from UML sequence diagram(s) to HLPN facilitates the simplification of the software system model and introduces a formal model rather than a semi-formal one. After decomposing the sequence diagram into Combined Fragments, the proposed technique converts each Combined Fragment to the corresponding block in HLPN. These blocks are connected together in a Combined Fragments Net (CFN) to construct the HLPN model. Experimentation with the proposed techniques shows their effectiveness in covering most of the software system specifications.
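As a rough, hypothetical illustration of stage (3) above, the sketch below maps one kind of Combined Fragment (an "alt" alternative) to a small Petri-net block with an entry place, an exit place, and one guarded transition per branch. The data structures and naming are assumptions for illustration, not the thesis's actual construction.

```python
# Hypothetical sketch: converting an 'alt' Combined Fragment from a UML
# sequence diagram into a Petri-net block that can be chained with others.
from dataclasses import dataclass, field


@dataclass
class PetriNetBlock:
    places: list = field(default_factory=list)
    transitions: list = field(default_factory=list)
    arcs: list = field(default_factory=list)  # (source, target) pairs


def alt_fragment_to_block(name: str, operands: list) -> PetriNetBlock:
    """Build a block with one entry place, one exit place, and a guarded
    transition per operand branch of the 'alt' fragment."""
    net = PetriNetBlock(places=[f"{name}_in", f"{name}_out"])
    for guard in operands:
        t = f"{name}_alt[{guard}]"
        net.transitions.append(t)
        net.arcs += [(f"{name}_in", t), (t, f"{name}_out")]
    return net


block = alt_fragment_to_block("checkout", ["balance >= price", "else"])
print(block.transitions)  # one transition, i.e. one test path, per branch
```

Chaining such blocks through their shared entry and exit places is the kind of composition the abstract's Combined Fragments Net performs, after which test paths can be enumerated from the resulting HLPN.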
56

  • Uma abordagem para geração de dados de teste para o teste de mutação utilizando técnicas baseadas em busca / An approach for test data generation in mutation testing using search-based techniques

Souza, Francisco Carlos Monteiro 24 May 2017 (has links)
Mutation testing is a powerful test criterion for detecting faults and measuring the effectiveness of a test data set. However, it is a computationally expensive testing technique. The high cost comes mainly from the effort to generate test data adequate to kill the mutants, and from the existence of equivalent mutants. In this thesis, an approach called Reach, Infect and Propagate to Mutation Testing (RIP-MuT) is presented to generate test data and to suggest equivalent mutants. The approach is composed of two modules: (i) automated test data generation using hill climbing and a fitness scheme based on the Reach, Infect and Propagate (RIP) conditions; and (ii) a method to suggest equivalent mutants based on analysis of the RIP conditions during the test data generation process. Experiments were conducted to evaluate the effectiveness of the RIP-MuT approach, including a comparative study with a genetic algorithm (GA) and random testing. The RIP-MuT approach achieved a mean mutation score 18.25% higher than the GA and 35.93% higher than random testing. The proposed method for detecting equivalent mutants proved feasible for reducing the cost of this activity, since it achieved a precision of 75.05% when suggesting equivalent mutants. Therefore, the results indicate that the approach produces effective test data able to strongly kill the majority of mutants in C programs, and that it can also assist in correctly identifying equivalent mutants.
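A minimal, hypothetical sketch of the hill-climbing module described above, with the RIP conditions folded into a coarse integer fitness. The toy program, its mutant, and the neighbourhood are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch: hill climbing on a RIP-shaped fitness for one mutant.
import random


def original(x):
    y = x * 2                               # statement under test
    return ({"stmt"}, y, y > 10)            # (trace, state, output)


def mutant(x):
    y = x + 2                               # mutant: '*' replaced by '+'
    return ({"mutated_stmt"}, y, y > 10)


def rip_fitness(test_input) -> int:
    """Coarse RIP score: 1 = mutated statement Reached, 2 = program state
    Infected, 3 = difference Propagated to the output (mutant killed)."""
    _, state_o, out_o = original(test_input)
    trace_m, state_m, out_m = mutant(test_input)
    if "mutated_stmt" not in trace_m:
        return 0
    if state_m == state_o:
        return 1
    if out_m == out_o:
        return 2
    return 3


def hill_climb(seed: int, rounds: int = 1000):
    """Greedy ascent on the RIP score; ties are accepted so the search can
    cross plateaus. Stops as soon as the mutant is killed."""
    best, best_fit = seed, rip_fitness(seed)
    for _ in range(rounds):
        neighbour = best + random.choice([-10, -1, 1, 10])
        fit = rip_fitness(neighbour)
        if fit >= best_fit:                 # accept ties to cross plateaus
            best, best_fit = neighbour, fit
        if best_fit == 3:
            break
    return best, best_fit


print(hill_climb(seed=2))  # wanders from x=2 until some x kills the mutant
```

In the same spirit, a mutant that never reaches a score of 3 for any generated input is a natural candidate for the approach's second module to flag as possibly equivalent.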
57

Seleção entre estratégias de geração automática de dados de teste por meio de métricas estáticas de softwares orientados a objetos / Selection between whole test generation strategies by analysing object oriented software static metrics

Ramos, Gustavo da Mota 09 October 2018 (has links)
Software products of varying complexity are created daily, driven by complex and varied demands combined with tight deadlines. As products become more complex, high levels of quality are still expected, yet quality may not be acceptable when the time available for testing does not keep pace with that complexity. Software testing and automatic test data generation therefore aim to deliver products with high levels of quality at low cost and with fast test activities. In this context, however, developers depend on automatic test generation strategies and, above all, on selecting the most suitable technique to achieve the greatest possible code coverage; this matters because each test data generation technique has peculiarities and problems that make it better suited to certain types of software. Given this scenario, the present work proposes selecting the appropriate technique for each class of a software system based on its characteristics, expressed through object-oriented software metrics, using the Naive Bayes classification algorithm. A literature review of two generation algorithms, random search and genetic search, was carried out to understand their advantages and disadvantages in both implementation and execution. The CK metrics were also studied in order to understand how they can best describe the characteristics of a class. The knowledge acquired made it possible to collect, for each class, the test generation data (code coverage and generation time) from each technique, together with the CK metrics, allowing these data to be analysed jointly and, finally, the classification algorithm to be run. The results of this analysis showed that a reduced, selected subset of the CK metrics is more efficient and describes the characteristics of a class better than the full set. The results also indicate that the CK metrics do not influence test data generation time; however, they showed moderate correlation with, and influence on, the selection of the genetic algorithm, thereby contributing to its selection by the Naive Bayes classifier.
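As a concrete, hypothetical sketch of the final selection step: a Naive Bayes classifier trained on a reduced CK subset predicts which generation technique to run on a given class. The choice of WMC, CBO and RFC as the reduced subset, and the training rows, are assumptions for illustration only.

```python
# Hypothetical sketch: Gaussian Naive Bayes over CK metrics to choose a
# test data generation technique per class.
from sklearn.naive_bayes import GaussianNB

# Reduced CK feature subset per class: [WMC, CBO, RFC] (assumed selection).
X_train = [
    [3, 2, 8],     # simple class: random search covered it well
    [5, 3, 12],
    [22, 14, 60],  # complex class: genetic search achieved better coverage
    [18, 11, 45],
]
y_train = ["random", "random", "genetic", "genetic"]

model = GaussianNB()
model.fit(X_train, y_train)

new_class_metrics = [[20, 9, 50]]
print(model.predict(new_class_metrics))  # e.g. ['genetic']
```

The design choice mirrored here is the thesis's central one: metrics that are cheap to compute statically stand in for an expensive trial run of both generators on every class.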
60

Confidence-based model validation for reliability assessment and its integration with reliability-based design optimization

Moon, Min-Yeong 01 August 2017 (has links)
Conventional reliability analysis methods assume that a simulation model represents the real physics accurately. However, this assumption may not always hold, as the simulation model can be biased by simplifications and idealizations. Simulation models are approximate mathematical representations of real-world systems and thus cannot exactly imitate them. The accuracy of a simulation model is especially critical when it is used for reliability calculation. Therefore, a simulation model should be validated against prototype testing results before reliability analysis. In practical engineering situations, however, experimental output data for model validation are limited due to the significant cost of a large number of physical tests. Thus, model validation needs to account for the uncertainty induced by insufficient experimental output data, as well as the inherent variability existing in the physical system and hence in the experimental test results. Therefore, in this study, a confidence-based model validation method has been developed that captures the variability and the uncertainty, and that corrects model bias at a user-specified target confidence level. Reliability assessment using confidence-based model validation can provide a conservative estimate of a system's reliability, with confidence, when only insufficient experimental output data are available. Without confidence-based model validation, a product designed using the conventional reliability-based design optimization (RBDO) optimum could either fail to satisfy the target reliability or be overly conservative. Therefore, simulation model validation is necessary to obtain a reliable optimum product from the RBDO process. In this study, the developed confidence-based model validation is integrated into the RBDO process to provide a truly confident RBDO optimum design: a conservative optimum at the target confidence level. However, it is challenging to obtain steady convergence in the RBDO process with confidence-based model validation, because the feasible domain changes as the design moves (a moving-target problem). To resolve this issue, a practical optimization procedure is proposed that terminates the RBDO process once the target reliability is satisfied. In addition, efficiency is achieved by carrying out deterministic design optimization (DDO) and RBDO without model validation, followed by RBDO with confidence-based model validation. Numerical examples demonstrate that the proposed RBDO approach obtains a conservative and practical optimum design that satisfies the target reliability of the designed product given a limited number of experimental output data. Thus far, while the simulation model might be biased, it has been assumed that the distribution models for input variables and parameters are correct. In real applications, however, only limited test data are available for modeling the input distributions of material properties, manufacturing tolerances, operational loads, etc. (parameter uncertainty), and, as before, only a limited number of output test data are used. Therefore, reliability needs to be estimated by considering parameter uncertainty as well as the biased simulation model. Computational methods and a process are developed to obtain a confidence-based reliability assessment.
The insufficient input and output test data induce uncertainties in the input distribution models and output distributions, respectively. These uncertainties, which arise from lack of knowledge (the insufficient test data), are different from the inherent input distributions and corresponding output variabilities, which are the natural randomness of the physical system.
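The thesis's own machinery is not reproduced here, but the core idea of a confidence-based (conservative) reliability estimate from scarce output data can be illustrated with a generic percentile-bootstrap lower bound, a deliberately simpler stand-in technique. The data, failure threshold and confidence level below are made up for the example.

```python
# Generic illustration (not the thesis's method): a bootstrap lower
# confidence bound on reliability P(output < threshold) from scarce data.
import random

random.seed(0)
observed_outputs = [9.1, 9.8, 10.4, 9.5, 11.2, 9.9, 10.1, 9.3]  # scarce data
threshold = 10.5           # failure if the output exceeds this value
target_confidence = 0.90


def reliability(sample):
    """Fraction of outputs below the failure threshold."""
    return sum(x < threshold for x in sample) / len(sample)


# Resample the observed outputs, then take the lower percentile so the
# reported reliability is conservative at the target confidence level.
boots = sorted(
    reliability(random.choices(observed_outputs, k=len(observed_outputs)))
    for _ in range(5000)
)
lower_bound = boots[int((1 - target_confidence) * len(boots))]

print(f"point estimate: {reliability(observed_outputs):.2f}")
print(f"{target_confidence:.0%}-confidence conservative reliability: {lower_bound:.2f}")
```

The point is the shape of the output: instead of a single reliability number, the assessment reports a value that is deliberately pessimistic at the stated confidence level, which is what makes a downstream RBDO optimum trustworthy under data scarcity.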
