11

Geração automática de dados de teste para programas concorrentes com meta-heurística / Automatic test data generation for concurrent programs with metaheuristic

Silva, José Dario Pintor da 22 September 2014 (has links)
Concurrent programming is increasingly used in modern systems with the aim of reducing costs and achieving greater processing efficiency. Given this importance, it is essential that programs implementing this paradigm exhibit good quality and be free of defects. Accordingly, different testing techniques and criteria have been defined to support the validation of applications developed in this paradigm. In this context, automatic test data generation is important because it reduces the cost of generating and selecting relevant data. Metaheuristic techniques have attracted great interest among researchers for data generation, since they offer approaches applicable to complex problems that are hard to solve. With this in mind, this work presents an approach for automatically generating data for the structural testing of concurrent MPI (Message Passing Interface) programs. The metaheuristic used was a genetic algorithm in which the search is guided by testing criteria that take into account implicit characteristics of concurrent programs. The performance of the approach was evaluated in terms of test data coverage, effectiveness in revealing defects, and execution cost; random generation was used for comparison. The results indicate that test data generation is promising in the context of concurrent programs, with encouraging results for effectiveness and coverage of the testing requirements. / Concurrent programming has been increasingly used in current systems in order to reduce costs and obtain higher processing efficiency; consequently, these systems are expected to have high quality. Therefore, different techniques and testing criteria have been proposed to support the verification and validation of concurrent applications. In this context, automated test data generation makes it possible to reduce testing costs during the generation and selection of test data. Metaheuristic techniques have been widely investigated to support test data generation because they have shown good results on complex and costly problems. In this work, we present an approach to automated test data generation for message-passing concurrent programs in MPI (Message Passing Interface). Test data generation is performed using the genetic algorithm metaheuristic, guided by structural testing criteria. An experimental study was conducted to evaluate the proposed approach, analyzing its effectiveness and application cost. The results indicate that the genetic algorithm is a promising approach to automated test data generation for concurrent programs, presenting good results with respect to effectiveness and test data coverage.
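To make the approach concrete, here is a minimal sketch (in Python rather than the thesis's MPI setting) of coverage-guided test data generation with a genetic algorithm: a toy program defines the branches to cover, and branch coverage serves as fitness. All names and parameters are illustrative, not the thesis's implementation; a real tool would evolve whole test suites and aggregate coverage across mutually exclusive branches.

```python
import random

# Toy "program under test": the branches we want test data to cover.
def program_under_test(x, y):
    covered = set()
    if x > y:
        covered.add("b1")
    else:
        covered.add("b2")
    if (x + y) % 2 == 0:
        covered.add("b3")
    return covered

ALL_BRANCHES = {"b1", "b2", "b3"}

def fitness(individual):
    # Fraction of branches exercised by this input; b1/b2 are mutually
    # exclusive, so a single input covers at most 2 of the 3 branches.
    return len(program_under_test(*individual)) / len(ALL_BRANCHES)

def evolve(pop_size=20, generations=50, mutation_rate=0.2):
    pop = [(random.randint(-100, 100), random.randint(-100, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a[0], b[1])                  # one-point crossover
            if random.random() < mutation_rate:
                child = (child[0] + random.randint(-10, 10), child[1])
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best input:", best, "fitness:", round(fitness(best), 2))
```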
12

Techniques for Automatic Generation of Tests from Programs and Specifications

Edvardsson, Jon January 2006 (has links)
Software testing is complex and time-consuming. One way to reduce the effort associated with testing is to generate test data automatically. This thesis is divided into three parts. In the first part a mixed-integer constraint solver developed by Gupta et al. is studied. The solver, referred to as the Unified Numerical Approach (una), is an important part of their generator and is responsible for solving the equation systems that correspond to the program path currently under test. In this thesis it is shown that, in contrast to traditional optimization methods, the una is not bounded by the size of the solved equation system; instead, its behaviour depends on how the system is composed. That is, even for very simple systems consisting of one variable it can easily take more than a thousand iterations. It is also shown that the una is not complete, that is, it does not always find a mixed-integer solution when one exists. It is found that a better approach is to use a traditional optimization method, such as the simplex method in combination with branch-and-bound and/or a cutting-plane algorithm, as a constraint solver. The second part explores a specification-based approach for generating tests developed by Meudec. Tests are generated by partitioning the specification input domain into a set of subdomains using a rule-based automatic partitioning strategy. An important step of Meudec's method is to reduce the number of generated subdomains and find a minimal partition. This thesis shows that Meudec's minimal partition algorithm is incorrect. Furthermore, two new efficient alternative algorithms are developed. In addition, an algorithm for finding the upper and lower bounds on the number of subdomains in a partition is also presented. Finally, in the third part, two different designs of automatic testing tools are studied. The first tool uses a specification as an oracle. The second tool, on the other hand, uses a reference program. The fault-detection effectiveness of the tools is evaluated using both randomly and systematically generated inputs.
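As a rough illustration of the solver design the thesis argues for, the following sketch combines an LP relaxation (solved with scipy's linear programming routine) with branch-and-bound on fractional integer variables. The tiny constraint system at the bottom is invented for the example and is not taken from the thesis.

```python
# Minimal branch-and-bound over LP relaxations, in the spirit of combining
# a simplex-style LP solver with branch-and-bound for mixed-integer
# path constraints. Illustrative sketch only.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds, int_vars, tol=1e-6):
    """Minimize c.x s.t. A_ub.x <= b_ub, with variables in int_vars integral."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if not res.success:
        return None  # relaxation infeasible: prune this subtree
    x = res.x
    for i in int_vars:
        if abs(x[i] - round(x[i])) > tol:
            # Branch on the fractional variable: x_i <= floor or x_i >= ceil.
            lo, hi = bounds[i]
            left = list(bounds); left[i] = (lo, math.floor(x[i]))
            right = list(bounds); right[i] = (math.ceil(x[i]), hi)
            cands = [branch_and_bound(c, A_ub, b_ub, left, int_vars, tol),
                     branch_and_bound(c, A_ub, b_ub, right, int_vars, tol)]
            cands = [cand for cand in cands if cand is not None]
            return min(cands, key=lambda s: s.fun, default=None)
    return res  # all integer variables are (near-)integral

# Example: maximize 3x + 2y s.t. x + y <= 4.5, x integer, y real.
sol = branch_and_bound(c=[-3, -2], A_ub=[[1, 1]], b_ub=[4.5],
                       bounds=[(0, 10), (0, 10)], int_vars=[0])
print(sol.x if sol is not None else "infeasible")  # expected: [4. 0.5]
```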
13

Consistency techniques for test data generation

Tran Sy, Nguyen 10 June 2005 (has links)
This thesis presents a new approach for automated test data generation of imperative programs containing integer, boolean and/or float variables. A test program (with procedure calls) is represented by an Interprocedural Control Flow Graph (ICFG). The classical testing criteria (statement, branch, and path coverage), widely used in unit testing, are extended to the ICFG. Path coverage is the core of our approach. Given a specified path of the ICFG, a path constraint is derived and solved to obtain a test case. The constraint solving is carried out based on a consistency notion. For statement (and branch) coverage, paths reaching a specified node or branch are dynamically constructed. The search for suitable paths is guided by the interprocedural control dependences of the program. The search is also pruned by our consistency filter. Finally, test data are generated by the application of the proposed path coverage algorithm. A prototype system implements our approach for C programs. Experimental results, including complex numerical programs, demonstrate the feasibility of the method and the efficiency of the system, as well as its versatility and flexibility across different classes of problems (integer and/or float variables; arrays, procedures, path coverage, statement coverage).
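A hedged sketch of the consistency idea: narrowing interval domains against a path constraint until the domains stabilize or an inconsistency proves the path infeasible. The single constraint below is illustrative, not the thesis's actual filtering algorithm.

```python
# Interval-consistency filtering for one path-constraint fragment.

def narrow_sum_le(dom_x, dom_y, k):
    """Enforce x + y <= k on interval domains (lo, hi); return narrowed
    domains, or None if the constraint is inconsistent (path infeasible)."""
    x_lo, x_hi = dom_x
    y_lo, y_hi = dom_y
    # x can be at most k minus the smallest possible y; symmetrically for y.
    x_hi = min(x_hi, k - y_lo)
    y_hi = min(y_hi, k - x_lo)
    if x_lo > x_hi or y_lo > y_hi:
        return None  # inconsistent: prune this path
    return (x_lo, x_hi), (y_lo, y_hi)

# Path constraint fragment: x + y <= 10 with x in [0, 20], y in [4, 15].
result = narrow_sum_le((0, 20), (4, 15), 10)
print(result)  # ((0, 6), (4, 10)): test data must come from these boxes
```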
15

Simulated fMRI Toolbox

Turkay, Kemal Dogus 01 December 2009 (has links)
In this thesis a simulated fMRI toolbox is developed in order to generate simulated data to compare and benchmark different functional magnetic resonance image analysis methods. This toolbox is capable of loading a high-resolution anatomic brain volume, generating 4D fMRI data in the same data space as the anatomic image, and allowing the user to create block and event-related design paradigms. Common fMRI artifacts such as scanner drift, cardiac pulsation, habituation and task-related or spontaneous head movement can be incorporated into the 4D fMRI data. Input to the toolbox is possible through the MINC 2.0 file format, and output is provided in ANALYZE format. The major contribution of this toolbox is that it facilitates the comparison of fMRI analysis methods by generating multiple fMRI datasets under varying noise and experiment parameters.
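For flavour, a minimal sketch of the kind of voxel time series such a toolbox produces: a block-design boxcar smoothed by a crude haemodynamic kernel, plus linear scanner drift and Gaussian noise. All parameter values are invented, not the toolbox's defaults.

```python
import numpy as np

TR = 2.0                              # repetition time in seconds
n_scans = 120
t = np.arange(n_scans) * TR

# Block paradigm: 20 s rest / 20 s task, repeated.
block = 20.0
task_on = ((t // block) % 2).astype(float)

# Crude haemodynamic smoothing: convolve the boxcar with a gamma-like kernel.
hrf_t = np.arange(0, 24, TR)
hrf = (hrf_t ** 5) * np.exp(-hrf_t)   # simplistic gamma shape, peaks near 5 s
hrf /= hrf.sum()
bold = np.convolve(task_on, hrf)[:n_scans]

drift = 0.01 * t                      # linear scanner drift artifact
noise = np.random.normal(0, 0.05, n_scans)
signal = 100 + 2.0 * bold + drift + noise  # baseline 100, ~2% activation
print(signal[:5])
```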
16

Diversified Ensemble Classifiers for Highly Imbalanced Data Learning and their Application in Bioinformatics

Ding, Zejin 07 May 2011 (has links)
In this dissertation, the problem of learning from highly imbalanced data is studied. Imbalanced data learning is of great importance and a challenge in many real applications. Dealing with a minority class normally requires new concepts, observations and solutions in order to fully understand the underlying complicated models. We try to systematically review and solve this special learning task in this dissertation. We propose a new ensemble learning framework—Diversified Ensemble Classifiers for Imbalanced Data Learning (DECIDL), based on the advantages of existing ensemble imbalanced learning strategies. Our framework combines three learning techniques: a) ensemble learning, b) artificial example generation, and c) diversity construction by reverse data re-labeling. As a meta-learner, DECIDL utilizes general supervised learning algorithms as base learners to build an ensemble committee. We create a standard benchmark data pool, which contains 30 highly skewed sets with diverse characteristics from different domains, in order to facilitate future research on imbalanced data learning. We use this benchmark pool to evaluate and compare our DECIDL framework with several ensemble learning methods, namely under-bagging, over-bagging, SMOTE-bagging, and AdaBoost. Extensive experiments suggest that our DECIDL framework is comparable with other methods. The data sets, experiments and results provide a valuable knowledge base for future research on imbalanced learning. We develop a simple but effective artificial example generation method for data balancing. Two new methods, DBEG-ensemble and DECIDL-DBEG, are then designed to improve the power of imbalanced learning. Experiments show that these two methods are comparable to state-of-the-art methods, e.g., GSVM-RU and SMOTE-bagging. Furthermore, we investigate learning on imbalanced data from a new angle—active learning. By combining active learning with the DECIDL framework, we show that the newly designed Active-DECIDL method is very effective for imbalanced learning, suggesting that the DECIDL framework is very robust and flexible. Lastly, we apply the proposed learning methods to a real-world bioinformatics problem—protein methylation prediction. Extensive computational results show that the DECIDL method performs very well on this imbalanced data mining task. Importantly, the experimental results confirm our new contributions on this particular data learning problem.
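As an illustration of artificial example generation for data balancing, here is a small SMOTE-style sketch that interpolates between minority-class points and their nearest minority neighbours. It is a generic technique sketch, not the DBEG method itself.

```python
import numpy as np

def synthesize_minority(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic samples by interpolating between minority
    points and their k nearest minority neighbours."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances from X_min[i] to all minority points.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1 : k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                      # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.array([[1.0, 2.0], [1.5, 1.8], [2.0, 2.2],
                  [1.2, 2.4], [1.8, 2.0], [1.4, 2.1]])
print(synthesize_minority(X_min, n_new=4).round(2))
```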
17

Parametric human spine modelling

Ceran, Murat January 2006 (has links)
3-D computational modelling of the human spine provides a sophisticated and cost-effective medium for bioengineers, researchers, and ergonomics designers to study the biomechanical behaviour of the human spine under different loading conditions. Developing a generic parametric computational human spine model for biomechanical modelling offers considerable potential to reduce the complexity of implementing and amending the intricate spinal geometry. The main objective of this research is to develop a 3-D parametric human spine model generation framework based on a command file system, by which the parameters of each vertebra are read from the database system and then modelled within commercial 3-D CAD software. A novel data acquisition and generation system was developed as part of the framework for determining unknown vertebral dimensions, based on correlations between the parameters estimated from existing anthropometrical studies in the literature. The data acquisition system embodies a predictive methodology that captures the relations between the features of the vertebrae by employing statistical and geometrical techniques. Relations amongst vertebral parameters, such as the golden ratio, were investigated and successfully implemented in the algorithms. The framework was validated by comparing the developed 3-D computational human spine models against various real-life human spine data, with good agreement achieved. The resulting versatile framework can serve as a basis for quickly and effectively developing biomechanical models of the human spine, such as finite element models.
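A hedged sketch of the predictive step described: estimating an unmeasured vertebral dimension from a correlated one with a least-squares fit. The sample dimensions below are invented for illustration.

```python
import numpy as np

# Hypothetical (vertebral body width, vertebral body depth) pairs in mm.
width = np.array([30.1, 32.4, 34.0, 35.8, 38.2])
depth = np.array([21.5, 23.0, 24.1, 25.6, 27.3])

slope, intercept = np.polyfit(width, depth, 1)  # least-squares line
predicted_depth = slope * 36.5 + intercept      # estimate for a new vertebra
print(round(predicted_depth, 1))
```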
19

Search-based software testing and complex test data generation in a dynamic programming language

Mairhofer, Stefan January 2008 (has links)
Manually creating test cases is time-consuming and error-prone. Search-based software testing (SBST) can help automate this process and thus reduce time and effort while increasing quality by automatically generating relevant test cases. Previous research has mainly focused on static programming languages with simple test data inputs such as numbers. In this work we present an approach to search-based software testing for dynamic programming languages that can generate test scenarios and both simple and more complex test data. This approach is implemented as a tool in and for the dynamic programming language Ruby. It uses an evolutionary algorithm to search for tests that give structural code coverage. We have evaluated the system in an experiment on a number of code examples that differ in complexity and the type of input data they require, comparing our system with the results obtained by a random test case generator. The experiment shows that the presented approach can compete with random testing and, in many situations, finds tests and data that give higher structural code coverage more quickly.
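To illustrate what "complex test data" can mean here, a small sketch that recursively generates nested structures (integers, strings, lists, dicts) which a search could then mutate. It is written in Python for consistency with the other sketches, whereas the thesis's tool targets Ruby; all choices are illustrative, not the tool's design.

```python
import random
import string

def random_value(depth=0, max_depth=3):
    # Only scalars at the maximum depth, to guarantee termination.
    choices = ["int", "str"] if depth >= max_depth else ["int", "str", "list", "dict"]
    kind = random.choice(choices)
    if kind == "int":
        return random.randint(-1000, 1000)
    if kind == "str":
        return "".join(random.choices(string.ascii_letters, k=random.randint(0, 8)))
    if kind == "list":
        return [random_value(depth + 1) for _ in range(random.randint(0, 4))]
    return {f"k{i}": random_value(depth + 1) for i in range(random.randint(0, 4))}

random.seed(1)
print(random_value())
```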
20

A Systematic Review of Automated Test Data Generation Techniques

Mahmood, Shahid January 2007 (has links)
Automated Test Data Generation (ATDG) is an activity that, in the course of software testing, automatically generates test data for the software under test (SUT). It usually makes testing more efficient and cost-effective. Test Data Generation (TDG) is crucial for software testing because test data is one of the key factors in determining the quality of any software test during its execution. The multi-phased activity of ATDG involves various techniques for each of its phases. This research field is not new by any means, although new techniques have lately been devised and a gradual increase in the level of maturity has brought some diversified trends into it. To this end, several ATDG techniques are available, but emerging trends in computing have raised the necessity to summarize and assess the current status of this area, particularly for practitioners, future researchers and students. Further, analysis of the ATDG techniques becomes even more important given that Miller et al. [4] highlight the difficulty these techniques have had in gaining general acceptance. Under this scenario only a systematic review can address the issues, because systematic reviews provide evaluation and interpretation of all available research relevant to a particular research question, topic area, or phenomenon of interest. This thesis, by using a trustworthy, rigorous, and auditable methodology, provides a systematic review aimed at presenting a fair evaluation of research concerning ATDG techniques over the period 1997-2006. It also aims at identifying probable gaps in research about ATDG techniques of the defined period, so as to suggest the scope for further research. This systematic review is presented on the pattern of [5 and 8] and follows the techniques suggested by [1]. The articles published in journals and conference proceedings during the defined period are of concern in this review. The motive behind this selection is that the techniques discussed in the literature of this period are likely to reflect their suitability for today's software environment and are believed to fulfill the needs of the foreseeable future. Furthermore, only automated and/or semi-automated ATDG techniques have been chosen for consideration, while manual techniques are left out as beyond the scope. As a result of the preliminary study, the review identifies ATDG techniques and relevant articles of the defined period, whereas the detailed study evaluates and interprets all available research relevant to ATDG techniques. For interpretation and elaboration of the discovered ATDG techniques, a novel approach called 'Natural Clustering' is introduced. To accomplish the task of the systematic review, a comprehensive research method has been developed, and its practical application has yielded important results. These results are presented in statistical/numeric, diagrammatic, and descriptive forms. Additionally, the thesis introduces various criteria for classifying the discovered ATDG techniques and presents a comprehensive analysis of the results of these techniques. Some interesting facts are also highlighted in the course of the discussion. Finally, the discussion culminates with inferences and recommendations emanating from this analysis. As the research work produced in the thesis is based on a rich amount of trustworthy information, it can also serve as an up-to-date guide to ATDG techniques.
