  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Heuristic generation of software test data

Holmes, Stephen Terry January 1996 (has links)
Incorrect system operation can, at worst, be life threatening or financially devastating. Software testing is a destructive process that aims to reveal software faults. Selection of good test data can be extremely difficult. To ease and assist test data selection, several test data generators have emerged that use a diverse range of approaches. Adaptive test data generators use existing test data to produce further effective test data. It has been observed that there is little empirical data on the adaptive approach. This thesis presents the Heuristically Aided Testing System (HATS), which is an adaptive test data generator that uses several heuristics. A heuristic embodies a test data generation technique. Four heuristics have been developed. The first heuristic, Direct Assignment, generates test data for conditions involving an input variable and a constant. The Alternating Variable heuristic determines a promising direction to modify input variables, then takes ever-increasing steps in this direction. The Linear Predictor heuristic performs linear extrapolations on input variables. The final heuristic, Boundary Follower, uses input domain boundaries as a guide to locate hard-to-find solutions. Several Ada procedures have been tested with HATS: a quadratic equation solver, a triangle classifier, a remainder calculator and a linear search. Collectively they present some common and rare test data generation problems. The weakest testing criterion HATS has attempted to satisfy is all branches. Stronger, mutation-based criteria have been used on two of the procedures. HATS has achieved complete branch coverage on each procedure, except where there is a higher level of control flow complexity combined with non-linear input variables. Both branch and mutation testing criteria have enabled a better understanding of the test data generation problems and contributed to the evolution of heuristics and the development of new heuristics. 
This thesis contributes the following to knowledge:
o empirical data on the adaptive heuristic approach to test data generation,
o how input domain boundaries can be used as guidance for a heuristic,
o an effective heuristic termination technique based on the heuristic's progress,
o a comparison of HATS with random testing, and
o identification of properties of the software under test that indicate when HATS will take less effort than random testing.
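The Alternating Variable heuristic described above, which picks a promising direction and then takes ever-increasing steps, can be sketched as a small pattern search. This is an illustrative Python sketch rather than HATS itself (the thesis works on Ada procedures); the branch condition `x == 50` and its distance function are hypothetical examples:

```python
def branch_distance(x, target=50):
    # Distance to satisfying the hypothetical branch condition x == target:
    # zero when the condition holds, larger the further away we are.
    return abs(x - target)

def alternating_variable_search(inputs, fitness, max_iters=1000):
    """Minimise `fitness` by probing each input variable in turn, then
    accelerating in any direction that improves it (pattern search)."""
    inputs = list(inputs)
    best = fitness(inputs)
    for _ in range(max_iters):
        if best == 0:           # branch condition satisfied
            break
        improved = False
        for i in range(len(inputs)):
            for direction in (-1, 1):
                step = 1
                while True:
                    trial = list(inputs)
                    trial[i] += direction * step
                    f = fitness(trial)
                    if f < best:
                        inputs, best = trial, f
                        improved = True
                        step *= 2   # ever-increasing steps in a good direction
                    else:
                        break       # stop accelerating once progress stalls
        if not improved:
            break                   # local optimum for this heuristic
    return inputs, best

solution, dist = alternating_variable_search([0], lambda v: branch_distance(v[0]))
```

Each variable is probed one unit in both directions; once an improving direction is found, the step size doubles until the fitness stops improving, which is what gives the heuristic its fast traversal of linear regions.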
2

Polyhedric configurations

Champion, Oliver Charles January 1997 (has links)
Polyhedra have been the subject of fascination and interest to mathematicians, philosophers and artists since ancient times, and forms based on polyhedra have subsequently become popular with engineers and architects. However, data generation for these polyhedric configurations has traditionally been a barrier to advancement in this area: the graphically based solutions that have been applied severely limit the scope of the applications. The objective of the present work is to facilitate the creation of forms based on polyhedra, and so broaden the boundaries of what is achievable. To this end the work is concerned with the development and implementation of the concepts in a computer-based environment. For this to be achieved, three key elements are required for each polyhedron:
o establishment of a coordinate system,
o evolution of a set of conventions for assigning identity numbers to the faces, edges and vertices of polyhedra, and
o establishment of an orientation system for mapped objects.
These have been developed to be compatible within the computer-based environment. The implementation of the concepts is through the 'polymation function', which has been created as a standard function within the programming language Formian. A series of other functions complementary to the polymation function have been developed for use within Formian. The most prominent of these is the 'tractation function', which is used to project configurations onto a range of surfaces, including a user-defined surface. The work includes a look at some of the forms that may be created using the new tools, particularly in the area of 'geodesic' forms. Suggestions for future research in this field include widening the range of polyhedra available, looking at the problem of 'mitring', exploring rendering techniques, and the development of a more general function which could encompass user-defined polyhedra.
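The abstract does not show the tractation function itself; as an illustration of the underlying idea, here is a minimal sketch of the classic operation behind 'geodesic' forms: central projection of a polyhedral configuration onto a sphere. The octahedron and the sample edge midpoints are illustrative, not taken from the thesis:

```python
import math

def project_to_sphere(points, radius=1.0):
    """Centrally project points onto a sphere centred at the origin --
    a simple instance of mapping a configuration onto a surface."""
    projected = []
    for x, y, z in points:
        n = math.sqrt(x * x + y * y + z * z)  # distance from the origin
        projected.append((radius * x / n, radius * y / n, radius * z / n))
    return projected

# Vertices of a regular octahedron plus a few of its edge midpoints;
# projecting the midpoints outward yields a coarse geodesic subdivision.
octa = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
midpoints = [(0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
geodesic = project_to_sphere(octa + midpoints)
```

After projection every point lies exactly on the target sphere, so subdividing edges before projecting produces progressively finer geodesic configurations.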
3

Analysis of Red Oak Timber Defects and Associated Internal Defect Area for the Generation of Simulated Logs

Winn, Matthew F. 30 December 2002 (has links)
Log sawing simulation computer programs can be a valuable tool for training sawyers as well as for testing different sawing patterns. Most available simulation programs rely on databases from which to draw logs, which can be very costly and time-consuming to develop. In this study, a computer program was developed that can accurately generate random, artificial logs and serve as an alternative to using a log database. One major advantage of using such a program is that every log generated is unique, whereas a database is finite. Real log and external defect data were obtained from the Forest Service Northeastern Research Station in Princeton, West Virginia for red oak (Quercus rubra, L.) logs. These data were analyzed to determine distributions for log and external defect attributes, and the information was used in the program to ensure realistic log generation. An attempt was made to relate the external defect attributes to internal defect characteristics such as volume, depth, and angle. CT scanning was used to obtain internal information for the five most common defect types according to the Princeton log data. Results indicate that external indicators have the potential to be good predictors of internal defect volume. Tests of whether a significant amount of the variation in volume was explained by the predictor variables proved significant for all defect types, with corresponding R2 values ranging from 0.39 to 0.93. External indicators contributed little to explaining the variation in the other dependent variables; additional predictor variables should be tested to determine whether further variation can be explained. / Master of Science
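The reported R2 values come from regressing internal defect volume on external indicators. As a hedged illustration of that computation (the data below are invented, not the Princeton red oak measurements), a minimal ordinary-least-squares fit with its R2 statistic:

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x, returning (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)               # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot   # share of variation explained by the predictor
    return a, b, r2

# Hypothetical data: external defect surface area vs internal defect volume.
area = [10, 20, 30, 40, 50]
vol = [12, 25, 31, 45, 49]
a, b, r2 = ols_fit(area, vol)
```

A significance test on the slope (e.g. an F-test on the explained variation) would then decide, as in the study, whether the external indicator is a useful predictor.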
4

Empirical study on strategy for Regression Testing

Hsu, Pai-Hung 03 August 2006 (has links)
Software testing plays a necessary role in software development and maintenance, supporting quality assurance. Most test engineers design test suites for their programs manually, which is an expensive and labour-intensive process. For this reason, automatic generation of software test data has become a popular research topic, and most approaches use meta-heuristic search methods such as genetic algorithms or simulated annealing to obtain the test data. In most circumstances, test engineers generate a test suite when a program is first written; when they later debug or modify the code, they design another new test suite to test the result, and the original test data is rarely retained and reused. In this research, we examine whether it is useful to store and reuse the original test data.
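The question of whether stored test data remains useful across program revisions can be illustrated with a small sketch. The programs `v1`, `v2` and `v3` below are hypothetical examples, not from the thesis: a suite generated for the original program is replayed against both a behaviour-preserving refactor and a buggy change:

```python
import random

def generate_suite(program, n=200, seed=0):
    """Randomly generate inputs and record the program's outputs, so the
    suite can later be replayed against a changed version of the program."""
    rng = random.Random(seed)
    return [(x, program(x)) for x in (rng.randint(-100, 100) for _ in range(n))]

def replay(program, suite):
    """Replay a stored suite; return the inputs whose behaviour changed."""
    return [x for x, expected in suite if program(x) != expected]

def v1(x):
    return abs(x)                       # original program

def v2(x):
    return x if x > 0 else -x           # refactor: behaves identically

def v3(x):
    return x if x >= 1 else -x + 1      # buggy change: wrong for x <= 0

suite = generate_suite(v1)
assert replay(v2, suite) == []          # refactor passes the stored suite
failures = replay(v3, suite)            # reused test data catches the bug
```

Retaining the original suite turns it into a regression oracle for free; regenerating test data from scratch for `v3` would rediscover the same inputs at full cost.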
5

Parenting processes in families of children who have sustained burns: a grounded theory study

Paul Ravindran, Vinitha Priscilla Unknown Date
No description available.
6

Closing the gap in WSD : supervised results with unsupervised methods

Brody, Samuel January 2009 (has links)
Word-Sense Disambiguation (WSD) holds promise for many NLP applications requiring broad-coverage language understanding, such as summarization (Barzilay and Elhadad, 1997) and question answering (Ramakrishnan et al., 2003). Recent studies have also shown that WSD can benefit machine translation (Vickrey et al., 2005) and information retrieval (Stokoe, 2005). Much work has focused on the computational treatment of sense ambiguity, primarily using data-driven methods. The most accurate WSD systems to date are supervised and rely on the availability of sense-labeled training data. This restriction poses a significant barrier to widespread use of WSD in practice, since such data is extremely expensive to acquire for new languages and domains. Unsupervised WSD holds the key to enabling such applications, as it does not require sense-labeled data. However, unsupervised methods fall far behind supervised ones in terms of accuracy and ease of use. In this thesis we explore the reasons for this, and present solutions to remedy this situation. We hypothesize that one of the main problems with unsupervised WSD is its lack of a standard formulation and general purpose tools common to supervised methods. As a first step, we examine existing approaches to unsupervised WSD, with the aim of detecting independent principles that can be utilized in a general framework. We investigate ways of leveraging the diversity of existing methods, using ensembles, a common tool in the supervised learning framework. This approach allows us to achieve accuracy beyond that of the individual methods, without need for extensive modification of the underlying systems. Our examination of existing unsupervised approaches highlights the importance of using the predominant sense in case of uncertainty, and the effectiveness of statistical similarity methods as a tool for WSD. 
However, it also serves to emphasize the need for a way to merge and combine learning elements, and the potential of a supervised-style approach to the problem. Relying on existing methods does not take full advantage of the insights gained from the supervised framework. We therefore present an unsupervised WSD system which circumvents the question of actual disambiguation method, which is the main source of discrepancy in unsupervised WSD, and deals directly with the data. Our method uses statistical and semantic similarity measures to produce labeled training data in a completely unsupervised fashion. This allows the training and use of any standard supervised classifier for the actual disambiguation. Classifiers trained with our method significantly outperform those using other methods of data generation, and represent a big step in bridging the accuracy gap between supervised and unsupervised methods. Finally, we address a major drawback of classical unsupervised systems – their reliance on a fixed sense inventory and lexical resources. This dependence represents a substantial setback for unsupervised methods in cases where such resources are unavailable. Unfortunately, these are exactly the areas in which unsupervised methods are most needed. Unsupervised sense-discrimination, which does not share those restrictions, presents a promising solution to the problem. We therefore develop an unsupervised sense discrimination system. We base our system on a well-studied probabilistic generative model, Latent Dirichlet Allocation (Blei et al., 2003), which has many of the advantages of supervised frameworks. The model’s probabilistic nature lends itself to easy combination and extension, and its generative aspect is well suited to linguistic tasks. Our model achieves state-of-the-art performance on the unsupervised sense induction task, while remaining independent of any fixed sense inventory, and thus represents a fully unsupervised, general purpose, WSD tool.
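The core move of producing labeled training data without annotation can be sketched in miniature. The toy below uses a Lesk-style gloss-overlap similarity as an illustrative stand-in; it is not the statistical and semantic similarity measures the thesis actually employs, and the glosses and contexts are invented:

```python
def overlap(context, gloss):
    """Crude similarity: number of words shared by a context and a sense gloss."""
    return len(set(context.lower().split()) & set(gloss.lower().split()))

def pseudo_label(contexts, sense_glosses):
    """Label each context with the sense whose gloss it overlaps most,
    yielding (context, sense) training pairs with no hand annotation.
    A supervised classifier can then be trained on the result."""
    labeled = []
    for ctx in contexts:
        best = max(sense_glosses, key=lambda s: overlap(ctx, sense_glosses[s]))
        labeled.append((ctx, best))
    return labeled

glosses = {
    "bank/finance": "institution that accepts deposits and lends money",
    "bank/river": "sloping land beside a body of water",
}
contexts = [
    "the institution lends money and accepts deposits from customers",
    "they fished from the sloping land beside the water",
]
data = pseudo_label(contexts, glosses)
```

The point of the design, as in the thesis, is that the labeling step is decoupled from the final classifier: any standard supervised learner can consume the pseudo-labeled pairs.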
7

Evaluation of Test Data Generation Techniques for String Inputs

Li, Junyang, Xing, Xueer January 2017 (has links)
Context. The effective generation of test data is regarded as very important in software testing. However, mature and effective techniques for generating string test data have seldom been explored, owing to the complexity and flexibility of string expressions compared with other data types. Objectives. This study therefore investigates the strengths and limitations of existing string test data generation techniques, to support future work on an effective technique for generating string test data. This main goal was achieved via two objectives: first, investigating existing techniques for string test data generation, and identifying the criteria and Classes-Under-Test (CUTs) used for evaluating string test generation; second, assessing representative techniques by comparing their effectiveness and efficiency. Methods. For the first objective, we used a systematic mapping study to collect data about existing techniques, criteria, and CUTs. For the second objective, a comparison study was conducted on representative techniques selected from the results of the systematic mapping study. The data from the comparison study were analysed quantitatively using statistical methods. Results. The existing techniques, criteria and CUTs related to string test generation were identified. A multidimensional categorisation was proposed to classify existing string test data generation techniques. We selected representative techniques from the search-based, symbolic execution, and random generation categories. The corresponding automated test generation tools implementing these techniques, namely EvoSuite, Symbolic PathFinder (SPF), and Randoop, were assessed by comparing their effectiveness and efficiency when applied to 21 CUTs. Conclusions. We conclude that the search-based method has the highest effectiveness and efficiency of the three selected methods; the random generation method has low efficiency but high fault-detecting ability for some specific CUTs; and the symbolic execution approach as implemented by SPF currently cannot support string test generation well, possibly owing to an incomplete string constraint solver or string generator.
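Random generation, the baseline method in this comparison, is simple enough to sketch directly. The class-under-test and its seeded fault below are hypothetical, not one of the study's 21 CUTs:

```python
import random
import string

def count_vowels(s):
    """Hypothetical class-under-test with a seeded fault:
    it crashes on the empty string."""
    if not s:
        raise ValueError("unexpected empty string")
    return sum(c in "aeiou" for c in s.lower())

def random_strings(n, max_len=8, seed=1):
    """Plain random string generation: random length, random characters."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits + " "
    for _ in range(n):
        length = rng.randint(0, max_len)
        yield "".join(rng.choice(alphabet) for _ in range(length))

def raises(fn, arg):
    try:
        fn(arg)
        return False
    except Exception:
        return True

# A random campaign stumbles onto the empty string and exposes the fault.
failures = [s for s in random_strings(500) if raises(count_vowels, s)]
```

This also illustrates the study's conclusion about random generation: it can detect faults like this one effectively, but only by spending many executions, since nothing guides it toward the failing input.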
8

Privacy-Preserving Synthetic Medical Data Generation with Deep Learning

Torfi, Amirsina 26 August 2020 (has links)
Deep learning models have demonstrated good performance in various domains such as Computer Vision and Natural Language Processing. However, the utilization of data-driven methods in healthcare raises privacy concerns, which creates limitations for collaborative research. A remedy to this problem is to generate and employ synthetic data to address privacy concerns. Existing methods for artificial data generation suffer from different limitations, such as being bound to particular use cases. Furthermore, their generalizability to real-world problems is controversial, given the uncertainties in defining and measuring key realistic characteristics. Hence, there is a need to establish insightful metrics to measure the validity of synthetic data, as well as quantitative criteria regarding privacy restrictions. We propose the use of Generative Adversarial Networks to satisfy requirements for realistic characteristics and acceptable values of privacy metrics simultaneously. The present study makes several unique contributions to synthetic data generation in the healthcare domain. First, we propose a novel domain-agnostic metric to evaluate the quality of synthetic data. Second, by utilizing 1-D Convolutional Neural Networks, we devise a new approach to capturing the correlation between adjacent diagnosis records. Third, we employ Convolutional Autoencoders to create a robust and compact feature space that handles the mixture of discrete and continuous data. Finally, we devise a privacy-preserving framework that enforces Rényi differential privacy as a new notion of differential privacy. / Doctor of Philosophy / Computer programs have been widely used for clinical diagnosis but are often designed with assumptions limiting their scalability and interoperability. 
The recent proliferation of abundant health data, significant increases in computer processing power, and the superior performance of data-driven methods are enabling a paradigm shift in healthcare technology. This involves the adoption of artificial intelligence methods, such as deep learning, to improve healthcare knowledge and practice. Despite the success of deep learning in many different domains, privacy challenges make collaborative research in the healthcare field difficult, as working with data-driven methods may jeopardize patients' privacy. To overcome these challenges, researchers propose to generate and utilize realistic synthetic data that can be used instead of real private data. Existing methods for artificial data generation are limited by being bound to special use cases, and their generalizability to real-world problems is questionable. There is a need to establish valid synthetic data that overcomes privacy restrictions and functions as a real-world analog for training healthcare deep learning models. We propose the use of Generative Adversarial Networks to simultaneously overcome the realism and privacy challenges associated with healthcare data.
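As one hedged illustration of the privacy side (this is the textbook Gaussian mechanism, not the author's GAN framework): the Gaussian mechanism is the standard example analysed under Rényi differential privacy, satisfying ε(α) = αΔ²/(2σ²) for sensitivity Δ and noise scale σ:

```python
import random

def gaussian_mechanism(value, sigma, rng):
    """Release a statistic perturbed with Gaussian noise of scale sigma."""
    return value + rng.gauss(0.0, sigma)

def rdp_epsilon(order, sensitivity, sigma):
    """Renyi-DP guarantee of the Gaussian mechanism:
    epsilon(order) = order * sensitivity^2 / (2 * sigma^2)."""
    return order * sensitivity ** 2 / (2 * sigma ** 2)

rng = random.Random(42)
true_count = 120   # e.g. number of patients with a given diagnosis code
noisy = gaussian_mechanism(true_count, sigma=4.0, rng=rng)
eps = rdp_epsilon(order=2, sensitivity=1.0, sigma=4.0)   # 1/16
```

Adding a count changes the statistic by at most the sensitivity (here 1), so releasing only the noisy value bounds what any observer can infer about an individual record; larger σ tightens the guarantee at the cost of accuracy.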
9

Framework de geração de dados de teste para programas orientados a objetos / Test data generation framework for object-oriented software

Ferreira, Fernando Henrique Inocêncio Borba 13 December 2012 (has links)
Test data generation is a mandatory activity of the software testing process. In general, it is carried out by testing practitioners, which makes it costly and its automation necessary. Existing frameworks that support this activity are restricted, providing only a single data generation technique, a single fitness function to evaluate individuals, and a single selection algorithm. This work describes the JaBTeG (Java Bytecode Test Generation) framework for test data generation. The main characteristic of JaBTeG is to allow the development of test data generation methods by selecting the data generation technique, the fitness function, the selection algorithm and the structural testing criterion. Using JaBTeG, new test data generation techniques can be created and experimented with. The framework is associated with the JaBUTi (Java Bytecode Understanding and Testing) tool to support test data generation. Four data generation techniques, two fitness functions and four selection algorithms were developed to validate the approach proposed by the framework. In addition, five programs with different characteristics were tested with data generated using the methods provided by JaBTeG.
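The pluggable design described, in which the generation technique, fitness function and selection algorithm are chosen independently, can be sketched as follows. This is an illustrative Python analogue under assumed component signatures, not JaBTeG's actual Java API; the branch-distance fitness for `x == 37` is a hypothetical example:

```python
import random

class TestDataGenerator:
    """Minimal pluggable generator: the mutation technique, fitness
    function and selection policy are supplied as interchangeable parts."""

    def __init__(self, mutate, fitness, select):
        self.mutate, self.fitness, self.select = mutate, fitness, select
        self.rng = random.Random(0)   # fixed seed for reproducibility

    def run(self, population, generations=500):
        for _ in range(generations):
            offspring = [self.mutate(ind, self.rng) for ind in population]
            # Selection chooses the survivors from parents plus offspring.
            population = self.select(population + offspring,
                                     self.fitness, len(population))
            if any(self.fitness(ind) == 0 for ind in population):
                break                 # a covering test datum was found
        return min(population, key=self.fitness)

# Interchangeable components (all hypothetical examples):
mutate = lambda x, rng: x + rng.randint(-5, 5)
fitness = lambda x: abs(x - 37)               # branch distance to x == 37
select = lambda pop, fit, k: sorted(pop, key=fit)[:k]   # truncation selection

gen = TestDataGenerator(mutate, fitness, select)
best = gen.run(population=[0, 100])
```

Swapping in a different `fitness` (e.g. one derived from another structural criterion) or a different `select` changes the generation method without touching the search loop, which is the framework property the abstract emphasises.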
