51

Test case generation using symbolic grammars and quasirandom sequences

Felix Reyes, Alejandro 06 1900 (has links)
This work presents a new test case generation methodology that offers a high degree of automation (reducing cost) while providing increased defect-detection power (increasing benefit). Our solution is a variation of model-based testing that takes advantage of symbolic grammars (context-free grammars in which terminals are replaced by regular expressions representing their solution space) and quasi-random sequences to generate test cases. Previous test case generation techniques are enhanced with adaptive random testing to maximize input space coverage, and with selective and directed sentence generation techniques to optimize sentence generation. Our solution was tested by generating 200 firewall policies containing up to 20 000 rules from a generic firewall grammar. Our results show that our system generates test cases with superior coverage of the input space, increasing the probability of defect detection while considerably reducing the number of test cases needed compared with previously used approaches. / Software Engineering and Intelligent Systems
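To illustrate the idea sketched in this abstract, here is a minimal, hypothetical Python sketch (not the thesis's implementation): terminals of a toy firewall-rule grammar are regular expressions paired with samplers over their solution space, and a Halton low-discrepancy (quasi-random) sequence drives each choice so that successive test cases spread evenly over the input space. The grammar, names, and bases are illustrative assumptions.

```python
import re

def halton(index: int, base: int) -> float:
    """index-th element of the Halton low-discrepancy sequence in the given base, in [0, 1)."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Symbolic terminals: a regex describing the lexical form plus a sampler that
# maps a quasi-random u in [0, 1) onto the terminal's solution space.
TERMINALS = {
    "ACTION": (r"accept|deny", lambda u: ["accept", "deny"][int(u * 2)]),
    "PORT":   (r"\d{1,5}",     lambda u: str(int(u * 65536))),
    "OCTET":  (r"\d{1,3}",     lambda u: str(int(u * 256))),
}

def firewall_rule(i: int) -> str:
    """Expand 'ACTION from OCTET.OCTET.OCTET.OCTET port PORT' for test case i."""
    bases = iter([2, 3, 5, 7, 11, 13])  # one prime base per terminal occurrence (one dimension each)
    def t(name):
        regex, sample = TERMINALS[name]
        value = sample(halton(i, next(bases)))
        assert re.fullmatch(regex, value)  # generated value must stay inside the terminal's language
        return value
    ip = ".".join(t("OCTET") for _ in range(4))
    return f"{t('ACTION')} from {ip} port {t('PORT')}"

for i in range(1, 6):
    print(firewall_rule(i))
```

Because consecutive Halton points avoid clustering, the generated rules cover the action/address/port space more evenly than pseudo-random sampling would for the same number of test cases.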
52

Test case generation using symbolic grammars and quasirandom sequences

Felix Reyes, Alejandro Unknown Date
No description available.
53

Génération automatique de test pour les contrôleurs logiques programmables synchrones / Automated test generation for synchronous programmable logic controllers

Tka, Mouna 02 June 2016 (has links)
Ce travail de thèse, effectué dans le cadre du projet FUI Minalogic Bluesky, porte sur le test fonctionnel automatisé d'une classe particulière de contrôleurs logiques programmables (em4) produite par InnoVista Sensors. Ce sont des systèmes synchrones qui sont programmés au moyen d'un environnement de développement intégré (IDE). Les personnes qui utilisent et programment ces contrôleurs ne sont pas nécessairement des programmeurs experts. Le développement des applications logicielles doit être par conséquent simple et intuitif. Cela devrait également être le cas pour les tests. Même si les applications définies par ces utilisateurs ne sont pas nécessairement très critiques, il est important de les tester d'une manière adéquate et efficace. Un simulateur inclus dans l'IDE permet aux programmeurs de tester leurs programmes d'une façon qui reste à ce jour informelle et interactive en entrant manuellement des données de test. En se basant sur des recherches précédentes dans le domaine du test des programmes synchrones, nous proposons un nouveau langage de spécification de test, appelé SPTL (Synchronous Programs Testing Language), qui rend possible d'exprimer simplement des scénarios de test qui peuvent être exécutés à la volée pour générer automatiquement des séquences d'entrée de test. Il permet aussi de décrire l'environnement où évolue le système pour mettre des conditions sur les entrées afin d'arriver à des données de test réalistes et de limiter celles qui sont inutiles. SPTL facilite cette tâche de test en introduisant des notions comme les profils d'utilisation, les groupes et les catégories. Nous avons conçu et développé un prototype, nommé "Testium", qui traduit un programme SPTL en un ensemble de contraintes exploitées par un solveur Prolog qui choisit aléatoirement les entrées de test. La génération de données de test s'appuie ainsi sur des techniques de programmation logique par contraintes. Pour l'évaluer, nous avons expérimenté cette méthode sur des exemples d'applications em4 typiques et réels. Bien que SPTL ait été évalué sur em4, son utilisation peut être envisagée pour la validation d'autres types de contrôleurs ou systèmes synchrones. / This thesis work, carried out in the context of the FUI Minalogic Bluesky project, concerns the automated functional testing of a particular class of programmable logic controllers (em4) produced by InnoVista Sensors. These are synchronous systems that are programmed by means of an integrated development environment (IDE). The people who use and program these controllers are not necessarily expert programmers. The development of software applications should therefore be simple and intuitive, and this should also be the case for testing. Although the applications defined by these users are not necessarily very critical, it is important to test them adequately and effectively. A simulator included in the IDE allows programmers to test their programs in a way that remains informal and interactive, by manually entering test data. Based on previous research in the area of testing synchronous programs, we propose a new test specification language, called SPTL (Synchronous Programs Testing Language), which makes it possible to simply express test scenarios that can be executed on the fly to automatically generate test input sequences. It also allows describing the environment in which the system evolves, placing conditions on the inputs so as to arrive at realistic test data and to limit unnecessary ones.
SPTL facilitates this testing task by introducing concepts such as usage profiles, groups, and categories. We have designed and developed a prototype, named "Testium", which translates an SPTL program into a set of constraints exploited by a Prolog solver that randomly selects the test inputs. Test data generation thus relies on constraint logic programming techniques. To assess the approach, we experimented with this method on typical and realistic examples of em4 applications. Although SPTL was evaluated on em4, its use can be envisaged for the validation of other types of synchronous controllers or systems.
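The Testium prototype itself translates SPTL into Prolog constraints; as a language-neutral illustration of the underlying idea, the following hypothetical Python sketch expresses environment constraints as predicates over a controller's inputs and draws random, constraint-satisfying input vectors, one per synchronous cycle. The variable names, domains, and constraints are illustrative assumptions, not taken from the thesis.

```python
import random

# Input variables of a small synchronous controller and their domains.
DOMAINS = {
    "start_button": [False, True],
    "stop_button":  [False, True],
    "temperature":  range(-20, 121),   # degrees Celsius
}

# Environment constraints ("realistic" inputs only): each is a predicate
# over a candidate input vector.
CONSTRAINTS = [
    lambda v: not (v["start_button"] and v["stop_button"]),  # buttons are exclusive
    lambda v: -10 <= v["temperature"] <= 90,                  # plausible operating range
]

def random_input_vector(rng: random.Random) -> dict:
    """Draw candidate vectors at random until one satisfies every constraint."""
    while True:
        candidate = {name: rng.choice(list(domain)) for name, domain in DOMAINS.items()}
        if all(c(candidate) for c in CONSTRAINTS):
            return candidate

def test_sequence(length: int, seed: int = 0) -> list[dict]:
    """Generate one test input sequence: one constrained input vector per cycle."""
    rng = random.Random(seed)
    return [random_input_vector(rng) for _ in range(length)]

for cycle, vector in enumerate(test_sequence(5)):
    print(cycle, vector)
```

A real constraint-logic-programming solver would enumerate satisfying values directly rather than by rejection sampling, but the role of the environment constraints, ruling out unrealistic inputs before random selection, is the same.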
54

Investigating Metrics that are Good Predictors of Human Oracle Costs: An Experiment

Kartheek Arun Sai Ram, Chilla, Kavya, Chelluboina January 2017 (has links)
Context. Human oracle cost, the cost of having humans manually assess whether the output for a given test input is correct, is significant and is a concern in the software test data generation field. This study was designed to assess metrics that might predict human oracle cost. Objectives. The major objective of this study is to address human oracle cost; to this end the study identifies metrics that are good predictors of human oracle cost and can further help to address the oracle problem. In this process, suitable metrics identified from the literature are applied to the test input, to see if they can help in predicting the correctness of the output for the given test input. Methods. Initially a literature review was conducted to find metrics that are relevant to the test data. Besides the aforementioned metrics, our literature review also tried to find possible code metrics that can be applied to test data. Before conducting the actual experiment, two pilot experiments were conducted. To accomplish our research objectives, an experiment was conducted at BTH with master's students as the sample population. Group interviews were then conducted to check whether the participants perceived any new metrics that might impact the correctness of the output. The data obtained from the experiment and the interviews were analyzed using a linear regression model in the SPSS suite. Further, to analyze the accuracy vs. metric data, a linear discriminant model in the SPSS suite was used. Results. Our literature review resulted in 4 metrics suitable for our study. As our test input is HTML, we took HTML depth, size, compression size, and number of tags as our metrics. From the group interviews another 4 metrics were drawn, namely the number of lines of code and the numbers of <div>, anchor <a>, and paragraph <p> tags as individual metrics. The linear regression model, which analyses the time vs. metric data, shows significant results, but with multicollinearity affecting the result there was no variance among the considered metrics; the results of our study are therefore reported after adjusting for multicollinearity. In addition, a linear discriminant model analysing the accuracy vs. metric data was used to predict the metrics that influence accuracy. The results of our study show that the metrics positively correlate with time and accuracy. Conclusions. From the time vs. metric data, when multicollinearity is adjusted for by applying a step-wise regression reduction technique, the program size, compression size, and number of <div> tags influence the time taken by the sample population. From the accuracy vs. metric data, the number of <div> tags and the number of lines of code influence the accuracy of the sample population.
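As an illustration of the kind of analysis described above (on synthetic data, not the study's data), the following Python sketch fits an ordinary least squares model of evaluation time on HTML metrics and applies a simple backward-elimination (step-wise) reduction that drops the least significant metric until all remaining predictors are significant, one crude way of coping with multicollinearity among correlated size metrics. The data, coefficients, and significance threshold are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
size = rng.integers(500, 20000, n)
# Correlated HTML metrics, as in real documents (the source of multicollinearity).
data = pd.DataFrame({
    "size":             size,
    "compression_size": (0.3 * size + rng.normal(0, 200, n)).astype(int),
    "depth":            rng.integers(3, 15, n),
    "num_tags":         (size / 40 + rng.normal(0, 10, n)).astype(int),
    "num_div":          rng.integers(0, 80, n),
})
# Synthetic "time to judge output correctness" response.
time_taken = 20 + 0.002 * data["size"] + 0.5 * data["num_div"] + rng.normal(0, 5, n)

def backward_eliminate(X: pd.DataFrame, y, alpha: float = 0.05):
    """Drop the least significant predictor until every remaining p-value is below alpha."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return model, cols
        cols.remove(worst)
    raise ValueError("no significant predictors remain")

model, kept = backward_eliminate(data, time_taken)
print("kept metrics:", kept)
print(model.summary())
```

With strongly correlated predictors such as size, compression size, and tag count, the full model's individual coefficients are unstable; the elimination step typically retains only a subset of them, which mirrors the adjustment described in the abstract.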
55

Automated Software Testing: A Study of the State of Practice

Rafi, Dudekula Mohammad, Reddy, Kiran Moses Katam January 2012 (has links)
Context: Software testing is expensive, labor intensive, and consumes a lot of time in a software development life cycle. There has always been a need in software testing to decrease testing time. This has led to a focus on Automated Software Testing (AST), because with automated testing, using specific tools, this effort can be dramatically reduced and the costs related to testing can decrease [11]. Manual Testing (MT) requires a lot of effort and hard work if measured in person-months [11]. Automated software testing helps to decrease the workload by handing some testing tasks over to computers. Computer systems are cheap, they are faster, they don't get bored, and they can work continuously over weekends. Due to these advantages, many researchers are working towards the automation of software testing, which can help to complete the task in less testing time [10]. Objectives: The main aims of this thesis are to 1) systematically classify contributions within AST, 2) identify the different benefits and challenges of AST, and 3) identify whether the benefits and challenges reported in the literature are prevalent in industry. Methods: To fulfill our aims and objectives, we used the systematic mapping research methodology to systematically classify contributions within AST. We also used a systematic literature review (SLR) to identify the different benefits and challenges of AST. Finally, we performed a web-based survey to validate the findings of the SLR. Results: After performing the systematic mapping, the main aspects within AST include the purpose of automation, levels of testing, technology used, the different research types used, and the frequency of AST studies over time. From the systematic literature review, we found the benefits and challenges of AST. The benefits of AST include higher product quality, less testing time, reliability, increase in confidence, reusability, less human effort, reduction of cost, and increase in fault detection. The challenges include failure to achieve expected goals, difficulty in maintaining test automation, the time test automation needs to mature, false expectations, and a lack of people skilled in test automation tools. From the web survey, it is observed that almost all the benefits and challenges are prevalent in industry. The findings on fault detection and confidence are contrary to the results of the SLR. The challenge about the appropriate test automation strategy received 24% disagreement from the respondents and 30% uncertainty; the reason is that the automation strategy is totally dependent on the test manager of the project. When asked "Does automated software testing fully replace manual testing?", 80% disagreed with this challenge. Conclusion: The classification of the AST studies using systematic mapping gives an overview of the work done in the area of AST and also helps to establish the research coverage in the area. Researchers can use the gaps found in the mapping study to guide future work. The results of the SLR and the web survey clearly show that practitioners recognize the benefits and challenges of AST reported in the literature.
56

Material Artefact Generation

Rončka, Martin January 2019 (has links)
Obtaining a sufficiently large and high-quality dataset of images with clearly visible artefacts is not always easy, whether because of a shortage of source data or the difficulty of creating annotations. This applies, for example, to radiology as well as to mechanical engineering. To take advantage of modern, well-established machine learning methods used for classification, segmentation, and defect detection, the dataset needs to be sufficiently large and balanced. With small datasets we face problems such as overfitting and weak data, which cause misclassification at the expense of under-represented classes. This thesis explores the use of generative networks to extend and balance a dataset with newly generated images. Using Conditional Generative Adversarial Networks (CGAN) and a heuristic annotation generator, we are able to generate a large number of new images of parts with defects. A dataset of threads was used for the generation experiments. Two further datasets were also used: ceramics and MRI scans (BraTS). On these two datasets, we evaluate the influence of the generated data on training and the benefit for improving classification and segmentation.
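As an illustration of the conditional GAN idea referred to in this abstract (not the thesis code), the following minimal PyTorch sketch conditions both generator and discriminator on a class label (for example, defective vs. defect-free), so the trained generator can later be asked for images of a specific, under-represented class. The architecture, image size, and training step are illustrative assumptions.

```python
import torch
import torch.nn as nn

NOISE_DIM, NUM_CLASSES, IMG_PIXELS = 100, 2, 64 * 64  # flattened 64x64 grayscale images

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_PIXELS), nn.Tanh(),    # image values in [-1, 1]
        )

    def forward(self, noise, labels):
        # The label embedding is concatenated to the noise, conditioning generation on the class.
        return self.net(torch.cat([noise, self.label_emb(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),          # probability that the (image, label) pair is real
        )

    def forward(self, images, labels):
        return self.net(torch.cat([images, self.label_emb(labels)], dim=1))

def train_step(gen, disc, opt_g, opt_d, real_images, labels, loss=nn.BCELoss()):
    batch = real_images.size(0)
    fake_images = gen(torch.randn(batch, NOISE_DIM), labels)

    # Discriminator step: real pairs scored as 1, generated pairs as 0.
    opt_d.zero_grad()
    d_loss = loss(disc(real_images, labels), torch.ones(batch, 1)) + \
             loss(disc(fake_images.detach(), labels), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator score generated pairs as 1.
    opt_g.zero_grad()
    g_loss = loss(disc(fake_images, labels), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

After training, sampling the generator with the label of the rare class is what allows the dataset to be rebalanced before training the downstream classifier or segmentation model.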
57

Framework pro tvorbu generátorů dat / Framework for Data Generators

Kříž, Blažej January 2012 (has links)
This master's thesis is focused on the problem of data generation. At the beginning, it presents several applications for data generation and describes the data generation process. It then deals with the development of a framework for data generators and a demonstration application for validating the framework.
58

The Use of Big Data in Process Management: A Literature Study and Survey Investigation

Ephraim, Ekow Esson, Sehic, Sanel January 2021 (has links)
In recent years there has been increasing interest in understanding how organizations can utilize big data in their process management to create value and improve their processes. This is due to new challenges for process management arising from increasing competition and from the complexity of the large data sets produced by technological advancements. Such large data sets have been described by scholars as big data: data so complex that traditional data analysis software is not sufficient to manage or analyze them. Because of the complexity of handling such great volumes of data, there is a large gap in practical examples of organizations that have incorporated big data in their process management. Therefore, in order to fill relevant gaps and contribute to advancements in this field, this thesis explores how big data can contribute to improved process management. The aim of this thesis was to investigate how, why, and to what extent big data is used in process management, as well as to outline the purposes and challenges of using big data in process management. This was accomplished through a literature review and a survey, respectively, in order to understand how big data had previously been used to create value and improve processes in organizations. From the extensive literature review, an analysis matrix of how big data is used in process management is provided through the intersections between big data and process management dimensions. The analysis matrix showed that most of the instances in which big data was used in process management were in process analysis & improvement and in process control & agility. Simply put, organizations used big data in specific activities involved in process management, but not in a holistic manner. Furthermore, the limited findings from the survey indicate that the main challenge and the main purpose of big data use in Swedish organizations are, respectively, the complexity of handling data and making statistically better decisions.
59

Privacy-aware data generation: Using generative adversarial networks and differential privacy

Hübinette, Felix January 2022 (has links)
Today we are surrounded by IoT devices that constantly generate different kinds of data about their environment and their users. Much of this data could be useful for research and development, but a lot of the collected data is privacy-sensitive for the individual. To protect the individual's privacy, we have data protection laws, but these legal restrictions also dramatically reduce the amount of data available for research and development. It would therefore be beneficial to find a workaround that respects people's privacy, without breaking the law, while still maintaining the usefulness of the data. The purpose of this thesis is to show how we can generate privacy-aware data from a dataset by using Generative Adversarial Networks (GANs) and Differential Privacy (DP) while maintaining data utility. This is useful because it allows for the sharing of privacy-preserving data, so that the data can be used in research and development with concern for privacy. GANs are used for generating synthetic data; DP is a data anonymization technique. With the combination of these two techniques, we generate synthetic privacy-aware data from an existing open-source Fitbit dataset. The specific GAN model used is CTGAN, and differential privacy is achieved with the help of Gaussian noise. The results from the experiments show many similarities between the original dataset and the experimental datasets. The experiments performed very well on the Kolmogorov-Smirnov test, with the lowest p-value across all experiments being 0.92. The conclusion drawn is that this is another promising methodology for creating privacy-aware synthetic data that maintains reasonable data utility while still using DP techniques to achieve data privacy.
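As one possible way to combine the two ingredients named in this abstract (not necessarily the thesis's exact pipeline), the following Python sketch calibrates Gaussian noise with the standard (epsilon, delta) Gaussian mechanism, adds it to the numeric columns of a stand-in table, and then fits the open-source CTGAN model on the noised data before sampling synthetic records. The column names, privacy parameters, and per-column sensitivity bound are illustrative assumptions; the CTGAN calls refer to the `ctgan` package.

```python
import numpy as np
import pandas as pd
from ctgan import CTGAN          # assumes the open-source `ctgan` package is installed

def gaussian_mechanism(values: np.ndarray, sensitivity: float,
                       epsilon: float, delta: float,
                       rng: np.random.Generator) -> np.ndarray:
    """Add noise calibrated for (epsilon, delta)-differential privacy (requires epsilon < 1)."""
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon
    return values + rng.normal(0.0, sigma, size=values.shape)

rng = np.random.default_rng(42)
fitbit = pd.DataFrame({          # stand-in for the Fitbit dataset, not the real data
    "steps":      rng.integers(0, 30000, 500),
    "heart_rate": rng.integers(45, 180, 500),
})

private = fitbit.copy()
for column in ["steps", "heart_rate"]:
    # Crude per-column sensitivity bound taken from the observed range (illustrative only;
    # a real analysis would fix this bound independently of the data).
    span = float(fitbit[column].max() - fitbit[column].min())
    private[column] = gaussian_mechanism(fitbit[column].to_numpy(float),
                                         sensitivity=span, epsilon=0.9, delta=1e-5,
                                         rng=rng)

# Fit a CTGAN model on the noised table and draw synthetic records.
model = CTGAN(epochs=100)
model.fit(private)
synthetic = model.sample(1000)
print(synthetic.head())
```

A two-sample Kolmogorov-Smirnov test per column, as in the abstract, can then compare the marginal distributions of `fitbit` and `synthetic` to gauge how much utility the privacy step has preserved.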
60

Analysis and comparison of interfacing, data generation and workload implementation in BigDataBench 4.0 and Intel HiBench 7.0

Barosen, Alexander, Dalin, Sadok January 2018 (has links)
One of the major challenges in Big Data is the accurate and meaningful assessment of system performance. Unlike in other systems, minor differences in efficiency can escalate into large differences in costs and power consumption. While there are several tools on the market for measuring the performance of Big Data systems, few of them have been explored in depth. This report investigated the interfacing, data generation, and workload implementations of two Big Data benchmarking suites, BigDataBench and HiBench. The purpose of the study was to establish the capabilities of each tool with regard to interfacing, data generation, and workload implementation. An exploratory and qualitative approach was used to gather information and analyze each benchmarking tool. Source code, documentation, and reports published by the developers were used as information sources. The results showed that BigDataBench and HiBench were designed similarly with regard to interfacing and data flow during the execution of a workload, with the exception of streaming workloads. BigDataBench provided more realistic data generation, while the data generation for HiBench was easier to control. With regard to workload design, the workloads in BigDataBench were designed to be applicable to multiple frameworks, while the workloads in HiBench were focused on the Hadoop family. In conclusion, neither of the benchmarking suites was superior to the other; they were designed for different purposes and should be applied on a case-by-case basis. / En av de stora utmaningarna i Big Data är den exakta och meningsfulla bedömningen av systemprestanda. Till skillnad från andra system kan mindre skillnader i effektivitet eskalera till stora skillnader i kostnader och strömförbrukning. Medan det finns flera verktyg på marknaden för att mäta prestanda för Big Data-system, har få av dem undersökts djupgående. I denna rapport undersöktes gränssnittet, datagenereringen och arbetsbelastningen av två Big Data benchmarking-sviter, BigDataBench och HiBench. Syftet med studien var att fastställa varje verktygs kapacitet med hänsyn till de givna kriterierna. Ett utforskande och kvalitativt tillvägagångssätt användes för att samla information och analysera varje benchmarking verktyg. Källkod, dokumentation och rapporter som hade skrivits och publicerats av utvecklarna användes som informationskällor. Resultaten visade att BigDataBench och HiBench utformades på samma sätt med avseende på gränssnitt och dataflöde under utförandet av en arbetsbelastning med undantag för strömmande arbetsbelastningar. BigDataBench tillhandahöll mer realistisk datagenerering medan datagenerering för HiBench var lättare att styra. När det gäller arbetsbelastningsdesign var arbetsbelastningen i BigDataBench utformad för att kunna tillämpas på flera ramar, medan arbetsbelastningen i HiBench var inriktad på Hadoop-familjen. Sammanfattningsvis var ingen av benchmarkingsviterna överlägsen den andra. De var båda utformade för olika ändamål och bör tillämpas från fall till fall.
