61 |
Deterministisk Komprimering/Dekomprimering av Testvektorer med Hjälp av en Inbyggd Processor och Faxkodning / Deterministic Test Vector Compression/Decompression Using an Embedded Processor and Facsimile CodingPersson, Jon January 2005 (has links)
Modern semiconductor design methods make it possible to design increasingly complex systems-on-a-chip (SOCs). Testing such SOCs becomes highly expensive due to rapidly increasing test data volumes and, as a result, longer test times. Several approaches exist that compress the test stimuli and add hardware for decompression. This master's thesis presents a test data compression method based on a modified facsimile code. An embedded processor on the SOC is used to decompress the data and apply it to the cores of the SOC. Using already existing hardware reduces the need for additional hardware. Test data may be rearranged in several ways that affect the compression ratio; several such modifications are discussed and tested. To be realistic, a decompression algorithm has to be able to run on a system with limited resources. An assembly implementation shows that the proposed method can be realized effectively in such environments. Experimental results, in which the proposed method is applied to benchmark circuits, show that the method compares well with similar methods. A method of including the response vector is also presented. This approach makes it possible to abort a test as soon as an error is discovered, while still compressing the data used. To correctly compare the test response with the expected one, the data needs to include don't-care bits. The technique uses a mask vector to mark the don't-care bits. The test vector, response vector and mask vector are merged in four different ways to find the best arrangement.
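The masked response comparison described above can be sketched as follows. This is a minimal illustration only; the bitwise convention (1-bits in the mask mark don't-care positions) and the function name are assumptions, not details from the thesis:

```python
def response_matches(expected: int, actual: int, mask: int) -> bool:
    """Compare an actual test response against the expected one,
    ignoring bit positions marked as don't-care in the mask.

    Assumed convention: a 1-bit in `mask` marks a don't-care position,
    so only positions where the mask is 0 are compared."""
    care = ~mask
    return (expected & care) == (actual & care)

# Example: expected 0b1010, actual 0b1110, bit 2 marked don't-care.
print(response_matches(0b1010, 0b1110, 0b0100))  # True: the only mismatch is masked
print(response_matches(0b1010, 0b0110, 0b0100))  # False: bit 3 differs and is compared
```

How the three vectors are interleaved changes the run lengths the facsimile code sees, which is why the merge order affects the compression ratio.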
|
62 |
Test case generation using symbolic grammars and quasirandom sequencesFelix Reyes, Alejandro 06 1900 (has links)
This work presents a new test case generation methodology with a high degree of automation (reducing cost) and increased defect-detection power (increasing benefit). Our solution is a variation of model-based testing that takes advantage of symbolic grammars (context-free grammars in which terminals are replaced by regular expressions representing their solution space) and quasirandom sequences to generate test cases.
Previous test case generation techniques are enhanced with adaptive random testing, to maximize input space coverage, and with selective and directed sentence generation techniques, to optimize sentence generation.
Our solution was tested by generating 200 firewall policies containing up to 20,000 rules from a generic firewall grammar. The results show that our system generates test cases with superior coverage of the input space, increasing the probability of defect detection while considerably reducing the number of test cases needed compared with previously used approaches. / Software Engineering and Intelligent Systems
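Quasirandom (low-discrepancy) sequences of the kind mentioned above can be produced, for example, with Halton sequences; the sketch below is a generic illustration of the technique, not code from the thesis:

```python
def halton(index: int, base: int) -> float:
    """Return element `index` (1-based) of the Halton sequence in `base`.
    Successive elements fill the unit interval far more evenly than
    pseudorandom draws, which is what improves input-space coverage."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# 2D quasirandom points: pair Halton sequences with coprime bases.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
print(points)  # first point is (0.5, 0.333...)
```

Each coordinate of a test input can be mapped from such a point, so consecutive test cases land in regions of the input space not yet covered.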
|
63 |
From Science to Policy : Improving environmental risk assessment and management of chemicalsÅgerstrand, Marlene January 2012 (has links)
A complex process like risk assessment, and the subsequent risk management decision making, should be regularly evaluated in order to assess the need to improve its workings. In this thesis three related matters are addressed: evaluation of environmental risk management strategies, evaluation of environmental risk assessments, and how ecotoxicity data from the open scientific literature can be used in a systematic way in regulatory risk assessments. This has resulted in the following: a publicly available database with ecotoxicity data for pharmaceuticals (Paper I); an evaluation and review of the Swedish Environmental Classification and Information System for pharmaceuticals (Papers II and III); a comparison of current reliability evaluation methods and a reliability evaluation of ecotoxicity data (Paper IV); and an improved reliability and relevance reporting and evaluation scheme (Paper V). There are three overall conclusions from this thesis: (1) Ecotoxicity data from the open scientific literature is not used to the extent it could be in regulatory risk assessment of chemicals. Major reasons for this are that regulators prefer standard data, and that research studies in the open scientific literature can be reported in a way that affects their reliability and user-friendliness. To enable more efficient use of available data, action must be taken by researchers, editors, and regulators. A more structured reliability and relevance evaluation is needed to reach the goal of transparent, robust and predictable risk assessments. (2) A risk assessment is the result of the selected data and the selected methods used in the process. A transparent procedure, with clear justifications of the choices made, is therefore necessary to enable external review. The risk assessments conducted within the Swedish Environmental Classification and Information System for pharmaceuticals vary in their transparency and choice of method. 
This could affect the credibility of the system, since risk assessments are not always consistent and guidelines are not always followed. (3) The Swedish Environmental Classification and Information System for pharmaceuticals contributes, in its current form, to data availability and transparency but not to risk reduction. The system has contributed to the general discussion about pharmaceuticals' effect on the environment and has made data publicly available. However, this is not sufficient for an effective risk reduction tool. / QC 20121119 / MistraPharma / Formas - Evaluation of the Swedish Environmental Classification and Information System for Pharmaceuticals.
|
64 |
POD Approach for Aeroelastic Updating / Approche POD pour le Recalage du Modele AeroelastiqueVetrano, Fabio 17 December 2014 (has links)
Although computational methods can provide good results, they usually do not agree exactly with flight test data, due to uncertainties in the structural and aerodynamic computational models. An effective method is therefore required for updating computational aeroelastic models using flight test data, along with Ground Vibration Test (GVT) data and wind tunnel data. First, all developments were validated on a 2D wing section and on a simple 3D model; the POD approach was then applied to an industrial configuration (a wing-fuselage wind tunnel model and a complete aircraft model).
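The POD basis at the heart of this approach extracts the most energetic modes from snapshot data. The pure-Python sketch below finds the dominant mode by power iteration on the snapshot covariance, as a generic illustration of POD rather than the thesis implementation:

```python
def dominant_pod_mode(snapshots, iters=100):
    """Return the most energetic POD mode of a set of snapshot vectors,
    found by power iteration on the (unformed) snapshot covariance
    matrix C = sum_k s_k s_k^T."""
    n = len(snapshots[0])
    v = [1.0] * n
    for _ in range(iters):
        # Apply C to v without forming C: w = sum_k (s_k . v) s_k
        w = [0.0] * n
        for s in snapshots:
            coeff = sum(si * vi for si, vi in zip(s, v))
            for i in range(n):
                w[i] += coeff * s[i]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Snapshots that all point (noisily) along [1, 1]: the mode aligns with it.
snaps = [[1.0, 1.1], [0.9, 1.0], [1.05, 0.95]]
mode = dominant_pod_mode(snaps)
assert abs(abs(mode[0]) - abs(mode[1])) < 0.2
```

Projecting the full aeroelastic model onto a few such modes yields the reduced-order model whose parameters are then updated against test data.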
|
66 |
Génération automatique de test pour les contrôleurs logiques programmables synchrones / Automated test generation for logical programmable synchronous controllersTka, Mouna 02 June 2016 (has links)
This thesis work, done in the context of the FUI project Minalogic Bluesky, concerns the automated functional testing of a particular class of programmable logic controllers (em4) produced by InnoVista Sensors. These are synchronous systems that are programmed by means of an integrated development environment (IDE). The people who use and program these controllers are not necessarily expert programmers, so the development of software applications should be simple and intuitive; the same should hold for testing. Although the applications defined by these users need not be very critical, it is important to test them adequately and effectively. A simulator included in the IDE allows programmers to test their programs, but in a way that remains informal and interactive, by manually entering test data.
Based on previous research in the area of testing synchronous programs, we propose a new test specification language, called SPTL (Synchronous Programs Testing Language), which makes it possible to express simple test scenarios that can be executed on the fly to automatically generate test input sequences. It also allows describing the environment in which the system evolves, putting conditions on inputs in order to arrive at realistic test data and to limit unnecessary ones. SPTL facilitates the testing task by introducing concepts such as usage profiles, groups and categories. We have designed and developed a prototype, named "Testium", which translates an SPTL program into a set of constraints used by a Prolog solver that randomly selects the test inputs; test data generation is thus based on constraint logic programming techniques.
To assess the approach, we experimented with this method on realistic and typical examples of em4 applications. Although SPTL was evaluated on em4, its use can be envisaged for the validation of other types of synchronous controllers or systems.
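The generation step can be illustrated in miniature: candidate inputs are drawn at random and kept only when they satisfy the environment constraints. The Python sketch below is a simplified stand-in for the Prolog constraint solver, and all variable and field names are hypothetical:

```python
import random

def generate_inputs(constraints, n, seed=0):
    """Randomly draw candidate input vectors and keep those satisfying
    every constraint -- a crude stand-in for constraint solving."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n:
        candidate = {"temp": rng.randint(-20, 60),
                     "door_open": rng.choice([True, False])}
        if all(check(candidate) for check in constraints):
            accepted.append(candidate)
    return accepted

# Environment condition: the door sensor never reports open below freezing.
constraints = [lambda v: not (v["door_open"] and v["temp"] < 0)]
tests = generate_inputs(constraints, 5)
assert all(not (t["door_open"] and t["temp"] < 0) for t in tests)
```

A real constraint solver propagates the conditions instead of rejecting samples, but the effect is the same: only realistic inputs reach the controller under test.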
|
68 |
Automated Software Testing : A Study of the State of PracticeRafi, Dudekula Mohammad, Reddy, Kiran Moses Katam January 2012 (has links)
Context: Software testing is expensive, labor intensive and consumes a lot of time in the software development life cycle, so there has always been a need to decrease testing time. This has led to a focus on Automated Software Testing (AST): with specific tools, the testing effort can be dramatically reduced and the costs related to testing can decrease [11]. Manual Testing (MT) requires a lot of effort and hard work when measured in person-months [11]. Automated software testing helps to decrease the workload by delegating some testing tasks to computers, which are cheap, fast, do not get bored and can work continuously through weekends. Because of these advantages, many researchers are working towards the automation of software testing, which can help to complete the task in less testing time [10]. Objectives: The main aims of this thesis are: 1) to systematically classify contributions within AST; 2) to identify the different benefits and challenges of AST; 3) to identify whether the benefits and challenges reported in the literature are prevalent in industry. Methods: To fulfill these aims and objectives, we used the systematic mapping research methodology to systematically classify contributions within AST. We also used a systematic literature review (SLR) to identify the different benefits and challenges of AST. Finally, we performed a web-based survey to validate the findings of the SLR. Results: The systematic mapping identified the main aspects within AST, including the purpose of automation, levels of testing, technology used, the research types used and the frequency of AST studies over time. From the systematic literature review, we identified the benefits and challenges of AST. The benefits include higher product quality, less testing time, reliability, increased confidence, reusability, less human effort, cost reduction and increased fault detection. 
The challenges include failure to achieve expected goals, difficulty in maintaining test automation, the time test automation needs to mature, false expectations and a lack of people skilled in test automation tools. The web survey showed that almost all the benefits and challenges are prevalent in industry. The findings on fault detection and confidence are contrary to the results of the SLR. The challenge concerning the appropriate test automation strategy drew 24% disagreement and 30% uncertainty from the respondents; the reason is that the automation strategy depends entirely on the test manager of the project. When asked whether automated software testing can fully replace manual testing, 80% disagreed. Conclusion: The classification of AST studies using systematic mapping gives an overview of the work done in the area and helps to determine research coverage in AST. These results can be used by researchers to address the gaps found in the mapping study in future work. The results of the SLR and the web survey clearly show that practitioners recognize the benefits and challenges of AST reported in the literature.
|
70 |
Framework pro tvorbu generátorů dat / Framework for Data GeneratorsKříž, Blažej January 2012 (has links)
This master's thesis is focused on the problem of data generation. It first presents several applications for data generation and describes the data generation process, then deals with the development of a framework for data generators and a demonstration application for validating the framework.
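A generator built with such a framework can be pictured as a composition of per-field value generators. The sketch below is purely illustrative and assumes a schema-based design that the abstract does not detail:

```python
import itertools
import random

def make_record_generator(schema):
    """Return a function producing one record per call, where `schema`
    maps field names to zero-argument value generators."""
    def gen():
        return {field: produce() for field, produce in schema.items()}
    return gen

ids = itertools.count(1)
rng = random.Random(42)
schema = {
    "id": lambda: next(ids),                         # sequential identifier
    "score": lambda: round(rng.uniform(0, 100), 1),  # random measurement
}
gen = make_record_generator(schema)
rows = [gen() for _ in range(3)]
assert [r["id"] for r in rows] == [1, 2, 3]
```

Composing field generators this way lets the same framework serve different target formats: only the schema changes, not the generation loop.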
|