About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A complementary approach for testing system robustness based on passive testing and fault injection techniques

Bessayah, Fayçal 03 December 2010 (has links)
Robustness is a specialized dependability attribute characterizing a system's reaction to external faults. Accordingly, robustness testing involves testing a system in the presence of faults or stressful environmental conditions to study its behavior under abnormal conditions. System robustness can be tested either empirically or formally. Fault injection techniques are well suited to assessing the robustness of the tested system; however, they do not rely on formal test oracles to validate their results. Existing formal approaches to robustness testing, on the other hand, formalize both the fault generation and the result analysis process, but they have limitations regarding the types of faults handled and the kinds of systems to which they can be applied.
The work presented in this thesis aims to address some of the shortcomings of existing robustness testing methods. First, we propose a formal approach for specifying and verifying the fault injection process. It consists in formalizing the injected faults as a set of Hoare triples and then verifying the correct execution of the injection campaigns with a passive testing algorithm that checks the fault specification against a collected injection trace. Our second contribution provides a test oracle for verifying real-time constraints: a passive testing algorithm that checks real-time requirements, specified as a set of XCTL (eXplicit Clock Temporal Logic) formulas, on collected execution traces. We then propose a new, complementary robustness testing approach that combines fault injection and passive testing. Fault injection is used to create stressful environmental conditions, while passive testing verifies the robustness properties on the collected execution traces: the injected faults are specified as Hoare triples and checked against the injection trace to validate the injection process, and the robustness requirements are formalized as XCTL formulas and verified on the execution traces. This approach allows one to inject a wide range of faults and can be used to test both simple and distributed systems. Finally, we propose an instantiation of our robustness testing approach for Web services. We chose Web services because the technology supports widely distributed, heterogeneous systems and is therefore a good application example for demonstrating the efficiency of our approach.
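The core of the first contribution — checking a Hoare-triple fault specification against a collected injection trace — can be sketched as follows. All names and the trace format are illustrative assumptions, not the thesis's actual tooling:

```python
# Hypothetical sketch: verifying a fault-injection campaign against
# Hoare-triple specifications on a collected injection trace.

def check_injection(trace, triples):
    """Each triple is (precondition, fault_name, postcondition), where
    pre/post are predicates over the system state recorded in the trace.
    The injection of `fault_name` is judged correct only if the state
    logged just before it satisfies the precondition and the state
    logged just after it satisfies the postcondition."""
    verdicts = []
    for pre, fault, post in triples:
        ok = False
        for i, (event, state) in enumerate(trace):
            if event == fault:
                before = trace[i - 1][1] if i > 0 else state
                after = trace[i + 1][1] if i + 1 < len(trace) else state
                ok = pre(before) and post(after)
                break
        verdicts.append((fault, ok))
    return verdicts

# Example: a "drop_message" fault must be injected while the channel is
# open, and must leave the sent-message counter unchanged.
trace = [
    ("open_channel", {"open": True, "sent": 3}),
    ("drop_message", {"open": True, "sent": 3}),
    ("post_state",   {"open": True, "sent": 3}),
]
triples = [
    (lambda s: s["open"], "drop_message", lambda s: s["sent"] == 3),
]
print(check_injection(trace, triples))  # [('drop_message', True)]
```

This is passive in the thesis's sense: the checker only observes the logged trace; it never drives the system itself.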
2

A Search-Based Approach for Robustness Testing of Web Applications

Gurram, Karthik, Chappidi, Maheshwar Reddy January 2019 (has links)
Context: This thesis deals with the robustness testing of web applications in different web browsers, using a Selenium WebDriver to automate the browser. To increase the efficiency of this automated testing, we use a robustness method: testing the behaviour of a system implementation under exceptional execution conditions to check whether it still fulfils its robustness requirements. Such robustness tests often apply random algorithms to select the actions to be executed on the web application. Here, a search-based technique was used to automatically generate effective test cases consisting of initial conditions and fault sequences. The success criterion in most cases is: "if it does not crash or hang the application, then it is robust". Problem: Software testing consumes a lot of time, is labour-intensive for writing test cases, and is expensive within the software development life cycle, so there is a constant need to decrease testing time. Manual testing requires considerable effort and hard work when measured in person-months [1]. To overcome this problem, we use a search-based approach for the robustness testing of web applications, which can dramatically reduce the human effort, time, and costs related to testing. Objective: The purpose of this thesis is to develop an automated approach for carrying out robustness testing of web applications, focusing on revealing defects related to sequences of events triggered by a web system. To do so, we employ search-based techniques (e.g., the NSGA-II algorithm [1]). The main focus is on Ericsson Digital BSS systems, with special attention to robustness testing. A further purpose is to investigate how automated robustness testing can be done so that the effort of keeping the tests up to date is minimized when the functionality of the application changes. This kind of test automation depends strongly on the structure of the product being tested.
In this thesis, the test object was structured in a way that made the testing method simple for fault revelation and less time-consuming. Method: A meta-heuristic search-based genetic algorithm is used to improve the efficiency of robustness testing of the web application. To evaluate the effectiveness of the proposed approach, an experimental procedure was adopted and an experimental testbed was set up. The effectiveness of the proposed approach is measured by two objectives: fault revelation and test sequence length. It is also measured by evaluating the feasibility and cost-effectiveness of the output test cases. Results: The results collected from our approach show that by reducing the test sequence length we can reduce the time consumed, and that by using the NSGA-II algorithm we found many faults when testing web applications at Ericsson. Conclusion: The attempt at robustness testing of web applications partly succeeded. The robustness testing in our approach depended strongly on the algorithm used. We conclude that by using these two objectives, we can reduce both the cost of testing and the time consumed.
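The two-objective trade-off driving the search — maximizing fault revelation while minimizing test sequence length — can be sketched with a plain Pareto filter. This is a simplified stand-in for full NSGA-II (no crossover, mutation, or crowding distance), and the candidate data are invented for the example:

```python
# Illustrative sketch of the two-objective selection behind the
# search-based approach: keep the non-dominated test sequences.

def dominates(a, b):
    """a = (faults_found, seq_length). a dominates b if it is at least
    as good on both objectives (no fewer faults, no longer sequence)
    and strictly better on at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(candidates):
    # A candidate survives if no other candidate dominates it.
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (faults revealed, sequence length) for hypothetical test sequences
candidates = [(3, 10), (3, 7), (5, 12), (2, 4), (5, 15)]
print(pareto_front(candidates))  # [(3, 7), (5, 12), (2, 4)]
```

NSGA-II layers this non-dominated sorting with genetic operators to evolve the candidate pool over generations; the filter above is only the selection criterion.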
3

Test and Evaluation of the Robustness of the Functional Layer of an Autonomous Robot

Chu, Hoang-Nam 01 September 2011 (has links)
The implementation of autonomous systems requires the development and use of suitable multi-layer software architectures. Typically, a functional layer contains modules that control the system's hardware and provide elementary services. To be robust, the functional layer must implement protection mechanisms against erroneous or inopportune requests sent from the superior layer. We present a methodology for testing the robustness of these mechanisms. We define a general framework for evaluating the robustness of a functional layer by characterizing its behavior with respect to inopportune requests, and we propose a validation environment based on fault injection into the control software of a simulated robot. A large number of test cases is generated automatically by mutating a sequence of valid requests.
Descriptive statistics of the behaviors observed in the presence of inopportune requests are analyzed in order to evaluate the robustness of the system under test.
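The mutation step — deriving inopportune request sequences from one valid sequence — can be sketched as below. The request vocabulary and the mutation operators are invented for illustration; they are not the thesis's actual test harness:

```python
# Minimal sketch: generate robustness test cases by mutating one
# request in a valid sequence (replace, duplicate, or delete).

import random

REQUESTS = ["init", "move_to", "grab", "release", "stop"]

def mutate(sequence, rng):
    """Apply one random mutation at a random position and return the
    mutated copy; the original sequence is left untouched."""
    seq = list(sequence)
    i = rng.randrange(len(seq))
    op = rng.choice(["replace", "duplicate", "delete"])
    if op == "replace":
        # Swap in a different request name at position i.
        seq[i] = rng.choice([r for r in REQUESTS if r != seq[i]])
    elif op == "duplicate":
        seq.insert(i, seq[i])
    else:
        del seq[i]
    return seq

rng = random.Random(42)
valid = ["init", "move_to", "grab", "move_to", "release", "stop"]
campaign = [mutate(valid, rng) for _ in range(100)]
# Every mutant differs from the valid sequence by exactly one
# substitution, insertion, or deletion.
print(sum(m != valid for m in campaign))  # 100
```

Each mutant is then submitted to the functional layer, and the observed behavior feeds the descriptive statistics mentioned above.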
4

Optimization of Acidic Degradation of Hyaluronic Acid using Design of Experiments

Sandqvist Wedin, Emma January 2019 (has links)
Hyaluronic acid (HA) is an unbranched polysaccharide consisting of the repeating disaccharide unit β(1→4)-GlcA-β(1→3)-GlcNAc and is a naturally occurring biopolymer in bacteria and vertebrates. HA is predominantly found in the extracellular matrix (ECM), and its in vivo function can vary with molecular weight (Mw); for instance, high-Mw HA is reported to be anti-angiogenic while low-Mw HA induces angiogenesis. HA is a popular component of hydrogels such as dermal fillers, although other materials, such as other polymers, can be used as well. The project goal was to investigate different degradation processes for producing HA of a target Mw. Alkaline and acidic degradation in combination with elevated temperature appeared to be the most promising methods. Degradation tests performed both in aqueous solution and heterogeneously in ethanol were evaluated, and acidic degradation in aqueous solution proved to have the largest degradation constant. Both a robustness test and a Design of Experiments (DoE) were performed to investigate the influence of different factors on the degradation speed. The investigated factors were HA concentration, HCl concentration, and temperature. Temperature and HCl concentration proved to be the most influential factors, and a model was developed in the DoE software MODDE to describe how the factors influenced the degradation constant. The model was established as a good, significant model, with a Q2 value of 0.998 and a relative standard deviation (RSD) of 0.022, after a logarithmic transformation was performed and the model was simplified by excluding some of the factor interactions. The acidic degradation method also proved to be highly robust and could easily be used to produce HA of a target Mw.
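The modelling step can be illustrated with a toy version of the same idea: after a logarithmic transformation, the degradation constant is fitted as a linear model in the studied factors. The coefficients and data below are synthetic stand-ins, not the thesis's MODDE model:

```python
# Hedged sketch: fit log(k) = b0 + b1*T + b2*[HCl] by ordinary least
# squares on synthetic data, mimicking the DoE model structure.

import numpy as np

rng = np.random.default_rng(0)
temp = rng.uniform(40, 80, 20)          # temperature, °C (invented range)
hcl = rng.uniform(0.1, 1.0, 20)         # HCl concentration, mol/L
true_log_k = -8.0 + 0.05 * temp + 2.0 * hcl   # assumed "true" model
log_k = true_log_k + rng.normal(0, 0.01, 20)  # measured with small noise

# Design matrix with an intercept column, then least-squares fit
X = np.column_stack([np.ones_like(temp), temp, hcl])
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
print(np.round(coef, 2))  # coefficients close to [-8.0, 0.05, 2.0]
```

With the model in hand, one can invert it to pick the temperature and acid concentration that give the degradation constant needed for a target Mw, which is the practical use made of the MODDE model in the thesis.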
5

Web services testability

Rabhi, Issam 09 January 2012 (has links)
This PhD thesis addresses several forms of automated Web service testing: the first part is dedicated to functional testing through robustness testing, and the second extends this work to the testing of non-functional properties, such as testability and security. We explore these issues from both a theoretical and a practical perspective. We propose a robustness testing method that generates and executes test cases automatically from WSDL descriptions, covering both stateless and stateful (persistent) Web services. It evaluates the robustness of a Web service with respect to the operations declared in its WSDL description by examining the responses received when these operations are invoked with hazards, and it improves the detection of robustness issues by separating the SOAP processor's behavior from that of the Web service itself. Stateful Web services are modeled with symbolic transition systems (STS). A second method, dedicated to stateful Web services, consists in completing the Web service specification to describe both correct and incorrect behaviors; using this completed specification, the Web services are tested with relevant hazards and a verdict is returned. We also study the testability of Web services composed with the BPEL language, with respect to a well-known testability criterion, observability. We describe the observability problems that reduce the feasibility of Web service testing, evaluate testability factors, and propose solutions to improve them. To do so, we first transform the ABPEL specification into an STS by recursively converting each structured activity into a graph of sub-activities, and then propose enhancement algorithms that reduce these testability problems.
Finally, we present a security testing method for stateful Web services. It evaluates security properties such as authentication, authorization, and availability through a set of rules written in the formal language Nomad. The method first transforms these rules into test purposes based on the WSDL description, then completes the specification of the stateful Web service in parallel, and finally computes the synchronized product to generate the test cases. To validate our proposal, we applied the approach to real-size case studies.
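The WSDL-driven robustness idea — invoke each declared operation with hazard values and classify the response — can be sketched as below. The operation table and the toy service are illustrative stand-ins for a real WSDL/SOAP stack:

```python
# Minimal sketch of a hazard-based robustness campaign: each declared
# operation is invoked with boundary and malformed values, and an
# unhandled crash (rather than a clean fault response) counts as a
# robustness failure.

HAZARDS = {"int": [0, -1, 2**31, None], "string": ["", "a" * 10_000, None]}

def robustness_campaign(operations, invoke):
    """operations: {op_name: [parameter types]}. invoke(name, args)
    returns a response or raises. Raising (a crash) is a robustness
    failure; returning a fault/error response is acceptable behavior."""
    verdicts = {}
    for name, types in operations.items():
        failures = 0
        for t in types:
            for hazard in HAZARDS[t]:
                try:
                    invoke(name, [hazard])
                except Exception:
                    failures += 1   # service crashed instead of rejecting
        verdicts[name] = failures
    return verdicts

# Toy service: 'divide' crashes on bad input, 'echo' rejects it cleanly.
def invoke(name, args):
    if name == "divide":
        return 100 // args[0]       # raises on 0 and on None
    return {"fault": "invalid input"}

print(robustness_campaign({"divide": ["int"], "echo": ["string"]}, invoke))
```

In the thesis the hazard list is derived from the WSDL parameter types and the SOAP envelope itself; the hard-coded table here only mimics that shape.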
6

Multivariate methods in tablet formulation

Gabrielsson, Jon January 2004 (has links)
This thesis describes the application of multivariate methods in a novel approach to the formulation of tablets for direct compression. It begins with a brief historical review, followed by a basic introduction to key aspects of tablet formulation and multivariate data analysis. The bulk of the thesis is concerned with the novel approach, in which excipients were characterised in terms of multiple physical or (in most cases) spectral variables. By applying Principal Component Analysis (PCA) the descriptive variables are summarized into a few latent variables, usually termed scores or principal properties (PP’s). In this way the number of descriptive variables is dramatically reduced and the excipients are described by orthogonal continuous variables. This means that the PP’s can be used as ordinary variables in a statistical experimental design. The combination of latent variables and experimental design is termed multivariate design or experimental design in PP’s. Using multivariate design many excipients can be included in screening experiments with relatively few experiments.

The outcome of experiments designed to evaluate the effects of differences in excipient composition of formulations for direct compression is, of course, tablets with various properties. Once these properties, e.g. disintegration time and tensile strength, have been determined with standardised tests, quantitative relationships between descriptive variables and tablet properties can be established using Partial Least Squares Projections to Latent Structures (PLS) analysis. The obtained models can then be used for different purposes, depending on the objective of the research, such as evaluating the influence of the constituents of the formulation or optimisation of a certain tablet property.

Several examples of applications of the described methods are presented. Except in the first study, in which the feasibility of this approach was first tested, the disintegration time of the tablets has been studied more carefully than other responses. Additional experiments have been performed in order to obtain a specific disintegration time. Studies of mixtures of excipients with the same primary function have also been performed to obtain certain PP’s. Such mixture experiments also provide a straightforward approach to additional experiments where an interesting area of the PP space can be studied in more detail. The robustness of a formulation with respect to normal batch-to-batch variability has also been studied.

The presented approach to tablet formulation offers several interesting alternatives, for both planning and evaluating experiments.
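The 'principal properties' step — compressing many correlated descriptor variables per excipient into a few orthogonal scores usable as design factors — can be sketched with a plain SVD-based PCA. The data below are synthetic, not the thesis's excipient measurements:

```python
# Sketch: PCA via SVD turns 20 correlated (e.g. spectral) descriptor
# variables for 8 excipients into 2 orthogonal scores (the PP's).

import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(8, 2))        # assumed two underlying properties
loadings = rng.normal(size=(2, 20))
X = latent @ loadings + rng.normal(0, 0.01, (8, 20))  # observed descriptors

# PCA on the mean-centred matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]               # principal properties (PP's)

# Fraction of total variance captured by the first two components
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by 2 PP's: {explained:.3f}")
```

The resulting score columns are orthogonal and continuous, which is exactly what lets them be treated as ordinary factors in a statistical experimental design; the subsequent PLS regression against tablet properties is a separate step not shown here.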
8

Fuzzing Radio Resource Control messages in 5G and LTE systems : To test telecommunication systems with ASN.1 grammar rules based adaptive fuzzer

Potnuru, Srinath January 2021 (has links)
5G telecommunication systems must be ultra-reliable to meet the needs of the next evolution in communication. The systems deployed must be thoroughly tested and must conform to their standards. Software and network protocols are commonly tested with techniques like fuzzing, penetration testing, code review, and conformance testing. With fuzzing, testers can send crafted inputs to monitor the System Under Test (SUT) for a response. 3GPP, the standardization body for telecom systems, produces new versions of its specifications as features and enhancements continuously evolve. This leads to many versions of the specifications for a network protocol like Radio Resource Control (RRC), and testers need to constantly update their testing tools and environment. In this work, it is shown that by exploiting the generic nature of RRC specifications, which are given in the Abstract Syntax Notation One (ASN.1) description language, one can design a testing tool that adapts to all versions of the 3GPP specifications. This thesis introduces an ASN.1-based adaptive fuzzer that can be used for testing RRC and other network protocols based on the ASN.1 description language. The fuzzer extracts knowledge about ongoing RRC messages from the protocol description files of RRC, i.e., the RRC ASN.1 schema from 3GPP, and uses that knowledge to fuzz RRC messages. The adaptive fuzzer identifies individual fields, sub-messages, and custom data types according to the specification when mutating the content of existing messages. Furthermore, it has identified a previously unknown vulnerability in the Evolved Packet Core (EPC) of srsLTE and openLTE, two open-source LTE implementations, confirming its applicability to robustness testing of RRC and other network protocols.
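The schema-guided mutation idea can be sketched as follows: the fuzzer walks a message according to a schema, so each field is mutated with values appropriate to its declared type. The hand-written schema dictionary and field names below are invented stand-ins for a compiled ASN.1 description, not actual RRC structures:

```python
# Hedged sketch of type-aware, schema-guided fuzzing: one mutant per
# leaf field, with mutant values chosen by the field's declared type.

import random

MUTANTS = {
    "INTEGER": [0, -1, 2**32 - 1],
    "UTF8String": ["", "\x00" * 64, "A" * 4096],
    "BOOLEAN": [True, False],
}

def fuzz_message(message, schema, rng):
    """Yield one mutant message per leaf field; nested SEQUENCE-like
    sub-messages are handled recursively."""
    for field, ftype in schema.items():
        if isinstance(ftype, dict):               # nested sub-message
            for sub in fuzz_message(message[field], ftype, rng):
                yield {**message, field: sub}
        else:
            # Replace the field with a type-appropriate mutant value.
            yield {**message, field: rng.choice(MUTANTS[ftype])}

schema = {"transactionId": "INTEGER",
          "criticalExtensions": {"reestablishmentCause": "UTF8String",
                                 "spare": "BOOLEAN"}}
msg = {"transactionId": 1,
       "criticalExtensions": {"reestablishmentCause": "otherFailure",
                              "spare": False}}
mutants = list(fuzz_message(msg, schema, random.Random(7)))
print(len(mutants))  # 3 — one mutant per leaf field
```

Because the walk is driven entirely by the schema, regenerating the schema from a newer specification version updates the fuzzer for free, which is the adaptivity the thesis exploits with 3GPP's evolving RRC ASN.1 releases.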
