21

JCML - Java Card Modeling Language: Definição e Implementação

Souza Neto, Plácido Antônio de 06 September 2007
Formal methods should be used to specify and verify on-card software in Java Card applications. Furthermore, the Java Card programming style requires runtime verification of all input conditions for all on-card methods, where the main goal is to preserve the data on the card. Design by Contract, and in particular the JML language, is an option for this kind of development and verification, as runtime verification is part of the Design by Contract method implemented by JML. However, JML and its currently available tools for runtime verification were not designed with Java Card limitations in mind and are not Java Card compliant. In this thesis, we analyze how much of this situation is really intrinsic to Java Card limitations and how much is just a matter of a complete redesign of JML and its tools. We propose the requirements for a new language that is Java Card compliant and indicate the lines along which a compiler for this language should be built. JCML strips from JML non-Java Card aspects such as concurrency and unsupported types. This would not be enough, however, without a considerable effort to optimize the verification code generated by its compiler, as this verification code must run on the card. The JCML compiler, although much more restricted than the one for JML, is able to generate Java Card compliant verification code for some lightweight specifications. In conclusion, we present a Java Card compliant variant of JML, JCML (Java Card Modeling Language), with a preliminary version of its compiler.
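As a rough illustration of the kind of lightweight, runtime-checkable specification discussed in this abstract, the sketch below annotates a toy Java Card-style class with JML-style pre- and postconditions; the Purse class, its fields and its bounds are assumptions made for the example and are not taken from the thesis.

    // Illustrative only: a toy Java Card-style class with JML-style lightweight
    // specifications. A JCML-like compiler would translate such annotations into
    // on-card runtime checks. Names and bounds are assumptions for the example.
    public class Purse {
        private short balance;                         // Java Card favours short over int
        private static final short MAX_BALANCE = 10000;

        //@ requires amount > 0 && balance + amount <= MAX_BALANCE;
        //@ ensures balance == \old(balance) + amount;
        public void credit(short amount) {
            balance += amount;
        }

        //@ requires amount > 0 && amount <= balance;
        //@ ensures balance == \old(balance) - amount;
        public void debit(short amount) {
            balance -= amount;
        }
    }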
22

Zertifizierende verteilte Algorithmen

Völlinger, Kim 22 October 2020
A major problem in software engineering is to ensure the correctness of software. Testing offers no mathematical guarantee of correctness, and formal verification is often too costly. Runtime verification stands between the two methods: it answers the question whether an input-output pair is correct. A certifying algorithm convinces its user at runtime by offering a correctness argument. For each input, a certifying algorithm computes an output and additionally a witness. Each certifying algorithm has a witness predicate, a predicate with the property that being satisfied for an input, output and witness implies that the input-output pair is correct. A simple algorithm deciding the witness predicate for the user is a checker. Hence, the checker's correctness is crucial to the approach and motivates formal instance verification, where we verify checkers and obtain machine-checked proofs for the correctness of an input-output pair at runtime. Certifying sequential algorithms are well established. Distributed algorithms, designed to run on distributed systems, behave fundamentally differently from sequential algorithms: their output is distributed over the system, or they even run continuously. We investigate certifying distributed algorithms. Our research question is: how can we transfer the concept of certifying sequential algorithms to distributed algorithms such that we stay in line with the original concept but also adapt to the conditions of distributed systems? In this thesis, we present a method to transfer the concept: weighing up both sometimes conflicting goals, we develop a class of certifying distributed algorithms that compute distributed witnesses and have distributed checkers. We offer case studies, design patterns and a framework for formal instance verification. Additionally, we investigate other methods to transfer the concept of certifying algorithms to distributed algorithms.
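To make the notion of witness and checker concrete, here is a small sketch in the classical sequential setting (the example, class and method names are assumptions chosen for illustration, not taken from the thesis): an extended-Euclid gcd returns Bezout coefficients as its witness, and a simple checker decides the witness predicate at runtime.

    // Illustrative certifying (sequential) algorithm: gcd with Bezout coefficients
    // as witness. Witness predicate: g divides a, g divides b, and a*x + b*y == g.
    public final class CertifyingGcd {

        public record Result(long g, long x, long y) {}

        // Computes gcd(a, b) together with the witness (x, y).
        public static Result gcd(long a, long b) {
            if (b == 0) return new Result(a, 1, 0);
            Result r = gcd(b, a % b);
            return new Result(r.g(), r.y(), r.x() - (a / b) * r.y());
        }

        // The checker: a simple algorithm deciding the witness predicate for the user.
        public static boolean check(long a, long b, Result r) {
            return r.g() != 0
                && a % r.g() == 0
                && b % r.g() == 0
                && a * r.x() + b * r.y() == r.g();
        }

        public static void main(String[] args) {
            Result r = gcd(252, 105);                            // gcd is 21
            System.out.println(r + " accepted: " + check(252, 105, r));
        }
    }

In the distributed setting studied in the thesis, both the witness and the checker would themselves be distributed over the system rather than computed at a single node.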
23

Runtime Verification Using a Temporal Description Logic Revisited

Baader, Franz, Lippmann, Marcel 20 June 2022
Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system’s behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that also in this setting monitors can effectively be constructed. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL-formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
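As a hand-written illustration of the monitor idea (for a toy propositional property only; the paper constructs such monitors automatically, for ALC-LTL and under incomplete information about the states), the following sketch reads a finite prefix event by event and reports a three-valued verdict based only on the state currently reached; the property and event names are assumptions for the example.

    // Illustrative three-valued prefix monitor for the toy property
    //   phi = (not error) U done   ("no error occurs before done, and done occurs").
    // TRUE: every continuation of the prefix satisfies phi; FALSE: none does;
    // INCONCLUSIVE: both are still possible.
    enum Verdict { TRUE, FALSE, INCONCLUSIVE }

    final class PrefixMonitor {
        private enum State { PENDING, SATISFIED, VIOLATED }
        private State state = State.PENDING;

        // Feed one observed event; the verdict depends only on the state reached.
        Verdict step(String event) {
            if (state == State.PENDING) {
                if (event.equals("error")) state = State.VIOLATED;
                else if (event.equals("done")) state = State.SATISFIED;
            }
            switch (state) {
                case SATISFIED: return Verdict.TRUE;
                case VIOLATED:  return Verdict.FALSE;
                default:        return Verdict.INCONCLUSIVE;
            }
        }
    }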
24

Test and Validation of Web Services

Cao, Tien Dung 06 December 2010
In this thesis, we propose testing approaches for web service composition. We focus on unit and integration testing of an orchestration of web services, and also on the runtime verification aspect. We define a unit testing framework for web service orchestrations that comprises a test architecture, a conformance relation and two testing approaches based on the Timed Extended Finite State Machine (TEFSM) model: an offline approach, in which test activities such as timed test case generation, test execution and verdict assignment are applied sequentially, and an online approach, in which these activities are applied in parallel. For integration testing of an orchestration, we combine two approaches: active and passive. First, the active approach is used to start a new session of the orchestration by sending a SOAP request. Then, all messages exchanged among the services are collected and analyzed by a passive approach. On the runtime verification aspect, we are interested in the correctness of an execution trace with respect to a set of defined constraints, called rules. We propose to extend the Nomad language by defining constraints on each atomic action (fixed conditions) and a set of data correlations between the actions in order to define rules for web services. This language allows us to define a rule over future and past time, and to use the operators NOT, AND and OR to combine conditions within the context of a rule. Afterwards, we propose an algorithm to check the correctness of a message sequence in parallel with the trace collection engine. Specifically, this algorithm verifies the messages one by one without storing them.
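As a rough sketch of what checking such a rule over an observed trace can look like (the rule, the message shape and all names are assumptions made for the example, not the thesis's extended Nomad syntax), the checker below processes messages one by one, keeping only correlation ids rather than the messages themselves.

    import java.util.HashSet;
    import java.util.Set;

    // Illustrative passive checker for one correlation rule over an observed message
    // trace: "every response must carry the id of an earlier request, and every
    // request must eventually be answered". Names are assumptions for the example.
    final class TraceChecker {
        private final Set<String> openRequests = new HashSet<>();

        // Processes one observed message; returns false on an immediate violation
        // (a response whose correlation id was never requested).
        boolean onMessage(String kind, String correlationId) {
            if (kind.equals("request")) {
                openRequests.add(correlationId);
                return true;
            }
            if (kind.equals("response")) {
                return openRequests.remove(correlationId);
            }
            return true;                   // other message kinds are ignored by this rule
        }

        // At the end of the observed trace, any still-open request violates the rule.
        boolean allRequestsAnswered() {
            return openRequests.isEmpty();
        }
    }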
25

Model-Based Run-time Verification of Software Components by Integrating OCL into Treaty / Modellbasierte Verifikation von Softwarekomponenten zur Laufzeit am Beispiel der Treaty-OCL-Integration.

Wilke, Claas 22 April 2010
Model Driven Development is used to improve software quality and efficiency by automatically transforming abstract and formal models into software implementations. This is particularly sensible if the model's integrity can be proven formally and is preserved during the model's transformation. A standard to specify software model integrity is the Object Constraint Language (OCL). Another topic of research is the dynamic development of software components, enabling software system composition at component run-time. As a consequence, the system's verification must be realized during system run-time (and not during transformation or compile time). Many established verification techniques cannot be used for run-time verification. A method to enable model-based run-time verification will be developed during this work. The major task of this diploma thesis is to investigate how OCL constraints can be transformed into executable software artifacts and how they can be used in the component-based system Treaty.
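As a hand-written illustration of what translating an OCL constraint into an executable artifact can amount to (the model, the constraint and all names are assumptions for the example and do not reflect Treaty's API), consider the invariant context Cache inv sizeBound: self.entries->size() <= self.capacity and a corresponding run-time check.

    import java.util.List;

    // Illustrative hand translation of the OCL invariant
    //   context Cache inv sizeBound: self.entries->size() <= self.capacity
    // into an executable run-time check. Names are assumptions for the example.
    final class Cache {
        final List<String> entries;
        final int capacity;

        Cache(List<String> entries, int capacity) {
            this.entries = entries;
            this.capacity = capacity;
        }
    }

    final class CacheSizeBoundConstraint {
        // Evaluates the invariant on a concrete instance at run time.
        static boolean check(Cache self) {
            return self.entries.size() <= self.capacity;
        }
    }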
26

Automated Verification of Exam, Cash, Reputation, and Routing Protocols / Vérification automatique de protocoles d'examen, de monnaie, de réputation, et de routage

Kassem, Ali 18 September 2015
Security is a crucial requirement in applications based on information and communication technology, especially when an open network such as the Internet is used. To ensure security in such applications, cryptographic protocols have been developed. However, the design of security protocols is notoriously difficult and error-prone: several flaws have been found in protocols that were claimed to be secure. Hence, cryptographic protocols must be verified before they are used. One approach to verify cryptographic protocols is the use of formal methods, which have achieved many results in recent years. Formal methods concern the analysis of protocol specifications modeled using, e.g., dedicated logics or process algebras. Formal methods can find flaws or prove that a protocol is secure under a "perfect cryptography" assumption with respect to given security properties. However, they abstract away from implementation errors and side-channel attacks. In order to detect such errors and attacks, runtime verification can be used to analyze system or protocol executions. Moreover, runtime verification can help in cases where formal procedures take exponential time or suffer from termination problems. In this thesis, we contribute to the verification of cryptographic protocols, with an emphasis on formal verification and automation. First, we study exam protocols. We propose formal definitions for several authentication and privacy properties in the Applied Pi-Calculus, and we also provide abstract definitions of verifiability properties. We analyze all these properties automatically using ProVerif on multiple case studies and identify several flaws. Moreover, we propose several monitors to check exam requirements at runtime; these monitors are validated by analyzing real exam executions using the MARQ Java-based tool. Second, we propose a formal framework to verify the security properties of non-transferable electronic cash protocols. We define client privacy and forgery-related properties. Again, we illustrate our model by analyzing three case studies using ProVerif, and confirm several known attacks. Third, we propose formal definitions of authentication, privacy, and verifiability properties of electronic reputation protocols. We discuss the proposed definitions, with the help of ProVerif, on a simple reputation protocol. Finally, we obtain a reduction result to verify route validity of ad-hoc routing protocols in the presence of multiple independent attackers that do not share their knowledge.
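As a rough sketch of what a runtime monitor for one exam requirement might look like (the requirement, event shape and names are assumptions for the example; the thesis validates its monitors with the MARQ tool rather than with code like this), the monitor below accepts a submission only if the candidate is registered and the submission arrives before the exam ends.

    import java.util.Set;

    // Illustrative runtime monitor for one exam requirement; names and event
    // shape are assumptions for the example.
    final class ExamSubmissionMonitor {
        private final Set<String> registered;
        private final long examEndMillis;

        ExamSubmissionMonitor(Set<String> registered, long examEndMillis) {
            this.registered = registered;
            this.examEndMillis = examEndMillis;
        }

        // Returns false iff the observed submission event violates the requirement.
        boolean onSubmission(String candidateId, long timestampMillis) {
            return registered.contains(candidateId) && timestampMillis <= examEndMillis;
        }
    }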
27

Enhancing Trust in Autonomous Systems without Verifying Software

Stamenkovich, Joseph Allan 12 June 2019
The complexity of the software behind autonomous systems is rapidly growing, as is the range of applications they can be used for. It is not unusual for the lines of code to reach the millions, which adds to the verification challenge. The machine learning algorithms involved are often "black boxes" whose precise workings are not known to the developer applying them, and whose behavior is undefined when they encounter an untrained scenario. With so much code, the possibility of bugs or malicious code is considerable. An approach is developed to monitor and possibly override the behavior of autonomous systems independently of the software controlling them. Application-isolated safety monitors are implemented in configurable hardware to ensure that the behavior of an autonomous system is limited to what is intended. The sensor inputs may be shared with the software, but the output from the monitors is only engaged when the system violates its prescribed behavior. For each specific rule the system is expected to follow, a monitor is present that processes the relevant sensor information. The behavior is defined in linear temporal logic (LTL) and the associated monitors are implemented in a field-programmable gate array (FPGA). An off-the-shelf drone is used to demonstrate the effectiveness of the monitors without any physical modifications to the drone. Upon detection of a violation, appropriate corrective actions are persistently enforced on the autonomous system. / Master of Science / Autonomous systems are surprisingly vulnerable, not just to malicious hackers, but to design errors and oversights. The lines of code required can quickly climb into the millions, and the artificial decision algorithms can be inscrutable and fully dependent upon the information they are trained on. These factors make the verification of the core software running our autonomous cars, drones, and everything else prohibitively difficult by traditional means. Independent safety monitors are implemented to provide internal oversight for these autonomous systems. A semi-automatic design process efficiently creates error-free monitors from safety rules drones need to follow. These monitors remain separate and isolated from the software typically controlling the system, but use the same sensor information. They are embedded in the circuitry and act as their own small, task-specific processors, each watching to make sure a particular rule is not violated; otherwise, they take control of the system and force corrective behavior. The monitors are added to a consumer off-the-shelf (COTS) drone to demonstrate their effectiveness. For every rule monitored, an override is triggered when it is violated. As with any electronic component, their effectiveness depends on reliable sensor information, and also on the completeness of the rules from which the monitors are built.
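As a software sketch of the logic of one such safety monitor (the thesis realizes its monitors in FPGA hardware generated from LTL specifications; the rule, threshold and names below are assumptions for the example), a monitor for the safety property G (altitude <= MAX) latches an override once the rule is violated, so that corrective action is persistently enforced.

    // Illustrative software model of one safety monitor; the rule, threshold and
    // names are assumptions for the example. LTL safety rule: G (altitude <= MAX).
    final class AltitudeMonitor {
        private static final double MAX_ALTITUDE_METERS = 120.0;
        private boolean overrideEngaged = false;

        // Processes one sensor sample; once the rule is violated, the override
        // stays engaged so that corrective action is persistently enforced.
        boolean onSample(double altitudeMeters) {
            if (altitudeMeters > MAX_ALTITUDE_METERS) {
                overrideEngaged = true;
            }
            return overrideEngaged;        // true: take control and force corrective behaviour
        }
    }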
