181

Um framework para coordenação do tratamento de exceções em sistemas tolerantes a falhas / A framework for exception handling coordination in fault-tolerant systems

David Paulo Pereira 09 March 2007 (has links)
A adoção em larga escala de redes de computadores e gerenciadores de banco de dados contribuiu para o surgimento de sistemas de informação complexos. Atualmente, estes sistemas tornaram-se elementos essenciais na vida das pessoas, dando suporte a processos de negócio e serviços corporativos indispensáveis à sociedade, como automação bancária e telefonia. A utilização de componentes na estruturação destes sistemas promove maior qualidade e flexibilidade ao produto e agiliza o processo de desenvolvimento. Entretanto, para que estes benefícios sejam totalmente observados, é fundamental que os provedores de componentes de prateleira projetem especificações precisas, completas e consistentes. Geralmente, as especificações omitem ou negligenciam o comportamento dos componentes nas situações de falha. Desta forma, a utilização de componentes não confiáveis, cujos comportamentos não podem ser inteiramente previstos, compromete seriamente o projeto de sistemas tolerantes a falhas. Uma estratégia para a especificação de componentes tolerantes a falhas é informar a ocorrência de erros através de exceções e realizar a recuperação dos mesmos por rotinas de tratamento correspondentes. A especificação deve separar claramente o comportamento normal do excepcional, destinado à recuperação do erro. Entretanto, em sistemas concorrentes e distribuídos, a especificação apenas deste tratamento local não é suficiente. Uma exceção pode ser lançada em decorrência de erros sistêmicos (i.e. problemas de rede) que afetam todo o sistema. Assim, determinadas exceções devem ser tratadas em nível arquitetural, envolvendo os demais componentes no tratamento. O modelo conceitual de ações Atômicas Coordenadas (ações CA - Coordinated Atomic actions), bastante aplicado na estruturação de sistemas tolerantes a falhas, define um mecanismo geral para a coordenação do tratamento excepcional dos componentes, que cooperam na execução das atividades e competem por recursos compartilhados. Portanto, o modelo de ações CA oferece uma solução potencialmente viável para a especificação do tratamento de exceções em nível arquitetural. Este trabalho propõe um framework para a especificação do tratamento de exceções em nível arquitetural, baseando-se no modelo de aninhamento de ações CA e utilizando a linguagem orientada a eventos CSP (Communicating Sequential Processes). Sua principal característica é prover um protocolo padronizado para a coordenação do tratamento de exceções, que envolve a cooperação dos componentes do sistema. Além disso, é apresentada uma estratégia para a verificação formal dos sistemas na ferramenta FDR (Failure Divergence Refinement), com base no modelo de refinamento por rastros. / The widespread adoption of computer networks and database management systems has contributed to the emergence of complex information systems. Nowadays, these systems have become essential elements of everyday life, supporting business processes and enterprise services indispensable to society, such as banking automation and telephony. The use of components in structuring these systems promotes higher quality and flexibility of the product and accelerates the software development process. However, in order for these benefits to be fully realized, it is essential that the suppliers of these COTS (commercial off-the-shelf) components design precise, complete and consistent specifications. Generally, the specifications omit or neglect the behavior of these components in exceptional situations. Therefore, the use of untrustworthy components, whose behavior cannot be entirely foreseen, seriously compromises the design of fault-tolerant systems. One strategy for the specification of fault-tolerant components is to report the occurrence of errors through exceptions and to recover from them in the corresponding exception handling routines. The specification should clearly separate the normal behavior from the exceptional one, designed for error recovery. However, in concurrent and distributed systems, specifying only this local handling is not enough. An exception may be raised as a result of systemic errors (e.g., network problems) that affect the entire system; such exceptions must therefore be handled at the architectural level, involving the other components in the handling activity. The conceptual model of Coordinated Atomic (CA) actions, often applied in the structuring of fault-tolerant systems, defines a general mechanism for coordinating the exception handling of components that cooperate while executing activities and compete for shared resources. Therefore, the model of CA actions offers a potentially viable solution for the specification of exception handling at the architectural level. This work proposes a framework for the specification of exception handling at the architectural level, based on the nesting model of CA actions and on the event-oriented language CSP (Communicating Sequential Processes). Its main characteristic is to provide a standardized protocol for the coordination of exception handling that involves the cooperation of the system components. Moreover, a strategy for the formal verification of such systems in the FDR (Failure Divergence Refinement) tool, based on traces refinement, is presented.
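
The verification strategy above relies on traces refinement as checked by FDR. As a rough, language-independent illustration of that notion only (not of the author's CSP framework), the following Python sketch checks whether every trace of a small implementation transition system is also a trace of a specification; all state and event names are hypothetical.

    # Illustrative traces-refinement check: IMPL trace-refines SPEC iff every
    # trace of IMPL is also a trace of SPEC. Labelled transition systems are
    # given as {state: [(event, next_state), ...]}.

    from collections import deque

    def successors(lts, states, event):
        """All states reachable from any state in `states` by one `event` step."""
        out = set()
        for s in states:
            for (e, t) in lts.get(s, []):
                if e == event:
                    out.add(t)
        return frozenset(out)

    def traces_refines(spec, spec_init, impl, impl_init):
        """Return None if every trace of impl is a trace of spec,
        otherwise a shortest counterexample trace."""
        start = (impl_init, frozenset([spec_init]))
        seen = {start}
        queue = deque([(start, [])])
        while queue:
            (impl_state, spec_states), trace = queue.popleft()
            for (event, impl_next) in impl.get(impl_state, []):
                spec_next = successors(spec, spec_states, event)
                if not spec_next:                 # spec cannot perform this event
                    return trace + [event]        # counterexample trace
                nxt = (impl_next, spec_next)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, trace + [event]))
        return None

    # Toy example: the specification allows work after an error only if the
    # error is handled first; the implementation forgets to handle it.
    SPEC = {"N": [("work", "N"), ("raise", "E")], "E": [("handle", "N")]}
    IMPL = {"n": [("work", "n"), ("raise", "e")], "e": [("work", "n")]}  # bug: no handle
    print(traces_refines(SPEC, "N", IMPL, "n"))   # -> ['raise', 'work']
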
182

Teste de conformidade em contexto guiado por casos de teste do componente / Conformance testing in context guided by the component's test cases

Soares Junior, Jurandy Martins 12 April 2006 (has links)
Orientador: Ricardo de Oliveira Anido / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Resumo: Testar um subsistema embarcado em um sistema complexo, assumindo-se que os demais subsistemas são livres de falhas, é conhecido como teste de conformidade em contexto. A complexidade deste teste reside no fato do subsistema mais externo, conhecido como contexto, ocultar muitas interações e eventos nos quais o sistema embarcado, conhecido como componente, participa. Nas últimas décadas alguns algoritmos foram desenvolvidos para resolver o problema. Muitos deles, no entanto, ignoram as condições nas quais podem ser aplicados. Nesta dissertação estudamos a teoria e os algoritmos relacionados a teste de conformidade e a teste de conformidade em contexto, propomos uma nova abordagem guiada por casos de teste do componente e analisamos as condições necessárias e suficientes para aplicá-la. A validação das condições necessárias e suficientes foi feita em estudos de casos com a pilha de protocolos do TCP/IP, com os protocolos HTTP e HTTPS via Proxy e com DHCP via relay-agent. Os algoritmos estudados foram experimentados nas especificações dos protocolos ABP, SCU e TCP / Abstract: Testing a subsystem embedded in a complex system, in which the other subsystems are assumed to be fault-free, is known as conformance testing in context. The complexity of this test resides in the fact that the outermost subsystem, known as the context, hides many interactions and events in which the embedded subsystem, known as the component, participates. In the last decades some algorithms were developed to solve the problem. Many of them, however, ignore the conditions under which they can be applied. In this dissertation we study the theory and the algorithms related to conformance testing and conformance testing in context, we propose a new approach guided by the component's test cases, and we analyze the necessary and sufficient conditions to apply it. The validation of the necessary and sufficient conditions was done in case studies with the TCP/IP protocol stack, with the protocols HTTP and HTTPS via proxy, and with DHCP via relay-agent. The algorithms studied were exercised on the specifications of the protocols ABP, SCU and TCP / Mestrado / Engenharia de Protocolos / Mestre em Ciência da Computação
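
The difficulty described above, namely that the context hides the interactions the component participates in, can be seen in a toy example. The Python sketch below is purely illustrative and does not reproduce the dissertation's algorithm or its necessary and sufficient conditions: a component test case detects a fault when applied directly to the component, while the same fault becomes unobservable through the context. All machine and event names are hypothetical.

    # Mealy machines: delta[(state, input)] = (output, next_state).
    # Testing "in context" means only external inputs can be applied and only
    # external outputs observed; component interactions stay internal.

    def run(machine, state, inputs):
        """Run a Mealy machine on an input sequence; return (outputs, final state)."""
        outputs = []
        for i in inputs:
            out, state = machine[(state, i)]
            outputs.append(out)
        return outputs, state

    def run_in_context(context, ctx0, component, cmp0, ext_inputs):
        """Serial composition: the context translates each external input into a
        component input and the component's answer back into an external output;
        the internal symbols never reach the observer."""
        observed = []
        for x in ext_inputs:
            internal_in, ctx0 = context["down"][(ctx0, x)]        # context -> component
            internal_out, cmp0 = component[(cmp0, internal_in)]   # hidden interaction
            ext_out, ctx0 = context["up"][(ctx0, internal_out)]   # component -> context
            observed.append(ext_out)
        return observed

    component_spec = {("s", "req"): ("ok", "s"), ("s", "bad"): ("err", "s")}
    component_impl = {("s", "req"): ("ok", "s"), ("s", "bad"): ("ok", "s")}   # faulty
    context = {
        "down": {("c", "A"): ("req", "c"), ("c", "B"): ("bad", "c")},
        "up":   {("c", "ok"): ("done", "c"), ("c", "err"): ("done", "c")},    # masks err
    }

    test_inputs, expected = ["req", "bad"], ["ok", "err"]
    print(run(component_impl, "s", test_inputs)[0] == expected)              # False: fault visible
    print(run_in_context(context, "c", component_impl, "s", ["A", "B"]) ==
          run_in_context(context, "c", component_spec, "s", ["A", "B"]))     # True: context hides it
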
183

Higher-order graph rewriting systems / Sistemas de reescrita de grafos de alta ordem

Machado, Rodrigo January 2012 (has links)
Programas sofrem diversas modificações ao longo das etapas de desenvolvimento, implantação e manutenção. A evolução de um software pode ter várias causas: correção de erros, inclusão de novas funcionalidades ou até mesmo, como é o caso de programas orientados a aspecto, transformações estruturais podem fazer parte da semântica do sistema. Apesar de modificações serem comuns, não é tarefa trivial prever como estas afetam o comportamento dos programas, já que os componentes de software normalmente interagem de forma complexa, o que faz com que mesmo pequenas alterações possam introduzir comportamentos indesejados. Transformação de grafos, também conhecida como reescrita de grafos, é um importante paradigma para modelagem e análise de sistemas. Modelos baseados em transformação de grafos, como gramáticas de grafos, permitem uma modelagem ao mesmo tempo intuitiva e com semântica precisa, permitindo a aplicação de técnicas de análise como verificação de modelos e análise de par crítico no estudo do comportamento de sistemas. A teoria por trás de transformação de grafos vem sendo desenvolvida a várias décadas, e atualmente está descrita de uma forma bastante abstrata. Contudo, ainda não possui uma definição natural de reescritas de alta ordem, que facilitaria a definição de evolução de especificações compostas por regras de reescrita de grafo, tais como gramáticas de grafos. Nesta tese são abordadas a modelagem e a análise de sistemas sob modificações programadas no contexto de gramáticas de grafos. A generalização da abordagem de pushout duplo para reescrita de grafos é utilizada como o princípio geral para descrever, simultaneamente, a semântica do sistema e modificações estruturais. Para tal, introduzimos uma noção de reescrita de segunda ordem para modificar a estrutura de regras de transformação de grafos, e usando isso, definimos modelos equipados simultaneamente de regras de primeira e segunda ordem, chamados gramáticas de grafos de segunda ordem. Através destes modelos podemos representar simultaneamente transformações estruturais e execução do sistema, e relacionar formalmente ambos tipos de reescrita. Também propomos novas técnicas para investigar o efeito da modificação de regras sobre a aplicação destas. Finalmente, como um exemplo de aplicação da teoria, caracterizamos construções de sistemas orientados a aspectos através de gramáticas de grafos de segunda ordem, e discutimos como utilizar as novas técnicas para estudar o efeito da combinação aspectual sobre o sistema inicial. / Software systems are not static entities: they usually undergo several changes along their development and maintenance cycles. Software evolution may be required for several reasons, such as the inclusion of new functionalities, the correction of errors or even as part of the system semantics, as it is the case of aspect-oriented systems. However, it is usually not trivial to foresee how structural changes can affect the system behaviour, since system components often interact in very complex ways, and even trivial modifications may introduce new problems. Graph transformation, also known as graph rewriting, has been used throughout the years as an important paradigm for system modelling and analysis. Models based on graph transformation, such as graph grammars, allow an intuitive but formal representation of the system behaviour, allowing the usage of analysis techniques such as model checking and static analysis of rule interaction. 
The theory behind graph transformation is quite general, and has been studied since the 1970s. However, it still lacks a general notion of higher-order rewriting that would allow a natural definition of model transformations for graph grammars. The lack of a general second-order characterization makes it difficult to employ graph grammars as targets of model transformations, and to study how model transformations affect their natural behaviour. In this thesis we address the problem of modelling and analysing systems undergoing programmed modifications in the context of graph grammars. We use the generalization of the double-pushout approach for graph rewriting as a principle for defining simultaneously the system semantics and structural modifications. To achieve this, we introduce a notion of second-order graph rewriting that acts on graph transformation rules. Based on second-order rewriting we define second-order graph grammars, models equipped with a first-order layer, representing the original system execution, and a second-order layer, representing a model transformation. Using second-order graph grammars we can encode simultaneously model transformations and system execution, allowing us to formally relate them. Moreover, we propose new techniques to investigate the effect that modifying a rule has on its application to graphs. As an application example, we characterize aspect-oriented constructions for graph grammars, and discuss how to relate the aspect weaving layer to the base system semantics.
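
As a very loose illustration of the two-layer idea only (first-order rules rewriting graphs, second-order rules rewriting those rules), the following Python sketch uses plain edge sets instead of the categorical double-pushout construction; all node, rule and transformation names are hypothetical.

    # Graphs are sets of directed edges over named nodes. A first-order rule
    # deletes some edges and adds others; a "second-order" step modifies the
    # first-order rule itself before it is applied.

    def apply_rule(graph, rule):
        """Apply {'delete': edges, 'add': edges} if all edges to delete are present."""
        if not rule["delete"] <= graph:
            return None                       # no match: rule not applicable
        return (graph - rule["delete"]) | rule["add"]

    def second_order(rule, rule_change):
        """A second-order rewrite acts on a rule: here it extends what the rule adds."""
        return {"delete": rule["delete"], "add": rule["add"] | rule_change["extra_add"]}

    # First-order layer: system execution (a client acquires a resource).
    graph = {("client", "idle"), ("resource", "free")}
    acquire = {"delete": {("client", "idle"), ("resource", "free")},
               "add": {("client", "busy"), ("resource", "taken")}}

    # Second-order layer: a model transformation that also makes acquisition logged.
    with_logging = second_order(acquire, {"extra_add": {("client", "logged")}})

    print(apply_rule(graph, acquire))        # original behaviour
    print(apply_rule(graph, with_logging))   # behaviour after the model transformation
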
184

Verificação de modelos UML de software embarcado com model checking / Verification of UML models of embedded software with model checking

Custódio, Marcelo Monteiro 15 December 2008 (has links)
Fundação de Amparo à Pesquisa do Estado do Amazonas / Embedded systems are of undeniable relevance in modern society. They are subject to temporal constraints (when they are real-time systems) and to constraints on power consumption, size, weight, etc., which make their design more complex than that of their desktop counterparts. Given the huge number of requirements of all kinds, the high complexity of embedded software, the potential for critical damage in case of failure and, finally, the ever-growing market pressure for new products, methods are needed that ensure a correct, fast, yet intuitive specification and conception of designs. Considering this, this work aims to provide a method that contributes to the state of the art. The goal of the proposed method is to provide an approach that gathers a specification of the embedded software in a semi-formal, object-oriented, industry-accepted notation, the Unified Modeling Language (UML), specifically its Sequence Diagram, which captures the dynamic aspects of a system, together with a mechanism to translate this notation into the formal SMV notation, suitable for use by the SMV model checker. The method also provides a translation scheme from sequence diagrams into another formal notation, Petri nets, which is well suited to formal verification. Finally, the method provides a mechanism to translate high-level property queries into the formal notation CTL; property queries are qualitative only. All these functionalities are implemented in a tool called Ambiente de Verificação Formal de Software Embarcado. / Os sistemas embarcados possuem inegável importância na sociedade atual. Eles possuem restrições temporais (quando são de tempo real), de gerência de consumo de energia, tamanho, peso etc que tornam o seu projeto e concepção mais complexos do que os sistemas convencionais. Dado o grande número de requisitos de todos os tipos, a alta complexidade dos softwares embarcados desenvolvidos bem como a grande possibilidade de catástrofes significativas em caso de falha e por fim a grande pressão de mercado por produtos cada vez mais rápido, fazem-se necessários métodos que possam assegurar uma correta, rápida porém intuitiva especificação e concepção dos projetos. Diante disso, o presente trabalho visa prover um método que acrescente ao atual estado da arte. O objetivo do método então é prover uma abordagem que colete uma especificação de software embarcado em uma notação semi-formal, orientada a objetos e amplamente aceita pela Indústria, que é a Unified Modeling Language (UML), especificamente com seu Diagrama de Sequência, o qual é apto para capturar os aspectos dinâmicos de um sistema e um mecanismo de tradução dessa notação para a notação formal SMV, apta a ser utilizada pelo model checker de mesmo nome. O objetivo do método é prover também um esquema de tradução dos diagramas de sequência em UML para uma notação formal, no caso a notação de Redes de Petri, o qual é adequada para verificação formal, gerando saídas de arquivos nos formatos APNN e PNML. O formato APNN é adequado para ser usado no Model Checking Kit (MCK).
Por fim, prover um esquema de tradução de consultas de propriedade em alto nível para o formato de CTL puro adequado para ser usado no MCK e um programa em SMV e sua especificação em CTL, formatos aptos a serem usados no model checker SMV. A verificação de propriedades é apenas qualitativa, isto é, verificará apenas propriedades de execução do software embarcado, em oposição às propriedades quantitativas de tempo, por exemplo, comuns em softwares de tempo real. Todas essas funcionalidades são realizadas por uma ferramenta, chamada Ambiente de Verificação Formal de Software Embarcado.
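
For a flavour of the kind of qualitative property checked after such a translation, the sketch below performs an explicit-state CTL check (AG, AF) on a tiny hand-written Kripke structure in Python. It is illustrative only and reproduces neither the sequence-diagram translation, nor the APNN/PNML outputs, nor the SMV and MCK tools; all state and label names are hypothetical.

    KRIPKE = {                 # state -> successor states
        "idle":    {"sending"},
        "sending": {"done"},
        "done":    {"idle"},
    }
    LABELS = {"idle": set(), "sending": {"busy"}, "done": {"finished"}}
    STATES = set(KRIPKE)

    def sat_AF(phi_states):
        """Least fixpoint: states from which every path eventually reaches phi."""
        result = set(phi_states)
        while True:
            new = result | {s for s in STATES if KRIPKE[s] and KRIPKE[s] <= result}
            if new == result:
                return result
            result = new

    def sat_AG(phi_states):
        """Greatest fixpoint: states from which phi holds on every reachable state."""
        result = set(phi_states)
        while True:
            new = {s for s in result if KRIPKE[s] <= result}
            if new == result:
                return result
            result = new

    # Check AG(busy -> AF finished): every sending phase eventually finishes.
    busy = {s for s in STATES if "busy" in LABELS[s]}
    finished = {s for s in STATES if "finished" in LABELS[s]}
    implies = (STATES - busy) | sat_AF(finished)
    print("idle" in sat_AG(implies))   # True for this toy model
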
185

Validação formal de modelos de manufatura flexível com lógica dinâmica: o uso de Petri-PDL / Formal validation of flexible manufacturing models with dynamic logic: the use of Petri-PDL

Bastos, Thiago de Almeida 30 January 2018 (has links)
This master's thesis seeks to contribute to the automation of production lines and proposes a methodology for the formal verification of flexible manufacturing models using the Petri-PDL framework. Petri-PDL is based on a multimodal logic combined, through a scheme defined for the problem, with Petri nets to specify and model sequential problems, demonstrating through logical proofs the correctness of properties inferred from the model. This formal treatment is adapted to flexible sequential processes, since such models are used in many other Petri net applications. Flexible production system models found in a systematic review are considered in order to evaluate the efficiency of the modelling and its adaptation to this formal refinement. / Este trabalho busca contribuir com a automação de linhas de produção e propõe uma metodologia para a verificação formal de modelos de manufatura flexível a partir da ferramenta Petri-PDL. O conceito Petri-PDL baseia-se em uma lógica multimodal associada ao esquema definido para o problema com as redes de Petri para especificar e modelar problemas sequenciais demonstrando em provas lógicas a corretude de propriedades inferidas pelo modelo. Este tratamento formal será adaptado para o tratamento de processos sequenciais flexíveis, uma vez que estes modelos são usados em muitas outras aplicações com redes de Petri. Serão considerados modelos de sistema de produção flexível encontrados na revisão sistemática para avaliar a eficiência de seu modelo e sua adaptação a este refinamento formal.
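
As background on the Petri net layer only (the Petri-PDL logic and its proof system are not modelled), the following Python sketch describes a minimal machining cell as a place/transition net and checks by brute-force search whether a desired marking is reachable; all place and transition names are hypothetical.

    from collections import deque

    # transitions: name -> (tokens consumed, tokens produced) per place
    NET = {
        "load":    ({"raw": 1, "machine_free": 1}, {"in_machine": 1}),
        "process": ({"in_machine": 1},             {"finished": 1, "machine_free": 1}),
    }

    def enabled(marking, pre):
        return all(marking.get(p, 0) >= n for p, n in pre.items())

    def fire(marking, pre, post):
        m = dict(marking)
        for p, n in pre.items():
            m[p] -= n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
        return m

    def reachable(initial, goal):
        """Breadth-first search over markings; returns a firing sequence or None."""
        key = lambda m: tuple(sorted(m.items()))
        seen = {key(initial)}
        queue = deque([(initial, [])])
        while queue:
            marking, seq = queue.popleft()
            if all(marking.get(p, 0) >= n for p, n in goal.items()):
                return seq
            for name, (pre, post) in NET.items():
                if enabled(marking, pre):
                    nxt = fire(marking, pre, post)
                    if key(nxt) not in seen:
                        seen.add(key(nxt))
                        queue.append((nxt, seq + [name]))
        return None

    initial = {"raw": 2, "machine_free": 1}
    print(reachable(initial, {"finished": 2}))   # ['load', 'process', 'load', 'process']
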
186

Contribution à la vérification de programmes C par combinaison de tests et de preuves. / Contribution to software verification combining tests and proofs

Petiot, Guillaume 04 November 2015 (has links)
La vérification de logiciels repose le plus souvent sur une spécification formelle encodant les propriétés du programme à vérifier. La tâche de spécification et de vérification déductive des programmes est longue et difficile et nécessite une connaissance des outils de preuve de programmes. En effet, un échec de preuve de programme peut être dû à une non-conformité du code par rapport à sa spécification, à un contrat de boucle ou de fonction appelée trop faible pour prouver une autre propriété, ou à une incapacité du prouveur. Il est souvent difficile pour l’utilisateur de décider laquelle de ces trois raisons est la cause de l’échec de la preuve car cette information n’est pas (ou rarement) donnée par le prouveur et requiert donc une revue approfondie du code et de la spécification. L’objectif de cette thèse est de fournir une méthode de diagnostic automatique des échecs de preuve afin d’améliorer le processus de spécification et de preuve des programmes C. Nous nous plaçons dans le cadre de la plate-forme d’analyse des programmes C FRAMA-C, qui fournit un langage de spécification unique ACSL, un greffon de vérification déductive WP et un générateur de tests structurels PATHCRAWLER. La méthode que nous proposons consiste à diagnostiquer les échecs de preuve en utilisant la génération de tests structurels sur une version instrumentée du programme d’origine / Software verification often relies on a formal specification encoding the program properties to check. Formally specifying and deductively verifying programs is difficult and time consuming and requires some knowledge about theorem provers. Indeed, a proof failure for a program can be due to a non-compliance between the code and its specification, to a loop or callee contract that is too weak to prove another property, or to an incapacity of the prover. It is often difficult for the user to decide which one of these three reasons causes a given proof failure. Indeed, this feedback is not (or only rarely) provided by the theorem prover, so diagnosing the failure requires a thorough review of the code and the specification. This thesis develops a method to automatically diagnose proof failures and facilitate the specification and verification task. This work takes place within FRAMA-C, an analysis framework for C programs that provides the specification language ACSL, the deductive verification plugin WP, and the structural test generator PATHCRAWLER. The proposed method consists in diagnosing proof failures using structural test generation on an instrumented version of the program under verification.
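
A rough Python analogue of the underlying idea is sketched below (the actual work targets C programs annotated in ACSL, with WP for proofs and PATHCRAWLER for test generation): when a proof fails, tests generated on an instrumented version of the program either expose a non-compliance between code and contract or leave a prover limitation as the likely cause. The function and its contract are invented for the example.

    def spec_pre(n):                      # contract: requires n >= 0
        return n >= 0

    def spec_post(n, result):             # contract: ensures result == n * (n + 1) / 2
        return result == n * (n + 1) // 2

    def buggy_sum(n):                     # implementation under verification
        total = 0
        for i in range(n):                # bug: should be range(n + 1)
            total += i
        return total

    def instrumented(n):
        """Instrumented program: runs the code and checks the contract at runtime."""
        assert spec_pre(n), "test input does not satisfy the precondition"
        result = buggy_sum(n)
        return spec_post(n, result), result

    def diagnose(max_n=20):
        """Naive test generation: enumerate small inputs satisfying the precondition."""
        for n in range(max_n):
            ok, result = instrumented(n)
            if not ok:
                return f"non-compliance: buggy_sum({n}) = {result}, expected {n*(n+1)//2}"
        return "no counterexample found: the proof failure may be a prover incapacity"

    print(diagnose())   # -> non-compliance: buggy_sum(1) = 0, expected 1
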
187

Runtime Enforcement of (Timed) Properties with Uncontrollable Events / Enforcement à l’exécution de propriétés temporisées régulières en présence d’évènements incontrôlables

Renard, Matthieu 11 December 2017 (has links)
Cette thèse étudie l’enforcement de propriétés temporisées à l’exécution en présence d’évènements incontrôlables. Les travaux se placent dans le cadre plus général de la vérification à l’exécution qui vise à surveiller l’exécution d’un système afin de s’assurer qu’elle respecte certaines propriétés. Ces propriétés peuvent être spécifiées à l’aide de formules logiques, ou au moyen d’autres modèles formels, parfois équivalents, comme des automates. Nous nous intéressons à l’enforcement à l’exécution de propriétés spécifiées par des automates temporisés. Tout comme la vérification à l’exécution, l’enforcement à l’exécution surveille l’exécution d’un système, la différence étant qu’un mécanisme d’enforcement réalise certaines modifications sur l’exécution afin de la contraindre à satisfaire la propriété souhaitée. Nous étudions plus particulièrement l’enforcement à l’exécution lorsque certains évènements de l’exécution sont incontrôlables, c’est-à-dire qu’ils ne peuvent pas être modifiés par un mécanisme d’enforcement. Nous définissons des algorithmes de synthèse de mécanismes d’enforcement décrits de manières fonctionnelle puis opérationnelle, à partir de propriétés temporisées régulières (pouvant être représentées par des automates temporisés). Ainsi, deux mécanismes d’enforcement équivalents sont définis, le premier présentant une approche correcte sans considération d’implémentation, alors que le second utilise une approche basée sur la théorie des jeux permettant de précalculer certains comportements, ce qui permet de meilleures performances. Une implémentation utilisant ce précalcul est également présentée et évaluée. Les résultats sont encourageant quant à la faisabilité de l’enforcement à l’exécution en temps réel, avec des temps supplémentaires suffisamment courts sur de petites propriétés pour permettre une utilisation de tels systèmes. / This thesis studies the runtime enforcement of timed properties when some events are uncontrollable. This work falls in the domain of runtime verification, which includes all the techniques and tools based on or related to the monitoring of system executions with respect to requirement properties. These properties can be specified using different models such as logic formulae or automata. We consider timed regular properties, that can be represented by timed automata. As for runtime verification, a runtime enforcement mechanism watches the executions of a system, but instead of just outputting a verdict, it modifies the execution so that it satisfies the property. We are interested in runtime enforcement with uncontrollable events. An uncontrollable event is an event that an enforcement mechanism can not modify. We describe the synthesis of enforcement mechanisms, in both a functional and an operational way, that enforce some desired timed regular property. We define two equivalent enforcement mechanisms, the first one being simple, without considering complexity aspects, whereas the second one has a better time complexity thanks to the use of game theory; the latter being better suited for implementation. We also detail a tool that implements the second enforcement mechanism, as well as some performance considerations. The overhead introduced by the use of our tool seems low enough to be used in some real-time application scenarios.
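
To convey the intuition in an untimed setting (the thesis handles timed regular properties and uses a game-based precomputation, neither of which is reproduced here), the Python sketch below buffers controllable events and releases them only while the output stays within the property, whereas uncontrollable events pass through immediately; the property, events and states are hypothetical.

    PROPERTY = {                      # DFA over events; states listed in ACCEPTING are correct
        ("logged_out", "login"): "logged_in",
        ("logged_in", "request"): "logged_in",
        ("logged_in", "logout"): "logged_out",
    }
    ACCEPTING = {"logged_out", "logged_in"}   # the sink "error" state is implicit

    def step(state, event):
        return PROPERTY.get((state, event), "error")

    def enforce(events, controllable):
        state, buffer, output = "logged_out", [], []
        for event in events:
            if event in controllable:
                buffer.append(event)          # controllable: may be delayed
            else:
                state = step(state, event)    # uncontrollable: must pass through
                output.append(event)
            # release as many buffered controllable events as stay within the property
            while buffer and step(state, buffer[0]) in ACCEPTING:
                event = buffer.pop(0)
                state = step(state, event)
                output.append(event)
        return output

    # "request" arrives before "login"; "request" is controllable, "login" is not.
    trace = ["request", "login", "request", "logout"]
    print(enforce(trace, controllable={"request", "logout"}))
    # -> ['login', 'request', 'request', 'logout']  (the early request is delayed)
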
188

Formal methods for the design and analysis of robot swarms

Brambilla, Manuele 28 April 2014 (has links)
In my doctoral dissertation, I tackled two of the main open problems in swarm robotics: design and verification. I did so by using model checking. Designing and developing individual-level behaviors to obtain a desired swarm-level goal is, in general, very difficult, as it is difficult to predict, and thus design, the non-linear interactions of tens or hundreds of individual robots that result in the desired collective behavior. In my dissertation, I presented my novel contribution to the top-down design of robot swarms: property-driven design. Property-driven design is based on prescriptive modeling and model checking. Using property-driven design it is possible to design robot swarms in a systematic way, realizing systems that are "correct by design". I demonstrated property-driven design on two case studies: aggregation and foraging. Developing techniques to analyze and verify a robot swarm is also a necessary step in order to employ swarm robotics in real-world applications. In my dissertation, I explored the use of model checking to analyze and verify the properties of robot swarms. Model checking allows us to formally describe a set of desired properties of a system, in a more powerful and precise way compared to other mathematical approaches, and verify whether a given model of a system satisfies them. I explored two different approaches: the first based on Bio-PEPA and the second based on KLAIM. / Doctorat en Sciences de l'ingénieur
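
In the spirit of prescriptive modeling, even a toy macroscopic model can be checked against a swarm-level property. The Python sketch below is illustrative only and is unrelated to the Bio-PEPA and KLAIM models of the dissertation: it treats aggregation as a Markov chain over the number of aggregated robots and computes the probability that a given fraction of the swarm aggregates within a time bound; all parameters are invented.

    from math import comb

    N, P_JOIN, P_LEAVE = 10, 0.35, 0.05     # hypothetical design parameters

    def binom(n, k, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    def transition(k, j):
        """Probability of moving from k to j aggregated robots in one step."""
        total = 0.0
        for leave in range(k + 1):
            join = j - (k - leave)
            if 0 <= join <= N - k:
                total += binom(k, leave, P_LEAVE) * binom(N - k, join, P_JOIN)
        return total

    def prob_property(threshold, horizon):
        """P(at least `threshold` robots aggregated within `horizon` steps),
        computed by making the goal states absorbing."""
        dist = [0.0] * (N + 1)
        dist[0] = 1.0                        # all robots start non-aggregated
        for _ in range(horizon):
            nxt = [0.0] * (N + 1)
            for k, mass in enumerate(dist):
                if mass == 0.0:
                    continue
                if k >= threshold:           # goal reached: stay there
                    nxt[k] += mass
                    continue
                for j in range(N + 1):
                    nxt[j] += mass * transition(k, j)
            dist = nxt
        return sum(dist[threshold:])

    print(round(prob_property(threshold=8, horizon=15), 4))
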
189

Ontology-based Analysis and Scalable Model Checking of Embedded Systems Models

Mahmud, Nesredin January 2017 (has links)
Currently, there is a lack of effective and scalable methods to specify and analyze requirements specifications and to verify the behavioral models of embedded systems. Most embedded systems requirements are expressed in natural language, which is flexible and intuitive but frequently ambiguous, vague and incomprehensive. Besides natural language, template-based methods are used to write requirements specifications (especially in safety-critical applications); they reduce ambiguity and improve the comprehensibility of the specifications. However, template-based methods are usually rigid due to the fixed structure of the templates. They also lack meta-models for extensibility, and template selection is challenging. In this thesis, we propose a domain-specific language for embedded systems, called ReSA, which is a constrained natural language but flexible enough to allow engineers to use different constructs to specify requirements. The language has formal semantics in propositional logic and description logic, which enables non-trivial and rigorous analysis of requirements specifications, e.g., consistency checking, completeness of specifications, etc. Moreover, we propose a scalable formal verification of Simulink models, which are used to describe the behavior of systems through communicating functional blocks. In industry, Simulink is the de facto modeling and analysis environment for embedded systems. It is also used to generate code automatically from special Simulink models for various hardware platforms. However, Simulink lacks a formal approach to verify large and hybrid Simulink models. Therefore, we also propose a formal verification of Simulink models, represented as stochastic timed automata, using statistical model checking, which has proven to scale for industrial applications. We validate our approaches on industrial use cases from the automotive industry. These include the Adjustable Speed Limiter (ASL) and Brake-By-Wire (BBW) systems from Volvo Group Trucks Technology, both safety-critical. / Verispec
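
As an illustration of the statistical model checking idea only (the thesis checks Simulink models translated to stochastic timed automata; none of that tooling appears here), the Python sketch below estimates by simulation the probability that a hypothetical brake-by-wire pipeline responds within a deadline, with the number of runs taken from the Chernoff-Hoeffding bound; all delays and parameters are invented.

    import math
    import random

    def simulate_response_time(rng):
        """One run of a hypothetical brake-by-wire pipeline: sensing, computation,
        actuation, each with a random delay (milliseconds)."""
        sensing = rng.uniform(1.0, 3.0)
        computation = rng.expovariate(1 / 4.0)      # mean 4 ms
        actuation = rng.uniform(2.0, 5.0)
        return sensing + computation + actuation

    def runs_needed(epsilon, delta):
        """Chernoff-Hoeffding: enough runs so the estimate is within epsilon of
        the true probability with confidence 1 - delta."""
        return math.ceil(math.log(2 / delta) / (2 * epsilon**2))

    def estimate(deadline_ms=15.0, epsilon=0.01, delta=0.05, seed=0):
        rng = random.Random(seed)
        n = runs_needed(epsilon, delta)
        hits = sum(simulate_response_time(rng) <= deadline_ms for _ in range(n))
        return hits / n, n

    p, n = estimate()
    print(f"P(response <= 15 ms) ~ {p:.3f} (+/- 0.01 with 95% confidence, {n} runs)")
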
190

Compilation formellement vérifiée de code C de bas-niveau / Formally verified compilation of low-level C code

Wilke, Pierre 09 November 2016 (has links)
Cette thèse présente une extension du compilateur CompCert permettant de fournir des garanties formelles de préservation sémantique à des programmes auxquels CompCert n'en donne pas. CompCert est un compilateur pour le langage C vers différentes architectures qui fournit, en plus d'un exécutable compilé, des garanties formelles concernant le comportement du programme assembleur généré. En particulier, tout programme C ayant une sémantique définie selon le standard C est compilé en un programme assembleur équivalent, c'est-à-dire qui a la même sémantique. En revanche, ce théorème n'assure aucune garantie lorsque le programme source n'a pas de sémantique définie : on parle en C de comportement indéfini. Toutefois, des programmes C issus de réels projets largement utilisés contiennent des comportements indéfinis. Cette thèse détaille dans un premier temps un certain nombre d'exemples de programmes C qui déclenchent des comportements indéfinis. Nous argumentons que ces programmes devraient tout de même bénéficier du théorème de préservation sémantique de CompCert, d'abord parce qu'ils apparaissent dans de vrais projets et parce que leur utilisation des comportements indéfinis semble légitime. Dans ce but, nous proposons d'abord un modèle mémoire pour CompCert qui définit l'arithmétique arbitraire de pointeurs et la manipulation de données non initialisées, à l'aide d'un formalisme de valeurs symboliques qui capturent la sémantique d'opérations non définies dans le standard. Nous adaptons l'intégralité du modèle mémoire de CompCert avec ces valeurs symboliques, puis nous adaptons les sémantiques formelles de chacun des langages intermédiaires de CompCert. Nous montrons que ces sémantiques symboliques sont un raffinement des sémantiques existantes dans CompCert, et nous montrons par ailleurs que ces sémantiques capturent effectivement le comportement des programmes sus-cités. Enfin, afin d'obtenir des garanties similaires à celles que CompCert fournit, nous devons adapter les preuves de préservation sémantique à notre nouveau modèle. Pour ce faire, nous généralisons d'importantes techniques de preuves comme les injections mémoire, ce qui nous permet de transporter les preuves de CompCert sur nos nouvelles sémantiques. Nous obtenons ainsi un théorème de préservation sémantique qui traite plus de programmes C. / This thesis presents an extension of the CompCert compiler that aims at providing formal guarantees about the compilation of more programs than CompCert does. The CompCert compiler compiles C code into assembly code for various architectures and provides formal guarantees about the behaviour of the compiled assembly program. It states that whenever the C program has a defined semantics, the generated assembly program behaves similarly. However, the theorem does not provide any guarantee when the source program has undefined semantics, or, in C parlance, when it exhibits undefined behaviour, even though those behaviours actually happen in real-world code. This thesis exhibits a number of C idioms, that occur in real-life code and whose behaviour is undefined according to the C standard. Because they happen in real programs, our goal is to enhance the CompCert verified compiler so that it also provides formal guarantees for those programs. To that end, we propose a memory model for CompCert that makes pointer arithmetic and uninitialised data manipulation defined, introducing a notion of symbolic values that capture the meaning of otherwise undefined idioms. 
We adapt the whole memory model of CompCert to this new formalism and adapt the semantics of all the intermediate languages. We prove that our enhanced semantics subsumes that of CompCert. Moreover, we show that these symbolic semantics capture the behaviour of the previously undefined C idioms. The proof of semantic preservation of CompCert needs to be reworked to cope with our model. We therefore generalize important proof techniques such as memory injections, which enable us to port the whole proof of CompCert to our new memory model, thus providing formal guarantees for more programs.
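
To give a concrete, if drastically simplified, feel for what symbolic values provide (this toy is in Python and is far removed from the Coq development of CompCert), the sketch below keeps otherwise-undefined operations, such as pointer alignment arithmetic, as symbolic expressions and normalises them only once a concrete allocation of blocks to addresses is chosen; all names are hypothetical.

    class Sym:
        """A symbolic value: an operator applied to sub-values (ints or Syms)."""
        def __init__(self, op, *args):
            self.op, self.args = op, args
        def __repr__(self):
            return f"{self.op}{self.args}"

    def ptr(block, offset=0):
        return Sym("ptr", block, offset)            # pointer = block + offset, address unknown

    def align_down(p, a):
        return Sym("and", p, ~(a - 1))              # low-level idiom p & ~(a-1), undefined in C

    def normalise(value, allocation):
        """Evaluate a symbolic value once every block has a concrete base address."""
        if isinstance(value, int):
            return value
        if value.op == "ptr":
            block, offset = value.args
            return allocation[block] + offset
        if value.op == "and":
            lhs, rhs = (normalise(a, allocation) for a in value.args)
            return lhs & rhs
        raise ValueError(f"unknown operator {value.op}")

    p = ptr("buf", 13)
    aligned = align_down(p, 8)                      # stays symbolic until memory is laid out
    print(aligned)                                  # and(ptr('buf', 13), -8)
    print(normalise(aligned, {"buf": 0x1000}))      # 0x100d & ~7 == 0x1008 == 4104
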
