201

Applications of Formal Explanations in ML

Smyrnioudis, Nikolaos January 2023
The most performant Machine Learning (ML) classifiers have been labeled black boxes due to the complexity of their decision process. eXplainable Artificial Intelligence (XAI) methods aim to alleviate this issue by crafting an interpretable explanation for a model's prediction. A drawback of most XAI methods is that they are heuristic, which brings problems such as non-determinism and locality. Formal Explanations (FE) have been proposed as a way to explain the decisions of classifiers by extracting a set of features that guarantees the prediction. In this thesis we explore these guarantees for different use cases: speeding up inference for tree-based ML classifiers, curriculum learning using such classifiers, and reducing training data. We find that, under the right circumstances, we can achieve up to a 6x speedup by partially compiling the model into a set of rules extracted using formal explainability methods.
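As an illustration of the speedup idea (a minimal sketch, not code from the thesis: the feature names, thresholds, and classes are hypothetical), a formal explanation yields a subset of feature conditions that is sufficient to guarantee a prediction, and such rules can serve as a cheap fast path before falling back to the full model:

    /* Hypothetical feature vector for a binary classifier. */
    typedef struct {
        double age;
        double income;
        double debt_ratio;
    } Features;

    /* Placeholder for the full (comparatively expensive) ensemble traversal. */
    static int predict_full_model(const Features *x) {
        return (x->income > 20000.0 && x->debt_ratio < 0.5) ? 1 : 0;
    }

    /* One rule obtained from a formal explanation: the conditions below are
       sufficient to guarantee class 1 regardless of the remaining features,
       so they can be checked first as a partial compilation of the model. */
    static int predict_with_rules(const Features *x) {
        if (x->income > 50000.0 && x->debt_ratio < 0.3) {
            return 1;                      /* prediction guaranteed by the rule */
        }
        return predict_full_model(x);      /* fall back to the full model */
    }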
202

Validation of efficiency of formal verification methodology for verification closure

Prabhakar, Gautham January 2022
Verification of Application-Specific Integrated Circuits (ASIC) and Field-Programmable Gate Arrays (FPGA) is a time-consuming phase of the design flow cycle, and it can be done using methodologies such as the Universal Verification Methodology (UVM) and formal verification. UVM is simulation-based verification, in which the verifier has to exercise the Design Under Test (DUT) manually by writing sequences that target different features of the DUT; the verification environment can also contain verification directives such as assertions to spot design bugs. Formal verification, on the other hand, is purely assertion-based verification: the expected DUT behaviour is described using SystemVerilog Assertions (SVA), and design sanity is checked by exercising the assertions while letting the formal tool drive the inputs to the design in a constrained way. This completely eliminates the need to define sequences to drive the inputs. This thesis sheds light on formal verification using JasperGold and helps verifiers decide how much of the formal verification methodology can be used in the verification of an IP, with respect to the complexity of the design and the design behaviour to be verified. The results of this thesis show how efficient formal verification was, compared with simulation-based verification, at stressing the design to test corner-case behaviour. The reason formal verification cannot be extended to Top-Level Verification (TLV) and end-to-end functional verification is design complexity; this was also explored with the help of a complex Ethernet design. Finally, a guideline on when to use simulation-based verification and when to use formal verification was formulated.
203

Automated Inference of ACSL Contracts for Programs with Heaps

Söderberg, Oskar January 2023
Contract inference consists of automatically computing contracts that formally describe the behaviour of program functions. Contracts are used in deductive verification, a method for verifying whether a system behaves according to a provided specification. The Saida plugin in Frama-C is a contract inference tool for C code. This thesis explores an extension to the Saida plugin that adds support for pointers and heap allocations. The goal is to evaluate to what extent model checking tools can be used to infer contracts for deductive verification of programs that use pointers and heap allocations. This is done by proposing a translation strategy that converts contracts containing heap expressions, generated by the model checker TriCera, into ACSL, the specification language used by Frama-C. An implementation of this translation is evaluated on a set of verification tasks. The results demonstrate that the inferred contracts are sufficient to verify simple code samples, including features such as recursion, aliasing, and manipulation of heap-allocated structs. However, the results also reveal cases where the contracts are too weak, even though more information could be extracted in the translation. It is concluded that model checking tools can infer contracts for deductive verification of programs with pointers and heap allocations, but currently only to a limited extent. Several improvements to the translation strategy are proposed as future work.
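For readers unfamiliar with ACSL, the following is a minimal sketch of the kind of function contract over pointers that such an inference targets (the function and the contract are illustrative, not taken from the thesis):

    /*@ requires \valid(p) && \valid(q);
        requires \separated(p, q);
        assigns *p;
        ensures *p == \old(*q) + 1;
        ensures *q == \old(*q);
    */
    void copy_incr(int *p, const int *q) {
        /* The annotation above states which memory the function may modify
           and how the pointed-to values relate before and after the call. */
        *p = *q + 1;
    }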
204

Automated inference of ACSL function contracts using TriCera

Amilon, Jesper January 2021
This thesis explores synergies between deductive verification and model checking by using the existing model checker TriCera to automatically infer specifications for the deductive verifier Frama-C. To accomplish this, a formal semantics is defined for a subset of ANSI C extended with assume statements, called Csmall. It is then shown how a Hoare logic contract can be translated into statements in Csmall, and the defined formal semantics is used to prove that the translation is correct. Furthermore, it is shown that the translation can also be applied to a real specification language. This is done by defining a subset of ACSL, called ACSLsmall, and giving a formal semantics for it as well. Lastly, two examples are provided showing that the theory developed in this thesis can be applied to automatically infer ACSL function contracts.
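To illustrate the flavour of such a translation (a sketch under assumptions, not the thesis's exact construction: the function f, its contract, and the executable stand-in for assume are all hypothetical), a Hoare contract {x >= 0} r = f(x) {r > x} can be encoded as plain statements by assuming the precondition, running the function, and asserting the postcondition:

    #include <assert.h>
    #include <stdlib.h>

    /* Csmall-style assume: a language primitive in the thesis's setting.
       This plain-C stand-in simply discards executions that violate it. */
    static void assume(int cond) { if (!cond) exit(0); }

    /* Hypothetical function under the contract {x >= 0} r = f(x) {r > x}. */
    static int f(int x) { return x + 1; }

    /* The contract encoded as statements a model checker can analyse. */
    void check_f(int x) {
        assume(x >= 0);     /* precondition assumed */
        int r = f(x);
        assert(r > x);      /* postcondition asserted */
    }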
205

Proof-producing resolution of indirect jumps in the binary intermediate representation BIR

Westerberg, Adrian January 2021
HolBA is a binary analysis library that can be used to formally verify binary programs using contracts. It is developed in the interactive theorem prover HOL4 to achieve a high degree of trust in verification: the result of verification is a machine-checked proof demonstrating its correctness. This thesis presents two proof-producing procedures. The first resolves indirect jumps in BIR, the binary intermediate language used in HolBA, given their possible targets. The second transfers contracts proved on resolved BIR programs, which contain no indirect jumps, to the original programs containing indirect jumps. This allows the existing weakest-precondition generator to automatically prove contracts on loop-free BIR fragments containing indirect jumps. The implemented proof-producing procedures were evaluated on a small binary program and on generated synthetic BIR programs. It was found that the first proof-producing procedure is not very efficient, which could pose a problem when verifying large binary programs. Future work could include improving the efficiency of the first proof-producing procedure and integrating it with an external tool that automatically finds the possible targets of indirect jumps.
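As a rough C-level analogy of the first procedure (the actual work is done on BIR inside HOL4; the functions and the target set below are purely illustrative), resolving an indirect jump given its possible targets amounts to replacing the indirect transfer with an explicit case split over those targets, so that later analyses only see direct control flow:

    typedef int (*handler_t)(int);

    static int inc(int x) { return x + 1; }
    static int dbl(int x) { return 2 * x; }

    /* Original form: control transfers indirectly through a pointer, so the
       possible successors are not syntactically visible. */
    int dispatch_indirect(handler_t h, int x) {
        return h(x);
    }

    /* Resolved form, assuming the only possible targets are {inc, dbl}:
       the indirect transfer becomes an explicit branch over known targets. */
    int dispatch_resolved(handler_t h, int x) {
        if (h == inc) return inc(x);
        if (h == dbl) return dbl(x);
        return -1;  /* unreachable if the assumed target set is complete */
    }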
206

Formal security analysis of authentication in an asynchronous communication model

Wahlgren, Jacob, Yousefzadegan Hedin, Sam January 2020
Formal analysis of security protocols is becoming increasingly relevant. In formal analysis, a model of a protocol or system is created, and propositions about the security of the model are written. A program is then used to verify that the propositions hold, or to find examples where they do not. This report uses formal methods to analyse the authentication aspect of a protocol that allows private individuals, enterprises, and systems to securely and asynchronously share sensitive data. Unpublished, early drafts of the protocol were studied, and the algorithms described in them were verified with the help of the formal verification tool Tamarin Prover. The analysis revealed two replay attacks. Improvements to the protocol were suggested based on this analysis, and in later versions of the protocol these improvements have been implemented by the protocol developers.
207

Identification of atomic code blocks for model checking using deductive verification based abstraction

Vanhainen, Erik January 2024
Model checking is a formal verification technique for verifying temporal properties of state-transition models. The main problem with using model checking is the state explosion problem: the number of states in the model can grow exponentially, making verification infeasible. Previous work has tried to mitigate the state explosion problem by representing code blocks as Hoare-logic contracts in an abstract state-transition model using the temporal logic TLA. This is achieved by treating each block as atomic. To ensure that the abstract state-transition model is faithful with respect to the temporal properties to be verified, only some code blocks can be considered atomic. This thesis aims to answer how atomic code blocks can be identified automatically, and to evaluate their potential for reducing the state space during model checking. We give a theoretical foundation for what it means for a code block to be considered atomic in TLA. Moreover, we introduce a property that characterizes these atomic code blocks and present an algorithm to identify them in sequential programs written in a subset of C. Experimental results demonstrate that the identification of atomic code blocks using our algorithm can significantly reduce the state space during model checking, with an average reduction factor of 62. The potential of this verification approach is promising; however, further case studies are necessary to better understand the extent of this reduction across different program types and properties.
208

Equivalence checking of digital RTL design state sequences with high-level reference and communication protocol models

Castro Márquez, Carlos Iván 20 February 2014
Functional verification is the set of tasks aimed at discovering bugs introduced during integrated circuit design, and it represents an important challenge because of its strong influence on the efficiency of the whole production cycle. It is estimated that up to 80% of total design costs are due to verification, which makes verification the main bottleneck when attempting to reduce time-to-market. This problem has given rise to a series of techniques to reduce the effort or to increase verification coverage. On the one hand, simulation allows finding a good number of bugs, but it is still far from reaching high state coverage because of the slowness of cycle-accurate RTL simulation. On the other hand, formal approaches provide high state coverage. Model checking, for instance, checks the validity of a set of properties for all design states; however, a strong disadvantage lies in defining and determining the quality of the set of properties to verify, not to mention state explosion. Another formal alternative is sequential equivalence checking, which, instead of checking properties, compares the design with a reference model; traditionally, though, it can only be applied between circuit descriptions with a one-to-one correspondence between states, as well as between memory elements. Notably, no works were found in the literature that deal with formal verification of RTL designs while taking care of both the computational aspects, present in the high-level reference model, and the interface communication aspects, which come from the protocol's functional specification. This work presents a formal verification methodology that uses equivalence checking techniques to validate RTL descriptions through direct comparison with a high-level reference model and with a formal model of the communication protocol. It is based on extracting and comparing complete sequences of states, instead of single states as in traditional equivalence checking, in order to determine whether the design intent is maintained in the RTL implementation. The natural discrepancies between system-level and RTL code are considered, including non-matching interfaces and memory elements, state mapping, and process concurrency. For the complete characterization and solution of the problem, a theoretical framework is introduced in which concepts and definitions are provided and their validity is formally proved. A tool to apply the methodology systematically was developed and applied to different types of RTL descriptions written in VHDL and SystemC. The results show that the approach can be applied effectively and efficiently to formally verify digital circuits that include, but are not limited to, error correction, encryption, image processing, and mathematical functions. Evidence was also obtained of the tool's capacity to discover both combinational and sequential bugs injected on purpose, related to the functionality of the reference model as well as to the communication protocol specification, within times and numbers of iterations that are practical in real cases.
210

Model-oriented approach for verification and performance evaluation of service interoperability and interaction

Ait-Cheik-Bihi, Wafaa 21 June 2012
Web services are widely used by organizations to share their knowledge over the network and to facilitate business-to-business collaboration. The emergence of Web services has enabled applications to be presented as a set of well-structured and correctly described business services rather than as a set of objects and methods. However, combining Web services and making them interoperable, so as to satisfy user requests while taking into account functional and non-functional quality-of-service criteria, is a complex process. In this work, we focus specifically on location-based services (LBS), which integrate geographic information and provide information reachable from mobile devices over wireless networks by making use of the geographical positions of those devices. The aim of this work is to develop a model-oriented approach to specify, validate and implement service composition processes automatically, for road safety purposes in transportation. This approach is based on two formal tools, namely Petri nets (PN) and (max,+) algebra, used to model, verify and evaluate the performance of service composition processes. Workflow patterns are used to represent these composition processes: the behaviour of each pattern is modeled by a PN model and then by a (max,+) state equation. The developed formal models provide, on the one hand, a graphical and analytical description of the considered processes and, on the other hand, a quantitative and qualitative evaluation and verification of these processes. A platform, called TransportML, has been developed for the collaboration and interoperability of location-based services. The simulation results obtained from the formal models are compared with those obtained from trials of the platform and with those obtained from real experiments in the field. This thesis was carried out within the FP7 European projects ASSET (2008-2011) and TeleFOT (2008-2012).
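As an illustration of the kind of (max,+) state equations such patterns map to (a sketch, not the thesis's exact equations; $u$ denotes the request arrival time and $d_i$ the execution time of service $i$), recall that in (max,+) algebra $a \oplus b := \max(a,b)$ and $a \otimes b := a + b$. A sequence pattern of two services then completes at

    $x_{\mathrm{seq}} = u \otimes d_1 \otimes d_2 = u + d_1 + d_2,$

while a synchronization (AND-join) of two branches finishing at times $x_1$ and $x_2$, followed by a service of duration $d_3$, completes at

    $x_{\mathrm{join}} = (x_1 \oplus x_2) \otimes d_3 = \max(x_1, x_2) + d_3.$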
