About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Le domaine abstrait des polyèdres revisité : représentation par contraintes et preuve formelle / Revisiting the abstract domain of polyhedra : constraints-only representation and formal proof

Fouilhé, Alexis 15 October 2015 (has links)
The work reported in this thesis revisits in two ways the abstract domain of polyhedra used for static analysis of programs. First, strong guarantees are provided on the soundness of the operations on polyhedra, by using the Coq proof assistant to check the soundness proofs. The means used to ensure correctness don't hinder the performance of the resulting Verimag Polyhedra Library (VPL). It is built on the principle of result verification: computations are performed by an untrusted oracle and their results are verified by a checker whose correctness is proved in Coq. In order to make verification cheap, the oracle computes soundness witnesses along with the results. The other distinguishing feature of VPL is that it relies only on the constraint representation of polyhedra, as opposed to the common practice of using both constraints and generators. Despite this unusual choice, VPL turns out to be a competitive abstract domain of polyhedra, performance-wise. As expected, the join operator of VPL, which computes the convex hull of two polyhedra, is the costliest operator. Since it builds on the projection operator, this thesis also investigates a new approach to performing projections, based on parametric linear programming. A new understanding of projection encoded as a parametric linear problem is presented. The thesis closes on a progress report on the design of a new solving algorithm, tailored to the specifics of the encoding so as to achieve good performance.
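The result-verification principle described above can be illustrated with a small sketch (Python here purely for illustration; VPL itself is written in OCaml with its checker proved in Coq, and the function name below is made up, not its API). The untrusted oracle claims that a constraint is entailed by a polyhedron given in constraint form and supplies nonnegative Farkas multipliers as the witness; the trusted part only checks the witness instead of redoing the computation.

```python
from fractions import Fraction

def check_entailment_witness(A, b, c, beta, lam):
    """Check a Farkas-style witness that c . x <= beta holds on {x : A x <= b}.

    A is a list of constraint rows, b the corresponding bounds, and lam the
    nonnegative multipliers supplied by the untrusted oracle.  The checker
    only verifies the witness; it never redoes the oracle's computation.
    """
    if len(lam) != len(A) or any(l < 0 for l in lam):
        return False
    n = len(c)
    # lam^T A must reproduce the coefficients of the claimed constraint.
    combined = [sum(Fraction(l) * Fraction(row[j]) for l, row in zip(lam, A))
                for j in range(n)]
    if combined != [Fraction(x) for x in c]:
        return False
    # lam^T b must not exceed beta: the combined bound is at least as strong.
    return sum(Fraction(l) * Fraction(bi) for l, bi in zip(lam, b)) <= Fraction(beta)

# Example: from x <= 2 and y <= 3, the oracle claims x + y <= 5
# with witness multipliers (1, 1).
A = [[1, 0], [0, 1]]
b = [2, 3]
print(check_entailment_witness(A, b, c=[1, 1], beta=5, lam=[1, 1]))  # True
```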
2

A Calculus of Complex Zonotopes for Invariance and Stability Verification of Hybrid Systems / Un calcul des zonotopes complexes pour l'invariance et la vérification de la stabilité des systèmes hybrides

Adimoolam, Santosh Arvind 16 May 2018 (has links)
Computing reachable sets is a de facto approach used in many formal verification methods for hybrid systems. But exact computation of the reachable set is an intractable problem for many kinds of hybrid systems, either due to undecidability or high computational complexity. Alternatively, quite a lot of research has been focused on using set representations that can be efficiently manipulated to compute a sufficiently accurate over-approximation of the reachable set. Zonotopes are a useful set representation in reachability analysis because of their closure and low complexity for computing linear transformation and Minkowski sum operations. But for approximating the unbounded-time reachable sets by positive invariants, zonotopes have the following drawback.
The effectiveness of a set representation for computing a positive invariant depends on efficiently encoding the directions for convergence of the states to an equilibrium. In an affine hybrid system, some of the directions for convergence can be encoded by the complex-valued eigenvectors of the transformation matrices. But the zonotope representation cannot exploit the complex eigenstructure of the transformation matrices because it only has real-valued generators. Therefore, we extend real zonotopes to the complex-valued domain in a way that can capture contraction along complex-valued vectors. This yields a new set representation called complex zonotope. Geometrically, complex zonotopes represent a wider class of sets that include some non-polytopic sets as well as polytopic zonotopes. They retain the merit of real zonotopes that we can efficiently perform linear transformation and Minkowski sum operations and compute the support function. Additionally, we show that they can capture contraction along complex-valued eigenvectors. Furthermore, we develop computationally tractable approximations for inclusion checking and intersection with half-spaces. Using these set operations on complex zonotopes, we develop convex programs to verify linear invariance properties of discrete-time affine hybrid systems and exponential stability of linear impulsive systems. Our experiments on some benchmark examples demonstrate the efficiency of the verification techniques based on complex zonotopes.
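To make the zonotope operations named above concrete, here is a minimal numpy sketch of an ordinary real zonotope with the linear transformation, Minkowski sum, and support function operations; the thesis's contribution is to allow complex-valued generators, which this toy class does not attempt to reproduce.

```python
import numpy as np

class Zonotope:
    """Real zonotope {c + G @ xi : ||xi||_inf <= 1}, a minimal sketch.

    The thesis generalizes the generator matrix G to complex values; here we
    only illustrate the real operations mentioned in the abstract.
    """
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)
        self.G = np.asarray(generators, dtype=float)  # shape (dim, num_generators)

    def linear_map(self, M):
        # Image under x -> M x: map the center and every generator.
        M = np.asarray(M, dtype=float)
        return Zonotope(M @ self.c, M @ self.G)

    def minkowski_sum(self, other):
        # Centers add, generator lists concatenate.
        return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

    def support(self, d):
        # max over the set of d . x, computed generator by generator.
        d = np.asarray(d, dtype=float)
        return float(d @ self.c + np.abs(d @ self.G).sum())

# A unit box around the origin, rotated and summed with itself.
Z = Zonotope([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
R = [[0.0, -1.0], [1.0, 0.0]]
print(Z.linear_map(R).minkowski_sum(Z).support([1.0, 0.0]))  # 2.0
```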
3

Generická syntéza invariantů v programu založená na šablonách / Generic Template-Based Synthesis of Program Abstractions

Marušák, Matej January 2019 (has links)
The goal of this thesis is the design and implementation of a generic strategy solver for the 2LS tool. 2LS is an analyzer for the static verification of programs written in C. Using abstract interpretation, the verified program is analyzed by an SMT solver. The conversion from the abstract program state into a logical formula that the SMT solver can work with is performed by a component called a strategy solver. Currently, one such solver exists for each abstract domain. The proposed solution creates a single generic strategy solver, which simplifies the creation of new domains. At the same time, the proposed approach makes it possible to convert the existing domains and thus reduces the size of the analyzer.
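As a rough illustration of the strategy-solver idea (not 2LS's actual C++ interface; it assumes the z3 Python bindings are installed and uses a made-up interval template), the sketch below lets a domain supply only a parametric template, while a generic loop queries the SMT solver for counterexamples to inductiveness and weakens the template parameter from the returned model.

```python
from z3 import Int, IntVal, Solver, And, sat

x, x_next = Int('x'), Int('x_next')

def interval_template(var, d):
    """Template supplied by an interval-style domain: var <= d."""
    return var <= IntVal(d)

def generic_strategy_solver(template, init_value, transition, max_iters=100):
    d = init_value
    for _ in range(max_iters):
        s = Solver()
        # Counterexample to inductiveness: a state inside the template whose
        # successor falls outside it.
        s.add(template(x, d), transition, x_next > IntVal(d))
        if s.check() != sat:
            return d                      # template is inductive
        d = s.model()[x_next].as_long()   # weaken using the model
    raise RuntimeError("no inductive template found within the iteration bound")

# Loop body: while (x < 10) x = x + 1;  initial state x = 0.
transition = And(x < 10, x_next == x + 1)
print(generic_strategy_solver(interval_template, 0, transition))  # prints 10
```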
4

Checking Compatibility of Programs on Shared Data

Pranavadatta, DN January 2011 (has links) (PDF)
A large software system is built by composing multiple programs, possibly developed independently. The component programs communicate by sharing data. Data sharing involves creation of instances of the shared data by one program, called the producer, and their interpretation by another program, called the consumer. Valid instances of shared data and their correct interpretation are usually specified by a protocol or a standard that governs the communication. If a consumer misinterprets or does not handle some instances of data produced by a producer, it is called a data compatibility bug. Such bugs manifest as various forms of runtime errors that are difficult to find and fix. In this work, we define various compatibility relations, between both producer-consumer programs and version-related programs, that characterize various subtle requirements for correct sharing of data. We design and implement a static analysis to infer types and guards over elements of shared data, and the results are used for automatic compatibility checking. As case studies, we consider two widely used forms of shared data: the TIFF structure, used to store TIFF directory attributes in memory, and the IEEE 802.11 MAC frame header, which forms the layer 2 header in wireless LAN communication. We analyze and check compatibility of 6 pairs of producer-consumer programs drawn from the transmit-receive code of Linux WLAN drivers of 3 different vendors. In the setting of version-related programs, we analyze a total of 48 library and utility routines of 2 pairs of TIFF image library (libtiff) versions. We successfully identify 5 known bugs and 1 new bug. For two of the known bugs, bug fixes are available and we verify that they resolve the compatibility issues.
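The core compatibility relation, that every instance the producer can create must be interpreted by the consumer, can be sketched as a toy check over inferred value guards; the field name and values below are invented for illustration, and the real work of the thesis is the static analysis that infers such guards from C code.

```python
# A toy illustration of the compatibility check described above: guards over a
# shared field are represented as finite value sets (the thesis infers them
# statically; field names and values here are hypothetical).

def compatible(producer_guards, consumer_guards):
    """Every instance the producer can create must be handled by the consumer."""
    bugs = []
    for field, produced in producer_guards.items():
        handled = consumer_guards.get(field, set())
        unhandled = produced - handled
        if unhandled:
            bugs.append((field, sorted(unhandled)))
    return bugs

# Hypothetical guards inferred for a frame-header field: the producer may emit
# subtype 2, which the consumer's receive path never interprets.
producer = {"frame_subtype": {0, 1, 2}}
consumer = {"frame_subtype": {0, 1}}
print(compatible(producer, consumer))  # [('frame_subtype', [2])]
```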
5

Two-phase WCET analysis for cache-based symmetric multiprocessor systems

Tsoupidi, Rodothea Myrsini January 2017 (has links)
The estimation of the worst-case execution time (WCET) of a task is a problem that concerns the field of embedded systems and, especially, real-time systems. Estimating a safe WCET for single-core architectures without speculative mechanisms is a challenging task and an active research topic. However, the advent of advanced hardware mechanisms, which often lack predictability, complicates the current WCET analysis methods. The field of embedded systems has high safety considerations and is, therefore, conservative with speculative mechanisms. However, nowadays, even safety-critical applications move in the direction of multiprocessor systems. In a multiprocessor system, each task that runs on a processing unit might affect the execution time of the tasks running on different processing units. In shared-memory symmetric multiprocessor systems, this interference occurs through the shared memory and the common bus. The presence of private caches introduces cache-coherence issues that result in further dependencies between the tasks.
The purpose of this thesis is twofold: (1) to evaluate the feasibility of an existing one-pass WCET analysis method with an integrated cache analysis and (2) to design and implement a cache-based multiprocessor WCET analysis by extending the single-core method. The single-core analysis is part of KTH's Timing Analysis (KTA) tool. The WCET analysis of KTA uses Abstract Search-based WCET Analysis, a one-pass technique that is based on abstract interpretation. The evaluation of the feasibility of this analysis includes the integration of microarchitecture features, such as cache and pipeline, into KTA. These features are necessary for extending the analysis to hardware models of modern embedded systems. The multiprocessor analysis of this work uses the single-core analysis in two stages to estimate the WCET of a task running in the presence of temporally and spatially interfering tasks. The first phase records the memory accesses of all the temporally interfering tasks, and the second phase uses this information to perform the multiprocessor WCET analysis. The multiprocessor analysis assumes the presence of private caches and a shared communication bus and implements the MESI protocol to maintain cache coherence.
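As a toy sketch of the two-phase structure described above (not KTA's implementation; the block identifiers and the coherence penalty are invented), phase one collects the memory blocks the temporally interfering tasks may touch, and phase two inflates the single-core WCET with a penalty for each access of the analysed task that may hit a shared block.

```python
def phase1_record_interference(interfering_task_traces):
    """Union of memory blocks that temporally interfering tasks may access."""
    shared = set()
    for trace in interfering_task_traces:
        shared.update(trace)
    return shared

def phase2_wcet(task_accesses, base_wcet, shared_blocks, coherence_penalty=30):
    """Inflate the single-core WCET with a penalty per possibly-shared access."""
    extra = sum(coherence_penalty for block in task_accesses
                if block in shared_blocks)
    return base_wcet + extra

# Hypothetical traces: two interfering tasks touch blocks A, B and B, C;
# the analysed task touches A, B, D, so two accesses pay the penalty.
shared = phase1_record_interference([["A", "B"], ["B", "C"]])
print(phase2_wcet(task_accesses=["A", "B", "D"], base_wcet=1200,
                  shared_blocks=shared))  # 1260
```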
