21

Modelling and verification of web services protocols.

Ramsokul, Pemadeep Kumar, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008 (has links)
Among the plethora of solutions to the Business-to-Business interoperability problem, no other solution has obtained as much attention as Web Services Technology (WST), which allows entities to exchange data regardless of their underlying platforms. WST also allows services to be composed in order to provide high-quality customer service over the web. In order to perform transactions across different service providers, standard protocols need to be supported by the participating providers. Many useful protocols are coming onto the market, but they are often ambiguously specified by protocol designers and not fully verified. Furthermore, even if the specifications are reasonably clear, programmers often make subtle assumptions, possibly leading to errors that are hard to detect and locate, especially when the number of participating entities is dynamic. Consequently, these can lead to interoperability problems among implementations of the same protocol and to high software maintenance costs. To address these issues, a hierarchical automata-based framework is proposed to model the functional aspects of Web Services (WS) protocols; it also assists in verifying their correctness. The modelling formalism has a sound mathematical foundation and aims to reconcile desirable features while still maintaining syntactic and semantic simplicity. The properties to be verified are specified using a pattern system and/or 'observer' states, which have been adapted for WS protocols. In particular, remaining always in a positive observer state implies proper termination and partial functional correctness, while reachability of a negative observer state signifies deadlock and/or violation of a safety property. Verification itself is handled by automatic translation of the model and its properties into a model-checker's input code and interpretation of the output produced by the model-checker. A test-bed is proposed to check the conformance of a protocol implementation to its specification. It helps in locating errors in implementations of WS protocols, especially where the number of participating entities is dynamic. Conformance checking is achieved by capturing sequences of exchanged messages of the actual implementations and checking them against the formal specification. Experience using the framework is also described and illustrated using two non-trivial WS protocols, namely WS-BusinessActivity and WS-AtomicTransaction.
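As an illustration of the conformance-checking idea (capture the messages an implementation actually exchanges and check them against the formal specification), here is a minimal Python sketch; the states and messages are invented for illustration and do not come from the thesis's actual protocol models:

```python
# A minimal sketch, not the thesis's framework: checking a captured message
# trace against a finite-state protocol specification with positive (proper
# termination) and negative (deadlock / safety violation) observer states.

class ProtocolSpec:
    def __init__(self, transitions, initial, positive, negative):
        self.transitions = transitions  # (state, message) -> state
        self.initial = initial
        self.positive = positive        # "good termination" observer states
        self.negative = negative        # deadlock / safety-violation states

    def check_trace(self, trace):
        state = self.initial
        for msg in trace:
            key = (state, msg)
            if key not in self.transitions:
                return f"nonconformant: unexpected '{msg}' in state '{state}'"
            state = self.transitions[key]
            if state in self.negative:
                return f"violation: negative observer state '{state}' reached"
        if state in self.positive:
            return "conformant: terminated in a positive observer state"
        return f"incomplete: trace ended in intermediate state '{state}'"

# Toy two-phase-commit-like fragment, loosely inspired by WS-AtomicTransaction.
spec = ProtocolSpec(
    transitions={
        ("active", "prepare"): "preparing",
        ("preparing", "prepared"): "prepared",
        ("prepared", "commit"): "committed",
        ("preparing", "abort"): "aborted",
    },
    initial="active",
    positive={"committed", "aborted"},
    negative={"faulted"},
)
print(spec.check_trace(["prepare", "prepared", "commit"]))
```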
22

Analyse d'atteignabilité pour les programmes fonctionnels avec stratégie d'évaluation en profondeur / Reachability analysis for functional programs with innermost evaluation strategy

Salmon, Yann 07 December 2015 (has links)
Proving that programs behave correctly is difficult; one uses proof tools, which must rely on over-approximation (because of Rice's theorem). Automaton completion is such a tool, which over-approximates the set of reachable terms during the execution of a program represented as a term rewriting system (TRS). An evaluation strategy dictates which subterm of a term should be rewritten first; taking this into account allows for a better approximation. Our thesis sets forward an adaptation of automaton completion to the innermost strategy, which is used among others by OCaml. We prove the soundness and the precision of our adaptation and show how it is part of a greater framework for the analysis of functional programs (OCaml).
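For readers unfamiliar with evaluation strategies, the following toy Python sketch enumerates the terms reachable under innermost rewriting for a two-rule TRS; the thesis's automaton completion instead over-approximates the (possibly infinite) reachable set with a tree automaton:

```python
# A minimal sketch with a toy term representation, not the thesis's
# tree-automata completion: leftmost-innermost rewriting for Peano addition.

# Terms are nested tuples: ("zero",), ("s", t), or ("plus", t1, t2).
RULES = [
    # plus(zero, y) -> y
    (lambda t: t[0] == "plus" and t[1] == ("zero",), lambda t: t[2]),
    # plus(s(x), y) -> s(plus(x, y))
    (lambda t: t[0] == "plus" and t[1][0] == "s",
     lambda t: ("s", ("plus", t[1][1], t[2]))),
]

def innermost_step(term):
    """Return the term after one leftmost-innermost rewrite, or None."""
    # Innermost: try to rewrite the arguments before the root.
    for i, sub in enumerate(term[1:], start=1):
        reduced = innermost_step(sub)
        if reduced is not None:
            return term[:i] + (reduced,) + term[i + 1:]
    for matches, rewrite in RULES:
        if matches(term):
            return rewrite(term)
    return None

# Terms reachable from plus(s(zero), s(zero)), i.e. 1 + 1.
term = ("plus", ("s", ("zero",)), ("s", ("zero",)))
while term is not None:
    print(term)
    term = innermost_step(term)
```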
23

HOLCF '11: A Definitional Domain Theory for Verifying Functional Programs

Huffman, Brian Charles 01 January 2011 (has links)
HOLCF is an interactive theorem proving system that uses the mathematics of domain theory to reason about programs written in functional programming languages. This thesis introduces HOLCF '11, a thoroughly revised and extended version of HOLCF that advances the state of the art in program verification: HOLCF '11 can reason about many program definitions that are beyond the scope of other formal proof tools, while providing a high degree of proof automation. The soundness of the system is ensured by adhering to a definitional approach: New constants and types are defined in terms of previous concepts, without introducing new axioms. Major features of HOLCF '11 include two high-level definition packages: the Fixrec package for defining recursive functions, and the Domain package for defining recursive datatypes. Each of these uses the domain-theoretic concept of least fixed points to translate user-supplied recursive specifications into safe low-level definitions. Together, these tools make it easy for users to translate a wide variety of functional programs into the formalism of HOLCF. Theorems generated by the tools also make it easy for users to reason about their programs, with a very high level of confidence in the soundness of the results. As a case study, we present a fully mechanized verification of a model of concurrency based on powerdomains. The formalization depends on many features unique to HOLCF '11, and is the first verification of such a model in a formal proof tool.
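The translation of recursive specifications into least fixed points rests on the standard domain-theoretic construction: the fixed point of a continuous function is the limit of its Kleene chain bottom, F(bottom), F(F(bottom)), and so on. A minimal Python sketch of that mathematical idea (an illustration only, not HOLCF's actual mechanism):

```python
# A recursive definition as the least fixed point of a functional,
# approximated by the Kleene chain. Partial functions are dicts; the
# everywhere-undefined function (bottom) is the empty dict.

def F(f):
    """Functional whose least fixed point is factorial on the naturals."""
    g = {0: 1}
    for n, v in f.items():
        g[n + 1] = (n + 1) * v
    return g

approx = {}  # bottom
for _ in range(6):
    approx = F(approx)  # each step defines factorial on one more input
print(approx)  # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24, 5: 120}
```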
24

A model checker for the LF system

Gerber, Erick D. B. 03 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2007. / Computer aided verification techniques, such as model checking, can be used to improve the reliability of software. Model checking is an algorithmic approach to illustrate the correctness of temporal logic specifications in the formal description of hardware and software systems. In contrast to traditional testing tools, model checking relies on an exhaustive search of all the possible configurations that these systems may exhibit. Traditionally, model checking is applied to abstract or high-level designs of software. However, interpreting or translating these abstract designs to implementations often introduces subtle errors. In recent years, one trend in model checking has been to apply the model checking algorithm directly to the implementations instead. This thesis is concerned with building an efficient model checker for a small concurrent language developed at the University of Stellenbosch. This special-purpose language, LF, is aimed at the development of small embedded systems. The design of the language was carefully considered to promote safe programming practices. Furthermore, the language and its runtime support system were designed to allow directly model checking LF programs. To achieve this, the model checker extends the existing runtime support infrastructure to generate the state space of an executing LF program.
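The core of such a model checker is an exhaustive search of the state space. A minimal Python sketch of safety checking by breadth-first exploration, assuming an abstract successor-function interface rather than the LF runtime:

```python
# A minimal sketch, not the LF system's checker: exhaustive exploration of
# all reachable configurations, returning a counterexample path if a state
# violates the safety property.

from collections import deque

def check_safety(initial, successors, is_bad):
    seen = {initial}
    queue = deque([(initial, [initial])])
    while queue:
        state, path = queue.popleft()
        if is_bad(state):
            return path  # counterexample trace
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # property holds in every reachable state

# Toy system: two counters incremented nondeterministically; bad if both hit 3.
succ = lambda s: [t for t in ((s[0] + 1, s[1]), (s[0], s[1] + 1))
                  if t[0] <= 3 and t[1] <= 3]
print(check_safety((0, 0), succ, lambda s: s == (3, 3)))
```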
25

Structured arrows : a type-based framework for structured parallelism

Castro, David January 2018 (has links)
This thesis deals with the important problem of parallelising sequential code. Despite the importance of parallelism in modern computing, writing parallel software still relies on many low-level and often error-prone approaches. These low-level approaches can lead to serious execution problems such as deadlocks and race conditions. Due to the non-deterministic behaviour of most parallel programs, testing parallel software can be both tedious and time-consuming. A way of providing guarantees of correctness for parallel programs would therefore provide significant benefit. Moreover, even if we ignore the problem of correctness, achieving good speedups is not straightforward, since this generally involves rewriting a program to consider a (possibly large) number of alternative parallelisations. This thesis argues that new languages and frameworks are needed. These languages and frameworks must not only support high-level parallel programming constructs, but must also provide predictable cost models for these parallel constructs. Moreover, they need to be built around solid, well-understood theories that ensure that: (a) changes to the source code will not change the functional behaviour of a program, and (b) the speedup obtained by doing the necessary changes is predictable. Algorithmic skeletons are parametric implementations of common patterns of parallelism that provide good abstractions for creating new high-level languages, and also support frameworks for parallel computing that satisfy the correctness and predictability requirements set out above. This thesis presents a new type-based framework, based on the connection between structured parallelism and structured patterns of recursion, that provides parallel structures as type abstractions that can be used to statically parallelise a program. Specifically, this thesis exploits hylomorphisms as a single, unifying construct to represent the functional behaviour of parallel programs, and to perform correct code rewritings between alternative parallel implementations, represented as algorithmic skeletons. This thesis also defines a mechanism for deriving cost models for parallel constructs from a queue-based operational semantics. In this way, we can provide strong static guarantees about the correctness of a parallel program, while simultaneously achieving predictable speedups.
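A hylomorphism is simply a fold composed with an unfold, so any divide-and-conquer computation fits the pattern. A minimal Python sketch (illustrating the recursion scheme, not the thesis's typed framework), using merge sort as the worked example:

```python
# hylo = fold after unfold: unfold builds the call tree, fold consumes it.

def hylo(unfold, fold, seed):
    """unfold: seed -> (node, [subseeds]); fold: (node, [results]) -> result."""
    node, subseeds = unfold(seed)
    return fold(node, [hylo(unfold, fold, s) for s in subseeds])

# Merge sort as a hylomorphism: unfold splits the list, fold merges results.
def split(xs):
    if len(xs) <= 1:
        return xs, []                     # leaf: nothing left to divide
    mid = len(xs) // 2
    return None, [xs[:mid], xs[mid:]]     # internal node: two subproblems

def merge(leaf, sorted_halves):
    if not sorted_halves:
        return leaf
    left, right = sorted_halves
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

print(hylo(split, merge, [5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```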
26

Three essays on the interface of computer science, economics and information systems

Hidvégi, Zoltán Tibor, 1970- 28 August 2008 (has links)
This thesis looks at three aspects related to the design of E-commerce systems, online auctions and distributed grid computing systems. We show how formal verification techniques from computer science can be applied to ensure correctness of system design and implementation at the code level. Through an e-ticket sales example, we demonstrate that model checking can locate subtle but critical flaws that traditional control and auditing methods (e.g., penetration testing, analytical procedures) would most likely miss. Auditors should understand formal verification methods, push engineers to use them to create designs with less chance of failure, and even practice formal verification themselves in order to offer credible control and assistance for critical e-systems. Next, we study why many online auctions offer fixed buy prices, to understand why sellers and auctioneers voluntarily limit the surplus they can get from an auction. We show that when either the seller or the bidders are risk-averse, a properly chosen fixed permanent buy-price can increase the social surplus without decreasing the expected utility of the sellers and bidders, and we characterize the unique equilibrium strategies of uniformly risk-averse buyers in a buy-price auction. In the final chapter we look at the design of a distributed grid-computing system. We show how code instrumentation can be used to generate a witness of program execution, and how this witness can be used to audit the work of self-interested grid agents. Using a trusted intermediary between grid providers and customers, the audit allows payment to be contingent on successful audit results, and it creates a verified reputation history of grid providers. We show that enabling the free trade of reputations provides economic incentives for agents to perform the computations assigned, and it induces increasing effort levels as the agents' reputation increases. We show that in such a reputation market only high-type agents would have an incentive to purchase a high reputation, and only low-type agents would use low reputations; thus the market works as a natural signaling mechanism about agents' types.
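One plausible reading of witness generation by code instrumentation is a hash chain over intermediate states, which lets an auditor spot-check a provider's claimed execution. The following Python sketch is an assumption-laden illustration, not the thesis's actual protocol:

```python
# A minimal sketch (assumed hash-chain witness design): the grid worker logs
# a digest after each computation step; an auditor recomputes digests at
# chosen checkpoints and compares them to the submitted trail.

import hashlib

def instrumented_run(steps, state):
    """Run each step, chaining a digest of every intermediate state."""
    witness = hashlib.sha256(repr(state).encode()).hexdigest()
    trail = [witness]
    for step in steps:
        state = step(state)
        witness = hashlib.sha256((witness + repr(state)).encode()).hexdigest()
        trail.append(witness)
    return state, trail

def audit(steps, state, trail, checkpoints):
    """Replay the computation and compare digests at the checkpoints.
    (A toy: here the auditor replays everything; a real scheme would
    re-execute only the spot-checked segments.)"""
    _, full = instrumented_run(steps, state)
    return all(full[i] == trail[i] for i in checkpoints)

steps = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
result, trail = instrumented_run(steps, 10)
print(result, audit(steps, 10, trail, checkpoints=[1, 3]))  # 19 True
```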
27

The Omnibus language and integrated verification approach

Wilson, Thomas January 2007 (has links)
This thesis describes the Omnibus language and its supporting framework of tools. Omnibus is an object-oriented language which is superficially similar to the Java programming language but uses value semantics for objects and incorporates a behavioural interface specification language. Specifications are defined in terms of a subset of the query functions of the classes, for which a frame-condition logic is provided. The language is well suited to the specification of modelling types and can also be used to write implementations. An overview of the language is presented, and then specific aspects such as subtleties in the frame-condition logic, the implementation of value semantics and the role of equality are discussed. The challenges of reference semantics are also discussed. The Omnibus language is supported by an integrated verification tool which provides support for three assertion-based verification approaches: run-time assertion checking, extended static checking and full formal verification. The different approaches provide different balances between rigour and ease of use. The Omnibus tool allows these approaches to be used together in different parts of the same project. Guidelines are presented in order to help users avoid conflicts when using the approaches together. The use of the integrated verification approach to meet two key requirements of safe software component reuse (clear descriptions and some form of certification) is discussed, along with the specialised facilities provided by the Omnibus tool to manage the distribution of components. The principles of the implementation of the tool are described, focussing on the integrated static verifier module that supports both extended static checking and full formal verification through the use of an intermediate logic. The different verification approaches are used to detect and correct a range of errors in a case study carried out using the Omnibus language. The case study is of a library system where copies of books, CDs and DVDs are loaned out to members. The implementation consists of 2278 lines of Omnibus code spread over 15 classes. To allow direct comparison of the different assertion-based verification approaches considered, run-time assertion checking, extended static checking and then full formal verification are applied to the application in its entirety. This directly illustrates the different balances between error coverage and ease of use which the approaches offer. Finally, the verification policy system is used to allow the approaches to be used together to verify different parts of the application.
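Of the three approaches, run-time assertion checking is the lightest. A minimal Python sketch of contract checking in the design-by-contract style (a generic illustration, not Omnibus's specification language), using the library case study's loan operation as the example:

```python
# A generic pre/postcondition checker: assertions are evaluated at run time,
# the lightest of the three verification approaches the thesis integrates.

def contract(pre=lambda *a: True, post=lambda result, *a: True):
    def wrap(f):
        def checked(*args):
            assert pre(*args), f"precondition of {f.__name__} violated"
            result = f(*args)
            assert post(result, *args), f"postcondition of {f.__name__} violated"
            return result
        return checked
    return wrap

@contract(pre=lambda copies, member: copies > 0,
          post=lambda result, copies, member: result == copies - 1)
def loan_copy(copies, member):
    """Loan one copy of an item to a member (toy library-system example)."""
    return copies - 1

print(loan_copy(3, "alice"))   # 2
# loan_copy(0, "bob")          # would raise: precondition violated
```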
28

Improving the Precision of a Scalable Demand-Driven Null-Dereference Verification for Java

Margoor, Amogh January 2013 (has links) (PDF)
The problem addressed in this thesis is sound, scalable, demand-driven null-dereference verification for Java programs via over-approximated weakest-preconditions analysis. The base version of this analysis was described in a previous publication; in this thesis we focus primarily on two major optimizations we have incorporated that allow longer program paths to be traversed more efficiently, hence increasing the precision of the approach. The first optimization is to bypass certain expensive-to-analyze constructs, such as virtual calls with too many possible targets, by directly transferring dataflow facts from points after the construct to points before, along def-use edges of a certain kind. The second optimization is to use manually constructed summaries of Java container-class methods, rather than analyze the code of these methods directly. We evaluate our approach using 10 real-world Java programs, as well as several microbenchmarks. We demonstrate that our optimizations result in a 45% reduction in false positives over the base version on the real programs, without significant impact on running time.
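The second optimization can be pictured as replacing the analysis of library code with a hand-written transfer function. The following Python sketch is a toy backward "may be null" propagation over a straight-line IR invented for illustration; the paper's actual analysis is far more elaborate:

```python
# A toy sketch, not the paper's analysis: demand-driven backward propagation
# of "may this variable be null here?" facts, with a hand-written summary for
# a container method standing in for analyzing the library code itself.

# Instructions: (op, target, args). A summary maps a queried fact about the
# call's result to facts about its arguments (the second optimization).
SUMMARIES = {
    # v = map.get(k): result may be null if the map may hold null values.
    "map_get": lambda tgt, args, fact: ({args[0] + ".values_may_be_null"}
                                        if fact == tgt else {fact}),
}

def may_be_null(program, query_var):
    """Walk backwards from the end, asking whether query_var may be null."""
    facts = {query_var}
    for op, tgt, args in reversed(program):
        new = set()
        for fact in facts:
            if op == "assign" and fact == tgt:
                new.add(args[0])          # x = y: ask about y instead
            elif op == "new" and fact == tgt:
                pass                      # x = new T(): definitely non-null
            elif op in SUMMARIES:
                new |= SUMMARIES[op](tgt, args, fact)
            else:
                new.add(fact)
        facts = new
    return bool(facts)  # facts unresolved at entry: conservatively "may be null"

prog = [("new", "m", ["HashMap"]),
        ("map_get", "v", ["m"]),
        ("assign", "w", ["v"])]
print(may_be_null(prog, "w"))  # True: w flows from map.get, which may be null
```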
29

Enforcement à l'éxécution de propriétés temporisées / Runtime enforcement of timed properties

Pinisetty, Srinivas 23 January 2015 (has links)
Runtime enforcement is a verification/validation technique aiming at correcting possibly incorrect executions of a system of interest. It is a powerful technique to ensure that a running system satisfies some desired properties. Using an enforcement monitor, an (untrustworthy) input execution (in the form of a sequence of events) is modified into an output sequence that complies with a property. Over the last decade, runtime enforcement has been mainly studied in the context of untimed properties. In this thesis, we consider enforcement monitoring for systems where the physical time elapsing between actions matters. Executions are thus modeled as sequences of events composed of actions with dates (called timed words). We consider runtime enforcement for timed specifications modeled as timed automata, in the general case of regular timed properties. The proposed enforcement mechanism has the power of both delaying events to match timing constraints, and suppressing events when no delaying is appropriate, thus allowing the enforcement mechanisms and systems to continue executing. To ease their design and correctness proofs, enforcement mechanisms are described at several levels: enforcement functions that specify the input-output behavior in terms of transformations of timed words, constraints that should be satisfied by such functions, enforcement monitors that describe the operational behavior of enforcement functions, and enforcement algorithms that describe the implementation of enforcement monitors. The feasibility of enforcement monitoring for timed properties is validated by prototyping the synthesis of enforcement monitors from timed automata. We also show the usefulness of enforcement monitoring of timed specifications for several application domains.
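A minimal Python sketch of the two enforcement powers, delaying and suppressing, for a single invented timing constraint (a toy property, not a monitor synthesised from a timed automaton):

```python
# A toy enforcement monitor: delay events to respect a minimum gap between
# consecutive actions, and suppress an event when delaying cannot help
# (here: an action outside the allowed alphabet).

def enforce(timed_word, min_gap=2.0, allowed=("req", "ack")):
    """timed_word: list of (date, action). The output complies by construction."""
    output, last_date = [], None
    for date, action in timed_word:
        if action not in allowed:
            continue                      # suppress: no delay can fix this
        if last_date is not None and date - last_date < min_gap:
            date = last_date + min_gap    # delay to satisfy the constraint
        output.append((date, action))
        last_date = date
    return output

trace = [(0.0, "req"), (0.5, "ack"), (1.0, "hack"), (5.0, "req")]
print(enforce(trace))
# [(0.0, 'req'), (2.0, 'ack'), (5.0, 'req')]
```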
30

The Fixpoint checking problem: an abstraction refinement perspective

Ganty, Pierre 28 September 2007 (has links)
Model-checking is an automated technique which aims at verifying properties of computer systems. A model-checker is fed with a model of the system (which captures all its possible behaviors) and a property to verify on this model. Both are given in a convenient mathematical formalism such as, for instance, a transition system for the model and a temporal logic formula for the property.

For several reasons (model-checking is undecidable for the class of model, or it needs too many resources for the model at hand), model-checking may not be applicable. For safety properties (which basically say "nothing bad happens"), a solution to this problem uses a simpler model for which model-checkers might terminate without too many resources. This simpler model, called the abstract model, over-approximates the behaviors of the concrete model. However, the abstract model might be too imprecise. In fact, if the property is true on the abstract model, the same holds on the concrete one. On the contrary, when the abstract model violates the property, either the violation is reproducible on the concrete model, and so we have found an error; or it is not reproducible, and the model-checker is said to be inconclusive. Inconclusiveness stems from the over-approximation of the concrete model by the abstract model. A precise model thus leads the model-checker to conclude, but precision generally comes with an increased computational cost.

Recently, a lot of work has been done to define abstraction refinement algorithms. Those algorithms automatically compute abstract models which are refined as long as the model-checker is inconclusive. In the thesis, we give a new abstraction refinement algorithm which applies to safety properties. We compare our algorithm with previous attempts to build abstract models automatically and show, using formal proofs, that our approach has several advantages. We also give several extensions of our algorithm which allow the integration of existing model-checking techniques such as acceleration techniques.

Following a rigorous methodology, we then instantiate our algorithm for a variety of models ranging from finite-state transition systems to infinite-state transition systems. For each of those models we prove that the instantiated algorithm terminates and provide encouraging preliminary experimental results.
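The refinement loop can be sketched generically. Below, a toy CEGAR-style loop in Python over predicate abstraction; the spuriousness check by full concrete reachability is a shortcut for illustration, since a real tool replays the abstract counterexample instead:

```python
# A generic abstraction-refinement sketch, not the thesis's algorithm: the
# partition induced by a growing set of predicates is refined as long as the
# abstract model-check is inconclusive.

def abstract_reach(states, init, succ, preds):
    alpha = lambda s: tuple(p(s) for p in preds)
    reached, frontier = {alpha(init)}, [alpha(init)]
    while frontier:
        block = frontier.pop()
        for s in states:                  # existential abstraction: a block
            if alpha(s) == block:         # steps to the blocks of its
                nxt = alpha(succ(s))      # members' successors
                if nxt not in reached:
                    reached.add(nxt)
                    frontier.append(nxt)
    return reached, alpha

def concrete_reach(init, succ):
    seen, frontier = {init}, [init]
    while frontier:
        t = succ(frontier.pop())
        if t not in seen:
            seen.add(t)
            frontier.append(t)
    return seen

def cegar(states, init, succ, bad, candidates):
    preds = []
    while True:
        reached, alpha = abstract_reach(states, init, succ, preds)
        if not any(alpha(b) in reached for b in bad):
            return "safe", len(preds)   # the abstract proof carries over
        if bad & concrete_reach(init, succ):
            return "unsafe", len(preds) # genuine counterexample
        preds.append(candidates.pop(0)) # spurious: refine the partition

# States 0..7 stepping by +2 mod 8: only even states are reachable from 0,
# so the bad state 5 is unreachable, but proving it needs the parity predicate.
print(cegar(set(range(8)), 0, lambda s: (s + 2) % 8, {5},
            [lambda s: s % 2 == 0]))    # ('safe', 1)
```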
