1

Animation and Visualisation of Refinements

Robinson, Neil John Unknown Date
Specification animation has become a popular technique in industry, particularly for validation in model-based design processes. Animation tools provide the ability to explore and visualise the behaviour of a model without needing to study its internal workings. Formal refinement techniques should also be of interest to industry since they support verifiably correct transformations of system models towards implementation. So far, however, refinement techniques are not widely used. Their application requires a high degree of mathematical skill, even with the currently available tool support. Better tool support is needed to make refinement techniques accessible to industry. In this thesis we investigate the application of existing specification animation and visualisation tools to problems in refinement theory. We show how animation and visualisation can be used to support verification, by refinement, and validation, by comparing the behaviour of a refined specification against its abstract specification. Such techniques can be used to explain and/or improve the understanding of a refinement and to check for the presence of errors in a refinement, for example, before attempting a proof. In the most challenging cases, data refinements, the designer needs to supply an abstraction relation in order to prove the refinement. We initially assume that an abstraction relation is provided as an input to the verification and validation tasks. However, finding abstraction relations is hard, and is currently a matter of trial and error. We therefore study the problem of finding abstraction relations. We show that, if an abstraction relation exists, there is always a unique weakest abstraction relation and at least one minimal abstraction relation, and we describe algorithms for finding both the weakest abstraction relation and minimal abstraction relations. These algorithms can be applied to small finite-state systems to produce abstraction relations in terms of explicit values of state variables. We then investigate a symbolic algorithm for finding abstraction relations, which can be applied to systems with infinite states, to produce abstraction relations in predicate form. The theory and the algorithms we develop thus make it possible for us to extend our animation-based verification and validation techniques so that they can be used without providing a complete abstraction relation. Additionally our extended techniques can help a designer construct an abstraction relation or check a proposed one.
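As a rough illustration of the weakest-abstraction-relation idea described in this abstract (not the algorithm developed in the thesis), the Python sketch below shrinks the full relation between the states of two small finite-state systems to a greatest fixpoint under a simple simulation-style reading of refinement. The systems, state names and operations are invented for the example.

```python
# A hedged sketch, not the thesis's actual algorithm: compute the largest
# relation between concrete and abstract states such that every concrete
# step is matched by an abstract step between related states.

def weakest_abstraction_relation(conc, abst, ops):
    """conc/abst map (state, op) to a set of successor states."""
    c_states = {s for (s, _) in conc} | {t for ts in conc.values() for t in ts}
    a_states = {s for (s, _) in abst} | {t for ts in abst.values() for t in ts}
    rel = {(c, a) for c in c_states for a in a_states}   # start from the full relation

    def respects(c, a, rel):
        for op in ops:
            for c2 in conc.get((c, op), set()):
                # the abstract state must be able to match this concrete step
                if not any((c2, a2) in rel for a2 in abst.get((a, op), set())):
                    return False
        return True

    changed = True
    while changed:                       # greatest fixpoint: shrink until stable
        changed = False
        for pair in list(rel):
            if not respects(pair[0], pair[1], rel):
                rel.discard(pair)
                changed = True
    return rel

# Toy example: a modulo-4 counter against an even/odd abstraction.
conc = {(i, 'tick'): {(i + 1) % 4} for i in range(4)}
conc.update({(0, 'check'): {0}, (2, 'check'): {2}})       # 'check' only in even states
abst = {('even', 'tick'): {'odd'}, ('odd', 'tick'): {'even'},
        ('even', 'check'): {'even'}}
print(sorted(weakest_abstraction_relation(conc, abst, ['tick', 'check']), key=str))
```

On this toy input the surviving pairs relate the even counter values to 'even' and the odd ones to 'odd', i.e. the largest relation under which every concrete step is matched abstractly.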
2

Abstraction of infinite and communicating CSPZ processes

FARIAS, Adalberto Cajueiro de 31 January 2009
This thesis addresses a very common problem in formal verification: state explosion. The problem prevents the automatic verification of properties by model checking. It is overcome through data abstraction, in which the state space of a system is reduced using a simple principle: discarding details in such a way that the state space becomes finite while still exhibiting the desired properties. This enables the use of model checking, since the simpler (abstract) model can be used in place of the original (concrete) one. However, abstractions may lose properties, since the level of precision is degraded for some of them. Abstracting data types is usually a non-trivial task and requires deep expertise: the user must provide abstract domains, a mathematical relation between the (concrete and abstract) states, an abstract initialization, and an abstract version of each operation. The approach proposed in this thesis transfers most of this expertise to a systematic procedure that calculates abstraction relations. These relations are the basis for the mathematical relations between the states, and their images determine the abstract domains (the minimal data values needed to preserve properties). We also propose meta-models that establish how the abstract system is initialized and how operations are made closed under the abstract domains. This removes the need for the user to supply abstract versions of the initialization and the operations, and the meta-models guarantee the correspondence between the concrete and abstract systems. Thus, we derive abstract specifications from concrete ones in such a way that, by construction, the concrete specification is more deterministic than the abstract one. This is the idea behind the theory underlying our data abstraction approach: data refinement. The notation adopted is CSPZ, a formal integration of the specification languages CSP and Z. A CSPZ specification has two parts: a behavioural part (CSP) and a data part (Z). The calculation procedure focuses on the Z part, but the results are used in the CSPZ specification as a whole; this follows from the data independence of the CSP part (the data cannot affect its behaviour). Finally, automatic verification is obtained by converting the CSPZ specification into pure CSP and then reusing the standard CSP model checker. Our approach comprises the following tasks: we extract the Z part of a CSPZ specification (a purely syntactic step), calculate the abstraction relations (through a systematic, tool-supported analysis of predicates), build the mathematical relations between the states and the abstract schemas (the latter defined by the meta-models), and post-process the abstract specification. The last task may result in some adjustments to the abstraction relations. The practical novelty and main contribution of our approach is the systematic calculation of the abstraction relations, which are the key elements of all the data abstraction approaches we have studied over recent years. The data refinement between the system produced by our approach and the original (concrete) one is the second contribution of this work.
The systematic procedure is in fact a predicate analysis technique that uses the constraints on the data to determine the minimal data values sufficient to preserve the behaviour of the system. This avoids (concrete or symbolic) execution of the system under analysis. The steps produce mappings that reveal some crucial elements: the abstract state space and the mathematical relations between it and the concrete state space. These relations are used to build the abstract system following the format established by the meta-models. The limitations of our approach are also discussed. We apply the approach to some examples also analysed by other techniques in the literature, and we discuss related work, highlighting advantages, disadvantages and complementary aspects. Finally, we present our conclusions and future directions for this work.
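As a rough sketch of the data-abstraction principle this abstract builds on (not the thesis's calculation procedure for CSPZ), the following Python fragment collapses an unbounded counter onto a three-valued abstract domain and lifts its operations so that they are closed under that domain; the lifted 'dec' becomes nondeterministic, which is the usual price of abstraction. Domain names and operations are invented for the example.

```python
# A hedged illustration of data abstraction: a counter over the naturals
# is mapped onto {ZERO, ONE, MANY}, and its operations are lifted so that
# every concrete step is reflected by the abstract (possibly
# nondeterministic) operation.

def alpha(n):
    """Abstraction function from concrete counter values to the abstract domain."""
    return 'ZERO' if n == 0 else 'ONE' if n == 1 else 'MANY'

# Lifted operations: each abstract value maps to the set of possible results.
ABS_INC = {'ZERO': {'ONE'}, 'ONE': {'MANY'}, 'MANY': {'MANY'}}
ABS_DEC = {'ONE': {'ZERO'}, 'MANY': {'ONE', 'MANY'}}   # 'dec' is blocked at ZERO

def inc(n):
    return n + 1

def dec(n):
    if n == 0:
        raise ValueError('dec is not applicable at 0')
    return n - 1

def check_soundness(op_concrete, op_abstract, sample):
    """Every concrete step must be reflected by the lifted operation."""
    for n in sample:
        try:
            n2 = op_concrete(n)
        except ValueError:
            continue                      # operation not applicable concretely
        assert alpha(n2) in op_abstract.get(alpha(n), set()), (n, n2)

check_soundness(inc, ABS_INC, range(50))
check_soundness(dec, ABS_DEC, range(50))
print('lifted operations are consistent with the abstraction on the sample')
```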
3

Syntéza povrchových dat pro digitální modely terénu / Synthesis of digital landscape surface data

Šebesta, Michal January 2016
Procedural generation of landscapes often calls for real spatial data at a finer resolution than the data currently available. We introduce a method that refines coarse-resolution spatial data to a finer resolution by utilizing other data sources that are already available at the better resolution. We construct weighted local linear statistical models from both the coarse and the utility data, and use the dependencies between the data sources learned by these models to predict the needed data at the better resolution. To achieve higher computational speed and to mitigate imperfections in the utility data, we apply truncated singular value decomposition, which reduces the dimensionality of the data space we work with. The method is highly modifiable and produces plausible, realistic results. Thanks to this, it can be of practical use in simulation software development.
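As a hedged sketch of the general scheme described in this abstract (not the thesis's implementation), the following Python fragment refines coarse-resolution target values using utility data available at the fine resolution: a truncated SVD reduces the utility-feature dimensionality, and a distance-weighted local linear model fitted on the coarse samples predicts the target at each fine location. The data, shapes and parameters are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: sample locations, utility features and a coarse target.
coarse_xy = rng.uniform(0, 10, size=(200, 2))        # coarse sample locations
fine_xy = rng.uniform(0, 10, size=(1000, 2))         # locations to predict
utility_coarse = rng.normal(size=(200, 12))          # utility features at coarse points
utility_fine = rng.normal(size=(1000, 12))           # utility features at fine points
target_coarse = utility_coarse[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)

# Truncated SVD: keep only the k strongest directions of the utility data.
k = 4
_, _, vt = np.linalg.svd(utility_coarse, full_matrices=False)
proj = vt[:k].T                                      # (12, k) projection matrix
z_coarse = utility_coarse @ proj
z_fine = utility_fine @ proj

def predict(fine_i, bandwidth=2.0):
    """Weighted local linear prediction for one fine-resolution location."""
    d = np.linalg.norm(coarse_xy - fine_xy[fine_i], axis=1)
    w = np.exp(-(d / bandwidth) ** 2)                # Gaussian distance weights
    X = np.hstack([np.ones((len(z_coarse), 1)), z_coarse])
    sw = np.sqrt(w)[:, None]                         # weighted least squares via row scaling
    beta, *_ = np.linalg.lstsq(X * sw, target_coarse * sw[:, 0], rcond=None)
    return np.concatenate(([1.0], z_fine[fine_i])) @ beta

refined = np.array([predict(i) for i in range(len(fine_xy))])
print(refined[:5])
```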
4

Un système de types pragmatique pour la vérification déductive des programmes / A Pragmatic Type System for Deductive Software Verification

Gondelman, Léon 13 December 2016
This thesis is set in the context of deductive program verification and aims to formalize a number of concepts implemented in the verification tool Why3. The general idea is to explore what a type-system-based approach can bring to verification. We first consider the notion of ghost code, a technique implemented in many modern verification tools, which consists in giving elements of the specification the appearance of operational code. Using ghost code correctly requires great care, since it must never interfere with the rest of the code. The first chapter is devoted to a formalization of ghost code, illustrating how a type system with effects allows it to be used in a way that is both correct and expressive. The second chapter addresses the verification of programs that manipulate pointers. In the presence of aliasing, that is, when several pointers handled by a program denote the same memory cell, specification and verification become non-trivial. Rather than following existing approaches that tackle the aliasing problem in full generality but leave the framework of Hoare logic, we present a type system with effects and singleton regions that performs static alias control before proof obligations are even generated. Although this type system restricts us to pointers whose identity can be known statically, our observation is that it accommodates the great majority of the programs one wishes to verify. Finally, we address questions raised by the verification of programs designed in a modular way. Concretely, we consider the situation where there is an abstraction barrier between the user's code and the libraries it depends on: the libraries provide the user with a set of functions and data structures without revealing the details of their implementation, and the user's code can manipulate this data only through the functions provided. In such a situation, verification can itself be modular. On the user's side, verification relies only on the type invariants and function contracts exposed by the libraries; on the library side, verification must guarantee that the concrete representation correctly refines the exposed entities, that is, that it preserves the type invariants and function contracts. The third chapter explores how a type system with static alias control can be adapted to this setting of modular verification and data refinement.
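As a toy illustration of the ghost-code discipline mentioned in this abstract (not Why3's actual type system), the Python sketch below tags values as ghost or operational and rejects any program in which ghost data flows into an operational result; the program representation and the single rule it checks are deliberate simplifications.

```python
# A toy non-interference check for ghost code: ghost variables may read
# operational data, but operational variables must never depend on ghost data.

from dataclasses import dataclass

@dataclass
class Var:
    name: str
    ghost: bool

@dataclass
class Assign:
    target: Var
    sources: list          # variables read on the right-hand side

def check_ghost_noninterference(program):
    """Raise TypeError if an operational variable depends on a ghost one."""
    for stmt in program:
        reads_ghost = any(v.ghost for v in stmt.sources)
        if reads_ghost and not stmt.target.ghost:
            raise TypeError(
                f"ghost data flows into operational variable '{stmt.target.name}'")
    return True

x = Var('x', ghost=False)
n_calls = Var('n_calls', ghost=True)           # ghost counter used only in specifications
y = Var('y', ghost=False)

ok_program = [Assign(n_calls, [n_calls, x]),   # ghost code may read operational data
              Assign(y, [x])]
bad_program = [Assign(y, [x, n_calls])]        # operational result depends on ghost data

check_ghost_noninterference(ok_program)
try:
    check_ghost_noninterference(bad_program)
except TypeError as e:
    print('rejected:', e)
```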
5

From specification through refinement to implementation : a comparative study

Van Coppenhagen, Ingrid H. M. 30 June 2002
This dissertation investigates the role of specification, refinement and implementation in the software development cycle. Both the structured and object-oriented paradigms are considered. Particular emphasis is placed on the role of the refinement process. The requirements for the product (system) are determined, the specifications are drawn up, and the product is designed, specified, implemented and tested. The stage between the (formal) specification of the system and its implementation is the refinement stage. The refinement process consists of data refinement, operation refinement, and operation decomposition. In this dissertation, Z, Object-Z and UML (Unified Modelling Language) are used as specification languages, and C, C++, Cobol and Object-Oriented Cobol are used as implementation languages. As an illustration, a small system, The ITEM System, is specified in Z and UML and implemented in Object-Oriented Cobol. / Computing / M. Sc. (Information Systems)
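As a small, hedged illustration of the data refinement step this dissertation discusses (written in Python rather than Z, Object-Z or Cobol), the sketch below refines an abstract specification that stores a set of items by a concrete implementation that stores a list, and checks the commuting condition for the 'add' operation on a few small states via a retrieve relation. The names and the exhaustive check are illustrative only.

```python
# Toy data refinement: abstract state = set of items, concrete state = list.
# The retrieve relation (here a function) maps a concrete list to the set of
# its elements; applying the concrete operation and then retrieving must give
# the same abstract state as retrieving and then applying the abstract one.

from itertools import product

def retrieve(concrete_list):
    return frozenset(concrete_list)

def abstract_add(s, item):
    return s | {item}

def concrete_add(lst, item):
    return lst if item in lst else lst + [item]

items = [0, 1, 2]
small_lists = [list(p) for n in range(3) for p in product(items, repeat=n)]
for lst, item in product(small_lists, items):
    assert retrieve(concrete_add(lst, item)) == abstract_add(retrieve(lst), item)
print(f'checked {len(small_lists) * len(items)} cases: concrete add refines abstract add')
```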
