31

Proof of security protocols revisited / Les preuves de protocoles cryptographiques revisitées

Scerri, Guillaume 29 January 2015 (has links)
With the rise of the Internet, the use of cryptographic protocols has become ubiquitous. Given the criticality and complexity of these protocols, formal verification is needed. Two main attacker models exist for proving protocols: the symbolic model and the computational model. The symbolic model defines the attacker's capabilities as a fixed set of rules, while the computational model only limits the attacker by stating that it may not solve certain hard problems. The symbolic model is quite abstract and convenient for automating proofs; the computational model offers much stronger guarantees. The gap between the guarantees offered by these two models stems from the fact that the symbolic model describes what the adversary may do, while the computational model describes what it may not do. In 2012 Bana and Comon devised a new symbolic model in which the attacker's limitations are axiomatised. Moreover, provided the computational semantics of the axioms follows from the cryptographic hypotheses, security in this symbolic model yields security in the computational model. The original paper left open the automation of proofs in this model (and the design of axioms general enough to prove a large class of protocols). In this thesis we give an efficient decision procedure for a general class of axioms, and we implement it in a tool (SCARY). Our experimental results show that the axioms we designed for modelling the security of encryption are general enough to prove a large class of protocols.
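To make the symbolic side of this contrast concrete, a Dolev-Yao style attacker can be pictured as closing its knowledge under a fixed set of deduction rules. The following minimal Python sketch is purely illustrative; the message encoding and rule set are assumptions chosen for the example, and this is neither the SCARY tool nor the Bana-Comon axioms.

```python
# Minimal sketch of a symbolic (Dolev-Yao style) attacker whose power is a
# fixed set of rules: analysis (taking messages apart), then synthesis
# (building messages up). Messages are atoms or tuples ("pair", m1, m2)
# and ("enc", m, k); the encoding is an assumption for this illustration.

def analyze(knowledge):
    """Close knowledge under projections and decryption with known keys."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(known):
            parts = set()
            if isinstance(m, tuple) and m[0] == "pair":
                parts = {m[1], m[2]}                  # take pairs apart
            elif isinstance(m, tuple) and m[0] == "enc" and m[2] in known:
                parts = {m[1]}                        # decrypt with a known key
            if not parts <= known:
                known |= parts
                changed = True
    return known

def derivable(t, known):
    """Can the attacker build term t from its analyzed knowledge?"""
    if t in known:
        return True
    if isinstance(t, tuple) and t[0] in ("pair", "enc"):
        return derivable(t[1], known) and derivable(t[2], known)
    return False

# Seeing {enc(s, k), k}, the attacker derives the secret s:
known = analyze({("enc", "s", "k"), "k"})
assert derivable("s", known)
```

In the Bana-Comon setting the direction is reversed: instead of listing what the attacker can do, one lists axioms it cannot violate, and anything consistent with those axioms counts as a possible attack.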
32

Efficient equational reasoning for the Inst-Gen Framework

Sticksel, Christoph January 2011 (has links)
We can classify several quite different calculi for automated reasoning in first-order logic as instantiation-based methods (IMs). Broadly speaking, unlike in traditional calculi such as resolution, where the first-order satisfiability problem is tackled by deriving logical conclusions, IMs attempt to reduce the first-order satisfiability problem to propositional satisfiability by intelligently instantiating clauses. The Inst-Gen-Eq method is an instantiation-based calculus which is complete for first-order clause logic modulo equality. Its distinctive feature is that it combines first-order reasoning with efficient ground satisfiability checking, which is delegated in a modular way to any state-of-the-art ground solver for satisfiability modulo theories (SMT). The first-order reasoning modulo equality employs a superposition-style calculus which generates the instances needed by the ground solver to refine a model of a ground abstraction or to witness unsatisfiability. The thesis addresses the main issue in the Inst-Gen-Eq method, namely efficient extraction of instances, while providing powerful redundancy elimination techniques. To that end we introduce a novel labelled unit superposition calculus with sets, AND/OR trees and ordered binary decision diagrams (OBDDs) as labels. The different label structures permit redundancy elimination to different extents. We prove completeness of redundancy elimination from labels and further integrate simplification inferences based on term rewriting. All presented approaches, in particular the three labelled calculi, are implemented in the iProver-Eq system and evaluated on standard benchmark problems.
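The reduction at the heart of instantiation-based methods can be pictured with the ground abstraction step: map every variable to one distinguished constant (often written ⊥) and hand the resulting propositional problem to a ground solver. The sketch below is a toy illustration under assumed clause encodings, not the iProver-Eq implementation.

```python
# Toy sketch of the Inst-Gen ground abstraction: replace every variable by
# a distinguished constant "_bot" and check the propositional abstraction.
# Encodings are assumptions made for illustration only.
from itertools import product

# A literal is (sign, predicate, args); variables are strings starting "?".
def ground(term):
    return "_bot" if isinstance(term, str) and term.startswith("?") else term

def abstract(clauses):
    """Map all variables to _bot, yielding ground (propositional) clauses."""
    return [
        [(sign, pred, tuple(ground(a) for a in args))
         for (sign, pred, args) in clause]
        for clause in clauses
    ]

def sat(ground_clauses):
    """Naive propositional satisfiability over the abstraction's atoms."""
    atoms = sorted({(p, a) for c in ground_clauses for (_, p, a) in c})
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(any(model[(p, a)] == sign for (sign, p, a) in c)
               for c in ground_clauses):
            return model
    return None

# {P(?x)} and {~P(?y), Q(?y)} abstract to {P(_bot)} and {~P(_bot), Q(_bot)}.
clauses = [[(True, "P", ("?x",))], [(False, "P", ("?y",)), (True, "Q", ("?y",))]]
print(sat(abstract(clauses)))
```

A model of the abstraction is then lifted back to the first-order level; where the lifting fails, new instances are generated and the loop repeats.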
33

Impact analysis in description logic ontologies

Goncalves, Joao Rafael Landeiro de Sousa January 2014 (has links)
With the growing popularity of the Web Ontology Language (OWL) as a logic-based ontology language, as well as advancements in the language itself, the need for more sophisticated and up-to-date ontology engineering services increases as well. While, for instance, there is active focus on new reasoners and optimisations, other services fall short of advancing at the same rate (it suffices to compare the number of freely-available reasoners with that of ontology editors). In particular, very little is understood about how ontologies evolve over time, and how reasoner performance varies as the input changes. Given the evolving nature of ontologies, detecting and presenting changes between them (via a so-called diff) is an essential engineering service, especially for version control systems or to support change analysis. In this thesis we address the diff problem for description logic (DL) based ontologies, specifically OWL 2 DL ontologies based on the SROIQ DL. The outcomes are novel algorithms employing both syntactic and semantic techniques to, first, detect axiom changes and which terms had their meaning affected between ontologies; second, categorise their impact (for example, determining that an axiom is a stronger version of another); and finally, align changes appropriately, i.e., align the source and target of axiom changes (the stronger axiom with the weaker one, in our example), and axioms with the terms they affect. Subsequently, we present a theory of reasoner performance heterogeneity, based on field observations of reasoner performance variability phenomena. Our hypothesis is that there exist two kinds of performance behaviour: an ontology/reasoner combination can be performance-homogeneous or performance-heterogeneous. Finally, we verify that performance-heterogeneous reasoner/ontology combinations contain small, performance-degrading sets of axioms, which we call hot spots. We devise a performance hot spot finding technique, and show that hot spots provide a promising basis for engineering efficient reasoners.
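As a toy picture of such a diff, the sketch below computes added and removed axioms and categorises an added axiom as a stronger version of a removed one when the former entails the latter but not vice versa. The entailment test covers only a tiny "conjunction of class names" fragment and stands in for a full OWL 2 DL reasoner; the whole encoding is an assumption made for illustration.

```python
# Toy sketch of an axiom diff between two ontology versions, in the spirit
# of a syntactic+semantic diff. The entailment check below handles only a
# tiny fragment and is a stand-in for a real DL reasoner.

def entails(ax1, ax2):
    """In the toy fragment C⊑D, with C and D sets of conjoined class names:
    C1⊑D1 entails C2⊑D2 whenever C1 ⊆ C2 and D2 ⊆ D1."""
    (c1, d1), (c2, d2) = ax1, ax2
    return c1 <= c2 and d2 <= d1

def diff(old, new):
    added, removed = new - old, old - new
    impact = []
    for a in added:
        for r in removed:
            if entails(a, r) and not entails(r, a):
                impact.append(("stronger", a, r))  # align stronger with weaker
            elif entails(r, a) and not entails(a, r):
                impact.append(("weaker", a, r))
    return added, removed, impact

S = frozenset
old = {(S({"Dog"}), S({"Animal"}))}
new = {(S({"Dog"}), S({"Animal", "Pet"}))}         # Dog ⊑ Animal ⊓ Pet
print(diff(old, new))  # the new axiom is a stronger version of the old one
```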
34

Neuronové modelování matematických struktur a jejich rozšíření / Neural modelling of mathematical structures and their extensions

Smolík, Martin January 2019 (has links)
In this thesis we aim to build algebraic models in a computer using machine learning methods, in particular neural networks. We start with a set of axioms that describe functions, constants and relations, and use them to train neural networks that approximate those operations. Every element is represented as a real vector, so that neural networks can operate on it. We also explore and compare different representations. The main focus of this thesis is groups. We train neural representations for cyclic (the simplest) and symmetric (the most complex) groups. Another part of this thesis consists of experiments with extending such trained models by introducing new "algebraic" elements, not unlike the classic extension of the rationals Q[√2].
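A minimal sketch of the idea, assuming PyTorch and the cyclic group Z_n: every element gets a learnable real vector, and a small network is trained so that applying it to the vectors of a and b lands near the vector of (a + b) mod n. The architecture, sizes and separation term are assumptions made for this illustration, not the thesis's models.

```python
# Minimal sketch: learn real-vector representations of Z_n together with a
# network approximating the group operation. All hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

n, dim = 12, 16
emb = nn.Embedding(n, dim)                      # one real vector per element
op = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, dim))
optim = torch.optim.Adam(list(emb.parameters()) + list(op.parameters()), lr=1e-2)

for step in range(2000):
    a = torch.randint(0, n, (256,))
    b = torch.randint(0, n, (256,))
    target = emb((a + b) % n)                   # vector of the true product
    pred = op(torch.cat([emb(a), emb(b)], dim=-1))
    # keep distinct element vectors apart, ruling out the collapsed solution
    sep = torch.relu(1.0 - torch.pdist(emb.weight)).mean()
    loss = ((pred - target) ** 2).mean() + sep
    optim.zero_grad()
    loss.backward()
    optim.step()

# Decode the network's output to the nearest element vector:
with torch.no_grad():
    v = op(torch.cat([emb(torch.tensor([3])), emb(torch.tensor([5]))], -1))
    print(torch.cdist(v, emb.weight).argmin().item())  # ideally 8 = (3+5) % 12
```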
35

Contributions to Formal Specification and Modular Verification of Parallel and Sequential Software

Weide, Alan January 2021 (has links)
No description available.
36

Mapping Inferences: Constraint Propagation and Diamond Satisfaction

Gennari, Rosella 12 1900 (has links)
The main theme shared by the two main parts of this thesis is efficient automated reasoning. Part I focuses on a general theory underpinning a number of efficient approximate algorithms for Constraint Satisfaction Problems (CSPs), the constraint propagation algorithms. In Chapter 3, we propose a Structured Generic Algorithm schema (SGI) for these algorithms. It iterates functions according to a certain strategy, i.e. by searching for a common fixpoint of the functions. A simple theory for SGI is developed by studying properties of functions and of the ways these influence the basic strategy. One of the primary objectives of our theorisation is thus the following: using SGI or some of its variations for describing and analysing how the "pruning" and "propagation" process is carried through by constraint propagation algorithms. Hence, in Chapter 4, different domains of functions (e.g., domain orderings) are related to different classes of constraint propagation algorithms (e.g., arc consistency algorithms); each class of constraint propagation algorithms is thus associated with a "type" of function domain, and so separated from the others. We then analyse each such class: we distinguish functions on the same domains by their different ways of performing pruning (point or set based), and consequently differentiate between algorithms of the same class (e.g., AC-1 and AC-3 versus AC-4 or AC-5). We also show how properties of functions (e.g., commutativity or stationarity) relate to different propagation strategies in constraint algorithms of the same class (see, for instance, AC-1 versus AC-3). In Chapter 5 we apply the SGI schema to soft CSPs (a generalisation of CSPs with a notion of preferences), thereby clarifying some of the similarities and differences between the "classical" and soft constraint propagation algorithms. Finally, in Chapter 6, we summarise and characterise all the functions used for constraint propagation; indeed, the other goal of our theorisation is to abstract which functions, iterated as in SGI or its variations, perform the task of "pruning" or "propagation" of inconsistencies in constraint propagation algorithms.

We focus on relations and relational structures in Part II of the thesis. More specifically, modal languages allow us to talk about various relational structures and their properties. Once the latter are formulated in a modal language, they can be passed to automated theorem provers and tested for satisfiability with respect to certain modal logics. Our task in this part can be described as follows: determining the satisfiability of modal formulas in an efficient manner. In Chapter 8, we focus on one way of doing this: we refine the standard translation into the layered translation, and use existing theorem provers for first-order logic on the output of this refined translation. We provide ample experimental evidence of the performance improvements obtained by means of the refinement. The refinement of the standard translation is based on the tree model property. This property is also used in the basic algorithm schema in Chapter 9 (the original schema is due to [seb97]). The proposed algorithm proceeds layer by layer in the modal formula and in its candidate models, applying constraint propagation and satisfaction algorithms for finite CSPs at each layer.

With Chapter 9, we wish to draw the attention of constraint programmers to modal logics, and of modal logicians to CSPs. Modal logics themselves express interesting problems in terms of relations and unary predicates, like temporal reasoning tasks. On the other hand, constraint algorithms manipulate relations in the form of constraints, and unary predicates in the form of domains or unary constraints (see Chapter 6). Thus the question of how efficiently those algorithms can be applied to modal reasoning problems seems quite natural and challenging.
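The flavour of the SGI schema, functions iterated until they reach a common fixpoint, can be seen in an AC-3 style sketch: each "revise" function prunes one domain, and a function is re-scheduled whenever the domain it reads has changed. The CSP encoding below is an assumption made for this illustration.

```python
# Sketch of constraint propagation as fixpoint iteration of functions, in
# the style of AC-3. Domains are sets; binary constraints are predicates.
from collections import deque

def revise(domains, x, y, allowed):
    """Prune values of x lacking a supporting value of y; returns True if
    the domain of x changed (i.e. the function moved toward the fixpoint)."""
    pruned = {vx for vx in domains[x]
              if not any(allowed(vx, vy) for vy in domains[y])}
    domains[x] -= pruned
    return bool(pruned)

def ac3(domains, constraints):
    """Iterate revise functions until a common fixpoint is reached."""
    queue = deque(constraints)
    while queue:
        x, y, allowed = queue.popleft()
        if revise(domains, x, y, allowed):
            # re-schedule every function whose input just changed
            queue.extend(c for c in constraints if c[1] == x)
    return domains

# x < y with domains {1..3}; the fixpoint is x ∈ {1, 2}, y ∈ {2, 3}
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
lt = lambda a, b: a < b
gt = lambda a, b: a > b
print(ac3(domains, [("x", "y", lt), ("y", "x", gt)]))
```

The variations of the schema then correspond to different choices of which function to apply next and of how pruning is performed (point or set based).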
37

Ontology as a means for systematic biology

Tirmizi, Syed Hamid Ali 03 July 2012 (has links)
Biologists use ontologies as a method to organize and publish their acquired knowledge. Computer scientists have shown the value of ontologies as a means for knowledge discovery. This dissertation makes a number of contributions to enable systematic biologists to better leverage their ontologies in their research. Systematic biology, or phylogenetics, is the study of evolution. “Assembling a Tree of Life” (AToL) is an NSF grand challenge to describe all life on Earth and estimate its evolutionary history. AToL projects commonly include the study of a taxon (organism) to create an ontology that captures its anatomy. Such anatomy ontologies are manually curated based on the data from morphology-based phylogenetic studies. Annotated digital imagery, morphological characters and phylogenetic (evolutionary) trees are the key components of morphological studies. Given the scale of AToL, building an anatomy ontology for each taxon manually is infeasible. The primary contribution of this dissertation is the automatic inference, and the concomitant formalization, required to compute anatomy ontologies. New anatomy ontologies are formed by applying transformations to an existing anatomy ontology for a model organism. The conditions for the transformations are derived from observational data recorded as morphological characters. We automatically created the Cypriniformes Gill and Hyoid Arches Ontology using the morphological character data provided by the Cypriniformes Tree of Life (CTOL) project. The method is based on representing all components of a phylogenetic study as an ontology using a domain meta-model. For this purpose we developed Morphster, a domain-specific knowledge acquisition tool for biologists. Digital images often serve as proxies for natural specimens and are the basis of many observations. A key problem for Morphster is the treatment of images in conjunction with ontologies. We contributed a formal system for integrating images with ontologies, in which images capture either observations of nature or scientific hypotheses. Our framework for image-ontology integration provides opportunities for building workflows that allow biologists to synthesize and align ontologies. Biologists building ontologies often had to choose between two ontology systems: Open Biomedical Ontologies (OBO) or the Semantic Web. It was critical to bridge the gap between the two systems to leverage biological ontologies for inference. We created a methodology and a lossless round-trip mapping for OBO ontologies to the Semantic Web. Using the Semantic Web as a guide to organize OBO, we developed a mapping system which is now a community standard.
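For a flavour of what an OBO-to-Semantic-Web mapping involves at its simplest, the sketch below turns an OBO [Term] stanza into RDF-style triples. The real community mapping is far more comprehensive (and lossless in both directions); this fragment is only an illustration with an assumed triple encoding.

```python
# Toy sketch of the flavour of an OBO -> RDF/OWL mapping: each [Term]
# stanza becomes a class with label and subclass triples. Illustrative only.

def obo_to_triples(text):
    triples, subj = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            subj = None                                   # new stanza
        elif line.startswith("id:"):
            subj = line[3:].strip()
            triples.append((subj, "rdf:type", "owl:Class"))
        elif subj and line.startswith("name:"):
            triples.append((subj, "rdfs:label", line[5:].strip()))
        elif subj and line.startswith("is_a:"):
            target = line[5:].split("!")[0].strip()       # drop OBO comment
            triples.append((subj, "rdfs:subClassOf", target))
    return triples

stanza = """[Term]
id: GO:0007601
name: visual perception
is_a: GO:0050877 ! nervous system process
"""
for t in obo_to_triples(stanza):
    print(t)
```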
38

Minimal model reasoning for modal logic

Papacchini, Fabio January 2015 (has links)
Model generation and minimal model generation are useful for tasks such as model checking, query answering and debugging of logical specifications. Due to this variety of applications, several minimality criteria and model generation methods for classical logics have been studied. Minimal model generation for modal logics, however, has not received the same attention from the research community. This thesis aims to fill this gap by investigating minimality criteria and designing minimal model generation procedures for all the sublogics of the multi-modal logic S5(m) and their extensions with universal modalities. All the procedures are minimal model sound and complete, in the sense that they generate all and only minimal models. The starting point of the investigation is the definition of a Herbrand semantics for modal logics, on which a syntactic minimality criterion is devised. The syntactic nature of the minimality criterion allows for an efficient minimal model generation procedure, but, on the other hand, the resulting minimal models can be redundant or semantically non-minimal with respect to each other. To overcome the syntactic limitations of the first minimality criterion, the thesis moves from minimal modal Herbrand models to semantic minimality criteria based on subset-simulation. First, theoretical procedures for the generation of models minimal modulo subset-simulation are presented; these are minimal model sound and complete, but they might not terminate. The minimality criterion and the procedures are then refined in such a way that termination can be ensured while preserving minimal model soundness and completeness.
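The flavour of a simulation-based minimality check can be seen in a small sketch: compute the greatest simulation between two finite Kripke models by starting from all label-compatible world pairs and pruning pairs that violate the transfer condition. The exact subset-simulation used in the thesis differs in its details; the encoding below is assumed for illustration.

```python
# Sketch: greatest simulation between two finite Kripke models, the kind of
# relation underlying simulation-based minimality criteria. A model is
# (worlds, successor map, labelling map); encoding assumed for illustration.

def greatest_simulation(m1, m2):
    w1, succ1, lab1 = m1
    w2, succ2, lab2 = m2
    # start from all pairs where the labels of u are covered by those of v
    sim = {(u, v) for u in w1 for v in w2 if lab1[u] <= lab2[v]}
    changed = True
    while changed:
        changed = False
        for (u, v) in list(sim):
            # transfer condition: every move of u must be matched by v
            if any(all((u2, v2) not in sim for v2 in succ2[v])
                   for u2 in succ1[u]):
                sim.discard((u, v))
                changed = True
    return sim

# m2's root simulates m1's root, so m1 adds nothing beyond m2:
m1 = ({"a", "b"}, {"a": ["b"], "b": []}, {"a": {"p"}, "b": {"q"}})
m2 = ({"x", "y"}, {"x": ["y"], "y": []}, {"x": {"p"}, "y": {"q", "r"}})
print(("a", "x") in greatest_simulation(m1, m2))  # True
```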
39

Ontološki zasnovana analiza semantičke korektnosti modela podataka primenom sistema automatskog rezonovanja / Ontology based semantic analyses of data model correctness by using automated reasoning system

Kazi Zoltan 09 June 2014 (has links)
This work presents a theoretical study and analysis of existing approaches and solutions in the area of data model validation and quality checking. A theoretical model of ontology-based analysis of the semantic correctness of data models by means of an automated reasoning system is created and practically implemented, and its validity is confirmed by the conducted experimental research. A software application was developed for formalizing data models and mapping ontologies into the form of Prolog clauses. Reasoning rules are formulated in first-order predicate logic and integrated with the data model and a domain ontology. The semantic correctness of the data model is then checked with queries within the Prolog system. Metrics of the ontological quality of a data model, based on the replies of the automated reasoning system, are also defined.
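The thesis performs the check with Prolog queries; to keep a single example language in this document, the sketch below expresses one such correctness check in Python over hypothetical facts extracted from a data model and a domain ontology. All predicate names and facts are made up for illustration.

```python
# Illustrative only: the thesis maps the data model and domain ontology to
# Prolog clauses and queries them; this sketch mimics one hypothetical
# correctness rule in Python. All predicates and facts are made up.

facts = {
    ("entity", "Invoice"),
    ("entity", "Customer"),
    ("attribute", "Invoice", "total"),
    ("attribute", "Invoice", "discount"),        # no ontology counterpart
    ("ontology_concept", "Invoice"),
    ("ontology_concept", "Customer"),
    ("ontology_property", "Invoice", "total"),
}

def semantically_correct(entity):
    """An entity is correct if it names an ontology concept and every one
    of its attributes names an ontology property of that concept."""
    if ("ontology_concept", entity) not in facts:
        return False
    attrs = {f[2] for f in facts if f[0] == "attribute" and f[1] == entity}
    return all(("ontology_property", entity, a) in facts for a in attrs)

# A failed check plays the role of a failed Prolog query:
for f in sorted(facts):
    if f[0] == "entity":
        print(f[1], "->", semantically_correct(f[1]))
# Customer -> True, Invoice -> False ("discount" is not in the ontology)
```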
40

Towards More Scalable and Practical Program Synthesis

Yanjun Wang (12240227) 29 April 2022 (has links)
Program synthesis aims to generate programs automatically from user-provided specifications and has the potential to aid users in real-world programming tasks from different domains. Although synthesis techniques have achieved great results in specific domains such as spreadsheet programming, computer-aided education and software engineering, huge barriers still keep us from achieving scalable and practical synthesis tools.

This dissertation presents several techniques towards more scalable and practical program synthesis, from three perspectives. 1) Intention: writing formal specifications for synthesis is a major barrier for average programmers. In particular, in some quantitative synthesis scenarios (such as network design), the first challenge faced by users is expressing their optimization targets. To address this problem, we present comparative synthesis, an interactive synthesis framework that learns near-optimal programs through comparative queries, without explicitly specified optimization targets. 2) Invention: synthesis algorithms are key to pushing the performance limit of program synthesis. Aiming to solve syntax-guided synthesis problems efficiently, we introduce a cooperative synthesis technique that combines the merits of enumerative and deductive synthesis. 3) Adaptation: besides functional correctness, the quality of generated code is another important aspect. Towards automated provably-correct optimization over tree traversals, we propose a stack-based representation for iterations in tree traversals and an encoding into Monadic Second-Order logic over trees, which enables reasoning about tree traversal transformations that was not possible before.
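For a flavour of the enumerative half of such a cooperative approach, the sketch below enumerates arithmetic expressions from a small grammar, smallest first, and returns the first one consistent with given input-output examples. The grammar and examples are assumptions for illustration, not the dissertation's benchmarks.

```python
# Sketch of enumerative synthesis: enumerate expressions from a tiny grammar
# in order of size and return the first consistent with the I/O examples.
from itertools import count

LEAVES = ["x", "1", "2"]
OPS = ["+", "*", "-"]

def exprs(size):
    """All expression strings with exactly `size` leaves."""
    if size == 1:
        yield from LEAVES
        return
    for left in range(1, size):
        for a in exprs(left):
            for b in exprs(size - left):
                for op in OPS:
                    yield f"({a} {op} {b})"

def synthesize(examples):
    for size in count(1):                       # smallest programs first
        for e in exprs(size):
            if all(eval(e, {"x": x}) == y for x, y in examples):
                return e

# Find a program mapping x to 2*x + 1:
print(synthesize([(0, 1), (1, 3), (4, 9)]))  # finds "(x + (x + 1))"
```

A deductive component would prune or decompose the search space using the specification instead of checking candidates one by one; combining the two is the point of the cooperative technique described above.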
