1

A General View of Normalisation through Atomic Flows

Gundersen, Tom, 10 November 2009
Atomic flows are a geometric invariant of classical propositional proofs in deep inference. In this thesis we use atomic flows to describe new normal forms of proofs, of which the traditional normal forms are special cases; we also give several normalisation procedures for obtaining these normal forms. We define, and use to present our results, a new deep-inference formalism called the functorial calculus, which is more flexible than the traditional calculus of structures. To our surprise, we are able to (1) normalise proofs without looking at their logical connectives or logical rules, and (2) normalise proofs in less than exponential time.
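For orientation, the atomic structural rules whose instances atomic flows trace can be stated as follows. This is the down-fragment of the standard deep-inference system SKS as presented in the literature; the notation is not taken from the thesis itself:

    \[
    \mathsf{ai{\downarrow}}:\ \frac{\mathsf{t}}{a \vee \bar{a}}
    \qquad
    \mathsf{aw{\downarrow}}:\ \frac{\mathsf{f}}{a}
    \qquad
    \mathsf{ac{\downarrow}}:\ \frac{a \vee a}{a}
    \]

together with their duals (atomic cut, atomic co-weakening, atomic co-contraction). An atomic flow records only the instances of these six rules along each atom occurrence, which is what makes it possible to normalise without inspecting logical connectives or logical rules.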
2

Graphical representation of canonical proof: two case studies

Heijltjes, Willem Bernard, January 2012
An interesting problem in proof theory is to find representations of proof that do not distinguish between proofs that are ‘morally’ the same. For many logics, the presentation of proofs in a traditional formalism, such as Gentzen’s sequent calculus, introduces artificial syntactic structure called ‘bureaucracy’; e.g., an arbitrary ordering of freely permutable inferences. A proof system that is free of bureaucracy is called canonical for a logic. In this dissertation two canonical proof systems are presented, for two logics: a notion of proof nets for additive linear logic with units, and ‘classical proof forests’, a graphical formalism for first-order classical logic.

Additive linear logic (or sum–product logic) is the fragment of linear logic consisting of linear implication between formulae constructed only from atomic formulae and the additive connectives and units. Up to an equational theory over proofs, the logic describes categories in which finite products and coproducts occur freely. A notion of proof nets for additive linear logic is presented, providing canonical graphical representations of the categorical morphisms and constituting a tractable decision procedure for this equational theory. Starting from existing proof nets for additive linear logic without units by Hughes and Van Glabbeek (modified to include the units naively), canonical proof nets are obtained by a simple graph rewriting algorithm called saturation. The main technical contributions are the substantial correctness proof of the saturation algorithm and a correctness criterion for saturated nets.

Classical proof forests are a canonical, graphical proof formalism for first-order classical logic. Related to Herbrand’s Theorem and backtracking games in the style of Coquand, the forests assign witnessing information to quantifiers in a structurally minimal way, reducing a first-order sentence to a decidable propositional one. A similar formalism, ‘expansion tree proofs’, was presented by Miller, but without a method of composition. The present treatment adds a notion of cut and investigates the possibility of composing forests via cut-elimination. Cut-reduction steps take the form of a rewrite relation that arises naturally from the structure of the forests. Yet reductions are intricate and initially not well-behaved: from perfectly ordinary cuts, reduction may reach unnaturally configured cuts that cannot be reduced further. Cut-elimination is shown using a modified version of the rewrite relation, inspired by the game-theoretic interpretation of the forests, for which weak normalisation is shown and strong normalisation is conjectured. In addition, by a more intricate argument, weak normalisation is also shown for the original reduction relation.
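As a small worked instance of this quantifier-to-propositional reduction, consider the ‘drinker’ sentence; the example illustrates Herbrand’s Theorem and is chosen here for exposition, not taken from the dissertation:

    \[ \exists x\,(P(x) \rightarrow \forall y\,P(y)) \]

Replacing the universal quantifier by a fresh function symbol \(g\) gives \(\exists x\,(P(x) \rightarrow P(g(x)))\), and the two witnesses \(c\) and \(g(c)\) produce the propositional tautology

    \[ (P(c) \rightarrow P(g(c))) \;\vee\; (P(g(c)) \rightarrow P(g(g(c)))) \]

which a truth table verifies directly. Expansion tree proofs and classical proof forests record exactly this kind of witnessing data.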
3

On the infinitary proof theory of logics with fixed points

Doumane, Amina, 27 June 2017
The subject of this thesis is the proof theory of logics with fixed points, such as the μ-calculus and linear logic with fixed points. These logics are usually equipped with finitary deductive systems that rely on Park's rules for induction. Other proof systems for these logics exist, which rely on infinitary proofs, but they are much less developed. This thesis contributes to filling this gap by developing the infinitary proof theory of logics with fixed points, with two domains of application in mind: programming languages with (co)inductive data types and the verification of reactive systems. The thesis contains three parts. In the first part, we recall the two main approaches to the proof theory of logics with fixed points, the finitary and the infinitary one, and show how they relate. In the second part, we argue that infinitary proofs have a genuine proof-theoretical status by showing that multiplicative additive linear logic with fixed points admits focalization and cut elimination. In the third part, we apply our proof-theoretical investigations to obtain a constructive proof of completeness for the linear-time μ-calculus with respect to Kozen's axiomatization.
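For reference, the Park induction rule mentioned above is standardly stated as follows (a textbook formulation, not quoted from the thesis):

    \[ \frac{\varphi[\psi/X] \vdash \psi}{\mu X.\,\varphi \vdash \psi} \]

Read: if \(\psi\) is a prefixed point of the operator defined by \(\varphi\), then the least fixed point \(\mu X.\varphi\) entails \(\psi\). Infinitary systems dispense with this rule and instead allow proof trees with infinite branches, subject to a global validity condition.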
4

Deep Inference and Symmetry in Classical Proofs

Brünnler, Kai, 25 August 2003
In this thesis we see deductive systems for classical propositional and predicate logic which use deep inference, i.e. inference rules apply arbitrarily deep inside formulas, and a certain symmetry, which provides an involution on derivations. Like sequent systems, they have a cut rule which is admissible. Unlike sequent systems, they enjoy various new interesting properties. Not only the identity axiom, but also cut, weakening and even contraction are reducible to atomic form. This leads to inference rules that are local, meaning that the effort of applying them is bounded, and finitary, meaning that, given a conclusion, there is only a finite number of premises to choose from. The systems also enjoy new normal forms for derivations and, in the propositional case, a cut elimination procedure that is drastically simpler than the ones for sequent systems.
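To make ‘reducible to atomic form’ concrete, here is a sketch, in the standard deep-inference style though with presentation details of my own, of how the identity for \(a \wedge b\) is derived from atomic identities \(\mathsf{ai{\downarrow}}: \mathsf{t} \rightarrow a \vee \bar{a}\) using the switch rule \(\mathsf{s}: A \wedge (B \vee C) \rightarrow (A \wedge B) \vee C\):

    \[
    \begin{aligned}
    \mathsf{t} &\to (a \vee \bar{a}) \wedge (b \vee \bar{b})
      && \text{two instances of } \mathsf{ai{\downarrow}}\text{, using } \mathsf{t} = \mathsf{t} \wedge \mathsf{t} \\
    &\to ((a \vee \bar{a}) \wedge b) \vee \bar{b}
      && \mathsf{s} \\
    &\to ((a \wedge b) \vee \bar{a}) \vee \bar{b}
      && \mathsf{s}\text{, applied inside the context } ({\cdot}) \vee \bar{b}
    \end{aligned}
    \]

Modulo associativity and commutativity the conclusion is \((a \wedge b) \vee (\bar{a} \vee \bar{b})\), the general identity for \(a \wedge b\); the last step, rewriting inside a context, is exactly what deep inference permits and sequent systems do not.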
5

On proofs and types in second order logic

Pistone, Paolo, 27 March 2015
In this dissertation several issues concerning the proof theory of second order logic and its constructive counterpart, System F (Girard 1971), are addressed. The leitmotiv of the investigations presented here is the apparent "circularity" or "impredicativity" of second order proofs. This circularity is reflected in System F by the possibility of typing functions applied to themselves, in contrast with Russell's idea that typing should forbid such "vicious circles" (Poincaré 1906). A fundamental methodological distinction between two approaches in proof theory, irreducible to one another because of the incompleteness theorems, constitutes the background of this work. On the one hand, "why-proof theory" ("le pourquoi", Girard 1989) addresses coherence and the Hauptsatz and requires non-elementary ("infinitary") techniques; on the other hand, "how-proof theory" ("le comment", Girard 1989) addresses the combinatorial and computational content of proofs, given by the correspondence between proofs and programs, and is developed on the basis of elementary ("finitary") techniques. In the first part of the thesis, dedicated to "why-proof theory", the traditional philosophical arguments on "vicious circles" are confronted with the perspective arising from the proof of the Hauptsatz for second order logic (first obtained by Girard in 1971 with the technique of reducibility candidates). In the second part, dedicated to "how-proof theory", two combinatorial approaches to "vicious circles" are presented, together with some technical results: the first based on the theory of parametric polymorphism, the second on the geometrical analysis of typing coming from unification theory.
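As an illustration of the self-application that System F types (a textbook example, not drawn from the thesis), abbreviate \(\Delta := \forall X.(X \to X)\); the following term applies its argument to itself and is nevertheless well-typed:

    \[ \lambda x^{\Delta}.\,(x\,[\Delta])\,x \;:\; \Delta \to \Delta \]

Instantiating the quantifier of \(x : \Delta\) at \(\Delta\) itself yields \(x\,[\Delta] : \Delta \to \Delta\), which can then be applied to \(x\). Instantiating a universal type at a type containing that very quantifier is the impredicative "vicious circle" at issue.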
6

Dôkazy bezespornosti aritmetiky (Proofs of the Consistency of Arithmetic)

Horská, Anna, January 2017
The thesis consists of two parts. The first one deals with Gentzen's consistency proof of 1935, especially with the impact of his cut elimination strategy on the complexity of the proof. Our analysis of Gentzen's cut elimination strategy, which eliminates uppermost cuts regardless of their complexity, yields that, in his proof, Gentzen implicitly applies transfinite induction up to Φω(0), where Φω is the ω-th Veblen function. This is an upper bound, and Φω(0) also represents an upper bound on the heights of the cut-free infinitary derivations that Gentzen constructs for sequents derivable in Peano arithmetic (PA). We currently do not know whether this is a lower bound too. The first part also contains a formalization of Gentzen's proof of 1935. Based on the formalization, we see that the transfinite induction mentioned above is the only principle used in the proof that exceeds PA. The second part compares the performance of Gentzen's and Tait's cut elimination strategies in classical propositional logic. Tait's strategy reduces the cut-rank of the derivation. Since propositional logic does not use inference rules with eigenvariables, we managed to organize the cut elimination in such a way that both strategies yield identical cut-free derivations in classical propositional logic.
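For reference, the Veblen functions appearing in this bound are standardly defined as follows (a standard definition, not taken from the thesis):

    \[
    \Phi_0(\alpha) = \omega^{\alpha}, \qquad
    \Phi_{k+1}(\alpha) = \text{the } \alpha\text{-th fixed point of } \Phi_k,
    \]

with \(\Phi_\lambda\) at limit stages enumerating the common fixed points of all \(\Phi_k\), \(k < \lambda\). In this notation \(\varepsilon_0 = \Phi_1(0)\), so the bound \(\Phi_\omega(0)\) for the 1935 strategy lies well above the \(\varepsilon_0\) used in Gentzen's later consistency proofs.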
7

Deep Inference and Symmetry in Classical Proofs

Brünnler, Kai, 22 September 2003
In this thesis we see deductive systems for classical propositional and predicate logic which use deep inference, i.e. inference rules apply arbitrarily deep inside formulas, and a certain symmetry, which provides an involution on derivations. Like sequent systems, they have a cut rule which is admissible. Unlike sequent systems, they enjoy various new interesting properties. Not only the identity axiom, but also cut, weakening and even contraction are reducible to atomic form. This leads to inference rules that are local, meaning that the effort of applying them is bounded, and finitary, meaning that, given a conclusion, there is only a finite number of premises to choose from. The systems also enjoy new normal forms for derivations and, in the propositional case, a cut elimination procedure that is drastically simpler than the ones for sequent systems.
8

A Natural Interpretation of Classical Proofs

Brage, Jens, January 2006
In this thesis we use the syntactic-semantic method of constructive type theory to give meaning to classical logic, in particular Gentzen's LK.

We interpret a derivation of a classical sequent as a derivation of a contradiction from the assumptions that the antecedent formulas are true and that the succedent formulas are false, where the concepts of truth and falsity are taken to conform to the corresponding constructive concepts, using function types to encode falsity. This representation brings LK to a manageable form that allows us to split the succedent rules into parts. In this way, every succedent rule gives rise to a natural deduction style introduction rule. These introduction rules, taken together with the antecedent rules adapted to natural deduction, yield a natural deduction calculus whose subsequent interpretation in constructive type theory gives meaning to classical logic.

The Gentzen-Prawitz inversion principle holds for the introduction and elimination rules of the natural deduction calculus and allows for a corresponding notion of convertibility. We take the introduction rules to determine the meanings of the logical constants of classical logic and use the induced type-theoretic elimination rules to interpret the elimination rules of the natural deduction calculus. This produces an interpretation injective with respect to convertibility, contrary to an analogous translation into intuitionistic predicate logic.

From the interpretation in constructive type theory and the interpretation of cut by explicit substitution, we derive a full precision contraction relation for a natural deduction version of LK. We use a term notation to formalize the contraction relation and the corresponding cut-elimination procedure.

The interpretation can be read as a Brouwer-Heyting-Kolmogorov (BHK) semantics that justifies classical logic. The BHK semantics utilizes a notion of classical proof and a corresponding notion of classical truth akin to Kolmogorov's notion of pseudotruth. We also consider a second BHK semantics, more closely connected with Kolmogorov's double-negation translation.

The first interpretation reinterprets the consequence relation while keeping the constructive interpretation of truth, whereas the second interpretation reinterprets the notion of truth while keeping the constructive interpretation of the consequence relation. The first and second interpretations act on derivations in much the same way as Plotkin's call-by-value and call-by-name continuation-passing-style translations, respectively.

We conclude that classical logic can be given a constructive semantics by laying down introduction rules for the classical logical constants. This semantics constitutes a proof interpretation of classical logic.
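For orientation, Plotkin's translations mentioned here act on the pure \(\lambda\)-calculus essentially as follows (the standard definitions from the literature, not the thesis's own notation; \(\overline{M}\) denotes the translation of \(M\)):

    \[
    \begin{aligned}
    \text{call-by-value:}\quad & \overline{x} = \lambda k.\,k\,x, &
    \overline{\lambda x.M} &= \lambda k.\,k\,(\lambda x.\overline{M}), &
    \overline{M\,N} &= \lambda k.\,\overline{M}\,(\lambda m.\,\overline{N}\,(\lambda n.\,m\,n\,k)) \\
    \text{call-by-name:}\quad & \overline{x} = x, &
    \overline{\lambda x.M} &= \lambda k.\,k\,(\lambda x.\overline{M}), &
    \overline{M\,N} &= \lambda k.\,\overline{M}\,(\lambda m.\,m\,\overline{N}\,k)
    \end{aligned}
    \]

Both make evaluation order explicit by threading a continuation \(k\): call-by-value evaluates the argument before the call, while call-by-name passes it unevaluated, matching the two ways the interpretations above act on derivations.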
9

Towards a Theory of Proofs of Classical Logic

Straßburger, Lutz, 7 January 2011
The questions "What is a proof?" and "When are two proofs the same?" are fundamental to proof theory. But for classical propositional logic, the most widely used logic, we do not yet have a satisfactory answer. This is embarrassing not only for proof theory, but also for computer science, where classical logic plays a major role in automated reasoning and logic programming. Likewise, processor architecture is based on classical logic. Every field in which proof search is employed can benefit from a better understanding of the notion of proof in classical logic, and the famous NP-versus-coNP problem can be reduced to the question of whether every Boolean tautology has a short (that is, polynomial-size) proof. Proofs are usually studied as syntactic objects within deductive systems (for example tableaux, sequent calculus, resolution, ...). Here we take the view that these syntactic objects (also known as proof trees) should be seen as concrete representations of the abstract objects that proofs are, and that such an abstract object can be represented by a tree in resolution or in the sequent calculus. The main theme of this work is to improve our understanding of the abstract objects that proofs are, from three different angles, studied in the three parts of this dissertation: abstract algebra (Chapter 2), combinatorics (Chapters 3 and 4), and complexity (Chapter 5).
