21

The Dialectica models of type theory

Moss, Sean January 2018
This thesis studies some constructions for building new models of Martin-Löf type theory out of old. We refer to the main techniques as gluing and idempotent splitting. For each we give general conditions under which type constructors exist in the resulting model. These techniques are used to construct some examples of Dialectica models of type theory. The name is chosen by analogy with de Paiva's Dialectica categories, which semantically embody Gödel's Dialectica functional interpretation and its variants. This continues a programme initiated by von Glehn with the construction of the polynomial model of type theory. We complete the analogy between this model and Gödel's original Dialectica by using our techniques to construct a two-level version of this model, equipping the original objects with an extra layer of predicates. In order to do this we have to carefully build up the theory of finite sum types in a display map category. We construct two other notable models. The first is a model analogous to the Diller-Nahm variant, which requires a detailed study of biproducts in categories of algebras. To make clear the generalization from the categories studied by de Paiva, we illustrate the construction of the Diller-Nahm category in terms of gluing an indexed system of types together with a system of predicates. Following this we develop the general techniques needed for the type-theoretic case. The second notable model is analogous to the Dialectica category associated to the error monad as studied by Biering. This model has only weak dependent products. In order to get a model with full dependent products we use the idempotent splitting construction, which generalizes the Karoubi envelope of a category. Making sense of the Karoubi envelope in the type-theoretic case requires us to face up to issues of coherence in our models. We choose the route of making sure all of the constructions we use preserve strict coherence, rather than applying a general coherence theorem to produce a strict model afterwards. Our chosen method preserves more detailed information in the final model.
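For readers unfamiliar with the analogy drawn here, a standard presentation of de Paiva's Dialectica category (ours, not the thesis's) makes the connection explicit. An object is a relation $\alpha \subseteq U \times X$ (think: "witnesses" $u$ against "challenges" $x$), and a morphism $(U, X, \alpha) \to (V, Y, \beta)$ is a pair of maps

    $f : U \to V$ and $F : U \times Y \to X$ such that $\forall u\,\forall y.\ \alpha(u, F(u,y)) \Rightarrow \beta(f(u), y)$

which is exactly the shape of Gödel's Dialectica interpretation of implication, $(A \to B)^D = \exists f, F\,\forall u, y\,\bigl(A_D(u, F(u,y)) \to B_D(f(u), y)\bigr)$. Roughly, the models constructed in the thesis replace these sets and relations by types and predicates in a display map category.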
22

Uma introdução à lógica de segunda ordem / An Introduction to Second-Order Logic

Júnior, Enéas Alves Nogueira 26 April 2013
In this work we investigate some aspects of second-order logic, splitting the theme into three chapters. In the first chapter we discuss the basic concepts of this logic, such as its set of formulas, deductive systems, and semantics. We also contrast it with first-order logic, which is better known, so as to have a reference point from which we are differentiating. We prove the completeness theorem for second-order logic, due to L. Henkin (Henkin 1950). In the second chapter we try to understand what happens to the semantics of ZFC set theory (which is first-order) if we add some second-order axioms, creating a theory we call ZF2. We prove a theorem due to Zermelo (Zermelo 1930) which says that the models of this theory are essentially the same. We also investigate the question of the Continuum Hypothesis with respect to ZF2 and, by means of a forcing method for this theory, we show that CH remains without an answer. In the third chapter we treat three different themes. The first is the relationship between completeness, compactness, and Henkin semantics; Lindström's theorem, which we prove in this section, says essentially that we cannot have completeness and compactness for second-order logic unless we use this semantics. In the second section we investigate the Hanf number of second-order logic with the standard semantics, and in the third section we show that higher-order logics can be reduced to second-order logic and that the set of valid second-order formulas is not definable in the structure of the natural numbers.
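A standard example (ours, not the dissertation's) of what separates the two semantics mentioned above is the second-order induction axiom:

    $\forall X\,\bigl( X(0) \land \forall n\,(X(n) \to X(n+1)) \to \forall n\, X(n) \bigr)$

Under the standard semantics, where $X$ ranges over all subsets of the domain, this axiom pins down the natural numbers up to isomorphism (Dedekind's categoricity theorem), which is why compactness and completeness must fail there. Under Henkin semantics, $X$ ranges only over a designated family of subsets; nonstandard models reappear, and the completeness theorem proved in the first chapter applies.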
23

The semantic analysis of advanced programming languages

Eades, Harley D., III 01 July 2014
We live in a time where computing devices power essential systems of our society: our automobiles, our airplanes and even our medical services. In these safety-critical systems, bugs do not just cost money to fix; they have the potential to cause harm, even death. Therefore, software correctness is of paramount importance. Existing mainstream programming languages do not support software verification as part of their design, but rely on testing, and thus cannot completely rule out the possibility of bugs during software development. To fix this problem we must reshape the very foundation on which programming languages are based. Programming languages must support the ability to verify the correctness of the software developed in them, and this software verification must be possible using the same language the software is developed in. In the first half of this dissertation we introduce three new programming languages: Freedom of Speech, Separation of Proof from Program, and Dualized Type Theory. The Freedom of Speech language separates a logical fragment from a general recursive programming language, while still allowing the types of the logical fragment to depend on general recursive programs and maintaining logical consistency; it thus obtains the ability to verify properties of general recursive programs. Separation of Proof from Program builds on the Freedom of Speech language by lifting several restrictions and adding a number of extensions. Finally, Dualized Type Theory is a terminating functional programming language rich in constructive duality, and shows promise of being a logical foundation for induction and coinduction. These languages have the ability to verify properties of software, but how can we trust this verification? Putting our trust in these languages requires that they be rigorously and mathematically defined, so that the programming language itself can be studied as a mathematical object. Then we must show one very important property: logical consistency of the fragment of the programming language used to verify mathematical properties of the software. In the second half of this dissertation we introduce a well-known proof technique for showing logical consistency called hereditary substitution. Hereditary substitution shows promise of being less complex than existing proof techniques like the Tait-Girard reducibility method. However, it is not yet clear which programming languages can be proved terminating using hereditary substitution. Our contribution to this line of work is the application of the hereditary substitution technique to predicative polymorphic programming languages, and the first proof of termination using hereditary substitution for a classical type theory.
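For context on the proof technique named above, here is the standard shape of hereditary substitution for the simply-typed lambda-calculus (a textbook presentation, not the dissertation's development). The substitution $[N/x]^A M$ is defined by recursion on the type $A$ and the term $M$, renormalising on the fly in the application case:

    $[N/x]^A (\lambda y.\,M) = \lambda y.\,[N/x]^A M$
    $[N/x]^A (M_1\ M_2) = [M_2'/y]^{A_1}\,M'$  if $[N/x]^A M_1 = \lambda y.\,M'$ at type $A_1 \to A_2$
    $[N/x]^A (M_1\ M_2) = ([N/x]^A M_1)\ M_2'$  otherwise, where $M_2' = [N/x]^A M_2$

The first application clause can only fire when $M_1$ is headed by $x$, in which case $A_1 \to A_2$ is a subformula of $A$; the nested substitution therefore recurses at a strictly smaller type, and termination follows by a structural induction on the pair (type, term) rather than by a reducibility argument.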
24

Fully Generic Programming Over Closed Universes of Inductive-Recursive Types

Diehl, Larry 06 June 2017
Dependently typed programming languages allow the type system to express arbitrary propositions of intuitionistic logic, thanks to the Curry-Howard isomorphism. Taking full advantage of this type system requires defining more types than usual, in order to encode logical correctness criteria into the definitions of datatypes. While an abundance of specialized types helps ensure correctness, it comes at the cost of needing to redefine common functions for each specialized type. This dissertation attacks the problem of code reuse in dependently typed languages. Our solution is to write generic functions, which can be applied to any datatype. Such a generic function can be applied to datatypes that were defined at the time the generic function was written, but it can also be applied to any datatype defined in the future. Our solution builds upon previous work on generic programming within dependently typed programming. Type theory supports generic programming using a construction known as a universe. A universe can be considered the model of a programming language, such that writing functions over it models writing generic programs in that language. Historically, there has been a trade-off between the expressive power of the modeled programming language and the kinds of generic functions that can be written in it. Our dissertation shows that no such trade-off is necessary, and that we can write future-proof generic functions in a model of a dependently typed programming language with a rich collection of types.
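As a rough illustration of the universe technique (a minimal sketch in Haskell, assuming nothing from the dissertation itself, whose development is carried out in Agda and covers the far richer inductive-recursive types): a closed universe is a datatype of codes, a decoding from codes to types, and generic functions written once over the codes.

    {-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}

    import Data.Kind (Type)

    -- Codes: values that name the types of the universe.
    data Code = UNat | UBool | UPair Code Code

    -- Decoding: a closed type family mapping each code to the type it names
    -- (naturals are modeled by Int to keep the sketch short).
    type family El (c :: Code) :: Type where
      El 'UNat        = Int
      El 'UBool       = Bool
      El ('UPair a b) = (El a, El b)

    -- Runtime witnesses of codes, so generic functions can inspect them.
    data SCode (c :: Code) where
      SNat  :: SCode 'UNat
      SBool :: SCode 'UBool
      SPair :: SCode a -> SCode b -> SCode ('UPair a b)

    -- A generic function: one definition of equality for every type in the universe.
    geq :: SCode c -> El c -> El c -> Bool
    geq SNat        m n               = m == n
    geq SBool       p q               = p == q
    geq (SPair a b) (x1, y1) (x2, y2) = geq a x1 x2 && geq b y1 y2

Because this toy universe is closed and tiny, it cannot absorb new datatypes; the dissertation's universe is also closed but models a full dependently typed language, so datatypes defined later remain encodable within it, which is how its generic functions stay future-proof.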
25

Variations on a theme of Curry and Howard: the Curry-Howard isomorphism and the proofs-as-programs paradigm adapted to imperative and structured program synthesis

Poernomo, Iman Hafiz, 1976- January 2003
Abstract not available
26

Outils Génériques de Modélisation et de Démonstration pour la Formalisation des Mathématiques en Théorie des Types. Application à la Théorie des Catégories. / Generic Modelling and Proof Tools for the Formalization of Mathematics in Type Theory: Application to Category Theory

Saibi, Amokrane 19 March 1999
27

Syntactic foundations for machine learning

Bhat, Sooraj 08 April 2013
Machine learning has risen in importance across science, engineering, and business in recent years. Domain experts have begun to understand how their data analysis problems can be solved in a principled and efficient manner using methods from machine learning, with its simultaneous focus on statistical and computational concerns. Moreover, the data in many of these application domains has exploded in availability and scale, further underscoring the need for algorithms which find patterns and trends quickly and correctly. However, most people actually analyzing data today operate far from the expert level. Available statistical libraries and even textbooks contain only a finite sample of the possibilities afforded by the underlying mathematical principles. Ideally, practitioners should be able to do what machine learning experts can do--employ the fundamental principles to experiment with the practically infinite number of possible customized statistical models as well as alternative algorithms for solving them, including advanced techniques for handling massive datasets. This would lead to more accurate models, the ability in some cases to analyze data that was previously intractable, and, if the experimentation can be greatly accelerated, huge gains in human productivity. Fixing this state of affairs involves mechanizing and automating these statistical and algorithmic principles. This task has received little attention because we lack a suitable syntactic representation that is capable of specifying machine learning problems and solutions, so there is no way to encode the principles in question, which are themselves a mapping between problem and solution. This work focuses on providing the foundational layer for enabling this vision, with the thesis that such a representation is possible. We demonstrate the thesis by defining a syntactic representation of machine learning that is expressive, promotes correctness, and enables the mechanization of a wide variety of useful solution principles.
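As a toy illustration of the thesis statement (our example, far simpler than the representation actually developed in the dissertation): once statistical models are syntax, i.e. data, a "mapping between problem and solution" can itself be written as a program, such as this mechanical derivation of density functions in Haskell.

    -- A toy syntax for univariate distributions.
    data Dist
      = Normal Double Double   -- mean, standard deviation
      | Uniform Double Double  -- lower bound, upper bound

    -- Densities are derived from the syntax, not hand-coded per model.
    density :: Dist -> Double -> Double
    density (Normal mu sigma) x =
      exp (negate ((x - mu) ** 2) / (2 * sigma ** 2))
        / (sigma * sqrt (2 * pi))
    density (Uniform a b) x
      | a <= x && x <= b = 1 / (b - a)
      | otherwise        = 0

A library exposes a fixed menu of such derivations; a syntactic representation lets tools compute them, and transformations between them, for models the library authors never anticipated.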
28

Static guarantees for coordinated components: a statically typed composition model for stream-processing networks

Penczek, Frank January 2012
Does your program do what it is supposed to be doing? Without running the program, providing an answer to this question is much harder if the language does not support static type checking. Of course, even if compile-time checks are in place, only certain errors will be detected: compilers can only second-guess the programmer’s intention. But type-based techniques go a long way in assisting programmers to detect errors in their computations earlier on. The question of whether a program behaves correctly is even harder to answer if the program consists of several parts that execute concurrently and need to communicate with each other. Compilers of standard programming languages are typically unable to infer information about how the parts of a concurrent program interact with each other, especially where explicit threading or message-passing techniques are used. Hence, correctness guarantees are often conspicuously absent. Concurrency management in an application is a complex problem. However, it is largely orthogonal to the actual computational functionality that a program realises. Because of this orthogonality, the problem can be considered in isolation. The largest possible separation between concurrency and functionality is achieved if a dedicated language is used for concurrency management, i.e. an additional program manages the concurrent execution and interaction of the computational tasks of the original program. Such an approach not only helps programmers to focus on the core functionality and on the exploitation of concurrency independently, it also allows for a specialised analysis mechanism geared towards concurrency-related properties. This dissertation shows how an approach that completely decouples coordination from computation provides a very supportive substrate for inferring static guarantees of the correctness of concurrent programs. Programs are described as streaming networks connecting independent components that implement the computations of the program, where the network describes the dependencies and interactions between components. A coordination program only requires an abstract notion of computation inside the components and may therefore be used as a generic and reusable design pattern for coordination. A type-based inference and checking mechanism analyses such streaming networks and provides comprehensive guarantees of the consistency and behaviour of coordination programs. Concrete implementations of components are deliberately left out of the scope of coordination programs: components may be implemented in an external language, for example C, to provide the desired computational functionality. Based on this separation, a concise semantic framework allows for step-wise interpretation of coordination programs without requiring concrete implementations of their components. The framework also provides clear guidance for the implementation of the language. One such implementation is presented, and hands-on examples demonstrate how the language is used in practice.
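A minimal sketch of the central idea, assuming nothing about the thesis's concrete coordination language: if components are typed stream transformers and networks can only be built through typed combinators, then an ill-formed wiring is a type error, rejected before the program ever runs.

    type Stream a = [a]   -- an idealised (lazy, possibly infinite) stream

    newtype Component a b = Component { run :: Stream a -> Stream b }

    -- Serial composition: the output type of the first component must
    -- equal the input type of the second, or the network is rejected.
    serial :: Component a b -> Component b c -> Component a c
    serial (Component f) (Component g) = Component (g . f)

    -- Parallel composition: each input record is routed to one branch.
    parallel :: Component a c -> Component b c -> Component (Either a b) c
    parallel (Component f) (Component g) = Component $ \xs ->
      interleave (f [a | Left a <- xs]) (g [b | Right b <- xs])
      where
        -- a naive fair merge of the two output streams
        interleave (y:ys) zs = y : interleave zs ys
        interleave []     zs = zs

The thesis's analysis is considerably richer, inferring consistency and behavioural guarantees over whole networks, but this is the flavour of guarantee meant by static checking of coordinated components.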
29

Verifying Higher-Order Imperative Programs with Higher-Order Separation Logic

Krishnaswami, Neelakantan R. 01 June 2012
In this thesis I show that it is possible to give modular correctness proofs of interesting higher-order imperative programs using higher-order separation logic. To do this, I develop a model higher-order imperative programming language and a program logic for it. I demonstrate the power of the program logic by verifying a series of examples, including both realistic patterns of higher-order imperative programming, such as the subject-observer pattern, and examples demonstrating the use of higher-order logic to reason modularly about highly aliased data structures, such as the union-find disjoint set algorithm.
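For readers outside the area, the two ingredients doing the work here are standard (this summary is ours, not the thesis's development). Separating conjunction $P \ast Q$ says that $P$ and $Q$ hold of disjoint parts of the heap, and the frame rule turns local specifications into reusable ones:

    $\dfrac{\{P\}\ c\ \{Q\}}{\{P \ast R\}\ c\ \{Q \ast R\}}$   (where $c$ does not modify variables free in $R$)

In a higher-order separation logic, specifications may additionally quantify over predicates such as $R$, so a module can be verified against client invariants it never sees; this is what makes specifications like subject-observer possible, where the subject must respect each observer's unknown invariant.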
30

Sémantique et implantation d'une extension de ML pour la preuve de programmes / Semantics and implementation of an extension of ML for proving programs

Lepigre, Rodolphe 18 July 2017
In recent years, proof assistants have reached an impressive level of maturity. They have led to the certification of complex programs such as compilers and operating systems. Yet using a proof assistant requires highly specialised skills, and it remains very different from standard programming. To bridge this gap, we aim at designing an ML-style programming language with support for proofs of programs, combining in a single tool the flexibility of ML and the fine specification features of a proof assistant. In other words, the system should be suitable both for programming (in the strongly-typed, functional sense) and for gradually increasing the level of guarantees met by programs, on a by-need basis. We thus define and study a call-by-value language whose type system extends higher-order logic with an equality type over untyped programs, a dependent function type, classical logic, and subtyping. The combination of call-by-value evaluation, dependent functions, and classical logic is known to raise consistency issues. To ensure the correctness of the system (logical consistency and runtime safety), we design a theoretical framework based on Krivine's classical realisability. The construction of the model relies on an essential property linking the different levels of interpretation of types in a novel way. We finally demonstrate the expressive power of our system using our prototype implementation, by proving properties of standard programs such as the map function on lists or insertion sort.
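To give a flavour of the combination described above (a hypothetical rendering in mathematical notation; the implemented syntax may differ), an equality type over untyped programs together with dependent function types lets a type carry its own correctness statement:

    $\mathsf{sort} \;:\; \Pi\, l : \mathsf{list}\ \mathsf{nat}.\ \{\, r : \mathsf{list}\ \mathsf{nat} \mid \mathsf{sorted}\ r \equiv \mathsf{true} \,\}$

Type-checking the program against this type amounts to proving the property, while erasing the specification part recovers the ordinary ML type $\mathsf{list}\ \mathsf{nat} \to \mathsf{list}\ \mathsf{nat}$; this is the by-need increase in guarantees the abstract mentions.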
