
Multimodal Networks in Biology

Sioson, Allan A. 14 December 2005 (has links)
A multimodal network (MMN) is a novel mathematical construct that captures the structure of biological networks, computational network models, and relationships from biological databases. An MMN subsumes the structure of graphs and hypergraphs, either undirected or directed. Formally, an MMN is a triple (V,E,M) where V is a set of vertices, E is a set of modal hyperedges, and M is a set of modes. A modal hyperedge e=(T,H,A,m) in E is an ordered 4-tuple, in which T, H, and A are subsets of V and m is an element of M. The sets T, H, and A are the tail, head, and associate of e, while m is its mode. In the context of biology, each vertex is a biological entity, each hyperedge is a relationship, and each mode is a type of relationship (e.g., 'forms complex' and 'is a'). Within the space of multimodal networks, structural operations such as union, intersection, hyperedge contraction, subnetwork selection, and graph or hypergraph projections can be performed. A denotational-semantics approach is used to specify the semantics of each hyperedge in an MMN in terms of the interaction among its vertices. This is done by mapping each hyperedge e to a hyperedge code algo:V(e), an algorithm that details how the vertices in V(e) are used and updated. A semantic MMN-based model is a function of a given schedule of evaluation of hyperedge codes and of the current state of the model, a set of vertex-value pairs. An MMN-based computational system is implemented as a proof of concept to determine its benefits empirically. This system consists of an MMN database populated with data from various biological databases, MMN operators implemented as database functions, graph operations implemented in C++ using LEDA, and mmnsh, a shell scripting language that provides a consistent interface to both data and operators. It is demonstrated that computational network models may enrich the MMN database and that MMN data may be used as input to other computational tools and environments.
A simulator is developed to compute, from an initial state and a schedule of hyperedge codes, the resulting state of a semantic MMN model. / Ph. D.
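The triple (V,E,M) above is concrete enough to sketch in code. The following is a minimal illustrative model only (class and method names are hypothetical, not the thesis's LEDA/C++ or mmnsh implementation): an MMN as vertex, hyperedge, and mode sets, with subnetwork selection by mode and a directed-graph projection.

```python
# Minimal sketch of a multimodal network (V, E, M); hypothetical names,
# not the thesis's actual implementation.
class MMN:
    def __init__(self, vertices, hyperedges, modes):
        self.V = set(vertices)
        self.M = set(modes)
        # each modal hyperedge is an ordered 4-tuple (tail, head, associate, mode)
        self.E = {(frozenset(t), frozenset(h), frozenset(a), m)
                  for (t, h, a, m) in hyperedges}

    def select(self, mode):
        """Subnetwork selection: keep only hyperedges of one mode."""
        return {e for e in self.E if e[3] == mode}

    def project(self, mode):
        """Directed-graph projection: tail-to-head vertex pairs for one mode."""
        return {(u, v) for (t, h, a, m) in self.select(mode)
                for u in t for v in h}
```

For instance, a 'forms complex' hyperedge with tail {cdc2, cks1} and head {cdc2-cks1} projects to the two directed edges from the subunits to the complex.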

Reflexive spaces of smooth functions: a logical account of linear partial differential equations

Kerjean, Marie 19 October 2018 (has links)
Since the Curry-Howard correspondence, proof theory has grown along two sources of inspiration: programming languages, for which it acts as a theory of data types, and the semantic study of proofs. The latter consists in giving mathematical models for the behaviour of proofs and programs. In particular, denotational semantics interprets both as functions between types, and in return allows for a finer understanding of proofs and programs. Linear Logic (LL), introduced by Girard, gives a logical interpretation of notions from linear algebra, while Differential Linear Logic (DiLL), introduced by Ehrhard and Regnier, allows for a logical understanding of differentiation. This thesis strengthens the semantic correspondence between proof theory and functional analysis, with an emphasis on the involutive character of negation in DiLL. The first part consists of a review of the notions of linearity, polarisation and differentiation in proof theory, together with a brief exposition of the theory of locally convex topological vector spaces. The second part gives two dual models of Differential Linear Logic, interpreting the negation of a formula by the weak dual and the Mackey dual respectively. While the weak topology allows only for a discrete interpretation of proofs, as formal power series, the Mackey topology lets us give a polarised and smooth model of DiLL, refining results previously obtained by Blute, Dabrowski, Ehrhard and Tasson. Finally, the third part interprets proofs of DiLL as distributions with compact support. We give a polarised model of DiLL in which negative formulas are interpreted by nuclear Fréchet spaces. We also show that solving linear partial differential equations with constant coefficients obeys a syntax that generalises that of DiLL, which we detail.
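As a hedged illustration of the "discrete interpretation through formal power series" mentioned above (this is the standard quantitative reading in Ehrhard-style models of LL, not a claim about this thesis's exact constructions): a proof of !A ⊸ B denotes a power series, and codereliction implements differentiation at 0.

```latex
% Quantitative reading of a proof of !A \multimap B (illustrative):
f(x) \;=\; \sum_{n \ge 0} f_n(x,\dots,x),
\qquad f_n \colon A^{\otimes n} \to B \ \text{ $n$-linear symmetric.}
% Codereliction extracts the best linear approximation at the origin:
\bar{d}(f) \;=\; f_1 \;=\; D_0 f.
```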

Algebraic theory of type-and-effect systems

Kammar, Ohad January 2014 (has links)
We present a general semantic account of Gifford-style type-and-effect systems. These type systems provide lightweight static analyses annotating program phrases with the sets of possible computational effects they may cause, such as memory access and modification, exception raising, and non-deterministic choice. The analyses are used, for example, to justify the program transformations typically used in optimising compilers, such as code reordering and inlining. Despite their existence for over two decades, there is no prior comprehensive theory of type-and-effect systems accounting for their syntax and semantics, and justifying their use in effect-dependent program transformation. We achieve this generality by recourse to the theory of algebraic effects, a development of Moggi's monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with the effect operations. Our first main contribution is the uniform construction of semantic models for type-and-effect analysis by a process we call conservative restriction. Our construction requires an algebraic model of the unannotated programming language and a relevant notion of predicate. It then generates a model for Gifford-style type-and-effect analysis. This uniform construction subsumes existing ad-hoc models for type-and-effect systems, and is applicable in all cases in which the semantics can be given via enriched Lawvere theories. Our second main contribution is a demonstration that our theory accounts for the various aspects of Gifford-style effect systems. We begin with a version of Levy's Call-by-push-value that includes algebraic effects. We add effect annotations, and design a general type-and-effect system for such call-by-push-value variants. The annotated language can be thought of as an intermediate representation used for program optimisation.
We relate the unannotated semantics to the conservative restriction semantics, and establish the soundness of program transformations based on this effect analysis. We develop and classify a range of validated transformations, generalising many existing ones and adding some new ones. We also give modularly-checkable sufficient conditions for the validity of these optimisations. In the final part of this thesis, we demonstrate our theory by analysing a simple example language involving global state with multiple regions, exceptions, and nondeterminism. We give decision procedures for the applicability of the various effect-dependent transformations, and establish their soundness and completeness.
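The effect annotations and effect-dependent transformations described above can be pictured with a toy analysis (illustrative only; the expression grammar, region model, and disjointness side condition are my simplifications, not Kammar's formalism): each phrase is annotated with the set of read/write effects it may cause, and two phrases may be reordered when neither writes a region the other touches.

```python
# Toy Gifford-style effect analysis over a tiny expression language.
# Expressions: ("lit", n) | ("get", region) | ("put", region, e) | ("seq", e1, e2)

def effects(e):
    """Annotate an expression with the set of effects it may cause."""
    tag = e[0]
    if tag == "lit":
        return set()
    if tag == "get":
        return {("read", e[1])}
    if tag == "put":
        return {("write", e[1])} | effects(e[2])
    if tag == "seq":
        return effects(e[1]) | effects(e[2])
    raise ValueError(tag)

def can_commute(e1, e2):
    """Effect-dependent code reordering: sound when neither side writes a
    region the other reads or writes (a classic justification for code motion)."""
    f1, f2 = effects(e1), effects(e2)
    writes1 = {r for (k, r) in f1 if k == "write"}
    writes2 = {r for (k, r) in f2 if k == "write"}
    touched1 = {r for (_, r) in f1}
    touched2 = {r for (_, r) in f2}
    return writes1.isdisjoint(touched2) and writes2.isdisjoint(touched1)
```

Two reads of the same region commute; a write and a read of the same region do not, so the analysis licenses exactly the reorderings an optimiser may apply.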

Categories and String Diagrams for Game Semantics

Eberhart, Clovis 22 June 2018 (has links)
Game semantics is a class of models of programming languages in which types are interpreted as games and programs as strategies.
Such game models have successfully covered diverse features, such as functional and imperative programming, or control operators. They have recently been extended to non-deterministic and concurrent languages, which prompted an in-depth recasting of the standard approach: plays are now organised into a category, on which strategies are presheaves. The fundamental notion of innocence has also been recast as a sheaf condition. This thesis is a study of various constructions appearing in this new approach to game semantics. We first consider a pattern common to several game models of concurrent languages, in which games and plays are first organised into a double category, from which strategies are then derived. We provide an abstract construction of such a double category from more basic data, and prove that, under suitable hypotheses, the result allows the construction of strategies. Our second contribution is to relate two established techniques for defining plays: the standard one, based on justified sequences, and a more recent one, based on string diagrams. We (fully) embed the former into the latter and prove that they induce essentially the same model. Finally, we propose an axiomatisation of the notions of game and play, from which we formally derive a category of games and strategies. We also refine the axioms to deal with innocence, and prove that, under suitable hypotheses, innocent strategies are stable under composition.
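The set-of-plays view that the presheaf formulation generalises can be sketched concretely (an illustrative toy only, with hypothetical move names; the thesis's categorical and string-diagrammatic machinery is far richer): a deterministic strategy is a prefix-closed set of even-length plays with at most one Player response to each Opponent move.

```python
# Toy sequential game semantics: a play is a tuple of moves, alternating
# Opponent/Player; a strategy is a set of even-length plays.

def is_strategy(plays):
    """Check the classical conditions: contains the empty play, even-length,
    prefix-closed (over even-length prefixes), and deterministic."""
    plays = set(plays)
    if () not in plays:
        return False
    responses = {}          # Opponent position -> unique Player response
    for s in plays:
        if len(s) % 2 != 0:
            return False
        if len(s) >= 2 and s[:-2] not in plays:
            return False    # not prefix-closed
        if len(s) >= 2:
            pos = s[:-1]    # the position after the last Opponent move
            if pos in responses and responses[pos] != s[-1]:
                return False  # two distinct Player responses
            responses[pos] = s[-1]
    return True
```

The non-deterministic and concurrent extensions discussed above replace this subset-of-plays picture with a presheaf assigning a set of "ways to play" to each play.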

Using Haskell to Implement Syntactic Control of Interference

Warren, Jared 11 June 2008 (has links)
Interference makes reasoning about imperative programs difficult, but it can be controlled syntactically by a language's type system, as in Syntactic Control of Interference (SCI). Haskell is a purely functional, statically typed language with a rich type system including algebraic datatypes and type classes. It is popular as a defining language for definitional interpreters of domain-specific languages, making it an ideal candidate for implementing definitional interpreters for SCI and for Syntactic Control of Interference Revisited (SCIR), a variant that improves on SCI. Inference rules and denotational semantics functions are presented for PCF, IA, SCI, and SCIR. An extension of Haskell 98 is used to define Haskell functions for those languages' semantics and to define type constructions that statically check their syntax. The results in applied programming-language theory demonstrate the suitability of Haskell, and the relevant techniques, for definitional interpretation of languages with rich type systems. / Thesis (Master, Computing) -- Queen's University, 2008-06-10 21:23:33.291
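The core restriction SCI imposes can be sketched independently of the Haskell encoding (an illustrative simplification: full SCI also has passive types whose values may be shared, which this toy checker ignores): in an application, function and argument must not share free identifiers, so they cannot interfere through a common variable.

```python
# Toy checker for SCI's central restriction on lambda terms.
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)

def free_vars(t):
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "lam":
        return free_vars(t[2]) - {t[1]}
    if tag == "app":
        return free_vars(t[1]) | free_vars(t[2])
    raise ValueError(tag)

def sci_ok(t):
    """Accept a term only if every application splits the free identifiers
    disjointly between function and argument."""
    tag = t[0]
    if tag == "var":
        return True
    if tag == "lam":
        return sci_ok(t[2])
    if tag == "app":
        return (free_vars(t[1]).isdisjoint(free_vars(t[2]))
                and sci_ok(t[1]) and sci_ok(t[2]))
    raise ValueError(tag)
```

So `f x` is fine, but `x x` — where both sides could act on the same state — is rejected; the thesis encodes this kind of side condition in Haskell's type system rather than as a runtime check.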

Topological domain theory

Battenfeld, Ingo January 2008 (has links)
This thesis presents Topological Domain Theory as a powerful and flexible framework for denotational semantics. Topological Domain Theory models a wide range of type constructions and can interpret many computational features. Furthermore, it has close connections to established frameworks for denotational semantics, as well as to well-studied mathematical theories, such as topology and computable analysis. We begin by describing the categories of Topological Domain Theory, and their categorical structure. In particular, we recover the basic constructions of domain theory, such as products, function spaces, fixed points and recursive types, in the context of Topological Domain Theory. As a central contribution, we give a detailed account of how computational effects can be modelled in Topological Domain Theory. Following recent work of Plotkin and Power, who proposed to construct effect monads via free algebra functors, this is done by showing that free algebras for a large class of parametrised equational theories exist in Topological Domain Theory. These parametrised equational theories are expressive enough to generate most of the standard examples of effect monads. Moreover, the free algebras in Topological Domain Theory are obtained by an explicit inductive construction, using only basic topological and set-theoretical principles. We also give a comparison of Topological and Classical Domain Theory. The category of omega-continuous dcpos embeds into Topological Domain Theory, and we prove that this embedding preserves the basic domain-theoretic constructions in most cases. We show that the classical powerdomain constructions on omega-continuous dcpos, including the probabilistic powerdomain, can be recovered in Topological Domain Theory. Finally, we give a synthetic account of Topological Domain Theory. We show that Topological Domain Theory is a specific model of Synthetic Domain Theory in the realizability topos over Scott's graph model. 
We give internal characterisations of the categories of Topological Domain Theory in this realizability topos, and prove the corresponding categories to be internally complete and weakly small. This enables us to show that Topological Domain Theory can model the polymorphic lambda-calculus, and to obtain a richer collection of free algebras than those constructed earlier. In summary, this thesis shows that Topological Domain Theory supports a wide range of semantic constructions, including the standard domain-theoretic constructions, computational effects and polymorphism, all within a single setting.
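One of the "basic constructions of domain theory" recovered above, fixed points, can be illustrated outside any particular framework (a generic Kleene-iteration sketch, not a construction from this thesis; `None` plays the role of bottom): the least fixed point of a continuous functional is the limit of its iterates from bottom.

```python
# Kleene iteration: the least fixed point of the factorial functional,
# approximated by F^k(bottom), which is defined exactly on inputs < k.

def F(g):
    """One unfolding of the factorial functional."""
    def h(n):
        if n == 0:
            return 1
        prev = g(n - 1)
        return None if prev is None else n * prev
    return h

def iterate(k):
    g = lambda n: None          # bottom: defined nowhere
    for _ in range(k):
        g = F(g)
    return g
```

Each iteration extends the domain of definition by one input, and the fixed point — full factorial — is the union of all the finite approximations.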

Verifying Higher-Order Imperative Programs with Higher-Order Separation Logic

Krishnaswami, Neelakantan R. 01 June 2012 (has links)
In this thesis I show that it is possible to give modular correctness proofs of interesting higher-order imperative programs using higher-order separation logic. To do this, I develop a model higher-order imperative programming language and a program logic for it. I demonstrate the power of this program logic by verifying a series of examples, including both realistic patterns of higher-order imperative programming, such as the subject-observer pattern, and examples demonstrating the use of higher-order logic to reason modularly about highly aliased data structures, such as the union-find disjoint-set algorithm.
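The union-find example named above is worth seeing in code, since its aliasing is exactly what makes it hard to verify: path compression mutates parent pointers shared across the structure. This is a sketch of the algorithm itself, not of its separation-logic proof.

```python
# Union-find with path compression: the highly aliased structure the thesis
# verifies. find() mutates shared parent pointers, which is what makes
# modular reasoning non-trivial.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)
```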

Linearity: an analytic tool in the study of complexity and semantics of programming languages

Gaboardi, Marco 12 December 2007 (has links)
In the first part, inspired by Soft Linear Logic, we propose a type assignment system for the lambda-calculus in sequent calculus style, named Soft Type Assignment (STA). STA enjoys the subject reduction property, and is correct and complete for polynomial-time computations. We then propose a natural deduction system, named STA_N.
While simple, STA_N has the disadvantage of allowing the explicit renaming of variables in the subject. To overcome this problem, we propose another natural deduction system, named STA_M, where contexts are multisets, so that variable-renaming rules can be avoided. Type inference for STA_M seems undecidable in general. We propose an algorithm Pi returning, for every lambda-term, the set of constraints that must be satisfied in order to type the term. Pi is correct and complete. We then extend the lambda-calculus with basic boolean constants and propose the system STA_B. The peculiarity of STA_B is that the conditional rule treats contexts additively. Every STA_B program can be executed, through an abstract machine, in polynomial space. Moreover, STA_B is also complete for PSPACE. In the second part we propose a restriction of PCF, named SlPCF. The language is naturally equipped with an operational semantics mixing call-by-name and call-by-value parameter passing, and it can be interpreted in linear coherence spaces in the standard way. SlPCF is complete for recursive functions, but it is not complete, and thus not fully abstract, with respect to linear coherence spaces.

Classical realizability: new tools and applications

Geoffroy, Guillaume 29 March 2019 (has links)
Jean-Louis Krivine's classical realizability defines, from any given model of computation and any given model of set theory, a new model of set theory called the realizability model, in a way similar to forcing.
Each realizability model is equipped with a characteristic Boolean algebra $\gimel 2$ (gimel 2), whose structure encodes important information about the properties of the realizability model. For instance, forcing models are precisely the realizability models in which $\gimel 2$ is the two-element Boolean algebra. This document defines new tools for studying realizability models and exploits them to derive new results. One such result is that, as far as first-order logic is concerned, the theory of Boolean algebras with at least two elements is complete for $\gimel 2$, meaning that for each Boolean algebra B (with at least two elements), there exists a realizability model in which $\gimel 2$ is elementarily equivalent to B. Next, two results show that $\gimel 2$ can be used as a tool to study denotational models of programming languages (each takes a particular denotational model and classifies its degrees of parallelism using $\gimel 2$). Moving to set theory, another result generalizes Jean-Louis Krivine's technique of realizing the axiom of dependent choices using the instruction quote to higher forms of choice. Finally, a last result, which is joint work with Laura Fontanella, complements the previous one by adapting the countable antichain condition from forcing to classical realizability, which seems to open a new, promising approach to the problem of realizing the full axiom of choice.

Primitive Direcursion and Difunctorial Semantics of Typed Object Calculus

Glimming, Johan January 2007 (has links)
In the first part of this thesis, we contribute to the semantics of typed object calculus by giving (a) a category-theoretic denotational semantics using partial maps making use of an algebraic compactness assumption, (b) a notion of "wrappers" by which algebraic datatypes can be represented as object types, and (c) proofs of computational soundness and adequacy of typed object calculus via Plotkin's FPC (with lazy operational semantics), thus making models of FPC suitable also for first-order typed object calculus (with recursive objects supporting method update, but not subtyping). It follows that a valid equation in the model induces operationally congruent terms in the language, so that program algebras can be studied. For (c), we also develop an extended first-order typed object calculus, and prove subject reduction. The second part of the thesis concerns recursion principles on datatypes, including the untyped lambda calculus as a special case. Freyd showed that in certain domain-theoretic categories, locally continuous functors have minimal invariants, which possess a structure that he termed dialgebra. This gives rise to a category of dialgebras and homomorphisms, in which the minimal invariants are initial, inducing a powerful recursion scheme (direcursion) on a complete partial order. We identify a problem that appears when we translate (co)iterative functions to direcursion, and as a solution to this problem we develop a recursion scheme (primitive direcursion). This immediately gives a number of examples of direcursive functions, improving on the situation in the literature, where only a few examples have appeared. By means of a case study, this line of work is connected to object calculus models. / Paper II is also published as Technical Report, 2007, Oct, No. 2.
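Direcursion lives at mixed-variance minimal invariants, but the familiar covariant special case already shows the gap between iteration and primitive recursion that "primitive" schemes close (an illustrative analogy on Peano naturals, not Glimming's formalism): iteration sees only the recursive result, while primitive recursion also sees the substructure it came from.

```python
# Iteration (fold) vs. primitive recursion (para) on Peano naturals,
# encoded as nested tuples: zero = ("Z",), succ n = ("S", n).

def fold(z, s, n):
    """Iteration: the step function sees only the recursive result."""
    return z if n[0] == "Z" else s(fold(z, s, n[1]))

def para(z, s, n):
    """Primitive recursion: the step function also sees the predecessor,
    which makes functions like pred immediate."""
    return z if n[0] == "Z" else s(n[1], para(z, s, n[1]))

def num(k):
    n = ("Z",)
    for _ in range(k):
        n = ("S", n)
    return n
```

With `fold`, computing the predecessor requires an encoding trick (pairing); with `para` it is just `lambda pred, rec: pred` — the same convenience that primitive direcursion is designed to provide at minimal invariants.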
