1

Flexible precision timing instrumentation and quantum key distribution

Nock, Richard William Raymond January 2013 (has links)
Time to Digital Conversion (TDC) is a fundamental building block of many applications, such as quantum information experiments, quantum key distribution, laser detection and ranging (LiDAR), bio-medical imaging, digital phase-locked loops, and more. As of today, most timing instruments use analogue circuitry or application-specific integrated circuits to time input events with picosecond resolution and bin size. As such solutions require programmable logic to perform calibration and communication tasks, there would be a considerable cost saving and simplification in implementing picosecond timing on the same programmable logic Integrated Circuit (IC). In addition, a fully digital implementation would allow the technology to enter broader markets. Numerous methods of implementing TDCs in programmable logic already exist. However, they are limited in bin size, linearity or accuracy, or exhibit significantly long dead times, due to the fixed structure of the Field Programmable Gate Array (FPGA) itself. This work demonstrates a novel timing technique implemented within a low-cost, off-the-shelf FPGA that outperforms previously documented techniques in terms of bin size and linearity. A bin size of 1 ps, a single-shot precision of 17.11 ps, and a differential and integral non-linearity of < 1 have been demonstrated on a Spartan-6 LX75. The technique's performance is comparable to commercially available instruments costing over an order of magnitude more. A flexible firmware- and software-defined timing platform has been developed, and four instruments have been demonstrated on it. A multi-channel (~30 ps bin size) TDC, a Time Correlated Single Photon Counter, a coincidence counter and the aforementioned small-bin-size TDC have all been developed on a common low-cost platform, exploiting the re-programmability of FPGAs. This allows the functionality to be customised and changed at will, even remotely, as it is defined by the FPGA's bitfile and associated Personal Computer (PC) software. The use of such instruments is demonstrated, including hardware for two quantum key distribution systems (the Bristol free-space system and the reference-frame-independent demonstration system), a LiDAR system, and coincidence counting in quantum optics experiments.
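A note for intuition, not taken from the thesis itself: FPGA TDCs are most commonly built as tapped delay lines read out as thermometer codes, with statistical (code-density) calibration compensating the unequal tap delays that cause the non-linearity the abstract mentions. The Python sketch below simulates that idea; the tap count, clock period and delay spread are all invented for the example, and the actual instrument may differ.

```python
# Illustrative model of a tapped-delay-line TDC (a hypothetical sketch for
# intuition, not the instrument described in the thesis).
import random

CLOCK_PERIOD_PS = 2000          # assumed system clock period (picoseconds)
N_TAPS = 200                    # assumed number of carry-chain taps

# Real carry chains have unequal tap delays; model that with jittered values.
random.seed(1)
tap_delays = [CLOCK_PERIOD_PS / N_TAPS * random.uniform(0.5, 1.5)
              for _ in range(N_TAPS)]

def sample_thermometer(arrival_ps: float) -> int:
    """Return how many taps the signal traverses before the next clock edge,
    i.e. the position of the 1->0 boundary in the thermometer code."""
    remaining = CLOCK_PERIOD_PS - (arrival_ps % CLOCK_PERIOD_PS)
    taps, acc = 0, 0.0
    for d in tap_delays:
        acc += d
        if acc > remaining:
            break
        taps += 1
    return taps

def code_density_calibration(n_events: int = 100_000):
    """Estimate each bin's width by feeding uniformly distributed hits
    (the standard code-density calibration technique)."""
    hist = [0] * (N_TAPS + 1)
    for _ in range(n_events):
        hist[sample_thermometer(random.uniform(0, CLOCK_PERIOD_PS))] += 1
    total = sum(hist)
    return [CLOCK_PERIOD_PS * h / total for h in hist]  # bin widths in ps

if __name__ == "__main__":
    widths = code_density_calibration()
    print(f"mean bin width {sum(widths)/len(widths):.1f} ps, "
          f"max {max(widths):.1f} ps (non-uniform, hence the DNL problem)")
```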
2

Linear type theories, semantics and action calculi

Barber, Andrew G. January 1997 (has links)
In this thesis, we study linear type-theories and their semantics. We present a general framework for such type-theories, and prove certain decidability properties of its equality. We also present intuitionistic linear logic and Milner's action calculi as instances of the framework, and use our results to show decidability of their respective equality judgements. Firstly, we motivate our development by giving a split-context logic and type-theory, called dual intuitionistic linear logic (DILL), which is equivalent at the level of term equality to the familiar type-theory derived from intuitionistic linear logic (ILL). We give a semantics for the type-theory DILL, and prove soundness and completeness for it; we can then deduce these results for the type-theory ILL by virtue of our translation. Secondly, we generalise DILL to obtain a general logic, type-theory and semantics based on an arbitrary set of operators, or general natural deduction rules. We again prove soundness and completeness results, augmented in this case by an initiality result. We introduce Milner's action calculi, and present example instances of our framework which are isomorphic to them. We extend this isomorphism to three significant higher-order variants of the action calculi, having functional properties, and compare the induced semantics for these action calculi with those given previously. Thirdly, motivated by these functional extensions, we define higher-order instances of our general framework, which are equipped with functional structure, proceeding as before to give logic, type-theory and semantics. We show that the logic and type-theory DILL arise as a higher-order instance of our general framework. We then define the higher-order extension of any instance of our framework, and use a Yoneda lemma argument to show that the obvious embedding from an instance into its higher-order extension is conservative. This has the corollary that the embeddings from the action calculi into the higher-order action calculi are all conservative, extending a result of Milner. Finally, we introduce relations, a syntax derived from proof-nets, for our general framework, and use them to show that certain instances of our framework, including some higher-order instances, have decidable equality judgements. This immediately shows that the equalities of DILL, ILL, the action calculi and the higher-order action calculi are decidable.
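For readers unfamiliar with the split-context format, the following is a standard rendering of DILL-style dual-context rules; the notation is mine and may differ in detail from the thesis's own presentation.

```latex
% Judgement \Gamma;\Delta \vdash M : A, where the intuitionistic context
% \Gamma may be copied or discarded, while the linear context \Delta must
% be used exactly once. Representative rules:
\[
\frac{}{\Gamma;\, x{:}A \vdash x : A}
\qquad
\frac{\Gamma;\Delta_1 \vdash M : A \quad \Gamma;\Delta_2 \vdash N : B}
     {\Gamma;\Delta_1,\Delta_2 \vdash M \otimes N : A \otimes B}
\qquad
\frac{\Gamma;\Delta,\, x{:}A \vdash M : B}
     {\Gamma;\Delta \vdash \lambda x.\,M : A \multimap B}
\]
```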
3

Ordinals and interactive programs

Hancock, Peter January 2000 (has links)
The work reported in this thesis arises from the old idea, going back to the origins of constructive logic, that a proof is fundamentally a kind of program. If proofs can be considered as programs, then one might expect that proof theory should have much to contribute to the theory of programming. This has indeed turned out to be the case. Various technologies developed in proof theory are now widely used in computer science for formulating and investigating programming languages and logics connected with them. Yet there is a vigorous and venerable part of proof theory which has so far had little impact in computer science, namely ordinal-theoretic proof theory. This focuses on proofs of wellfoundedness, usually expressed in the form of a schema of transfinite induction with respect to a representation of an initial segment of the countable ordinals. Proof theory of this kind is connected with what it is that limits the capacity of a proof system to 'see into' the transfinite. If proofs can be considered as programs, what kind of program is a proof of wellfoundedness? My hypothesis is that the limitations of a formal system for writing proofs of wellfoundedness reflect its limitations as a system in which to program strategies for defeating one's opponent in a certain kind of game. In recent computer science, games have proved invaluable as models for describing patterns of interaction between a system and its environment. I cannot claim to have substantiated this hypothesis, but only to have taken a few steps in that direction. The work reported in the thesis lies in three areas. First, I present a framework for dependently typed programming in the style advocated by Martin-Löf. The novelties here are connected with bringing the type-theoretic approach to programming that comes from the Curry-Howard correspondence closer to the calculational approach in the categorical tradition that comes from Lambek and Lawvere. A particular challenge is to find a smooth and practical way of encoding inductive and coinductive definitions. Second, I have investigated a number of ways of modelling interactive systems and transition systems in a constructive context. The focus here is on models with a direct computational interpretation, which can actually be used in programming. The approach is inspired by a construction due to Petersson and Synek. It is shown how one may represent game-theoretic strategies of various kinds using these models. Finally, I give a construction of provable ordinals within a Martin-Löf style type theory that has a type of natural numbers and an external sequence of universes closed under generalised Cartesian products. The locus of the ideas for this construction lies more in conventional proof theory; they were the basis for a conjecture made by me almost thirty years ago in work that I then abandoned. What is new here is the concept of a 'lens': a predicate transformer that has been implicit in the construction of proofs of wellfoundedness since Gentzen. I hope this concept may be of some use in an algebraic, systematic approach to setting lower bounds on the proof-theoretic strength of more extensive type theories.
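As a purely illustrative sketch of the command/response view of interaction the abstract invokes (the Python rendering and all names are mine, loosely inspired by the Petersson-Synek style of interface, not taken from the thesis):

```python
# A hypothetical rendering of an interaction structure: states, commands
# available in each state, responses the environment may return, and a
# next-state map. A client strategy picks commands and must be prepared
# for every response the interface allows.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Interface:
    commands: Callable[[str], List[str]]          # commands legal in a state
    responses: Callable[[str, str], List[str]]    # environment's possible replies
    next_state: Callable[[str, str, str], str]    # state after command + response

# A trivial example: a counter the client may increment; the environment
# may acknowledge ("ok") or veto the move ("veto").
counter = Interface(
    commands=lambda s: ["inc"],
    responses=lambda s, c: ["ok", "veto"],
    next_state=lambda s, c, r: str(int(s) + 1) if r == "ok" else s,
)

def play(iface: Interface, strategy: Callable[[str], str],
         environment: Callable[[str, str], str], state: str, rounds: int) -> str:
    """Run the game: the strategy issues commands, the environment answers."""
    for _ in range(rounds):
        cmd = strategy(state)
        assert cmd in iface.commands(state)
        resp = environment(state, cmd)
        assert resp in iface.responses(state, cmd)
        state = iface.next_state(state, cmd, resp)
    return state

print(play(counter, lambda s: "inc", lambda s, c: "ok", "0", 3))  # -> "3"
```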
4

Optimising automatic fault detection and diagnostics for large sequential logic networks

Jain, G. C. January 1973 (has links)
No description available.
5

Justification based explanation in ontologies

Horridge, Matthew January 2011 (has links)
The Web Ontology Language, OWL, is the latest standard in logic-based ontology languages. It is built upon the foundations of highly expressive Description Logics, which are fragments of First Order Logic. These logical foundations mean that it is possible to compute what is entailed by an OWL ontology. The reasons for entailments can range from fairly simple and localised through to highly non-obvious. In both cases, without tool support that provides explanations for entailments, it can be very difficult or impossible to understand why an entailment holds. In the OWL world, justifications, which are minimal entailing subsets of ontologies, have emerged as the dominant form of explanation. This thesis investigates justification-based explanation techniques. The core of the thesis is devoted to defining and analysing Laconic and Precise Justifications. These are fine-grained justifications whose axioms do not contain any superfluous parts. Optimised algorithms for computing these justifications are presented, and an extensive empirical investigation shows that they perform well on state-of-the-art, large and expressive biomedical ontologies. The investigation also highlights the prevalence of superfluity in real ontologies, along with the related phenomenon of justification masking. The practicality of computing Laconic Justifications, coupled with the prevalence of non-laconic justifications in the wild, indicates that Laconic and Precise Justifications are likely to be useful in practice. The work presented in this thesis should be of interest to researchers in the area of knowledge representation and reasoning, and to developers of reasoners and ontology editors who wish to incorporate explanation generation techniques into their systems.
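The thesis's optimised algorithms are not reproduced here, but the classic black-box "expand-contract" procedure for computing a single justification gives the flavour of the problem. In this sketch the `entails` callback stands in for a real OWL reasoner; its signature is an assumption of the example, not an actual library API.

```python
# Hedged sketch: grow a subset of the ontology until it entails the target,
# then shrink it to a minimal entailing subset (one justification).
from typing import Callable, FrozenSet, Set, TypeVar

Axiom = TypeVar("Axiom")

def justification(ontology: Set[Axiom],
                  entails: Callable[[FrozenSet[Axiom]], bool]) -> FrozenSet[Axiom]:
    """Return one minimal subset S of `ontology` with entails(S) true."""
    assert entails(frozenset(ontology)), "target not entailed at all"
    # Expansion: add axioms until the working set entails the target.
    working: Set[Axiom] = set()
    for ax in ontology:
        working.add(ax)
        if entails(frozenset(working)):
            break
    # Contraction: remove every axiom that is not actually needed.
    for ax in list(working):
        if entails(frozenset(working - {ax})):
            working.remove(ax)
    return frozenset(working)

# Toy propositional stand-in for a reasoner: the "entailment" holds exactly
# when both axiom "a" and axiom "b" are present.
print(sorted(justification({"a", "b", "c"}, lambda s: {"a", "b"} <= s)))
# -> ['a', 'b']
```

Note that this yields an ordinary justification; Laconic Justifications go further by also rewriting axioms so that no superfluous *parts* of axioms remain.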
6

Proof-theoretical observations of BI and BBI base-logic interactions, and development of phased sequent calculus to define logic combinations

Arisaka, Ryuta January 2013 (has links)
I study sequent calculus for combined logics in this thesis. Two specific logics are looked at: Logic BI, which combines intuitionistic logic and multiplicative intuitionistic linear logic, and Logic BBI, which combines classical logic and multiplicative linear logic. A proof-theoretical study of logical combinations themselves then follows. To consolidate intuition about what this thesis is all about, suppose that we know about two different logics: Logic A, developed for reasoning about Purpose A, and Logic B, developed for reasoning about Purpose B. Logic A serves Purpose A very well, but not Purpose B; Logic B serves Purpose B very well, but not Purpose A. We wish to fulfil both purposes, but presently we can only afford to let one logic guide our reasoning. What shall we do? One option is to be content with Logic A, with which we handle Purpose A efficiently and Purpose B rather inefficiently. Another option is to choose Logic B instead. But there is yet another option: we combine Logic A and Logic B to derive a new logic, Logic C, which is still one logic but serves both Purpose A and Purpose B efficiently. The combined logic synthesises the strengths of the more basic logics (Logic A and Logic B). As it nicely takes care of our requirements, it may be the best choice among all those considered so far. Yet this is not the end of the story. Depending on the manner in which Logic A and Logic B combine, Logic C may have extensions serving more purposes than just Purpose A and Purpose B. The ensuing problem is this: we know about Logic A and Logic B, but we may not know about combined logics of the base logics. To understand the combined logics, we need to understand the extensions in which the base logics interact with each other. Analysis of the interesting parts tends to be non-trivial, however. The two combined logics mentioned, BI and BBI, are no exception, and their proof-theoretical development has been particularly slow: it has remained obscure how to properly handle the base-logic interactions of the combined logics as they appear syntactically. As one objective of this thesis, I analyse the syntactic phenomena of the BI and BBI base-logic interactions within sequent calculus, to augment this knowledge. For BI, I deliver, through appropriate methodologies for reasoning about the syntactic phenomena of the base-logic interactions, the first BI sequent calculus free of any structural rules. Given its positive consequences for efficient proof search, this is a significant step towards further maturity of BI proof theory. Based on the calculus, I prove decidability of a fragment of BI purely syntactically. For BBI, which is closely connected to applications via separation logic, I develop adequate sequent calculus conventions and consider the implications of the underlying semantics for the syntax. The result is sound BBI sequent calculi with a closer syntax-semantics correspondence than previously envisaged; adaptation to separation logic is also considered. To promote the knowledge of combined logics in general within computer science, it is also important that we be able to study logical combinations themselves. Towards this direction of generalisation, I present the concept of phased sequent calculus: sequent calculus which physically separates base logics, and in which a specific manner of logical combination between them can actually be developed and analysed. For a demonstration, the decidable BI fragment mentioned above is formulated in phased sequent calculus, and the sense of logical combination in effect is analysed. A decision procedure is presented for the fragment.
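For background, the structural subtlety of BI comes from its bunched contexts. The following are the standard textbook rules for the two conjunctions, not the structural-rule-free calculus contributed by the thesis:

```latex
% Bunched contexts: ";" (additive, admits weakening and contraction) and
% "," (multiplicative, admits neither). The additive conjunction shares
% the bunch; the multiplicative conjunction splits it.
\[
\frac{\Gamma \vdash A \quad \Gamma \vdash B}{\Gamma \vdash A \wedge B}\;(\wedge R)
\qquad
\frac{\Gamma \vdash A \quad \Delta \vdash B}{\Gamma\,,\,\Delta \vdash A \ast B}\;(\ast R)
\qquad
\frac{\Gamma\,,\,A \vdash B}{\Gamma \vdash A \mathbin{-\!\ast} B}\;(-\!\ast R)
\]
```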
7

Dynamic Resource Logics: Models, Properties and Proofs

Courtault, Jean-René 15 April 2015 (has links)
In computer science, the notion of resource is a central concern. We consider as a resource any entity that can be composed or decomposed into sub-entities. Many logics have been proposed to model and express properties of such resources, like BI, a logic of sharing and separation of resources. As computer systems manipulate resources, a crucial issue is to provide new models that capture the dynamics of resources, and to verify and prove properties on these models. In this context, we define new logics with new models and new languages that respectively capture and express new properties of the dynamics of resources. Moreover, for all these logics we also study the foundations of proof search, providing tableau methods and counter-model extraction methods. After defining new Petri nets, called ß-PN, we propose a new ß-PN-based semantics for BI, which allows us to show that BI captures a kind of resource dynamics. Having observed that it is necessary to introduce new modalities into BI, we then study successive modal extensions of BI. We define a logic, called DBI, that models resources with dynamic properties, meaning that they evolve according to the current state of a system. Then we define a logic, called DMBI, that models systems which manipulate/produce/consume resources. Moreover, we define a new modal logic, called LSM, with new multiplicative (resource-related) modalities. Finally, we introduce the notion of separation into epistemic logic, obtaining a new logic, called ESL, that models and expresses new properties of agent knowledge.
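To make the sharing/separation reading concrete, here is one common Kripke resource semantics for BI's multiplicative connectives. This is standard background on BI, offered for orientation rather than taken from this thesis:

```latex
% One common presentation: resources form a preordered commutative monoid
% (M, \cdot, e, \sqsubseteq); separation means splitting a resource.
\[
m \models A \ast B
  \iff \exists m_1, m_2.\;\; m_1 \cdot m_2 \sqsubseteq m
       \;\wedge\; m_1 \models A \;\wedge\; m_2 \models B
\]
\[
m \models A \mathbin{-\!\ast} B
  \iff \forall m'.\;\; m' \models A \implies m \cdot m' \models B
\]
```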
8

Subspace clustering on static datasets and dynamic data streams using bio-inspired algorithms

Peignier, Sergio 27 July 2017 (has links)
An important task that has been investigated in the context of high-dimensional data is subspace clustering. This data-mining task is recognized as more general and complicated than standard clustering, since it aims to detect groups of similar objects, called clusters, and at the same time to find the subspaces where these similarities appear. Furthermore, subspace clustering approaches, like traditional clustering ones, have recently been extended to deal with data streams by updating clustering models incrementally. The different algorithms proposed in the literature rely on very different algorithmic foundations, and among them evolutionary algorithms have been under-explored, even though these techniques have proven valuable on other NP-hard problems.
The aim of this thesis was to take advantage of new knowledge from evolutionary biology to conceive evolutionary subspace clustering algorithms for static datasets and dynamic data streams. Chameleoclust, the first algorithm developed in this work, takes advantage of the large degree of freedom provided by bio-like features such as a variable genome length, the existence of functional and non-functional elements, and mutation operators including chromosomal rearrangements. KymeroClust, our second algorithm, is a k-medians-based approach that relies on the duplication and divergence of genes, a cornerstone evolutionary mechanism. SubMorphoStream, the last one, tackles subspace clustering over dynamic data streams; it relies on two mechanisms that favour fast adaptation of bacteria to changing environments, namely gene amplification and uptake of foreign genetic material. All these algorithms were compared to the main state-of-the-art techniques and obtained competitive results, suggesting they are useful complementary tools in the analyst's toolbox. In addition, two applications, EvoWave and EvoMove, were developed to assess the capacity of these algorithms to address real-world problems. EvoWave analyses Wi-Fi signals to detect different contexts. EvoMove is a musical companion that produces sounds based on the clustering of dancer moves captured by motion sensors.
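As a toy illustration of the duplication-and-divergence idea these algorithms share (this is not Chameleoclust, KymeroClust or SubMorphoStream; the dataset, operators and rates below are all invented for the example):

```python
# Individuals encode clusters restricted to subspaces: each cluster "gene"
# maps a subset of dimensions to centre coordinates. Mutation includes
# gene duplication, divergence and loss, in rough analogy to the thesis.
import random

random.seed(0)

# Synthetic data: one cluster lives in dims (0,1), another in dims (2,3);
# the remaining dims of each point are uniform noise.
def make_point(kind):
    p = [random.uniform(0, 10) for _ in range(4)]
    if kind == 0:
        p[0], p[1] = random.gauss(2, .3), random.gauss(2, .3)
    else:
        p[2], p[3] = random.gauss(8, .3), random.gauss(8, .3)
    return p

DATA = [make_point(i % 2) for i in range(200)]

def dist(cluster, point):        # distance only over the cluster's own dims
    return sum((point[d] - c) ** 2 for d, c in cluster.items()) / len(cluster)

def fitness(genome):
    err = sum(min(dist(cl, p) for cl in genome) for p in DATA)
    return -err - 0.5 * sum(len(cl) for cl in genome)  # mild parsimony pressure

def mutate(genome):
    g = [dict(cl) for cl in genome]
    cl = random.choice(g)
    op = random.random()
    if op < 0.5:                                  # divergence: nudge a centre
        d = random.choice(list(cl))
        cl[d] += random.gauss(0, 1)
    elif op < 0.7:                                # grow or shrink the subspace
        d = random.randrange(4)
        if d in cl and len(cl) > 1:
            del cl[d]
        else:
            cl[d] = random.uniform(0, 10)
    elif op < 0.9 and len(g) < 6:                 # gene duplication
        g.append(dict(cl))
    elif len(g) > 1:                              # gene loss
        g.remove(cl)
    return g

# (1+4) evolution strategy over the cluster genome.
best = [{0: 5.0, 1: 5.0}]
for _ in range(3000):
    best = max([mutate(best) for _ in range(4)] + [best], key=fitness)

print([{d: round(c, 1) for d, c in cl.items()} for cl in best])
```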
