1. Cut-free sequent and tableau systems for propositional normal modal logics

Gore, Rajeev January 1991 (has links)
No description available.
2. Integrating logic and objects for knowledge representation and reasoning

Hatzilygeroudis, Ioannis January 1992 (has links)
No description available.
3. Enriching deontic logic with typicality

Chingoma, Julian January 2020 (has links)
Legal reasoning is the method legal practitioners apply to reach legal decisions. For a given scenario, legal reasoning requires not only the facts of the scenario but also the legal rules to be enforced within it. Formal logic has long been used for reasoning tasks in many domains. Deontic logic, with its built-in notions of obligation, permission and prohibition, is often used to formalise legal scenarios. Within the legal domain, it is important to recognise that there are many exceptions and conflicting obligations. This motivates enriching deontic logic not only with the notion of defeasibility, which allows reasoning about exceptions, but also with the stronger notion of typicality, which builds on defeasibility. KLM-style defeasible reasoning, introduced by Kraus, Lehmann and Magidor (KLM), is a logical system that employs defeasibility, while Propositional Typicality Logic (PTL) serves the same role for the stronger notion of typicality. Deontic paradoxes are often used to examine deontic logic systems: although the scenarios they describe seem intuitive, applying desirable deontic properties to them produces undesirable results. This dissertation shows that KLM-style defeasible reasoning and PTL are both effective when applied to the analysis of the deontic paradoxes. We first present the background material: propositional logic, which forms the foundation for the other logic systems, followed by KLM-style defeasible reasoning, deontic logic and PTL. We outline the paradoxes, along with their issues, within the presentation of deontic logic. We then show that, for each of the two logic systems, we can intuitively translate the paradoxes, satisfy many of the desirable deontic properties, and produce reasonable solutions to the issues the paradoxes raise.
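To make the issue concrete, here is one classic example of such a paradox, Chisholm's paradox, rendered in standard deontic logic; the example is a textbook one of our choosing, as the abstract does not list the specific paradoxes treated.

```latex
% Chisholm's paradox in standard deontic logic (SDL), where O is
% "it is obligatory that". The four premises seem jointly consistent:
\begin{align*}
  &\mathsf{O}\,h                         && \text{one ought to help one's neighbour,}\\
  &\mathsf{O}(h \rightarrow t)           && \text{if one helps, one ought to tell them,}\\
  &\neg h \rightarrow \mathsf{O}\,\neg t && \text{if one does not help, one ought not to tell them,}\\
  &\neg h                                && \text{one does not, in fact, help.}
\end{align*}
% Yet SDL derives O t from the first two premises (via the K axiom for O)
% and O ~t from the last two (via modus ponens), so an intuitively
% consistent scenario yields conflicting obligations -- the sort of
% failure that motivates enriching deontic logic with defeasibility
% or typicality.
```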
4. Deadlock and deadlock freedom

Dathi, Naiem January 1989 (has links)
We introduce a number of techniques for establishing the deadlock freedom of concurrent systems. Our methods are based on the local analysis (or at worst a directed global analysis) of networks. We identify the relationships between these techniques and the range of their application within a framework of deadlock freedom types that we have defined. We also show that the problem of proving total correctness may be translated to one of proving deadlock freedom, with the consequence that our techniques for proving deadlock freedom may be utilised to effect a total correctness proof.
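The abstract leaves the techniques themselves unstated. Purely as a point of reference, the sketch below shows the graph-theoretic core shared by many elementary deadlock checks, namely that a blocked system state is a deadlock only if its wait-for graph contains a cycle; it is not the thesis's network-analysis method, and all names in it are our own.

```python
# Illustrative only: a blocked state deadlocks only if the "wait-for"
# graph (who is blocked on whom) contains a cycle. Names are hypothetical.

def has_cycle(wait_for: dict[str, set[str]]) -> bool:
    """Depth-first search for a cycle in a wait-for graph.

    wait_for maps each process to the set of processes it is blocked on.
    """
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in wait_for}

    def visit(p: str) -> bool:
        colour[p] = GREY
        for q in wait_for.get(p, set()):
            if colour.get(q, WHITE) == GREY:   # back edge: cycle found
                return True
            if colour.get(q, WHITE) == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(visit(p) for p in wait_for if colour[p] == WHITE)

# P waits on Q and Q waits on P: a deadlock. Breaking the cycle clears it.
assert has_cycle({"P": {"Q"}, "Q": {"P"}})
assert not has_cycle({"P": {"Q"}, "Q": set()})
```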
5. Automatic methods of inductive inference

Plotkin, Gordon D. January 1972 (has links)
This thesis is concerned with algorithms for generating generalisations from experience. These algorithms are viewed as examples of the general concept of a hypothesis discovery system which, in its turn, is placed in a framework in which it is seen as one component in a multi-stage process which includes stages of hypothesis criticism or justification, data gathering and analysis, and prediction. Formal and informal criteria, which should be satisfied by the discovered hypotheses, are given. In particular, they should explain experience and be simple. The formal work uses the first-order predicate calculus. These criteria are applied to the case of hypotheses which are generalisations from experience. A formal definition of generalisation from experience, relative to a body of knowledge, is developed and several syntactical simplicity measures are defined. This work uses many concepts taken from resolution theory (Robinson, 1965). We develop a set of formal criteria that must be satisfied by any hypothesis generated by an algorithm for producing generalisations from experience. The mathematics of generalisation is developed. In particular, in the case when there is no body of knowledge, it is shown that there is always a least general generalisation of any two clauses, in the generalisation ordering. (In resolution theory, a clause is an abbreviation for a disjunction of literals.) This least general generalisation is effectively obtainable. Some lattices induced by the generalisation ordering, in the case where there is no body of knowledge, are investigated. The formal set of criteria is investigated. It is shown that for a certain simplicity measure, and under the assumption that there is no body of knowledge, there always exist hypotheses which satisfy them. Generally, however, there is no algorithm which, given the sentences describing experience, will produce as output a hypothesis satisfying the formal criteria. These results persist for a wide range of other simplicity measures. However, several useful cases for which algorithms are available are described, as are some general properties of the set of hypotheses which satisfy the criteria. Some connections with philosophy are discussed. It is shown that, with sufficiently large experience, in some cases, any hypothesis which satisfies the formal criteria is acceptable in the sense of Hintikka and Hilpinen (1966). The role of simplicity is further discussed. Some practical difficulties which arise because of Goodman's (1965) "grue" paradox of confirmation theory are presented. A variant of the formal criteria suggested by the work of Meltzer (1970) is discussed. This allows an effective method to be developed when this was not possible before. However, the possibility is countenanced that inconsistent hypotheses might be proposed by the discovery algorithm. The positive results on the existence of hypotheses satisfying the formal criteria are extended to include some simple types of knowledge. It is shown that they cannot be extended much further without changing the underlying simplicity ordering. A program which implements one of the decidable cases is described. It is used to find definitions in the game of noughts and crosses and in family relationships. An abstract study is made of the progression of hypothesis discovery methods through time. Some possible and some impossible behaviours of such methods are demonstrated. This work is an extension of that of Gold (1967) and Feldman (1970).
The results are applied to the case of machines that discover generalisations. They are found to be markedly sensitive to the underlying simplicity ordering employed.
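The least general generalisation result is constructive. As a hedged illustration, the following sketch implements the heart of Plotkin's construction for the simpler case of two first-order terms (anti-unification); the term encoding and helper names are ours, not the thesis's.

```python
# A minimal sketch of least general generalisation (lgg) for first-order
# *terms*; the thesis develops the full construction for clauses. A term
# is a string (constant) or a tuple ('f', t1, ..., tn) for a compound
# with functor f. This representation is our own, for illustration only.

def lgg(s, t, table=None, counter=None):
    """Least general generalisation of terms s and t.

    Mismatching subterm pairs are mapped (consistently, via `table`)
    to fresh variables 'X0', 'X1', ...
    """
    if table is None:
        table, counter = {}, [0]
    if s == t:
        return s
    # Compounds with the same functor and arity generalise componentwise.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, table, counter)
                               for a, b in zip(s[1:], t[1:]))
    # Anything else becomes a variable; the same mismatch reuses the
    # same variable, which is what makes the result *least* general.
    if (s, t) not in table:
        table[(s, t)] = f"X{counter[0]}"
        counter[0] += 1
    return table[(s, t)]

# lgg(f(a, a), f(b, b)) = f(X0, X0), not f(X0, X1):
assert lgg(("f", "a", "a"), ("f", "b", "b")) == ("f", "X0", "X0")
```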
6. Strategies for improving the efficiency of automatic theorem-proving

Kuehner, Donald Grant January 1971 (has links)
In an attempt to overcome the great inefficiency of theorem-proving methods, several existing methods are studied, and several new ones are proposed. A concentrated attempt is made to devise a unified proof procedure whose inference rules are designed for efficient use by a search strategy. For unsatisfiable sets of Horn clauses, it is shown that P1-resolution and selective linear negative (SLN) resolution can be alternated heuristically to conduct a bi-directional search. This bi-directional search is shown to be more efficient than either P1-resolution or SLN-resolution alone. The extreme sparseness of the SLN search spaces leads to the extension of SLN-resolution to a more general and more powerful resolution rule, selective linear (SL) resolution, which resembles Loveland's model elimination strategy. With SL-resolution, all immediate descendants of a clause are obtained by resolving upon a single selected literal of that clause. Linear resolution, s-linear resolution and t-linear resolution are shown to be as powerful as the most powerful resolution systems. By slightly decreasing the power, a considerable increase in the sparseness of search spaces is obtained by using SL-resolution. The amenability of SL-resolution to applications of heuristic methods suggests that, on these grounds alone, it is at least competitive with theorem-proving procedures designed solely from heuristic considerations. Considerable attention is devoted to various anticipation procedures which allow an estimate of the sparseness of search trees before their generation. Unlimited anticipation takes the form of pseudo-search trees which construct outlines of possible proofs. Anticipation procedures together with a number of heuristic measures are suggested for the implementation of an exhaustive search strategy for SL-resolution.
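For readers unfamiliar with linear resolution, the sketch below shows its flavour in the simplest setting, propositional Horn clauses, where resolving on a single selected literal of the goal clause amounts to backward chaining (SLD resolution); the thesis's SLN- and SL-resolution generalise this to arbitrary clauses, and the representation here is our own.

```python
# Illustration only: linear resolution restricted to propositional Horn
# clauses. Each step resolves on one selected literal -- the leftmost
# goal atom -- so the search space is a tree of goal lists rather than a
# graph of arbitrary resolvents: the sparseness linear resolution buys.

def sld_refutable(program, goal, depth=25):
    """Is the goal clause refutable from the Horn program?

    program: list of (head, [body atoms]) rules; facts have empty bodies.
    goal: list of atoms to prove (the negative "top clause").
    """
    if not goal:
        return True            # empty clause derived: refutation found
    if depth == 0:
        return False           # crude loop guard for this toy sketch
    selected, rest = goal[0], goal[1:]
    return any(sld_refutable(program, list(body) + rest, depth - 1)
               for head, body in program if head == selected)

rules = [("q", ["p"]), ("p", []), ("r", ["q", "p"])]
assert sld_refutable(rules, ["r"])      # r <- q, p; q <- p; p.
assert not sld_refutable(rules, ["s"])  # s has no supporting clause
```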
7. Integrating Reasoning Services for Description Logics with Cardinality Constraints with Numerical Optimization Techniques

De Bortoli, Filippo 08 May 2023 (has links)
Recent research in the field of Description Logic (DL) investigated the complexity of the satisfiability problem for description logics that are obtained by enriching the well-known DL ALCQ with more complex set and cardinality constraints over role successors. The algorithms that have been proposed so far, despite providing worst-case optimal decision procedures for the concept satisfiability problem (both without and with a terminology), lack the efficiency needed to obtain usable implementations. In particular, the algorithm for the case without a terminology is non-deterministic and the one for the case with a terminology is also best-case exponential. The goal of this thesis is to use well-established techniques from the field of numerical optimization, such as column generation, in order to obtain more practical algorithms. As a starting point, efficient approaches for dealing with counting quantifiers over unary predicates based on SAT-based column generation should be considered.

Contents:
1. Introduction
2. Preliminaries
   2.1. First-order logic
   2.2. Linear Programming
   2.3. The description logic ALCQ
   2.4. Extending ALCQ with expressive role successor constraints
        2.4.1. The logic QFBAPA
        2.4.2. The description logic ALCSCC
3. The description logic ALCCQU
   3.1. A normal form for ALCCQU
   3.2. ALCQt as an equivalent formulation of ALCCQU
        3.2.1. ALCQt is a sublogic of ALCCQU
        3.2.2. ALCCQU is a sublogic of ALCQt
   3.3. Model-theoretic characterization of ALCQt
        3.3.1. ALCQt-bisimulation and invariance for ALCQt
        3.3.2. Characterization of ALCQt concept descriptions
   3.4. Expressive power
        3.4.1. Relative expressivity of ALCQ and ALCCQU
        3.4.2. Relative expressivity of ALCCQU and ALCSCC
   3.5. ALCCQU as the first-order fragment of ALCSCC
4. Concept satisfiability in ALCCQU
   4.1. The first-order fragment CQU
   4.2. Column generation with SAT oracle
        4.2.1. Column generation and CQU
        4.2.2. From linear inequalities to propositional formulae
        4.2.3. Column generation and ALCCQU
   4.3. Branch-and-Price for ALCCQU concept satisfiability
   4.4. Correctness of ALCCQU-BB
        4.4.1. Complexity of ALCCQU-BB
5. Conclusion
Bibliography
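For orientation, the generic column-generation scheme referred to above can be summarised as follows; this is our rendering of the textbook method, with the SAT oracle cast in the role of the pricing procedure.

```latex
% Textbook column generation (our summary, not the thesis's notation).
% Restricted master LP over the columns A' generated so far:
\[
  \min \; c^{\top} x
  \quad \text{s.t. } A' x \ge b,\; x \ge 0,
\]
% with dual solution $\pi$. The pricing problem asks for a new column
% $a$ (here: a propositional model found by the SAT oracle) with
% negative reduced cost
\[
  \bar{c}_a \;=\; c_a - \pi^{\top} a \;<\; 0 ,
\]
% and the loop alternates master and pricing until no such column
% exists, at which point the master solution is optimal.
```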
8. Lógica BDI fuzzy (Fuzzy BDI Logic)

Cruz, Anderson Paiva 26 September 2008 (has links)
Seeking to understand how the human mind operates, philosophers and psychologists began to study rationality. Theories grew out of those studies, and that interest has since extended to other areas such as computer science and computer engineering, with a subtly different goal: to understand the operation of the mind and apply it to the modelling of agents, so that software (or hardware) can be implemented under the agent-oriented paradigm, in which agents deliberate their own plans of action. Within computer science, the sub-area of multiagent systems has progressed considerably, drawing on work in artificial intelligence, computational logic, distributed systems, game theory, and even philosophy and psychology. This work presents an extension of the logical formalisation of a rational-agent architecture model called BDI (belief-desire-intention), based on Bratman's philosophical theory, in which an agent deliberates its actions from its beliefs, desires and intentions. The formalisation of this model is known as BDI logic, a modal logic (in general, a branching-time logic) with three accessibility relations: B, D and I. Two possible extensions are presented that transform BDI logic into a modal fuzzy logic in which formulae and accessibility relations are evaluated by values in the interval [0,1]. This fuzzy modal logic is intended to be a formal system capable of quantitatively representing different degrees of beliefs, desires and intentions, with the aim of supporting fuzzy reasoning and the deliberation of actions by an agent (or group of agents) through these mental attitudes (thus following an intensional model).
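As a minimal illustration of what a [0,1]-valued modal semantics can look like, the sketch below evaluates a graded belief modality over a fuzzy Kripke model using the Gödel implication; this is one standard choice among several, and need not coincide with either of the two extensions the dissertation develops. The names are hypothetical.

```python
# One standard way to let formulae and accessibility relations take
# values in [0, 1]: evaluate B(phi) at a world as the infimum, over
# accessible worlds, of R(w, w') => v(phi, w') under Goedel implication.

def goedel_implies(a: float, b: float) -> float:
    """Goedel implication: 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def belief_degree(world, access, valuation):
    """Degree of B(phi) at `world`.

    access:    dict mapping (w, w') to the fuzzy accessibility degree R_B.
    valuation: dict mapping w' to the degree of phi at w'.
    """
    return min(
        (goedel_implies(r, valuation[w2])
         for (w1, w2), r in access.items() if w1 == world),
        default=1.0,   # no accessible world: belief holds vacuously
    )

# The agent at w0 half-trusts w1 (where phi fully holds) and fully
# trusts w2 (where phi holds only to degree 0.7):
R = {("w0", "w1"): 0.5, ("w0", "w2"): 1.0}
v_phi = {"w1": 1.0, "w2": 0.7}
print(belief_degree("w0", R, v_phi))   # 0.7
```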
9. Applications of Foundational Proof Certificates in theorem proving

Blanco Martínez, Roberto 21 December 2017 (has links)
Formal trust in an abstract property, be it a mathematical result or a quality of the behavior of a computer program or a piece of hardware, is founded on the existence of a proof of its correctness. Many different kinds of proofs are written by mathematicians or generated by theorem provers, with the common problem of ascertaining whether those claimed proofs are themselves correct. The recently proposed Foundational Proof Certificate (FPC) framework harnesses advances in proof theory to define the semantics of proof formats, which can be verified by an independent and trusted proof checking kernel written in a logic programming language. This thesis extends initial results in certification of first-order proofs in several directions. It covers various essential logical axes grouped in meaningful combinations as they occur in practice: first, classical logic without fixed points and proofs generated by automated theorem provers; later, intuitionistic logic with fixed points and equality as logical connectives and proofs generated by proof assistants.
The role of proof certificates is no longer limited to representing complete proofs to enable independent checking, but is extended to model proof transformations where details can be added to or subtracted from a certificate. These transformations yield operationally simpler certificates, around which increasingly trustworthy and performant proof checkers are constructed. Another new role of proof certificates is writing high-level proof outlines, which can be used to represent standard proof patterns as written by mathematicians, as well as automated techniques like property-based testing. We apply these developments to fully certify results produced by two families of standard automated theorem provers: resolution- and satisfiability-based. Another application is the design of programmable proof description languages for a proof assistant.
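FPC proof checkers are logic programs (typically in λProlog), so the Python sketch below is emphatically not the FPC framework; it only conveys the central idea that a small trusted kernel re-checks an untrusted proof object step by step, here for modus-ponens reasoning over propositional implications. All names are our own.

```python
# A toy trusted kernel: formulas are atoms (strings) or implications
# ('->', lhs, rhs); a proof is an untrusted list of steps the kernel
# re-checks one by one, rejecting anything it cannot justify.

def check(premises, proof):
    """Check a proof: each step is ('prem', f) or ('mp', i, j).

    ('mp', i, j) claims step i proved some f and step j proved
    ('->', f, g), and concludes g. Returns the list of formulas
    proved, or raises ValueError on the first bad step.
    """
    proved = []
    for n, step in enumerate(proof):
        if step[0] == "prem":
            f = step[1]
            if f not in premises:
                raise ValueError(f"step {n}: {f!r} is not a premise")
            proved.append(f)
        elif step[0] == "mp":
            f, imp = proved[step[1]], proved[step[2]]
            if imp[0] != "->" or imp[1] != f:
                raise ValueError(f"step {n}: modus ponens does not apply")
            proved.append(imp[2])
        else:
            raise ValueError(f"step {n}: unknown rule {step[0]!r}")
    return proved

# From p and p -> q, the certificate ('mp', 0, 1) yields q:
print(check({"p", ("->", "p", "q")},
            [("prem", "p"), ("prem", ("->", "p", "q")), ("mp", 0, 1)]))
```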
10. The Biowall Field Test Analysis and Optimization

Torres, Jacob J. 14 May 2019 (has links)
A residential botanical air filtration system (Biowall) was developed to investigate the potential for using phytoremediation to remove contaminants from indoor air. A full-scale, functioning prototype was installed in a residence located in West Lafayette, Indiana, and integrated into the central Heating, Ventilating, and Air Conditioning (HVAC) system of the home. This research evaluated the Biowall's operation to further its potential as an energy-efficient and sustainable residential air filtration system.

The main research effort began after the Biowall was installed in the residence. A field evaluation, involving a series of measurements and data analysis, was conducted to identify treatments to improve Biowall performance. The study ran for approximately one year (Spring 2017 to Spring 2018). Based on the initial data set, the subsystems most in need of improvement were identified and changes were imposed. Following a post-treatment testing period, the initial and final performances were compared and conclusions drawn from that comparison.

The engineering and analysis reported in this document focus on the air flow path through the Biowall, plant growth, and the irrigation system. The conclusions provide an extensive evaluation of the design, operation, and function of the Biowall subsystems under review.
