11 |
Computational Aspects of Learning, Reasoning, and Deciding. Zanuttini, Bruno, 27 June 2011.
We present results and research projects concerning the computational aspects of classical problems in Artificial Intelligence. We are interested in the setting of agents that can describe their environment through a possibly huge number of Boolean descriptors and can act upon this environment. Typical applications of this kind of study are the design of autonomous robots (for exploring unknown zones, for instance) or of software assistants (for scheduling, for instance). The ultimate goal of research in this domain is the design of agents able to learn autonomously by interacting with their environment (including human users); able to reason, so as to produce new pieces of knowledge and to explain observed phenomena; and finally, able to decide which action to take at any moment, in a rational fashion. Ideally, such agents will be fast and efficient as soon as they start to interact with their environment, will improve their behavior as time goes by, and will be able to communicate naturally with humans. Among the numerous research questions raised by these objectives, we are especially interested in concept and preference learning, in reinforcement learning, in planning, and in some underlying problems in complexity theory. Particular attention is paid to interaction with humans and to huge numbers of descriptors of the environment, as are necessary in real-world applications.
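The abstract names reinforcement learning as one of the problems studied; for concreteness, here is a minimal, purely illustrative sketch of tabular Q-learning for an agent whose states are tuples of Boolean descriptors. This is not code from the thesis: the environment interface (reset, step, actions) and all parameter values are assumptions.

```python
# Illustrative sketch only (not from the thesis): tabular Q-learning for an
# agent whose states are hashable tuples of Boolean descriptors.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """env is assumed to expose reset() -> state, step(a) -> (state, reward, done),
    and a list env.actions of available actions."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            if done:
                target = r
            else:
                target = r + gamma * max(Q[(s2, a2)] for a2 in env.actions)
            # standard temporal-difference update
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```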
|
12 |
Semântica proposicional categórica / Categorical Propositional Semantics. Ferreira, Rodrigo Costa, 01 December 2010.
The basic concepts of what would later be called category theory were introduced in the 1945 article General Theory of Natural Equivalences by Samuel Eilenberg and Saunders Mac Lane. In the 1940s the theory was first applied successfully in the fields of algebraic topology and abstract algebra. During the 1950s and 1960s it became an important conceptual framework in many other areas of mathematical research, especially in homological algebra and algebraic geometry, as shown by the pioneering works of Daniel M. Kan (1958) and Alexander Grothendieck (1957). Later, mathematical-logical questions emerged within category theory, in particular with the publication of Functorial Semantics of Algebraic Theories (1963) by Francis William Lawvere. Since then, much further work has been done in categorical logic, such as the more recent Makkai (1977), Borceux (1994), Goldblatt (2006), among others. As an introduction to the application of category theory to logic, this work presents a study of categorical propositional logic. The first section familiarizes the reader with the concepts needed for a better understanding of the subject: (a) the basic components of category theory: axiomatics, categorical constructions, definitions, applications, authors, etc.; (b) certain structures of abstract algebra: monoids, groups, Boolean algebras, etc.; (c) some concepts of mathematical logic: pre-orders, partial orderings, equivalence relations, the Lindenbaum algebra, etc. The second section deals with the properties, structures, and relations of categorical propositional logic. There, the logical connectives of negation, conjunction, disjunction, and implication, as well as the Boolean connectives of complement, intersection, and union, are interpreted in categorical language. Finally, these notions are used to state a categorical Boolean algebra, by means of which a categorical Boolean propositional semantics is constructed.
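For readers new to the topic, the standard categorical reading of the connectives that such a semantics builds on can be summarized as follows. This is a sketch in conventional notation, not the dissertation's own definitions, and it assumes a Cartesian closed category with finite coproducts and an initial object 0:

```latex
% Connectives as universal constructions (conventional notation; assumes a
% Cartesian closed category with finite coproducts and initial object 0).
\begin{align*}
  A \wedge B &\cong A \times B && \text{product} \\
  A \vee B   &\cong A + B      && \text{coproduct} \\
  A \to B    &\cong B^{A}      && \text{exponential} \\
  \neg A     &\cong 0^{A}      && \text{exponential into the initial object}
\end{align*}
```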
|
13 |
Diagnostic distribué de systèmes respectant la confidentialité / Distributed diagnosis of systems respecting privacy. Armant, Vincent, 27 September 2012.
In this thesis, we focus on diagnosing inherently distributed systems, such as peer-to-peer systems, where each peer has access to only a sub-part of the description of an overall system. In addition, due to an overly restrictive access-control policy, it may be that no peer (and no supervisor) is able to explain the behaviour of the overall system. The challenge of distributed diagnosis is to explain the behaviour of a distributed system through a set of peers, each having a limited local view, just as a single diagnosis engine having a global view of the overall system would. First, we show that any new system logically equivalent to the initially observed peer-to-peer setting guarantees that any local diagnosis of a peer can be extended to a global diagnosis (in this case the new system ensures the correctness of distributed diagnosis). Moreover, we prove that if the new system is structured (i.e. it contains a spanning tree in which all the peers containing the same variable form a connected graph), then it ensures that any global diagnosis can be found through a set of local diagnoses (in this case the new system ensures the completeness of distributed diagnosis). For a succinct representation, and in order to comply with the privacy policy on each peer's vocabulary, we present a new algorithm, Token Elimination (TE), which decomposes the original peer system into a structured one. We show experimentally that TE produces better-quality decompositions (i.e. smaller treewidths) than the methods previously proposed in a distributed context. From the structured system built by TE, we transform each local description into a globally consistent DNF. We demonstrate that the resulting system is correct and complete for distributed diagnosis. Finally, we present an algorithm that can efficiently check that any local diagnosis is part of a global minimal diagnosis, turning the structured system of DNFs into a compiled system for distributed diagnosis.
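The "structured" condition above is the familiar running-intersection property. Here is a small sketch of how one might check it; this is our illustration, and the names and data layout are assumptions, not the thesis's code:

```python
# Sketch (assumed encoding, not the thesis's code) of the "structured"
# condition: given a spanning tree over peers and the variables each peer
# mentions, the peers containing any given variable must form a connected
# subtree of the spanning tree.
from collections import defaultdict, deque

def is_structured(tree_edges, peer_vars):
    """tree_edges: list of (peer, peer) pairs forming a spanning tree;
    peer_vars: dict peer -> set of variable names it mentions."""
    adj = defaultdict(set)
    for u, v in tree_edges:
        adj[u].add(v)
        adj[v].add(u)
    for x in set().union(*peer_vars.values()):
        holders = {p for p, vs in peer_vars.items() if x in vs}
        # breadth-first search restricted to peers that mention x
        start = next(iter(holders))
        seen, frontier = {start}, deque([start])
        while frontier:
            p = frontier.popleft()
            for q in adj[p]:
                if q in holders and q not in seen:
                    seen.add(q)
                    frontier.append(q)
        if seen != holders:
            return False  # variable x is spread over a disconnected peer set
    return True

# Example: "y" is shared only by adjacent peers, "z" likewise -> structured.
print(is_structured([("p1", "p2"), ("p2", "p3")],
                    {"p1": {"x", "y"}, "p2": {"y", "z"}, "p3": {"z"}}))
```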
|
14 |
[en] LOGIC PROOF COMPACTION / [pt] COMPACTAÇÃO DE PROVAS LÓGICAS. VASTON GONCALVES DA COSTA, 01 June 2007.
It is well known that the size of classical propositional proofs can be huge: proof-theoretical studies have discovered exponential gaps between normal (or cut-free) proofs and their respective non-normal proofs. Automatic theorem proving, on the other hand, is usually based on the construction of normal, cut-free, or atomic-cuts-only proofs, since such procedures involve fewer alternative choices. Proofs of some familiar tautologies are known to be large when carried out without the cut rule and short when the cut rule is used. The aim of this work is to present procedures that reduce the size of propositional proofs. We present two methods. The first, called the vertical method, makes use of extension axioms; in some cases a considerable reduction in the size of the proof is possible, and we present a procedure that generates such extension axioms. The second, called the horizontal method, adds maximal formulas by means of unification via substitution of propositional variables; we also present a method that generates this unification during the proof-construction process. The first method is applied to Natural Deduction, while the second is applied to Natural Deduction and the Sequent Calculus. The proofs produced correspond, in a certain way, to non-normal proofs (proofs with the cut rule).
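To illustrate the vertical method's key ingredient, here is a generic Tseitin-style extension axiom, given in standard notation as an illustration rather than as the thesis's own construction: a fresh variable p abbreviates a repeated subformula, so each later occurrence costs a single literal instead of a whole formula.

```latex
% Tseitin-style extension axiom (illustration): the fresh variable p
% abbreviates a repeated subformula, here A \wedge B.
p \leftrightarrow (A \wedge B), \quad p \text{ fresh};
\qquad \text{prove } \varphi(p) \text{ in place of } \varphi(A \wedge B).
```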
|
15 |
Data Editing and Logic: The covering set method from the perspective of logic. Boskovitz, Agnes (abvi@webone.com.au), January 2008.
Errors in collections of data can cause significant problems when those data are used, so the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning, here called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem: the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic, and I demonstrate that it has strong parallels with well-known parts of propositional logic. The first aspect of the covering set method that I analyse is the edit generation function, which is the main function used in the method. I demonstrate that the edit generation function can be formalised as a logical deduction function in propositional logic. I also demonstrate that the best-known edit generation function, written here as FH (standing for Fellegi-Holt), is essentially the same as propositional resolution deduction. Since there are many automated implementations of propositional resolution, the equivalence of FH with propositional resolution gives some hope that the covering set method might be implementable with automated logic tools. However, before any implementation, the other main aspect of the covering set method must also be formalised in terms of logic. This other aspect, called covering set correctibility, is the property that must be obeyed by the edit generation function if the covering set method is to successfully solve the error localisation problem. In this thesis I demonstrate that covering set correctibility is a strengthening of the well-known logical properties of soundness and refutation completeness. What is more, the proofs of the covering set correctibility of FH and of the soundness/completeness of resolution deduction have strong parallels: while the proof of soundness/completeness depends on the reduction property for counter-examples, the proof of covering set correctibility depends on the related lifting property. I also use the lifting property to prove the covering set correctibility of the function defined by the Field Code Forest Algorithm; in so doing, I prove that the Field Code Forest Algorithm, whose correctness had been questioned, is indeed correct. The results about edit generation functions and covering set correctibility apply both to categorical edits (edits about discrete data) and to arithmetic edits (edits expressible as linear inequalities). Thus this thesis gives the beginnings of a theoretical logical framework for error localisation, which might give new insights into the problem; these insights may in turn help in developing new error-localisation tools based on automated logic tools. What is more, the strong parallels between the covering set method and aspects of logic are of aesthetic appeal.
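Since the abstract identifies the Fellegi-Holt edit generation function FH with propositional resolution, a toy sketch of resolution-style edit generation may help fix ideas. The encoding below is assumed, not the thesis's formalisation: edits are clauses (frozensets of literals, where a literal is a pair of a variable name and a sign), and implied edits are resolvents.

```python
# Toy sketch (assumed encoding, not the thesis's): edit generation as
# propositional resolution. A clause is a frozenset of literals; the literal
# ("x", False) stands for "not x".
from itertools import combinations

def resolve(c1, c2):
    """Yield every resolvent of two clauses, one per complementary pair."""
    for (var, sign) in c1:
        if (var, not sign) in c2:
            yield frozenset((c1 - {(var, sign)}) | (c2 - {(var, not sign)}))

def resolution_closure(clauses):
    """Saturate a clause set under resolution (FH-style edit generation)."""
    known = set(clauses)
    changed = True
    while changed:
        changed = False
        for c1, c2 in combinations(list(known), 2):
            for r in resolve(c1, c2):
                if r not in known:
                    known.add(r)
                    changed = True
    return known

# Two edits "not(a) or b" and "not(b) or c" imply the edit "not(a) or c".
edits = [frozenset({("a", False), ("b", True)}),
         frozenset({("b", False), ("c", True)})]
for clause in resolution_closure(edits):
    print(sorted(clause))
```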
|
16 |
Επαγωγικός λογικός προγραμματισμός: μια διδακτική προσέγγιση / Inductive Logic Programming: A Didactic Approach. Καραμουτζογιάννη, Ζωή, 31 May 2012.
Inductive Logic Programming (ILP) is the research area of Artificial Intelligence that operates at the intersection of the cognitive areas of Machine Learning and Logic Programming. The term "inductive" expresses the idea of reasoning from the particular to the general. Through inductive machine learning, ILP pursues its goal of building tools and developing techniques for deriving hypotheses from observations (examples) and for synthesizing and acquiring new knowledge from empirical observations together with background knowledge. Unlike most other approaches to inductive learning, ILP is concerned with the properties of rule-based inference, with the convergence of algorithms, and with the computational complexity of its procedures. ILP develops techniques and tools for relational data analysis: it applies directly to multi-relational data in order to discover patterns. The patterns discovered by ILP systems are derived from known background theory together with positive and negative examples, and are expressed as logic programs. ILP has been used extensively on problems in molecular biology, biochemistry, and chemistry, and it differs from other forms of Machine Learning both in its use of an expressive representation language and in its ability to exploit background knowledge. Several implementations of ILP have been developed, among them FOIL, MOBAL, GOLEM, and LINUS, as well as Cigol, a programming language based on the theory of ILP. The most widely used and most recent system is Progol, which is based on a Prolog interpreter coupled with an Inverse Entailment algorithm: it takes as input examples and background knowledge, stated in a grammar compatible with that of the programming language Prolog, and constructs new clauses by generalizing the given examples. The theory of ILP guarantees that Progol carries out an admissible search of the space of generalizations, finding a minimal set of clauses from which all the examples can be derived. The most important applications are in pharmacology, such as predictive toxicology, the prediction of rheumatic disease, and the design of drugs for Alzheimer's disease; applications can also be found in programming, linguistics, and games like chess. This work develops in detail the theory and rules of ILP, the kinds of problems it solves, the methods it follows, and the way its applications are built. It also gives examples suitable for conveying this material to an audience with a basic knowledge of Logic and Logic Programming.
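A toy illustration of the core ILP covering test described above (not from the thesis; the facts and the encoding are invented): a hypothesis clause should cover all positive examples and no negative ones, relative to the background knowledge. The classic daughter/parent example is shown, with the clause daughter(X,Y) :- parent(Y,X), female(X) hard-coded as a Python check.

```python
# Toy illustration (invented facts and encoding, not from the thesis) of the
# ILP covering test. Facts are (predicate, args) tuples; the hypothesis
# "daughter(X,Y) :- parent(Y,X), female(X)" is encoded as a Python check.
background = {("parent", ("ann", "mary")), ("parent", ("tom", "eve")),
              ("female", ("mary",)), ("female", ("eve",)), ("female", ("ann",))}

def hypothesis_covers(example):
    """Does the clause body succeed for this daughter(X, Y) example?"""
    _, (x, y) = example
    return ("parent", (y, x)) in background and ("female", (x,)) in background

positives = [("daughter", ("mary", "ann")), ("daughter", ("eve", "tom"))]
negatives = [("daughter", ("tom", "ann"))]

assert all(hypothesis_covers(e) for e in positives)      # complete
assert not any(hypothesis_covers(e) for e in negatives)  # consistent
print("hypothesis covers all positives and no negatives")
```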
|
17 |
The theory and pedagogy of semantic inconsistency in critical reasoning. Dixon, Scott Walton, 05 1900.
One aspect of critical reasoning is the analysis and appraisal of claims and arguments. A typical problem, when analysing and appraising arguments, is inconsistent statements. Although several inconsistencies may have deleterious effects on rationality and action, not all of them do. As educators, we also have an obligation to teach this evaluation in a way that does justice to our normal reasoning practices and judgements of inconsistency. Thus, there is a need to distinguish the acceptable inconsistencies from those that are not, and to impart that information to students.
We might ask: What is the best concept of inconsistency for critical reasoning and pedagogy? While the answer might appear obvious to some, the history of philosophy shows that there are many concepts of "inconsistency", the most common of which comes from classical logic and its reliance on opposing truth-values. The current exemplar of this is the standard truth-functional account from propositional logic. This conception is first shown to be problematic practically, conceptually and pedagogically. Especially challenging from the classical perspective are the principles of ex contradictione quodlibet and ex falso quodlibet: these principles may poison the well against any notion of inconsistency, which is not something that should be done unreflectively. Ultimately, the classical account of inconsistency is rejected.
In its place, a semantic conception of inconsistency is argued for and demonstrated to handle natural reasoning cases effectively. This novel conception utilises the conceptual antonym theory to explain semantic contrast and gradation, even in the absence of non-canonical antonym pairs. The semantic conception of inconsistency also fits with an interrogative argument model that exploits inconsistency to display semantic contrast in reasons and conclusions. A method for determining substantive inconsistencies follows from this argument model in a straightforward manner. The conceptual fit is then incorporated into the pedagogy of critical reasoning, resulting in a natural approach to reasoning which students can apply to practical matters of everyday life, which include inconsistency. Thus, the best conception of inconsistency for critical reasoning and its pedagogy is the semantic, not the classical. / Philosophy, Practical and Systematic Theology / D. Phil.
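For reference, ex contradictione quodlibet is the classical principle that a contradiction entails any statement whatever; a standard derivation, given here for orientation rather than taken from the thesis, runs:

```latex
% Ex contradictione quodlibet: from A and \neg A, any B follows.
\begin{array}{lll}
1. & A        & \text{premise} \\
2. & \neg A   & \text{premise} \\
3. & A \vee B & \text{1, disjunction introduction} \\
4. & B        & \text{2, 3, disjunctive syllogism}
\end{array}
```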
|
18 |
Information retrieval modeling by logic and lattice: application to conceptual information retrieval / Modélisation de la recherche d'information par la logique et les treillis : application à la recherche d'information conceptuelle. Abdulahhad, Karam, 05 May 2014.
This thesis is situated in the context of logic-based Information Retrieval (IR) models. The work presented here is mainly motivated by the inadequate term-independence assumption, which is well accepted in IR although terms are normally related, and also by the inferential nature of the relevance-judgment process. Since formal logics are well adapted for knowledge representation, and hence for representing relations between terms, and since formal logics are also powerful systems for inference, logic-based IR forms a promising line of work for building effective IR systems. However, a study of current logic-based IR models shows that these models generally have some shortcomings. First, logic-based IR models normally propose representations of documents and queries that are complex and hard to obtain automatically. Second, the retrieval decision d->q, which represents the matching between a document d and a query q, can be difficult to verify or check. Finally, the uncertainty measure U(d->q) is either ad hoc or hard to implement. In this thesis, we propose a new logic-based IR model to overcome most of the previous limits. We use Propositional Logic (PL) as the underlying logical framework. We represent documents and queries as logical sentences written in Disjunctive Normal Form. We also argue that the retrieval decision d->q can be replaced by the validity of material implication. We then exploit the potential relation between PL and lattice theory to check whether d->q is valid or not. We first propose an intermediate representation of logical sentences, where they become nodes in a lattice having a partial-order relation that is equivalent to the validity of material implication. Accordingly, we transform the checking of the validity of d->q, which is a computationally intensive task, into a series of simple set-inclusion checks. In order to measure the uncertainty of the retrieval decision U(d->q), we use the degree-of-inclusion function Z, which is capable of quantifying partial-order relations defined on lattices. Finally, our model is capable of working efficiently on any logical sentence without restriction, and is applicable to large-scale data. Our model also has some theoretical conclusions, including: formalizing and showing the adequacy of van Rijsbergen's assumption about estimating the logical uncertainty U(d->q) through the conditional probability P(q|d); redefining the two notions of Exhaustivity and Specificity; and the possibility of reproducing most classical IR models as instances of our model. We build three operational instances of our model: one instance to study the importance of Exhaustivity and Specificity, and two others to show the inadequacy of the term-independence assumption. Our experimental results show a worthwhile gain in performance when integrating Exhaustivity and Specificity into one concrete IR model. However, the results of using semantic relations between terms were not sufficient to draw clear conclusions. On the contrary, experiments on exploiting structural relations between terms were promising. The work presented in this thesis can be developed either by doing more experiments, especially on the use of relations, or by more in-depth theoretical study, especially of the properties of the Z function.
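A small sketch of the d->q reading under one assumed encoding (the thesis's exact definitions, including that of Z, are not reproduced here): a DNF sentence is a set of clauses, each clause a set of positive terms; d->q is then valid when every clause of d includes some clause of q, and a soft set-inclusion degree can play the role of the Z function.

```python
# Sketch under an assumed encoding (not the thesis's exact definitions):
# a DNF sentence is a set of clauses, each clause a frozenset of positive
# terms. d -> q is valid iff every clause of d contains some clause of q.
def implies(d, q):
    """Validity of material implication d -> q for positive-term DNFs."""
    return all(any(qc <= dc for qc in q) for dc in d)

def z_degree(d, q):
    """A hypothetical inclusion degree: average best clause-wise overlap."""
    return sum(max(len(qc & dc) / len(qc) for qc in q) for dc in d) / len(d)

doc = {frozenset({"information", "retrieval", "logic"})}
query_ok = {frozenset({"logic"}), frozenset({"lattice"})}
query_ko = {frozenset({"lattice", "theory"})}
print(implies(doc, query_ok), round(z_degree(doc, query_ok), 2))  # True 1.0
print(implies(doc, query_ko), round(z_degree(doc, query_ko), 2))  # False 0.0
```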
|
19 |
Changement de croyances et logiques modales / Belief change and modal logics. Caridroit, Thomas, 13 December 2016.
Belief change is about finding appropriate ways to evolve an agent's beliefs when the agent is confronted with new pieces of information. In most works on belief revision, the set of beliefs of an agent is composed of beliefs about the environment (the world) and is represented by a set of formulas of classical logic. In many applications, however, an agent is not alone in the environment but shares it with other agents, which also have beliefs. Beliefs about the beliefs of other agents are thus an important piece of information for the agent in order to make the best decisions and perform the best actions; their use is, for example, crucial in game theory. In this thesis, we first study the operators of propositional contraction corresponding to the revision operators proposed by Katsuno and Mendelzon. Then, we study a connection between epistemic logics and belief change theory, close to the AGM approach. We are interested in the use of operators that modify agent beliefs in standard KD45n models. This task is more complicated than in the standard AGM framework because, in a multi-agent context, new information can take different forms: for example, each new piece of information may be observed/transmitted/available to all agents or only to some of them.
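For concreteness, one well-known operator in the Katsuno-Mendelzon family is Dalal's revision, which keeps the models of the new information at minimal Hamming distance from the old beliefs. The following is a minimal sketch of that operator as an illustration of the KM setting, not of the thesis's own multi-agent constructions.

```python
# Illustration of one Katsuno-Mendelzon-style revision operator (Dalal's),
# not the thesis's own operators: models are dicts over atoms, and revision
# keeps the models of the new formula at minimal Hamming distance from the
# models of the old belief base.
from itertools import product

def models(atoms, formula):
    """Enumerate truth assignments (as dicts) satisfying a predicate."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def dalal_revise(atoms, old, new):
    old_m, new_m = models(atoms, old), models(atoms, new)
    def dist(m):
        return min(sum(m[a] != o[a] for a in atoms) for o in old_m)
    best = min(map(dist, new_m))
    return [m for m in new_m if dist(m) == best]

# Old beliefs: p and q. New information: not p. Revision keeps q true.
atoms = ["p", "q"]
print(dalal_revise(atoms, lambda m: m["p"] and m["q"], lambda m: not m["p"]))
# -> [{'p': False, 'q': True}]
```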
|
20 |
Proof systems for propositional modal logic. Van der Vyver, Thelma, 11 1900.
In classical propositional logic (CPL), logical reasoning is formalised as logical entailment and can be computed by means of tableau and resolution proof procedures. Unfortunately, CPL is not expressive enough, and using first-order logic (FOL) does not solve the problem either, since proof procedures for FOL are not decidable. Modal propositional logics (MPL), on the other hand, are both decidable and more expressive than CPL. It therefore seems reasonable to apply tableau and resolution proof systems to MPL in order to compute logical entailment in MPL. Although some of the principles of CPL are present in MPL, there are complexities in MPL that are not present in CPL. Tableau and resolution proof systems which address these issues and others will be surveyed here. In particular, the work of Abadi & Manna (1986), Chan (1987), del Cerro & Herzig (1988), Fitting (1983, 1990) and Gore (1995) will be reviewed. / Computing / M. Sc. (Computer Science)
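As background, the Kripke semantics that tableau and resolution procedures for MPL decide can be stated in a few lines. Here is a minimal sketch (ours, not the dissertation's): box A holds at a world when A holds at every accessible world, and diamond A when A holds at some accessible world.

```python
# Minimal sketch (not from the dissertation) of Kripke semantics for modal
# propositional logic: evaluate box/diamond formulas over an explicit frame.
def holds(world, formula, relation, valuation):
    """formula: nested tuples, e.g. ("box", ("atom", "p")),
    ("and", f, g), ("not", f), or ("diamond", f)."""
    op = formula[0]
    if op == "atom":
        return formula[1] in valuation[world]
    if op == "not":
        return not holds(world, formula[1], relation, valuation)
    if op == "and":
        return (holds(world, formula[1], relation, valuation)
                and holds(world, formula[2], relation, valuation))
    if op == "box":      # true iff true at every accessible world
        return all(holds(v, formula[1], relation, valuation)
                   for v in relation.get(world, []))
    if op == "diamond":  # true iff true at some accessible world
        return any(holds(v, formula[1], relation, valuation)
                   for v in relation.get(world, []))
    raise ValueError(op)

relation = {"w0": ["w1", "w2"]}  # w0 sees w1 and w2
valuation = {"w0": set(), "w1": {"p"}, "w2": {"p"}}
print(holds("w0", ("box", ("atom", "p")), relation, valuation))  # True
```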
|