About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and would like to be added, details can be found on the NDLTD website.
1

Analysis and specialisation of imperative programs : an approach using CLP

Peralta Estrada, Julio C. January 2000 (has links)
No description available.
2

An investigation of persistence in temporal reasoning

Evans, David Hugh January 1997 (has links)
No description available.
3

ATT: Execution models for logic programs

Chen, Pu January 1995 (has links)
No description available.
4

Stationary generated models of generalized logic programs

Herre, Heinrich, Hummel, Axel January 2010 (has links)
The interest in extensions of the logic programming paradigm beyond the class of normal logic programs is motivated by the need for an adequate representation and processing of knowledge. One of the most difficult problems in this area is to find an adequate declarative semantics for logic programs. In the present paper a general preference criterion is proposed that selects the 'intended' partial models of generalized logic programs; it is a conservative extension of the stationary semantics for normal logic programs of [Prz91]. The presented preference criterion defines a partial model of a generalized logic program as intended if it is generated by a stationary chain. It turns out that the stationary generated models coincide with the stationary models on the class of normal logic programs. The general well-founded semantics of such a program is defined as the set-theoretical intersection of its stationary generated models. For normal logic programs the general well-founded semantics equals the well-founded semantics.
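To make the intersection construction concrete, here is a minimal textbook illustration (ours, not taken from the paper) of a normal program whose stationary (partial stable) models intersect to give the well-founded model:

```latex
% Minimal textbook illustration (not from the paper).
% Normal program P with two mutually negative rules:
\[
P \;=\; \{\; p \leftarrow \mathit{not}\ q, \qquad q \leftarrow \mathit{not}\ p \;\}
\]
% Its stationary (partial stable) models, written as pairs
% (atoms true ; atoms false):
\[
M_1 = (\{p\} ; \{q\}), \qquad
M_2 = (\{q\} ; \{p\}), \qquad
M_3 = (\emptyset ; \emptyset)
\]
% Their set-theoretical intersection leaves both p and q undefined,
% which is exactly the well-founded model of P.
```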
5

A workbench for developing logic programs by stepwise enhancement

Lakhotia, Arun January 1990 (has links)
No description available.
6

Bayesian Logic Programs for plan recognition and machine reading

Vijaya Raghavan, Sindhu 22 February 2013 (has links)
Several real-world tasks involve data that is uncertain and relational in nature. Traditional approaches like first-order logic and probabilistic models deal with either structured data or uncertainty, but not both. To address these limitations, statistical relational learning (SRL), a new area of machine learning that integrates first-order logic and probabilistic graphical models, has emerged in the recent past. The advantage of SRL models is that they can handle both uncertainty and structured/relational data. As a result, they are widely used in domains like social network analysis, biological data analysis, and natural language processing. Bayesian Logic Programs (BLPs), which integrate first-order logic and Bayesian networks, are a powerful SRL formalism developed in the recent past. In this dissertation, we develop approaches using BLPs to solve two real-world tasks: plan recognition and machine reading.

Plan recognition is the task of predicting an agent's top-level plans based on its observed actions. It is an abductive reasoning task that involves inferring cause from effect. In the first part of the dissertation, we develop an approach to abductive plan recognition using BLPs. Since BLPs employ logical deduction to construct the networks, they cannot be used effectively for abductive plan recognition as is. Therefore, we extend BLPs to use logical abduction to construct Bayesian networks and call the resulting model Bayesian Abductive Logic Programs (BALPs).

In the second part of the dissertation, we apply BLPs to the task of machine reading, which involves automatic extraction of knowledge from natural language text. Most information extraction (IE) systems identify facts that are explicitly stated in text. However, much of the information conveyed in text must be inferred from what is explicitly stated, since easily inferable facts are rarely mentioned. Human readers naturally use common-sense knowledge and "read between the lines" to infer such implicit information from the explicitly stated facts. Since IE systems do not have access to common-sense knowledge, they cannot perform deeper reasoning to infer implicitly stated facts. Here, we first develop an approach using BLPs to infer implicitly stated facts from natural language text. It involves learning uncertain common-sense knowledge in the form of probabilistic first-order rules by mining a large corpus of automatically extracted facts using an existing rule learner. These rules are then used to derive additional facts from extracted information using BLP inference. We then develop an online rule learner that handles the concise, incomplete nature of natural-language text and learns first-order rules from noisy IE extractions. Finally, we develop a novel approach to calculate the weights of the rules using a curated lexical ontology like WordNet.

Both tasks described above involve inference and learning from partially observed or incomplete data. In plan recognition, the underlying cause, the top-level plan that resulted in the observed actions, is not known or observed. Further, only a subset of the executed actions can be observed by the plan recognition system, resulting in partially observed data. Similarly, in machine reading, since some information is implicitly stated, it is rarely observed in the data. In this dissertation, we demonstrate the efficacy of BLPs for inference and learning from incomplete data. Experimental comparisons on various benchmark data sets for both tasks demonstrate the superior performance of BLPs over state-of-the-art methods.
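As a rough illustration of the abductive pattern described above (inferring a hidden top-level plan from observed actions), here is a toy Python sketch. All plan names and probabilities are invented for illustration; a real BLP/BALP system builds a Bayesian network via logical abduction rather than the naive-Bayes shortcut used here.

```python
# Toy sketch of the abductive-reasoning pattern behind plan recognition:
# infer a hidden top-level plan (cause) from observed actions (effects).
# All names, rules, and probabilities are illustrative assumptions,
# not taken from the BLP/BALP systems described in the dissertation.

# P(action | plan): which observable actions each candidate plan explains.
LIKELIHOOD = {
    "cook_dinner": {"go_to_kitchen": 0.9, "turn_on_stove": 0.8, "open_fridge": 0.7},
    "clean_house": {"go_to_kitchen": 0.4, "get_vacuum": 0.9, "open_fridge": 0.1},
}
PRIOR = {"cook_dinner": 0.5, "clean_house": 0.5}  # P(plan)

def recognize(observed_actions):
    """Return a posterior over plans, assuming actions are conditionally
    independent given the plan (a naive-Bayes simplification of the
    Bayesian-network inference a real BLP would perform)."""
    scores = {}
    for plan, prior in PRIOR.items():
        score = prior
        for action in observed_actions:
            # Actions a plan does not explain get a small default likelihood.
            score *= LIKELIHOOD[plan].get(action, 0.05)
        scores[plan] = score
    total = sum(scores.values())
    return {plan: s / total for plan, s in scores.items()}

print(recognize(["go_to_kitchen", "open_fridge"]))
# cook_dinner dominates: the plan that best explains the evidence wins.
```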
7

Handling Inconsistency in Knowledge Bases

Jayakumar, Badrinath 10 May 2017 (has links)
Real-world automated reasoning systems based on classical logic face logically inconsistent information and must cope with it. Developing such systems is onerous because classical logic is explosive. Recently, progress has been made towards semantics that deal with logical inconsistency. However, such semantics have never been analyzed from the perspective of an inconsistency-tolerant relational model. In our research work, we use an inconsistency- and incompleteness-tolerant relational model called the "Paraconsistent Relational Model." The paraconsistent relational model is an extension of the ordinary relational model that can store not only positive information but also negative information. Therefore, a piece of information in the paraconsistent relational model has one of four truth values: true, false, both, and unknown. However, the paraconsistent relational model cannot represent disjunctive information (disjunctive tuples). We therefore introduce an extended paraconsistent relational model called the disjunctive paraconsistent relational model. By using both models, we handle inconsistency, in a manner similar to the notion of quasi-classical logic or four-valued logic, in deductive databases (logic programs with no function symbols). In addition to handling inconsistencies in extended databases, we also apply inconsistency-tolerant reasoning techniques to semantic web knowledge bases. Specifically, we handle inconsistency associated with closed predicates in the semantic web, again using the paraconsistent approach. We further extend the same idea to description logic programs (a combination of the semantic web and logic programs) and introduce the dl-relation to represent inconsistency associated with description logic programs.
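To make the four-valued idea concrete, here is a minimal Python sketch of a paraconsistent relation as a pair of positive and negative tuple sets; the class and names are illustrative assumptions, not the paper's actual formalism.

```python
# Minimal sketch of the idea behind a paraconsistent relation: store
# positive and negative information separately, so each fact gets one of
# four truth values. Names are illustrative, not the paper's API.

from typing import Set, Tuple

class ParaconsistentRelation:
    def __init__(self, positive: Set[Tuple], negative: Set[Tuple]):
        self.positive = positive  # tuples believed to hold
        self.negative = negative  # tuples believed not to hold

    def truth(self, t: Tuple) -> str:
        pos, neg = t in self.positive, t in self.negative
        if pos and neg:
            return "both"      # inconsistent information about t
        if pos:
            return "true"
        if neg:
            return "false"
        return "unknown"       # no information about t

flies = ParaconsistentRelation(
    positive={("tweety",), ("polly",)},
    negative={("polly",), ("rocky",)},   # polly is reported both ways
)
for bird in ["tweety", "polly", "rocky", "sam"]:
    print(bird, "->", flies.truth((bird,)))
# tweety -> true, polly -> both, rocky -> false, sam -> unknown
```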
8

A paraconsistent semantics for generalized logic programs

Herre, Heinrich, Hummel, Axel January 2010 (has links)
We propose a paraconsistent declarative semantics for possibly inconsistent generalized logic programs, which allows for arbitrary formulas in the body and in the head of a rule (i.e., it depends neither on the presence of any specific connective, such as negation(-as-failure), nor on any specific syntax of rules). For consistent generalized logic programs this semantics coincides with the stable generated models introduced in [HW97], and for normal logic programs it yields the stable models in the sense of [GL88].
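For readers unfamiliar with the terminology, the following contrast (our illustration, not an example from the paper) shows what "generalized" permits beyond a normal rule:

```latex
% Our illustration, not an example from the paper.
% A normal rule restricts the head to an atom and the body to literals:
\[
p \leftarrow q \wedge \mathit{not}\ r
\]
% A generalized rule may carry arbitrary formulas on both sides:
\[
p \vee (q \wedge s) \;\leftarrow\; (r \rightarrow t)
\]
```

On normal programs such as the first rule, the proposed semantics reduces to the stable models of [GL88].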
9

Towards putting abstract interpretation of Prolog into practice : design, implementation and evaluation of a tool to verify and optimise Prolog programs

Gobert, François 11 December 2007 (has links)
Logic programming is appealing since it allows the programmer to concentrate on the meaning of the problem to be solved. Unfortunately, for efficiency reasons, the declarative and operational natures of Prolog do not coincide: Prolog uses an incomplete depth-first search rule, unifications and negations may be unsound, and there are extralogical features like the cut or dynamic predicates. Methodologies have been proposed to construct operationally correct and efficient Prolog code, and researchers have designed methods to automate the verification of operational properties on which optimisation of logic programs can be based. A few tools have been implemented, but a unified framework is still lacking.

The goal and topic of this thesis is the design, implementation and evaluation of an abstract interpretation framework for Prolog that integrates state-of-the-art techniques. The analyser is based on an original proposal that defines the notion of abstract sequence, which allows one to verify many desirable operational properties of a logic procedure. The properties include types, modes, sharing of terms, termination, and linear relations between the size of input/output terms and the number of solutions to a call. A single global analysis is performed, and abstract sequences are derived at each program point.

In this thesis, we implement and evaluate the original framework and, more importantly, overcome its limitations to make it accurate and usable in practice: the improved framework accepts any Prolog code with modules, new abstract domains and operations are added, and the language of specifications is more expressive. We also design and implement an optimiser that generates specialised code. The optimiser uses the abstract information to safely apply source-to-source transformations. Code transformations include clause and literal reordering, introduction of cuts, and removal of redundant literals. The optimiser follows a precise strategy to choose the most rewarding transformations in the best order.
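As a flavour of the machinery such an analyser contains, here is a deliberately simplified Python sketch of joining abstract argument modes when merging analysis results from different clauses. The three-point domain below is a textbook simplification, not the thesis's actual domain of abstract sequences.

```python
# Tiny sketch of one ingredient of a mode analysis: an abstract domain
# for Prolog argument modes with a join (least upper bound), used when
# merging the results of analysing different clauses of a predicate.
# The flat lattice bot < {ground, var} < any is a textbook simplification.

def join(a: str, b: str) -> str:
    """Least upper bound of two abstract modes."""
    if a == b:
        return a
    if a == "bot":
        return b
    if b == "bot":
        return a
    return "any"  # incomparable modes (e.g. ground vs var) collapse to top

# Merging the exit modes of two clauses of the same predicate:
clause1_exit = ["ground", "var"]     # one clause leaves arg 2 unbound
clause2_exit = ["ground", "ground"]  # another grounds it
merged = [join(x, y) for x, y in zip(clause1_exit, clause2_exit)]
print(merged)  # ['ground', 'any'] -- the analyser keeps the safe summary
```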
10

Neural-Symbolic Integration

Bader, Sebastian 15 December 2009 (has links) (PDF)
In this thesis, we discuss different techniques to bridge the gap between two contrasting approaches to artificial intelligence: the symbolic and the connectionist paradigm. Both approaches have quite different advantages and disadvantages, and research in the area of neural-symbolic integration aims at bridging the gap between them. Starting from a human-readable logic program, we construct connectionist systems that behave equivalently. Afterwards, those systems can be trained, and the refined knowledge can later be extracted.
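The rule-to-network translation sketched in the abstract can be illustrated with a small Python example in the spirit of KBANN/CILP-style constructions; the weight and threshold choices below are one standard option, not necessarily the thesis's exact scheme.

```python
# Sketch of the core translation idea in neural-symbolic integration:
# a propositional rule becomes a threshold unit that fires exactly when
# the rule body holds. Weights and threshold follow one standard choice.

def rule_unit(pos_body, neg_body, threshold_offset=0.5):
    """Build a unit for the rule: head :- pos_body, not neg_body.
    Inputs are 0/1 truth values; the unit outputs 1 iff the body holds."""
    def unit(assignment):
        # Positive literals contribute +1, negated literals -1.
        activation = sum(assignment[a] for a in pos_body) \
                   - sum(assignment[a] for a in neg_body)
        # Fires when every positive atom is 1 and every negated atom is 0.
        return 1 if activation >= len(pos_body) - threshold_offset else 0
    return unit

# p :- q, not r.
p_unit = rule_unit(pos_body=["q"], neg_body=["r"])
print(p_unit({"q": 1, "r": 0}))  # 1: body satisfied, p derived
print(p_unit({"q": 1, "r": 1}))  # 0: negative literal blocks the rule
print(p_unit({"q": 0, "r": 0}))  # 0: positive literal missing
```

Stacking such units layer by layer yields a network whose stable behaviour mirrors the program's semantics, which is what makes subsequent training and knowledge extraction meaningful.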
