131

Improving Scalability and Efficiency of ILP-based and Graph-based Concept Discovery Systems

Mutlu, Alev 01 July 2013
Concept discovery is the problem of finding definitions of a target relation in terms of other relations given as background knowledge. Inductive Logic Programming (ILP)-based and graph-based approaches are the two main competitors in the concept discovery problem. Although ILP-based systems have long dominated the area, graph-based systems have recently gained popularity as they overcome certain shortcomings of ILP-based systems. While having applications in numerous domains, ILP-based concept discovery systems still suffer from scalability and efficiency issues, which generally arise from the large search spaces such systems build. In this work we propose memoization-based and parallelization-based methods that modify the search space construction step and the evaluation step of ILP-based concept discovery systems to overcome these problems. We propose three memoization-based methods, called Tabular CRIS, Tabular CRIS-wEF, and Selective Tabular CRIS, in which evaluation queries are stored in look-up tables for later reuse. While keeping some core functions in common, each proposed method improves the efficiency and scalability of its predecessor by introducing constraints on which evaluation queries to store in the look-up tables, and for how long. The proposed parallelization method, called pCRIS, parallelizes the search space construction and evaluation steps of ILP-based concept discovery systems in a data-parallel manner and introduces policies to minimize redundant work and waiting time among the workers at synchronization points.

Graph-based approaches were first introduced to the concept discovery domain to handle the so-called local plateau problem. They provide a convenient environment for representing relational data and are able to overcome certain shortcomings of ILP-based concept discovery systems. Graph-based approaches can be classified into structure-based approaches and path-finding approaches. Methods in the first class must employ expensive algorithms such as graph isomorphism tests to find frequently appearing substructures, while methods in the second class need sophisticated indexing mechanisms to find the frequently appearing paths that connect nodes of interest. In this work, we also propose a hybrid method for graph-based concept discovery that requires neither costly substructure matching algorithms nor path indexing mechanisms. The proposed method builds the graph in such a way that similar facts are grouped together, and the paths that eventually turn out to be concept descriptors are built while the graph is constructed.
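To make the memoization idea concrete, here is a minimal Prolog sketch of storing evaluation queries in a look-up table. The predicates and the toy grandparent data are invented for illustration; this is not the actual CRIS implementation.

```prolog
:- dynamic memo_eval/2.   % the look-up table of evaluated queries

% Toy background knowledge and one positive example.
parent(ann, bob).  parent(bob, carl).
pos(grandparent(ann, carl)).

% compute_coverage(+Candidate, -N): a stand-in for an expensive
% evaluation query, counting the positive examples covered by a
% candidate grandparent clause.
compute_coverage(grandparent, N) :-
    findall(X-Z,
            (parent(X, Y), parent(Y, Z), pos(grandparent(X, Z))),
            Covered),
    length(Covered, N).

% eval_query(+Query, -Cov): consult the look-up table first; only on
% a miss run the expensive computation and store the result for reuse.
eval_query(Query, Cov) :-
    memo_eval(Query, Cov), !.              % cache hit
eval_query(Query, Cov) :-
    compute_coverage(Query, Cov),          % cache miss: evaluate
    assertz(memo_eval(Query, Cov)).        % remember for later

% ?- eval_query(grandparent, N).   % first call computes, N = 1
% ?- eval_query(grandparent, N).   % second call answers from the table
```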
132

Proceedings of the 23rd Workshop on (Constraint) Logic Programming 2009

January 2010
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence, and operations research. The 23rd WLP was held in Potsdam on September 15-16, 2009. The topics of the presentations of WLP 2009 were grouped into the major areas Databases, Answer Set Programming, Theory and Practice of Logic Programming, and Constraints and Constraint Handling Rules.
133

Preface

Geske, Ulrich; Wolf, Armin January 2010
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence and operations research. In this decade, previous workshops took place in Dresden (2008), Würzburg (2007), Vienna (2006), Ulm (2005), Potsdam (2004), Dresden (2002), Kiel (2001), and Würzburg (2000). Contributions to workshops deal with all theoretical, experimental, and application aspects of constraint programming (CP) and logic programming (LP), including foundations of constraint/logic programming. Some of the special topics are constraint solving and optimization, extensions of functional logic programming, deductive databases, data mining, nonmonotonic reasoning, interaction of CP/LP with other formalisms like agents, XML, JAVA, program analysis, program transformation, program verification, meta programming, parallelism and concurrency, answer set programming, implementation and software techniques (e.g., types, modularity, design patterns), applications (e.g., in production, environment, education, internet), constraint/logic programming for semantic web systems and applications, reasoning on the semantic web, data modelling for the web, semistructured data, and web query languages.
134

A CLP(FD)-based model checker for CTL

Eriksson, Marcus January 2005
Model checking is a formal verification method where one tries to prove or disprove properties of a formal system. Typical systems one might want to verify are network protocols and digital circuits; typical properties to check for are safety (nothing bad ever happens) and liveness (something good eventually happens).

This thesis describes an implementation of a sound and complete model checker for Computation Tree Logic (CTL) using Constraint Logic Programming over Finite Domains (CLP(FD)). The implementation uses tabled resolution to remember earlier computations, is parameterised by choices of computation strategies, and can, with slight modification, support different constraint domains. Soundness under negation is maintained through a restricted form of constructive negation.

The computation process amounts to a fixpoint search, where a fixpoint is reached when no further extension operation has any effect. As the results show, the choice of strategies does influence the efficiency of the computation, although soundness and completeness are independent of that choice. Strategies include how to choose the extension operation for the next step and whether to perform global or local rule instantiations, resulting in bottom-up or top-down computations respectively.
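As a toy analogue of the fixpoint computation, the sketch below checks the CTL property EF p over a small explicit transition system using tabled resolution in SWI-Prolog. The thesis implementation works symbolically over CLP(FD) constraints rather than enumerated states, and the predicate names here (trans/2, prop_p/1, ef_p/1) are invented for the example.

```prolog
:- table ef_p/1.   % tabling remembers earlier subgoals, so the least
                   % fixpoint is reached even in the presence of cycles

% A four-state transition system with a loop between states 0 and 1.
trans(0, 1).  trans(1, 0).  trans(1, 2).  trans(2, 3).

% The atomic proposition p holds only in state 3.
prop_p(3).

% EF p as a least fixpoint: p holds now, or EF p holds in a successor.
ef_p(S) :- prop_p(S).
ef_p(S) :- trans(S, T), ef_p(T).

% ?- ef_p(0).   % succeeds via 0 -> 1 -> 2 -> 3, despite the 0/1 cycle
```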
135

An inductive logic programming approach to statistical relational learning

Kersting, Kristian. January 2006
Doctoral dissertation--University of Freiburg (Breisgau), 2006.
136

Nonclausal logic programming

Malachi, Yonathan. January 1900
Thesis (Ph.D.)--Stanford University, 1986. Cover title. "March 1986." Includes bibliographical references.
137

Representing actions in logic-based languages

Yang, Fangkai 27 June 2014
Knowledge about actions is an important part of the commonsense knowledge studied in Artificial Intelligence. For decades, researchers have been developing methods for describing how actions affect states of the world and for automating reasoning about actions. In recent years, significant progress has been made. In particular, the frame problem has been solved using nonmonotonic knowledge representation formalisms, such as logic programming under the answer set semantics. New theories of causality have allowed us to express causal dependencies between fluents, which has proved essential for solving the ramification problem. It has been shown that reasoning about actions described by logic programs and causal theories can be automated using answer set programming. Action description languages are high-level languages that allow us to represent knowledge about actions more concisely than logic programs do. Many action description languages have been described in the literature, including B, C, and C+. Reasoning about dynamic domains described in the languages C and C+ can be performed automatically using the Causal Calculator (CCalc), which employs SAT solvers for search, and the systems coala and cplus2asp, which employ answer set solvers such as clingo.

The dissertation addresses problems of three kinds. First, we study some mathematical properties of expressive action languages based on nonmonotonic causal logic that were not well understood until now, including causal rules expressing synonymy, nondefinite causal rules, and nonpropositional causal rules. We generalize existing translations from nonmonotonic causal theories to logic programming under the answer set semantics, which makes it possible to automate reasoning with a wider class of causal theories by calling answer set solvers. Second, we design and study a new action language, BC, which is in some ways more expressive than previously proposed languages. We develop a framework that combines the most useful expressive features of the languages B and C+, and use program completion to characterize the effects of actions described in these languages. Third, we illustrate the possibilities of the new action language with two practical applications: to the dynamic domain of the Reaction Control System of the Space Shuttle, and to the task planning of mobile robots.
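For instance, the standard answer-set-programming solution to the frame problem is a pair of inertia defaults, written here in the usual notation from the literature rather than quoted from the dissertation: a fluent f keeps its truth value from time t to t+1 unless the opposite value is derivable.

```latex
\begin{aligned}
\mathit{holds}(f, t+1) &\leftarrow \mathit{holds}(f, t),\
  \mathbf{not}\ \neg\mathit{holds}(f, t+1)\\
\neg\mathit{holds}(f, t+1) &\leftarrow \neg\mathit{holds}(f, t),\
  \mathbf{not}\ \mathit{holds}(f, t+1)
\end{aligned}
```

Here "not" is default negation and "¬" is classical negation: each rule fires only when the contrary value at t+1 cannot be derived, so effect axioms override inertia without any explicit frame axioms.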
138

Expressiveness of answer set languages

Ferraris, Paolo, 1972- 28 August 2008
Answer set programming (ASP) is a form of declarative programming oriented towards difficult combinatorial search problems. It has been applied, for instance, to plan generation and product configuration problems in artificial intelligence and to graph-theoretic problems arising in VLSI design and in historical linguistics. Syntactically, ASP programs look like Prolog programs, but the computational mechanisms used in ASP are different: they are based on the ideas that have led to the development of fast satisfiability solvers for propositional logic. ASP is based on the answer set/stable model semantics for logic programs, originally intended as a specification for query answering in Prolog. Since the original definition of 1988, the semantics has been independently extended by different research groups to more expressive kinds of programs, with syntaxes and semantics that are incompatible with each other. In this thesis we study how the various extensions are related to each other. To do so, we propose another definition of an answer set. This definition has three main characteristics: (i) it is very simple, (ii) its syntax is more general than the usual concept of a logic program, and (iii) strong theoretical tools can be used to reason about it. Regarding (ii), we show that our syntax allows constructs defined in many other extensions of the answer set semantics. This fact, together with (iii), allows us to study the expressiveness of those constructs. We also compare the answer set semantics with another important formalism developed by Norm McCain and Hudson Turner, called causal logic.
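A standard two-rule example, not taken from the thesis, shows how ASP diverges from Prolog. Under the 1988 stable model semantics the program

```
p :- not q.
q :- not p.
```

has exactly two answer sets, {p} and {q}: each reproduces itself under the reduct (with respect to {p}, the second rule is deleted and the first becomes the fact p, whose minimal model is {p} again). Prolog's query-driven evaluation, by contrast, would loop forever on the query ?- p., which is one reason ASP systems use satisfiability-style search instead.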
139

Universal refactoring tool

Peldžius, Stasys 25 November 2010
As software systems continually evolve, ongoing care is needed to keep them well designed and well programmed. Poor-quality source code ("code smells") and design deficiencies nevertheless inevitably appear, so it is important to be able to find such problems and correct them. The aim of this thesis is to create a model of an automatic, universal refactoring tool that detects refactorings on its own and is independent of any particular programming language. To achieve this, methods proposed in the literature for automatically detecting refactorings are examined, the possibilities for implementing such a tool are analysed, and implementation decisions and examples are presented. A further aim is to create practically useful automatic refactorings, implement them with the proposed tool, and demonstrate their operation. The tool uses logic programming: the programs to be refactored are described as facts, and the refactoring programs themselves as rules. The successfully created examples of automatic refactoring detection support the conclusion that this work provides a way to detect poor-quality source code automatically and to implement such refactorings independently of the programming language.
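The fact/rule division can be illustrated with a minimal Prolog sketch. The method/3 facts and the long-method threshold below are invented for the example and do not reproduce the thesis's actual representation.

```prolog
% Facts a (hypothetical) front end emits for the program under analysis:
% method(Class, Method, LinesOfCode).
method(order, total_price, 12).
method(order, print_invoice, 87).

% A refactoring rule over those facts: a method longer than 50 lines
% is flagged as a "long method" smell, a candidate for extract-method.
long_method(Class, Method) :-
    method(Class, Method, Lines),
    Lines > 50.

% ?- long_method(C, M).   % C = order, M = print_invoice
```

Because the detection logic lives entirely in rules over language-neutral facts, supporting a new programming language only requires a new fact-emitting front end, which is the source of the tool's claimed language independence.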
140

Validation of machine-oriented strategies in chess endgames

Niblett, Timothy B. January 1982
This thesis is concerned with the validation of chess endgame strategies, and with the synthesis of strategies that can be validated. A strategy for a given player is the specification of the move to be made by that player from any position that may occur; this move may depend on the previous moves of both sides. A strategy is said to be correct if following it always leads to an outcome of at least the same game-theoretic value as the starting position. We are not concerned with proving the correctness of the programs that implement the strategies under consideration: we work with knowledge-based programs which produce playing strategies, and assume that their concrete implementations (in POP2, PROLOG, etc.) are correct. The synthesis approach taken attempts to use the large body of heuristic knowledge and theory, accumulated over the centuries by chessmasters, to find playing strategies; the concern here is to produce structures for representing a chessmaster's knowledge which can be analysed within a game-theoretic model. The validation approach taken is that a theory of the domain, in the form of the game-theoretic model of chess, provides an objective measure of the strategy followed by a program; the concern here is to analyse the structures created in the synthesis phase. This is an instance of a general problem, that of quantifying the performance of computing systems. In general, to quantify the performance of a system we need:
- A theory of the domain.
- A specification of the problem to be solved.
- Algorithms and/or domain-specific knowledge to be applied to solve the problem.
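The correctness criterion above can be stated compactly in our notation (not the thesis's): writing v(p) for the game-theoretic value of position p and out(σ, p) for the outcome of play from p when the strategy's player follows σ against best defence, a strategy σ is correct iff

```latex
\forall p:\quad v\bigl(\mathit{out}(\sigma, p)\bigr) \;\ge\; v(p)
```

that is, following σ never loses game-theoretic value, even though it may not play optimally in every position.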
