About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
141

Action, Time and Space in Description Logics

Milicic, Maja 08 September 2008
Description Logics (DLs) are a family of logic-based knowledge representation (KR) formalisms designed to represent and reason about static conceptual knowledge in a semantically well-understood way. Standard action formalisms, on the other hand, are KR formalisms based on classical logic designed to model and reason about dynamic systems. The largest part of the present work is dedicated to integrating DLs with action formalisms, with the main goal of obtaining decidable action formalisms whose expressiveness lies significantly beyond propositional. To this end, we offer DL-tailored solutions to the frame and ramification problems. One of the main technical results is that the standard reasoning problems about actions (executability and projection), as well as the plan existence problem, are decidable if one restricts the logic for describing action pre- and post-conditions and the state of the world to decidable Description Logics. A smaller part of the work is related to decidable extensions of Description Logics with concrete datatypes, most importantly those that allow referring to the notions of space and time.
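
To make the flavour of these reasoning problems concrete, here is a small hypothetical example in the spirit of DL-based action formalisms (the names and the action are ours, not taken from the thesis): an action is described by ABox pre- and post-conditions.

```latex
% Hypothetical action with ABox pre- and post-conditions:
\[
\alpha \;=\; \bigl(\underbrace{\{\mathit{Employee}(\mathit{john})\}}_{\text{pre}},\;
  \underbrace{\{\lnot\mathit{Employee}(\mathit{john}),\ \mathit{Retired}(\mathit{john})\}}_{\text{post}}\bigr)
\]
% Executability: is the precondition satisfied in every model of the
% current knowledge base? Projection: does an assertion such as
% Retired(john) hold in every state resulting from applying \alpha?
```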
142

On the Computation of Common Subsumers in Description Logics

Turhan, Anni-Yasmin 30 May 2008
Description logic (DL) knowledge bases are often built by users with expertise in the application domain but little expertise in logic. To support such users when building their knowledge bases, a number of extension methods have been proposed that provide the user with concept descriptions as a starting point for new concept definitions. The inference service central to several of these approaches is the computation of (least) common subsumers of concept descriptions. If disjunction of concepts can be expressed in the DL under consideration, the least common subsumer (lcs) is just the disjunction of the input concepts. Such a trivial lcs is of little use as a starting point for a new concept definition to be edited by the user. To address this problem, we propose two approaches to obtain "meaningful" common subsumers in the presence of disjunction, tailored to two different methods of extending DL knowledge bases. More precisely, we devise computation methods for the approximation-based approach and for the customization of DL knowledge bases, extend these methods to DLs with number restrictions, and discuss their efficient implementation.
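
As a toy illustration of why the disjunctive lcs is trivial (our own example, not drawn from the thesis):

```latex
% In a DL with disjunction (e.g. ALC), the lcs merely restates the inputs:
\[
\mathit{lcs}(A \sqcap B,\; A \sqcap C) \;\equiv\; (A \sqcap B) \sqcup (A \sqcap C)
\]
% In a DL without disjunction, such as EL, the lcs properly generalises:
\[
\mathit{lcs}_{\mathcal{EL}}(A \sqcap B,\; A \sqcap C) \;=\; A
\]
% Only the second result is a useful starting point for a new concept
% definition to be edited by the user.
```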
143

Knowledge representation and stochastic multi-agent plan recognition

Suzic, Robert January 2005
To incorporate new technical advances into the military domain and make those processes more efficient in accuracy, time and cost, a new concept of Network Centric Warfare has been introduced in the US military forces. In Sweden a similar concept has been studied under the name Network Based Defence (NBD). Here we present one of the methodologies, called tactical plan recognition, that is aimed to support NBD in the future.

Advances in sensor technology and modelling produce large sets of data for decision makers. To achieve decision superiority, decision makers have to act agilely with proper, adequate and relevant information (data aggregates) available. Information fusion is a process aimed to support decision makers' situation awareness. This involves combining data and information from disparate sources with prior information or knowledge to obtain an improved state estimate about an agent or phenomenon. Plan recognition is the term given to the process of inferring an agent's intentions from a set of actions, and it is intended to support decision making.

The aim of this work has been to introduce a methodology in which prior (empirical) knowledge (e.g. about behaviour, environment and organization) is represented and combined with sensor data to recognize the plans/behaviours of an agent or group of agents. We call this methodology multi-agent plan recognition. It includes knowledge representation as well as imprecise and statistical inference issues.

Successful plan recognition in large-scale systems depends heavily on the data that is supplied. Therefore we introduce a bridge between plan recognition and sensor management, where the results of our plan recognition are reused to control, i.e. to direct the focus of attention of, the sensors that are supposed to acquire the most important/relevant information.

Here we combine different theoretical methods (Bayesian Networks, Unified Modeling Language and Plan Recognition) and apply them to tactical military situations for ground forces. The results achieved from several proof-of-concept models show that it is possible to model and recognize the behaviour of tank units.
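
As a minimal sketch of the probabilistic core of such plan recognition (our own naive-Bayes simplification with made-up plans and numbers; the thesis uses richer Bayesian network models):

```python
# Naive-Bayes plan recognition over a stream of observed actions.
# All plans, priors and likelihoods are hypothetical illustration values.

PRIORS = {"attack": 0.2, "defend": 0.5, "retreat": 0.3}

# P(observed action | plan)
LIKELIHOODS = {
    "attack":  {"advance": 0.7, "hold": 0.2, "withdraw": 0.1},
    "defend":  {"advance": 0.1, "hold": 0.8, "withdraw": 0.1},
    "retreat": {"advance": 0.1, "hold": 0.2, "withdraw": 0.7},
}

def recognise(observations):
    """Return P(plan | observations), assuming independent observations."""
    scores = dict(PRIORS)
    for obs in observations:
        for plan in scores:
            scores[plan] *= LIKELIHOODS[plan].get(obs, 1e-6)
    total = sum(scores.values())
    return {plan: score / total for plan, score in scores.items()}

print(recognise(["advance", "advance", "hold"]))
# "attack" dominates after repeated advances.
```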
144

Exploring the use of contextual metadata collected during ubiquitous learning activities

Svensson, Martin, Pettersson, Oskar January 2008
Recent development in modern computing has led to a more diverse use of devices within the field of mobility. Many mobile devices of today can, for instance, surf the web and connect to wireless networks, thus gradually merging the wired Internet with the mobile Internet. As mobile devices by design usually have built-in means for creating rich media content, along with the ability to upload it to the Internet, these devices are potential contributors to the already overwhelming content collection residing on the World Wide Web. While interesting initiatives for structuring and filtering content on the World Wide Web exist, often based on various forms of metadata, a unified understanding of individual content is more or less restricted to technical metadata values, such as file size and file format. These kinds of metadata make it impossible to incorporate the purpose of the content when designing applications. Answers to questions such as "why was this content created?" or "in which context was the content created?" would allow for more specific content filtering tailored to fit the end-user's cause. In the opinion of the authors, this kind of understanding would be ideal for content created with mobile devices, which are purposely brought into various environments. This is why we have investigated in this thesis how descriptions of contexts can be captured, structured and expressed as machine-readable semantics.

In order to limit the scope of our work, we developed a system which mirrored the context of ubiquitous learning activities to a database. Whenever rich media content was created within these activities, the system associated that particular content with its context. The system was tested during live trials in order to gather reliable and "real" contextual data, leading to the transition to semantics by generating Rich Document Format documents from the contents of the database. The outcome of our efforts was a fully functional system able to capture the contexts of pre-defined ubiquitous learning activities and transform them into machine-readable semantics. We would like to believe that our contribution has some innovative aspects, one being that the system can output the contexts of activities as semantics in real time, allowing activities to be monitored as they are performed.
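
A hedged sketch of the general idea of exposing a captured context as machine-readable statements (our own simplification with hypothetical field names; the thesis generates its semantic documents from a database of recorded activities):

```python
# Expose a captured activity context as subject-predicate-object triples.
# Identifiers and field names below are hypothetical.

def context_to_triples(content_id, context):
    """Yield one (subject, predicate, object) triple per context field."""
    for predicate, value in context.items():
        yield (content_id, predicate, value)

context = {
    "createdDuring": "field-trip-2008-04-12",
    "createdAt": "lake shore, station 3",
    "createdBy": "student-group-3",
    "activityGoal": "document local water quality",
}

for triple in context_to_triples("photo-0042", context):
    print(triple)
```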
145

Decentralising the codification of rules in a decision support expert knowledge base

De Kock, Erika. January 2003
Thesis (M.Sc. (Computer Science)) -- University of Pretoria, 2003. Includes bibliographical references.
146

A Comparison of Web Resource Access Experiments: Planning for the New Millennium

Greenberg, Jane January 2000
Over the last few years the bibliographic control community has initiated a series of experiments that aim to improve access to the growing number of valuable information resources that are increasingly being placed on the World Wide Web (hereafter referred to as Web resources). Much has been written about these experiments, mainly describing their implementation and features, and there has been some evaluative reporting, but there has been little comparison among these initiatives. The research reported on in this paper addresses this limitation by comparing five leading experiments in this area. The objective was to identify characteristics of success and considerations for improvement in experiments providing access to Web resources via bibliographic control methods. The experiments examined include OCLC's CORC project; UKOLN's BIBLINK, ROADS, and DESIRE projects; and the NORDIC project. The research used a multi-case study methodology and a framework comprising five evaluation criteria: the experiment's organizational structure, reception, duration, application of computing technology, and use of human resources. This paper defines the Web resource access experimentation environment, reviews the study's research methodology, and highlights key findings. The paper concludes by initiating a strategic plan and by inviting conference participants to contribute their ideas and expertise to an effort that will improve experimental initiatives which ultimately aim to improve access to Web resources in the new millennium.
147

Debugging and repair of description logic ontologies

Moodley, Kodylan. January 2010
In logic-based Knowledge Representation and Reasoning (KRR), ontologies are used to represent knowledge about a particular domain of interest in a precise way. The building blocks of ontologies include concepts, relations and objects. These can be combined to form logical sentences which explicitly describe the domain. With this explicit knowledge one can perform reasoning to derive knowledge that is implicit in the ontology. Description Logics (DLs) are a group of knowledge representation languages with such capabilities that are suitable for representing ontologies. The process of building ontologies has been greatly simplified with the advent of graphical ontology editors such as SWOOP, Protégé and OntoStudio. The result of this is that there is a growing number of ontology engineers attempting to build and develop ontologies. It is frequently the case that errors are introduced while constructing the ontology, resulting in undesirable pieces of implicit knowledge that follow from the ontology. As such, there is a need to extend current ontology editors with tool support to aid these ontology engineers in correctly designing and debugging their ontologies. Errors such as unsatisfiable concepts and inconsistent ontologies frequently occur during ontology construction. Ontology debugging and repair is concerned with helping the ontology developer to eliminate these errors from the ontology. Much emphasis, in current tools, has been placed on giving explanations as to why these errors occur in the ontology. Less emphasis has been placed on using this information to suggest efficient ways to eliminate the errors. Furthermore, these tools focus mainly on the errors of unsatisfiable concepts and inconsistent ontologies. In this dissertation we fill an important gap in the area by contributing an alternative approach to ontology debugging and repair for the more general error of a list of unwanted sentences. Errors such as unsatisfiable concepts and inconsistent ontologies can be represented as unwanted sentences in the ontology. Our approach considers not only the explanation of the unwanted sentences but also the identification of repair strategies to eliminate these unwanted sentences from the ontology. Thesis (M.Sc.) -- University of KwaZulu-Natal, Westville, 2010.
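
To make the repair idea concrete, here is a minimal sketch of the standard justification-based view (assuming the justifications, i.e. minimal axiom sets entailing the unwanted sentence, have already been computed; this is not necessarily the exact algorithm of the dissertation):

```python
# A repair removes at least one axiom from every justification of an
# unwanted sentence, i.e. it is a hitting set of the justifications.

from itertools import combinations

def minimum_repairs(justifications):
    """Return the smallest axiom sets whose removal breaks every justification."""
    axioms = sorted(set().union(*justifications))
    for size in range(1, len(axioms) + 1):
        hits = [set(c) for c in combinations(axioms, size)
                if all(set(c) & just for just in justifications)]
        if hits:
            return hits  # all minimum-cardinality repairs
    return []

# Two hypothetical justifications for one unwanted entailment:
justifications = [{"A subClassOf B", "B subClassOf C"},
                  {"A subClassOf D", "D subClassOf C"}]
print(minimum_repairs(justifications))
# Each printed set hits both justifications, e.g. removing
# "A subClassOf B" and "A subClassOf D" eliminates the entailment.
```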
148

A functional theory of creative reading: process, knowledge, and evaluation

Moorman, Kenneth Matthew 08 1900
No description available.
149

On Simple but Hard Random Instances of Propositional Theories and Logic Programs

Namasivayam, Gayathri 01 January 2011
In the last decade, Answer Set Programming (ASP) and Satisfiability (SAT) have been used to solve combinatorial search problems and the practical applications in which they arise. In each of these formalisms, a tool called a solver is used to solve problems. A solver takes as input a specification of the problem (a logic program in the case of ASP, a CNF theory in the case of SAT) and produces as output a solution to the problem. Designing fast solvers is important for the success of this general-purpose approach to solving search problems. Classes of instances that pose challenges to solvers can help in this task. In this dissertation we create challenging yet simple benchmarks for existing solvers in ASP and SAT. We do so by providing models of simple logic programs as well as models of simple CNF theories. We then randomly generate logic programs as well as CNF theories from these models. Our experimental results show that computing answer sets of random logic programs, as well as models of random CNF theories, with carefully chosen parameters is hard for existing solvers. We generate random logic programs with 2-literal rules, and our experiments show that it is hard for ASP solvers to obtain answer sets of purely negative and constraint-free programs, indicating the importance of these programs in the development of ASP solvers. An easy-hard-easy pattern emerges as we compute the average number of choice points generated by ASP solvers on randomly generated 2-literal programs with an increasing number of rules. We provide an explanation for the emergence of this pattern in these programs. We also theoretically study the probability of the existence of an answer set for sparse and dense 2-literal programs. We consider simple classes of mixed Horn formulas with purely positive 2-literal clauses and purely negated Horn clauses. First we consider a class of mixed Horn formulas wherein each formula has m 2-literal clauses and k-literal negated Horn clauses. We show that formulas generated from the phase transition region of this class are hard for complete SAT solvers. The second class of mixed Horn formulas we consider is obtained from the completion of a certain class of random logic programs. We show the appearance of an easy-hard-easy pattern as we generate formulas from this class with increasing numbers of clauses, and that the formulas generated in the hard region can be used as benchmarks for testing incomplete SAT solvers.
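
A hedged sketch of how random purely negative 2-literal programs of this kind could be generated (our own reading of the construction; parameters are illustrative):

```python
# Generate a random purely negative, constraint-free 2-literal program:
# n_rules rules of the form "a :- not b." over n_atoms atoms.

import random

def random_negative_program(n_atoms, n_rules, seed=0):
    assert n_rules <= n_atoms * (n_atoms - 1), "not enough distinct rules"
    rng = random.Random(seed)
    atoms = [f"x{i}" for i in range(1, n_atoms + 1)]
    rules = set()
    while len(rules) < n_rules:
        head, body = rng.sample(atoms, 2)  # two distinct atoms per rule
        rules.add((head, body))
    return [f"{head} :- not {body}." for head, body in sorted(rules)]

for rule in random_negative_program(n_atoms=5, n_rules=6):
    print(rule)
# Feeding such programs to an ASP solver at suitable rule densities is
# where the dissertation reports hard instances.
```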
150

Approximating Operators and Semantics for Abstract Dialectical Frameworks

Strass, Hannes 31 January 2013
We provide a systematic in-depth study of the semantics of abstract dialectical frameworks (ADFs), a recent generalisation of Dung's abstract argumentation frameworks. This is done by associating with an ADF its characteristic one-step consequence operator and defining various semantics for ADFs as different fixpoints of this operator. We first show that several existing semantic notions are faithfully captured by our definition, then proceed to define new ADF semantics and show that they are proper generalisations of existing argumentation semantics from the literature. Most remarkably, this operator-based approach allows us to compare ADFs to related nonmonotonic formalisms like Dung argumentation frameworks and propositional logic programs. We use polynomial, faithful and modular translations to relate the formalisms, and our results show that both abstract argumentation frameworks and abstract dialectical frameworks are at most as expressive as propositional normal logic programs.
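
As a hedged illustration of the operator-based view (a two-valued toy we wrote for this listing; the thesis works with approximating operators over richer truth orders):

```python
# A toy ADF: each statement has an acceptance condition over the
# currently accepted statements. The one-step consequence operator maps
# a set of accepted statements to those whose conditions are satisfied;
# semantics arise as fixpoints of this operator.

ADF = {
    "a": lambda accepted: True,                 # a is unconditionally accepted
    "b": lambda accepted: "a" in accepted,      # b is supported by a
    "c": lambda accepted: "b" not in accepted,  # c is attacked by b
}

def one_step(accepted):
    return frozenset(s for s, cond in ADF.items() if cond(accepted))

def iterate(accepted=frozenset(), max_steps=100):
    """Iterate the operator; may oscillate for non-monotone conditions."""
    for _ in range(max_steps):
        nxt = one_step(accepted)
        if nxt == accepted:
            return nxt
        accepted = nxt
    return accepted

print(sorted(iterate()))  # ['a', 'b'] is a fixpoint: b defeats c.
```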
