11

Number Restrictions on Complex Roles in Description Logics

Baader, Franz, Sattler, Ulrike 18 May 2022 (has links)
Number restrictions are concept constructors that are available in almost all implemented description logic systems. However, even though considerable effort has lately been devoted to integrating expressive role constructors into description logics, the roles that may occur in number restrictions are usually of a very restricted type. Until now, only languages with number restrictions on atomic roles and inversion of atomic roles, or with number restrictions on intersection of atomic roles, have been investigated in detail. In the present paper, we increase the expressive power of description languages by allowing for more complex roles in number restrictions. As role constructors, we consider composition of roles (which is present in all our languages) as well as intersection, union, and inversion of roles in different combinations. We present one decidability result (for the basic language, which extends ALC by number restrictions on roles with composition) and three undecidability results for three different extensions of the basic language.
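As a hedged illustration of the constructor studied here (the role names hasParent and hasSibling are hypothetical, not taken from the report), a number restriction over a composed role can be written in LaTeX notation as

\[ \mathit{Person} \sqcap \bigl(\leqslant 4\; (\mathit{hasParent} \circ \mathit{hasSibling})\bigr) \]

i.e., persons related to at most four individuals by first following hasParent and then hasSibling.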
12

Fuzzy Description Logics with General Concept Inclusions

Borgwardt, Stefan 23 May 2014 (has links)
Description logics (DLs) are used to represent knowledge of an application domain and provide standard reasoning services to infer consequences of this knowledge. However, classical DLs are not suited to represent vagueness in the description of the knowledge. We consider a combination of DLs and Fuzzy Logics to address this task. In particular, we consider the t-norm-based semantics for fuzzy DLs introduced by Hájek in 2005. Since then, many tableau algorithms have been developed for reasoning in fuzzy DLs. Another popular approach is to reduce fuzzy ontologies to classical ones and use existing highly optimized classical reasoners to deal with them. However, a systematic study of the computational complexity of the different reasoning problems is so far missing from the literature on fuzzy DLs. Recently, some of the developed tableau algorithms have been shown to be incorrect in the presence of general concept inclusion axioms (GCIs). In some fuzzy DLs, reasoning with GCIs has even turned out to be undecidable. This work provides a rigorous analysis of the boundary between decidable and undecidable reasoning problems in t-norm-based fuzzy DLs, in particular for GCIs. Existing undecidability proofs are extended to cover large classes of fuzzy DLs, and decidability is shown for most of the remaining logics considered here. Additionally, the computational complexity of reasoning in fuzzy DLs with semantics based on finite lattices is analyzed. For most decidability results, tight complexity bounds can be derived.
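For orientation, and as standard background rather than material specific to this thesis, the three fundamental continuous t-norms underlying Hájek's semantics, together with a fuzzy GCI (the concept names are hypothetical), can be written as

\[ x \otimes_{\mathrm{G}} y = \min(x,y), \qquad x \otimes_{\Pi} y = x \cdot y, \qquad x \otimes_{\mathrm{L}} y = \max(0,\, x + y - 1) \]

\[ \langle\, \mathit{HighFever} \sqsubseteq \mathit{SeriousCondition} \ \geq 0.8 \,\rangle \]

where the fuzzy GCI asserts that the subsumption holds to degree at least 0.8 in every model.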
13

Learning Terminological Knowledge with High Confidence from Erroneous Data

Borchmann, Daniel 09 September 2014 (has links)
Description logic knowledge bases are a popular approach to representing terminological and assertional knowledge in a form suitable for computers to work with. Despite this, the practicality of description logics is impaired by the difficulties one has to overcome to construct such knowledge bases. Previous work has addressed this issue by providing methods to learn valid terminological knowledge from data, making use of ideas from formal concept analysis. A basic assumption there is that the data is free of errors, an assumption that in general cannot be made for practical applications. This thesis presents extensions of these results that make it possible to handle errors in the data. To this end, knowledge that is "almost valid" in the data is retrieved, where the notion of "almost valid" is formalized using the notion of confidence from data mining. The thesis presents two algorithms which achieve this retrieval: the first simply extracts all almost valid knowledge from the data, while the second utilizes expert interaction to distinguish errors from rare but valid counterexamples.
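As a brief reminder of the borrowed notion (standard data-mining background, not a definition quoted from this abstract), the confidence of an implication A → B over a data set measures the fraction of objects satisfying A that also satisfy B,

\[ \mathrm{conf}(A \rightarrow B) \;=\; \frac{\mathrm{supp}(A \cup B)}{\mathrm{supp}(A)}, \]

so that knowledge holding with confidence close to 1 counts as "almost valid" even in the presence of a few erroneous counterexamples.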
14

Action, Time and Space in Description Logics

Milicic, Maja 19 June 2008 (has links)
Description Logics (DLs) are a family of logic-based knowledge representation (KR) formalisms designed to represent and reason about static conceptual knowledge in a semantically well-understood way. Standard action formalisms, on the other hand, are KR formalisms based on classical logic designed to model and reason about dynamic systems. The largest part of the present work is dedicated to integrating DLs with action formalisms, with the main goal of obtaining decidable action formalisms whose expressiveness goes significantly beyond the propositional level. To this end, we offer DL-tailored solutions to the frame and ramification problems. One of the main technical results is that the standard reasoning problems about actions (executability and projection), as well as the plan existence problem, are decidable if one restricts the logic for describing action pre- and post-conditions and the state of the world to decidable Description Logics. A smaller part of the work is concerned with decidable extensions of Description Logics with concrete datatypes, most importantly those that allow referring to the notions of space and time.
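As a hedged, simplified sketch (the names Lamp, On and l are hypothetical, and the formalism developed in the thesis is richer, allowing for instance conditional post-conditions), an atomic action can be thought of as a pair of pre- and post-conditions given by ABox assertions:

\[ \alpha \;=\; \bigl(\ \mathit{pre} = \{\, \mathit{Lamp}(l),\ \neg\mathit{On}(l) \,\},\ \ \mathit{post} = \{\, \mathit{On}(l) \,\}\ \bigr) \]

Executability then asks whether the pre-conditions necessarily hold in the current state of the world, and projection asks whether a given assertion, such as On(l), necessarily holds after executing a sequence of such actions.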
15

Computing Least Common Subsumer in Description Logics with Existential Restrictions

Baader, Franz, Küsters, Ralf, Molitor, Ralf 20 May 2022 (has links)
Computing the least common subsumer (lcs) is an inference task that can be used to support the "bottom-up" construction of knowledge bases for KR systems based on description logics. Previous work on how to compute the lcs has concentrated on description logics that allow for universal value restrictions, but not for existential restrictions. The main new contribution of this paper is the treatment of description logics with existential restrictions. More precisely, we show that, for the description logic ALE (which allows for conjunction, universal value restrictions, existential restrictions, negation of atomic concepts, as well as the top and the bottom concept), the lcs always exists and can effectively be computed. Our approach for computing the lcs is based on an appropriate representation of concept descriptions by certain trees, and a characterization of subsumption by homomorphisms between these trees. The lcs operation then corresponds to the product operation on trees. / An abridged version of this technical report is published in the Proceedings of IJCAI'99.
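As a small hedged example (the concept and role names A, B, C, r are generic placeholders): in a logic without disjunction such as ALE, the lcs of two descriptions that agree on an atomic conjunct but differ in the filler of an existential restriction is obtained by recursively taking the lcs of the fillers,

\[ \mathrm{lcs}(A \sqcap \exists r.B,\ A \sqcap \exists r.C) \;\equiv\; A \sqcap \exists r.\mathrm{lcs}(B, C) \;\equiv\; A \sqcap \exists r.\top \]

for distinct, unrelated atomic concepts B and C; this recursive combination is exactly what the product of the corresponding description trees computes.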
16

To and Fro Between Tableaus and Automata for Description Logics

Hladik, Jan 31 January 2008 (has links) (PDF)
Description Logics (DLs) are a family of knowledge representation formalisms with a well-defined, logic-based semantics and decidable inference problems, such as satisfiability. Two of the most widely used decision procedures for the satisfiability problem of DL expressions are tableau- and automata-based algorithms. Due to their different modes of operation, these two classes have complementary properties: tableau algorithms are well suited for implementation and for showing PSPACE and NEXPTIME complexity results, whereas automata algorithms are particularly useful for showing EXPTIME results. In addition, automata allow for a theoretically more elegant handling of infinite structures, but they are considerably less suited for implementation. The aim of this thesis is to analyse the reasons for these differences and to find ways of transferring properties between the two approaches, so as to combine the positive properties of both. Among other things, methods are developed for showing PSPACE results with the help of automata and for automatically deriving an EXPTIME result from a tableau algorithm.
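For readers who want to see the flavour of the tableau approach, the standard completion rules for ALC concept satisfiability (textbook material, not a contribution of this thesis) can be sketched as follows:

\[\begin{array}{ll}
\sqcap\text{-rule:} & \text{if } (C \sqcap D)(x) \in \mathcal{A}, \text{ add } C(x) \text{ and } D(x);\\
\sqcup\text{-rule:} & \text{if } (C \sqcup D)(x) \in \mathcal{A}, \text{ nondeterministically add } C(x) \text{ or } D(x);\\
\exists\text{-rule:} & \text{if } (\exists r.C)(x) \in \mathcal{A} \text{ and } x \text{ has no } r\text{-successor } y \text{ with } C(y), \text{ add } r(x,y) \text{ and } C(y) \text{ for a fresh } y;\\
\forall\text{-rule:} & \text{if } (\forall r.C)(x) \in \mathcal{A} \text{ and } r(x,y) \in \mathcal{A}, \text{ add } C(y).
\end{array}\]

Each rule is applied only if its conclusion is not yet present; the input concept is satisfiable iff the rules can be applied exhaustively without producing a clash \(\{A(x), \neg A(x)\}\).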
18

Mixing Description Logics in Privacy-Preserving Ontology Publishing

Baader, Franz, Nuradiansyah, Adrian 30 July 2021 (has links)
In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an EL instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL EL. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given EL concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker’s knowledge is given by an FL0 or an FLE concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of FL0, it turns out to be actually lower (polynomial) for the more expressive DL FLE.
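Roughly, and as a hedged paraphrase rather than the precise definitions from the paper: a concept C is compliant with a policy consisting of concepts P_1, ..., P_n if C is not subsumed by any of them, and C is safe if it stays compliant no matter which compliant background knowledge an attacker conjoins to it:

\[ \text{compliant:}\ \ C \not\sqsubseteq P_i \ \text{for all } i, \qquad \text{safe:}\ \ C \sqcap D \not\sqsubseteq P_i \ \text{for all } i \text{ and all compliant } D. \]

An optimal safe generalization of C is then a most specific concept that subsumes C and is safe in this sense.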
19

On the Relation between Conceptual Graphs and Description Logics

Baader, Franz, Molitor, Ralf, Tobies, Stefan 20 May 2022 (has links)
From the introduction: 'Conceptual graphs (CGs) are an expressive formalism for representing knowledge about an application domain in a graphical way. Since CGs can express all of first-order predicate logic (FO), they can also be seen as a graphical notation for FO formulae. In knowledge representation, one is usually not only interested in representing knowledge, one also wants to reason about the represented knowledge. For CGs, one is, for example, interested in validity of a given graph, and in the question whether one graph subsumes another one. Because of the expressiveness of the CG formalism, these reasoning problems are undecidable for general CGs. In the literature [Sow84, Wer95, KS97] one can find complete calculi for validity of CGs, but implementations of these calculi have the same problems as theorem provers for FO: they may not terminate for formulae that are not valid, and they are very inefficient. To overcome this problem, one can either employ incomplete reasoners, or try to find decidable (or even tractable) fragments of the formalism. This paper investigates the second alternative. The most prominent decidable fragment of CGs is the class of simple conceptual graphs (SGs), which corresponds to the conjunctive, positive, and existential fragment of FO (i.e., existentially quantified conjunctions of atoms). Even for this simple fragment, however, subsumption is still an NP-complete problem [CM92]. SGs that are trees provide a tractable fragment of SGs, i.e., a class of simple conceptual graphs for which subsumption can be decided in polynomial time [MC93]. In this report, we will identify a tractable fragment of SGs that is larger than the class of trees. Instead of trying to prove new decidability or tractability results for CGs from scratch, our idea was to transfer decidability results from description logics [DLNN97, DLNS96] to CGs. The goal was to obtain a "natural" sub-class of the class of all CGs in the sense that, on the one hand, this sub-class is defined directly by syntactic restrictions on the graphs, and not by conditions on the first-order formulae obtained by translating CGs into FO, and, on the other hand, is in some sense equivalent to a more or less expressive description logic. Although description logics (DLs) and CGs are employed in very similar applications (e.g., for representing the semantics of natural language sentences), it turned out that these two formalisms are quite different for several reasons: (1) conceptual graphs are interpreted as closed FO formulae, whereas DL concept descriptions are interpreted by formulae with one free variable; (2) DLs do not allow for relations of arity > 2; (3) SGs are interpreted by existential sentences, whereas almost all DLs considered in the literature allow for universal quantification; (4) because DLs use a variable-free syntax, certain identifications of variables expressed by cycles in SGs and by co-reference links in CGs cannot be expressed in DLs. As a consequence of these differences, we could not identify a natural fragment of CGs corresponding to an expressive DL whose decidability was already shown in the literature. We could, however, obtain a new tractability result for a DL corresponding to SGs that are trees. This correspondence result strictly extends the one in [CF98]. In addition, we have extended the tractability result from SGs that are trees to SGs that can be transformed into trees using a certain "cycle-cutting" operation. The report is structured as follows.
We first introduce the description logic for which we will identify a subclass of equivalent SGs. In Section 3, we recall basic definitions and results on SGs. Thereafter, we introduce a syntactical variant of SGs which allows for directly encoding the support into the graphs (Section 4.1). In order to formalize the equivalence between DLs and SGs, we have to consider SGs with one distinguished node called root (Section 4.2). In Section 5, we finally identify a class of SGs corresponding to a DL that is a strict extension of the DL considered in [CF98].
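As a hedged illustration of the fragment in question (the predicate names are hypothetical), a simple conceptual graph corresponds to an existentially quantified conjunction of atoms such as

\[ \exists x\, \exists y\, \bigl(\mathit{Person}(x) \wedge \mathit{hasChild}(x,y) \wedge \mathit{Student}(y)\bigr), \]

and subsumption between two such graphs amounts to finding a homomorphism from the more general graph into the more specific one, which is what becomes tractable when the graphs are trees.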
20

Learning OWL Class Expressions

Lehmann, Jens 24 June 2010 (has links) (PDF)
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems. However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data, because engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web. In order to leverage machine-learning approaches for solving these tasks, it is necessary to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e. facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work. The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future. The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms.
It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold: The first contribution is a complete analysis of desirable properties of refinement operators in description logics. Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language. The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors and it is shown that it is complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the light-weight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required. It is the first published ideal refinement operator in description logics. While the two operators differ a lot in their technical details, they both use background knowledge efficiently. The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms. According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach. The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open-source and has been incorporated in other projects.
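To make the discussed properties concrete (a hedged paraphrase of standard definitions, not quoted from the thesis): a downward refinement operator ρ maps every concept C to a set of specialisations of C, and the properties combine as follows:

\[ D \in \rho(C) \;\Rightarrow\; D \sqsubseteq C, \qquad \text{proper:}\ D \in \rho(C) \Rightarrow D \not\equiv C, \qquad \text{finite:}\ \rho(C) \text{ is finite for every } C, \]

and ρ is called ideal if it is complete (every specialisation of C is reachable from C, up to equivalence, by a chain of refinement steps), proper, and finite. The thesis shows that no such ideal operator exists for expressive description logics, whereas the EL operator described above yields one.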
