31. On the Computation of Common Subsumers in Description Logics. Turhan, Anni-Yasmin, 30 May 2008.
Description logic (DL) knowledge bases are often built by users with expertise in the application domain but little expertise in logic. To support such users in building their knowledge bases, a number of extension methods have been proposed that provide the user with concept descriptions as a starting point for new concept definitions. The inference service central to several of these approaches is the computation of (least) common subsumers of concept descriptions. If disjunction of concepts can be expressed in the DL under consideration, the least common subsumer (lcs) is just the disjunction of the input concepts. Such a trivial lcs is of little use as a starting point for a new concept definition to be edited by the user. To address this problem, we propose two approaches to obtaining "meaningful" common subsumers in the presence of disjunction, tailored to two different methods of extending DL knowledge bases. More precisely, we devise computation methods for the approximation-based approach and for the customization of DL knowledge bases, extend these methods to DLs with number restrictions, and discuss their efficient implementation.
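For the sublanguage EL, which lacks disjunction, the lcs is non-trivial and can be computed as the product of the description trees of the input concepts. A minimal sketch of this product construction follows; the dict-based concept encoding (`names`, `roles`) is our own illustration, not a representation taken from the thesis:

```python
# Sketch: least common subsumer (lcs) of two EL concept descriptions,
# computed as the product of their description trees.
# A concept is a dict: {"names": set of concept names,
#                       "roles": {role name: [successor concepts]}}.

def lcs(c, d):
    product = {
        "names": c["names"] & d["names"],   # keep only the shared concept names
        "roles": {},
    }
    shared_roles = c["roles"].keys() & d["roles"].keys()
    for r in shared_roles:
        # the lcs has one r-successor for every pair of r-successors
        product["roles"][r] = [lcs(s, t)
                               for s in c["roles"][r]
                               for t in d["roles"][r]]
    return product

# "Person and Parent with a child who is a Doctor"
A = {"names": {"Person", "Parent"},
     "roles": {"hasChild": [{"names": {"Doctor"}, "roles": {}}]}}
# "Person with a child who is a Doctor and Rich"
B = {"names": {"Person"},
     "roles": {"hasChild": [{"names": {"Doctor", "Rich"}, "roles": {}}]}}

print(lcs(A, B))  # a Person with a hasChild-successor that is a Doctor
```

The product keeps exactly the commonalities of the two inputs, which is what makes it a useful starting point for editing a new concept definition.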
32. Computing Updates in Description Logics. Liu, Hongkai, 15 February 2010.
Description Logics (DLs) form a family of knowledge representation formalisms that can be used to represent and reason with conceptual knowledge about a domain of interest. The knowledge represented in DLs is mainly static, yet in many applications the domain knowledge is dynamic. This observation motivates research on how to update the knowledge when changes in the application domain take place. This thesis is dedicated to the study of updating knowledge, more precisely, assertional knowledge represented in DLs. We explore whether the updated knowledge can be expressed in several standard DLs and, if so, whether it is computable and what its size is.
33. Learning Terminological Knowledge with High Confidence from Erroneous Data. Borchmann, Daniel, 17 September 2014.
Description logic knowledge bases are a popular approach to representing terminological and assertional knowledge in a form suitable for computers to work with. Despite this, the practicality of description logics is impaired by the difficulties one has to overcome to construct such knowledge bases. Previous work has addressed this issue by providing methods to learn valid terminological knowledge from data, making use of ideas from formal concept analysis.
A basic assumption there is that the data is free of errors, an assumption that in general cannot be made for practical applications. This thesis presents extensions of these results that make it possible to handle errors in the data. For this, knowledge that is "almost valid" in the data is retrieved, where "almost valid" is formalized using the notion of confidence from data mining. The thesis presents two algorithms that achieve this retrieval: the first simply extracts all almost valid knowledge from the data, while the second utilizes expert interaction to distinguish errors from rare but valid counterexamples.
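The confidence of an implication, in the data-mining sense referred to above, is the fraction of objects satisfying the premise that also satisfy the conclusion. A small illustration over a hypothetical formal context, where objects are described by sets of attributes:

```python
# Sketch: confidence of an implication "premise -> conclusion" over a
# formal context, used to formalize knowledge that is "almost valid".
# The toy data set below is hypothetical.

def confidence(premise, conclusion, objects):
    """objects: list of attribute sets; premise/conclusion: attribute sets."""
    supporting = [o for o in objects if premise <= o]
    if not supporting:
        return 1.0  # an implication with empty support holds vacuously
    confirming = [o for o in supporting if conclusion <= o]
    return len(confirming) / len(supporting)

data = [
    {"cat", "mammal"},
    {"cat", "mammal"},
    {"cat"},             # an erroneous entry: a cat not marked as a mammal
    {"dog", "mammal"},
]

c = confidence({"cat"}, {"mammal"}, data)
print(c)  # 2 of the 3 cats are marked as mammals
```

With a confidence threshold below 1 (say 0.6), the implication "cat implies mammal" would still be retrieved despite the single erroneous counterexample, which is exactly the effect the thesis aims for.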
34. Temporalised Description Logics for Monitoring Partially Observable Events. Lippmann, Marcel, 9 July 2014.
It is becoming ever more important to verify that the systems surrounding us have certain properties. This is indeed unavoidable for safety-critical systems such as power plants and intensive-care units. We use the term system in a broad sense: it may be man-made (e.g. a computer system) or natural (e.g. a patient in an intensive-care unit). Whereas model checking assumes complete knowledge about the functioning of the system, we consider an open-world scenario and assume that we can only observe the behaviour of the actual running system through sensors. Such an abstract sensor could sense, e.g., the blood pressure of a patient or the air traffic observed by radar.
The observed data are then preprocessed appropriately and stored in a fact base. Based on the data available in the fact base, situation-awareness tools are supposed to help the user detect certain situations that require intervention by an expert. Such a situation could be that the heart rate of a patient is rather high while the blood pressure is low, or that a collision of two aeroplanes is about to happen.
Moreover, the information in the fact base can be used by monitors to verify that the system has certain properties. It is not realistic, however, to assume that the sensors always yield a complete description of the current state of the observed system. Thus, it makes sense to treat information that is not present in the fact base as unknown rather than false. Moreover, one very often has some knowledge about the functioning of the system, and this background knowledge can be used to draw conclusions about its possible future behaviour. Employing description logics (DLs) is one way to address these requirements. In this thesis, we tackle the sketched problem in three different contexts: (i) runtime verification using a temporalised DL, (ii) temporalised query entailment, and (iii) verification in DL-based action formalisms.
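The open-world treatment of missing facts can be illustrated with Kleene's three-valued logic, where a fact absent from the fact base evaluates to unknown rather than false. The fact names and the fact-base encoding below are hypothetical:

```python
# Sketch: open-world evaluation of a situation over a fact base.
# Three truth values: True, False, and None (unknown). Facts not stored
# in the fact base are unknown, not false.

UNKNOWN = None

def lookup(fact_base, fact):
    return fact_base.get(fact, UNKNOWN)

def k_and(a, b):
    """Conjunction in Kleene's strong three-valued logic."""
    if a is False or b is False:
        return False           # one definite False decides the conjunction
    if a is True and b is True:
        return True
    return UNKNOWN             # otherwise the result is still open

fact_base = {"high_heart_rate": True}   # blood pressure not yet sensed

alarm = k_and(lookup(fact_base, "high_heart_rate"),
              lookup(fact_base, "low_blood_pressure"))
print(alarm)  # the critical situation can neither be confirmed nor ruled out
```

A monitor built on this semantics only raises (or dismisses) an alarm once the available observations definitely entail (or refute) the monitored property.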
35. Fuzzy Description Logics with General Concept Inclusions. Borgwardt, Stefan, 1 July 2014.
Description logics (DLs) are used to represent knowledge of an application domain and provide standard reasoning services to infer consequences of this knowledge. However, classical DLs are not suited to represent vagueness in the description of the knowledge. We consider a combination of DLs and Fuzzy Logics to address this task. In particular, we consider the t-norm-based semantics for fuzzy DLs introduced by Hájek in 2005. Since then, many tableau algorithms have been developed for reasoning in fuzzy DLs. Another popular approach is to reduce fuzzy ontologies to classical ones and use existing highly optimized classical reasoners to deal with them. However, a systematic study of the computational complexity of the different reasoning problems is so far missing from the literature on fuzzy DLs. Recently, some of the developed tableau algorithms have been shown to be incorrect in the presence of general concept inclusion axioms (GCIs). In some fuzzy DLs, reasoning with GCIs has even turned out to be undecidable. This work provides a rigorous analysis of the boundary between decidable and undecidable reasoning problems in t-norm-based fuzzy DLs, in particular for GCIs. Existing undecidability proofs are extended to cover large classes of fuzzy DLs, and decidability is shown for most of the remaining logics considered here. Additionally, the computational complexity of reasoning in fuzzy DLs with semantics based on finite lattices is analyzed. For most decidability results, tight complexity bounds can be derived.
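In these semantics, fuzzy conjunction of membership degrees is interpreted by a t-norm. A quick sketch of the three fundamental continuous t-norms on which t-norm-based fuzzy DLs are built:

```python
# Sketch: the three fundamental continuous t-norms used in t-norm-based
# fuzzy DL semantics. Each combines two membership degrees from [0, 1]
# into the degree of their conjunction.

def goedel(x, y):
    return min(x, y)               # Goedel (minimum) t-norm

def product(x, y):
    return x * y                   # product t-norm

def lukasiewicz(x, y):
    return max(0.0, x + y - 1.0)   # Lukasiewicz t-norm

for t in (goedel, product, lukasiewicz):
    print(t.__name__, t(0.7, 0.6))
# goedel 0.6, product ~0.42, lukasiewicz ~0.3
```

The choice of t-norm has drastic consequences: as the thesis shows, reasoning with GCIs can be decidable for one t-norm and undecidable for another.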
36. Application of Definability to Query Answering over Knowledge Bases. Kinash, Taras, January 2013.
Answering object queries (i.e. instance retrieval) is a central task in ontology-based data access (OBDA). Performing this task involves reasoning with respect to a knowledge base K (i.e. an ontology) over some description logic (DL) dialect L. As the expressive power of L grows, so does the complexity of reasoning with respect to K. Therefore, eliminating the need to reason with respect to a knowledge base K is desirable.
In this work, we propose an optimization that improves the performance of answering object queries by eliminating the need to reason with respect to the knowledge base and instead utilizing cached query results when possible. In particular, given a DL dialect L, an object query C over some knowledge base K, and a set of cached query results S = {S1, ..., Sn} obtained from evaluating past queries, we rewrite C into an equivalent query D that can be evaluated with respect to an empty knowledge base, using cached query results S' = {Si1, ..., Sim}, where S' is a subset of S. The new query D is an interpolant for the original query C with respect to K and S. To find D, we leverage a tool for enumerating interpolants of a given sentence with respect to some theory. We describe a procedure that maps a knowledge base K, expressed in a description logic dialect of first-order logic, and an object query C into an equivalent theory and query that serve as input to the interpolant-enumerating tool, and that maps the resulting interpolants into an object query D evaluable over an empty knowledge base.
We show the efficacy of our approach through an experimental evaluation on the Lehigh University Benchmark (LUBM) data set, as well as on a synthetic data set, LUBMMOD, that we created by augmenting the LUBM ontology with additional axioms.
37. Tableaux-based revision of ontologies. Dong, Ngoc Nguyen Thinh, 4 July 2017.
The objective of this PhD thesis is to extend existing ontology revision operators in accordance with the AGM postulates (priority to new knowledge, consistency of knowledge, and minimality of change) for ontologies in SHIQ, and to propose new algorithms that overcome the disadvantages of these operators.
After studying the existing approaches, we propose a new tableau algorithm for the revision of ontologies expressed in SHIQ. Together with this new tableau algorithm, we define the notion of finite graph models (tree models or forest models) in order to represent a possibly infinite set of models of a SHIQ ontology. These finite structures, equipped with a total pre-order, make it possible to determine the semantic difference between two ontologies represented as two sets of models.
We have implemented the proposed algorithms in our revision engine OntoRev, integrating optimization techniques for (i) reducing non-determinism when applying the tableau algorithm, (ii) optimizing the computation time of the distance between tree models or between forest models, and (iii) avoiding the construction of forests or trees unnecessary for the revision. In addition, we examined the possibility of improving the tableau method with an approach for compressing tree models. Finally, we carried out experiments with real-world ontologies, which highlighted the difficulty of dealing with intrinsically non-deterministic axioms.
38. Evaluating conjunctive and graph queries over the EL profile of OWL 2. Stefanoni, Giorgio, 2015.
OWL 2 EL is a popular ontology language that is based on the EL family of description logics and supports regular role inclusions, axioms that can capture compositional properties of roles such as role transitivity and reflexivity. In this thesis, we present several novel complexity results and algorithms for answering expressive queries over OWL 2 EL knowledge bases (KBs) with regular role inclusions. We first focus on the complexity of conjunctive query (CQ) answering in OWL 2 EL and show that the problem is PSpace-complete in combined complexity, the complexity measured in the total size of the input. All previously known approaches encode the regular role inclusions using finite automata that can be worst-case exponential in size, and thus are not optimal. In our PSpace procedure, we address this problem by using a novel, succinct encoding of regular role inclusions based on pushdown automata with a bounded stack. Moreover, we strengthen the known PSpace lower complexity bound and show that the problem is PSpace-hard even if we consider only the regular role inclusions as part of the input and the query is acyclic; thus, our algorithm is optimal in knowledge base complexity, the complexity measured in the size of the KB, as well as for acyclic queries. We then study graph queries for OWL 2 EL and show that answering positive, converse-free conjunctive graph queries is PSpace-complete. Thus, from a theoretical perspective, we can add navigational features to CQs over OWL 2 EL without an increase in complexity. Finally, we present a practicable algorithm for answering CQs over OWL 2 EL KBs with only transitive and reflexive composite roles. None of the previously known approaches targets transitive and reflexive roles specifically, and so they all run in PSpace and do not provide a tight upper complexity bound. In contrast, our algorithm is optimal: it runs in NP in combined complexity and in PTime in KB complexity.
We also show that answering CQs is NP-hard in combined complexity if the query is acyclic and the KB contains one transitive role, one reflexive role, or nominals (concepts containing precisely one individual).
39. Ontology module extraction and applications to ontology classification. Armas Romero, Ana, 2015.
Module extraction is the task of computing a (preferably small) fragment M of an ontology O that preserves a class of entailments over a signature of interest Σ. Existing practical approaches ensure that M preserves all second-order entailments of O over Σ, which is a stronger condition than is required in many applications. In the first part of this thesis, we propose a novel approach to module extraction which, based on a reduction to a datalog reasoning problem, makes it possible to compute modules tailored to preserve only specific kinds of entailments. This leads to modules that are often significantly smaller than those produced by other practical approaches, as shown in an empirical evaluation.
In the second part of this thesis, we consider the application of module extraction to the optimisation of ontology classification. Classification is a fundamental reasoning task in ontology design, and there is currently a wide range of reasoners that provide this service. Reasoners aimed at so-called lightweight ontology languages are much more efficient than those aimed at more expressive ones, but they do not offer completeness guarantees for ontologies containing axioms outside the relevant language. We propose an original approach to classification that exploits module extraction techniques to divide the workload between a general-purpose reasoner and a more efficient reasoner for a lightweight language, in such a way that the bulk of the workload is assigned to the latter. We show how the proposed approach can be realised using two particular module extraction techniques, including the one presented in the first part of the thesis. Furthermore, we present the results of an empirical evaluation showing that this approach can lead to a significant performance improvement in many cases.
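The syntactic locality-based modules mentioned above can be sketched, for the simple case where every axiom is an inclusion between conjunctions of concept names, as a fixpoint computation: an axiom whose left-hand side mentions a symbol outside the accumulated signature is bottom-local (its left-hand side collapses to the empty concept) and can be dropped; all other axioms must be kept, and keeping them enlarges the signature. The axiom encoding and the toy ontology are our own illustration:

```python
# Sketch: locality-style module extraction for axioms of the form
# "conjunction of concept names [subsumed by] conjunction of concept names",
# encoded as (lhs_names, rhs_names) pairs of sets.

def extract_module(axioms, signature):
    sig = set(signature)
    module = []
    changed = True
    while changed:                         # iterate to a fixpoint
        changed = False
        for ax in axioms:
            lhs, rhs = ax
            if ax not in module and lhs <= sig:
                module.append(ax)          # not bottom-local: must be kept
                sig |= lhs | rhs           # the module's signature grows
                changed = True
    return module

onto = [
    ({"Cat"}, {"Mammal"}),
    ({"Mammal"}, {"Animal"}),
    ({"Car"}, {"Vehicle"}),    # bottom-local w.r.t. {"Cat"}: dropped
]
m = extract_module(onto, {"Cat"})
print(m)   # keeps only the Cat and Mammal axioms
```

Real locality-based extractors handle the full range of axiom shapes, but the fixpoint structure (keep a non-local axiom, extend the signature, repeat) is the same.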
40. ADLOA: an architecture description language for artificial ontogenetic architectures. Venter, Jade Anthony, 13 October 2014.
M.Com. (Information Technology) / ADLOA is an Architecture Description Language (ADL) proposed to describe biologically inspired complex adaptive architectures such as ontogenetic architectures. The need for an ontogenetic ADL stems from the lack of support in existing ADLs. This dissertation further investigates the similarities between existing intelligent architectures and ontogenetic architectures. The research conducted on current ADLs, artificial ontogeny and intelligent architectures reveals that there are similarities between ontogenetic architectures and other intelligent architectures; however, the dynamism of artificial ontogeny is not adequately supported by existing architecture description approaches. The dissertation therefore proposes two core mechanisms to address ontogenetic architecture description. Firstly, the ADLOA process is defined as a systematisation of artificial ontogeny; the process specifies a uniform approach to defining ontogenetic architectures. Secondly, a demonstration of the implemented ADLOA process is used, in conjunction with the ADLOA model, mechanisms and Graphical User Interface (GUI), to present a workable description environment for software architects. The result of the dissertation is a standalone ADL that can describe ontogenetic architectures and produce language-dependent code frameworks using the Extensible Markup Language (XML) and the Microsoft Visual Studio platform.