21

Semantic Interoperability Of The Un/cefact Ccts Based Electronic Business Document Standards

Kabak, Yildiray 01 July 2009
The interoperability of electronic documents exchanged in eBusiness applications is an important problem in industry. Currently, this problem is handled by mapping experts who understand the meaning of every element in the document schemas involved and define the mappings among them, a costly and tedious process. To improve electronic document interoperability, UN/CEFACT produced the Core Components Technical Specification (CCTS), which defines a common structure and semantic properties for document artifacts. At present, however, this document content information is available only through text-based search mechanisms and tools. In this thesis, the semantics of CCTS-based business document standards is made explicit as an ontology in a formal, machine-processable language. This makes it possible to compute a harmonized ontology, which captures the similarities among the document schema ontology classes of different document standards, both through the semantic properties they share and through the semantic equivalences established by reasoning. As expected, however, the harmonized ontology only helps in discovering the similarities of structurally and semantically equivalent elements. To handle document artifacts that are structurally different but semantically similar, heuristic rules are developed that describe the possible ways of organizing simple document artifacts into compound artifacts, as defined in the CCTS methodology. Finally, the equivalences discovered among the document schema ontologies are used for the semi-automated generation of XSLT definitions that translate real-life document instances.
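The final step, turning discovered equivalences into XSLT, can be sketched roughly as follows; the element names and the one-template-per-equivalence shape are invented for illustration and are not the actual mapping rules produced by the thesis.

```python
# Sketch: generating XSLT element-mapping rules from discovered equivalences.
# The schema element names below are hypothetical, not taken from any real
# CCTS-based standard.

def xslt_from_equivalences(equivalences):
    """Emit one xsl:template per (source element, target element) pair."""
    rules = []
    for source, target in equivalences:
        rules.append(
            '  <xsl:template match="{src}">\n'
            '    <{tgt}><xsl:apply-templates/></{tgt}>\n'
            '  </xsl:template>'.format(src=source, tgt=target)
        )
    return (
        '<xsl:stylesheet version="1.0"\n'
        '    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">\n'
        + "\n".join(rules) + "\n"
        + '</xsl:stylesheet>'
    )

# Equivalences as they might be reported by the harmonized ontology:
mappings = [("BuyerParty", "CustomerParty"), ("SellerParty", "SupplierParty")]
print(xslt_from_equivalences(mappings))
```

A real generator would also have to map the structurally different compound artifacts matched by the heuristic rules, not just one-to-one element renames.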
22

Ontology Learning And Question Answering (qa) Systems

Baskurt, Meltem 01 May 2010
Ontology learning requires deep specialization in the Semantic Web, knowledge representation, search engines, inductive learning, natural language processing, and information storage, extraction, and retrieval. Huge amounts of domain-specific, unstructured online data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services, and they want to express what they search for in their native languages. In this thesis we developed an ontology-based question answering system that addresses these needs, building on research from the areas stated above. The system allows users to ask a question about a restricted domain in natural language and returns the exact answer to the question. A set of questions was collected from users in the domain and, in addition, corresponding question templates were generated on the basis of the domain ontology. When the user asks a question and hits the search button, the system chooses the suitable question template and builds a SPARQL query according to this template. The system is also capable of answering questions that require inference, using generic inference rules defined in a rule file. Our evaluation with ten users shows that the system is extremely simple to use without any training, and yields very good query performance.
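The template-selection and query-building step described above can be sketched as follows; the question patterns, the `:hasAdvisor`/`:taughtBy` properties, and the prefix-less SPARQL shape are invented for illustration, not taken from the thesis's actual templates.

```python
# Sketch of template-based question answering: pick the question template
# matching the user's question, then fill a SPARQL pattern with the entity.
import re

TEMPLATES = [
    # (regex over the normalized question, SPARQL pattern with a {0} slot)
    (r"who (?:is|was) the advisor of (.+)\?",
     "SELECT ?a WHERE {{ :{0} :hasAdvisor ?a . }}"),
    (r"which courses does (.+) teach\?",
     "SELECT ?c WHERE {{ ?c :taughtBy :{0} . }}"),
]

def build_sparql(question):
    q = question.strip().lower()
    for pattern, sparql in TEMPLATES:
        m = re.match(pattern, q)
        if m:
            entity = m.group(1).replace(" ", "_")
            return sparql.format(entity)
    return None  # no template matched the question

print(build_sparql("Who is the advisor of meltem baskurt?"))
```

A production system would resolve the captured phrase against individuals in the ontology rather than naively underscoring it, and would fall back to inference rules when no direct template fits.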
23

Ontology Based Text Mining In Turkish Radiology Reports

Deniz, Onur 01 January 2012
Vast numbers of radiology reports are produced in hospitals. Because they are in free-text format and contain errors introduced by rapid production, it becomes ever harder for radiologists and physicians to reach meaningful information. Although the application of ontologies to biomedical text mining has gained increasing interest in recent years, little work has addressed ontology-based retrieval for the Turkish language. In this work, an information extraction and retrieval system based on the SNOMED-CT ontology is proposed for Turkish radiology reports. The main purpose of this work is to exploit the semantic relations in the ontology to improve the precision and recall of search results in the domain. Practical problems encountered along the way, such as spelling errors and the segmentation and tokenization of unstructured medical reports, have also been addressed.
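One way semantic relations improve recall is query expansion over the ontology's is-a hierarchy; the tiny hierarchy and Turkish terms below are invented for illustration and are not actual SNOMED-CT content.

```python
# Sketch: expand a query term with its transitive subtypes from an is-a
# hierarchy, so reports mentioning a subtype still match the broader query.

SUBTYPES = {  # concept -> direct subtypes (hypothetical toy hierarchy)
    "lezyon": ["kitle", "nodül"],
    "kitle": ["tümör"],
}

def expand(term):
    """Return the term plus all of its transitive subtypes."""
    result, stack = {term}, [term]
    while stack:
        for child in SUBTYPES.get(stack.pop(), []):
            if child not in result:
                result.add(child)
                stack.append(child)
    return result

def search(reports, term):
    terms = expand(term)
    return [r for r in reports if any(t in r for t in terms)]

reports = ["karaciğerde kitle izlendi", "akciğerde nodül mevcut", "bulgu normal"]
print(search(reports, "lezyon"))  # matches via subtypes, not the literal word
```

Plain keyword search for "lezyon" would return nothing here; the ontology-expanded search finds both reports that mention subtypes of the queried concept.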
24

Action, Time and Space in Description Logics

Milicic, Maja 08 September 2008
Description Logics (DLs) are a family of logic-based knowledge representation (KR) formalisms designed to represent and reason about static conceptual knowledge in a semantically well-understood way. Standard action formalisms, on the other hand, are KR formalisms based on classical logic, designed to model and reason about dynamic systems. The largest part of the present work is dedicated to integrating DLs with action formalisms, with the main goal of obtaining decidable action formalisms whose expressiveness goes significantly beyond the propositional level. To this end, we offer DL-tailored solutions to the frame and ramification problems. One of the main technical results is that the standard reasoning problems about actions (executability and projection), as well as the plan existence problem, are decidable if one restricts the logic for describing action pre- and post-conditions and the state of the world to decidable Description Logics. A smaller part of the work is concerned with decidable extensions of Description Logics with concrete datatypes, most importantly those that allow one to refer to the notions of space and time.
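The two standard reasoning problems named above can be illustrated in the propositional restriction the thesis goes beyond; states and conditions here are plain sets of atoms (in the thesis they are DL ABoxes and concepts), and the door/key example is invented.

```python
# Propositional sketch of executability (do the preconditions hold?) and
# projection (does a formula hold after executing the action?).

def executable(state, action):
    return action["pre"] <= state  # all preconditions present in the state

def apply_action(state, action):
    return (state - action["del"]) | action["add"]

def projection(state, action, goal_atom):
    """Does goal_atom hold after executing the action in the given state?"""
    return executable(state, action) and goal_atom in apply_action(state, action)

state = {"door_closed", "have_key"}
open_door = {"pre": {"door_closed", "have_key"},
             "add": {"door_open"}, "del": {"door_closed"}}

print(executable(state, open_door))
print(projection(state, open_door, "door_open"))
```

In the DL setting the state is described incompletely, so projection becomes entailment over all models rather than a lookup in a single state, which is where the frame and ramification problems bite.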
25

Computing Updates in Description Logics

Liu, Hongkai 15 February 2010
Description Logics (DLs) form a family of knowledge representation formalisms that can be used to represent and reason with conceptual knowledge about a domain of interest. The knowledge represented by DLs is mainly static, yet in many applications the domain knowledge is dynamic. This observation motivates research on how to update the knowledge when changes in the application domain take place. This thesis is dedicated to the study of updating knowledge, more precisely, assertional knowledge represented in DLs. We explore whether the updated knowledge can be expressed in several standard DLs and, if so, whether it is computable and what its size is.
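At the level of atomic assertions, an update can be pictured as the new information overriding any directly contradicting old assertion while everything else persists; the triple encoding and the example individuals below are invented for illustration, and real ABox updates (over incomplete knowledge and complex concepts) are far subtler, which is exactly what the thesis studies.

```python
# Sketch: updating a set of atomic assertions. Each assertion is a
# (sign, concept, individual) triple; sign=False means a negated assertion.

def update(abox, new_assertions):
    updated = set(new_assertions)
    for sign, concept, ind in abox:
        # keep an old assertion only if the update does not assert its negation
        if (not sign, concept, ind) not in updated:
            updated.add((sign, concept, ind))
    return updated

abox = {(True, "Employed", "john"), (True, "Person", "john")}
result = update(abox, {(False, "Employed", "john")})
print(sorted(result))
```

The hard questions in the thesis start where this sketch stops: when the ABox only partially describes the world, the updated knowledge may not be expressible in the same DL at all, or may grow large.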
26

Learning Terminological Knowledge with High Confidence from Erroneous Data

Borchmann, Daniel 17 September 2014
Description logic knowledge bases are a popular approach to representing terminological and assertional knowledge in a form suitable for computers to work with. Despite this, the practicality of description logics is impaired by the difficulties one has to overcome to construct such knowledge bases. Previous work has addressed this issue by providing methods to learn valid terminological knowledge from data, making use of ideas from formal concept analysis. A basic assumption there is that the data is free of errors, an assumption that in general cannot be made for practical applications. This thesis extends these results to handle errors in the data. To this end, knowledge that is "almost valid" in the data is retrieved, where the notion of "almost valid" is formalized using the notion of confidence from data mining. The thesis presents two algorithms that achieve this retrieval: the first simply extracts all almost valid knowledge from the data, while the second utilizes expert interaction to distinguish errors from rare but valid counterexamples.
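The confidence measure borrowed from data mining can be computed directly: the confidence of an implication A → B is the fraction of objects satisfying A that also satisfy B. The toy formal context below is invented for illustration.

```python
# Sketch: confidence of an attribute implication in a formal context.
# An erroneous object keeps "human -> mortal" from being fully valid,
# yet the implication is still "almost valid" at a reasonable threshold.

def confidence(objects, premise, conclusion):
    having_premise = [attrs for attrs in objects.values() if premise <= attrs]
    if not having_premise:
        return 1.0  # vacuously valid: no object satisfies the premise
    good = sum(1 for attrs in having_premise if conclusion <= attrs)
    return good / len(having_premise)

objects = {  # object -> attribute set
    "socrates":  {"human", "mortal"},
    "plato":     {"human", "mortal"},
    "aristotle": {"human", "mortal"},
    "bad_entry": {"human"},  # erroneous record: "mortal" is missing
}

print(confidence(objects, {"human"}, {"mortal"}))  # 0.75
```

With a threshold of, say, 0.7, the implication survives the single erroneous record; the thesis's second algorithm would instead ask an expert whether `bad_entry` is an error or a genuine counterexample.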
27

Temporalised Description Logics for Monitoring Partially Observable Events

Lippmann, Marcel 09 July 2014
Inevitably, it becomes more and more important to verify that the systems surrounding us have certain properties. This is indeed unavoidable for safety-critical systems such as power plants and intensive-care units. We use the term system in a broad sense: it may be man-made (e.g. a computer system) or natural (e.g. a patient in an intensive-care unit). Whereas Model Checking assumes complete knowledge about the functioning of the system, we consider an open-world scenario and assume that we can only observe the behaviour of the actual running system through sensors. Such an abstract sensor could sense, e.g., the blood pressure of a patient or the air traffic observed by radar. The observed data are then preprocessed appropriately and stored in a fact base. Based on the data available in the fact base, situation-awareness tools are supposed to help the user detect situations that require intervention by an expert. Such a situation could be that the heart rate of a patient is rather high while the blood pressure is low, or that a collision of two aeroplanes is about to happen. Moreover, the information in the fact base can be used by monitors to verify that the system has certain properties. It is not realistic, however, to assume that the sensors always yield a complete description of the current state of the observed system. Thus, it makes sense to treat information that is not present in the fact base as unknown rather than false. Moreover, one very often has some knowledge about the functioning of the system, and this background knowledge can be used to draw conclusions about the possible future behaviour of the system. Employing description logics (DLs) is one way to deal with these requirements. In this thesis, we tackle the sketched problem in three different contexts: (i) runtime verification using a temporalised DL, (ii) temporalised query entailment, and (iii) verification in DL-based action formalisms.
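The open-world reading of the fact base can be sketched with a three-valued check: a situation fires only when the needed facts are actually present, and a missing measurement yields "unknown" rather than "false". The fact names and thresholds below are invented for illustration.

```python
# Sketch: three-valued situation detection over an incomplete fact base.
# Missing facts are treated as unknown (None), not as false.

UNKNOWN = None

def lookup(fact_base, key):
    return fact_base.get(key, UNKNOWN)

def critical_situation(fact_base):
    """True, False, or UNKNOWN if a needed measurement is missing."""
    hr = lookup(fact_base, "heart_rate")
    bp = lookup(fact_base, "blood_pressure")
    if hr is UNKNOWN or bp is UNKNOWN:
        return UNKNOWN
    return hr > 120 and bp < 90  # "heart rate high while blood pressure low"

print(critical_situation({"heart_rate": 140, "blood_pressure": 80}))  # True
print(critical_situation({"heart_rate": 140}))                        # None
```

The thesis goes well beyond this: background knowledge in the TBox lets the monitor derive facts the sensors never reported, and temporal operators relate fact bases across time points.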
28

Fuzzy Description Logics with General Concept Inclusions

Borgwardt, Stefan 01 July 2014
Description logics (DLs) are used to represent knowledge of an application domain and provide standard reasoning services to infer consequences of this knowledge. However, classical DLs are not suited to represent vagueness in the description of the knowledge. We consider a combination of DLs and Fuzzy Logics to address this task. In particular, we consider the t-norm-based semantics for fuzzy DLs introduced by Hájek in 2005. Since then, many tableau algorithms have been developed for reasoning in fuzzy DLs. Another popular approach is to reduce fuzzy ontologies to classical ones and use existing highly optimized classical reasoners to deal with them. However, a systematic study of the computational complexity of the different reasoning problems is so far missing from the literature on fuzzy DLs. Recently, some of the developed tableau algorithms have been shown to be incorrect in the presence of general concept inclusion axioms (GCIs). In some fuzzy DLs, reasoning with GCIs has even turned out to be undecidable. This work provides a rigorous analysis of the boundary between decidable and undecidable reasoning problems in t-norm-based fuzzy DLs, in particular for GCIs. Existing undecidability proofs are extended to cover large classes of fuzzy DLs, and decidability is shown for most of the remaining logics considered here. Additionally, the computational complexity of reasoning in fuzzy DLs with semantics based on finite lattices is analyzed. For most decidability results, tight complexity bounds can be derived.
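The t-norm-based semantics mentioned above builds on the three fundamental continuous t-norms; a fuzzy conjunction C ⊓ D is interpreted by applying the chosen t-norm to the two membership degrees. The concept names and degrees below are illustrative.

```python
# Sketch: the three fundamental continuous t-norms used in fuzzy DL semantics.

def goedel(x, y):        # Gödel (minimum) t-norm
    return min(x, y)

def product(x, y):       # product t-norm
    return x * y

def lukasiewicz(x, y):   # Łukasiewicz t-norm
    return max(0.0, x + y - 1.0)

# Degree to which an individual belongs to Tall ⊓ Heavy under each semantics:
tall, heavy = 0.8, 0.6
print(goedel(tall, heavy), product(tall, heavy), lukasiewicz(tall, heavy))
```

The choice of t-norm is not cosmetic: as the abstract notes, it decides whether reasoning with GCIs is decidable at all, with the Gödel family generally the best behaved and Łukasiewicz- and product-based logics the source of most undecidability results.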
29

Application of Definability to Query Answering over Knowledge Bases

Kinash, Taras January 2013
Answering object queries (i.e. instance retrieval) is a central task in ontology-based data access (OBDA). Performing this task involves reasoning with respect to a knowledge base K (i.e. ontology) over some description logic (DL) dialect L. As the expressive power of L grows, so does the complexity of reasoning with respect to K. Therefore, eliminating the need to reason with respect to a knowledge base K is desirable. In this work, we propose an optimization that improves the performance of answering object queries by eliminating the need to reason with respect to the knowledge base and instead utilizing cached query results when possible. In particular, given a DL dialect L, an object query C over some knowledge base K, and a set of cached query results S = {S1, ..., Sn} obtained from evaluating past queries, we rewrite C into an equivalent query D that can be evaluated with respect to an empty knowledge base using cached query results S' = {Si1, ..., Sim}, where S' is a subset of S. The new query D is an interpolant for the original query C with respect to K and S. To find D, we leverage a tool for enumerating interpolants of a given sentence with respect to some theory. We describe a procedure that maps a knowledge base K, expressed in a description logic dialect of first-order logic, and an object query C into an equivalent theory and query that are input to the interpolant enumerating tool, and that maps the resulting interpolants into an object query D that can be evaluated over an empty knowledge base. We show the efficacy of our approach through experimental evaluation on the Lehigh University Benchmark (LUBM) data set, as well as on a synthetic data set, LUBMMOD, that we created by augmenting an LUBM ontology with additional axioms.
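The payoff of the rewriting can be pictured as follows: once the (hard) interpolation step has established that a new query is definable from cached extents, answering it is pure set algebra with no KB reasoning. The cache contents, query names, and the intersection-only rewriting shape are invented for illustration.

```python
# Sketch: answering a new object query from cached query extents, assuming
# the definability check has already produced a rewriting as an intersection
# of cached queries.

cache = {  # query name -> answer individuals from past evaluations
    "Student":  {"ann", "bob", "eva"},
    "Employee": {"bob", "eva", "tom"},
}

def answer_from_cache(rewriting):
    """rewriting: names of cached queries whose intersection equals the
    new query (as established by interpolation over K)."""
    result = set(cache[rewriting[0]])
    for name in rewriting[1:]:
        result &= cache[name]
    return result

# Suppose interpolation showed WorkingStudent is equivalent to
# Student ⊓ Employee with respect to K:
print(sorted(answer_from_cache(["Student", "Employee"])))
```

Interpolants in the thesis can of course have richer Boolean structure than a bare intersection; the point of the sketch is only that the expensive DL reasoning happens once, at rewriting time, not at answering time.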
30

Tableaux-based Revision of Ontologies

Dong, Ngoc Nguyen Thinh 04 July 2017
The objective of this PhD thesis is to extend existing ontology revision operators in accordance with the AGM postulates (priority of new knowledge, coherence of knowledge, and minimality of change) to ontologies in SHIQ, and to propose new algorithms that overcome the shortcomings of these operators. After studying the existing approaches, we propose a new tableau algorithm for the revision of ontologies expressed in SHIQ. Together with this algorithm, we define the notion of finite graph models (tree models or forest models) in order to represent a possibly infinite set of models of a SHIQ ontology. These finite structures, equipped with a total pre-order, make it possible to determine the semantic difference between two ontologies represented as two sets of models. We have implemented the proposed algorithms in our revision engine OntoRev, integrating optimization techniques to (i) reduce non-determinism when applying the tableau algorithm, (ii) speed up the computation of the distance between tree models or between forest models, and (iii) avoid constructing forests or trees not needed for the revision. In addition, we examined the possibility of improving the tableau method with an approach for compressing tree models. Finally, we carried out experiments with real-world ontologies, which highlighted the difficulty of dealing with intrinsically non-deterministic axioms.
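The minimal-change principle behind such revision operators can be pictured with a drastically simplified model distance: rank the candidate models of the new knowledge by their distance to a model of the old ontology and keep only the closest ones. Here a "model" is a flat set of facts and the distance is the symmetric-difference size; the thesis instead works with tree and forest models of SHIQ ontologies under a total pre-order, and the example facts are invented.

```python
# Sketch: selecting minimal-change models by symmetric-difference distance.

def distance(model_a, model_b):
    return len(model_a ^ model_b)  # facts on which the two models disagree

def revise(old_model, candidate_models):
    """Keep the candidates closest to the old model (minimality of change)."""
    best = min(distance(old_model, m) for m in candidate_models)
    return [m for m in candidate_models if distance(old_model, m) == best]

old = {"Bird(tweety)", "Flies(tweety)"}
candidates = [  # models of the new knowledge Penguin(tweety)
    {"Bird(tweety)", "Penguin(tweety)"},
    {"Bird(tweety)", "Penguin(tweety)", "Flies(tweety)"},
]
print(revise(old, candidates))  # the model that changes the least survives
```

The hard part the thesis addresses is that SHIQ ontologies can have infinitely many, infinite models, so the comparison must run over the finite tree/forest representations rather than over flat fact sets as above.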
