21. Information Modeling for Intent-based Retrieval of Parametric Finite Element Analysis Models / Udoyen, Nsikan, 23 October 2006
Adaptive reuse of parametric finite element analysis (FEA) models is a common form of reuse that involves integrating new information into an archived FEA model to apply it to a new, similar physical problem. Adaptive reuse of archived FEA models is often motivated by the need to assess the impact of minor improvements to component-based designs, such as the addition of new structural components, or the need to assess new failure modes that arise when a device is redesigned for new operating environments or loading conditions. Successful adaptive reuse of FEA models involves reference to supporting documents that capture the formulation of the model, in order to determine what new information can be integrated and how. However, FEA models and supporting documents are not stored in formats that are semantically rich enough to support automated inference of their relevance to a modeler's needs. The modeler's inability to precisely describe information needs and execute queries based on such requirements results in inefficient queries and time spent manually assessing irrelevant models. The central research question is thus: how do we incorporate a modeler's intent into automated retrieval of FEA models for adaptive reuse?
An automated retrieval method to support adaptive reuse of parametric FEA models has been developed in the research documented in this thesis. The method combines classification-based retrieval, built on ALE subsumption hierarchies that classify models using semantically rich description logic representations of physical problem structure, with a reusability-based ranking method. Conceptual data models have been developed for the representations that support both retrieval and ranking of archived FEA models. The method is validated using representations of FEA models of several classes of electronic chip packages. Experimental results indicate that the properties of the representation methods support effective automation of retrieval functions for FEA models of component-based designs.
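To make the retrieval idea concrete: classification-based retrieval rests on concept subsumption. The sketch below is a minimal, hypothetical illustration, restricted to the conjunction/existential fragment of ALE and to invented concept names; it is not the representation developed in the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """Conjunction of atomic concepts and existential restrictions."""
    atoms: frozenset          # atomic concept names
    exists: tuple = ()        # (role_name, Concept) successors

def subsumes(general: Concept, specific: Concept) -> bool:
    """Structural subsumption: `specific` is classified under `general`."""
    if not general.atoms <= specific.atoms:
        return False
    # every existential demanded by `general` must be matched in `specific`
    return all(
        any(r2 == r and subsumes(filler, f2) for r2, f2 in specific.exists)
        for r, filler in general.exists
    )

def retrieve(query: Concept, archive: dict) -> list:
    """Return archived FEA models whose descriptions fall under the query."""
    return [name for name, desc in archive.items() if subsumes(query, desc)]

# Hypothetical archive: chip-package models described by problem structure.
chip = Concept(frozenset({"ChipPackage"}))
archive = {
    "bga_thermal": Concept(frozenset({"ChipPackage", "ThermalLoad"}),
                           (("hasComponent", Concept(frozenset({"SolderBall"}))),)),
    "qfp_static": Concept(frozenset({"ChipPackage", "StaticLoad"})),
}
print(retrieve(chip, archive))   # both models are ChipPackage problems
```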
22. Tractable reasoning with quality guarantee for expressive description logics / Ren, Yuan, January 2014
DL-based ontologies have been widely used as knowledge infrastructures in knowledge management systems and on the Semantic Web. The development of efficient, sound and complete reasoning technologies has been a central topic in DL research. Recently, the paradigm shift from professional to novice users, and from standalone and static to inter-linked and dynamic applications, raises new challenges: can users build and evolve ontologies, both static and dynamic, with features provided by expressive DLs, while still enjoying efficient reasoning as in tractable DLs, without worrying too much about the quality (soundness and completeness) of the results? To answer these challenges, this thesis investigates the problem of tractable and quality-guaranteed reasoning for ontologies in expressive DLs. The thesis develops syntactic approximation, a consequence-based reasoning procedure with worst-case PTime complexity and theoretically sound, empirically high-recall results, for ontologies constructed in DLs more expressive than any tractable DL. The thesis shows that a set of semantic completeness-guarantee conditions can be identified to efficiently check whether such a procedure is complete. Many ontologies tested in the thesis, including difficult ones for an off-the-shelf reasoner, satisfy these conditions. Furthermore, the thesis presents a stream reasoning mechanism to update reasoning results on dynamic ontologies without complete re-computation. This mechanism implements the Delete-and-Re-derive strategy with a truth maintenance system, and can help to reduce unnecessary over-deletion and re-derivation in stream reasoning and to improve its efficiency. As a whole, the thesis develops a worst-case tractable, guaranteed sound, conditionally complete and empirically high-recall reasoning solution for both static and dynamic ontologies in expressive DLs. Some techniques presented in the thesis can also be used to improve the performance and/or completeness of other existing reasoning solutions. The results can further be generalised and extended to support a wider range of knowledge representation formalisms, especially when a consequence-based algorithm is available.
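The Delete-and-Re-derive (DRed) strategy mentioned above is easiest to see on a toy materialization. The following sketch is a generic, textbook-style illustration rather than the thesis's mechanism: it maintains the transitive closure of a graph under fact deletion by first over-deleting everything whose derivation touched a deleted fact, then re-deriving what is still supported. Steps 1 and 2 are exactly the over-deletion and re-derivation work that the thesis's truth maintenance system aims to reduce.

```python
def closure(edges):
    """Materialize path/2, the transitive closure of edge/2."""
    path = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(path):
            for (y2, z) in list(path):
                if y2 == y and (x, z) not in path:
                    path.add((x, z))
                    changed = True
    return path

def dred_delete(old_edges, old_path, deleted):
    """DRed maintenance of the closure after deleting some edge facts.
    Rules: path(x,z) <- edge(x,z)  and  path(x,z) <- edge(x,y), path(y,z)."""
    new_edges = old_edges - deleted
    # 1. Over-delete: heads of old rule instances whose body used a deleted
    #    or already over-deleted fact.
    over = set(deleted)
    over |= {(x, z) for (x, y) in deleted for (y2, z) in old_path if y2 == y}
    frontier = set(over)
    while frontier:
        frontier = {(x, z) for (x, y) in old_edges
                    for (y2, z) in frontier if y2 == y} - over
        over |= frontier
    over &= old_path
    surviving = old_path - over
    # 2. Re-derive: over-deleted facts still provable in one step.
    rederived = {f for f in over if f in new_edges}
    rederived |= {(x, z) for (x, y) in new_edges
                  for (y2, z) in surviving if y2 == y and (x, z) in over}
    path = surviving | rederived
    # 3. Propagate the re-derived facts back to a fixpoint.
    frontier = set(rederived)
    while frontier:
        frontier = {(x, z) for (x, y) in new_edges
                    for (y2, z) in frontier if y2 == y} - path
        path |= frontier
    return new_edges, path

edges = {("a", "b"), ("b", "c"), ("c", "d")}
new_edges, new_path = dred_delete(edges, closure(edges), {("b", "c")})
assert new_path == closure(new_edges)   # matches a full recomputation
```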
23. Scalable reasoning for description logics / Shearer, Robert D. C., January 2011
Description logics (DLs) are knowledge representation formalisms with well-understood model-theoretic semantics and computational properties. The DL SROIQ provides the logical underpinning for the semantic web language OWL 2, which is quickly becoming the standard for knowledge representation on the web. A central component of most DL applications is an efficient and scalable reasoner, which provides services such as consistency testing and classification. Despite major advances in DL reasoning algorithms over the last decade, however, ontologies are still encountered in practice that cannot be handled by existing DL reasoners. We present a novel reasoning calculus for the description logic SROIQ which addresses two of the major sources of inefficiency present in the tableau-based reasoning calculi used in state-of-the-art reasoners: unnecessary nondeterminism and unnecessarily large model sizes. Further, we describe a new approach to classification which exploits partial information about the subsumption relation between concept names to reduce both the number of individual subsumption tests performed and the cost of working with large ontologies; our algorithm is applicable to the general problem of deducing a quasi-ordering from a sequence of binary comparisons. We also present techniques for extracting partial information about the subsumption relation from the models generated by constructive DL reasoning methods, such as our hypertableau calculus. Empirical results from a prototypical implementation demonstrate substantial performance improvements compared to existing algorithms and implementations.
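The classification strategy can be illustrated independently of the hypertableau calculus. In this hypothetical sketch, `oracle` stands in for an expensive reasoner call; known positive subsumptions (e.g., extracted from told axioms or from generated models) are closed under transitivity, and each negative answer is propagated as well, so many pairs are never tested. This is only the pruning idea, not the thesis's algorithm.

```python
def classify(names, oracle, known_pos=()):
    """Compute all subsumptions between `names`, asking `oracle` only
    when the answer is not already forced by earlier results."""
    pos = {(a, a) for a in names} | set(known_pos)
    neg = set()

    def close_pos():
        changed = True
        while changed:
            changed = False
            for (a, b) in list(pos):
                for (b2, c) in list(pos):
                    if b2 == b and (a, c) not in pos:
                        pos.add((a, c))      # transitivity: a <= b <= c
                        changed = True

    close_pos()
    tests = 0
    for a in names:
        for b in names:
            if (a, b) in pos or (a, b) in neg:
                continue                     # outcome already known, skip test
            tests += 1
            if oracle(a, b):
                pos.add((a, b))
                close_pos()
            else:
                neg.add((a, b))
                # a <= c plus c <= b would imply a <= b, so c <= b is false too
                neg |= {(c, b) for (a2, c) in pos if a2 == a}
    return pos, tests

# Invented toy hierarchy; the oracle just looks up the ground truth.
TRUE = {("PhDStudent", "Student"), ("Student", "Person"), ("PhDStudent", "Person")}
subs, tests = classify(["Student", "Person", "PhDStudent"],
                       lambda a, b: (a, b) in TRUE,
                       known_pos=[("PhDStudent", "Student")])
print(tests)   # 3 oracle calls instead of the 6 naive pairwise tests
```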
24. Tractable query answering for description logics via query rewriting / Perez-Urbina, Hector M., January 2010
We consider the problem of answering conjunctive queries over description logic knowledge bases via query rewriting. Given a conjunctive query Q and a TBox T, we compute a new query Q′ that incorporates the semantic consequences of T such that, for any ABox A, evaluating Q over T and A can be done by evaluating the new query Q′ over A alone. We present RQR, a novel resolution-based rewriting algorithm for the description logic ELHIO¬ that generalizes and extends existing approaches. RQR not only handles a spectrum of logics ranging from DL-Lite_core up to ELHIO¬, but is worst-case optimal with respect to data complexity for all of these logics; moreover, given the form of the rewritten queries, their evaluation can be delegated to off-the-shelf (deductive) database systems. We use RQR to derive the novel complexity results that conjunctive query answering for ELHIO¬ and DL-Lite+ is, respectively, PTime-complete and NLogSpace-complete with respect to data complexity. In order to show the practicality of our approach, we present the results of an empirical evaluation. Our evaluation suggests that RQR, enhanced with various straightforward optimizations, can be successfully used in conjunction with a (deductive) database system in order to answer queries over knowledge bases in practice. Moreover, in spite of being a more general procedure, RQR will often produce significantly smaller rewritings than the standard query rewriting algorithm for the DL-Lite family of logics.
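The flavor of rewriting is easy to show for the simplest case the approach subsumes: atomic concept inclusions, as in DL-Lite. The sketch below is illustrative only (nowhere near ELHIO¬, and not the RQR algorithm itself): it unfolds query atoms against axioms A ⊑ B, producing a union of conjunctive queries that can be evaluated over the data alone.

```python
def rewrite(query, tbox):
    """query: tuple of (predicate, variable) atoms.
    tbox:  iterable of (sub, sup) atomic inclusions, e.g. ("PhDStudent", "Student").
    Returns a union of conjunctive queries capturing the TBox consequences."""
    result, frontier = {query}, [query]
    while frontier:
        q = frontier.pop()
        for i, (pred, var) in enumerate(q):
            for sub, sup in tbox:
                if sup == pred:                       # resolve atom with axiom
                    q2 = q[:i] + ((sub, var),) + q[i + 1:]
                    if q2 not in result:
                        result.add(q2)
                        frontier.append(q2)
    return result

# Q(x) <- Student(x), under PhDStudent <= Student and MScStudent <= Student
ucq = rewrite((("Student", "x"),),
              [("PhDStudent", "Student"), ("MScStudent", "Student")])
print(sorted(ucq))   # three CQs: MScStudent(x) | PhDStudent(x) | Student(x)
```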
25. Automating the multidimensional design of data warehouses / Romero Moral, Oscar, 09 February 2010
Previous experiences in the data warehouse field have shown that the data warehouse multidimensional conceptual schema must be derived from a hybrid approach: i.e., by considering both the end-user requirements and the data sources as first-class citizens. As in any other system, requirements guarantee that the system devised meets the end-user's needs. In addition, since the data warehouse design task is a reengineering process, it must consider the underlying data sources of the organization: (i) to guarantee that the data warehouse can be populated from data available within the organization, and (ii) to allow the end-user to discover unknown additional analysis capabilities.

Currently, several methods for supporting the data warehouse modeling task have been proposed. However, they suffer from significant drawbacks. In short, requirement-driven approaches assume that requirements are exhaustive (and therefore do not consider that the data sources may contain additional interesting evidence for analysis), whereas data-driven approaches (i.e., those leading the design task from a thorough analysis of the data sources) rely on discovering as much multidimensional knowledge as possible from the data sources. As a consequence, data-driven approaches generate too many results, which mislead the user. Furthermore, automation of the design task is essential in this scenario, as it removes the dependency on an expert's ability to properly apply the chosen method, as well as the need to analyze the data sources, which is a tedious and time-consuming task (and can be unfeasible when working with large databases). Current automatable methods follow a data-driven approach, whereas current requirement-driven approaches overlook process automation, since they tend to work with requirements at a high level of abstraction. The same situation is repeated in the data-driven and requirement-driven stages of current hybrid approaches, which suffer from the same drawbacks as the pure approaches.

In this thesis we introduce two different approaches for automating the multidimensional design of the data warehouse: MDBE (Multidimensional Design Based on Examples) and AMDO (Automating the Multidimensional Design from Ontologies). Both were devised to overcome the limitations of current approaches; they start from opposite initial assumptions, but both consider the end-user requirements and the data sources as first-class citizens. 1. MDBE follows a classical approach, in which the end-user requirements are well known beforehand. It benefits from the knowledge captured in the data sources, but guides the design task according to the requirements; consequently, it is able to handle semantically poorer data sources. In other words, given high-quality end-user requirements, we can guide the process from the knowledge they contain and compensate for data sources that do not capture the domain well. 2. AMDO, as its counterpart, assumes a scenario in which the available data sources are semantically rich. The process is therefore guided by a thorough analysis of the data sources, whose output is shaped and adapted according to the end-user requirements. In this context, high-quality data sources compensate for the lack of expressive end-user requirements. Importantly, our methods establish a combined and comprehensive framework that can be used to decide, according to the inputs available in each scenario, which approach to follow. For example, one cannot follow the same approach in a scenario where the end-user requirements are clear and well known as in a scenario where they cannot easily be elicited (which may happen, for instance, when users are not aware of the analysis capabilities of their own sources). Indeed, the need for requirements beforehand is softened by the availability of semantically rich data sources, and vice versa. Thus, the two approaches combined cover all the scenarios discussed in the literature for the data warehouse modeling task.
26. Axiomatizing Confident GCIs of Finite Interpretations / Borchmann, Daniel, 10 September 2012
Constructing description logic ontologies is a difficult task that is normally conducted by experts. Recent results show that parts of ontologies can be constructed from description logic interpretations. However, these results assume the interpretations to be free of errors, which may not be the case for real-world data. To provide a mechanism for handling these errors, the notion of confidence from data mining is introduced into description logics, yielding confident general concept inclusions (confident GCIs) of finite interpretations. The main focus of this work is to prove the existence of finite bases of confident GCIs and to describe some of these bases explicitly.
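For the atomic case, the confidence of a GCI C ⊑ D in a finite interpretation I is |C^I ∩ D^I| / |C^I|, the direct analogue of confidence in association-rule mining. A minimal sketch with an invented interpretation, where one erroneous individual keeps the GCI from holding outright yet leaves it confident:

```python
def confidence(c_ext, d_ext):
    """conf(C <= D) = |C^I & D^I| / |C^I|; vacuously 1 if C^I is empty."""
    return 1.0 if not c_ext else len(c_ext & d_ext) / len(c_ext)

def confident_gcis(extensions, threshold):
    """All atomic GCIs C <= D that hold with confidence >= threshold."""
    return [(c, d, confidence(ec, ed))
            for c, ec in extensions.items()
            for d, ed in extensions.items()
            if c != d and confidence(ec, ed) >= threshold]

# Toy interpretation with one erroneous individual: 19 of 20 cats are mammals.
ext = {"Cat": set(range(20)), "Mammal": set(range(1, 25))}
print(confident_gcis(ext, 0.9))   # [('Cat', 'Mammal', 0.95)]
```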
27. Semantic Interoperability of the UN/CEFACT CCTS Based Electronic Business Document Standards / Kabak, Yildiray, 01 July 2009
The interoperability of the electronic documents exchanged in eBusiness applications is an important problem in industry. Currently, this problem is handled by mapping experts who understand the meaning of every element in the document schemas involved and define the mappings among them, which is a very costly and tedious process. In order to improve electronic document interoperability, the UN/CEFACT produced the Core Components Technical Specification (CCTS), which defines a common structure and semantic properties for document artifacts. However, at present, this document content information is available only through text-based search mechanisms and tools. In this thesis, the semantics of CCTS-based business document standards is explicated through a formal, machine-processable language as an ontology. In this way, it becomes possible to compute a harmonized ontology, which gives the similarities among document schema ontology classes of different document standards through both the semantic properties they share and the semantic equivalences established through reasoning. However, as expected, the harmonized ontology only helps in discovering the similarities of structurally and semantically equivalent elements. In order to handle structurally different but semantically similar document artifacts, heuristic rules are developed describing the possible ways of organizing simple document artifacts into compound artifacts as defined in the CCTS methodology. Finally, the equivalences discovered among document schema ontologies are used for the semi-automated generation of XSLT definitions for the translation of real-life document instances.
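Once equivalences between schema elements have been discovered, emitting the XSLT is largely mechanical. The sketch below is a deliberately naive illustration (value-copy templates only, with invented element names), not the thesis's generator:

```python
def generate_xslt(equivalences):
    """equivalences: list of (source_element, target_element) pairs
    discovered between two document schema ontologies."""
    rules = "\n".join(
        '  <xsl:template match="{src}">\n'
        '    <{dst}><xsl:value-of select="."/></{dst}>\n'
        '  </xsl:template>'.format(src=src, dst=dst)
        for src, dst in equivalences)
    return ('<xsl:stylesheet version="1.0"\n'
            '    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">\n'
            + rules +
            '\n</xsl:stylesheet>')

# Hypothetical equivalence between elements of two document standards.
print(generate_xslt([("cbc:Name", "PartyName")]))
```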
28. Ontology Learning and Question Answering (QA) Systems / Baskurt, Meltem, 01 May 2010
Ontology learning requires deep specialization in the Semantic Web, knowledge representation, search engines, inductive learning, natural language processing, and information storage, extraction and retrieval. A huge amount of domain-specific, unstructured online data needs to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services. They also want to use their native languages to express what they are searching for. In this thesis we developed an ontology-based question answering system that satisfies these needs by building on the research outputs of the areas stated above. The system allows users to enter a question about a restricted domain in natural language and returns the exact answer to the question. A set of questions was collected from users in the domain. In addition to the questions, corresponding question templates were generated on the basis of the domain ontology. When the user asks a question and hits the search button, the system chooses the suitable question template and builds a SPARQL query according to this template. The system is also capable of answering questions that require inference, using generic inference rules defined in a rule file. Our evaluation with ten users shows that the system is extremely simple to use without any training, resulting in very good query performance.
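A hypothetical miniature of the template mechanism (domain, prefixes and properties invented): a regular expression selects the question template, and the captured phrase fills the slot in the corresponding SPARQL skeleton.

```python
import re

# (question pattern, SPARQL skeleton) pairs built over the domain ontology
TEMPLATES = [
    (re.compile(r"who directed (.+?)\??$", re.I),
     'SELECT ?person WHERE {{ ?film rdfs:label "{0}" . '
     '?film ex:hasDirector ?person . }}'),
    (re.compile(r"when was (.+?) released\??$", re.I),
     'SELECT ?date WHERE {{ ?film rdfs:label "{0}" . '
     '?film ex:releaseDate ?date . }}'),
]

def to_sparql(question):
    """Pick the matching template and instantiate its slots."""
    for pattern, skeleton in TEMPLATES:
        match = pattern.match(question.strip())
        if match:
            return skeleton.format(*match.groups())
    return None   # no template matched: fall back, e.g. to keyword search

print(to_sparql("Who directed Metropolis?"))
```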
29. Ontology Based Text Mining in Turkish Radiology Reports / Deniz, Onur, 01 January 2012
Vast amounts of radiology reports are produced in hospitals. Because the reports are in free-text format and contain errors due to rapid production, it becomes increasingly difficult for radiologists and physicians to reach meaningful information. Though the application of ontologies to bio-medical text mining has gained increasing interest in recent years, little work has been offered for ontology-based retrieval tasks in the Turkish language. In this work, an information extraction and retrieval system based on the SNOMED-CT ontology is proposed for Turkish radiology reports. The main purpose of this work is to utilize the semantic relations in the ontology to improve the precision and recall rates of search results in the domain. Practical problems encountered, such as spelling errors and the segmentation and tokenization of unstructured medical reports, have also been addressed.
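Much of the recall improvement from ontology relations comes from expanding a query concept with its subtypes. A minimal sketch over a toy is-a fragment (illustrative terms, not actual SNOMED-CT codes or relations):

```python
IS_A = {   # child -> parents, a toy fragment of an is-a hierarchy
    "bacterial pneumonia": ["pneumonia"],
    "viral pneumonia": ["pneumonia"],
    "pneumonia": ["lung disease"],
}

def descendants(concept):
    """All concepts that are (transitively) subtypes of `concept`."""
    found = set()
    for child, parents in IS_A.items():
        if concept in parents:
            found.add(child)
            found |= descendants(child)
    return found

def expand_query(term):
    """Search for the term plus everything it subsumes: better recall."""
    return {term} | descendants(term)

print(expand_query("lung disease"))
# {'lung disease', 'pneumonia', 'bacterial pneumonia', 'viral pneumonia'}
```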
30. Action, Time and Space in Description Logics / Milicic, Maja, 08 September 2008
Description Logics (DLs) are a family of logic-based knowledge representation (KR) formalisms designed to represent and reason about static conceptual knowledge in a semantically well-understood way. On the other hand, standard action formalisms are KR formalisms based on classical logic designed to model and reason about dynamic systems. The largest part of the present work is dedicated to integrating DLs with action formalisms, with the main goal of obtaining decidable action formalisms whose expressiveness goes significantly beyond propositional logic. To this end, we offer DL-tailored solutions to the frame and ramification problems. One of the main technical results is that the standard reasoning problems about actions (executability and projection), as well as the plan existence problem, are decidable if one restricts the logic for describing action pre- and post-conditions and the state of the world to decidable Description Logics. A smaller part of the work is related to decidable extensions of Description Logics with concrete datatypes, most importantly those allowing one to refer to the notions of space and time.
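A propositional-level caricature of the two standard reasoning problems (the thesis works over DL interpretations, with pre- and post-conditions expressed in a DL; everything below, including the state-as-literal-set view, is a simplification): executability asks whether each action's pre-conditions hold when it is applied, and projection asks whether a query holds after executing the sequence.

```python
from dataclasses import dataclass

@dataclass
class Action:
    pre: frozenset      # literals that must hold before execution
    add: frozenset      # literals made true by the action
    delete: frozenset   # literals made false by the action

def executable(state, actions):
    """Executability: each action's pre-conditions hold when it is applied."""
    for act in actions:
        if not act.pre <= state:
            return False
        state = (state - act.delete) | act.add
    return True

def projection(state, actions, query):
    """Projection: does `query` hold after executing the whole sequence?"""
    for act in actions:
        state = (state - act.delete) | act.add
    return query <= state

# Invented example: booting a device.
boot = Action(pre=frozenset({"PoweredOff"}),
              add=frozenset({"Running"}),
              delete=frozenset({"PoweredOff"}))
state = frozenset({"PoweredOff"})
print(executable(state, [boot]))                           # True
print(projection(state, [boot], frozenset({"Running"})))   # True
```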