91. Quantitative Variants of Language Equations and their Applications to Description Logics (Marantidis, Pavlos, 10 October 2019)
Unification in description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies by finding different concepts that may potentially stand for the same intuitive notion. Together with its special case of matching, unification was first investigated in detail for the DL FL0, where both problems can be reduced to solving certain language equations.
In this thesis, we extend this service in two directions. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that “almost” unify, in order to account for potential small modelling errors. The meaning of “almost” is formalized using distance measures between concepts. We show that approximate unification in FL0 can be reduced to approximately solving language equations, and we devise algorithms for solving the latter problem for particular distance measures.

Furthermore, we make a first step towards integrating background knowledge, formulated in so-called TBoxes, by investigating the special case of matching in the presence of TBoxes of different forms. We obtain a tight complexity bound for the general case, and we prove that the problem becomes easier in a restricted setting. To achieve these bounds, we take advantage of an equivalence characterization of FL0 concepts that is based on formal languages. In addition, we incorporate TBoxes into the computation of concept distances. Even though our results on the approximate setting cannot deal with TBoxes yet, we prepare the framework that future research can build on.

Before we journey to the technical details of the above investigations, we showcase our program in the simpler setting of the equational theory ACUI, where we are also able to combine the two extensions. In the course of studying the above problems, we make heavy use of automata theory and derive novel results there that may be of independent interest.
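To give a flavour of the reduction mentioned above (a standard sketch from the FL0 literature; the concrete concept and role names are chosen for illustration only): every FL0 concept can be rewritten into a normal form in which value restrictions are grouped by concept name, e.g.

\[
\forall r.\forall s.A \;\sqcap\; \forall r.B \;\equiv\; \forall\{rs\}.A \;\sqcap\; \forall\{r\}.B ,
\]

where \(\forall L.A\) abbreviates the conjunction of \(\forall w.A\) for all words \(w\) in the finite language \(L\) over the role names. Two concepts are equivalent iff the associated languages agree for every concept name, so a unification problem \(C \equiv^{?} D\) with concept variables \(X_1,\dots,X_n\) translates, for each concept name, into a language equation of the form

\[
S_0 \cup S_1\!\cdot\! X_1 \cup \dots \cup S_n\!\cdot\! X_n \;=\; T_0 \cup T_1\!\cdot\! X_1 \cup \dots \cup T_n\!\cdot\! X_n .
\]

Approximate unification correspondingly asks for assignments to the \(X_i\) that make the two sides of such equations close with respect to a chosen distance on languages, rather than exactly equal.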
92. To and Fro Between Tableaus and Automata for Description Logics (Hladik, Jan, 14 November 2007)
Description Logics (DLs) are a family of knowledge representation languages with well-defined, logic-based semantics and decidable inference problems, e.g. satisfiability. Two of the most widely used decision procedures for the satisfiability problem are tableau- and automata-based algorithms. Due to their different modes of operation, these two classes have complementary properties: tableau algorithms are well suited for implementation and for showing PSPACE and NEXPTIME complexity results, whereas automata algorithms are particularly useful for showing EXPTIME results. Additionally, automata allow for a theoretically elegant handling of infinite structures, but they are much less suited for implementation. The aim of this thesis is to analyse the reasons for these differences and to find ways of transferring properties between the two approaches in order to reconcile the positive properties of both. Among other things, we develop methods that enable us to show PSPACE results with the help of automata and to automatically derive an EXPTIME result from a tableau algorithm.
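As a small, self-contained illustration of the tableau side of this comparison, the following Python sketch decides satisfiability of ALC concept descriptions without a TBox (so no blocking is needed). It is a toy version of the textbook tableau procedure, not one of the algorithms developed in the thesis, and the encoding of concepts as nested tuples is an assumption made purely for the example.

```python
# Toy ALC tableau: concept satisfiability without a TBox.
# Concepts (in negation normal form, NNF) are nested tuples:
#   ("atom", A), ("not", ("atom", A)), ("and", C, D), ("or", C, D),
#   ("some", r, C), ("all", r, C)

def nnf_neg(c):
    """Negate an NNF concept and push the negation inwards again."""
    tag = c[0]
    if tag == "atom":
        return ("not", c)
    if tag == "not":
        return c[1]
    if tag == "and":
        return ("or", nnf_neg(c[1]), nnf_neg(c[2]))
    if tag == "or":
        return ("and", nnf_neg(c[1]), nnf_neg(c[2]))
    if tag == "some":
        return ("all", c[1], nnf_neg(c[2]))
    return ("some", c[1], nnf_neg(c[2]))          # tag == "all"

def satisfiable(label):
    """Tableau-style check whether the set of NNF concepts `label` is satisfiable."""
    label = set(label)
    changed = True                                 # deterministic and-rule
    while changed:
        changed = False
        for c in list(label):
            if c[0] == "and":
                for d in (c[1], c[2]):
                    if d not in label:
                        label.add(d)
                        changed = True
    for c in label:                                # clash: A and (not A)
        if c[0] == "atom" and ("not", c) in label:
            return False
    for c in label:                                # non-deterministic or-rule
        if c[0] == "or" and c[1] not in label and c[2] not in label:
            return satisfiable(label | {c[1]}) or satisfiable(label | {c[2]})
    for c in label:                                # exists-rule: fresh successor node
        if c[0] == "some":
            succ = {c[2]} | {d[2] for d in label if d[0] == "all" and d[1] == c[1]}
            if not satisfiable(succ):
                return False
    return True

def subsumed_by(c, d):
    """C is subsumed by D iff C and (not D) is unsatisfiable."""
    return not satisfiable({c, nnf_neg(d)})

A = ("atom", "A")
print(satisfiable({("and", ("some", "r", A), ("all", "r", ("not", A)))}))   # False
print(subsumed_by(("and", A, ("atom", "B")), A))                            # True
```

An automata-based procedure would instead construct a tree automaton accepting exactly the tree models of the input concept and test it for emptiness, which is where the complementary EXPTIME behaviour described above comes from.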
93. Mixing Description Logics in Privacy-Preserving Ontology Publishing (Baader, Franz; Nuradiansyah, Adrian, 30 July 2021)
In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an EL instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL EL. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given EL concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker’s knowledge is given by an FL0 or an FLE concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of FL0, it turns out to be actually lower (polynomial) for the more expressive DL FLE.
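To recall the two central notions informally (in a simplified form, with concept names invented for the illustration rather than taken from the paper): an EL concept C is compliant with a policy concept P if C is not subsumed by P, and C is safe for P if conjoining C with any compliant background knowledge still cannot reveal P. A tiny example of the difference:

\[
P = \mathit{Patient} \sqcap \exists \mathit{seenBy}.\mathit{Oncologist}, \qquad
C = \mathit{Patient} \sqcap \exists \mathit{seenBy}.\top .
\]

Here \(C\) is compliant with \(P\) (since \(C \not\sqsubseteq P\)), but it is not safe: the attacker knowledge \(C' = \exists \mathit{seenBy}.\mathit{Oncologist}\) is itself compliant, yet \(C \sqcap C' \sqsubseteq P\).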
94. On the Relation between Conceptual Graphs and Description Logics (Baader, Franz; Molitor, Ralf; Tobies, Stefan, 20 May 2022)
From the introduction:
Conceptual graphs (CGs) are an expressive formalism for representing knowledge about an application domain in a graphical way. Since CGs can express all of first-order predicate logic (FO), they can also be seen as a graphical notation for FO formulae. In knowledge representation, one is usually not only interested in representing knowledge, one also wants to reason about the represented knowledge. For CGs, one is, for example, interested in validity of a given graph, and in the question whether one graph subsumes another one. Because of the expressiveness of the CG formalism, these reasoning problems are undecidable for general CGs. In the literature [Sow84, Wer95, KS97] one can find complete calculi for validity of CGs, but implementations of these calculi have the same problems as theorem provers for FO: they may not terminate for formulae that are not valid, and they are very inefficient. To overcome this problem, one can either employ incomplete reasoners, or try to find decidable (or even tractable) fragments of the formalism. This paper investigates the second alternative.
The most prominent decidable fragment of CGs is the class of simple conceptual graphs (SGs), which corresponds to the conjunctive, positive, and existential fragment of FO (i.e., existentially quantified conjunctions of atoms). Even for this simple fragment, however, subsumption is still an NP-complete problem [CM92]. SGs that are trees provide for a tractable fragment of SGs, i.e., a class of simple conceptual graphs for which subsumption can be decided in polynomial time [MC93]. In this report, we will identify a tractable fragment of SGs that is larger than the class of trees.
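As a small illustration of this fragment (with predicate names invented for the example), a simple conceptual graph consisting of two concept nodes labelled Person and Student joined by a hasChild relation node corresponds to the existentially quantified conjunction of atoms

\[
\exists x \exists y \,\bigl(\mathit{Person}(x) \wedge \mathit{hasChild}(x,y) \wedge \mathit{Student}(y)\bigr),
\]

and subsumption between two SGs amounts to the existence of a graph homomorphism (a projection) between them, which is the source of the NP-completeness mentioned above.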
Instead of trying to prove new decidability or tractability results for CGs from scratch, our idea was to transfer decidability results from description logics [DLNN97, DLNS96] to CGs. The goal was to obtain a 'natural' sub-class of the class of all CGs in the sense that, on the one hand, this sub-class is defined directly by syntactic restrictions on the graphs, and not by conditions on the first-order formulae obtained by translating CGs into FO, and, on the other hand, is in some sense equivalent to a more or less expressive description logic.
Although description logics (DLs) and CGs are employed in very similar applications (e.g., for representing the semantics of natural language sentences), it turned out that these two formalisms are quite different for several reasons:
(1) conceptual graphs are interpreted as closed FO formulae, whereas DL concept descriptions are interpreted by formulae with one free variable; (2) DLs do not allow for relations of arity > 2; (3) SGs are interpreted by existential sentences, whereas almost all DLs considered in the literature allow for universal quantification; (4) because DLs use a variable-free syntax, certain identifications of variables expressed by cycles in SGs and by co-reference links in CGs cannot be expressed in DLs.

As a consequence of these differences, we could not identify a natural fragment of CGs corresponding to an expressive DL whose decidability was already shown in the literature. We could, however, obtain a new tractability result for a DL corresponding to SGs that are trees. This correspondence result strictly extends the one in [CF98]. In addition, we have extended the tractability result from SGs that are trees to SGs that can be transformed into trees using a certain 'cycle-cutting' operation.

The report is structured as follows. We first introduce the description logic for which we will identify a subclass of equivalent SGs. In Section 3, we recall basic definitions and results on SGs. Thereafter, we introduce a syntactical variant of SGs which allows for directly encoding the support into the graphs (Section 4.1). In order to formalize the equivalence between DLs and SGs, we have to consider SGs with one distinguished node called root (Section 4.2). In Section 5, we finally identify a class of SGs corresponding to a DL that is a strict extension of the DL considered in [CF98].
95. Advanced Reasoning about Dynamical Systems (Gu, Yilan, 17 February 2011)
In this thesis, we study advanced reasoning about dynamical systems in a logical framework, the situation calculus. In particular, we consider improving the efficiency of reasoning about action in the situation calculus from three different aspects.

First, we propose a modified situation calculus based on two-variable predicate logic with counting quantifiers. We show that solving the projection and executability problems via regression in such a language is decidable, and we prove that in general these two problems are co-NExpTime-complete in the modified language. We also consider restricting the format of regressable formulas and basic action theories (BATs) further to gain better computational complexity for reasoning about action via regression, and we mention possible applications to the formalization of Semantic Web services.
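For readers unfamiliar with regression, the following is a standard textbook-style illustration (in the spirit of Reiter's basic action theories, with an invented fluent), not an example taken from the thesis: given the successor state axiom

\[
\mathit{Broken}(x, do(a,s)) \;\equiv\; a = \mathit{drop}(x) \,\vee\, \bigl(\mathit{Broken}(x,s) \wedge a \neq \mathit{repair}(x)\bigr),
\]

regressing the projection query \(\mathit{Broken}(\mathit{vase}, do(\mathit{drop}(\mathit{vase}), S_0))\) replaces the fluent by the right-hand side instantiated with the given action, yielding

\[
\mathit{drop}(\mathit{vase}) = \mathit{drop}(\mathit{vase}) \,\vee\, \bigl(\mathit{Broken}(\mathit{vase}, S_0) \wedge \mathit{drop}(\mathit{vase}) \neq \mathit{repair}(\mathit{vase})\bigr),
\]

a formula about the initial situation only, which is clearly entailed. The decidability and complexity results above concern exactly this kind of reasoning when the underlying language is restricted to two variables with counting quantifiers.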
Then, we propose a hierarchical representation of actions based on the situation calculus to facilitate the development, maintenance and elaboration of very large taxonomies of actions. We show that our axioms can be more succinct, while an extended regression operator can still solve the projection problem. Moreover, such a representation has significant computational advantages: for taxonomies of actions that can be represented as finitely branching trees, the regression operator can sometimes work exponentially faster with our theories than it does with the BATs of the current situation calculus. We also propose a general guideline on how a taxonomy of actions can be constructed from a given set of effect axioms.
Finally, we extend the current situation calculus with order-sorted logic. In the new formalism, we add sort theories to the usual initial theories to describe taxonomies of objects. We then investigate what well-sortedness means for BATs in this framework. We consider extending the current regression operator with well-sortedness checking and unification techniques. With the modified regression, we gain computational efficiency by terminating the regression earlier when reasoning tasks are ill-sorted and by reducing the search spaces for well-sorted objects. We also study the connection between the order-sorted situation calculus and the current situation calculus.
96. Qualitative Distances and Qualitative Description of Images for Indoor Scene Description and Recognition in Robotics (Falomir Llansola, Zoe, 28 November 2011)
The automatic extraction of knowledge from the world by a robotic system, in the way human beings interpret their environment through their senses, is still an unsolved task in Artificial Intelligence. A robotic agent is in contact with the world through its sensors and other electronic components, which obtain and process mainly numerical information. Sonar, infrared and laser sensors obtain distance information. Webcams obtain digital images that are represented internally as matrices of red, green and blue (RGB) colour coordinate values. All these numerical values obtained from the environment need a later interpretation in order to provide the knowledge the robotic agent requires to carry out a task.

Similarly, light wavelengths with specific amplitudes are captured by the cone cells of human eyes, also yielding stimuli without intrinsic meaning. However, the information that human beings can describe and remember from what they see is expressed using words, that is, qualitatively.

The exact process carried out after our eyes perceive light wavelengths and our brain interprets them is largely unknown. What is clear in human cognition, however, is that people go beyond the purely perceptual experience to classify things as members of categories and to attach linguistic labels to them.
As the information provided by all the electronic components incorporated in a robotic agent is numerical, the approaches that first appeared in the literature for interpreting this information followed a mathematical trend. In this thesis, the problem is addressed from the other side: its main aim is to process these numerical data in order to obtain qualitative information, as human beings do.

The research work done in this thesis tries to narrow the gap between the acquisition of low-level information by robot sensors and the need to obtain high-level or qualitative information for enhancing human-machine communication and for applying logical reasoning processes based on concepts. Moreover, qualitative concepts can be given meaning by relating them to other concepts. They can be used for reasoning by applying the qualitative models developed over the last twenty years for describing and interpreting metrical and mathematical concepts such as orientation, distance, velocity, acceleration, and so on. They can also be understood by human users, whether written or read aloud.
The first contributions presented are the definition of a method for obtaining fuzzy distance patterns (which include qualitative distances such as ‘near’, ‘far’, ‘very far’ and so on) from the data obtained by any kind of distance sensor incorporated in a mobile robot, and the definition of a factor to measure the dissimilarity between those fuzzy patterns. Both have been applied to the integration of the distances obtained by the sonar and laser distance sensors incorporated in a Pioneer 2 dx mobile robot and, as a result, special obstacles such as a ‘glass window’ or a ‘mirror’ have been detected. Moreover, the fuzzy distance patterns provided have also been defuzzified in order to obtain a smooth robot speed, and they have been used to classify orientation reference systems into ‘open’ (defining an open space to be explored) or ‘closed’.
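The following Python sketch conveys the general idea of such fuzzy distance patterns and of a dissimilarity factor between them; the membership functions, label ranges and the particular dissimilarity formula are illustrative assumptions, not the exact definitions used in the thesis.

```python
# Toy sketch: fuzzy qualitative distances from numeric sensor readings.
# Membership functions and label ranges (in metres) are illustrative only.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with support [a, d] and core [b, c]."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

LABELS = {
    "very_near": (0.0, 0.0, 0.3, 0.6),
    "near":      (0.3, 0.6, 1.0, 1.5),
    "far":       (1.0, 1.5, 3.0, 4.0),
    "very_far":  (3.0, 4.0, 10.0, 10.0),
}

def fuzzy_pattern(distance_m):
    """Map one numeric reading to a fuzzy pattern: label -> membership degree."""
    return {label: trapezoid(distance_m, *params) for label, params in LABELS.items()}

def dissimilarity(p, q):
    """A simple dissimilarity factor: half the sum of absolute membership differences."""
    return 0.5 * sum(abs(p[l] - q[l]) for l in LABELS)

sonar = fuzzy_pattern(1.2)   # e.g. a sonar reading at some bearing
laser = fuzzy_pattern(2.8)   # a laser reading at the same bearing
print(sonar, laser, dissimilarity(sonar, laser))
```

One plausible use, in line with what the abstract describes, is to flag bearings where the sonar and laser patterns disagree strongly, hinting at special obstacles such as glass or mirrors.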
The second contribution presented is the definition of a model for qualitative image description (QID), which applies the newly defined models for qualitative shape and colour description together with the topology model by Egenhofer and Al-Taha [1992] and the orientation models by Hernández [1991] and Freksa [1992]. This model can qualitatively describe any kind of digital image and is independent of the image segmentation method used. The QID model has been tested in two robotics scenarios: (i) the description of digital images captured by the camera of a Pioneer 2 dx mobile robot, and (ii) the description of digital images of tile mosaics taken by an industrial camera located on a platform used by a robot arm to assemble tile mosaics.
In order to provide a formal and explicit meaning to the qualitative descriptions of the images generated, a Description Logic (DL) based ontology has been designed and is presented as the third contribution. Our approach can automatically process any random image and obtain a set of DL axioms that describe it visually and spatially, and the objects included in the images are classified according to the ontology schema using a DL reasoner. Tests have been carried out using digital images captured by a webcam incorporated in a Pioneer 2 dx mobile robot. The images taken correspond to the corridors of a building at University Jaume I, and objects within them have been classified into ‘walls’, ‘floor’, ‘office doors’ and ‘fire extinguishers’ under different illumination conditions and from different observer viewpoints.
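Purely as an illustration of the kind of axiom such an ontology might contain (the class and property names below are invented, not those of the actual ontology), a defined class can capture sufficient visual conditions so that a DL reasoner classifies matching image objects under it:

\[
\mathit{FireExtinguisher} \;\equiv\; \mathit{ImageObject} \sqcap \exists \mathit{hasColour}.\mathit{Red} \sqcap \exists \mathit{hasShape}.\mathit{VerticallyElongated} .
\]

An image region asserted to be an ImageObject with a red colour and a vertically elongated shape would then be inferred to belong to FireExtinguisher.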
The final contribution is the definition of similarity measures between qualitative descriptions of shape, colour, topology and orientation, and the integration of those measures into a general similarity measure between two qualitative descriptions of images. These similarity measures have been applied to: (i) extract objects with similar shapes from the MPEG7 CE Shape-1 library; (ii) assemble tile mosaics by qualitative shape and colour similarity matching; (iii) compare images of tile compositions; and (iv) compare images of natural landmarks in a mobile robot world for their recognition.

The contributions made in this thesis are only a small step forward in the direction of enhancing robot knowledge acquisition from the world. It is also written with the aim of inspiring others in their research, so that bigger contributions can be achieved in the future and improve the quality of life of our society.
97. Formal Concept Analysis Methods for Description Logics (Sertkaya, Baris, 9 July 2008)
This work presents mainly two contributions to Description Logics (DLs) research by means of Formal Concept Analysis (FCA) methods: supporting bottom-up construction of DL knowledge bases, and completing DL knowledge bases. Its contribution to FCA research is on the computational complexity of computing generators of closed sets.
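As a pointer to the FCA machinery behind these contributions, the sketch below shows the basic derivation operators and the closure operator on attribute sets over a tiny, invented formal context; the generators studied in the thesis are (roughly) attribute sets with a prescribed closure. This is generic FCA folklore, not the thesis' own algorithms.

```python
# Minimal Formal Concept Analysis sketch: derivation operators and closures.
# The formal context below is invented for the example.

CONTEXT = {
    "sparrow": {"bird", "flies", "small"},
    "penguin": {"bird", "swims"},
    "bat":     {"mammal", "flies", "small"},
}
ALL_ATTRIBUTES = set().union(*CONTEXT.values())

def extent(attrs):
    """Objects that have all attributes in `attrs` (derivation B')."""
    return {g for g, owned in CONTEXT.items() if attrs <= owned}

def intent(objs):
    """Attributes shared by all objects in `objs` (derivation A')."""
    return set.intersection(*(CONTEXT[g] for g in objs)) if objs else set(ALL_ATTRIBUTES)

def closure(attrs):
    """The closed attribute set B'' generated by `attrs`."""
    return intent(extent(attrs))

print(sorted(closure({"flies"})))           # ['flies', 'small']
print(sorted(closure({"bird", "flies"})))   # ['bird', 'flies', 'small']
```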
98. Standard and Non-standard reasoning in Description Logics (Brandt, Sebastian-Philipp, 23 May 2006)
The present work deals with Description Logics (DLs), a class of knowledge representation formalisms used to represent and reason about classes of individuals and relations between such classes in a formally well-defined way. We provide novel results in three main directions.

(1) Tractable reasoning revisited: in the 1990s, DL research had largely answered the question of practically relevant yet tractable DL formalisms in the negative. Due to novel application domains, especially the Life Sciences, and a surprising tractability result by Baader, we have revisited this question, this time looking in a new direction: general terminologies (TBoxes) and extensions thereof defined over the DL EL and extensions thereof. As the main positive result, we devise EL++(D)-CBoxes as a tractable DL formalism with optimal expressivity, in the sense that every additional standard DL constructor, every extension of the TBox formalism, or every more powerful concrete domain makes reasoning intractable.

(2) Non-standard inferences for knowledge maintenance: non-standard inferences, such as matching, can support domain experts in maintaining DL knowledge bases in a structured and well-defined way. In order to extend their availability and promote their use, the present work extends the state of the art of non-standard inferences both w.r.t. theory and implementation. Our main results are implementations and performance evaluations of known matching algorithms for the DLs ALE and ALN, optimal non-deterministic polynomial time algorithms for matching under acyclic side conditions in ALN and sublanguages, and optimal algorithms for matching w.r.t. cyclic (and hybrid) EL-TBoxes.

(3) Non-standard inferences over general concept inclusion (GCI) axioms: the utility of GCIs in modern DL knowledge bases and the relevance of non-standard inferences to knowledge maintenance naturally motivate the question of a tractable DL formalism in which both can be provided. As the main result, we propose hybrid EL-TBoxes as a solution to this hitherto open question.
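A schematic matching problem, to make the notion concrete (concept and role names invented): given the concept and the pattern

\[
C = \mathit{Female}\sqcap \exists \mathit{child}.(\mathit{Doctor}\sqcap\mathit{Rich}),
\qquad
D = \mathit{Female}\sqcap \exists \mathit{child}.X ,
\]

the substitution \(\sigma(X)=\mathit{Doctor}\sqcap\mathit{Rich}\) is a matcher, since \(C \equiv \sigma(D)\). Side conditions of the form \(X \sqsubseteq^{?} E\) additionally constrain the concepts that may be substituted for the variables.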
99. Methods and techniques for semantic web knowledge discovery: deductive knowledge acquisition from ontology documents and the semantic profiling technique (Κουτσομητρόπουλος, Δημήτριος, 3 August 2009)
The Semantic Web is a combination of technologies and standards intended to give Web information a strictly defined semantic structure and meaning. Its aim is to enable Web users and automated agents to process, manage and utilize properly described information in intelligent and efficient ways.
Nevertheless, despite the various techniques that have been proposed, there is no clear method that, by taking advantage of Semantic Web technologies, retrieves information deductively, i.e. infers new, implicit information from explicitly expressed facts.
In order to address this situation, the problem of Semantic Web Knowledge Discovery (SWKD) is first specified and introduced. SWKD takes advantage of the semantic underpinnings and semantic descriptions of information, organized in a logic theory (i.e. ontologies expressed in OWL). Through the use of appropriate automated reasoning mechanisms, SWKD then makes it possible to deduce new, unexpressed information that is only implied among explicit facts.
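A minimal, generic example of the kind of implicit fact such deductive acquisition uncovers (the names are illustrative):

\[
\mathcal{T} = \{\, \mathit{Professor} \sqsubseteq \mathit{AcademicStaff} \,\},\quad
\mathcal{A} = \{\, \mathit{Professor}(\mathit{mary}) \,\}
\;\;\models\;\; \mathit{AcademicStaff}(\mathit{mary}),
\]

even though the fact AcademicStaff(mary) is nowhere stated explicitly in the ontology.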
The question as to whether, and to what extent, Semantic Web technologies and logic theory contribute efficiently and expressively enough to the SWKD problem is evaluated through the establishment of an SWKD methodology, which builds upon recent theoretical results as well as on the qualitative and experimental comparison of some popular inference engines based on Description Logics. It is shown that the efficiency and expressivity of this method depend on specific theoretical, organizational and technical limitations.

The experimental verification of this methodology is achieved through the development and demonstration of the Knowledge Discovery Interface (KDI), a web-distributed service that has been successfully applied on experimental data. The results obtained through the KDI confirm, to a certain extent, the assumptions made, mostly about expressivity, and motivate the examination and investigation of the newly proposed extensions to the Semantic Web logic theory, namely the OWL 1.1 language.
In order to strengthen the expressivity of knowledge discovery in the case of particular knowledge domains, a new technique is introduced, known as Semantic Profiling. This technique evolves traditional Metadata Application Profiling from a flat aggregation and mixing of schemata and metadata elements to the substantial extension and semantic enhancement and enrichment of the model on which it is applied. Thus, semantic profiling actually profiles an ontological model for a particular application, not only by bringing together vocabularies from disparate schemata, but also through the semantic intension and semantic refinement of the initial model. This technique and its results are experimentally verified through its application on the CIDOC-CRM cultural heritage information model, and it is shown that, through appropriate methods, the general applicability of the model can be preserved.
However, for SWKD to be of much value, it requires the availability of rich and detailed resource descriptions. Even though information compatible with the Semantic Web logic theory are not always readily available, there are plenty of data organized in flat metadata schemata. To this end, it is investigated whether SWKD can be efficiently and expressively applied on such semi-structured knowledge models, as is the case for example with the Dublin Core metadata schema. It is shown that this problem can be partially reduced to applying semantic profiling on such models and, in order to retain interoperability and resolve potential ambiguities, the OWL 1.1 punning feature is investigated, based on which a name definition may have variable semantic interpretation depending on the ontological context.
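The classic illustration of punning (a textbook example, not one from the thesis) uses a name both as an individual and as a class:

\[
\mathit{Species}(\mathit{Eagle}) \qquad\text{and}\qquad \mathit{Eagle}(\mathit{harry}) .
\]

Under punning, the class reading and the individual reading of Eagle are interpreted independently, so this kind of metamodelling stays within the decidable, first-order semantics; this is why it is investigated here for resolving the ambiguities that arise when profiling flat metadata schemata such as Dublin Core.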
In conclusion, these newly proposed methods can improve the SWKD problem in terms of expressive strength, while keeping complexity as low as possible. They also contribute to the creation of expressive descriptions from existing metadata, suggesting a solution to the Semantic Web bootstrapping problem. Finally, they can be utilized as the basis for implementing more efficient techniques that involve distributed and incremental reasoning.