  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Toward semantic interoperability for software systems

Lister, Kendall January 2008 (has links)
“In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57]

In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences, whether arising from pre-existing assumptions or as side-effects of the process of specification, are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, that is, of semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application.

The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal, heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. The practical outcome was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically sound mechanism for automatically generating task-specific information agents that dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data.

The second work presented revives a previously published and neglected algorithm for inferring semantic correspondences between the fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated, first on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussion of the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and of the ways in which it complements other proposed approaches.

Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal, with a working prototype, for a web site that facilitates the resolution of semantic incompatibilities between software systems prior to deployment. It rests on the commonly accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project's life. The barriers to collaboration in software development are identified, and steps are taken to overcome them. The system draws on the recent successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia, and on a variety of techniques for ontology reconciliation, to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems.

In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility, with a particular focus on interaction between software systems, and between software systems and their users, as well as a detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
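The specific field-correspondence algorithm the abstract revives is not reproduced here, but the underlying idea — inferring which fields of two heterogeneous tables mean the same thing by comparing their data — can be sketched minimally. The sketch below uses Jaccard similarity of value sets as the scoring rule; the tables, field names and scoring function are all invented for illustration and are not the thesis's actual algorithm.

```python
# Hedged sketch: score semantic correspondence between table fields by
# comparing their value sets, then pair each field with its best match.

def jaccard(a, b):
    """Jaccard similarity of two value collections, as sets."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def match_fields(table_a, table_b):
    """Pair each field of table_a with its best-scoring field of table_b.

    Tables are dicts mapping field name -> list of values.
    Returns a dict: field_a -> (best field_b, score).
    """
    matches = {}
    for fa, va in table_a.items():
        best = max(table_b.items(), key=lambda fb: jaccard(va, fb[1]))
        matches[fa] = (best[0], jaccard(va, best[1]))
    return matches

# Two hypothetical "web service" tables using different field names
flights = {"dest": ["MEL", "SYD", "PER"], "fare": ["120", "99", "310"]}
bookings = {"price": ["99", "120", "450"], "city": ["SYD", "MEL", "BNE"]}

print(match_fields(flights, bookings))
# → {'dest': ('city', 0.5), 'fare': ('price', 0.5)}
```

As the abstract notes, such data-driven matching degrades on messy web data, where value overlap between genuinely corresponding fields can be small.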
22

Semantic integration of information systems

MARCOS MAGALHAES MOREIRA 21 January 2004 (has links)
This work presents a semantic approach to information integration based on the OWL ontology language, proposed as a standard language for making diverse information sources compatible. The information integration problem is first presented, and the use of ontologies to solve it is discussed. Strategies for obtaining and extracting ontologies are then identified, with emphasis on database systems. Alternative mappings between the classes, properties and instances of the resulting ontologies are also proposed. Finally, a case study is developed to apply and validate the strategies presented. As a result, an integrator-system architecture is proposed and the implementation of some of its components is discussed.
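The mapping step this abstract describes — recording correspondences between the classes and properties of database-derived ontologies as OWL axioms — can be sketched minimally without any RDF library. The ontology prefixes (`crm:`, `erp:`) and term names below are invented for illustration; only the OWL predicates are standard vocabulary.

```python
# Hedged sketch: express class/property correspondences between two
# source ontologies as OWL mapping triples (subject, predicate, object).

OWL_EQ_CLASS = "owl:equivalentClass"
OWL_EQ_PROP = "owl:equivalentProperty"

def mapping_axioms(correspondences):
    """Turn (source term, target term, kind) correspondences into OWL
    mapping triples, where kind is 'class' or 'property'."""
    pred = {"class": OWL_EQ_CLASS, "property": OWL_EQ_PROP}
    return [(s, pred[kind], t) for s, t, kind in correspondences]

# Correspondences between two hypothetical database-derived ontologies
corr = [
    ("crm:Customer", "erp:Client", "class"),
    ("crm:hasEmail", "erp:email", "property"),
]
for triple in mapping_axioms(corr):
    print(" ".join(triple) + " .")
```

An integrator component of the kind the abstract proposes would load such axioms into a reasoner so that queries against one source's vocabulary also retrieve data described in the other's.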
23

The language of medical reports and its information-lexical analysis

Přečková, Petra January 2011 (has links)
The objective of this dissertation has been the information-lexical analysis of Czech medical reports and an assessment of the usability of international classification systems in the Czech healthcare environment. The analysis of medical reports is based on the attributes of the Minimal Data Model for Cardiology (MDMC), using narrative medical reports and structured medical reports from the ADAMEK software application, together with the SNOMED CT and ICD-10 classification systems. The thesis compares how well the attributes of the MDMC are recorded in narrative versus structured medical reports, and presents a linguistic analysis of the Czech narrative reports. A new application for measuring diversity in medical reports written in any language is proposed, based on general concepts of diversity derived from f-diversity: relative f-diversity, self f-diversity and marginal f-diversity. The thesis concludes that the use of free text in medical reports is neither consistent nor standardized. Standardized terminology would benefit physicians, patients, administrators, software developers and payers, and would help healthcare providers by providing complete and easily accessible information that belongs to the process of...
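The thesis's exact f-diversity definitions are not given in this abstract, but the flavour of a lexical diversity measure over a report can be sketched with Shannon entropy, a standard diversity index of the same general family (f(p) = -p log p summed over term frequencies). The sample "reports" below are invented.

```python
import math

# Hedged sketch: a lexical diversity measure over a medical report,
# using Shannon entropy as one standard diversity index; this is an
# illustration, not the thesis's specific f-diversity definitions.

def term_frequencies(text):
    """Relative frequency of each word in the text."""
    words = text.lower().split()
    total = len(words)
    return {w: words.count(w) / total for w in set(words)}

def shannon_diversity(text):
    """Entropy of the term-frequency distribution: higher means a more
    varied vocabulary, zero means a single repeated term."""
    freqs = term_frequencies(text)
    return -sum(p * math.log(p) for p in freqs.values())

uniform = "stenosis infarction ischemia angina"      # all terms distinct
repetitive = "stenosis stenosis stenosis stenosis"   # one repeated term

print(shannon_diversity(uniform))     # higher: varied vocabulary
print(shannon_diversity(repetitive))  # zero: no lexical diversity
```

Comparing such scores between free-text and structured reports is one way to quantify the inconsistency of free text that the thesis observes.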
25

A Schema and Ontology-Assisted Heterogeneous Information Integration Study

龔怡寧, Kung, Yi-Ning Unknown Date (has links)
The research issues of heterogeneous information integration (HII) have become ubiquitous and critically important in e-business (EB), given the increasing dependence on the Internet/intranets and information technology. Accessing heterogeneous information sources separately, without integration, may lead to chaos in the information retrieved, and it is not cost-effective in EB settings. A common way to deal with heterogeneity in traditional HII is to create a common data model. The eXtensible Markup Language (XML) has become the standard document format for exchanging information on the Web, making it a good candidate for the common data model in integration work; however, XML addresses only structural heterogeneity and can barely handle semantic heterogeneity. Ontologies are regarded as an important and natural means of representing the implicit semantics and relationships of the real world, and they are used in this research to help achieve semantic interoperability in HII.

In this thesis, we propose a generic-construct-oriented (rather than ad hoc) method for generating a global schema, enabling a web-based alternative to traditional HII. We also provide a more intelligent query method over multiple heterogeneous information sources, applying the global-as-view (GAV) approach together with ontologies to enhance both the structural and the semantic interoperability of the underlying sources. We implement the method in a prototype system to demonstrate its validity and feasibility.
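The global-as-view (GAV) approach the abstract applies can be sketched minimally: each relation of the global schema is defined as a view over the sources, so answering a global query means unfolding that view definition. The source names, schemas and data below are invented for illustration.

```python
# Hedged sketch of the GAV idea: the global relation "Product" is
# *defined by* a view over two heterogeneous sources, and a global
# query is answered by unfolding that view definition.

source_db = [{"pid": 1, "pname": "bolt"}]    # e.g. a relational source
source_xml = [{"id": 2, "title": "nut"}]     # e.g. an XML-derived source

def product_view():
    """GAV mapping: Product := union of field-renamed source rows."""
    rows = [{"id": r["pid"], "name": r["pname"]} for r in source_db]
    rows += [{"id": r["id"], "name": r["title"]} for r in source_xml]
    return rows

def query_product(pred):
    """Answer a global query by unfolding the view, then filtering."""
    return [r for r in product_view() if pred(r)]

print(query_product(lambda r: r["id"] == 2))  # → [{'id': 2, 'name': 'nut'}]
```

The ontology layer the thesis adds would sit on top of such views, resolving cases where field renaming alone cannot capture the semantic relationship between source and global terms.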
26

Contribution to interoperable products design and manufacturing information: application to plastic injection products manufacturing

Szejka, Anderson Luis 14 October 2016 (has links)
Global competitiveness has challenged the manufacturing industry to rationalise the ways it brings new products to market in a short lead time, at competitive prices, while ensuring high quality levels. A modern product development process (PDP) requires the simultaneous collaboration of multiple groups producing and exchanging information from multiple perspectives, within and across institutional boundaries. However, semantic interoperability issues have been identified, owing to the heterogeneity of information from these multiple perspectives and of the relationships among them across product development.

This research proposes a conceptual framework for interoperable product design and manufacturing, based on a set of core ontological foundations and semantic mapping approaches. The framework has been instantiated in particular for the design and manufacturing of plastic injection-moulded rotational products, exploring the viewpoints of mouldability, mould design and mould manufacturing. The research explored particular information structures to support design and manufacturing applications; the relationships between these structures were then investigated, and semantic reconciliation was designed through mechanisms to convert, share and translate information across the multiple perspectives. An experimental system was built using the Protégé tool to model the core ontologies, with a Java platform integrated with Jena to develop the user interface. The framework was tested through experiments using rotational plastic products.

This research has shown that rigorously defined information, with well-defined relationships, can ensure the effectiveness of product design and manufacturing in a modern, collaborative PDP.
27

Weaving the semantic web: Contributions and insights

Cregan, Anne, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008 (has links)
The semantic web aims to make the meaning of data on the web explicit and machine processable. Harking back to Leibniz in its vision, it imagines a world of interlinked information that computers 'understand' and 'know' how to process based on its meaning. Spearheaded by the World Wide Web Consortium, the ontology languages OWL and RDF form the core of the current technical offerings. RDF has successfully enabled the construction of virtually unlimited webs of data, whilst OWL gives the ability to express complex relationships between RDF data triples. However, the formal semantics of these languages limit themselves to the aspect of meaning that can be captured by mechanical inference rules, leaving many open questions as to other aspects of meaning and how they might be made machine processable.

The Semantic Web has faced a number of problems that are addressed by the included publications. Its germination within academia and logical semantics has seen it struggle to become familiar, accessible and implementable for the general IT population, so an overview of semantic technologies is provided. Faced with competing 'semantic' languages, such as the ISO's Topic Map standards, a method for building ISO-compliant Topic Maps in the OWL DL language is provided, enabling them to take advantage of the more mature OWL language and tools. Supplementation with rules is needed to deal with many real-world scenarios, and this is explored as a practical exercise. The available syntaxes for OWL have hindered domain experts in ontology building, so a natural-language syntax for OWL designed for use by non-logicians is offered and compared with similar offerings. In recent years, the proliferation of ontologies has resulted in far more than are needed in any given domain space, so a mechanism is proposed to facilitate the reuse of existing ontologies by giving contextual information and leveraging social factors, encouraging wider adoption of common ontologies and achieving interoperability. Lastly, the question of meaning is addressed in relation to the need to define one's terms and to ground one's symbols by anchoring them effectively, ultimately providing the foundation for evolving a 'Pragmatic Web' of action.
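The idea of a natural-language syntax for OWL aimed at non-logicians can be sketched as a tiny verbaliser that renders axioms as controlled English. The axiom tuples and English templates below are invented for illustration; they are not the thesis's actual grammar.

```python
# Hedged sketch: render a few OWL-style axioms as controlled English,
# the way a natural-language syntax for OWL lets non-logicians read
# ontologies. Axiom encoding and phrasing are invented.

def verbalise(axiom):
    """Map an axiom tuple (kind, arg1, arg2) to an English sentence."""
    kind = axiom[0]
    if kind == "SubClassOf":
        return f"Every {axiom[1]} is a {axiom[2]}."
    if kind == "DisjointClasses":
        return f"No {axiom[1]} is a {axiom[2]}."
    if kind == "ObjectPropertyDomain":
        return f"Anything that {axiom[1]} something is a {axiom[2]}."
    raise ValueError(f"unsupported axiom type: {kind}")

axioms = [
    ("SubClassOf", "cat", "mammal"),
    ("DisjointClasses", "cat", "dog"),
    ("ObjectPropertyDomain", "owns", "person"),
]
for a in axioms:
    print(verbalise(a))
```

A full system must also handle the reverse direction (parsing controlled English back into axioms) and disambiguation, which is where comparison with similar offerings becomes interesting.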
28

Methods and techniques for semantic web knowledge discovery: deductive knowledge acquisition from ontology documents and the semantic profiling technique

Κουτσομητρόπουλος, Δημήτριος 03 August 2009 (has links)
The Semantic Web is a combination of technologies and standards intended to give Web information a strictly defined semantic structure and meaning. Its aim is to enable Web users and automated agents to process, manage and utilize properly described information in intelligent and efficient ways. Nevertheless, despite the various techniques that have been proposed, there is no clear method for retrieving Web information deductively, that is, for inferring new, implicit information from explicitly expressed facts by taking advantage of Semantic Web technologies.

To address this situation, the problem of Semantic Web Knowledge Discovery (SWKD) is first specified and introduced. SWKD takes advantage of the semantic underpinnings and semantic descriptions of information, organized in a logic theory (i.e. ontologies expressed in OWL). Through the use of appropriate automated reasoning mechanisms, SWKD then makes it possible to deduce new, unexpressed information that is only implied by the explicit facts. The question of whether, and to what extent, Semantic Web technologies and logic theory contribute efficiently and expressively enough to the SWKD problem is evaluated through the establishment of a SWKD methodology, which builds on recent theoretical results as well as on the qualitative and experimental comparison of popular inference engines based on Description Logics. It is shown that the efficiency and expressivity of this method depend on specific theoretical, organizational and technical limitations.

The experimental verification of this methodology is achieved through the development and demonstration of the Knowledge Discovery Interface (KDI), a web-distributed service that has been successfully applied to experimental data. The results obtained through the KDI confirm, to a certain extent, the assumptions made, mostly about expressivity, and motivate the investigation of the newly proposed extensions to the Semantic Web logic theory, namely the OWL 1.1 language.

To strengthen the expressivity of knowledge discovery in particular knowledge domains, a new technique is introduced, known as Semantic Profiling. This technique evolves traditional Metadata Application Profiling from a flat aggregation and mixing of schemata and metadata elements into a substantial extension and semantic enhancement and enrichment of the model on which it is applied. Thus, semantic profiling tailors an ontological model to a particular application not only by bringing together vocabularies from disparate schemata, but also through the semantic intension and semantic refinement of the initial model. This technique and its results are experimentally verified through application to the CIDOC-CRM cultural heritage information model, and it is shown that, with appropriate methods, the general applicability of the model can be preserved.

However, for SWKD to be of much value, it requires rich and detailed resource descriptions. Even though information directly compatible with the Semantic Web logic theory is not always readily available, there are plenty of data organized in flat metadata schemata. To this end, it is investigated whether SWKD can be applied efficiently and expressively to such semi-structured knowledge models, as is the case, for example, with the Dublin Core metadata schema. It is shown that this problem can be partially reduced to applying semantic profiling to such models; to retain interoperability and resolve potential ambiguities, the OWL 1.1 punning feature is investigated, under which a name may have a variable semantic interpretation depending on the ontological context.

In conclusion, the newly proposed methods can improve the SWKD problem in terms of expressive strength while keeping complexity as low as possible. They also contribute to the creation of expressive descriptions from existing metadata, suggesting a solution to the Semantic Web bootstrapping problem. Finally, they can serve as the basis for implementing more efficient techniques involving distributed and incremental reasoning.
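The punning feature discussed above (introduced with OWL 1.1 and standardized in OWL 2) lets one name be interpreted as a class in some positions and as an individual in others. The toy triples below are invented, and the role-detection rule is a deliberate simplification of what a real reasoner does.

```python
# Hedged sketch of OWL punning: the same name ("Eagle") is used both as
# a class (Harry is an Eagle) and as an individual (Eagle is a Species),
# and the two readings are kept apart purely by syntactic position.

triples = {
    ("Eagle", "rdf:type", "owl:Class"),   # declaration of the class
    ("Harry", "rdf:type", "Eagle"),       # Eagle read as a class
    ("Eagle", "rdf:type", "Species"),     # Eagle read as an individual
}

def roles_of(name, triples):
    """Return the syntactic roles a name plays in rdf:type assertions
    (simplified: declarations via owl:Class are not counted)."""
    roles = set()
    for s, p, o in triples:
        if p != "rdf:type":
            continue
        if s == name and o != "owl:Class":
            roles.add("individual")   # occurs where individuals occur
        if o == name and o != "owl:Class":
            roles.add("class")        # occurs where classes occur
    return roles

print(roles_of("Eagle", triples))
```

Under punning the two readings of "Eagle" share a name but not logical consequences, which is exactly the controlled ambiguity the thesis exploits when lifting flat metadata like Dublin Core into OWL.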
30

Feature-based Approach for Semantic Interoperability of Shape Models

Gupta, Ravi Kumar January 2012 (has links) (PDF)
Semantic interoperability (SI) of a product model refers to the automatic exchange of the meaning associated with product data among applications and domains throughout the product development cycle. In the product development cycle, several applications (engineering design, industrial design, manufacturing, supply chain, marketing, maintenance, etc.) and different engineering domains (mechanical, electrical, electronic, etc.) come into play, making the ability to exchange product data together with its semantics very significant. With product development happening in multiple locations with multiple tools and systems, SI between these systems and domains becomes important.

The thesis presents a feature-based framework for shape models to address the SI issues that arise when exchanging them. The problem of exchanging the semantics associated with a shape model to support the product lifecycle is identified and explained, and the different types of semantic interoperability issues pertaining to shape models are identified and classified. Features in a shape model can be associated with volume addition/subtraction to/from a base solid, with deformation/modification of a base sheet or base surface, or with forming of material of constant thickness. The DIFF model has been extended to represent, classify and extract free-form surface features (FFSFs) and deformation features in a part model. FFSFs are features that modify a free-form surface. Deformation features are created in constant-thickness part models, for example by deformation of material (as in sheet-metal parts) or forming of material (as in injection-moulded parts of constant thickness); they are also referred to as constant-thickness features. The volumetric features covered in the DIFF model have been extended to classify and represent volumetric features based on relative variations of cross-section and PathCurve. A shape-feature ontology is described, based on a unified feature taxonomy with feature definitions and labels as defined in the extended DIFF model.

Feature definitions are used as an intermediate, unambiguous representation for shape features, and the feature ontology is used to capture their semantics. The ontology enables reasoning to handle semantic equivalences between feature labels and is used to map shape features from a source application to target applications. A reasoning framework for identifying semantically equivalent feature labels and representations for a feature being exchanged across multiple applications is presented and discussed. This framework associates multiple construction paths with a feature and attaches applicable meanings from the ontology. An interface is provided to select, for the target application, a feature label from the list of labels semantically equivalent to the feature being exchanged; parameters for the selected label can be mapped from the DIFF representation, and the feature can then be constructed in the target application using that label and the mapped parameters. This work shows that a product model with feature information (feature labels and representations), as understood by the target application, can be exchanged and maintained in such a way that multiple applications can use the product information through labels and representations they understand. Finally, the thesis concludes by summarizing the main contributions and outlining the scope for future work.
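The label-mapping step described above can be sketched minimally: a table of semantically equivalent label groups stands in for the shape-feature ontology, and a feature extracted from a source system is re-labelled in the target system's vocabulary. The application vocabularies, feature labels and equivalence groups below are invented for illustration and are not the thesis's actual taxonomy.

```python
# Hedged sketch: map a source feature label to a semantically equivalent
# label known to a target CAD application, via invented equivalence
# groups playing the role of the shape-feature ontology.

EQUIVALENT_LABELS = [
    {"Pocket", "ClosedPocket", "Cavity"},
    {"Rib", "Stiffener"},
]

def map_label(source_label, target_vocabulary):
    """Return a label from target_vocabulary recorded as semantically
    equivalent to source_label, or None if no equivalence is known."""
    for group in EQUIVALENT_LABELS:
        if source_label in group:
            hits = group & set(target_vocabulary)
            if hits:
                return sorted(hits)[0]   # deterministic pick
    return None

# A target application that only understands these feature labels:
target_app = ["Cavity", "Stiffener", "Hole"]
print(map_label("Pocket", target_app))   # → Cavity
```

In the thesis's framework the equivalence check is done by ontology reasoning rather than a fixed table, and the selected label is accompanied by parameter mapping from the DIFF representation.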
