361

The Design and Implementation of Optimization Approaches for Large Scale Ontology Alignment in SAMBO

Li, Huanyu January 2017 (has links)
The current World Wide Web provides a convenient way for people to acquire information, but it has no ability to manipulate semantics. In other words, people can access data from web pages efficiently, but computer programs cannot effectively reuse and share that data. Tim Berners-Lee, the inventor of the World Wide Web, together with James Hendler and Ora Lassila, proposed the idea of the Semantic Web as an evolution of the existing Web. Knowledge representation for the Semantic Web has developed from the Extensible Markup Language (XML) and the Resource Description Framework (RDF) to ontologies. Many researchers use ontologies to express concepts, relations and relevant semantics in specific domains. However, different researchers may understand the same knowledge differently, which introduces inconsistent information into the same or similar ontologies. SAMBO is an ontology alignment system that was designed and implemented by ADIT at Linköping University in 2005. Shortly after implementation, SAMBO could accomplish most ontology alignment tasks. Nevertheless, as ontologies grew rapidly in scale, SAMBO could no longer handle large-scale ontology alignment. The primary goal of this thesis is to optimize the existing SAMBO system so that it can align large-scale ontologies. The principal parts of this thesis are as follows. First, we analyze two current top ontology alignment systems, AML and LogMap, which are capable of aligning large-scale ontologies; this analysis aims to identify the design features of high-quality systems. Then we analyze the existing SAMBO to determine which aspects need to be optimized, and conclude that SAMBO should be improved in its data structures, database design and parallel matching. We therefore propose and implement a set of optimization approaches. Finally, we evaluate the new system on large-scale ontologies and obtain the desired results.
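The abstract names parallel matching as one of the optimizations. As a rough illustration only (this is not SAMBO's code), the minimal sketch below distributes the pairwise comparison work of a string matcher across processes; the toy ontologies, the `edit_similarity` matcher and the 0.8 threshold are all assumptions.

```python
# Illustrative sketch of parallel string matching for ontology alignment.
# Not SAMBO's implementation; ontologies are reduced to label lists here.
from concurrent.futures import ProcessPoolExecutor
from difflib import SequenceMatcher
from itertools import product

def edit_similarity(pair):
    """Similarity of two concept labels in [0, 1] (assumed matcher)."""
    a, b = pair
    return a, b, SequenceMatcher(None, a.lower(), b.lower()).ratio()

def parallel_match(labels1, labels2, threshold=0.8, workers=4):
    """Score all label pairs in parallel and keep likely mappings."""
    pairs = list(product(labels1, labels2))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(edit_similarity, pairs, chunksize=1000)
    return [(a, b, s) for a, b, s in scores if s >= threshold]

if __name__ == "__main__":
    onto1 = ["blood vessel", "heart valve", "aorta"]
    onto2 = ["Blood Vessel", "cardiac valve", "aortic arch"]
    for a, b, s in parallel_match(onto1, onto2):
        print(f"{a} <-> {b}: {s:.2f}")
```

Because the all-pairs comparison grows quadratically with ontology size, splitting it into chunks is the natural place to parallelize a large-scale matcher.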
362

Comprehensive study of large signaling pathways by clustering trajectories and characterization by semantic analysis

Coquet, Jean 20 December 2017 (has links)
Signaling pathways describe the responses of a cell to external stimuli. They are essential in biological processes such as differentiation, proliferation and apoptosis. Systems biology tries to study these pathways exhaustively using static or dynamic models. The number of solutions explaining a biological phenomenon (for example, the reaction of a cell to a stimulus) can be very high in large models. This thesis first proposes several strategies for grouping the solutions that describe stimulus signaling, using clustering methods and Formal Concept Analysis. It then presents the characterization of these groups using semantic web methods. These strategies were applied to the signaling network of TGF-beta, an extracellular stimulus that plays an important role in cancer development, which allowed five large groups of trajectories to be identified, each involved in different biological processes. Second, this thesis confronts the problem of converting heterogeneous data from different databases into a single formalism so that the previous study can be generalized. It proposes a strategy for grouping the different signaling networks of a database into a single model, making it possible to compute all the signaling trajectories of a stimulus.
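As a minimal sketch of the trajectory-grouping step (not the thesis's actual pipeline), trajectories can be encoded as binary vectors over the reactions they traverse and grouped by hierarchical clustering on Jaccard distance; the encoding, the linkage method and the cluster count below are assumptions.

```python
# Illustrative sketch: grouping signaling trajectories by hierarchical
# clustering on Jaccard distance. Trajectories are encoded here as
# binary vectors over the reactions they traverse (an assumption).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: trajectories; columns: reactions of the signaling network.
trajectories = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=bool)

# Jaccard distance compares trajectories by the reactions they share.
dist = pdist(trajectories, metric="jaccard")
tree = linkage(dist, method="average")

# Cut the dendrogram into a fixed number of trajectory groups.
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 2 2]
```

Each resulting group can then be handed to the semantic-analysis step, which characterizes it by the biological processes its trajectories share.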
363

Semantic Web Vision : survey of ontology mapping systems and evaluation of progress

Saleem, Arshad January 2006 (has links)
The ever-increasing complexity of software systems, and the distributed and dynamic nature of today's enterprise-level computing, have created demand for more self-aware, flexible and robust systems in which human beings can delegate much of their work to software agents. The Semantic Web presents new opportunities for enabling, modeling, sharing and reasoning with knowledge available on the web. These opportunities are made possible through the formal representation of knowledge domains with ontologies. The Semantic Web is a vision of a World Wide Web (WWW)-level knowledge representation system in which each piece of information is equipped with well-defined meaning, enabling software agents to understand and process that information. This, in turn, enables people and software agents to work in a smoother and more collaborative way. In this thesis we first present a detailed overview of the Semantic Web vision by describing the fundamental building blocks that constitute the famous layered architecture of the Semantic Web. We discuss the milestones the Semantic Web vision has achieved so far in research, education and industry and, on the other hand, present some of the social, business and technological barriers standing in the way of this vision becoming reality. We also evaluate how the Semantic Web vision is affecting current technological and research areas such as Web Services, Software Agents, Knowledge Engineering and Grid Computing. In the later part of the thesis we focus on the problem of ontology mapping for agents on the Semantic Web. We precisely define the problem and categorize it on the basis of its syntactic and semantic aspects. Finally, we produce a survey of the current state of the art in ontology mapping research. In the survey we present a selection of ontology mapping systems and describe their functionality in terms of the way they approach the problem, their efficiency, their effectiveness and the part of the problem space they cover. We consider that this survey of the current state of the art in ontology mapping will provide a solid basis for further research in this field.
364

Integration of Recommendation and Partial Reference Alignment Algorithms in a Session based Ontology Alignment System

Qadeer, Shahab January 2011 (has links)
SAMBO is a system that assists users in aligning and merging two ontologies (i.e., in finding inter-ontology relationships). The user performs the alignment process with the help of mapping suggestions. The objective of this thesis work is to extend the existing system with new components: multiple sessions, the integration of an ontology alignment strategy, a recommendation system, the integration of a component that can use results from previous sessions, and the integration of a partial reference alignment (PRA) that can be used to filter mapping suggestions. Most of the theoretical work already existed, but it was important to study and implement how these components could be integrated into the system and how they could work together.
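One way to read the PRA-based filtering described above: suggestions that conflict with mappings already known to be correct are removed before the user sees them. The sketch below is a minimal illustration under that assumption, not SAMBO's implementation.

```python
# Illustrative sketch (not SAMBO's code): filtering mapping suggestions
# with a partial reference alignment (PRA). A PRA is a set of mappings
# already validated as correct; suggestions that duplicate or conflict
# with it are dropped before they reach the user.
def filter_with_pra(suggestions, pra):
    """Keep suggestions consistent with the partial reference alignment."""
    mapped1 = {a for a, _ in pra}
    mapped2 = {b for _, b in pra}
    kept = []
    for a, b in suggestions:
        if (a, b) in pra:
            continue            # already validated, no need to suggest
        if a in mapped1 or b in mapped2:
            continue            # conflicts with a known correct mapping
        kept.append((a, b))
    return kept

pra = {("nose", "nose"), ("ear", "ear")}
suggestions = [("nose", "nostril"), ("eye", "eye"), ("ear", "ear")]
print(filter_with_pra(suggestions, pra))  # [('eye', 'eye')]
```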
365

Semantic Matching for Stream Reasoning

Dragisic, Zlatan January 2011 (has links)
Autonomous systems need to do a great deal of reasoning during execution in order to provide timely reactions to changes in their environment. The data needed for this reasoning process is often provided by a number of sensors. One approach to this kind of reasoning is the evaluation of temporal logical formulas through progression. To evaluate these formulas it is necessary to provide relevant data for each symbol in a formula. Mapping relevant data to the symbols in a formula can be done manually; however, as systems become more complex it becomes harder for a designer to explicitly state and maintain this mapping. Automatic support for mapping data from sensors to symbols would therefore make systems more flexible and easier to maintain. DyKnow is a knowledge processing middleware which supports processing data at different levels of abstraction. The output from the processing components in DyKnow takes the form of streams of information. In DyKnow, reasoning over incrementally available data is done by progressing metric temporal logical formulas. A logical formula contains a number of symbols whose values over time must be collected and synchronized in order to determine the truth value of the formula. Mapping the symbols in a formula to relevant streams is done manually in DyKnow. The purpose of this matching is to find, for each variable, one or more streams whose content matches the intended meaning of the variable. This thesis analyses and provides a solution to the process of semantic matching. The analysis focuses mostly on how existing semantic technologies such as ontologies can be used in this process. The thesis also analyses how this process can be used to match the symbols in a formula to the content of streams on distributed and heterogeneous platforms. Finally, the thesis presents an implementation in the Robot Operating System (ROS). The implementation is tested in two case studies, covering a scenario with only a single platform in the system and a scenario with multiple distributed heterogeneous platforms. The conclusions are that semantic matching represents an important step towards fully automated semantic-based stream reasoning. Our solution also shows that semantic technologies are suitable for establishing machine-readable domain models. The use of these technologies made the semantic matching domain- and platform-independent, as all domain- and platform-specific knowledge is specified in ontologies. Moreover, semantic technologies provide support for integrating data from heterogeneous sources, which makes it possible for platforms to use streams from distributed sources.
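To make the symbol-to-stream matching concrete, here is a minimal sketch (not the DyKnow/ROS implementation): streams are annotated with the ontology concept they produce, and a formula symbol matches every stream whose concept equals or specializes the symbol's concept. The tiny subclass table, stream names and symbol are assumptions.

```python
# Illustrative sketch: matching formula symbols to streams via ontology
# concepts. A symbol matches a stream whose concept equals or is a
# descendant of the symbol's concept.
SUBCLASS_OF = {            # child -> parent
    "BarometricAltitude": "Altitude",
    "GPSAltitude": "Altitude",
    "Altitude": "Measurement",
}

def is_a(concept, ancestor):
    """True if `concept` equals `ancestor` or is a descendant of it."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

def match_symbols(symbol_concepts, stream_registry):
    """For each formula symbol, list streams with compatible content."""
    return {
        sym: [s for s, c in stream_registry.items() if is_a(c, concept)]
        for sym, concept in symbol_concepts.items()
    }

streams = {"/uav1/baro": "BarometricAltitude", "/uav1/gps_alt": "GPSAltitude"}
print(match_symbols({"alt[uav1]": "Altitude"}, streams))
# {'alt[uav1]': ['/uav1/baro', '/uav1/gps_alt']}
```

Keeping the subclass table in an ontology rather than in code is what makes the matching domain- and platform-independent, as the abstract notes.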
366

DBpedia Type and Entity Detection Using Word Embeddings and N-gram Models

Zhou, Hanqing January 2018 (has links)
Nowadays, knowledge bases are increasingly used in Semantic Web tasks, such as knowledge acquisition (Hellmann et al., 2013), disambiguation (Garcia et al., 2009) and named entity corpus construction (Hahm et al., 2014), to name a few. DBpedia is playing a central role on the linked open data cloud; therefore, the quality of this knowledge base is becoming a central point of focus. However, there are some issues with the quality of DBpedia. In particular, DBpedia suffers from three major types of problems: a) invalid types for entities, b) missing types for entities, and c) invalid entities in the resources' descriptions. In order to enhance the quality of DBpedia, it is important to detect these invalid types and resources, as well as to complete missing types. The three main goals of this thesis are: a) invalid entity type detection, to solve the problem of invalid DBpedia types for entities; b) automatic detection of the types of entities, to solve the problem of missing DBpedia types for entities; and c) invalid entity detection, to solve the problem of invalid entities in the resource description of a DBpedia entity. We compare several methods for the detection of invalid types, the automatic typing of entities, and the detection of invalid entities in resource descriptions. In particular, we compare different classification and clustering algorithms based on various sets of features: entity embedding features (Skip-gram and CBOW models) and traditional n-gram features. We present evaluation results for 358 DBpedia classes extracted from the DBpedia ontology. The main contribution of this work consists of the development of automatic invalid type detection, automatic entity typing, and automatic invalid entity detection methods using clustering and classification. Our results show that entity embedding models usually perform better than n-gram models, especially the Skip-gram embedding model.
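A minimal sketch of the classification side of this approach (not the thesis's pipeline): a classifier over entity embedding vectors predicts a DBpedia type, and a strong mismatch between predicted and asserted type can flag an invalid one. The embeddings below are random stand-ins; in practice they would come from a Skip-gram or CBOW model trained on text about the entities.

```python
# Illustrative sketch: typing DBpedia entities with a classifier over
# entity embeddings. Random vectors stand in for real embeddings here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 50

# Assumed pre-trained entity vectors and their known DBpedia types.
train_vecs = rng.normal(size=(200, dim))
train_types = rng.choice(["dbo:Person", "dbo:Place"], size=200)

clf = LogisticRegression(max_iter=1000).fit(train_vecs, train_types)

# An entity with a missing type gets the most probable type; a large
# mismatch with the asserted type can flag an invalid type.
entity_vec = rng.normal(size=(1, dim))
print(clf.predict(entity_vec), clf.predict_proba(entity_vec).max())
```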
367

Incremental reasoning over triple streams

Chevalier, Jules 05 February 2016 (has links)
In this thesis, we propose an architecture for incremental reasoning over triple streams. To ensure scalability, it is composed of independent modules, which allows parallel reasoning: several instances of the same rule can be executed simultaneously to enhance performance. We also focused our efforts on limiting the spread of duplicates in the system, a recurrent issue in reasoning. To achieve this, we designed a shared triplestore which allows each module to filter out duplicates as early as possible. Because triples pass through the architecture's independent modules, the reasoner can receive streams of triples as input. Finally, our architecture is agnostic about the fragment used for the inference. We present three inference modes for our architecture: the first infers all the implicit knowledge as fast as possible; the second prioritizes the inference of certain predetermined kinds of knowledge; the third aims to maximize the number of triples inferred per second. We implemented this architecture in Slider, an incremental reasoner natively supporting the ρdf and RDFS fragments, which can easily be extended to more complex fragments. Our experiments show a performance improvement of more than 65% over the reasoner OWLIM-SE. However, the recently published reasoner RDFox exhibits better performance, although it does not provide prioritized inference. We also conducted tests showing that incremental reasoning with Slider systematically outperforms batch-based reasoning, whatever the ontology and fragment used.
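To illustrate the shape of one such rule module (this is a minimal sketch, not Slider's code), the example below applies the RDFS subclass rule to an incoming triple stream and uses a shared store to filter duplicates as early as possible:

```python
# Illustrative sketch: one rule module of a parallel incremental reasoner.
# It applies the RDFS subclass rule
#   (x rdf:type C1, C1 rdfs:subClassOf C2  =>  x rdf:type C2)
# to a stream of triples, emitting only triples absent from the shared store.
TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def subclass_module(stream, shared_store):
    """Consume triples, emit only *new* inferred rdf:type triples."""
    superclasses = {}                 # C1 -> set of C2
    instances = {}                    # C1 -> set of x
    for s, p, o in stream:
        if p == SUBCLASS:
            superclasses.setdefault(s, set()).add(o)
            pending = [(x, TYPE, o) for x in instances.get(s, ())]
        elif p == TYPE:
            instances.setdefault(o, set()).add(s)
            pending = [(s, TYPE, c) for c in superclasses.get(o, ())]
        else:
            continue
        for triple in pending:
            if triple not in shared_store:   # duplicate filtering
                shared_store.add(triple)
                yield triple

store = set()
stream = [("Cat", SUBCLASS, "Animal"), ("tom", TYPE, "Cat"),
          ("tom", TYPE, "Cat")]                     # duplicate input
print(list(subclass_module(stream, store)))
# [('tom', 'rdf:type', 'Animal')]
```

A full system would feed inferred triples back into the stream so that chains of rules fire across modules; the sketch shows a single module only.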
368

Ontology module extraction and applications to ontology classification

Armas Romero, Ana January 2015 (has links)
Module extraction is the task of computing a (preferably small) fragment M of an ontology O that preserves a class of entailments over a signature of interest ∑. Existing practical approaches ensure that M preserves all second-order entailments of O over ∑, which is a stronger condition than is required in many applications. In the first part of this thesis, we propose a novel approach to module extraction which, based on a reduction to a datalog reasoning problem, makes it possible to compute modules that are tailored to preserve only specific kinds of entailments. This leads to obtaining modules that are often significantly smaller than those produced by other practical approaches, as shown in an empirical evaluation. In the second part of this thesis, we consider the application of module extraction to the optimisation of ontology classification. Classification is a fundamental reasoning task in ontology design, and there is currently a wide range of reasoners that provide this service. Reasoners aimed at so-called lightweight ontology languages are much more efficient than those aimed at more expressive ones, but they do not offer completeness guarantees for ontologies containing axioms outside the relevant language. We propose an original approach to classification based on exploiting module extraction techniques to divide the workload between a general purpose reasoner and a more efficient reasoner for a lightweight language in such a way that the bulk of the workload is assigned to the latter. We show how the proposed approach can be realised using two particular module extraction techniques, including the one presented in the first part of the thesis. Furthermore, we present the results of an empirical evaluation that shows that this approach can lead to a significant performance improvement in many cases.
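To show the shape of the task (this is a coarse syntactic sketch, emphatically not the thesis's datalog-based method), a module can be grown by repeatedly pulling in axioms that mention any symbol of the growing signature; axioms are modeled here simply as the sets of symbols they mention.

```python
# Illustrative sketch: a coarse syntactic module extractor. Axioms are
# frozensets of the symbols they mention; the module collects every
# axiom reachable from the seed signature.
def extract_module(axioms, signature):
    """Grow the signature with axioms that mention any of its symbols."""
    module, sig = [], set(signature)
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            if ax not in module and ax & sig:
                module.append(ax)       # axiom is relevant to the signature
                sig |= ax               # its symbols may pull in more axioms
                changed = True
    return module

axioms = [
    frozenset({"Heart", "Organ"}),        # Heart SubClassOf Organ
    frozenset({"Organ", "BodyPart"}),     # Organ SubClassOf BodyPart
    frozenset({"Car", "Vehicle"}),        # unrelated axiom
]
print(extract_module(axioms, {"Heart"}))
# keeps the Heart and Organ axioms, drops the Car axiom
```

The point of the thesis's datalog-based approach is precisely that such syntactic over-approximations are larger than necessary when only specific kinds of entailments must be preserved.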
369

Aligning Biomedical Ontologies

Tan, He January 2007 (has links)
The amount of biomedical information that is disseminated over the Web increases every day. This rich resource is used to find solutions to challenges across the life sciences. The Semantic Web for life sciences shows promise for effectively and efficiently locating, integrating, querying and inferring related information that is needed in daily biomedical research. One of the key technologies in the Semantic Web is ontologies, which furnish the semantics of the Semantic Web. A large number of biomedical ontologies have been developed. Many of these ontologies contain overlapping information, but it is unlikely that there will eventually be one single set of standard ontologies to which everyone conforms. Therefore, applications often need to deal with multiple overlapping ontologies, but the heterogeneity of ontologies hampers interoperability between them. Aligning ontologies, i.e. identifying relationships between different ontologies, aims to overcome this problem. A number of ontology alignment systems have been developed. In these systems various techniques and ideas have been proposed to facilitate the identification of alignments between ontologies. However, there is still a range of issues to be addressed when tackling alignment problems. The work in this thesis contributes to three different aspects of the identification of high-quality alignments: 1) Ontology alignment strategies and systems. We surveyed the existing ontology alignment systems and proposed a general ontology alignment framework. Most existing systems can be seen as instantiations of the framework. We also developed a system for aligning biomedical ontologies (SAMBO) according to this framework, and implemented various alignment strategies in the system. 2) Evaluation of ontology alignment strategies. We developed and implemented the KitAMO framework for the comparative evaluation of different alignment strategies, and we evaluated different alignment strategies using the implementation. 3) Recommending optimal alignment strategies for different applications. We proposed a method for making such recommendations.
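The general framework mentioned above can be pictured as: matchers compute similarities, a combiner weights them, and a filter keeps pairs above a threshold. The minimal sketch below follows that shape; the two toy matchers, the weights and the threshold are assumptions, not SAMBO's actual strategies.

```python
# Illustrative sketch of a generic alignment framework: matchers ->
# weighted combination -> threshold filtering.
from difflib import SequenceMatcher

SYNONYMS = {("myocardium", "heart muscle")}   # toy background knowledge

def string_matcher(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def synonym_matcher(a, b):
    return 1.0 if (a, b) in SYNONYMS or (b, a) in SYNONYMS else 0.0

def align(concepts1, concepts2, matchers, weights, threshold=0.5):
    """Weighted combination of matcher scores, then threshold filtering."""
    alignment = []
    for a in concepts1:
        for b in concepts2:
            score = sum(w * m(a, b) for m, w in zip(matchers, weights))
            if score >= threshold:
                alignment.append((a, b, round(score, 2)))
    return alignment

print(align(["myocardium"], ["heart muscle", "lung"],
            [string_matcher, synonym_matcher], [0.5, 0.5]))
# keeps ('myocardium', 'heart muscle', ...): the synonym matcher rescues
# a pair that pure string similarity would miss.
```

Seeing alignment systems as instantiations of this pipeline is also what makes comparative evaluation of strategies (as in KitAMO) possible: matchers, weights and filters can be swapped independently.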
370

SWORM : a Semantic Web Object Recognition Model

Minnaar, Ursula 11 October 2011 (has links)
D.Phil. / The Semantic Web is an extension of the current Web. The goal of the Semantic Web is to give information "well-defined meaning, enabling computers and people to work in better cooperation" (Berners-Lee, Hendler, & Lassila, 2001). While the Semantic Web is not artificial intelligence, it does involve defining information in such a way that it can be more easily "understood" by machines. The Semantic Web builds upon the advantages offered by XML, and introduces languages such as the Resource Description Framework to address some of the shortcomings of XML. It uses ontologies to provide a mechanism for information processing on the Web. Object recognition involves the recognition of unknown objects and is usually divided into two types of recognition: object classification and object identification. Classification refers to the categorization of an unknown object into a known group, while identification is the matching of an unknown object against the memory of a known object. Most object recognition techniques, regardless of the recognition type, involve the extraction of some type of processable data from objects, and the subsequent comparison of the extracted information. The research presented in this thesis investigates the possibility of using the languages developed for the Semantic Web to perform some type of object recognition. It is hoped that by treating object recognition as an information management task, the advantages provided by the information-centric Semantic Web can be put to good use. The goal of the research is to determine whether ontology-based descriptions can be created, whether such descriptions can be compared, and to what extent the use of the Semantic Web could enhance information sharing in object recognition. In order to investigate these questions, the research defines the Semantic Web Object Recognition Model. The model provides a recognition framework that uses ontologies to create and compare object descriptions. The model also suggests the use of web agents to perform distributed object comparisons across the relevant domain.
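As a minimal sketch of the create-and-compare idea (not the SWORM implementation), ontology-style object descriptions can be reduced to property-value maps and an unknown object matched against a memory of known ones; the properties, objects and threshold below are assumptions.

```python
# Illustrative sketch: comparing an unknown object's description against
# known object descriptions, simplified to property -> value dictionaries.
def description_similarity(desc1, desc2):
    """Fraction of shared properties on which the two descriptions agree."""
    shared = set(desc1) & set(desc2)
    if not shared:
        return 0.0
    agreeing = sum(1 for p in shared if desc1[p] == desc2[p])
    return agreeing / len(shared)

def classify(unknown, known_objects, threshold=0.5):
    """Match an unknown description against a memory of known objects."""
    best = max(known_objects, key=lambda name:
               description_similarity(unknown, known_objects[name]))
    score = description_similarity(unknown, known_objects[best])
    return (best, score) if score >= threshold else (None, score)

known = {
    "cup":  {"shape": "cylinder", "hasHandle": True,  "material": "ceramic"},
    "ball": {"shape": "sphere",   "hasHandle": False, "material": "rubber"},
}
print(classify({"shape": "cylinder", "hasHandle": True}, known))
# ('cup', 1.0)
```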
