  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Aligning Biomedical Ontologies

Tan, He January 2007 (has links)
The amount of biomedical information disseminated over the Web increases every day. This rich resource is used to find solutions to challenges across the life sciences. The Semantic Web for the life sciences shows promise for effectively and efficiently locating, integrating, querying and inferring related information needed in daily biomedical research. One of the key technologies of the Semantic Web is ontologies, which furnish its semantics. A large number of biomedical ontologies have been developed. Many of these ontologies contain overlapping information, but it is unlikely that there will eventually be one single set of standard ontologies to which everyone conforms. Applications therefore often need to deal with multiple overlapping ontologies, whose heterogeneity hampers interoperability. Aligning ontologies, i.e. identifying relationships between different ontologies, aims to overcome this problem. A number of ontology alignment systems have been developed, proposing various techniques and ideas to facilitate the identification of alignments between ontologies. However, a range of issues remains to be addressed when tackling alignment problems in practice. The work in this thesis contributes to three aspects of identifying high-quality alignments: 1) Ontology alignment strategies and systems. We surveyed existing ontology alignment systems and proposed a general ontology alignment framework; most existing systems can be seen as instantiations of it. We also developed SAMBO, a system for aligning biomedical ontologies built according to this framework, in which we implemented various alignment strategies. 2) Evaluation of ontology alignment strategies. We developed and implemented the KitAMO framework for the comparative evaluation of alignment strategies, and evaluated different strategies using the implementation. 3) Recommending optimal alignment strategies for different applications. We proposed a method for making such recommendations.
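A lexical matcher is one of the simplest alignment strategies a framework like this can instantiate. The sketch below is illustrative only: the term lists are made up, and `difflib` string similarity stands in for the richer strategies an actual alignment system combines.

```python
from difflib import SequenceMatcher

def align(terms_a, terms_b, threshold=0.8):
    """Suggest term pairs from two ontologies whose names are
    lexically similar (a single, very simple alignment strategy)."""
    pairs = []
    for a in terms_a:
        for b in terms_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

# Hypothetical term lists from two overlapping biomedical ontologies.
mesh = ["Neoplasm", "Myocardium", "Nose Diseases"]
snomed = ["Neoplasms", "Myocardial structure", "Disease of nose"]
for a, b, s in align(mesh, snomed, threshold=0.7):
    print(a, "<->", b, s)
```

A real system would combine several such matchers (lexical, structural, background-knowledge-based) and let an evaluation framework compare them.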
302

Using Semantic Web Technology in Requirements Specifications

Kroha, Petr, Labra Gayo, José Emilio 05 November 2008 (has links) (PDF)
In this report, we investigate how methods developed for the Semantic Web could be used in capturing, modeling, developing, checking, and validating requirements specifications. Requirements specification is a complex and time-consuming process. The goal is to describe exactly what the user wants and needs before the next phase of the software development cycle starts. Any failure or mistake in a requirements specification is very expensive, because it causes the development of software parts that are not compatible with the real needs of the user and must be reworked later. When the analysis phase of a project starts, analysts have to discuss the problem to be solved with the customer (users, domain experts) and then write down the requirements found in the form of a textual description. This is a form the customer can understand. However, any textual description of requirements can be (and usually is) incorrect, incomplete, ambiguous, and inconsistent. Later on, the analyst specifies a UML model based on the requirements description written earlier. However, users and domain experts cannot validate the UML model, as most of them do not understand (semi-)formal languages such as UML. It is well known that the most expensive failures in software projects have their roots in requirements specifications. Misunderstanding between analysts, experts, users, and customers (stakeholders) is very common and drives projects over budget. The goal of this investigation is to perform some (at least partial) checking and validation of the UML model using a predefined domain-specific ontology in OWL, and to carry out some checking using assertions in description logic. As we described in our previous papers, we have implemented a tool containing a module (a computational-linguistics component) that can generate a textual requirements description using information from UML models, so that the stakeholders can read it and decide whether the analyst's understanding is right, or how it differs from their own. We argue that the feedback provided by UML model checking (by ontologies and OWL DL reasoning) can have an important impact on the quality of the resulting requirements. This report contains a description and explanation of methods developed and used in Semantic Web technology and a proposed concept for their use in requirements specification. It was written during my sabbatical in Oviedo and should serve as a starting point for theses of our students, who will implement the ideas described here and run experiments concerning the efficiency of the proposed method.
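The checking idea can be pictured with a toy model: the domain ontology licenses certain associations, and any UML association it does not license is flagged. All class and relation names below are hypothetical, and plain dictionaries stand in for OWL axioms and DL reasoning.

```python
# Hypothetical domain ontology: each class lists the associations
# the domain permits (a stand-in for OWL class/property axioms).
ontology = {
    "Customer": {"places": "Order"},
    "Order": {"contains": "Product"},
    "Product": {},
}

# Associations extracted from the analyst's UML model.
uml_associations = [
    ("Customer", "places", "Order"),
    ("Order", "ships", "Customer"),   # not licensed by the ontology
]

def check_model(associations, ontology):
    """Return associations the ontology does not license -- a crude
    stand-in for checking a UML model against OWL DL assertions."""
    violations = []
    for subj, rel, obj in associations:
        allowed = ontology.get(subj, {})
        if allowed.get(rel) != obj:
            violations.append((subj, rel, obj))
    return violations

print(check_model(uml_associations, ontology))
# -> [('Order', 'ships', 'Customer')]
```

Flagged associations are exactly the feedback the report argues should flow back to analysts and stakeholders.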
303

Automatic semantic image annotation and retrieval

Wong, Chun Fan 01 January 2010 (has links)
No description available.
304

[en] AN APPLICATION BUILDER FOR QUERYING RDF/RDFS DATASETS / [pt] GERADOR DE APLICAÇÕES PARA CONSULTAS A BASES RDF/RDFS

MARCELO COHEN DE AZEVEDO 27 July 2010 (has links)
[pt] Com o crescimento da web semântica, cada vez mais bases de dados em RDF contendo todo tipo de informações, nos mais variados domínios, estão disponíveis para acesso na Internet. Para auxiliar o acesso e a integração dessas informações, esse trabalho apresenta uma ferramenta que permite a geração de aplicações para consultas a bases em RDF e RDFS através da programação por exemplo. Usuários podem criar casos de uso através de operações simples em cima do modelo RDFS da própria base. Esses casos de uso podem ser generalizados e compartilhados com outros usuários, que podem reutilizá-los. Com esse compartilhamento, cria-se a possibilidade desses casos de uso serem customizados e evoluídos colaborativamente no próprio ambiente em que foram desenvolvidos. Novas operações também podem ser criadas e compartilhadas, o que contribui para o aumento gradativo do poder da ferramenta. Finalmente, utilizando um conjunto desses casos de uso, é possível gerar uma aplicação web que abstraia o modelo RDF em que os dados estão representados, tornando possível o acesso a essas informações por usuários que não conheçam o modelo RDF. / [en] Due to the increasing popularity of the semantic web, more data sets, containing information about varied domains, have become available for access on the Internet. This thesis proposes a tool to assist in accessing and exploring this information. The tool allows the generation of applications for querying databases in RDF and RDFS through programming by example. Users are able to create use cases through simple operations on the RDFS model of the underlying base. These use cases can be generalized and shared with other users, who can reuse them. The shared use cases can be customized and extended collaboratively in the environment in which they were developed. New operations can also be created and shared, making the tool increasingly more powerful. Finally, using a set of these use cases, it is possible to generate a web application that abstracts the RDF model in which the data is represented, making it possible for lay users to access this information without any knowledge of the RDF model.
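The programming-by-example idea can be sketched as follows: the user demonstrates one concrete fact, and the tool generalizes it into a pattern matched against the whole dataset. The triples and prefixes below are hypothetical, and a plain list stands in for an RDF store.

```python
# A toy RDF dataset (prefixed names are illustrative).
triples = [
    ("dbr:Brazil", "rdf:type", "dbo:Country"),
    ("dbr:Brazil", "dbo:capital", "dbr:Brasilia"),
    ("dbr:France", "rdf:type", "dbo:Country"),
    ("dbr:France", "dbo:capital", "dbr:Paris"),
]

def generalize(example, triples):
    """Given one concrete triple the user picked as an example,
    generalize subject and object into variables and return all
    matches -- the core idea of query-by-example over RDF."""
    _, pred, _ = example
    return [(s, o) for s, p, o in triples if p == pred]

# The user demonstrates one fact; the tool finds every analogous fact.
print(generalize(("dbr:Brazil", "dbo:capital", "dbr:Brasilia"), triples))
# -> [('dbr:Brazil', 'dbr:Brasilia'), ('dbr:France', 'dbr:Paris')]
```

A generated application would wrap such generalized patterns in forms, so end users never see the underlying RDF model.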
305

[en] A NEW APPROACH FOR MINING SOFTWARE REPOSITORIES USING SEMANTIC WEB TOOLS / [pt] UMA NOVA ABORDAGEM DE MINERAÇÃO DE REPOSITÓRIOS DE SOFTWARE UTILIZANDO FERRAMENTAS DA WEB SEMÂNTICA

FERNANDO DE FREITAS SILVA 15 July 2015 (has links)
[pt] A Mineração de Repositórios de Software é um campo de pesquisa que extrai e analisa informações disponíveis em repositórios de software, como sistemas de controle de versão e gerenciadores de issues. Atualmente, diversos trabalhos nesta área de pesquisa têm utilizado as ferramentas da Web Semântica durante o processo de extração a fim de superar algumas limitações que as abordagens tradicionais enfrentam. O objetivo deste trabalho é estender estas abordagens que utilizam a Web Semântica para minerar informações não consideradas atualmente. Uma destas informações é o relacionamento existente entre as revisões do controle de versão e as mudanças que ocorrem no Abstract Syntax Trees dos arquivos modificados por essas revisões. Adicionalmente, esta nova abordagem também permite modelar a interdependência entre os projetos de software, suas licenças e extrair informações dos builds gerados por ferramentas de integração contínua. A validação desta nova abordagem é demonstrada através de um conjunto de questões que são feitas por desenvolvedores e gerentes durante a execução de um projeto e que foram identificadas em vários trabalhos da literatura. Demonstramos como estas questões foram convertidas para consultas SPARQL e como este trabalho consegue responder às questões que não são respondidas ou são respondidas parcialmente em outras ferramentas. / [en] Mining Software Repositories is a field of research that extracts and analyzes information available in software repositories, such as version control systems and issue trackers. Currently, several research works in this area have used Semantic Web tools during the extraction process to overcome limitations that traditional approaches face. The objective of this work is to extend these approaches to mine information they do not currently consider.
One such piece of information is the relationship between version-control revisions and the changes that occur in the Abstract Syntax Trees of the files modified by those revisions. Additionally, this new approach allows modeling the interdependence of software projects and their licenses, and extracting information from builds generated by continuous integration tools. The validation of this approach is demonstrated through a set of questions that developers and managers ask during the execution of a project and that have been identified in various works in the literature. We show how these questions were translated into SPARQL queries and how this work can answer questions that are not answered, or are only partially answered, by other tools.
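The revision-to-AST-change link can be sketched with a few triples; the predicate names below are hypothetical, and a plain list stands in for the RDF graph the approach would query with SPARQL.

```python
# Triples linking version-control revisions to AST-level changes,
# mirroring the kind of graph the approach extracts (names hypothetical).
graph = [
    ("commit:42", "changedNode", "method:parse"),
    ("commit:42", "changeType", "MethodBodyModified"),
    ("commit:57", "changedNode", "method:render"),
    ("commit:57", "changeType", "MethodAdded"),
]

def commits_touching(node, graph):
    """Answer a developer question such as 'which revisions modified
    this method?' -- the SPARQL equivalent would join on the node."""
    return [s for s, p, o in graph if p == "changedNode" and o == node]

print(commits_touching("method:parse", graph))
# -> ['commit:42']
```

Questions that traditional file-level mining answers only partially (e.g. "which commits changed this specific method?") become direct lookups once the graph records AST-level changes.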
306

Gestion de l'incertitude dans le processus d'extraction de connaissances à partir de textes / Uncertainty management in the knowledge extraction process from text

Kerdjoudj, Fadhela 08 December 2015 (has links)
La multiplication de sources textuelles sur le Web offre un champ pour l'extraction de connaissances depuis des textes et à la création de bases de connaissances. Dernièrement, de nombreux travaux dans ce domaine sont apparus ou se sont intensifiés. De ce fait, il est nécessaire de faire collaborer des approches linguistiques, pour extraire certains concepts relatifs aux entités nommées, aspects temporels et spatiaux, à des méthodes issues des traitements sémantiques afin de faire ressortir la pertinence et la précision de l'information véhiculée. Cependant, les imperfections liées au langage naturel doivent être gérées de manière efficace. Pour ce faire, nous proposons une méthode pour qualifier et quantifier l'incertitude des différentes portions des textes analysés. Enfin, pour présenter un intérêt à l'échelle du Web, les traitements linguistiques doivent être multisources et interlingue. Cette thèse s'inscrit dans la globalité de cette problématique, c'est-à-dire que nos contributions couvrent aussi bien les aspects extraction et représentation de connaissances incertaines que la visualisation des graphes générés et leur interrogation. Les travaux de recherche se sont déroulés dans le cadre d'une bourse CIFRE impliquant le Laboratoire d'Informatique Gaspard Monge (LIGM) de l'Université Paris-Est Marne la Vallée et la société GEOLSemantics. 
Nous nous appuyons sur une expérience cumulée de plusieurs années dans le monde de la linguistique (GEOLSemantics) et de la sémantique (LIGM). Dans ce contexte, nos contributions sont les suivantes : - participation au développement du système d'extraction de connaissances de GEOLSemantics, en particulier : (1) le développement d'une ontologie expressive pour la représentation des connaissances, (2) le développement d'un module de mise en cohérence, (3) le développement d'un outil de visualisation graphique ; - l'intégration de la qualification de différentes formes d'incertitude au sein du processus d'extraction de connaissances à partir d'un texte ; - la quantification des différentes formes d'incertitude identifiées ; - une représentation, à l'aide de graphes RDF, des connaissances et des incertitudes associées ; - une méthode d'interrogation SPARQL intégrant les différentes formes d'incertitude ; - une évaluation et une analyse des résultats obtenus avec notre approche / The increase of textual sources over the Web offers an opportunity for knowledge extraction and knowledge base creation. Recently, several research works on this topic have appeared or intensified. They generally highlight that extracting relevant and precise information from text requires a collaboration between linguistic approaches, e.g., to extract certain concepts regarding named entities and temporal and spatial aspects, and methods originating from the field of semantic processing. Moreover, successful approaches also need to qualify and quantify the uncertainty present in the text. Finally, in order to be relevant in the context of the Web, the linguistic processing needs to consider several sources in different languages. This PhD thesis tackles this problem in its entirety, since our contributions cover the extraction and representation of uncertain knowledge as well as the visualization of the generated graphs and their querying.
This research work has been conducted within a CIFRE funding scheme involving the Laboratoire d'Informatique Gaspard Monge (LIGM) of the Université Paris-Est Marne la Vallée and the GEOLSemantics start-up. It leveraged years of accumulated experience in natural language processing (GEOLSemantics) and semantic processing (LIGM). In this context, our contributions are the following: - the integration of a qualification of different forms of uncertainty, based on ontology processing, within the knowledge extraction process; - the quantification of uncertainties based on a set of heuristics; - a representation, using RDF graphs, of the extracted knowledge and its uncertainties; - a SPARQL querying method integrating the different forms of uncertainty; - an evaluation and an analysis of the results obtained using our approach
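The qualification and quantification of uncertainty can be sketched as confidence-annotated triples that queries filter by threshold. All identifiers and scores below are hypothetical, and plain tuples stand in for the RDF representation of uncertainty described above.

```python
# Extracted facts carry a confidence score -- a simple stand-in for
# attaching uncertainty to RDF statements (values are hypothetical).
facts = [
    (("ex:Smith", "ex:locatedIn", "ex:Paris"), 0.92),  # asserted plainly
    (("ex:Smith", "ex:worksFor", "ex:Acme"), 0.55),    # hedged in the text
]

def query(pred, facts, min_confidence=0.0):
    """Return (triple, confidence) pairs for one predicate, keeping
    only those at or above the caller's confidence threshold."""
    return [(t, c) for t, c in facts if t[1] == pred and c >= min_confidence]

print(query("ex:worksFor", facts, min_confidence=0.8))  # filtered out
print(query("ex:worksFor", facts, min_confidence=0.5))  # kept
```

A SPARQL analogue would join each statement with its uncertainty annotation and apply a FILTER on the score, so consumers can trade recall against reliability.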
307

Drug repositioning and indication discovery using description logics

Croset, Samuel January 2014 (has links)
Drug repositioning is the discovery of new indications for approved or failed drugs. This practice is commonly done within the drug discovery process in order to adjust or expand the application line of an active molecule. Nowadays, an increasing number of computational methodologies aim at predicting repositioning opportunities in an automated fashion. Some approaches rely on the direct physical interaction between molecules and protein targets (docking) and some methods consider more abstract descriptors, such as a gene expression signature, in order to characterise the potential pharmacological action of a drug (Chapter 1). On a fundamental level, repositioning opportunities exist because drugs perturb multiple biological entities (on- and off-targets), themselves involved in multiple biological processes. Therefore, a drug can play multiple roles or exhibit various modes of action responsible for its pharmacology. The work done for my thesis aims at characterising these various modes and mechanisms of action for approved drugs, using a mathematical framework called description logics. In this regard, I first specify how living organisms can be compared to complex black box machines and how this analogy can help to capture biomedical knowledge using description logics (Chapter 2). Secondly, the theory is implemented in the Functional Therapeutic Chemical Classification System (FTC - https://www.ebi.ac.uk/chembl/ftc/), a resource defining over 20,000 new categories representing the modes and mechanisms of action of approved drugs. The FTC also indexes over 1,000 approved drugs, which have been classified into the mode of action categories using automated reasoning. The FTC is evaluated against a gold standard, the Anatomical Therapeutic Chemical Classification System (ATC), in order to characterise its quality and content (Chapter 3).
Finally, from the information available in the FTC, a series of drug repositioning hypotheses was generated and made publicly available via a web application (https://www.ebi.ac.uk/chembl/research/ftc-hypotheses). A subset of the hypotheses, related to cardiovascular hypertension as well as to Alzheimer's disease, is discussed in more detail as an example of an application (Chapter 4). The work performed illustrates how valuable new biomedical knowledge can be automatically generated by integrating and leveraging the content of publicly available resources using description logics and automated reasoning. The newly created classification (FTC) is a first attempt to formally and systematically characterise the function or role of approved drugs using the concept of mode of action. The open hypotheses derived from the resource are available to the community to analyse and to use in designing further experiments.
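The classification step can be sketched very loosely: a drug is assigned every category whose required targets it perturbs. The drug, target, and category names below are illustrative only, and set inclusion stands in for DL subsumption checking by a reasoner.

```python
# Drugs perturb targets; categories are defined by the targets they
# require -- a much-simplified sketch of mode-of-action classification.
drug_targets = {
    "aspirin": {"PTGS1", "PTGS2"},
    "atenolol": {"ADRB1"},
}
category_definitions = {
    "anti-inflammatory (COX inhibition)": {"PTGS2"},
    "beta-blocker": {"ADRB1"},
}

def classify(drug_targets, category_definitions):
    """Assign each drug every category whose required targets it
    hits, loosely mimicking automated reasoning over DL definitions."""
    result = {}
    for drug, targets in drug_targets.items():
        result[drug] = [cat for cat, req in category_definitions.items()
                        if req <= targets]
    return result

print(classify(drug_targets, category_definitions))
```

Repositioning hypotheses then arise wherever a drug lands in a mode-of-action category not covered by its approved indications.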
308

Ontology-Driven Self-Organization of Politically Engaged Social Groups

Belák, Václav January 2009 (has links)
This thesis deals with the use of knowledge technologies to support the self-organization of people with joint political goals. It first provides a theoretical background for the development of a social-semantic system intended to support self-organization, and then applies this background to the development of a core ontology and of algorithms supporting the self-organization of people. It also presents the design and implementation of a proof-of-concept social-semantic web application that has been built to test our research. The application stores all data in an RDF store and represents them using the core ontology. Descriptions of content are disambiguated using the WordNet thesaurus. Emerging politically engaged groups can establish themselves as local political initiatives, NGOs, or even new political parties. The system may therefore help people to participate easily in solving the issues that affect them.
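The grouping idea can be sketched as overlap between users' disambiguated goal concepts. The synset-style identifiers below are hypothetical, and simple set intersection stands in for the thesis' matching algorithms.

```python
# Users tag themselves with disambiguated goal concepts (WordNet-style
# synset identifiers, here invented); groups emerge from overlap.
user_goals = {
    "alice": {"clean_energy.n.01", "public_transport.n.01"},
    "bob": {"clean_energy.n.01", "tax_reform.n.01"},
    "carol": {"tax_reform.n.01"},
}

def suggest_groups(user_goals, min_shared=1):
    """Pair users who share at least min_shared goal concepts --
    a toy version of ontology-driven group formation."""
    users = sorted(user_goals)
    pairs = []
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            shared = user_goals[u] & user_goals[v]
            if len(shared) >= min_shared:
                pairs.append((u, v, sorted(shared)))
    return pairs

print(suggest_groups(user_goals))
```

Disambiguating goal descriptions against a thesaurus first is what makes the overlap meaningful: "green power" and "clean energy" can resolve to the same concept.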
309

Integrace CMS Joomla! s Ontopia Knowledge Suite / CMS Joomla! and Ontopia Knowledge Suite Integration

Hazucha, Andrej January 2010 (has links)
The aim of this thesis is to outline issues related to the integration of Content Management Systems with Knowledge Bases built on semantic web technologies. The work begins with a survey of semantic technologies and their use cases, and discusses possibilities and proposals for integrating these technologies into CMSs and collaborative wikis. Since most open-source CMSs are built on the PHP platform, tools written in PHP are preferred. The integration of CMS Joomla! with the Ontopia Knowledge Suite is demonstrated in the practical part of the thesis. The possibility of communicating with other systems that accept HTTP requests is presented as well. Joomla! and OKS communicate through the RESTful TMRAP protocol implemented in OKS; the query language used in this case is tolog. Communication with a SPARQL endpoint or an XML database is also demonstrated. Raw XML returned from a Knowledge Base data source is transformed by user-defined XSLT into (X)HTML fragments. The resulting demo application is included in the SEWEBAR project and makes it possible to incorporate the results of semantically rich queries into the analytical reports of data-mining tasks within the CMS Joomla! interface.
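The role XSLT plays in this pipeline, turning raw XML from the knowledge base into (X)HTML fragments, can be sketched with the standard library. The element names below are hypothetical, not the actual TMRAP or tolog result format.

```python
import xml.etree.ElementTree as ET

# Raw XML as it might come back from a knowledge-base query endpoint
# (element names invented for illustration).
raw = """<topics>
  <topic><name>Association rule</name></topic>
  <topic><name>Data mining</name></topic>
</topics>"""

def to_xhtml_fragment(xml_text):
    """Turn a raw result document into an (X)HTML list fragment --
    the role the user-defined XSLT transformations play."""
    root = ET.fromstring(xml_text)
    items = "".join(f"<li>{t.findtext('name')}</li>"
                    for t in root.findall("topic"))
    return f"<ul>{items}</ul>"

print(to_xhtml_fragment(raw))
# -> <ul><li>Association rule</li><li>Data mining</li></ul>
```

In the actual integration the transformation is a user-defined XSLT stylesheet, so report authors can reshape query results without touching CMS code.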
310

Podpora sémantiky v CMS Drupal / Support of Semantics in CMS Drupal

Kubaliak, Lukáš January 2011 (has links)
This work concerns the support of semantics in well-known content management systems. It describes the possible uses of these technologies and their public accessibility. We find that today's technologies and methods are still in the early stages of public adoption. To improve semantic support in CMS Drupal, we developed a tool that extends its support for semantic formats: it allows CMS Drupal to export its information in the Topic Maps format, using an XTM file.
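The export step can be sketched as serializing content items into XTM-style XML. The element subset below is a simplification for illustration, not the full XTM schema, and the topic data is invented.

```python
# A minimal sketch of exporting CMS content as Topic Maps XTM XML.
def export_xtm(topics):
    """Serialize (id, name) pairs into a simplified XTM document."""
    lines = ['<topicMap xmlns="http://www.topicmaps.org/xtm/">']
    for topic_id, name in topics:
        lines.append(f'  <topic id="{topic_id}">')
        lines.append(f'    <name><value>{name}</value></name>')
        lines.append('  </topic>')
    lines.append('</topicMap>')
    return "\n".join(lines)

print(export_xtm([("t1", "Semantic Web"), ("t2", "Drupal")]))
```

A Drupal module would walk the site's nodes and taxonomy terms to build the topic list before serializing it this way.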
