About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

AGUIA: Um Gerador Semântico de Interface Gráfica do Usuário para Ensaios Clínicos / AGUIA: A Semantic Generator of Graphical User Interfaces for Clinical Trials

Miriã da Silveira Coelho Corrêa 04 March 2010
AGUIA is a front-end web application originally developed to manage the clinical, demographic and biomolecular patient data collected during gastrointestinal clinical trials at the MD Anderson Cancer Center. The diversity of methodologies involved in patient screening and sample processing brings a corresponding heterogeneity of data types. This data must therefore be grounded in a resource-oriented architecture that transforms heterogeneous data into semantic data, specifically into RDF (Resource Description Framework). The database chosen was S3DB, because it met the necessary requirements: it transforms heterogeneous data from different sources into RDF, explicitly distinguishes the description of the domain from its instantiation, and allows continuous editing of both. Furthermore, it uses a REST protocol and is open source and in the public domain, which facilitates development and dissemination. Nevertheless, however comprehensive and flexible a semantic web format may be, it does not by itself address the issue of representing content in a form that makes sense to domain experts. Accordingly, the goal of the work described here was to identify an additional set of descriptors that provide the specifications for the graphical user interface. That goal was pursued by identifying a formalism that uses the RDF schema to enable the automatic assembly of graphical user interfaces in a meaningful manner. A generalized RDF model was therefore defined such that changes in the graphic descriptors are automatically and immediately reflected in the configuration of the client web application, which is also made available with this work. Although the design patterns identified reflect, and benefit from, the specific requirements of interacting with data generated by clinical trials, the expectation is that they contain clues for a general-purpose solution. In particular, it is suggested that the most useful patterns identified by the users of this system are likely to be reusable for other data sources, or at least for other clinical-trial semantic data stores.
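To make the descriptor idea concrete, here is a minimal Python sketch (using the rdflib library) of GUI configuration held as RDF triples, so that editing a descriptor immediately changes what the client renders; the aguia: vocabulary and its property names are hypothetical illustrations, not AGUIA's actual schema.

    # Hypothetical GUI descriptors as RDF; requires: pip install rdflib
    from rdflib import Graph, Namespace, Literal

    AGUIA = Namespace("http://example.org/aguia#")  # hypothetical vocabulary
    g = Graph()

    # Declare that the clinical-trial field "tumorStage" is rendered as a
    # drop-down list, with a given label and display order.
    field = AGUIA.tumorStage
    g.add((field, AGUIA.widget, Literal("dropdown")))
    g.add((field, AGUIA.label, Literal("Tumor stage")))
    g.add((field, AGUIA.displayOrder, Literal(2)))

    def render_form(graph):
        """Assemble a form description directly from the RDF descriptors."""
        widgets = []
        for s, _, w in graph.triples((None, AGUIA.widget, None)):
            label = graph.value(s, AGUIA.label)
            order = graph.value(s, AGUIA.displayOrder)
            widgets.append((int(order), str(label), str(w)))
        return [f"{label}: <{w}>" for _, label, w in sorted(widgets)]

    print(render_form(g))  # editing a triple above changes the rendered form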
112

Intégrer des sources de données hétérogènes dans le Web de données / Integrating heterogeneous data sources in the Web of Data

Michel, Franck 03 March 2017
To a great extent, the success of the Web of Data depends on the ability to reach legacy data locked in silos inaccessible from the web. In the last 15 years, various works have tackled the problem of exposing structured data of various kinds in the Resource Description Framework (RDF). Meanwhile, the overwhelming success of NoSQL databases has made the database landscape more diverse than ever, and NoSQL databases are strong potential contributors of valuable linked open data. Hence, the objective of this thesis is to enable RDF-based data integration over heterogeneous data sources and, in particular, to harness NoSQL databases to populate the Web of Data. We propose a generic mapping language, xR2RML, to describe the mapping of heterogeneous data sources into an arbitrary RDF representation. xR2RML relies on, and extends, previous works on the translation of relational, CSV/TSV and XML data into RDF. Given an xR2RML mapping, we propose either to materialize the RDF data or to dynamically evaluate SPARQL queries against the native database. In the latter case, we follow a two-step approach. The first step translates a SPARQL query into a pivot abstract query, based on the xR2RML mapping of the target database to RDF. The second step translates the abstract query into a concrete query, taking into account the specificities of the database's query language. Particular care is taken over query optimization, at both the abstract and the concrete levels. To demonstrate the effectiveness of our approach, we have developed a prototype implementation for MongoDB, the popular NoSQL document store, and have validated the method on a real-life use case from the digital humanities.
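A toy Python sketch of the two-step rewriting idea: a SPARQL-like triple pattern is first abstracted, then made concrete as a MongoDB filter through a hand-written mapping table. The mapping and the single supported pattern shape are deliberate simplifications, not the actual xR2RML implementation.

    import re

    # Hypothetical mapping: predicate IRI -> (collection, document field)
    MAPPING = {
        "http://xmlns.com/foaf/0.1/name": ("persons", "name"),
        "http://xmlns.com/foaf/0.1/age":  ("persons", "age"),
    }

    def sparql_to_mongo(triple_pattern):
        """Step 1: parse '?s <pred> "value"' into an abstract (pred, value)
        query; step 2: make it concrete for MongoDB via the mapping."""
        m = re.match(r'\?\w+\s+<([^>]+)>\s+"([^"]+)"', triple_pattern)
        if not m:
            raise ValueError("unsupported pattern in this sketch")
        predicate, value = m.groups()
        collection, field = MAPPING[predicate]        # abstract -> concrete
        return collection, {field: value}             # a find() filter

    coll, query = sparql_to_mongo('?s <http://xmlns.com/foaf/0.1/name> "Ada"')
    print(coll, query)   # persons {'name': 'Ada'}
    # With pymongo one would then evaluate: db[coll].find(query)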
113

Flexible querying of RDF databases: a contribution based on fuzzy logic / Interrogation flexible de bases de données RDF : une contribution basée sur la logique floue

Slama, Olfa 22 November 2017
This thesis concerns the definition of a flexible approach for querying both crisp and fuzzy RDF graphs. This approach, based on the theory of fuzzy sets, extends SPARQL, the W3C-standardised query language for RDF, so as to express (i) fuzzy user preferences on data (e.g., the release year of an album is recent) and on the structure of the data graph (e.g., the path between two friends is required to be short), and (ii) more complex user preferences taking the form of fuzzy quantified statements (e.g., most of the albums recommended by an artist are highly rated and were created by a young friend of this artist). We performed experiments in order to study the performance of this approach; their main objective was to show that the extra cost due to the introduction of fuzziness remains limited and acceptable. We also investigated, in the more general framework of graph databases, the integration of the same type of fuzzy quantified statements into a fuzzy extension of Cypher, a declarative language for querying (crisp) graph databases. The experimental results obtained show that the extra cost induced by the fuzzy quantified nature of the queries also remains very limited in this case.
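A minimal Python sketch of the fuzzy-preference idea: crisp query results are re-ranked by a membership degree in [0, 1] rather than filtered out. The membership function and its "recent year" thresholds are illustrative assumptions, not the thesis's actual definitions.

    def recent(year, full_from=2015, zero_before=2005):
        """Degree to which a release year counts as 'recent' (assumed shape)."""
        if year >= full_from:
            return 1.0
        if year <= zero_before:
            return 0.0
        return (year - zero_before) / (full_from - zero_before)

    # Crisp results of an (imagined) SPARQL query over albums.
    albums = [("Album A", 2017), ("Album B", 2010), ("Album C", 2001)]

    # Rank answers by satisfaction degree instead of discarding them.
    ranked = sorted(((recent(y), title) for title, y in albums), reverse=True)
    for degree, title in ranked:
        print(f"{title}: satisfaction {degree:.2f}")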
114

Managing and Consuming Completeness Information for RDF Data Sources

Darari, Fariz 04 July 2017
The ever-increasing amount of Semantic Web data gives rise to the question: how complete is the data? Though data on the Semantic Web is generally incomplete, many parts of it are indeed complete, such as the children of Barack Obama and the crew of Apollo 11. This thesis studies how to manage and consume completeness information about Semantic Web data. In particular, we first discuss how completeness information can guarantee the completeness of query answering. Next, we propose optimization techniques for completeness reasoning and conduct experimental evaluations to show the feasibility of our approaches. We also provide a technique to check the soundness of queries with negation via reduction to query-completeness checking. We further enrich completeness information with timestamps, making it possible to determine up to which point in time query answers are complete. We then introduce two demonstrators, CORNER and COOL-WD, to show how our completeness framework can be realized. Finally, we investigate an automated method to generate completeness statements from text on the Web via relation-cardinality extraction.
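The completeness guarantee can be pictured with a short Python sketch; the (subject, predicate) statement format below is a deliberate simplification of the thesis's completeness statements.

    # Completeness statements: which parts of the data are declared complete.
    complete_statements = {
        ("BarackObama", "hasChild"),   # "the children of Barack Obama"
        ("Apollo11", "hasCrew"),       # "the crew of Apollo 11"
    }

    def answer_is_complete(query_patterns):
        """A query's answer is guaranteed complete iff every
        (subject, predicate) pattern it uses is covered by a statement."""
        return all(p in complete_statements for p in query_patterns)

    print(answer_is_complete([("BarackObama", "hasChild")]))   # True
    print(answer_is_complete([("BarackObama", "visited")]))    # False: data may be missing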
115

Sémantická analýza webového obsahu / Semantic Analysis of Web Content

Hubl, Lukáš January 2020
This work deals with the semantic web, web page segmentation, and the technologies used in this area. It modifies one web page segmentation method, DOM-based segmentation, to use semantic web technologies: web pages are segmented based on a semantic analysis of the individual elements of their content. An application demonstrating the functionality of the designed segmentation method was also created as part of this work, and the results of experiments performed with it are reported.
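A small Python sketch (using BeautifulSoup) of segmentation guided by semantic annotations: elements carrying schema.org markup become segment roots. This illustrates the general idea only and is not the method implemented in the work.

    # Requires: pip install beautifulsoup4
    from bs4 import BeautifulSoup

    html = """
    <div itemscope itemtype="https://schema.org/Article">
      <h1 itemprop="headline">A headline</h1>
      <span itemprop="author">Jane Doe</span>
    </div>
    <div class="sidebar">unrelated navigation</div>
    """

    soup = BeautifulSoup(html, "html.parser")
    # Treat each semantically annotated element as one segment of the page.
    for segment in soup.find_all(attrs={"itemtype": True}):
        kind = segment["itemtype"].rsplit("/", 1)[-1]
        text = " ".join(segment.stripped_strings)
        print(f"[{kind}] {text}")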
116

Roštový kotel na spalování dřevní štěpky a tříděného odpadu 50t/h / RDF Grate Biomass Boiler

Malíková, Veronika January 2020
The diploma thesis deals with the design of a grate boiler for the combustion of a mixture of RDF (refuse-derived fuel) and wood chips, with a specified output and specified parameters of the superheated steam. The introduction consists of stoichiometric calculations and the determination of the thermal efficiency of the boiler. The rest of the thesis is devoted to determining the dimensions of the boiler, heat-transfer calculations, pressure losses, the heat-balance check, and chlorine corrosion.
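As an illustration of the first of those stoichiometric steps, the following Python sketch computes the minimum combustion air for 1 kg of fuel from its elemental composition; the mass fractions and excess-air factor are assumed values, not the thesis's actual fuel analysis.

    # Assumed elemental mass fractions of an RDF/wood-chip mix (kg per kg fuel)
    C, H, S, O = 0.45, 0.055, 0.001, 0.35

    # Minimum O2 volume via molar demand of each element (22.39 m3/kmol).
    O2_min = 22.39 * (C / 12.01 + H / 4.032 + S / 32.06 - O / 32.0)  # m3/kg
    air_min = O2_min / 0.21          # dry air is about 21 % O2 by volume

    excess = 1.4                     # assumed excess-air factor for a grate boiler
    print(f"O2 min: {O2_min:.3f} m3/kg, air min: {air_min:.3f} m3/kg, "
          f"actual air: {excess * air_min:.3f} m3/kg")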
117

OntoApp : une approche déclarative pour la simulation du fonctionnement d’un logiciel dès une étape précoce du cycle de vie de développement / OntoApp: a declarative approach for software reuse and simulation in the early stages of the software development life cycle

Pham, Tuan Anh 22 September 2017
In this thesis, we study several models of collaboration between software engineering and the Semantic Web. Starting from the state of the art, we propose an approach to using an ontology in the business layer of an application. The main objective of our work is to provide the developer with tools to design, in a declarative manner, an "executable" business layer of an application, in order to simulate its operation and thus show the compliance of the application with the customer requirements defined at the beginning of the software life cycle. Another advantage of this approach is that it allows the developer to share and reuse the business-layer description of a typical application in a domain, using an ontology; this typical application description is called an "application template". The reuse of the business-layer description of an application is an interesting aspect of software engineering, and it is the key point we consider in this thesis. In the first part of the thesis, we deal with the modeling of the business layer. We first present an ontology-based approach to representing business processes and business rules, and show how to verify the consistency of a business process and of a set of business rules. Then, we present an automatic mechanism for checking the compliance of a business process with a set of business rules. The second part of the thesis is devoted to defining a methodology, called personalization, for creating an application from an application template. This methodology allows the user to use an application template to create his own application while avoiding structural and semantic errors. At the end of this part, we describe an experimental platform that illustrates the feasibility of the mechanisms proposed in the thesis; this platform is built on a relational DBMS. Finally, we present the conclusions, perspectives and other related work developed during this thesis.
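The compliance check can be pictured with a small Python sketch in which business rules are predicates evaluated over a simulated process trace; the rules and process below are illustrative, not OntoApp's actual ontology encoding.

    # A simulated execution trace of an (abstract) business process.
    process_trace = ["receive_order", "check_stock", "ship", "invoice"]

    # Each business rule: (description, predicate over the trace).
    rules = [
        ("shipping requires a prior stock check",
         lambda t: "ship" not in t
                   or ("check_stock" in t
                       and t.index("check_stock") < t.index("ship"))),
        ("every shipped order is invoiced",
         lambda t: "ship" not in t or "invoice" in t),
    ]

    def check_compliance(trace):
        """Simulate the process against every rule; report violations."""
        return [desc for desc, ok in rules if not ok(trace)]

    violations = check_compliance(process_trace)
    print("compliant" if not violations else f"violations: {violations}")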
118

Découverte de règles d'association multi-relationnelles à partir de bases de connaissances ontologiques pour l'enrichissement d'ontologies / Discovering multi-relational association rules from ontological knowledge bases to enrich ontologies

Tran, Duc Minh 23 July 2018
In the Semantic Web context, OWL ontologies represent explicit domain knowledge based on a conceptualization of the domains of interest, while the corresponding assertional knowledge is given by the RDF data referring to them. In this thesis, building on ideas derived from ILP, we aim at discovering hidden knowledge patterns in the form of multi-relational association rules, by exploiting the evidence coming from the assertional data of ontological knowledge bases. Specifically, the discovered rules are coded in SWRL so that they can be easily integrated within the ontology, thus enriching its expressive power and augmenting the assertional knowledge that can be derived. Two algorithms applied to populated ontological knowledge bases are proposed for finding rules with high inductive power: (i) a level-wise generate-and-test algorithm and (ii) an evolutionary algorithm. We performed experiments on publicly available ontologies, validating the performance of our approach and comparing it with the main state-of-the-art systems. In addition, we carried out a comparison of popular asymmetric metrics, originally proposed for scoring association rules, as building blocks of a fitness function for the evolutionary algorithm, in order to select metrics suited to the semantics of the data. To improve the system's performance, we proposed an algorithm that computes the metrics directly instead of querying via SPARQL-DL.
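A compact Python sketch of the generate-and-test idea: a candidate rule body(x) => head(x) is scored by support and confidence over a toy set of assertions. The SWRL encoding and the level-wise candidate enumeration are omitted here.

    # Toy assertional data: (subject, predicate, object) triples.
    triples = {
        ("anna", "worksAt", "uni"), ("anna", "livesIn", "nice"),
        ("bob",  "worksAt", "uni"), ("bob",  "livesIn", "nice"),
        ("carl", "worksAt", "lab"), ("carl", "livesIn", "paris"),
    }

    def score(body_pred, body_obj, head_pred, head_obj):
        """Support and confidence of: body(x) => head(x), over individuals x."""
        matches_body = {s for s, p, o in triples
                        if (p, o) == (body_pred, body_obj)}
        matches_both = {s for s in matches_body
                        if (s, head_pred, head_obj) in triples}
        support = len(matches_both)
        confidence = support / len(matches_body) if matches_body else 0.0
        return support, confidence

    # Test one generated candidate; a level-wise search would enumerate
    # and extend candidates systematically.
    print(score("worksAt", "uni", "livesIn", "nice"))  # (2, 1.0)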
119

TopFed: TCGA tailored federated query processing and linking to LOD

Saleem, Muhammad, Padmanabhuni, Shanmukha S., Ngonga Ngomo, Axel-Cyrille, Iqbal, Aftab, Almeida, Jonas S., Decker, Stefan, Deus, Helena F. January 2014
Methods: We address these issues by transforming the TCGA data into the Semantic Web standard Resource Description Framework (RDF), linking it to relevant datasets in the Linked Open Data (LOD) cloud, and further proposing an efficient data-distribution strategy to host the resulting 20.4 billion triples via several SPARQL endpoints. Having the TCGA data distributed across multiple SPARQL endpoints, we enable biomedical scientists to query and retrieve information from these endpoints through TopFed, a TCGA-tailored federated SPARQL query processing engine. Results: We compare TopFed with the well-established federation engine FedX in terms of source selection and query execution time, using 10 federated SPARQL queries with varying requirements. Our evaluation results show that TopFed selects on average less than half of the sources (with 100% recall), with a query execution time equal to one third of that of FedX. Conclusion: With TopFed, we aim to offer biomedical scientists a single point of access through which distributed TCGA data can be accessed in unison. We believe the proposed system can greatly help researchers in the biomedical domain to carry out their research effectively with TCGA, as the amount and diversity of the data exceed the ability of local resources to handle their retrieval and parsing.
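For illustration, a naive source-selection step can be sketched in Python with SPARQLWrapper, probing each endpoint with an ASK query. TopFed itself relies on a tailored data-distribution index rather than per-query probing (which is what makes it faster than generic engines), and the endpoint URLs below are placeholders.

    # Requires: pip install sparqlwrapper
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINTS = [
        "http://example.org/tcga/endpoint1/sparql",   # placeholder URLs
        "http://example.org/tcga/endpoint2/sparql",
    ]

    def select_sources(triple_pattern):
        """Return only the endpoints whose data matches the pattern."""
        relevant = []
        for url in ENDPOINTS:
            sparql = SPARQLWrapper(url)
            sparql.setQuery(f"ASK WHERE {{ {triple_pattern} }}")
            sparql.setReturnFormat(JSON)
            if sparql.query().convert().get("boolean"):
                relevant.append(url)
        return relevant

    # Only the selected subset is then queried, cutting execution time.
    print(select_sources("?s <http://example.org/tcga#bcr_patient_barcode> ?o"))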
120

Statistical Extraction of Multilingual Natural Language Patterns for RDF Predicates: Algorithms and Applications

Gerber, Daniel 07 June 2016
The Data Web has undergone tremendous growth. It currently consists of more than 3,300 publicly available knowledge bases describing millions of resources from various domains, such as life sciences, government or geography, with over 89 billion facts. Similarly, the Document Web has grown to the point where approximately 4.55 billion websites exist, 300 million photos are uploaded to Facebook and 3.5 billion Google searches are performed on an average day. However, there is a gap between the Document Web and the Data Web: knowledge bases on the Data Web are most commonly extracted from structured or semi-structured sources, whereas the majority of information available on the Web is contained in unstructured sources such as news articles, blog posts, photos and forum discussions. As a result, data on the Data Web not only misses a significant fragment of information but also suffers from a lack of timeliness, since typical extraction methods are time-consuming and can only be carried out periodically. Furthermore, provenance information is rarely taken into consideration and therefore gets lost in the transformation process. In addition, users are accustomed to entering keyword queries to satisfy their information needs; with the availability of machine-readable knowledge bases, lay users could be empowered to issue more specific questions and get more precise answers. In this thesis, we address the problem of relation extraction, one of the key challenges in closing the gap between the Document Web and the Data Web, by four means. First, we present a distant-supervision approach for finding multilingual natural language representations of formal relations already contained in the Data Web; we use these representations to find sentences on the Document Web that contain unseen instances of a relation between two entities. Second, we address the problem of data timeliness by presenting a real-time RDF stream-extraction framework, which we use to extract RDF from RSS news feeds. Third, we present a novel fact-validation algorithm, based on natural language representations, able not only to verify or falsify a given triple, but also to find trustworthy sources for it on the Web and to estimate a time scope in which the triple holds true. The features this algorithm uses to determine whether a website is trustworthy serve as provenance information and thereby help create metadata for facts in the Data Web. Finally, we present a question answering system that uses the natural language representations to map natural language questions to formal SPARQL queries, allowing lay users to exploit the large amounts of data available on the Data Web to satisfy their information needs.
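The distant-supervision step can be sketched in a few lines of Python: known subject–object pairs of a KB relation are located in text, and the connecting words are retained as natural language patterns. The KB pairs and sentences below are toy examples, not the thesis's corpus.

    import re

    kb_pairs = [("Berlin", "Germany"), ("Paris", "France")]  # a "capital of" relation
    sentences = [
        "Berlin is the capital of Germany.",
        "Paris is the capital of France.",
        "Lyon lies in the south of France.",
    ]

    # Keep the words linking each known pair as a candidate pattern.
    patterns = set()
    for subj, obj in kb_pairs:
        for sent in sentences:
            m = re.search(re.escape(subj) + r"\s+(.+?)\s+" + re.escape(obj), sent)
            if m:
                patterns.add(m.group(1))      # e.g. "is the capital of"

    print(patterns)
    # The learned patterns can then spot unseen instances of the relation,
    # e.g. "Madrid is the capital of Spain."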
