171

Techniques d'optimisation pour des données semi-structurées du web sémantique / Optimization techniques for semi-structured Semantic Web data

Leblay, Julien 27 September 2013
RDF and SPARQL have established themselves as the standard data model and query language for describing and querying data on the Web. Large amounts of RDF data are now available, either as datasets or as metadata for semi-structured documents, in particular XML. The growing coexistence and interdependence of RDF and XML make it increasingly pressing to represent and query these data jointly. Although many works cover the manual or automatic production and publication of annotations for semi-structured data, little research has been devoted to exploiting such data. This thesis lays the foundations for managing hybrid XML-RDF data. We present XR, a data model accommodating the structural aspect of XML and the semantics of RDF. The model is general enough to represent independent or interconnected data, in which any XML node can be an RDF resource. We introduce the XRQ language, which combines the main features of XQuery and SPARQL. The language makes it possible to query the structure of documents as well as the semantics of their annotations, and also to produce annotated semi-structured data. We introduce the problem of query composition in XRQ and exhaustively study the possible query evaluation techniques. We have developed the XRP platform, which implements the query evaluation algorithms whose performance we compare experimentally. We present an application built on this platform for the automatic and manual annotation of pages found on the Web. Finally, we present a technique for RDFS inference in RDF (and, by extension, XR) data management systems.
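A minimal sketch of the general idea behind hybrid XML-RDF querying, assuming a tiny invented document and annotation graph; this is not the thesis's XR model or XRQ language, only an illustration in Python (with rdflib) of combining a structural step over XML with a semantic step over RDF annotations attached to XML nodes.

```python
# Illustrative only: names, URIs and the document are invented for this example.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")   # hypothetical namespace

xml_doc = """<articles>
  <article id="a1"><title>RDF and XML together</title></article>
  <article id="a2"><title>Plain XML only</title></article>
</articles>"""
tree = ET.fromstring(xml_doc)

# Treat XML nodes as RDF resources: annotate node a1 in a separate RDF graph.
g = Graph()
g.add((EX["a1"], EX.reviewedBy, Literal("Alice")))

# Semantic half: which resources carry a reviewedBy annotation?
annotated = {row.a for row in g.query(
    "SELECT ?a WHERE { ?a <http://example.org/reviewedBy> ?r }")}

# Structural half: walk the XML tree and keep only the annotated articles.
for article in tree.iter("article"):
    if EX[article.get("id")] in annotated:
        print(article.find("title").text, "-- has a reviewer annotation")
```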
172

The epidemiology of respiratory infections diagnosed in Western Australian hospital emergency departments 2000 to 2003

Ingarfield, Sharyn Lee January 2007
[Truncated abstract] Background: Emergency department (ED) presentations of respiratory infections are not well described. Baseline ED data are needed to monitor trends, help evaluate the impact of health interventions, and assess changes in clinical practice for these conditions. Aims: To describe the epidemiology of respiratory infections diagnosed in Western Australian hospital EDs from 2000 to 2003; to determine the extent and usefulness of bacterial cultures ordered in hospital; and to describe and evaluate the antibiotic prescribing pattern in the ED setting. Methods: The cohort consisted of patients diagnosed with a respiratory infection at the EDs of Perth's major metropolitan teaching hospitals from 1 July 2000 to 30 June 2003. The analysis was based on a linked data set containing patient data from the Emergency Department Information System, the Hospital Morbidity Data Set, the death registry, and the Ultra Laboratory Information System. Further, a sample of patient medical records from one adult hospital was examined to assess antibiotic prescribing practice. Results: Overall, there were 37,455 presentations (28,885 patients) given an ED diagnosis of a respiratory infection. Of these, 14,884 (39.7%, 95% CI: 39.2 to 40.2) were admitted and 715 (1.9%, 95% CI: 1.8 to 2.0) died in hospital. The infections included: 48.1% acute upper respiratory infections (URI), 18.5% pneumonia, 23.5% other acute lower respiratory infections (LRI), 7.4% chronic obstructive pulmonary disease with lower respiratory infection (COPD+), 1.3% influenza or viral pneumonia and 1.2% other URI. Children accounted for 80.7% of acute URI diagnoses, COPD+ mainly affected the elderly, just over 40% of pneumonia diagnoses were in patients 65 years or older and 30.7% in patients younger than 15 years. ... The most common pathogen isolated from blood was Streptococcus pneumoniae and 10.4% (95% CI: 4.8 to 16.0) had reduced susceptibility to penicillin. For those diagnosed with pneumonia, Strep. pneumoniae accounted for over 90% of pathogens isolated from the blood of young children, and isolation of Enterobacteriaceae from blood increased with age. Around 30% of patients had positive sputum cultures, and from these Haemophilus influenzae, Strep. pneumoniae and Pseudomonas aeruginosa were the most common organisms grown. Of those diagnosed with pneumonia, acute LRI or COPD+, 34.7% (95% CI: 26.1 to 43.3) of S. aureus isolated from sputum and 16.4% (95% CI: 7.1 to 25.7) from blood were methicillin resistant. Of 366 adult patient medical records reviewed, 56.8% (95% CI: 51.7 to 61.9) noted that an antibiotic was prescribed in the ED, and amoxycillin was the most frequently prescribed. For those with pneumonia, concordance between prescribing guidelines and practice was low. Conclusions: The administrative data sets used in the present study are useful for monitoring outcomes for respiratory infections diagnosed in the ED. Pneumonia continues to place a burden on the hospital system. Routine blood and sputum cultures have limited value. However, an appropriately designed surveillance program is needed to monitor potential respiratory pathogens and assist in monitoring the appropriateness of current empiric antimicrobial therapy.
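As a quick sanity check of the figures quoted above, the standard normal-approximation (Wald) formula for the 95% confidence interval of a proportion reproduces the reported admission estimate (14,884 of 37,455 presentations); the snippet below is illustrative and not part of the thesis.

```python
# Wald 95% CI for a proportion: p +/- 1.96 * sqrt(p * (1 - p) / n)
from math import sqrt

admitted, total = 14_884, 37_455
p = admitted / total
se = sqrt(p * (1 - p) / total)                  # standard error of the proportion
low, high = p - 1.96 * se, p + 1.96 * se
print(f"{p:.1%} (95% CI {low:.1%} to {high:.1%})")   # 39.7% (95% CI 39.2% to 40.2%)
```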
173

Συστήματα διαχείρισης περιεχομένου και σημαντικός ιστός / Content management systems and semantic web

Νάκος, Κωνσταντίνος 24 January 2012
A large proportion of Web sites are produced and maintained with Content Management Systems (CMS), which manage not only textual content but also structured data. The Semantic Web, on the other hand, although it has begun to materialize, is still in its infancy compared to the traditional Web. The convergence of the two worlds could produce significant benefits and trigger a faster spread of the Semantic Web. This diploma thesis surveys and studies the most widespread tools for the semantic enrichment of CMS, as well as a series of CMS with built-in Semantic Web features. Finally, an experimental web portal is developed using version 7 of the Drupal CMS, which integrates Semantic Web features in its core (such as the automatic embedding of the emergent RDFa standard in the pages it generates).
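For readers unfamiliar with RDFa, the sketch below shows the kind of attribute-level markup Drupal 7 can embed in its pages and a minimal way to pick those attributes out with the Python standard library; the HTML fragment and vocabulary terms are illustrative assumptions, not output copied from the portal described above.

```python
# Extract RDFa-bearing attributes from a small, hypothetical HTML fragment.
from html.parser import HTMLParser

html = """
<article about="/node/1" typeof="sioc:Item foaf:Document">
  <h2 property="dc:title">Semantic portal demo</h2>
  <span property="dc:date" content="2012-01-24">24 January 2012</span>
</article>
"""

class RDFaSniffer(HTMLParser):
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Keep only the attributes RDFa uses to attach semantics to markup.
        rdfa = {k: v for k, v in attrs.items()
                if k in ("about", "typeof", "property", "content")}
        if rdfa:
            print(tag, rdfa)

RDFaSniffer().feed(html)
```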
174

Usages et applications du web sémantique en bibliothèques numériques / Uses and applications of Semantic Web in Digital Libraries

Melhem, Hiba 27 October 2017
This research work falls within the interdisciplinary field of information and communication sciences and aims to explore the use of the semantic web in digital libraries. The web requires libraries to rethink their organization, activities, practices and services in order to reposition themselves as reference institutions for the dissemination of knowledge. In this thesis, we seek to understand the contexts of use of the semantic web in French digital libraries. We examine the contributions of the semantic web within these libraries, as well as the challenges and obstacles that accompany its implementation. We are also interested in documentary practices and their evolution following the introduction of the semantic web in digital libraries. The central question concerns the role that information professionals can play in the implementation of the semantic web in digital libraries. After selecting 98 digital libraries from an analysis of three censuses, a questionnaire-based survey collects data on the use of the semantic web in these libraries. A second, interview-based survey then highlights the representations that information professionals have of the semantic web and its use in libraries, as well as the evolution of their professional practices. The results show that the representation of knowledge within the semantic web requires human intervention to provide the conceptual framework that determines the links between the data. Finally, information professionals can become actors of the semantic web, in the sense that their role is not limited to using the semantic web but extends to developing its standards to ensure a better organization of knowledge.
175

Méthodes d'optimisation pour le traitement de requêtes réparties à grande échelle sur des données liées / Optimization methods for large-scale distributed query processing on linked data

Oğuz, Damla 28 June 2017
Linked Data is a term that defines a set of best practices for publishing and interlinking structured data on the Web. As the number of Linked Data providers increases, the Web becomes a huge global data space. Query federation is one of the approaches for efficiently querying this distributed data space. It is employed via a federated query engine that aims to minimize the response time and the completion time. Response time is the time to generate the first result tuple, whereas completion time refers to the time to provide all result tuples. There are three basic steps in a federated query engine: data source selection, query optimization, and query execution. This thesis contributes to query optimization for query federation. Most studies focus on static query optimization, which generates query plans before execution and needs statistics. However, the Linked Data environment has several difficulties, such as unpredictable data arrival rates and unreliable statistics. As a consequence, static query optimization can produce inefficient execution plans. These constraints show that adaptive query optimization should be used for federated query processing on Linked Data. In this thesis, we first propose an adaptive join operator that aims to minimize the response time and the completion time for federated queries over SPARQL endpoints. Second, we extend the first proposal to further reduce the completion time. Both proposals can change the join method and the join order during execution by using adaptive query optimization. The proposed operators can handle different data arrival rates of relations and the lack of statistics about them. The performance evaluation in this thesis shows the efficiency of the proposed adaptive operators. They provide faster completion times and almost the same response times, compared to symmetric hash join. Compared to bind join, the proposed operators perform substantially better with respect to response time and can also provide faster completion times. In addition, the second proposed operator provides considerably faster response times than bind-bloom join and can improve the completion time as well. The second proposal also provides faster completion times than the first proposal in all conditions. In conclusion, the proposed adaptive join operators provide the best trade-off between response time and completion time. Even though our main objective is to manage different data arrival rates of relations, the performance evaluation reveals that they are successful with both fixed and varying data arrival rates.
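For context, the sketch below shows a symmetric hash join, the non-blocking operator the proposed adaptive operators are compared against: it emits a result tuple as soon as a match is possible from either input, which keeps the response time (time to the first tuple) low even when arrival rates differ. The streams, join attribute and data are illustrative, not the thesis's implementation.

```python
# Symmetric hash join over an interleaved stream of arrivals from two inputs.
from collections import defaultdict

def symmetric_hash_join(events, key):
    """events: iterable of (side, tuple) pairs, side in {'L', 'R'}, tuple a dict.
    Yields joined tuples as soon as a match is possible (non-blocking)."""
    tables = {"L": defaultdict(list), "R": defaultdict(list)}
    for side, tup in events:
        k = tup[key]
        tables[side][k].append(tup)            # insert into this input's hash table
        other = "R" if side == "L" else "L"
        for match in tables[other][k]:         # probe the other input's table
            yield {**match, **tup}

# Illustrative streams: bindings arriving from two SPARQL endpoints.
events = [("L", {"s": "ex:p1", "name": "Ada"}),
          ("R", {"s": "ex:p1", "city": "Paris"}),
          ("L", {"s": "ex:p2", "name": "Bob"}),
          ("R", {"s": "ex:p2", "city": "Izmir"})]
for row in symmetric_hash_join(events, "s"):
    print(row)   # the first joined tuple appears after only two arrivals
```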
176

Modelo para a publicação de dados de autoridade como Linked Data / Model for publishing authority data as Linked Data

Assumpção, Fabrício Silva [UNESP] 05 February 2018
One of the concerns of Information Science is access to information and to information resources, and therefore the instruments used for this access, such as catalogs, are among its objects of interest; catalogs comprise bibliographic data (representations of information resources) and authority data (representations of the entities associated with information resources, such as persons, corporate bodies and concepts). The proposal of a Semantic Web, in which data are processed not only by their syntax but also by their semantics, has driven the development of a set of technologies for representing data on the Web, as well as for querying these data and reasoning over them computationally. The use of some of these technologies for publishing and linking data gave rise to the Linked Data concept, and the desire to apply it in Information Science has led to projects for publishing authority data as Linked Data. However, these projects, still in initial or experimental stages, lack a theoretical framework built within Information Science that can guide the policies, procedures and technologies employed in publishing these data. Thus, starting from the question "how to publish authority data as Linked Data?", we define the main goal of this research, its thesis and its hypothesis: starting from the functions of authority data in catalogs and their benefits in Linked Data environments, to propose a model for publishing authority data as Linked Data, comprising policies, procedures and technologies. To achieve this goal, we first conduct a literature review on authority control and the development of authority data in book, card and digital catalogs, highlighting the FRAD and FRSAD conceptual models, which synthesize the functions performed by authority data. We then present the Linked Data concept and the main Semantic Web technologies related to it (URIs, RDF, RDFS and OWL); building on this, we describe three vocabularies that can be used to publish authority data (SKOS, MADS/RDF and the RDA Element Sets), several initiatives for publishing such data (the Library of Congress Linked Data Service, datos.bne.es, data.bnf.fr, VIAF and AGROVOC) and the potential benefits of authority data published as Linked Data. Based on the results of this literature review, we propose a model for publishing authority data as Linked Data, comprising the stages of planning; modeling and mapping; processing, linking and conversion; publishing; and feedback. After describing each stage of the model, with its policies, procedures and technologies, we present final considerations on the results achieved and on the proposed model.
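As a small illustration of one of the vocabularies discussed above, the sketch below publishes a single authority record as SKOS triples with rdflib; the base URI, labels and the link target are invented for the example and do not come from the thesis.

```python
# Serialize one hypothetical authority record as SKOS Linked Data.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import SKOS, RDF

AUT = Namespace("http://example.org/authorities/")   # hypothetical base URI

g = Graph()
concept = AUT["machado-de-assis"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("Machado de Assis, 1839-1908", lang="pt")))
g.add((concept, SKOS.altLabel, Literal("Assis, Machado de", lang="pt")))
# Link the record to an equivalent resource in another dataset, as Linked Data
# publishing encourages; the target URI below is purely illustrative.
g.add((concept, SKOS.exactMatch, URIRef("http://example.org/other-dataset/assis")))

print(g.serialize(format="turtle"))
```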
177

Topological stability and textual differentiation in human interaction networks: statistical analysis, visualization and linked data / Estabilidade topológica e diferenciação textual em redes de interação humana: análise estatística, visualização e dados ligados

Renato Fabbri 08 May 2017
This work reports on stable (or invariant) topological properties and textual differentiation in human interaction networks, with benchmarks derived from public email lists. Activity over time and topology were observed in snapshots along a timeline and at different scales. Our analysis shows that activity is practically the same for all networks across timescales ranging from seconds to months. The principal components of the participants in the topological metrics space remain practically unchanged as different sets of messages are considered. The activity of participants follows the expected scale-free outline, thus yielding the hub, intermediary and peripheral classes of vertices by comparison against the Erdős-Rényi model. The relative sizes of these three sectors are essentially the same for all email lists and the same along time. Typically, 3-12% of the vertices are hubs, 15-45% are intermediary and 44-81% are peripheral vertices. Texts from each of these sectors are shown to be very different through direct measurements and through an adaptation of the Kolmogorov-Smirnov test. These properties are consistent with the literature and may be general for human interaction networks, which has important implications for establishing a typology of participants based on quantitative criteria. For guiding and supporting this research, we also developed a visualization method for dynamic networks based on animations. To facilitate verification and further steps in the analyses, we supply a linked data representation of the data related to our results.
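The sketch below illustrates one possible way to derive hub, intermediary and peripheral classes by comparing an observed degree sequence against an Erdős-Rényi graph of the same size and edge count; the specific thresholding rule and the synthetic network are assumptions made for illustration and may differ from the criterion used in this work.

```python
# Classify vertices by degree relative to the range expected under an ER model.
import networkx as nx

g_obs = nx.barabasi_albert_graph(500, 2, seed=1)        # stand-in "observed" network
g_er = nx.gnm_random_graph(g_obs.number_of_nodes(),
                           g_obs.number_of_edges(), seed=1)

er_degrees = [d for _, d in g_er.degree()]
lo, hi = min(er_degrees), max(er_degrees)               # degree range under ER

classes = {"hub": [], "intermediary": [], "peripheral": []}
for node, d in g_obs.degree():
    if d > hi:
        classes["hub"].append(node)                     # more connected than ER allows
    elif d < lo:
        classes["peripheral"].append(node)              # less connected than ER allows
    else:
        classes["intermediary"].append(node)

n = g_obs.number_of_nodes()
for name, members in classes.items():
    print(f"{name}: {100 * len(members) / n:.1f}% of vertices")
```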
178

FSI: uma infraestrutura de apoio ao projeto FrameNet utilizando web semântica / FSI: an infrastructure to support the FrameNet project using the semantic web

Encarnação, Paulo Victor Hauck da 04 September 2014
The FrameNet project is developed by the International Computer Science Institute (ICSI) in Berkeley, with the goal of documenting frames of the English language based on the concept of semantic frames from computational intelligence. It has also been extended to other languages, as in the FrameNet-Br project developed at the Universidade Federal de Juiz de Fora (UFJF), which focuses on documenting linguistic frames in Brazilian Portuguese. The lexical resource built by FrameNet can be used in many natural language processing applications, such as translation and summarization, among others. The use of semantic web technologies, such as ontologies and linked data, can bring many benefits that contribute to the sharing of information, especially considering the possibility of using inference mechanisms to validate data and derive new information. The use of semantic annotations on Web services is also considered a promising technology to facilitate the integration of computational resources on the web, serving as a mechanism to ease the interaction between software tools. These annotations allow tools to understand the structure of services and how to execute them automatically. Thus, considering the advantages presented by these technologies, it is possible to combine them in order to create an infrastructure that enables the use of lexical resources together with semantic web resources to facilitate the understanding of, and search for, information in a given domain. In this work, a service-based infrastructure was specified that seeks to combine semantic web technologies with the data of the FrameNet project, based on the hypothesis that applying technologies such as ontologies, linked data and annotations on Web services can contribute to the construction and reuse of lexical resources based on frame semantics; these contributions concern both the reliability of the data and the enrichment of the information kept in the lexical base.
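As a hedged illustration of combining FrameNet-style lexical data with semantic web technologies, the sketch below encodes one frame and its frame elements as RDF triples with rdflib; the namespace, property names and choice of frame are invented for the example and are not the FSI infrastructure's actual schema.

```python
# Encode a frame and its frame elements as RDF triples (illustrative schema).
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

FN = Namespace("http://example.org/framenet/")   # hypothetical namespace

g = Graph()
frame = FN["Commerce_buy"]
g.add((frame, RDF.type, FN.Frame))
g.add((frame, RDFS.label, Literal("Commerce_buy", lang="en")))

for fe in ("Buyer", "Goods", "Seller"):
    fe_uri = FN[f"Commerce_buy.{fe}"]
    g.add((fe_uri, RDF.type, FN.FrameElement))
    g.add((fe_uri, RDFS.label, Literal(fe, lang="en")))
    g.add((frame, FN.hasFrameElement, fe_uri))   # attach the element to its frame

print(g.serialize(format="turtle"))
```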
179

Interconnexion et visualisation de ressources géoréférencées du Web de données à l’aide d’un référentiel topographique de support / Interlinking and visualizing georeferenced resources of the Web of data with geographic reference data

Feliachi, Abdelfettah 27 October 2017
Many resources published on the Web of data carry spatial references that describe their location. These spatial references are a valuable asset for interlinking and visualizing data on the Web. However, they may be represented with different levels of detail and different geometric modelling choices from one data source to another. These differences are a major challenge for using geometry comparison as a criterion for interlinking georeferenced resources. The challenge is amplified by the open and often volunteered nature of the data, which causes geometric heterogeneities even within a single data source. Furthermore, Web mapping applications for georeferenced data are limited when it comes to visualizing data at different scales. In this PhD thesis, we propose a vocabulary for formalizing knowledge about the characteristics of every single geometry in a dataset, together with a semi-automatic approach for acquiring this knowledge from geographic reference data. We then propose to use this knowledge in an approach that dynamically adapts the settings of the comparison of each pair of geometries during an interlinking process. We propose an additional interlinking approach, based on geographic reference data, for detecting n:m links between data sources. Finally, we propose Web mapping applications for georeferenced resources that remain readable and user-friendly at different map scales.
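The sketch below illustrates, in a highly simplified form, the idea of adapting a geometric comparison to what is known about each geometry: the distance threshold used to accept a candidate link grows with the (assumed) positional accuracy of each source. The geometries, accuracy values and decision rule are illustrative, not the thesis's actual parameterization.

```python
# Accept a candidate link when two geometries are closer than their combined
# assumed uncertainty (all values are illustrative and in coordinate units).
from shapely.geometry import Point

def candidate_link(geom_a, acc_a, geom_b, acc_b):
    """acc_*: assumed positional accuracy of each geometry."""
    threshold = acc_a + acc_b          # tolerate the combined uncertainty
    return geom_a.distance(geom_b) <= threshold

# A precisely captured point (reference data) vs. a coarsely digitized one
# (volunteered data): the adapted threshold still allows the match.
ref = Point(2.3522, 48.8566)          # fine level of detail, accuracy ~0.0005
vgi = Point(2.3549, 48.8581)          # coarse level of detail, accuracy ~0.005
print(candidate_link(ref, 0.0005, vgi, 0.005))   # True
```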
180

Materialization and maintenance of owl:sameAs links / Materialização e manutenção de ligações owl:sameAs

Carla Goncalves Ourofino 17 January 2017
The Web of Data has grown significantly in recent years, both in the amount of data and in the number of data sources. In parallel with this trend, owl:sameAs links have been increasingly used to connect equivalent data published by different sources. As a consequence, a routine for identifying and maintaining these connections becomes necessary. In order to automate this task, we developed the MsA (sameAs Materialization) framework to materialize and recompute owl:sameAs links between local databases and data published on the Web. These links, once identified, are materialized along with the local data and recomputed only when necessary. To achieve this, the tool monitors the operations (insertion, update and deletion) performed on local and remote records and, for each type of operation, implements a maintenance strategy for the links involved.
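A minimal sketch of the maintenance idea described above: links are materialized next to the local records and recomputed only for the records touched by an insertion, update or deletion. The matching rule (exact label equality) and the data structures are illustrative assumptions, not the MsA framework's API.

```python
# Materialize owl:sameAs links locally and recompute them per-operation.
class SameAsStore:
    def __init__(self, remote_index):
        self.remote_index = remote_index      # label -> remote URI
        self.links = {}                       # local id -> remote URI (materialized)

    def _recompute(self, local_id, label):
        remote = self.remote_index.get(label)
        if remote:
            self.links[local_id] = remote     # materialize the owl:sameAs link
        else:
            self.links.pop(local_id, None)    # no equivalent found any more

    def insert(self, local_id, label):
        self._recompute(local_id, label)

    def update(self, local_id, label):
        self._recompute(local_id, label)      # only the touched record is redone

    def delete(self, local_id):
        self.links.pop(local_id, None)

store = SameAsStore({"Rio de Janeiro": "http://example.org/remote/rio"})
store.insert("local:42", "Rio de Janeiro")
store.update("local:42", "Niterói")           # label no longer matches -> link dropped
print(store.links)
```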
