161 |
Automating Geospatial RDF Dataset Integration and Enrichment. Sherif, Mohamed Ahmed Mohamed, 12 May 2016 (has links)
Over the last few years, the Linked Open Data (LOD) cloud has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as in DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geo-spatial information have been made available with the growth of the heterogeneous Web of Data.
The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance with the growth of the number of independent data providers. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches and use cases that facilitate the integration and enrichment of information with other data types that are already present on the Linked Data Web with a focus on geo-spatial data.
The first challenge that motivates our work is the lack of measures that use geographic data for linking geo-spatial knowledge bases. This is partly due to geo-spatial resources being described by means of vector geometry. In particular, discrepancies in granularity and measurement errors across knowledge bases render the selection of appropriate distance measures for geo-spatial resources difficult. We address this challenge by surveying the existing literature for point-set measures that can be used to measure the similarity of vector geometries. We then present ten measures derived from this literature and evaluate them on samples of three real knowledge bases.
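As an illustration of what such a point-set measure over vector geometries looks like, the sketch below computes the classical Hausdorff distance between two toy geometries given as point lists; it is a standard member of this family of measures and not necessarily one of the ten evaluated in the thesis.

    from math import dist  # Python 3.8+

    def directed_hausdorff(a, b):
        # largest distance from a point in a to its nearest point in b
        return max(min(dist(p, q) for q in b) for p in a)

    def hausdorff(a, b):
        return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

    # two toy geometries (point sets) of different granularity
    geom_a = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    geom_b = [(0.0, 0.1), (0.5, 0.0), (1.0, 0.1), (1.0, 0.9), (0.5, 1.0), (0.0, 1.0)]
    print(hausdorff(geom_a, geom_b))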
The second challenge we address in this thesis is the lack of automatic Link Discovery (LD) approaches capable of dealing with geo-spatial knowledge bases containing missing and erroneous data. To this end, we present Colibri, an unsupervised approach that discovers links between knowledge bases while improving the quality of their instance data. A Colibri iteration begins by generating links between knowledge bases. The approach then uses these links to detect resources with probably erroneous or missing information. Finally, the detected erroneous or missing information is corrected or added.
The third challenge we address is the lack of scalable LD approaches for tackling big geo-spatial knowledge bases. Thus, we present Deterministic Particle-Swarm Optimization (DPSO), a novel load balancing technique for LD on parallel hardware based on particle-swarm optimization. We combine this approach with the Orchid algorithm for geo-spatial linking and evaluate it on real and artificial data sets. The lack of approaches for automatic updating of links of an evolving knowledge base is our fourth challenge. This challenge is addressed in this thesis by the Wombat algorithm. Wombat is a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets.
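The following deliberately simplified sketch illustrates the general idea behind learning a link specification from positive examples only: atomic specifications (a property plus a similarity threshold) are generalised by lowering the threshold, and the candidate that best reproduces the positive links is kept. The properties, thresholds and scoring below are illustrative assumptions, not Wombat's actual refinement operator.

    from difflib import SequenceMatcher

    def sim(a, b):
        return SequenceMatcher(None, a, b).ratio()

    def links(source, target, prop, theta):
        # all (source, target) pairs whose property values are at least theta-similar
        return {(s["id"], t["id"]) for s in source for t in target
                if sim(s[prop], t[prop]) >= theta}

    def learn_spec(source, target, positives, props, thetas=(0.9, 0.8, 0.7, 0.6)):
        best_spec, best_f = None, 0.0
        for prop in props:                  # atomic specifications ...
            for theta in thetas:            # ... generalised by lowering the threshold
                mapping = links(source, target, prop, theta)
                tp = len(mapping & positives)
                if not mapping:
                    continue
                precision, recall = tp / len(mapping), tp / len(positives)
                f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
                if f1 > best_f:
                    best_spec, best_f = (prop, theta), f1
        return best_spec, best_f

    source = [{"id": "s1", "label": "Leipzig"}, {"id": "s2", "label": "Dresden"}]
    target = [{"id": "t1", "label": "Leipzig, Germany"}, {"id": "t2", "label": "Dresden (Saxony)"}]
    print(learn_spec(source, target, {("s1", "t1"), ("s2", "t2")}, props=["label"]))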
The last challenge addressed herein is the lack of automatic approaches for geo-spatial knowledge base enrichment. Thus, we propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples.
Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets. Moreover, all approaches have been peer-reviewed and published in conference or journal papers. Throughout this thesis, we detail the ideas, implementation and evaluation of each approach, discuss it and present lessons learned. Finally, we conclude by presenting a set of possible future extensions and use cases for each of the proposed approaches.
|
162 |
Integrating Natural Language Processing (NLP) and Language Resources Using Linked Data. Hellmann, Sebastian, 09 January 2014 (has links)
This thesis is a compendium of scientific works and engineering
specifications that have been contributed to a large community of
stakeholders to be copied, adapted, mixed, built upon and exploited in
any way possible to achieve a common goal: Integrating Natural Language
Processing (NLP) and Language Resources Using Linked Data.
The explosion of information technology in the last two decades has led
to a substantial growth in quantity, diversity and complexity of
web-accessible linguistic data. These resources become even more useful
when linked with each other and the last few years have seen the
emergence of numerous approaches in various disciplines concerned with
linguistic resources and NLP tools. It is the challenge of our time to
store, interlink and exploit this wealth of data accumulated in more
than half a century of computational linguistics, of empirical,
corpus-based study of language, and of computational lexicography in all
its heterogeneity.
The vision of the Giant Global Graph (GGG) was conceived by Tim
Berners-Lee, aiming to connect all data on the Web and to allow new relations to be discovered between these openly accessible data. This vision
has been pursued by the Linked Open Data (LOD) community, where the
cloud of published datasets comprises 295 data repositories and more
than 30 billion RDF triples (as of September 2011).
RDF is based on globally unique and accessible URIs and it was
specifically designed to establish links between such URIs (or
resources). This is captured in the Linked Data paradigm that postulates
four rules: (1) Referred entities should be designated by URIs, (2)
these URIs should be resolvable over HTTP, (3) data should be
represented by means of standards such as RDF, (4) and a resource should
include links to other resources.
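As a concrete illustration of these four rules, the sketch below uses the rdflib Python library and a hypothetical example.org namespace to name a resource with an HTTP URI, represent it in RDF and link it to an external resource.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/resource/")   # hypothetical namespace
    g = Graph()
    leipzig = EX["Leipzig"]                          # rule 1: the entity is designated by a URI
                                                     # rule 2: the URI should resolve over HTTP
    g.add((leipzig, RDF.type, URIRef("http://dbpedia.org/ontology/City")))       # rule 3: standards such as RDF
    g.add((leipzig, RDFS.label, Literal("Leipzig", lang="en")))
    g.add((leipzig, OWL.sameAs, URIRef("http://dbpedia.org/resource/Leipzig")))  # rule 4: link to another resource
    print(g.serialize(format="turtle"))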
Although it is difficult to precisely identify the reasons for the
success of the LOD effort, advocates generally argue that open licenses
as well as open access are key enablers for the growth of such a network
as they provide a strong incentive for collaboration and contribution by
third parties. In his keynote at BNCOD 2011, Chris Bizer argued that
with RDF the overall data integration effort can be “split between data
publishers, third parties, and the data consumer”, a claim that can be
substantiated by observing the evolution of many large data sets
constituting the LOD cloud.
As written in the acknowledgement section, parts of this thesis have received extensive feedback from other scientists, practitioners and industry in many different ways. The main contributions of this thesis
are summarized here:
Part I – Introduction and Background.
During his keynote at the Language Resource and Evaluation Conference in
2012, Sören Auer stressed the decentralized, collaborative, interlinked
and interoperable nature of the Web of Data. The keynote provides strong
evidence that Semantic Web technologies such as Linked Data are on their way to becoming mainstream for the representation of language resources.
The jointly written companion publication for the keynote was later
extended as a book chapter in The People’s Web Meets NLP and serves as
the basis for “Introduction” and “Background”, outlining some stages of
the Linked Data publication and refinement chain. Both chapters stress the importance of open licenses and open access as enablers for collaboration and the ability to interlink data on the Web as a key feature of RDF, and provide a discussion of scalability issues and decentralization. Furthermore, we elaborate on how conceptual
interoperability can be achieved by (1) re-using vocabularies, (2) agile
ontology development, (3) meetings to refine and adapt ontologies and
(4) tool support to enrich ontologies and match schemata.
Part II - Language Resources as Linked Data.
“Linked Data in Linguistics” and “NLP & DBpedia, an Upward Knowledge
Acquisition Spiral” summarize the results of the Linked Data in
Linguistics (LDL) Workshop in 2012 and the NLP & DBpedia Workshop in
2013 and give a preview of the MLOD special issue. In total, five
proceedings – three published at CEUR (OKCon 2011, WoLE 2012, NLP &
DBpedia 2013), one Springer book (Linked Data in Linguistics, LDL 2012)
and one journal special issue (Multilingual Linked Open Data, MLOD to
appear) – have been (co-)edited to create incentives for scientists to
convert and publish Linked Data and thus to contribute open and/or
linguistic data to the LOD cloud. Based on the disseminated call for
papers, 152 authors contributed one or more accepted submissions to our
venues and 120 reviewers were involved in peer-reviewing.
“DBpedia as a Multilingual Language Resource” and “Leveraging the
Crowdsourcing of Lexical Resources for Bootstrapping a Linguistic Linked
Data Cloud” contain this thesis’ contribution to the DBpedia Project in
order to further increase the size and inter-linkage of the LOD Cloud
with lexical-semantic resources. Our contribution comprises extracted
data from Wiktionary (an online, collaborative dictionary similar to
Wikipedia) in more than four languages (now six) as well as
language-specific versions of DBpedia, including a quality assessment of
inter-language links between Wikipedia editions and internationalized
content negotiation rules for Linked Data. In particular, this work created the foundation for a DBpedia Internationalisation Committee with members from over 15 different languages, with the common goal of pushing DBpedia as a free and open multilingual language resource.
Part III - The NLP Interchange Format (NIF).
“NIF 2.0 Core Specification”, “NIF 2.0 Resources and Architecture” and
“Evaluation and Related Work” constitute one of the main contributions of
this thesis. The NLP Interchange Format (NIF) is an RDF/OWL-based format
that aims to achieve interoperability between Natural Language
Processing (NLP) tools, language resources and annotations. The core
specification is included in “NIF 2.0 Core Specification” and describes which URI schemes and RDF vocabularies must be used for (parts of) natural language texts and annotations in order to create an RDF/OWL-based interoperability layer with NIF, built upon Unicode code points in Normal Form C. In “NIF 2.0 Resources and Architecture”, classes and properties of the NIF Core Ontology are described to formally define the relations between text, substrings and their URI schemes. “Evaluation and Related Work” contains the evaluation of NIF.
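To make the interoperability layer more tangible, the following sketch shows how a substring of a context (the full text, normalised to Unicode Normal Form C) can be addressed by an offset-based URI in the style used by NIF, via an RFC 5147-style "#char=begin,end" fragment; the base URI is hypothetical and the exact URI scheme chosen in practice may differ.

    import unicodedata

    def nif_string_uri(base_uri, text, begin, end):
        text = unicodedata.normalize("NFC", text)   # offsets are counted over NFC code points
        assert 0 <= begin <= end <= len(text)
        return f"{base_uri}#char={begin},{end}", text[begin:end]

    uri, surface = nif_string_uri("http://example.org/doc1", "Leipzig is a city in Germany.", 0, 7)
    print(uri, "->", surface)   # http://example.org/doc1#char=0,7 -> Leipzig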
In a questionnaire, we put questions to 13 developers using NIF. UIMA, GATE and Stanbol are extensible NLP frameworks, and NIF was not yet able to provide off-the-shelf NLP domain ontologies for all possible domains, but only for the plugins used in this study. After inspecting the software, the developers agreed, however, that NIF is adequate to provide a generic RDF output based on NIF using literal objects for annotations. All developers were able to map their internal data structures to NIF URIs to serialize RDF output (Adequacy). The development effort in hours (ranging between 3 and 40 hours) as well as the number of code lines (ranging between 110 and 445) suggest that the implementation of NIF wrappers is easy and fast for an average developer. Furthermore, the evaluation contains a comparison to other formats and an evaluation of the available URI schemes for web annotation.
In order to collect input from the wider group of stakeholders, a total of 16 presentations were given, with extensive discussions and feedback, which led to continuous improvement of NIF from 2010 until 2013. After the release of NIF (Version 1.0) in November 2011, a total of 32 vocabulary adoptions and implementations for different NLP tools and converters were reported (8 by the (co-)authors, including the Wiki-link corpus, 13 by participants in our survey, and 11 more that we are aware of). Several roll-out meetings and tutorials were held
(e.g. in Leipzig and Prague in 2013) and are planned (e.g. at LREC
2014).
Part IV - The NLP Interchange Format in Use.
“Use Cases and Applications for NIF” and “Publication of Corpora using
NIF” describe 8 concrete instances where NIF has been successfully used.
One major contribution, described in “Use Cases and Applications for NIF”, is the usage of NIF as the recommended RDF mapping in the Internationalization Tag Set (ITS) 2.0 W3C standard and the conversion algorithms from ITS to NIF and back. The discussions in the standardization meetings and telephone conferences for ITS 2.0 led to the conclusion that there was no alternative RDF format or vocabulary other than NIF with the required features to fulfill the working group charter. Five further uses of NIF
are described for the Ontology of Linguistic Annotations (OLiA), the
RDFaCE tool, the Tiger Corpus Navigator, the OntosFeeder and
visualisations of NIF using the RelFinder tool. These 8 instances
provide an implemented proof-of-concept of the features of NIF.
“Publication of Corpora using NIF” starts by describing the conversion and hosting of the huge Google Wikilinks corpus with 40 million annotations for 3 million web sites. The resulting RDF dump contains 477 million triples in a 5.6 GB compressed dump file in Turtle syntax. The chapter further describes how NIF can be used to publish facts extracted from news feeds by the RDFLiveNews tool as Linked Data.
Part V - Conclusions.
The concluding part provides lessons learned for NIF, conclusions and an outlook on future work. Most of the contributions are already summarized above. One particular aspect worth mentioning is the increasing number of NIF-formatted corpora for Named Entity Recognition (NER) that have come
into existence after the publication of the main NIF paper Integrating
NLP using Linked Data at ISWC 2013. These include the corpora converted
by Steinmetz, Knuth and Sack for the NLP & DBpedia workshop and an
OpenNLP-based CoNLL converter by Brümmer. Furthermore, we are aware of
three LREC 2014 submissions that leverage NIF: NIF4OGGD - NLP
Interchange Format for Open German Governmental Data, N^3 – A Collection
of Datasets for Named Entity Recognition and Disambiguation in the NLP
Interchange Format and Global Intelligent Content: Active Curation of
Language Resources using Linked Data as well as an early implementation
of a GATE-based NER/NEL evaluation framework by Dojchinovski and Kliegr.
Further funding for the maintenance, interlinking and publication of
Linguistic Linked Data as well as support and improvements of NIF is
available via the expiring LOD2 EU project, as well as the CSA EU
project called LIDER, which started in November 2013. Based on the
evidence of successful adoption presented in this thesis, we can expect a decent to high chance that Linked Data technology as well as the NIF standard will reach critical mass in the field of Natural Language
Processing and Language Resources.

CONTENTS

i introduction and background
1 introduction
1.1 Natural Language Processing
1.2 Open licenses, open access and collaboration
1.3 Linked Data in Linguistics
1.4 NLP for and by the Semantic Web – the NLP Interchange Format (NIF)
1.5 Requirements for NLP Integration
1.6 Overview and Contributions
2 background
2.1 The Working Group on Open Data in Linguistics (OWLG)
2.1.1 The Open Knowledge Foundation
2.1.2 Goals of the Open Linguistics Working Group
2.1.3 Open linguistics resources, problems and challenges
2.1.4 Recent activities and on-going developments
2.2 Technological Background
2.3 RDF as a data model
2.4 Performance and scalability
2.5 Conceptual interoperability
ii language resources as linked data
3 linked data in linguistics
3.1 Lexical Resources
3.2 Linguistic Corpora
3.3 Linguistic Knowledgebases
3.4 Towards a Linguistic Linked Open Data Cloud
3.5 State of the Linguistic Linked Open Data Cloud in 2012
3.6 Querying linked resources in the LLOD
3.6.1 Enriching metadata repositories with linguistic features (Glottolog → OLiA)
3.6.2 Enriching lexical-semantic resources with linguistic information (DBpedia (→ POWLA) → OLiA)
4 DBpedia as a multilingual language resource: the case of the greek dbpedia edition
4.1 Current state of the internationalization effort
4.2 Language-specific design of DBpedia resource identifiers
4.3 Inter-DBpedia linking
4.4 Outlook on DBpedia Internationalization
5 leveraging the crowdsourcing of lexical resources for bootstrapping a linguistic linked data cloud
5.1 Related Work
5.2 Problem Description
5.2.1 Processing Wiki Syntax
5.2.2 Wiktionary
5.2.3 Wiki-scale Data Extraction
5.3 Design and Implementation
5.3.1 Extraction Templates
5.3.2 Algorithm
5.3.3 Language Mapping
5.3.4 Schema Mediation by Annotation with lemon
5.4 Resulting Data
5.5 Lessons Learned
5.6 Discussion and Future Work
5.6.1 Next Steps
5.6.2 Open Research Questions
6 nlp & dbpedia, an upward knowledge acquisition spiral
6.1 Knowledge acquisition and structuring
6.2 Representation of knowledge
6.3 NLP tasks and applications
6.3.1 Named Entity Recognition
6.3.2 Relation extraction
6.3.3 Question Answering over Linked Data
6.4 Resources
6.4.1 Gold and silver standards
6.5 Summary
iii the nlp interchange format (nif)
7 nif 2.0 core specification
7.1 Conformance checklist
7.2 Creation
7.2.1 Definition of Strings
7.2.2 Representation of Document Content with the nif:Context Class
7.3 Extension of NIF
7.3.1 Part of Speech Tagging with OLiA
7.3.2 Named Entity Recognition with ITS 2.0, DBpedia and NERD
7.3.3 lemon and Wiktionary2RDF
8 nif 2.0 resources and architecture
8.1 NIF Core Ontology
8.1.1 Logical Modules
8.2 Workflows
8.2.1 Access via REST Services
8.2.2 NIF Combinator Demo
8.3 Granularity Profiles
8.4 Further URI Schemes for NIF
8.4.1 Context-Hash-based URIs
9 evaluation and related work
9.1 Questionnaire and Developers Study for NIF 1.0
9.2 Qualitative Comparison with other Frameworks and Formats
9.3 URI Stability Evaluation
9.4 Related URI Schemes
iv the nlp interchange format in use
10 use cases and applications for nif
10.1 Internationalization Tag Set 2.0
10.1.1 ITS2NIF and NIF2ITS conversion
10.2 OLiA
10.3 RDFaCE
10.4 Tiger Corpus Navigator
10.4.1 Tools and Resources
10.4.2 NLP2RDF in 2010
10.4.3 Linguistic Ontologies
10.4.4 Implementation
10.4.5 Evaluation
10.4.6 Related Work and Outlook
10.5 OntosFeeder – a Versatile Semantic Context Provider for Web Content Authoring
10.5.1 Feature Description and User Interface Walkthrough
10.5.2 Architecture
10.5.3 Embedding Metadata
10.5.4 Related Work and Summary
10.6 RelFinder: Revealing Relationships in RDF Knowledge Bases
10.6.1 Implementation
10.6.2 Disambiguation
10.6.3 Searching for Relationships
10.6.4 Graph Visualization
10.6.5 Conclusion
11 publication of corpora using nif
11.1 Wikilinks Corpus
11.1.1 Description of the corpus
11.1.2 Quantitative Analysis with Google Wikilinks Corpus
11.2 RDFLiveNews
11.2.1 Overview
11.2.2 Mapping to RDF and Publication on the Web of Data
v conclusions
12 lessons learned, conclusions and future work
12.1 Lessons Learned for NIF
12.2 Conclusions
12.3 Future Work
|
163 |
Le Web de données et le Web sémantique à Bibliothèque et Archives nationales du Québec : constats et recommandations fondés sur l'initiative de la Bibliothèque nationale de France / The Web of Data and the Semantic Web at Bibliothèque et Archives nationales du Québec: findings and recommendations based on the Bibliothèque nationale de France initiative. St-Germain, Marielle, 05 1900 (has links)
This dissertation discusses the concepts and implementation of the Semantic Web and Linked Data within libraries. An analysis and definition of the technologies characterizing these concepts are first presented with the objective of clarifying them and ensuring a good understanding of the various issues arising for actors in the field. Then, the elements demonstrating the relevance and challenges for information professionals are described. The objective is to analyze the implementation process of a Linked Data project at the Bibliothèque nationale de France in order to propose a possible transposition to the context of Bibliothèque et Archives nationales du Québec, with a view to an application within the latter. A list of thirteen steps for the implementation of a library Linked Data project and a proposal for applying a software development methodology to these practices are then presented. Following this analysis, recommendations regarding the various stages of implementation are proposed.
|
164 |
Publikování bibliografických informací v souladu s principy linked data / Publishing of bibliographic information according to the Linked Data principles. Hladká, Jitka, January 2010 (has links)
Bibliographic data provide a well-established means of describing information resources. Nowadays, there is an obvious need to make these data more web-friendly and web-compatible, as the web is mostly seen as the only place where people seek information. The proposed solution is to change the traditional bibliographic data representation format. The Semantic Web initiative enhances the current web with machine-processable data semantics (meaning). The main goal is to enable more effective information processing by web applications, which brings many benefits to web users. Linked Data is seen as a practical means of reaching the Semantic Web vision. This initiative promotes best practices for publishing, connecting and sharing structured data on the web. This thesis reports on the transformation of the bibliographic record format in the case of the Database of Publishing Activities at the University of Economics, Prague (PCVSE). The benefits of the Linked Data publishing model are demonstrated on the realized projects. The main part describes practical experiences gained during the implementation of an RDF framework in the phases of data modelling, interlinking and exposing the dataset. The Linked Data publishing model is presented there as an optimal web-compatible bibliographic data representation format.
|
165 |
La recommandation des jeux de données basée sur le profilage pour le liage des données RDF / Profile-based Dataset Recommendation for RDF Data Linking. Ben Ellefi, Mohamed, 01 December 2016 (has links)
With the emergence of the Web of Data, most notably Linked Open Data (LOD), an abundance of data has become available on the web. However, LOD datasets and their inherent subgraphs vary heavily with respect to their size, topic and domain coverage, their schemas and their data dynamicity over time. To this extent, identifying suitable datasets which meet specific criteria has become an increasingly important, yet challenging task to support issues such as entity retrieval, semantic search and data linking. Particularly with respect to the interlinking issue, the current topology of the LOD cloud underlines the need for practical and efficient means to recommend suitable datasets: currently, only well-known reference graphs such as DBpedia (the most obvious target), YAGO or Freebase show a high number of in-links, while there exists a long tail of potentially suitable yet under-recognized datasets. This problem is due to the Semantic Web tradition in dealing with "finding candidate datasets to link to", where data publishers are expected to identify target datasets for interlinking. While an understanding of the nature of the content of specific datasets is a crucial prerequisite for the mentioned issues, we adopt in this dissertation the notion of a "dataset profile": a set of features that describe a dataset and allow the comparison of different datasets with regard to their represented characteristics. Our first research direction was to implement a collaborative-filtering-like dataset recommendation approach, which exploits both existing dataset topic profiles and traditional dataset connectivity measures in order to link LOD datasets into a global dataset-topic graph. This approach relies on the LOD graph in order to learn the connectivity behaviour between LOD datasets. However, experiments have shown that the current topology of the LOD cloud is far from complete enough to be considered as a ground truth and consequently as learning data. Facing the limits of the current topology of the LOD (as learning data), our research led us to break away from the topic-profile representation of the "learn to rank" approach and to adopt a new approach for candidate dataset identification, where the recommendation is based on the overlap of intensional profiles between different datasets. By intensional profile, we understand the formal representation of a set of schema concept labels that best describe a dataset and can potentially be enriched by retrieving the corresponding textual descriptions. This representation provides richer contextual and semantic information and allows similarities between profiles to be computed efficiently and inexpensively. We identify schema overlap with the help of a semantico-frequential concept similarity measure and a ranking criterion based on tf*idf cosine similarity. The experiments, conducted over all available linked datasets on the LOD cloud, show that our method achieves an average precision of up to 53% for a recall of 100%. Furthermore, our method returns the mappings between the schema concepts across datasets, a particularly useful input for the data linking step. In order to ensure high-quality representative dataset schema profiles, we introduce Datavore, a tool oriented towards metadata designers that provides ranked lists of vocabulary terms to reuse in the data modeling process, together with additional metadata and cross-term relations. The tool relies on the Linked Open Vocabularies (LOV) ecosystem for acquiring vocabularies and metadata and is made available to the community.
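To illustrate the ranking principle, the sketch below scores hypothetical intensional profiles (bags of schema concept labels) against a source profile by tf*idf cosine similarity using scikit-learn; the profiles are invented for illustration and are not taken from the thesis.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    profiles = {
        "source":   "city country populated place administrative region",
        "dbpedia":  "person place city country organisation creative work species",
        "geonames": "populated place country administrative division spot feature",
    }
    names = list(profiles)
    matrix = TfidfVectorizer().fit_transform([profiles[n] for n in names])
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    for name, score in sorted(zip(names[1:], scores), key=lambda p: -p[1]):
        print(f"{name}\t{score:.2f}")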
|
166 |
Topological stability and textual differentiation in human interaction networks: statistical analysis, visualization and linked data / Estabilidade topológica e diferenciação textual em redes de interação humana: análise estatística, visualização e dados ligados. Fabbri, Renato, 08 May 2017 (has links)
This work reports on stable (or invariant) topological properties and textual differentiation in human interaction networks, with benchmarks derived from public email lists. Activity along time and topology were observed in snapshots along a timeline and at different scales. Our analysis shows that activity is practically the same for all networks across timescales ranging from seconds to months. The principal components of the participants in the topological metrics space remain practically unchanged as different sets of messages are considered. The activity of participants follows the expected scale-free outline, thus yielding the hub, intermediary and peripheral classes of vertices by comparison against the Erdős-Rényi model. The relative sizes of these three sectors are essentially the same for all email lists and the same over time. Typically, 3-12% of the vertices are hubs, 15-45% are intermediary and 44-81% are peripheral vertices. Texts from each of these sectors are shown to be very different through direct measurements and through an adaptation of the Kolmogorov-Smirnov test. These properties are consistent with the literature and may be general for human interaction networks, which has important implications for establishing a typology of participants based on quantitative criteria. For guiding and supporting this research, we also developed a visualization method for dynamic networks based on animations. To facilitate verification and further steps in the analyses, we supply a linked data representation of data related to our results.
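The following sketch illustrates the kind of degree-based sectioning described above on a synthetic scale-free graph, using thresholds derived from a comparable Erdős-Rényi graph; both the synthetic graph and the exact criterion are assumptions for illustration and may differ from those used in the thesis.

    import networkx as nx

    g = nx.barabasi_albert_graph(1000, 3, seed=42)   # stand-in for an email interaction network
    n, m = g.number_of_nodes(), g.number_of_edges()
    p = 2 * m / (n * (n - 1))
    er_mean = (n - 1) * p                            # expected degree in a comparable ER graph
    er_std = ((n - 1) * p * (1 - p)) ** 0.5

    sectors = {"hub": 0, "intermediary": 0, "peripheral": 0}
    for _, degree in g.degree():
        if degree > er_mean + er_std:
            sectors["hub"] += 1
        elif degree >= er_mean - er_std:
            sectors["intermediary"] += 1
        else:
            sectors["peripheral"] += 1
    print({sector: f"{100 * count / n:.1f}%" for sector, count in sectors.items()})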
|
167 |
Uma proposta de publicação de dados do orçamento público na Web / A proposal for public budget data publishing in the Web. Santana, Marcelo Tavares de, 04 December 2013 (has links)
This work presents a proposal for publishing public budget execution data in accordance with Brazilian legislation. From the laws and from the literature review, requirements were gathered that must be considered in a data publication that is machine-processable. Data dimensions such as data quality, metadata and taxonomy are presented, as well as the Open Government Data and Linked Data approaches. The resulting publication proposal, besides being aligned with the studied requirements, provides for machine processing at three levels: data processing, data validation and the collection of datasets. With the selected technologies, such as XBRL and XSLT, a model was reached that met fourteen of the gathered requirements, opening the way for studies such as a network of datasets based on this proposal.
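Purely as an illustration of the transformation level, the sketch below (with hypothetical file names) applies an XSLT stylesheet to an XBRL instance of budget-execution data using lxml; the resulting document could then feed the validation and dataset-collection levels.

    from lxml import etree

    stylesheet = etree.XSLT(etree.parse("orcamento.xsl"))        # hypothetical XSLT stylesheet
    instance = etree.parse("execucao_orcamentaria.xbrl")         # hypothetical XBRL instance
    result = stylesheet(instance)                                # transform into the publication format
    print(str(result)[:500])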
|
168 |
Towards the French Biomedical Ontology Enrichment / Vers l'enrichissement d'ontologies biomédicales françaises. Lossio-Ventura, Juan Antonio, 09 November 2015 (has links)
In the biomedical domain, Big Data raises a major issue: the analysis of large volumes of heterogeneous data (e.g. video, audio, text, image). Ontologies, conceptual models of reality, can play a crucial role in biomedicine to automate data processing, querying, and the matching of heterogeneous data. Various English resources exist, but considerably fewer are available in French, and there is a strong lack of related tools and services to exploit them. Initially, ontologies were built manually. In recent years, a few semi-automatic methodologies have been proposed. Semi-automatic construction/enrichment of ontologies is mostly induced from texts by using natural language processing (NLP) techniques. NLP methods have to take into account the lexical and semantic complexity of biomedical data: (1) lexical complexity refers to the complex phrases to be taken into account, (2) semantic complexity refers to sense and context induction for the terminology. In this thesis, we propose methodologies for the enrichment/construction of biomedical ontologies based on two main contributions, in order to tackle the previously mentioned challenges. The first contribution concerns the automatic extraction of specialized biomedical terms (lexical complexity) from corpora. New ranking measures for single- and multi-word term extraction methods have been proposed and evaluated. In addition, we present the BioTex software that implements the proposed measures. The second contribution concerns concept extraction and the semantic linkage of the extracted terminology (semantic complexity). This work seeks to induce semantic concepts for new candidate terms and to find their semantic links, i.e. the relevant locations of the new candidate terms in an existing biomedical ontology. We propose a methodology that integrates new terms into the MeSH ontology. The experiments conducted on real data, with quantitative and qualitative evaluations by experts and non-experts, highlight the relevance of the contributions.
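For illustration only, the sketch below ranks extracted multi-word term candidates with a plain tf*idf baseline; it is not one of the ranking measures proposed in the thesis, and the tiny corpus is invented for illustration.

    import math

    corpus = [
        "infection des voies respiratoires et insuffisance cardiaque",
        "insuffisance cardiaque aigue chez le patient age",
        "traitement de l infection des voies respiratoires",
    ]
    candidates = ["infection des voies respiratoires", "insuffisance cardiaque", "patient age"]

    def tf(term, doc):
        return doc.count(term)

    def idf(term, docs):
        df = sum(1 for d in docs if term in d)
        return math.log(len(docs) / df) if df else 0.0

    scores = {t: sum(tf(t, d) for d in corpus) * (idf(t, corpus) + 1.0) for t in candidates}
    for term, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{score:.2f}  {term}")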
|
170 |
Identification automatique d'entités pour l'enrichissement de contenus textuels / Automatic identification of entities for the enrichment of textual content. Stern, Rosa, 28 June 2013 (has links) (PDF)
This thesis proposes a method and a system for identifying the entities (persons, places, organizations) mentioned in the textual content produced by Agence France Presse, with a view to the automatic enrichment of that content. The different fields concerned by this task, as well as by the goal pursued by the actors of digital text publishing, are presented and related to one another: the Semantic Web, Information Extraction and in particular Named Entity Recognition (NER), Semantic Annotation, and Entity Linking. Building on this study, the industrial need expressed by Agence France Presse is turned into the specifications required to develop a solution based on Natural Language Processing tools. The approach adopted for identifying the targeted entities is then described: we propose the design of a system that delegates the NER step to any existing module, whose results, possibly combined with those of other modules, are assessed by a Linking module able (i) to align a given mention with the entity it denotes within a previously built inventory, (ii) to detect a denotation that has no alignment in this inventory, and (iii) to challenge the denotational reading of a mention (detection of false positives). The Nomos system is developed for this purpose for the processing of French data. Its design also leads to the construction and use of resources anchored in the Linked Data network, as well as of a rich knowledge base about the entities concerned.
|