151
From Information Extraction to Knowledge Discovery: Semantic Enrichment of Multilingual Content with Linked Open Data
De Wilde, Max 23 October 2015 (has links)
Discovering relevant knowledge in unstructured text is not a trivial task. Search engines relying on full-text indexing of content reach their limits when confronted with poor quality, ambiguity, or multiple languages. Some of these shortcomings can be addressed by information extraction and related natural language processing techniques, but these still fall short of adequate knowledge representation. In this thesis, we defend a generic approach striving to be as language-independent, domain-independent, and content-independent as possible. To reach this goal, we propose to disambiguate terms with their corresponding identifiers in Linked Data knowledge bases, paving the way for full-scale semantic enrichment of textual content. The added value of our approach is illustrated with a comprehensive case study based on a trilingual historical archive, addressing constraints of data quality, multilingualism, and language evolution. A proof-of-concept implementation is also proposed in the form of a Multilingual Entity/Resource Combiner & Knowledge eXtractor (MERCKX), demonstrating to a certain extent the general applicability of our methodology to any language, domain, and type of content. / Doctorat en Information et communication / info:eu-repo/semantics/nonPublished
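As a hedged illustration of the disambiguation idea, linking a textual term to a Linked Data identifier, the following Python sketch queries DBpedia's public SPARQL endpoint for candidate resources matching a label. It is not the MERCKX pipeline itself: the endpoint, the exact-label matching and the example term are assumptions made for the sake of the example, and a real disambiguation step would still have to rank the returned candidates using context.

    # Candidate lookup for a term against DBpedia's public SPARQL endpoint.
    # Illustrative sketch only; MERCKX's actual pipeline is not shown here.
    from SPARQLWrapper import SPARQLWrapper, JSON

    def candidate_identifiers(term, lang="en", limit=10):
        """Return DBpedia URIs whose rdfs:label exactly matches the term."""
        sparql = SPARQLWrapper("https://dbpedia.org/sparql")
        sparql.setQuery("""
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT DISTINCT ?resource WHERE {
                ?resource rdfs:label "%s"@%s .
            } LIMIT %d
        """ % (term, lang, limit))
        sparql.setReturnFormat(JSON)
        results = sparql.query().convert()
        return [b["resource"]["value"] for b in results["results"]["bindings"]]

    if __name__ == "__main__":
        # A real disambiguation step would rank these candidates using context.
        for uri in candidate_identifiers("Waterloo", lang="en"):
            print(uri)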
152
Nouvelles méthodes pour l'évaluation, l'évolution et l'interrogation des bases du Web des données / New methods to evaluate, check and query the Web of data
Maillot, Pierre 26 November 2015 (has links)
The Web of Data offers an environment to share and broadcast data in a framework that allows the data to be exploited by humans as well as by machines. This is possible thanks to RDF, which formats data into elementary sentences of the form (subject, relation, object), called triples. Bases of the Web of Data, called RDF bases, are sets of triples. In an RDF base, the ontology, i.e. the structural data, organizes the description of the factual data. The number and size of the bases of the Web of Data have risen constantly since its creation in 2001. This increase has accelerated since the apparition of the Linked Data movement in 2008, which promotes the sharing and interlinking of publicly available bases on the Internet. These bases cover varied domains such as encyclopaedic (e.g. Wikipedia), governmental or bibliographic data. The use and the updating of the data in these bases are carried out by communities of users linked by a common domain of interest. This community-based exploitation is made without adequate solutions to evaluate the quality of new data, to check the current state of a base, or to query a set of bases together. This thesis proposes three methods to support the development, at both the factual and the ontological level, and the querying of bases of the Web of Data. We first propose a method to evaluate the quality of modifications of factual data during an update by a contributor. We then propose a method designed to help an expert examine a base by highlighting groups of factual data in conflict with the ontology, so that the expert guiding the evolution of the base can modify either the ontology or the data. Finally, we propose a method for distributed querying that limits the sending of queries to the bases that may contain answers.
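A minimal sketch of the second idea, highlighting factual data in conflict with the ontology, is given below in Python with rdflib. The ex: namespace and the sample triples are invented for the illustration; the thesis' actual conflict-detection method is more general than this single rdfs:range check.

    # Minimal sketch: flag triples whose object conflicts with the declared rdfs:range.
    # The ex: data below is invented for illustration only.
    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Ontology (structural data): birthPlace is expected to point to a Place.
    g.add((EX.birthPlace, RDFS.range, EX.Place))
    # Factual data: one consistent statement, one conflicting statement.
    g.add((EX.Brussels, RDF.type, EX.Place))
    g.add((EX.alice, EX.birthPlace, EX.Brussels))
    g.add((EX.bob, EX.birthPlace, EX.carol))      # carol is typed as a Person
    g.add((EX.carol, RDF.type, EX.Person))

    conflicts = g.query("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?s ?p ?o WHERE {
            ?p rdfs:range ?cls .
            ?s ?p ?o .
            FILTER NOT EXISTS { ?o a ?cls }
        }
    """)
    for s, p, o in conflicts:
        print("possible conflict:", s, p, o)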
153
La gestion des données d'autorité archivistiques dans le cadre du Web de données
Chardonnens, Anne 15 December 2020 (has links) (PDF)
The subject of this thesis is the management of authority records for persons. The research was conducted in an archival context in transition, which was marked by the evolution of international standards of archival description and a shift towards the application of knowledge graphs. The aim of this thesis is to explore how the archival sector can benefit from the developments concerning Linked Data in order to ensure the sustainable management of authority records. Attention is not only devoted to the creation of the records and how they are made available, but also to their maintenance and their interlinking with other resources. The first part of this thesis addresses the state of the art of the developments concerning the international standards of archival description as well as those regarding the Wikibase ecosystem. The second part presents an analysis of the possibilities and limits associated with an approach in which the free software Wikibase is used. The analysis is based on an empirical study carried out with data of the Study and Documentation Centre War and Contemporary Society (CegeSoma). It explores the options that are available to institutions that have limited resources and that have not yet implemented Linked Data. Datasets that contain information about people linked to the Second World War were used to examine the different stages involved in the publication of data as Linked Open Data. The experiment carried out in the second part of the thesis shows how a knowledge base driven by software such as Wikibase streamlines the creation of multilingual structured authority data. Examples illustrate how these entities can then be reused and enriched by using external data in interfaces aimed at the general public. This thesis highlights the possibilities of Wikibase, particularly in the context of data maintenance, without ignoring the limitations associated with its use. Due to its empirical nature and the formulated recommendations, this thesis contributes to the efforts and reflections carried out within the framework of the transition of archival metadata. / Doctorat en Information et communication / info:eu-repo/semantics/nonPublished
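To give a concrete sense of how a Wikibase-backed knowledge base can be queried, the Python sketch below sends a SPARQL query to the public Wikidata Query Service. This is an illustration only: the endpoint, the property P570 (date of death) and the class Q5 (human) belong to Wikidata rather than to the CegeSoma instance discussed in the thesis, and the example entity is chosen arbitrarily.

    # Query a Wikibase SPARQL endpoint (here: Wikidata) for a person's date of death.
    # Illustrative only; the thesis' own Wikibase instance and data model differ.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://query.wikidata.org/sparql",
                             agent="authority-data-example/0.1")
    endpoint.setQuery("""
        SELECT ?person ?death WHERE {
            ?person wdt:P31 wd:Q5 ;              # instance of: human
                    rdfs:label "Anne Frank"@en ;
                    wdt:P570 ?death .            # date of death
        } LIMIT 5
    """)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["person"]["value"], row["death"]["value"])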
154
Structured feedback: a distributed protocol for feedback and patches on the Web of Data
Arndt, Natanael, Junghanns, Kurt, Meissner, Roy, Frischmuth, Philipp, Radtke, Norman, Frommhold, Marvin, Martin, Michael 23 June 2017 (has links)
The World Wide Web is an infrastructure for publishing and retrieving information through web resources. It evolved from the static Web 1.0 into a multimodal and interactive communication and information space, better known as Web 2.0, which is used to collaboratively contribute and discuss web resources. The evolution into a Semantic Web (Web 3.0) is still ongoing. One of its remarkable advantages is the decentralized and interlinked composition of its data. Yet, in contrast to its data distribution, workflows and technologies for decentralized collaborative contribution are missing. In this paper we propose the Structured Feedback protocol as an interactive addition to the Web of Data. It offers support for users to contribute to the evolution of web resources by providing structured data artifacts as patches for web resources, as well as simple plain-text comments. Based on this approach, it enables crowd-supported quality assessment and web data cleansing processes in an ad-hoc fashion that most web users are familiar with.
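To make the notion of a structured feedback artifact more tangible, the Python/rdflib sketch below builds a plain-text comment together with a small patch about a target web resource. The fb: vocabulary is a placeholder invented for this illustration, since the protocol's actual vocabulary is not reproduced in this abstract.

    # Sketch of a structured feedback artifact: a plain-text comment plus a small
    # patch (a triple to delete and a triple to insert) about a target resource.
    # The fb: vocabulary is a placeholder, not the protocol's published vocabulary.
    from rdflib import Graph, Literal, Namespace, URIRef, RDF

    FB = Namespace("http://example.org/feedback#")

    target = URIRef("http://example.org/dataset/resource/42")
    feedback = URIRef("http://example.org/feedback/2017-06-23-001")

    g = Graph()
    g.bind("fb", FB)
    g.add((feedback, RDF.type, FB.Feedback))
    g.add((feedback, FB.about, target))
    g.add((feedback, FB.comment, Literal("The population value looks outdated.")))
    # A structured patch: one triple to remove, one to insert.
    g.add((feedback, FB.deleteTriple, Literal(
        "<http://example.org/dataset/resource/42> <http://example.org/population> 50000 .")))
    g.add((feedback, FB.insertTriple, Literal(
        "<http://example.org/dataset/resource/42> <http://example.org/population> 52000 .")))

    print(g.serialize(format="turtle"))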
155
OpenResearch: collaborative management of scholarly communication metadata
Vahdati, Sahar, Arndt, Natanael, Auer, Sören, Lange, Christoph 01 August 2017 (has links)
Scholars often need to search for matching, high-profile scientific events to publish their research results. Information about topical focus and quality of events is not made sufficiently explicit in the existing communication channels where events are announced. Therefore, scholars have to spend a lot of time on reading and assessing calls for papers but might still not find the right event. Additionally, events might be overlooked because of the large number of events announced every day. We introduce OpenResearch, a crowdsourcing platform that supports researchers in collecting, organizing, sharing and disseminating information about scientific events in a structured way. It enables quality-related queries over a multidisciplinary collection of events according to a broad range of criteria such as acceptance rate, sustainability of event series, and reputation of people and organizations. Events are represented in different views using map extensions, calendar and timeline visualizations. We have systematically evaluated the timeliness, usability and performance of OpenResearch.
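A quality-related query of the kind mentioned above might look as follows in Python with rdflib. The oe: properties and the sample events are invented for the example; OpenResearch's real schema is not shown in this abstract.

    # Sketch: filter scientific events by acceptance rate over an invented schema.
    # The oe: vocabulary and the sample data are placeholders for illustration.
    from rdflib import Graph, Literal, Namespace, RDF, XSD

    OE = Namespace("http://example.org/openresearch#")
    g = Graph()

    events = [("ISWC2017", 0.21), ("DemoWorkshop2017", 0.55)]
    for name, rate in events:
        ev = OE[name]
        g.add((ev, RDF.type, OE.Event))
        g.add((ev, OE.acceptanceRate, Literal(rate, datatype=XSD.decimal)))

    selective = g.query("""
        PREFIX oe: <http://example.org/openresearch#>
        SELECT ?event ?rate WHERE {
            ?event a oe:Event ;
                   oe:acceptanceRate ?rate .
            FILTER (?rate < 0.3)
        }
    """)
    for event, rate in selective:
        print(event, rate)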
156
[pt] MANUTENÇÃO DE LINKS SAMEAS MATERIALIZADOS UTILIZANDO VISÕES / [en] MATERIALIZED SAMEAS LINK MAINTENANCE WITH VIEWS
ELISA SOUZA MENENDEZ 11 February 2016 (has links)
[en] In the Linked Data field, data publishers frequently materialize sameAs links between two different datasets using link discovery tools. However, it may be difficult to specify linking conditions if the datasets have complex models. A possible solution lies in stimulating dataset administrators to publish simple predefined views to work as resource catalogues. A second problem is related to maintaining materialized sameAs linksets when the source datasets are updated. To help solve this second problem, this work presents a framework for maintaining views and linksets using an incremental strategy. The key idea is to re-compute links only for the updated resources that are part of the view. This work also describes an experiment to compare the performance of the incremental strategy with the full re-computation of views and linksets.
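The incremental strategy can be sketched in a few lines of Python with rdflib: only the sameAs links of resources reported as updated are dropped and recomputed against the target catalogue view. The matching rule (equality of rdfs:label) and the toy graphs are simplifying assumptions, not the framework described in the dissertation.

    # Sketch of incremental sameAs maintenance: re-link only the updated resources.
    # Matching on identical rdfs:label is a deliberate simplification.
    from rdflib import Graph, Literal, Namespace, RDFS, OWL

    SRC = Namespace("http://example.org/source/")
    TGT = Namespace("http://example.org/target/")

    source_view = Graph()   # simple view over the source dataset
    target_view = Graph()   # simple catalogue view over the target dataset
    linkset = Graph()       # materialized owl:sameAs links

    source_view.add((SRC.p1, RDFS.label, Literal("Ada Lovelace")))
    target_view.add((TGT.x9, RDFS.label, Literal("Ada Lovelace")))
    linkset.add((SRC.p1, OWL.sameAs, TGT.x7))   # stale link from a previous run

    def refresh_links(updated_resources):
        """Recompute sameAs links only for the resources that were updated."""
        for s in updated_resources:
            linkset.remove((s, OWL.sameAs, None))          # drop stale links
            label = source_view.value(s, RDFS.label)
            if label is None:
                continue
            for t in target_view.subjects(RDFS.label, label):
                linkset.add((s, OWL.sameAs, t))

    refresh_links({SRC.p1})
    for triple in linkset:
        print(triple)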
157
Linked Data in VRA Core 4.0: Converting VRA XML Records into RDF/XML
Mixter, Jeffrey 08 May 2013 (has links)
No description available.
158
Making a common graphical language for the validation of linked data. / Skapandet av ett generiskt grafiskt språk för validering av länkad data.
Echegaray, Daniel January 2017 (has links)
A variety of embedded systems is used within the design and construction of trucks at Scania. Because of their heterogeneity and complexity, such systems require the use of many software tools to support embedded systems development. These tools need to form a well-integrated and effective development environment in order to ensure that product data is consistent and correct across the developing organisation. A prototype is under development which adopts a linked data approach for data integration; more specifically, this prototype adopts the Open Services for Lifecycle Collaboration (OSLC) specification for data integration. The prototype allows users to design OSLC interfaces between product management tools and OSLC links between their data. The user is further allowed to apply constraints on the data conforming to the OSLC validation language Resource Shapes (ReSh). The problem is that the prototype conforms only to the language of Resource Shapes, whose constraints are often too coarse-grained for Scania's needs, and that there exists no standardised language for the validation of linked data. Thus, for framing this study, two research questions were formulated: (1) How can a common graphical language be created for supporting all validation technologies of RDF data? and (2) How can this graphical language support the automatic generation of RDF graphs? A case study is conducted where the specific case consists of a software tool named SESAMM-tool at Scania. The case study included a constraint language comparison and a prototype extension. Furthermore, a design science research strategy is followed, in which an effective artefact is sought to answer the stated research questions. Design science promotes an iterative process including implementation and evaluation. Data has been empirically collected in an iterative development process and evaluated using the methods of informed argument and controlled experiment, respectively, for the constraint language comparison and the extension of the prototype. Two constraint languages were investigated: Shapes Constraint Language (SHACL) and Shape Expressions (ShEx). The constraint language comparison concluded that SHACL is the constraint language with the larger domain of constraints, offering finer-grained constraints and the possibility of defining new constraints. This was based on measurements showing that SHACL constraints cover 89.5% of ShEx constraints, while ShEx constraints cover 67.8% of SHACL constraints. The SHACL and ShEx coverage of ReSh property constraints was measured at 75% and 50%, respectively. SHACL was recommended and chosen for extending the prototype. In extending the prototype, abstract superclasses were introduced into the underlying data model, with constraint language classes stated as subclasses; SHACL was added as such a subclass. This design offered increased code reuse within the prototype but gave rise to issues relating to the plug-in technologies that the prototype is based upon. The current solution still has the issue that properties of one constraint language may be added to classes of another constraint language.
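As a hedged illustration of what SHACL validation of RDF data looks like in practice, the Python sketch below uses the pySHACL library; it is not the SESAMM/OSLC prototype discussed in the thesis, and the shapes and data are invented for the example.

    # Minimal SHACL validation example using pySHACL (not the SESAMM/OSLC prototype).
    # The shapes graph requires every ex:Truck to carry exactly one ex:chassisNumber.
    from rdflib import Graph
    from pyshacl import validate

    shapes_ttl = """
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    ex:TruckShape a sh:NodeShape ;
        sh:targetClass ex:Truck ;
        sh:property [
            sh:path ex:chassisNumber ;
            sh:datatype xsd:string ;
            sh:minCount 1 ;
            sh:maxCount 1 ;
        ] .
    """

    data_ttl = """
    @prefix ex: <http://example.org/> .
    ex:truck1 a ex:Truck .            # missing ex:chassisNumber -> violation
    """

    shapes = Graph().parse(data=shapes_ttl, format="turtle")
    data = Graph().parse(data=data_ttl, format="turtle")

    conforms, report_graph, report_text = validate(data, shacl_graph=shapes)
    print("conforms:", conforms)
    print(report_text)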
159
Linked data for improving student experience in searching e-learning resources
Castellanos Ardila, Julieth Patricia January 2012 (has links)
The collection and the use of data on the internet for e-learning purposes are tasks performed by many people every day, because of their roles as teachers or students. The web provides several data sources with relevant information that could be used in educational environments, but the information is widely distributed or poorly structured. Also, resources on the web are diverse, sometimes of high quality, but sometimes not. These situations make the search for e-learning resources difficult and therefore time-consuming, because the search process – typing, reasoning, selecting, using resources, bookmarking, and so forth – is completely executed by humans, even though some of these steps could be executed by computers. Linked data provides designed practices for organizing and discovering information using the processing power of computers. At the same time, the linked data community provides data sets that are already connected, and this information could be consumed by people at any time as resources for e-learning purposes. The current article presents the findings of a master thesis that addresses linked data techniques as well as the techniques used by students when searching for e-learning resources on the internet. The resources used by the students, as well as the sources preferred, are compared with the current resources offered by the linked data community. Likewise, the strategies and techniques selected by the students are taken into account in order to establish the basic requirements of an e-learning collaborative environment prototype.
The outline of the thesis is:
Chapter 2 discusses the research methodology as well as the construction and administration of the research survey on which the requirements elicitation of this thesis is based.
Chapter 3 lays the groundwork for the rest of the thesis by presenting the principles and terminology of linked data, as well as related work about the internet in education, the availability of e-learning resources, and surveys about the use of the internet in e-learning resource searching.
Chapter 4 presents the investigation of the methods used by students for exploring and discovering e-learning resources, through the data analysis and interpretation of the survey.
Chapter 5 introduces the prototype design. It includes the prototype idea, the requirements specification using the data analysis of the survey, and the architecture of the e-learning collaborative environment using the assumptions reached in the literature review and the dereferenceable URIs found in the linked open data cloud diagram. The design of components in the environment is addressed in terms of UML diagrams.
Chapter 6 validates the requirements of the prototype.
Chapter 7 tackles the conclusions of the master thesis project in order to provide input for further research in the area. This chapter also shows the contributions to the e-learning world; the evaluation is based on the benefits identified by using this approach, and indications are given of what future work can be done to improve the results. The findings address the decision-making process for designing a new era of e-learning environments that enhance the selection of e-learning resources, taking into account the available technology and the information around the World Wide Web.
160
Integrating Natural Language Processing (NLP) and Language Resources Using Linked Data
Hellmann, Sebastian 12 January 2015 (links) (PDF)
This thesis is a compendium of scientific works and engineering
specifications that have been contributed to a large community of
stakeholders to be copied, adapted, mixed, built upon and exploited in
any way possible to achieve a common goal: Integrating Natural Language
Processing (NLP) and Language Resources Using Linked Data.
The explosion of information technology in the last two decades has led
to a substantial growth in quantity, diversity and complexity of
web-accessible linguistic data. These resources become even more useful
when linked with each other and the last few years have seen the
emergence of numerous approaches in various disciplines concerned with
linguistic resources and NLP tools. It is the challenge of our time to
store, interlink and exploit this wealth of data accumulated in more
than half a century of computational linguistics, of empirical,
corpus-based study of language, and of computational lexicography in all
its heterogeneity.
The vision of the Giant Global Graph (GGG) was conceived by Tim
Berners-Lee, aiming at connecting all data on the Web and allowing new
relations to be discovered between this openly accessible data. This vision
has been pursued by the Linked Open Data (LOD) community, where the
cloud of published datasets comprises 295 data repositories and more
than 30 billion RDF triples (as of September 2011).
RDF is based on globally unique and accessible URIs and it was
specifically designed to establish links between such URIs (or
resources). This is captured in the Linked Data paradigm that postulates
four rules: (1) Referred entities should be designated by URIs, (2)
these URIs should be resolvable over HTTP, (3) data should be
represented by means of standards such as RDF, and (4) a resource should
include links to other resources.
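These four rules can be observed in practice with a few lines of Python
using rdflib: a resource URI is dereferenced over HTTP, RDF is returned
via content negotiation, and the parsed graph contains links to further
resources. The sketch assumes network access and the continued
availability of the DBpedia resource it uses.

    # Dereference a Linked Data URI and list some of the links it contains.
    # Assumes network access and that the DBpedia resource is still being served.
    from rdflib import Graph, URIRef

    leipzig = URIRef("http://dbpedia.org/resource/Leipzig")

    g = Graph()
    g.parse(leipzig)   # rule 2: the URI resolves over HTTP; rule 3: RDF is returned

    print(len(g), "triples retrieved")
    # rule 4: the resource links to other resources, e.g. via owl:sameAs
    same_as = URIRef("http://www.w3.org/2002/07/owl#sameAs")
    for other in list(g.objects(leipzig, same_as))[:5]:
        print("links to:", other)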
Although it is difficult to precisely identify the reasons for the
success of the LOD effort, advocates generally argue that open licenses
as well as open access are key enablers for the growth of such a network
as they provide a strong incentive for collaboration and contribution by
third parties. In his keynote at BNCOD 2011, Chris Bizer argued that
with RDF the overall data integration effort can be “split between data
publishers, third parties, and the data consumer”, a claim that can be
substantiated by observing the evolution of many large data sets
constituting the LOD cloud.
As written in the acknowledgement section, parts of this thesis have
received extensive feedback from other scientists, practitioners and
industry in many different ways. The main contributions of this thesis
are summarized here:
Part I – Introduction and Background.
During his keynote at the Language Resources and Evaluation Conference in
2012, Sören Auer stressed the decentralized, collaborative, interlinked
and interoperable nature of the Web of Data. The keynote provides strong
evidence that Semantic Web technologies such as Linked Data are on their
way to becoming mainstream for the representation of language resources.
The jointly written companion publication for the keynote was later
extended as a book chapter in The People’s Web Meets NLP and serves as
the basis for “Introduction” and “Background”, outlining some stages of
the Linked Data publication and refinement chain. Both chapters stress
the importance of open licenses and open access as enablers for
collaboration and the ability to interlink data on the Web as a key feature
of RDF, and they provide a discussion of scalability issues and
decentralization. Furthermore, we elaborate on how conceptual
interoperability can be achieved by (1) re-using vocabularies (a small
sketch follows below), (2) agile
ontology development, (3) meetings to refine and adapt ontologies and
(4) tool support to enrich ontologies and match schemata.
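Point (1), the re-use of existing vocabularies, is illustrated by the
following Python sketch, which describes a resource with the widely used
FOAF and Dublin Core terms instead of minting new properties. The
concrete resource and values are invented for the example.

    # Conceptual interoperability through vocabulary re-use: describe a document
    # with FOAF and Dublin Core terms rather than newly invented properties.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, DCTERMS, RDF

    g = Graph()
    g.bind("foaf", FOAF)
    g.bind("dcterms", DCTERMS)

    doc = URIRef("http://example.org/papers/nif-overview")
    author = URIRef("http://example.org/people/jane-doe")

    g.add((doc, RDF.type, FOAF.Document))
    g.add((doc, DCTERMS.title, Literal("An overview of NIF", lang="en")))
    g.add((doc, DCTERMS.creator, author))
    g.add((author, RDF.type, FOAF.Person))
    g.add((author, FOAF.name, Literal("Jane Doe")))

    print(g.serialize(format="turtle"))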
Part II - Language Resources as Linked Data.
“Linked Data in Linguistics” and “NLP & DBpedia, an Upward Knowledge
Acquisition Spiral” summarize the results of the Linked Data in
Linguistics (LDL) Workshop in 2012 and the NLP & DBpedia Workshop in
2013 and give a preview of the MLOD special issue. In total, five
proceedings – three published at CEUR (OKCon 2011, WoLE 2012, NLP &
DBpedia 2013), one Springer book (Linked Data in Linguistics, LDL 2012)
and one journal special issue (Multilingual Linked Open Data, MLOD to
appear) – have been (co-)edited to create incentives for scientists to
convert and publish Linked Data and thus to contribute open and/or
linguistic data to the LOD cloud. Based on the disseminated call for
papers, 152 authors contributed one or more accepted submissions to our
venues and 120 reviewers were involved in peer-reviewing.
“DBpedia as a Multilingual Language Resource” and “Leveraging the
Crowdsourcing of Lexical Resources for Bootstrapping a Linguistic Linked
Data Cloud” contain this thesis’ contribution to the DBpedia Project in
order to further increase the size and inter-linkage of the LOD Cloud
with lexical-semantic resources. Our contribution comprises extracted
data from Wiktionary (an online, collaborative dictionary similar to
Wikipedia) in more than four languages (now six) as well as
language-specific versions of DBpedia, including a quality assessment of
inter-language links between Wikipedia editions and internationalized
content negotiation rules for Linked Data. In particular, this work
created the foundation for a DBpedia Internationalisation
Committee with members from over 15 different languages with the common
goal to push DBpedia as a free and open multilingual language resource.
Part III - The NLP Interchange Format (NIF).
"NIF 2.0 Core Specification", "NIF 2.0 Resources and Architecture" and
"Evaluation and Related Work" constitute one of the main contributions of
this thesis. The NLP Interchange Format (NIF) is an RDF/OWL-based format
that aims to achieve interoperability between Natural Language
Processing (NLP) tools, language resources and annotations. The core
specification describes which URI schemes and RDF
vocabularies must be used for (parts of) natural language texts and
annotations in order to create an RDF/OWL-based interoperability layer
with NIF, built upon Unicode code points in Normal Form C. The classes
and properties of the NIF Core Ontology formally define the relations
between text, substrings and their URI schemes, and an evaluation of
NIF is also included.
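The following Python sketch shows, in simplified form, what such an
RDF/OWL-based annotation layer looks like: a context resource holds the
full text, and a substring is addressed by a URI carrying its begin and
end offsets, counted in Unicode code points. The nif: namespace and
property names follow the published NIF Core Ontology, but the document
URI and the offset-based fragment identifiers are illustrative
assumptions rather than a normative example.

    # Simplified NIF-style annotation: a context string and an offset-addressed
    # substring linked to a DBpedia identifier. Offsets count Unicode code points.
    from rdflib import Graph, Literal, Namespace, URIRef, RDF, XSD

    NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")
    ITSRDF = Namespace("http://www.w3.org/2005/11/its/rdf#")

    text = "Leipzig is a city in Saxony."
    base = "http://example.org/doc1"
    context = URIRef(f"{base}#offset_0_{len(text)}")
    mention = URIRef(f"{base}#offset_0_7")          # the substring "Leipzig"

    g = Graph()
    g.bind("nif", NIF)
    g.bind("itsrdf", ITSRDF)

    g.add((context, RDF.type, NIF.Context))
    g.add((context, NIF.isString, Literal(text)))
    g.add((context, NIF.beginIndex, Literal(0, datatype=XSD.nonNegativeInteger)))
    g.add((context, NIF.endIndex, Literal(len(text), datatype=XSD.nonNegativeInteger)))

    g.add((mention, RDF.type, NIF.String))
    g.add((mention, NIF.referenceContext, context))
    g.add((mention, NIF.anchorOf, Literal("Leipzig")))
    g.add((mention, NIF.beginIndex, Literal(0, datatype=XSD.nonNegativeInteger)))
    g.add((mention, NIF.endIndex, Literal(7, datatype=XSD.nonNegativeInteger)))
    # Link the mention to an external identifier (here: DBpedia).
    g.add((mention, ITSRDF.taIdentRef, URIRef("http://dbpedia.org/resource/Leipzig")))

    print(g.serialize(format="turtle"))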
In a questionnaire, we put questions to 13 developers using NIF. UIMA,
GATE and Stanbol are extensible NLP frameworks, and NIF was not yet able
to provide off-the-shelf NLP domain ontologies for all possible domains,
but only for the plugins used in this study. After inspecting the
software, the developers nevertheless agreed that NIF is adequate to
provide a generic RDF output using literal objects for
annotations. All developers were able to map the internal data structure
to NIF URIs to serialize RDF output (Adequacy). The development effort
in hours (ranging between 3 and 40 hours) as well as the number of code
lines (ranging between 110 and 445) suggests that the implementation of
NIF wrappers is easy and fast for an average developer. Furthermore the
evaluation contains a comparison to other formats and an evaluation of
the available URI schemes for web annotation.
In order to collect input from the wide group of stakeholders, a total
of 16 presentations were given with extensive discussions and feedback,
which has led to a constant improvement of NIF from 2010 until 2013.
After the release of NIF (Version 1.0) in November 2011, a total of 32
vocabulary employments and implementations for different NLP tools and
converters were reported (8 by the (co-)authors, including the Wiki-link
corpus, 13 by people participating in our survey, and 11 more that
we have heard of). Several roll-out meetings and tutorials were held
(e.g. in Leipzig and Prague in 2013) and are planned (e.g. at LREC
2014).
Part IV - The NLP Interchange Format in Use.
“Use Cases and Applications for NIF” and “Publication of Corpora using
NIF” describe 8 concrete instances where NIF has been successfully used.
One major contribution is the usage of NIF as the recommended RDF
mapping in the Internationalization Tag Set (ITS) 2.0 W3C standard
and the conversion algorithms from ITS to NIF and back. One outcome
of the discussions in the standardization meetings and telephone
conferences for ITS 2.0 was the conclusion that there was no
alternative RDF format or vocabulary other than NIF with the required
features to fulfill the working group charter. Five further uses of NIF
are described for the Ontology of Linguistic Annotations (OLiA), the
RDFaCE tool, the Tiger Corpus Navigator, the OntosFeeder and
visualisations of NIF using the RelFinder tool. These 8 instances
provide an implemented proof-of-concept of the features of NIF.
The latter starts by describing the conversion and hosting of the huge
Google Wikilinks corpus with 40 million annotations for 3 million web sites.
The resulting RDF dump contains 477 million triples in a 5.6 GB
compressed dump file in Turtle syntax. It also describes how NIF can be
used to publish extracted facts from news feeds in the RDFLiveNews tool as
Linked Data.
Part V - Conclusions.
This part provides lessons learned for NIF, conclusions and an outlook on future
work. Most of the contributions are already summarized above. One
particular aspect worth mentioning is the increasing number of
NIF-formatted corpora for Named Entity Recognition (NER) that have come
into existence after the publication of the main NIF paper Integrating
NLP using Linked Data at ISWC 2013. These include the corpora converted
by Steinmetz, Knuth and Sack for the NLP & DBpedia workshop and an
OpenNLP-based CoNLL converter by Brümmer. Furthermore, we are aware of
three LREC 2014 submissions that leverage NIF: NIF4OGGD - NLP
Interchange Format for Open German Governmental Data, N^3 – A Collection
of Datasets for Named Entity Recognition and Disambiguation in the NLP
Interchange Format and Global Intelligent Content: Active Curation of
Language Resources using Linked Data as well as an early implementation
of a GATE-based NER/NEL evaluation framework by Dojchinovski and Kliegr.
Further funding for the maintenance, interlinking and publication of
Linguistic Linked Data as well as support and improvements of NIF is
available via the expiring LOD2 EU project, as well as the CSA EU
project called LIDER, which started in November 2013. Based on the
evidence of successful adoption presented in this thesis, we can expect
a good chance that Linked Data technology as well as the NIF standard
will reach critical mass in the field of Natural Language Processing and
Language Resources.