41

VIKA - Konzeptstudien eines virtuellen Konstruktionsberaters für additiv zu fertigende Flugzeugstrukturbauteile

Steffen, Johann 06 September 2021 (has links)
The subject of this thesis is the conceptual design of a virtual application that enables users in aircraft structural design, in the context of additive manufacturing, to make important decisions in the part development process interactively and intuitively. Depending on the use case, the application should be able to adapt the information it provides to the particular requirements and needs of the user.
42

Standard and Non-standard reasoning in Description Logics

Brandt, Sebastian-Philipp 05 April 2006 (has links)
The present work deals with Description Logics (DLs), a class of knowledge representation formalisms used to represent and reason about classes of individuals and relations between such classes in a formally well-defined way. We provide novel results in three main directions. (1) Tractable reasoning revisited: in the 1990s, DL research largely answered the question of practically relevant yet tractable DL formalisms in the negative. Motivated by novel application domains, especially the Life Sciences, and a surprising tractability result by Baader, we have revisited this question, this time looking in a new direction: general terminologies (TBoxes) and extensions thereof, defined over the DL EL and its extensions. As the main positive result, we devise EL++(D)-CBoxes as a tractable DL formalism with optimal expressivity, in the sense that every additional standard DL constructor, every extension of the TBox formalism, and every more powerful concrete domain makes reasoning intractable. (2) Non-standard inferences for knowledge maintenance: non-standard inferences, such as matching, can support domain experts in maintaining DL knowledge bases in a structured and well-defined way. In order to extend their availability and promote their use, the present work advances the state of the art of non-standard inferences in both theory and implementation. Our main results are implementations and performance evaluations of known matching algorithms for the DLs ALE and ALN, optimal non-deterministic polynomial-time algorithms for matching under acyclic side conditions in ALN and its sublanguages, and optimal algorithms for matching w.r.t. cyclic (and hybrid) EL-TBoxes. (3) Non-standard inferences over general concept inclusion (GCI) axioms: the utility of GCIs in modern DL knowledge bases and the relevance of non-standard inferences to knowledge maintenance naturally raise the question of a tractable DL formalism in which both can be provided. As the main result, we propose hybrid EL-TBoxes as a solution to this hitherto open question.
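To make the formalism concrete: EL provides only conjunction and existential restriction as concept constructors, which is what keeps subsumption reasoning polynomial. A small illustrative TBox (a made-up example, not one from the thesis):

```latex
% Hypothetical EL TBox; conjunction and existential restriction
% are the only concept constructors used.
\begin{align*}
\mathsf{Parent} &\equiv \mathsf{Person} \sqcap \exists \mathsf{hasChild}.\mathsf{Person}\\
\mathsf{Grandparent} &\equiv \mathsf{Person} \sqcap \exists \mathsf{hasChild}.\mathsf{Parent}
\end{align*}
% An EL reasoner derives the subsumption
% Grandparent \sqsubseteq Parent in polynomial time; adding further
% standard constructors (e.g. negation or value restrictions) is
% precisely the kind of extension that destroys tractability.
```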
43

Belief Change in Reasoning Agents: Axiomatizations, Semantics and Computations

Jin, Yi 17 January 2007 (has links)
The capability of changing beliefs upon new information in a rational and efficient way is crucial for an intelligent agent. Belief change has therefore been one of the central research fields in Artificial Intelligence (AI) for over two decades. In the AI literature, two different kinds of belief change operations have been intensively investigated: belief update, which deals with situations where the new information describes changes of the world; and belief revision, which assumes the world is static. As another important research area in AI, reasoning about actions mainly studies the problem of representing and reasoning about the effects of actions. These two research fields are closely related and apply a common underlying principle: an agent should change its beliefs (knowledge) as little as possible whenever an adjustment is necessary. This opens up the possibility of reusing the ideas and results of one field in the other, and vice versa. This thesis aims to develop a general framework and devise computational models that are applicable in reasoning about actions. Firstly, I propose a new framework for iterated belief revision by adding a new postulate to the existing AGM/DP postulates, which provides general criteria for the design of iterated revision operators. Secondly, based on the new framework, a concrete iterated revision operator is devised. The semantic model of the operator gives clear intuitions and helps to show that it satisfies the desirable postulates. I also show that the computational model of the operator is almost optimal in time and space complexity. In order to deal with the belief change problem in multi-agent systems, I introduce the concept of mutual belief revision, which is concerned with information exchange among agents. A concrete mutual revision operator is devised by generalizing the iterated revision operator. Likewise, a semantic model is used to show the intuition behind, and the many desirable properties of, the mutual revision operator, and the complexity of its computational model is formally analyzed. Finally, I present a belief update operator which takes into account two important problems of reasoning about action, namely disjunctive updates and domain constraints. Again, the update operator is presented with both a semantic model and a computational model.
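To give a feel for model-based revision operators (a toy sketch under simplifying assumptions, not the operator devised in the thesis), the following snippet revises a propositional belief base in the style of Dalal: among the models of the new information, it keeps those closest in Hamming distance to the models of the old beliefs.

```python
from itertools import product

# Toy Dalal-style revision: keep the models of the new information that
# differ minimally (in Hamming distance) from some model of the old beliefs.
# Illustrative only -- not the concrete operator devised in the thesis.

def models(formula, variables):
    """All truth assignments over `variables` satisfying `formula` (a callable)."""
    return [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))
            if formula(dict(zip(variables, values)))]

def hamming(m1, m2):
    return sum(m1[v] != m2[v] for v in m1)

def revise(old, new, variables):
    """Models of `new` at minimal Hamming distance from the models of `old`."""
    old_models, new_models = models(old, variables), models(new, variables)
    dists = [min(hamming(m, o) for o in old_models) for m in new_models]
    return [m for m, d in zip(new_models, dists) if d == min(dists)]

# Old beliefs: p and q. New information: not (p and q).
print(revise(lambda m: m["p"] and m["q"],
             lambda m: not (m["p"] and m["q"]),
             ["p", "q"]))
# Result: exactly one of p, q is given up, so the belief "p or q" survives --
# the minimal-change principle mentioned above.
```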
44

Relational Exploration: Combining Description Logics and Formal Concept Analysis for Knowledge Specification

Rudolph, Sebastian 01 December 2006 (has links)
In the face of the growing amount of information in today's society, the task of specifying human knowledge in a way that can be unambiguously processed by computers becomes more and more important. Two acknowledged fields in this evolving area of Knowledge Representation are Description Logics (DL) and Formal Concept Analysis (FCA). While DL concentrates on characterizing domains via logical statements and inferring knowledge from these characterizations, FCA builds conceptual hierarchies on the basis of present data. This work introduces Relational Exploration, a method for acquiring complete relational knowledge about a domain of interest by successively consulting a domain expert without ever asking redundant questions. This is achieved by combining DL and FCA: DL formalisms are used for defining FCA attributes, while FCA exploration techniques are deployed to obtain or refine DL knowledge specifications.
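The resulting question-answer loop can be pictured with a small sketch (a strongly simplified, hypothetical rendering: in Relational Exploration the attributes are DL concept descriptions, and the real algorithm enumerates premises in lectic order):

```python
# Skeleton of an FCA-style exploration dialogue (illustrative simplification,
# not Rudolph's algorithm). The expert either confirms a proposed implication
# or refutes it with a counterexample object; questions whose answer is
# already derivable are skipped, so no redundant question is asked.

def close(attrs, implications):
    """Close an attribute set under accepted implications (premise, conclusion)."""
    closed, changed = set(attrs), True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

def explore(candidate_premises, objects, ask_expert):
    """`objects` maps object names to attribute sets (plain Python sets);
    `ask_expert(premise, conclusion)` returns None to accept the implication,
    or a (name, attributes) counterexample."""
    implications = []
    for premise in map(set, candidate_premises):
        having = [attrs for attrs in objects.values() if premise <= attrs]
        if not having:
            continue
        conclusion = set.intersection(*having) - premise
        if conclusion <= close(premise, implications):
            continue  # already derivable -- asking would be redundant
        answer = ask_expert(premise, conclusion)
        if answer is None:
            implications.append((premise, conclusion))
        else:
            name, attrs = answer  # counterexample refutes the implication
            objects[name] = set(attrs)
    return implications
```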
45

Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis

Distel, Felix 27 April 2011 (has links)
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure algorithm and attribute exploration. The first of the three methods computes terminological knowledge from the data without any expert interaction. The other two methods use expert interaction: a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
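For reference, the Next-Closure algorithm at the core of these methods fits in a few lines; the sketch below is a generic textbook rendering (not code from the thesis) that enumerates all concept intents of a formal context in lectic order.

```python
# Next-Closure (Ganter): enumerate all closed attribute sets (concept
# intents) of a formal context in lectic order. Generic rendering.

def intent(attrs, context, attributes):
    """The closure A -> A'' in a context mapping objects to attribute sets."""
    objs = [g for g, a in context.items() if attrs <= a]
    if not objs:
        return set(attributes)
    return set.intersection(*(set(context[g]) for g in objs))

def next_closure(A, close, attributes):
    """Smallest closed set lectically greater than A, or None if A is last."""
    for i, m in reversed(list(enumerate(attributes))):
        if m in A:
            A = A - {m}
        else:
            B = close(A | {m})
            # B is the lectic successor iff it adds no attribute before m
            if all(b in A for b in B if attributes.index(b) < i):
                return B
    return None

def all_intents(context, attributes):
    close = lambda X: intent(X, context, attributes)
    A = close(set())
    while A is not None:
        yield A
        A = next_closure(A, close, attributes)

# Tiny example context: objects with their attribute sets.
context = {"duck": {"flies", "swims"}, "eagle": {"flies"}, "carp": {"swims"}}
for A in all_intents(context, ["flies", "swims"]):
    print(sorted(A))   # {}, then {swims}, {flies}, {flies, swims}
```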
46

Rechnerunterstützung für die Suche nach verarbeitungstechnischen Prinziplösungen

Majschak, Jens-Peter 04 November 1997 (has links)
The file provided here is unfortunately not complete; for technical reasons, the following appendices are not included: Appendix 3, concept hierarchy "verarbeitungstechnische Funktion" (p. 141); Appendix 4, concept hierarchy "Eigenschaftsänderung" (p. 144); Appendix 5, concept hierarchy "Verarbeitungsgut" (p. 149); Appendix 6, concept hierarchy "Verarbeitungstechnisches Prinzip" (p. 151). Please consult the print edition, available in the holdings of the SLUB Dresden: http://slubdd.de/katalog?TN_libero_mab21079933

Table of contents (translated from the German):
Abbreviations and symbols (p. 5)
1. Introduction (p. 7)
2. Support tools for the conceptual phase in processing machine design: general requirements and state of the art (p. 9)
2.1 The significance of the conceptual phase in processing machine design (p. 9)
2.2 General requirements for tools supporting the design engineer as a problem solver (p. 13)
2.3 Specifics of processing-technology problems (p. 17)
2.3.1 Processing-technology information in the design process of processing machines (p. 17)
2.3.2 Complexity of processing-technology problems (p. 19)
2.3.3 Indeterminacy of processing-technology problems (p. 21)
2.3.4 Descriptive specifics of processing-technology problem statements (p. 22)
2.4 Support tools for the conceptual phase and their suitability for processing machine design (p. 24)
2.4.1 Traditional tools for the search for solutions (p. 24)
2.4.1.1 Solution catalogues (p. 24)
2.4.1.2 Design methodology in the principle phase (p. 25)
2.4.2 Computer support for design with relevance to the conceptual phase (p. 28)
2.4.2.1 Brief overview of design support systems and their integration into higher-level systems (p. 28)
2.4.2.2 Computer support for analysis (p. 31)
2.4.2.3 Computer support for information provision (p. 32)
2.4.2.4 Computer support for synthesis (p. 34)
2.4.2.5 Computer support for evaluation and selection (p. 39)
2.4.2.6 Integrating systems with support for the conceptual phase (p. 41)
2.4.3 The "Wissensspeicher Verarbeitungstechnik" (processing-technology knowledge repository) (p. 43)
2.5 Conclusions from the analysis of the current state (p. 46)
3. Requirements for computer support of the principle phase of processing machine design (p. 47)
3.1 Determination of functions (p. 47)
3.1.1 Classification of the questions to be solved with the system (p. 47)
3.1.2 Requirements for functionality and dialogue design (p. 50)
3.2 Delimitation of scope (p. 54)
3.3 Requirements for knowledge representation (p. 57)
4. Information model of the processing-technology problem space (p. 61)
4.1 Overview of possible forms of representation (p. 61)
4.1.1 General overview (p. 61)
4.1.1.1 Differences between knowledge-based systems and other forms of knowledge representation (p. 61)
4.1.1.2 Algorithmic modelling (p. 62)
4.1.1.3 Relational modelling (p. 63)
4.1.1.4 Forms of representation in knowledge-based systems (p. 64)
4.1.2 The software used and its capabilities (p. 71)
4.2 Overview of the system architecture (p. 74)
4.2.1 Overall view (p. 74)
4.2.2 View model (p. 78)
4.2.3 Relational representation of principle information, characteristic values and parameters (p. 83)
4.2.4 Image information (p. 85)
4.2.5 Supplementary information in the user interface (p. 86)
4.3 Modelling knowledge components of the processing-technology domain (p. 87)
4.3.1 Representation of processing-technology functions (p. 87)
4.3.1.1 Forms of representation for processing-technology functions: significance, use, problems (p. 87)
4.3.1.2 The view "Verarbeitungstechnische Funktion" (p. 89)
4.3.1.3 The view "Eigenschaftsänderung" (p. 90)
4.3.2 Representation of information about processed goods (p. 93)
4.3.2.1 Description components and their use in the search for solutions (p. 93)
4.3.2.2 The view "Verarbeitungsgut" (p. 94)
4.3.2.3 Representation of properties of processed goods (p. 94)
4.3.3 Representation of processing-technology principles (p. 96)
4.3.3.1 The view "Verarbeitungstechnisches Prinzip" (p. 96)
4.3.3.2 The detailed description of processing-technology principles (p. 97)
4.3.4 Processing-technology parameters (p. 99)
4.3.5 Representation of interrelations by means of rules (p. 100)
4.3.6 Support for fine selection (p. 102)
5. Problem solving with the processing-technology advisory system (p. 104)
5.1 Interactive problem preparation (p. 104)
5.2 Determining the solution set: coarse selection (p. 109)
5.3 Fine selection (p. 110)
5.4 Processing the results (p. 112)
6. Knowledge acquisition (p. 113)
6.1 Problems in knowledge acquisition (p. 113)
6.2 Proposals for supporting and organizing knowledge acquisition for the advisory system (p. 115)
7. Thoughts on further development (p. 116)
7.1 Extending the content and functionality of the advisory system (p. 116)
7.1.1 Supplementing the view description with further views (p. 116)
7.1.2 Other extension options (p. 117)
7.2 Integration options for the advisory system (p. 118)
8. Summary (p. 120)
Bibliography (p. 123)
Appendix 1: Examples of cross-phase computer support for design (p. 134)
Appendix 2: Contents of the core table "Prinzip" (p. 138)
Appendix 3: Concept hierarchy "verarbeitungstechnische Funktion" (p. 141)
Appendix 4: Concept hierarchy "Eigenschaftsänderung" (p. 144)
Appendix 5: Concept hierarchy "Verarbeitungsgut" (p. 149)
Appendix 6: Concept hierarchy "Verarbeitungstechnisches Prinzip" (p. 151)
Appendix 7: Implementation of a rearrangeable formula, using density calculation as an example (p. 158)
47

Representing and Reasoning on Conceptual Queries Over Image Databases

Rigotti, Christophe, Hacid, Mohand-Saïd 20 May 2022 (has links)
The problem of content management for multimedia data types (e.g., image, video, graphics) is becoming increasingly important with the development of advanced multimedia applications. Traditional database management systems are inadequate for handling such data types; they require new techniques for query formulation, retrieval, evaluation, and navigation. In this paper we develop a knowledge-based framework for modeling and retrieving image data by content. To represent the various aspects of an image object's characteristics, we propose a model which consists of three layers: (1) the Feature and Content Layer, intended to contain image visual features such as contours, shapes, etc.; (2) the Object Layer, which provides the (conceptual) content dimension of images; and (3) the Schema Layer, which contains the structured abstractions of images, i.e., a general schema about the classes of objects represented in the object layer (a sketch of this layering follows below). We propose two abstract languages on the basis of description logics: one for describing knowledge of the object and schema layers, and the other, more expressive, for making queries. Queries can refer to the form dimension (i.e., information of the Feature and Content Layer) or to the content dimension (i.e., information of the Object Layer). These languages employ a variable-free notation, and they are well suited for the design, verification and complexity analysis of algorithms. As the amount of information contained in the above layers may be huge and operations performed at the Feature and Content Layer are time-consuming, resorting to materialized views to process and optimize queries may be extremely useful. To this end, we propose a formal framework for testing containment of a query in a view expressed in our query language. The algorithm we propose is sound, complete and relatively efficient. / This is an extended version of the article in: Eleventh International Symposium on Methodologies for Intelligent Systems, Warsaw, Poland, 1999.
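The layering might be pictured as follows (a hypothetical sketch with invented feature and class names; the paper defines the layers via description-logic languages, not concrete data structures):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-layer image model described above.
# All names are invented for illustration.

@dataclass
class FeatureAndContent:          # layer (1): visual features of the raw image
    contours: list[str]
    shapes: list[str]

@dataclass
class ImageObject:                # layer (2): conceptual content of the image
    label: str                    # e.g. "lake" or "bridge"
    features: FeatureAndContent   # grounding in layer (1)

@dataclass
class SchemaClass:                # layer (3): general classes of depicted objects
    name: str
    subclass_of: list[str] = field(default_factory=list)

# A query over the content dimension would match ImageObject labels against
# SchemaClass hierarchies; a query over the form dimension would inspect
# the FeatureAndContent layer.
```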
48

Regionalportal Saxorum. Genese - Stand - Perspektiven: Von Sachsen.digital zu Saxorum

Munke, Martin 24 November 2022 (has links)
No description available.
49

LTCS-Report

Technische Universität Dresden 17 March 2022 (has links)
This series consists of technical reports produced by the members of the Chair for Automata Theory at TU Dresden. The purpose of these reports is to provide detailed information (e.g., formal proofs, worked out examples, experimental results, etc.) for articles published in conference proceedings with page limits. The topics of these reports lie in different areas of the overall research agenda of the chair, which includes Logic in Computer Science, symbolic AI, Knowledge Representation, Description Logics, Automated Deduction, and Automata Theory and its applications in the other fields.
50

Integrating Natural Language Processing (NLP) and Language Resources Using Linked Data

Hellmann, Sebastian 12 January 2015 (has links) (PDF)
This thesis is a compendium of scientific works and engineering specifications that have been contributed to a large community of stakeholders to be copied, adapted, mixed, built upon and exploited in any way possible to achieve a common goal: integrating Natural Language Processing (NLP) and language resources using Linked Data.

The explosion of information technology in the last two decades has led to a substantial growth in the quantity, diversity and complexity of web-accessible linguistic data. These resources become even more useful when linked with each other, and the last few years have seen the emergence of numerous approaches in various disciplines concerned with linguistic resources and NLP tools. It is the challenge of our time to store, interlink and exploit this wealth of data, accumulated in more than half a century of computational linguistics, of empirical, corpus-based study of language, and of computational lexicography in all its heterogeneity. The vision of the Giant Global Graph (GGG) was conceived by Tim Berners-Lee with the aim of connecting all data on the Web and allowing new relations to be discovered between these openly accessible data. This vision has been pursued by the Linked Open Data (LOD) community, whose cloud of published datasets comprises 295 data repositories and more than 30 billion RDF triples (as of September 2011). RDF is based on globally unique and accessible URIs, and it was specifically designed to establish links between such URIs (or resources). This is captured in the Linked Data paradigm, which postulates four rules: (1) referred entities should be designated by URIs, (2) these URIs should be resolvable over HTTP, (3) data should be represented by means of standards such as RDF, and (4) a resource should include links to other resources (a minimal code sketch of these rules follows below). Although it is difficult to precisely identify the reasons for the success of the LOD effort, advocates generally argue that open licenses as well as open access are key enablers for the growth of such a network, as they provide a strong incentive for collaboration and contribution by third parties. In his keynote at BNCOD 2011, Chris Bizer argued that with RDF the overall data integration effort can be “split between data publishers, third parties, and the data consumer”, a claim that can be substantiated by observing the evolution of many large data sets constituting the LOD cloud. As written in the acknowledgement section, parts of this thesis have received extensive feedback from other scientists, practitioners and industry in many different ways. The main contributions of this thesis are summarized here.

Part I – Introduction and Background. During his keynote at the Language Resource and Evaluation Conference in 2012, Sören Auer stressed the decentralized, collaborative, interlinked and interoperable nature of the Web of Data. The keynote provides strong evidence that Semantic Web technologies such as Linked Data are on their way to becoming mainstream for the representation of language resources. The jointly written companion publication for the keynote was later extended as a book chapter in The People’s Web Meets NLP and serves as the basis for “Introduction” and “Background”, outlining some stages of the Linked Data publication and refinement chain. Both chapters stress the importance of open licenses and open access as enablers for collaboration and the ability to interlink data on the Web as a key feature of RDF, and provide a discussion of scalability issues and decentralization.
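The four Linked Data rules above can be illustrated in a few lines of rdflib (a minimal sketch; the example.org URIs are placeholders, not resources from the thesis):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

# Minimal illustration of the four Linked Data rules using rdflib.
EX = Namespace("http://example.org/")   # rule 1: designate entities by URIs
g = Graph()
g.bind("foaf", FOAF)

alice = EX.alice                        # rule 2: the URI should resolve over HTTP
g.add((alice, RDF.type, FOAF.Person))   # rule 3: represent data in RDF
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.knows, URIRef("http://example.org/bob")))  # rule 4: link out

print(g.serialize(format="turtle"))
```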
Furthermore, we elaborate on how conceptual interoperability can be achieved by (1) re-using vocabularies, (2) agile ontology development, (3) meetings to refine and adapt ontologies and (4) tool support to enrich ontologies and match schemata.

Part II – Language Resources as Linked Data. “Linked Data in Linguistics” and “NLP & DBpedia, an Upward Knowledge Acquisition Spiral” summarize the results of the Linked Data in Linguistics (LDL) Workshop in 2012 and the NLP & DBpedia Workshop in 2013 and give a preview of the MLOD special issue. In total, five proceedings – three published at CEUR (OKCon 2011, WoLE 2012, NLP & DBpedia 2013), one Springer book (Linked Data in Linguistics, LDL 2012) and one journal special issue (Multilingual Linked Open Data, MLOD, to appear) – have been (co-)edited to create incentives for scientists to convert and publish Linked Data and thus to contribute open and/or linguistic data to the LOD cloud. Based on the disseminated call for papers, 152 authors contributed one or more accepted submissions to our venues, and 120 reviewers were involved in peer-reviewing. “DBpedia as a Multilingual Language Resource” and “Leveraging the Crowdsourcing of Lexical Resources for Bootstrapping a Linguistic Linked Data Cloud” contain this thesis’ contribution to the DBpedia Project, made in order to further increase the size and inter-linkage of the LOD cloud with lexical-semantic resources. Our contribution comprises data extracted from Wiktionary (an online, collaborative dictionary similar to Wikipedia) in more than four languages (now six) as well as language-specific versions of DBpedia, including a quality assessment of inter-language links between Wikipedia editions and internationalized content-negotiation rules for Linked Data. In particular, this work laid the foundation for a DBpedia Internationalisation Committee, with members from over 15 different languages, pursuing the common goal of establishing DBpedia as a free and open multilingual language resource.

Part III – The NLP Interchange Format (NIF). “NIF 2.0 Core Specification”, “NIF 2.0 Resources and Architecture” and “Evaluation and Related Work” constitute one of the main contributions of this thesis. The NLP Interchange Format (NIF) is an RDF/OWL-based format that aims to achieve interoperability between Natural Language Processing (NLP) tools, language resources and annotations. The core specification describes which URI schemes and RDF vocabularies must be used for (parts of) natural language texts and annotations in order to create an RDF/OWL-based interoperability layer, with NIF built upon Unicode code points in Normalization Form C (a toy annotation in this style is sketched after this paragraph). The classes and properties of the NIF Core Ontology are described to formally define the relations between texts, substrings and their URI schemes. The evaluation of NIF is also included: in a questionnaire, we put questions to 13 developers using NIF. UIMA, GATE and Stanbol are extensible NLP frameworks, and NIF was not yet able to provide off-the-shelf NLP domain ontologies for all possible domains, but only for the plugins used in this study. After inspecting the software, the developers agreed, however, that NIF is adequate to provide generic RDF output using literal objects for annotations. All developers were able to map their internal data structures to NIF URIs to serialize RDF output (adequacy).
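A NIF annotation might look roughly like this (a sketch assuming the offset-based URI scheme and property names from the public NIF 2.0 Core vocabulary; the document URI and text are invented):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Rough sketch of a NIF-style annotation: a context (the whole string) plus
# one annotated substring addressed by character offsets.
NIF = Namespace("http://persistence.uni-leipzig.de/nlp2rdf/ontologies/nif-core#")
DOC = Namespace("http://example.org/doc1#")
text = "Tim Berners-Lee conceived the Giant Global Graph."

g = Graph()
context = DOC["char=0,%d" % len(text)]
g.add((context, RDF.type, NIF.Context))
g.add((context, NIF.isString, Literal(text)))

mention = DOC["char=0,15"]              # the substring "Tim Berners-Lee"
g.add((mention, RDF.type, NIF.String))
g.add((mention, NIF.referenceContext, context))
g.add((mention, NIF.beginIndex, Literal(0, datatype=XSD.nonNegativeInteger)))
g.add((mention, NIF.endIndex, Literal(15, datatype=XSD.nonNegativeInteger)))
g.add((mention, NIF.anchorOf, Literal("Tim Berners-Lee")))

print(g.serialize(format="turtle"))
```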
The development effort in hours (between 3 and 40) as well as the number of code lines (between 110 and 445) suggest that implementing NIF wrappers is easy and fast for an average developer. Furthermore, the evaluation contains a comparison to other formats and an evaluation of the available URI schemes for web annotation. In order to collect input from the wide group of stakeholders, a total of 16 presentations were given, with extensive discussions and feedback, which led to a constant improvement of NIF from 2010 until 2013. After the release of NIF (version 1.0) in November 2011, a total of 32 vocabulary employments and implementations for different NLP tools and converters were reported (8 by the (co-)authors, including the Wiki-link corpus, 13 by people participating in our survey and 11 more of which we have heard). Several roll-out meetings and tutorials were held (e.g. in Leipzig and Prague in 2013) and more are planned (e.g. at LREC 2014).

Part IV – The NLP Interchange Format in Use. “Use Cases and Applications for NIF” and “Publication of Corpora using NIF” describe 8 concrete instances where NIF has been successfully used. One major contribution is the use of NIF as the recommended RDF mapping in the Internationalization Tag Set (ITS) 2.0 W3C standard, together with the conversion algorithms from ITS to NIF and back. The discussions in the standardization meetings and telephone conferences for ITS 2.0 led to the conclusion that there was no alternative RDF format or vocabulary other than NIF with the features required to fulfill the working-group charter. Five further uses of NIF are described for the Ontology of Linguistic Annotations (OLiA), the RDFaCE tool, the Tiger Corpus Navigator, the OntosFeeder and visualisations of NIF using the RelFinder tool. These 8 instances provide an implemented proof of concept of the features of NIF. “Publication of Corpora using NIF” starts by describing the conversion and hosting of the huge Google Wikilinks corpus, with 40 million annotations for 3 million web sites; the resulting RDF dump contains 477 million triples in a 5.6 GB compressed dump file in Turtle syntax. It then describes how NIF can be used to publish facts extracted from news feeds by the RDFLiveNews tool as Linked Data.

Part V – Conclusions. This part provides lessons learned for NIF, conclusions and an outlook on future work. Most of the contributions are already summarized above. One particular aspect worth mentioning is the increasing number of NIF-formatted corpora for Named Entity Recognition (NER) that have come into existence after the publication of the main NIF paper, Integrating NLP using Linked Data, at ISWC 2013. These include the corpora converted by Steinmetz, Knuth and Sack for the NLP & DBpedia workshop and an OpenNLP-based CoNLL converter by Brümmer. Furthermore, we are aware of three LREC 2014 submissions that leverage NIF: NIF4OGGD – NLP Interchange Format for Open German Governmental Data, N^3 – A Collection of Datasets for Named Entity Recognition and Disambiguation in the NLP Interchange Format, and Global Intelligent Content: Active Curation of Language Resources using Linked Data, as well as an early implementation of a GATE-based NER/NEL evaluation framework by Dojchinovski and Kliegr. Further funding for the maintenance, interlinking and publication of Linguistic Linked Data, as well as support and improvements of NIF, is available via the expiring LOD2 EU project and the CSA EU project LIDER, which started in November 2013. Based on the evidence of successful adoption presented in this thesis, we can expect a good chance that Linked Data technology, as well as the NIF standard, will reach critical mass in the field of Natural Language Processing and Language Resources.
