  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

#4 Crawling of text data with DDC and LCC references to generate a training data set for text classification: practical course report, Textmining – Wissensrohstoff Text

Schulz, Waiya, Halbauer, Mathias, Klähn, Jannis 15 June 2022 (has links)
The aim of our report is to evaluate data availability and to build a data set that could later be used for machine learning of library classifications. As the basis for the text data we use Wikidata entries, since some of them already carry such classifications and are directly linked to the corresponding Wikipedia article.
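A rough sketch of the kind of retrieval step this report describes (my own illustration, not the report's code): querying the public Wikidata SPARQL endpoint for items that carry a Dewey Decimal Classification and are linked to a German Wikipedia article. The property ID P1036 (DDC) is an assumption and should be verified against current Wikidata documentation.

```python
# Fetch Wikidata items with a DDC classification and their linked
# German Wikipedia article as candidate training examples.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?item ?ddc ?article WHERE {
  ?item wdt:P1036 ?ddc .                          # DDC classification (assumed P1036)
  ?article schema:about ?item ;
           schema:isPartOf <https://de.wikipedia.org/> .
}
LIMIT 100
"""

response = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"})
for row in response.json()["results"]["bindings"]:
    print(row["article"]["value"], row["ddc"]["value"])
```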
2

Automatic construction of infoboxes for Wikipedia

Sáez Binelli, Tomás Andrés January 2018 (has links)
Civil Engineer in Computer Science / Infoboxes are summary tables intended to briefly describe an entity by presenting its main characteristics clearly and in an established format. Unfortunately, these infoboxes are built manually by Wikipedia editors, which means that many articles in less common languages have no infobox at all or only a low-quality one. Using Wikidata as the source of information, the challenge of this work is to select and order properties and values by importance, in order to obtain a concise infobox with the information ranked by relevance. With this goal in mind, this work proposes one control strategy and four experimental strategies for building infoboxes automatically. During this work a Django API is implemented that receives a request indicating the entity, the language, and the strategy to use for generating the infobox. The response is a JSON document representing the generated infobox. A graphical interface is additionally built to allow quick use of this API and to facilitate a comparative evaluation of the different strategies. The comparative evaluation presents respondents with a list of 15 entities whose 5 infoboxes (one per strategy) have been precomputed and displayed side by side. Assigning a grade from 1 (lowest) to 7, 12 users rated each infobox, yielding a total of 728 ratings. The results indicate that the best-rated strategy combines the frequency of a property and the PageRank of its value as indicators of importance.
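A minimal sketch of the ranking idea behind the best-rated strategy (my own illustration, not the thesis code): order an entity's Wikidata statements by a precomputed property-frequency score combined with a PageRank score of the statement's value. The lookup tables and example IDs below are assumptions.

```python
# Rank an entity's claims for inclusion in an infobox by
# property frequency * PageRank of the claim's value.
def build_infobox(claims, property_frequency, value_pagerank, top_n=10):
    """claims: list of (property_id, value_id) pairs for one entity."""
    def score(claim):
        prop, value = claim
        return property_frequency.get(prop, 0.0) * value_pagerank.get(value, 0.0)
    ranked = sorted(claims, key=score, reverse=True)
    return ranked[:top_n]

# Hypothetical usage with made-up scores:
claims = [("P31", "Q5"), ("P106", "Q82955"), ("P569", "1950")]
freq = {"P31": 0.9, "P106": 0.7, "P569": 0.8}
pr = {"Q5": 0.95, "Q82955": 0.4, "1950": 0.1}
print(build_infobox(claims, freq, pr, top_n=2))
```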
3

A faceted browsing interface for diverse large-scale RDF datasets

Moreno Vega, José Ignacio January 2018 (has links)
Master of Science, Computer Science / RDF knowledge bases contain information about millions of resources, which are queried using SPARQL, the standard query language for RDF. However, this information is not easily accessible, because it requires knowing SPARQL and the structure of the data being queried; requirements that a typical internet user does not meet. A faceted browsing interface for these large datasets is proposed that requires no prior knowledge of the data structure or of SPARQL. Faceted browsing consists of adding filters (known as facets) so that only the items satisfying them are shown. Existing faceted browsing interfaces for RDF do not scale well to current knowledge bases. A new system is proposed that builds indexes for easy and fast searches over the data, allowing facets to be computed and suggested to the user. To validate the scalability and efficiency of the system, Wikidata was chosen as the large dataset for the performance experiments. A user study was then carried out to evaluate the usability and interaction of the system; the results show in which aspects the system performs well and which can be improved. A final prototype, together with a questionnaire, was sent to Wikidata contributors to discover how this system can help the community.
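An illustrative sketch of the core idea behind faceted browsing over RDF (my own simplification, not the thesis system, which additionally relies on precomputed indexes): each selected facet becomes a triple pattern, and the combined SPARQL query returns only the entities satisfying all filters. Function names and example IRIs below are assumptions.

```python
# Translate a user's facet selections into a SPARQL query string.
def facets_to_sparql(facets, limit=50):
    """facets: list of (property_iri, value_iri) pairs chosen by the user."""
    patterns = "\n  ".join(
        f"?entity <{prop}> <{value}> ." for prop, value in facets
    )
    return f"SELECT DISTINCT ?entity WHERE {{\n  {patterns}\n}}\nLIMIT {limit}"

# Hypothetical usage: instances of "film" (Q11424) with a given director (P57);
# the director's entity ID would be filled in by the user's selection.
query = facets_to_sparql([
    ("http://www.wikidata.org/prop/direct/P31", "http://www.wikidata.org/entity/Q11424"),
    ("http://www.wikidata.org/prop/direct/P57", "http://www.wikidata.org/entity/Q00000"),
])
print(query)
```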
4

Lexeme Extraction for Wikidata: A proof-of-concept study for Swedish lexeme extraction

Samzelius, Simon January 2020 (has links)
Wikipedia has a problem with organizing and managing data as well as references. As a solution, Wikidata was created to make it possible for machines to interpret these data, with the help of lexemes. A lexeme is an abstract lexical unit consisting of a word's lemmas and its word class. The aim of this paper is to present one possible way to provide Swedish lexeme data to Wikidata. This was implemented in two phases: the first phase identified the lemmas and their word classes; the second phase processed these words to create coherent lexemes. The developed model was able to process large amounts of words from the data source but barely succeeded in generating coherent lexemes. Although the lexemes were supposed to provide machines with an efficient means of understanding the data, the obtained results lead to the conclusion that the developed model did not achieve the anticipated results, owing to the number of words found relative to the number of words processed. A way therefore needs to be found to import lexeme data into Wikidata from another data source.
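A minimal sketch of the two-phase idea described above (my own illustration, not the thesis implementation): phase one yields (surface form, lemma, word class) triples; phase two groups them into lexeme records loosely modelled on Wikidata's lexeme model (lemma plus lexical category plus forms). Field names are assumptions.

```python
# Group tagged word forms into lexeme-like records.
from collections import defaultdict

def build_lexemes(tagged_words, language="sv"):
    """tagged_words: iterable of (surface_form, lemma, pos) triples."""
    grouped = defaultdict(set)
    for form, lemma, pos in tagged_words:
        grouped[(lemma, pos)].add(form)
    return [
        {"language": language, "lemma": lemma, "lexicalCategory": pos,
         "forms": sorted(forms)}
        for (lemma, pos), forms in grouped.items()
    ]

# Hypothetical usage with two inflected Swedish forms of "bil" (car):
print(build_lexemes([("bilar", "bil", "noun"), ("bilen", "bil", "noun")]))
```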
5

Development of a semantic data collection tool: The Wikidata Project as a step towards the semantic web

Ubah, Ifeanyichukwu January 2013 (has links)
The World Wide Web contains a vast amount of information. This makes it a very useful part of our everyday activities, but the information it contains is an exponentially growing repository of semantically unstructured data. The semantic web movement involves the evolution of the existing World Wide Web to enable computers to make sense of and understand the data they process, and consequently to increase their processing capabilities. Over the past decade a number of new projects implementing semantic web technology have been developed, albeit still in their infancy. These projects are based on semantic data models, and one such project is the Wikidata project. The Wikidata project aims to provide a more semantic platform for editing and sharing data throughout the Wikipedia and Wikimedia communities. This project studies how the Wikidata project facilitates such a semantic platform for the Wikimedia communities and includes the development of an application utilizing the semantic capabilities of Wikidata. The objective is to develop an application capable of retrieving and presenting statistical data and of making missing or invalid data on Wikidata detectable. The result is an application currently aimed at researchers and students who require a convenient tool for statistical data collection and data mining projects. Usability and performance tests of the application are also conducted, with the results presented in the report. Keywords: Semantic web, World Wide Web, Semantic data model, Wikidata, data mining.
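An illustrative sketch of the kind of retrieval and gap detection described above (my own example, not the thesis application): fetching a statistical claim for a Wikidata item via the public API and flagging it when absent. The choice of P1082 (population) and the item ID are assumptions.

```python
# Retrieve claim values for one property of one item and report if missing.
import requests

API = "https://www.wikidata.org/w/api.php"

def get_claim_values(item_id, property_id):
    params = {"action": "wbgetclaims", "entity": item_id,
              "property": property_id, "format": "json"}
    claims = requests.get(API, params=params).json().get("claims", {})
    values = []
    for claim in claims.get(property_id, []):
        snak = claim["mainsnak"]
        if snak["snaktype"] == "value":
            values.append(snak["datavalue"]["value"])
    return values

populations = get_claim_values("Q1754", "P1082")   # Q1754: Stockholm (assumed)
print(populations if populations else "missing or invalid data")
```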
6

Linked Data for image repositories

Erlinger, Christian, Bemme, Jens 27 May 2022 (has links)
No description available.
7

Creating a corpus annotated with named entities using Wikipedia and WikiData: poor results and potential

Pagès, Lucas 04 1900 (has links)
This master's thesis explores the joint use of WikiData and Wikipedia to build an annotated named-entity (NER) corpus: DataNER. It follows papers that used the knowledge bases DBpedia and Freebase and attempts to replace them with WikiData, a collaborative knowledge base whose continuous growth is guaranteed by an active community. Unfortunately, the results of the process described in this thesis did not reach our initial expectations. This document first describes the way in which we build DataNER. The use of Wikipedia anchors enables us to identify a significant quantity of named entities in the resource, and the NECKAr toolkit labels them with the classes LOC, PER, ORG and MISC using WikiData. We describe the details of the corpus-building process, including the way in which we infer additional named entities from Wikipedia and WikiData, as well as how we calibrate the construction of DataNER with all the information at our disposal. Secondly, we compare DataNER with other similar corpora using models trained on each of them, as well as manual comparisons. Those comparisons allow us to identify different reasons why the quality of DataNER does not match that of the other corpora. We conclude with ideas for improving the quality of DataNER, a more personal comment on the work accomplished, and an emphasis on the potential of using Wikipedia and WikiData to automatically create a corpus.
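A sketch of the labelling step described above (my own simplification, not the NECKAr implementation): map the Wikidata instance-of (P31) values of an anchored item to the coarse NER classes PER, LOC, ORG, MISC. The class QIDs listed are assumptions and far from exhaustive.

```python
# Map a set of Wikidata class QIDs (P31 values) to a coarse NER label.
PER_CLASSES = {"Q5"}                      # human
LOC_CLASSES = {"Q515", "Q6256"}           # city, country (assumed)
ORG_CLASSES = {"Q43229", "Q4830453"}      # organization, business (assumed)

def ner_label(instance_of_qids):
    """instance_of_qids: set of QIDs found as P31 values of the anchored item."""
    if instance_of_qids & PER_CLASSES:
        return "PER"
    if instance_of_qids & LOC_CLASSES:
        return "LOC"
    if instance_of_qids & ORG_CLASSES:
        return "ORG"
    return "MISC"

# Hypothetical usage: an anchor whose target item is an instance of "human".
print(ner_label({"Q5"}))   # -> "PER"
```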
8

Semantic Web Identity of academic organizations: search engine entity recognition and the sources that influence Knowledge Graph Cards in search results

Arlitsch, Kenning 11 January 2017 (has links)
Semantic Web Identity (SWI) characterizes an entity that has been recognized as such by search engines. The display of a Knowledge Graph Card in Google search results for an academic organization is proposed as an indicator of SWI, as it demonstrates that Google has gathered enough verifiable facts to establish the organization as an entity. This recognition may in turn improve the accuracy and relevancy of its referrals to that organization. This dissertation presents findings from an in-depth survey of the 125 member libraries of the Association of Research Libraries (ARL). The findings show that these academic libraries are poorly represented in the structured data records that are a crucial underpinning of the Semantic Web and a significant factor in achieving SWI. Lack of SWI extends to other academic organizations, particularly those at the lower hierarchical levels of academic institutions, including colleges, departments, centers, and research institutes. A lack of SWI may affect other factors of interest to academic organizations, including the ability to attract research funding, increase student enrollment, and improve institutional reputation and ranking. This study hypothesizes that the poor state of SWI is in part the result of a failure by these organizations to populate appropriate Linked Open Data (LOD) and proprietary Semantic Web knowledge bases.
The situation represents an opportunity for academic libraries to develop the skills and knowledge to establish and maintain their own SWI, and to offer SWI services to other academic organizations in their institutions. The research examines the current state of SWI for ARL libraries and some other academic organizations, and describes case studies that validate the effectiveness of proposed techniques to correct the situation. It also explains new services being developed at the Montana State University Library to address SWI needs on its campus, which could be adapted by other academic libraries.
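A minimal, hypothetical example of the kind of structured-data record discussed above: schema.org JSON-LD markup describing an academic library, which search engines can harvest as one verifiable source when building a Knowledge Graph Card. All names and URLs are placeholders, not drawn from the dissertation.

```python
# Emit a placeholder schema.org JSON-LD description of a library.
import json

library_markup = {
    "@context": "https://schema.org",
    "@type": "Library",
    "name": "Example University Library",
    "url": "https://www.lib.example.edu/",
    "parentOrganization": {
        "@type": "CollegeOrUniversity",
        "name": "Example University",
    },
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0",   # placeholder Wikidata item
    ],
}

print(json.dumps(library_markup, indent=2))
```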
