1 |
Efficient Extraction and Query Benchmarking of Wikipedia Data — Morsey, Mohamed, 06 January 2014 (PDF)
Knowledge bases are playing an increasingly important role in integrating information between systems and over the Web. Today, most knowledge bases cover only specific domains, they are created by relatively small groups of knowledge engineers, and it is very cost-intensive to keep them up-to-date as domains change. In parallel, Wikipedia has grown into one of the central knowledge sources of mankind and is maintained by thousands of contributors. The DBpedia (http://dbpedia.org) project makes use of this large, collaboratively edited knowledge source by extracting structured content from it, interlinking it with other knowledge bases, and making the result publicly available. DBpedia has had a great effect on the Web of Data and has become a crystallization point for it. Furthermore, many companies and researchers use DBpedia and its public services to improve their applications and research approaches.
However, the DBpedia release process is heavyweight and the releases are sometimes based on data that is several months old. Hence, a strategy to keep DBpedia in constant synchronization with Wikipedia is needed. In this thesis we propose the DBpedia Live framework, which reads a continuous stream of updated Wikipedia articles and processes it on-the-fly to obtain RDF data, updating the DBpedia knowledge base with the newly extracted facts. DBpedia Live also publishes the newly added/deleted facts in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Moreover, the new DBpedia Live framework incorporates several significant features, e.g. abstract extraction, handling of ontology changes, and changeset publication.
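A minimal sketch of the synchronization loop such a framework implies — polling the MediaWiki recent-changes feed and re-extracting the affected articles — is shown below; `extract_rdf` and the changeset file format are hypothetical placeholders for illustration, not the actual DBpedia Live implementation:

```python
import json
import time
import requests

WIKI_API = "https://en.wikipedia.org/w/api.php"

def recent_changes(since):
    """Fetch recent article edits from the MediaWiki recent-changes feed."""
    params = {
        "action": "query", "list": "recentchanges", "format": "json",
        "rcend": since,              # only changes newer than this timestamp
        "rctype": "edit|new", "rcnamespace": 0, "rclimit": 50,
    }
    resp = requests.get(WIKI_API, params=params, timeout=30)
    return resp.json()["query"]["recentchanges"]

def extract_rdf(title):
    """Hypothetical placeholder for the extraction framework: returns
    (added_triples, deleted_triples) for one re-extracted article."""
    raise NotImplementedError

def live_loop(since):
    while True:
        for change in recent_changes(since):
            added, deleted = extract_rdf(change["title"])
            # Publish the delta so mirrors can stay in sync with the endpoint.
            with open("changesets.log", "a") as out:
                out.write(json.dumps({"title": change["title"],
                                      "added": added,
                                      "deleted": deleted}) + "\n")
        time.sleep(60)               # poll roughly once a minute
```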
Knowledge bases, including DBpedia, are stored in triplestores in order to facilitate accessing and querying their data. Furthermore, triplestores constitute the backbone of increasingly many Data Web applications. It is thus evident that the performance of those stores is mission-critical for individual projects as well as for data integration on the Data Web in general.
Consequently, it is of central importance during the implementation of any of these applications to have a clear picture of the weaknesses and strengths of current triplestore implementations. We introduce a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational databases and triplestores and thus settled on measuring performance against a relational database that had been converted to RDF, using SQL-like queries. In contrast, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data that does not resemble a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering, and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful for comparing existing triplestores, and we provide results for the popular triplestore implementations Virtuoso, Sesame, Apache Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triplestores is far less homogeneous than previous benchmarks suggested.
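To illustrate the query-log mining and clustering the procedure describes, the sketch below derives a binary SPARQL-feature vector per logged query and keeps one representative query per cluster; the feature list and cluster count are illustrative assumptions, not the benchmark's exact configuration:

```python
import re
import numpy as np
from sklearn.cluster import KMeans

# Illustrative SPARQL features; the actual analysis covers more.
FEATURES = ["OPTIONAL", "FILTER", "UNION", "DISTINCT", "REGEX",
            "ORDER BY", "GROUP BY", "LIMIT"]

def feature_vector(query):
    """Mark which SPARQL constructs a logged query uses."""
    q = query.upper()
    return [1 if re.search(r"\b" + f + r"\b", q) else 0 for f in FEATURES]

def select_prototypes(log, k=25):
    """Cluster the query log and keep one representative per cluster."""
    X = np.array([feature_vector(q) for q in log])
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    prototypes = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # pick the member closest to the cluster centroid
        d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        prototypes.append(log[members[np.argmin(d)]])
    return prototypes
```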
Further, one of the crucial tasks when creating and maintaining knowledge bases is validating their facts and maintaining the quality of their inherent data. This task includes several subtasks, and in this thesis we address two major ones: fact validation and provenance, and data quality maintenance. The subtask of fact validation and provenance aims at providing sources for facts in order to ensure the correctness and traceability of the provided knowledge. This subtask is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents, and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming, as the experts have to carry out several search processes and must often read several documents. We present DeFacto (Deep Fact Validation), an algorithm for validating facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of webpages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. The subtask of data quality maintenance, on the other hand, aims at evaluating and continuously improving the quality of the data in knowledge bases. We present a methodology for assessing the quality of knowledge bases' data, which comprises a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises the evaluation of a large number of individual resources, according to the quality problem taxonomy, via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia.
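The three-step curation process that such fact validation automates can be sketched as follows; `web_search` stands in for any search-engine API, and the scoring is a deliberately crude trustworthiness proxy rather than DeFacto's actual model:

```python
def web_search(keywords, n=10):
    """Hypothetical search-engine API returning (url, text) pairs."""
    raise NotImplementedError

def confidence(fact, n=10):
    """Score a (subject, predicate, object) fact by the fraction of
    retrieved documents that mention both subject and object."""
    s, p, o = fact
    hits = web_search(f'"{s}" "{o}" {p}', n)
    evidence = [(url, text) for url, text in hits
                if s.lower() in text.lower() and o.lower() in text.lower()]
    # Return the excerpts too, so a curator can verify quickly.
    return len(evidence) / max(len(hits), 1), evidence

# e.g. score, proof = confidence(("Jamaica Inn", "author", "Daphne du Maurier"))
```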
|
2 |
KGScore-Open: Leveraging Knowledge Graph Semantics For Open-QA Evaluation — Hausman, Nicholas, 01 June 2024 (PDF)
Evaluating deployed Question Answering (QA) systems is difficult, because users ask questions outside of the original testing data and answer quality is hard to gauge without ground-truth responses. We propose KGScore-Open, a configurable system capable of scoring questions and answers in Open Domain Question Answering (Open-QA) without ground-truth answers, by leveraging DBpedia, a Knowledge Graph (KG) derived from Wikipedia, to score question-answer pairs. The system maps entities from questions and answers to DBpedia nodes, constructs a Knowledge Graph based on these entities, and calculates a relatedness score. Our system is validated on multiple datasets, achieving up to 83% accuracy in differentiating relevant from irrelevant answers in the Natural Questions dataset, 55% accuracy in classifying correct versus incorrect answers (hallucinations) in the TruthfulQA and HaluEval datasets, and 54% accuracy on the QA-Eval task using the EVOUNA dataset. The contributions of this work include a novel scoring system that indicates both relevancy and answer confidence in Open-QA without the need for ground-truth answers, demonstrated efficacy across various tasks, and an extendable framework applicable to different KGs for evaluating QA systems in other domains.
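A minimal sketch of the scoring idea — map surface forms to DBpedia resources, then measure graph relatedness via shared neighbours — follows; the Jaccard-style overlap is a simplifying assumption, not KGScore-Open's exact relatedness measure:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = SPARQLWrapper("https://dbpedia.org/sparql")

def neighbours(resource):
    """Return the set of resources directly linked to `resource`."""
    ENDPOINT.setQuery(f"""
        SELECT DISTINCT ?n WHERE {{
          {{ <{resource}> ?p ?n }} UNION {{ ?n ?p <{resource}> }}
          FILTER(isIRI(?n))
        }} LIMIT 500""")
    ENDPOINT.setReturnFormat(JSON)
    rows = ENDPOINT.query().convert()["results"]["bindings"]
    return {r["n"]["value"] for r in rows}

def relatedness(question_entities, answer_entities):
    """Jaccard overlap of the entities' combined neighbourhoods
    (an assumed stand-in for the system's relatedness score)."""
    q = set().union(*(neighbours(e) for e in question_entities))
    a = set().union(*(neighbours(e) for e in answer_entities))
    return len(q & a) / max(len(q | a), 1)
```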
|
3 |
Využití propojených dat na webu ke tvorbě strategické znalostní hry / The use of linked open data for strategic knowledge game creation — Turečková, Šárka, January 2015
The general theme of this thesis is the use of linked open data for game creation. Specifically, it addresses the use of DBpedia for the automatic generation of questions suitable for games. We propose suitable ways of selecting the desired objects from DBpedia and of obtaining and processing relevant information from them, including a method for estimating the renown of individual objects. Some of these methods are then applied to create a program that generates questions from data obtained through DBpedia while the application runs. The practical feasibility of using these DBpedia-generated questions for gaming purposes is subsequently demonstrated by the design, prototype, and tests of a strategic multiplayer knowledge game. The thesis also summarizes the major issues and possible complications of using data obtained through the DBpedia or DBpedia Live endpoints. Current challenges and opportunities for the mutual utilization of games and LOD are also briefly discussed.
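As an illustration of this kind of question generation, the sketch below turns DBpedia facts into quiz questions and approximates an object's renown by its number of incoming links; both the property choice and the renown proxy are assumptions made for illustration, not the thesis's own method:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)

def renown(resource):
    """Assumed renown proxy: count of incoming links to the resource."""
    sparql.setQuery(f"SELECT (COUNT(?s) AS ?c) WHERE {{ ?s ?p <{resource}> }}")
    rows = sparql.query().convert()["results"]["bindings"]
    return int(rows[0]["c"]["value"])

def capital_questions(min_renown=1000):
    """Generate 'capital of' questions, skipping obscure countries."""
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?country ?capital WHERE {
          ?country a dbo:Country ; dbo:capital ?capital .
        } LIMIT 100""")
    for row in sparql.query().convert()["results"]["bindings"]:
        country, capital = row["country"]["value"], row["capital"]["value"]
        if renown(country) >= min_renown:
            name = country.rsplit("/", 1)[-1].replace("_", " ")
            yield f"What is the capital of {name}?", capital
```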
|
4 |
Recherche exploratoire basée sur des données liées / Linked data based exploratory search — Marie, Nicolas, 12 December 2014
The general topic of this thesis is web search, focusing on how to leverage data semantics for exploratory search. Exploratory search refers to cognitively demanding search tasks that are open-ended, multi-faceted, and iterative, such as learning or topic investigation. Semantic data, and linked data in particular, offer new possibilities for solving complex search queries and information needs, including exploratory ones. In this context, the linked open data (LOD) cloud plays an important role by enabling advanced data processing and the elaboration of innovative interaction models. First, we present a state-of-the-art review of linked data based exploratory search approaches and systems. Then we propose a linked data based exploratory search solution built on an associative retrieval algorithm: starting from a spreading-activation algorithm, we propose a new diffusion formula optimized for typed graphs. From this formalization we derive additional formalizations of several advanced querying modes in order to address complex exploratory search needs. We also propose an innovative software architecture based on two paradigmatic design choices: first, results are computed at query time; second, data are consumed remotely from distributed SPARQL endpoints. This allows us to reach a high level of flexibility in terms of querying and data selection. The Discovery Hub web application implements these results and presents them in an interface optimized for exploration. We evaluate our approach through several user studies and open the discussion about new ways to evaluate exploratory search engines.
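A compact sketch of spreading activation over a typed graph, the retrieval core the thesis builds on, is given below; the decay constant and edge-type weighting are illustrative assumptions rather than the thesis's optimized diffusion formula:

```python
from collections import defaultdict

def spread(graph, seed, steps=3, decay=0.5, type_weight=None):
    """graph: node -> [(neighbour, edge_type), ...].
    Returns nodes ranked by activation after `steps` pulses from `seed`."""
    type_weight = type_weight or {}
    activation = defaultdict(float)
    activation[seed] = 1.0
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, a in activation.items():
            edges = graph.get(node, [])
            if not edges:
                continue
            share = a * decay / len(edges)      # fan-out normalization
            for neigh, etype in edges:
                nxt[neigh] += share * type_weight.get(etype, 1.0)
        for n, a in nxt.items():
            activation[n] += a
    activation.pop(seed, None)                  # rank neighbours, not the seed
    return sorted(activation.items(), key=lambda kv: -kv[1])

g = {"Mozart": [("Salzburg", "birthPlace"), ("Haydn", "influencedBy")],
     "Haydn": [("Vienna", "deathPlace")]}
print(spread(g, "Mozart"))
```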
|
5 |
Sémantická analýza webového obsahu / Semantic Analysis of Web Content — Hubl, Lukáš, January 2020
This work deals with the topics of the semantic web, web page segmentation, and the technologies used in this area. It modifies one web page segmentation method, DOM-based segmentation, using semantic web technologies: it designs a way of segmenting web pages based on semantic analysis of the individual elements of the page content. An application demonstrating the functionality of the designed segmentation method was also created as part of this work, and experiments were performed with it, whose results are also part of this work.
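One way to read the proposed method — segmenting the DOM by the semantic class of each element's text rather than by tags alone — is sketched below; `semantic_class` is a hypothetical annotator standing in for the semantic-web tooling the thesis uses:

```python
from bs4 import BeautifulSoup

def semantic_class(text):
    """Hypothetical element-level annotator (e.g. entity typing);
    returns a coarse label such as 'person', 'place' or 'other'."""
    raise NotImplementedError

def segment(html):
    """Group consecutive leaf blocks that share a semantic class."""
    soup = BeautifulSoup(html, "html.parser")
    blocks = [el.get_text(" ", strip=True)
              for el in soup.find_all(["p", "div", "li", "td"])
              if el.get_text(strip=True)]
    segments, current, label = [], [], None
    for text in blocks:
        cls = semantic_class(text)
        if cls != label and current:            # class change starts a segment
            segments.append((label, current))
            current = []
        current.append(text)
        label = cls
    if current:
        segments.append((label, current))
    return segments
```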
|
6 |
A framework for event classification in Tweets based on hybrid semantic enrichment / Um framework para classificação de eventos em tweets baseado em enriquecimento semântico híbrido — Romero, Simone Aparecida Pinto, January 2017
Social Media platforms have become key as a means of spreading information, opinions, or awareness about real-world events. Twitter stands out due to the huge volume of messages about all sorts of topics posted every day. Such messages are an important source of useful information about events, with many useful applications (e.g. the detection of breaking news, real-time awareness, updates about events). However, text classification on Twitter is by no means a trivial task that can be handled by conventional Natural Language Processing techniques. In addition, there is no consensus about which tasks make up Event Identification and Classification in tweets, since existing approaches often focus on specific types of events, based on specific assumptions, which makes it difficult to reproduce and compare these approaches on events of distinct natures. In this work, we aim at building a unifying framework that is suitable for the classification of events of distinct natures. The framework has the following key elements: a) external enrichment using related web pages to extend the conceptual features contained within the tweets; b) semantic enrichment using the Linked Open Data cloud to add related semantic features; and c) a pruning technique that selects the semantic features with discriminative potential. We evaluated our proposed framework in a broad experimental setting that includes: a) seven target events of different natures; b) different combinations of the proposed conceptual features (i.e. entities, vocabulary, and their combination); c) distinct feature extraction strategies (i.e. from tweet text and related web documents); d) different methods for selecting the discriminative semantic features (i.e. pruning, feature selection, and their combination); and e) two classification algorithms. We also compared the proposed framework against another kind of contextual enrichment based on word embeddings. The results showed the advantages of using the proposed framework, and that our solution is a feasible and generalizable method to support the classification of distinct event types.
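A sketch of the pruning idea — keeping only semantic features that discriminate between event classes — follows; using chi-squared scoring here is an assumption for illustration, as the thesis defines its own pruning criterion:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

def prune_semantic_features(docs, labels, k=200):
    """docs: tweets already enriched with semantic tokens
    (e.g. appended DBpedia types); keep the k most discriminative."""
    vec = CountVectorizer(token_pattern=r"\S+")
    X = vec.fit_transform(docs)
    selector = SelectKBest(chi2, k=min(k, X.shape[1])).fit(X, labels)
    names = vec.get_feature_names_out()
    return [names[i] for i in selector.get_support(indices=True)]

kept = prune_semantic_features(
    ["earthquake dbo:NaturalEvent city dbo:Place", "lunch pizza yum"],
    [1, 0])
```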
|
7 |
DBpedia Type and Entity Detection Using Word Embeddings and N-gram Models — Zhou, Hanqing, January 2018
Nowadays, knowledge bases are used more and more in Semantic Web tasks, such as knowledge acquisition (Hellmann et al., 2013), disambiguation (Garcia et al., 2009), and named entity corpus construction (Hahm et al., 2014), to name a few. DBpedia plays a central role on the linked open data cloud; therefore, the quality of this knowledge base is becoming a central point of focus. However, DBpedia suffers from three major types of quality problems: a) invalid types for entities, b) missing types for entities, and c) invalid entities in the resources' descriptions. In order to enhance the quality of DBpedia, it is important to detect these invalid types and resources, as well as to complete missing types.
The three main goals of this thesis are: a) invalid entity type detection in order to solve the problem of invalid DBpedia types for entities, b) automatic detection of the types of entities in order to solve the problem of missing DBpedia types for entities, and c) invalid entity detection in order to solve the problem of invalid entities in the resource description of a DBpedia entity. We compare several methods for the detection of invalid types, automatic typing of entities, and invalid entities detection in the resource descriptions. In particular, we compare different classification and clustering algorithms based on various sets of features: entity embedding features (Skip-gram and CBOW models) and traditional n-gram features. We present evaluation results for 358 DBpedia classes extracted from the DBpedia ontology.
The main contribution of this work consists of the development of automatic invalid type detection, automatic entity typing, and automatic invalid entity detection methods using clustering and classification. Our results show that entity embedding models usually perform better than n-gram models, especially the Skip-gram embedding model.
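The comparison of Skip-gram entity embeddings against n-gram features can be sketched as follows; the corpus format (entity-linked token sequences) and the classifier choice are assumptions made for illustration:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Sentences with entities as single tokens, e.g. from abstracts.
corpus = [["dbr:Berlin", "is", "the", "capital", "of", "dbr:Germany"],
          ["dbr:Albert_Einstein", "was", "a", "physicist"]]

# sg=1 selects the Skip-gram architecture (vs. CBOW with sg=0).
emb = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

def features(entity):
    return emb.wv[entity]        # the entity's embedding vector

# Train a per-class "is this type valid for the entity?" classifier.
X = np.stack([features("dbr:Berlin"), features("dbr:Albert_Einstein")])
y = [1, 0]                       # 1 = dbo:Place, 0 = not dbo:Place
clf = LogisticRegression(max_iter=1000).fit(X, y)
```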
|
8 |
Resúmenes semiautomáticos de conocimiento: caso de RDF / Semi-automatic knowledge summaries: the case of RDF — Garrido García, Camilo Fernando, January 2013
Ingeniero Civil en Computación / Nowadays, the amount of information generated in the world is immense. In science we have, for example, astronomical data with images of the stars, weather-forecast data, biological and genetic data, etc. This phenomenon is not restricted to science: a user browsing the Internet also produces large amounts of information, such as comments in forums, participation in social networks, or simply communication over the web.
Managing and analyzing this amount of information brings great problems and costs. Therefore, before performing an analysis, it is advisable to determine whether the available dataset is adequate for the intended purpose, or whether it covers the topics of interest. These questions could be answered if a summary of the dataset were available. This is the problem this thesis addresses: creating semi-automatic summaries of formalized knowledge.
In this thesis, a method for obtaining semi-automatic summaries of RDF datasets was designed and implemented. Given an RDF graph, the method obtains a set of nodes, of a size determined by the user, that represents and conveys the most important topics within the whole dataset. The method was designed on the basis of the datasets provided by DBpedia. Resources within the dataset are selected using two metrics widely used in other settings, betweenness centrality and degree, which detect the most important resources globally and locally.
The evaluation, which included both user studies and automatic evaluation, indicated that this work fulfills the goal of producing summaries that convey and represent the dataset. The tests also showed that the summaries achieve a good balance of general topics, popular topics, and the distribution with respect to the full dataset.
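A sketch of the selection step using the two named metrics on an RDF graph loaded with rdflib; the 50/50 blend of global and local importance is an illustrative assumption:

```python
import networkx as nx
from rdflib import Graph, URIRef

def summarize(rdf_file, size=10):
    """Pick the `size` most important nodes of an RDF graph by
    combining betweenness centrality (global) and degree (local)."""
    g = Graph().parse(rdf_file)
    nxg = nx.Graph()
    for s, p, o in g:
        if isinstance(s, URIRef) and isinstance(o, URIRef):
            nxg.add_edge(s, o)
    bc = nx.betweenness_centrality(nxg)        # global importance
    deg = dict(nxg.degree())                   # local importance
    max_deg = max(deg.values(), default=1)
    score = {n: 0.5 * bc[n] + 0.5 * deg[n] / max_deg for n in nxg}
    return sorted(nxg, key=score.get, reverse=True)[:size]
```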
|