11 |
Uma estratégia para seleção de atributos relevantes no processo de resolução de entidades. CANALLE, Gabrielle Karine, 22 August 2016
Data integration is an essential process for obtaining a unified view of data stored in autonomous, heterogeneous and distributed data sources. A crucial step in this process is Entity Resolution, which consists of identifying instances that refer to the same real-world entity. Entity Resolution is subdivided into several phases, including a phase in which pairs of instances are compared. In this phase, functions are used to evaluate the similarity between the values of the attributes that describe the instances. It is important to note that the quality of the result of the Entity Resolution process is directly affected by the set of attributes selected for the instance comparison phase. However, selecting such attributes can be a great challenge, due to the large number of attributes that describe the instances or to the low relevance of some attributes to the Entity Resolution process. Some works in the literature address this problem, and most of the proposed approaches rely on machine learning to select attributes. However, besides requiring a training set, whose definition is a difficult task, especially in large-volume data scenarios, machine learning is a costly process. In this context, this work proposes a strategy for selecting the relevant attributes to be considered in the instance comparison phase of the Entity Resolution process. The proposed strategy considers criteria related to the data, such as the density and repetition of values of each attribute, and criteria related to the sources, such as reliability, to evaluate the relevance of an attribute for the instance comparison phase. An attribute is considered relevant if it contributes positively to the identification of true matches, and irrelevant if it contributes to the identification of incorrect matches (false positives and false negatives). In the experiments performed, the proposed strategy achieved good results in the instance comparison phase of the Entity Resolution process; that is, the attributes classified as relevant were those that contributed to finding the largest number of true matches with the smallest number of incorrect matches.
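As an illustration of how such data- and source-related criteria can be combined, the sketch below ranks attributes by a weighted mix of value density, value distinctness (the inverse of repetition) and a source-reliability score. The weights, the linear combination and the toy records are assumptions made for the example, not the formula used in the dissertation.

```python
# Minimal sketch: rank attributes for the instance-comparison phase by
# combining data-related criteria (density, value repetition) with a
# source-related criterion (reliability). Weights and formula are illustrative.

def attribute_scores(records, source_reliability, w_density=0.4, w_distinct=0.4, w_reliab=0.2):
    attributes = {a for r in records for a in r}
    scores = {}
    for attr in attributes:
        values = [r[attr] for r in records if r.get(attr) not in (None, "")]
        density = len(values) / len(records)                              # share of non-missing values
        distinctness = len(set(values)) / len(values) if values else 0.0  # low repetition -> high score
        scores[attr] = (w_density * density
                        + w_distinct * distinctness
                        + w_reliab * source_reliability)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

records = [
    {"name": "Ana Silva", "city": "Recife", "gender": "F"},
    {"name": "A. Silva",  "city": "Recife", "gender": "F"},
    {"name": "Bruno Melo", "city": "",      "gender": "M"},
]
print(attribute_scores(records, source_reliability=0.9))
# 'name' ranks above 'gender', which is dense but highly repetitive.
```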
|
12 |
Advanced Methods for Entity Linking in the Life Sciences. Christen, Victor, 25 January 2021
The amount of available knowledge increases rapidly due to the growing number of data sources. However, the autonomy of these data sources and the resulting heterogeneity hinder comprehensive data analysis and applications.
Data integration aims to overcome heterogeneity by unifying different data sources and enriching unstructured data. The enrichment of data consists of different subtasks, among them the annotation process, which links document phrases to terms of a standardized vocabulary. Annotated documents enable effective retrieval methods, comparability of different documents, and comprehensive data analysis, such as finding adverse drug effects based on patient data.
A vocabulary enables comparability through standardized terms. An ontology can also serve as a vocabulary, but it is additionally defined by concepts, relationships, and logical constraints. The annotation process is applicable in different domains; nevertheless, generic and specialized domains pose different requirements for annotation. This thesis emphasizes the differences between the domains and addresses the identified challenges. The majority of annotation approaches focuses on the evaluation of general domains, such as Wikipedia. This thesis evaluates the developed annotation approaches on case report forms, which are medical documents used to examine clinical trials. Natural language poses several challenges, such as expressing similar meanings with different phrases. The proposed annotation method, AnnoMap, accounts for this fuzziness of natural language. A further challenge is the reuse of verified annotations: existing annotations represent knowledge that can be reused for further annotation processes, and AnnoMap includes a reuse strategy that utilizes verified annotations to link new documents to appropriate concepts. Due to the broad spectrum of areas in the biomedical domain, different annotation tools exist, and they perform differently depending on the particular domain. This thesis therefore proposes a combination approach to unify results from different tools. The method utilizes existing tool results to build a classification model that classifies new annotations as correct or incorrect.
The results show that the reuse and the machine learning-based combination improve the annotation quality compared to existing approaches focussing on the biomedical domain.
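As a rough illustration of the kind of fuzzy matching an annotation step performs, the sketch below links free-text phrases to the closest term of a small standardized vocabulary using a character-level similarity. The vocabulary entries, threshold and similarity choice are assumptions for the example; AnnoMap's actual matching, reuse strategy and tool combination are considerably more elaborate.

```python
# Minimal sketch of fuzzy annotation: link document phrases to the closest
# term of a standardized vocabulary using string similarity. Illustrative only;
# the vocabulary entries below are made up for the example.
from difflib import SequenceMatcher

vocabulary = {
    "C0020538": "hypertension",
    "C0011849": "diabetes mellitus",
    "C0004096": "asthma",
}

def annotate(phrase, threshold=0.7):
    best_id, best_score = None, 0.0
    for concept_id, term in vocabulary.items():
        score = SequenceMatcher(None, phrase.lower(), term.lower()).ratio()
        if score > best_score:
            best_id, best_score = concept_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

for phrase in ["Hypertention", "diabetes", "knee pain"]:
    print(phrase, "->", annotate(phrase))
```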
A further part of data integration is entity resolution, which builds unified knowledge bases from different data sources. A data source consists of a set of records characterized by attributes, and the goal of entity resolution is to identify records representing the same real-world entity. Many methods focus on linking data sources whose records are characterized only by attributes; only a few methods can handle graph-structured knowledge bases or consider temporal aspects. Temporal aspects are essential for identifying the same entities over different time intervals, since entity descriptions are subject to change over time. Moreover, records can be related to other records, so that a small graph structure exists for each record. These small graphs can be linked to each other if they represent the same entity. This thesis proposes an entity resolution approach for census data consisting of person records for different time intervals. The approach also considers the graph structure of persons given by family relationships.
For achieving high-quality results, current methods apply machine-learning techniques to classify record pairs as referring to the same entity or not. The classification relies on a model generated from training data; in this case, the training data is a set of record pairs labeled as duplicates or non-duplicates. Nevertheless, generating training data is a time-consuming task, so active learning techniques are relevant for reducing the number of required training examples.
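The following sketch shows the generic pool-based uncertainty-sampling loop that underlies this idea: the pair the current classifier is least sure about is sent to the oracle for labeling. The synthetic features, the classifier and the budget are assumptions for the example and do not reproduce the thesis's limited-budget strategy.

```python
# Minimal sketch of pool-based active learning for record-pair classification:
# repeatedly label the pair the current model is most uncertain about.
# Data and features are synthetic; this is not the thesis's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.random((200, 3))                     # similarity features per candidate pair
y_true = (X_pool.mean(axis=1) > 0.5).astype(int)  # hidden "duplicate" labels (oracle)

labeled = [int(np.argmax(y_true)), int(np.argmin(y_true))]  # seed with one pair of each class
for _ in range(20):                               # labeling budget of 20 pairs
    model = LogisticRegression().fit(X_pool[labeled], y_true[labeled])
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)
    uncertainty[labeled] = np.inf                 # never re-query labeled pairs
    labeled.append(int(np.argmin(uncertainty)))   # ask the oracle for the most uncertain pair

print("accuracy on pool:", model.score(X_pool, y_true))
```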
The entity resolution method for temporal graph-structured data shows an improvement compared to previous collective entity resolution approaches. The developed active learning approach achieves comparable results to supervised learning methods and outperforms other limited budget active learning methods.
Besides the entity resolution approach, the thesis introduces the concept of evolution operators for communities. These operators can express the dynamics of communities and individuals. For instance, we can formulate that two communities merged or split over time. Moreover, the operators allow observing the history of individuals.
Overall, the presented annotation approaches generate high-quality annotations for medical forms. The annotations enable comprehensive analysis across different data sources as well as accurate queries. The proposed entity resolution approaches improve on existing ones and thus contribute to the generation of high-quality knowledge graphs and to data analysis tasks.
|
13 |
Scalable Sparse Bayesian Nonparametric and Matrix Tri-factorization Models for Text Mining Applications. Ranganath, B N, January 2017
Hierarchical Bayesian models and matrix factorization methods provide an unsupervised way to learn latent components of data from grouped or sequence data. For example, in document data, a latent component corresponds to a topic, with each topic being a distribution over a vocabulary of words. For many applications, there exist sparse relationships between the domain entities and the latent components of the data. Traditional approaches for topic modelling do not take these sparsity considerations into account. Modelling these sparse relationships helps in extracting relevant information, leading to improvements in topic accuracy and to scalable solutions. In our thesis, we explore these sparsity relationships for different applications, such as text segmentation, topical analysis and entity resolution in dyadic data, through Bayesian and matrix tri-factorization approaches, proposing scalable solutions.
In our first work, we address the problem of segmenting a collection of sequence data, such as documents, using probabilistic models. Existing state-of-the-art Hierarchical Bayesian Models are connected to the notion of Complete Exchangeability or Markov Exchangeability. Bayesian Nonparametric models based on the notion of Markov Exchangeability, such as HDP-HMM and Sticky HDP-HMM, allow very restricted permutations of latent variables in grouped data (topics in documents), which in turn lead to computational challenges for inference. At the other extreme, models based on Complete Exchangeability, such as HDP, allow arbitrary permutations within each group or document, and inference is significantly more tractable as a result, but segmentation is not meaningful using such models. To overcome these problems, we explored a new notion of exchangeability called Block Exchangeability that lies between Markov Exchangeability and Complete Exchangeability, for which segmentation is meaningful but inference is computationally less expensive than for both Markov and Complete Exchangeability. Parametrically, Block Exchangeability involves a sparser set of transition parameters, linear in the number of states, compared to the quadratic order for Markov Exchangeability, which is still less than that for Complete Exchangeability, for which the parameters are on the order of the number of documents. For this, we propose a nonparametric Block Exchangeable Model (BEM) based on the new notion of Block Exchangeability, which we have shown to be a superclass of Complete Exchangeability and a subclass of Markov Exchangeability. We propose a scalable inference algorithm for BEM to infer the topics for words and the segment boundaries associated with topics for a document, using a collapsed Gibbs sampling procedure. Empirical results show that BEM outperforms state-of-the-art nonparametric models in terms of scalability and generalization ability, and shows nearly the same segmentation quality on a news dataset, a product review dataset and a synthetic dataset. Interestingly, we can tune the scalability by varying the block size through a parameter in our model, for a small trade-off in segmentation quality.
In addition to exploring the association between documents and words, we also explore the sparse relationships for dyadic data, where associations between one pair of domain entities, such as (documents, words), and associations between another pair, such as (documents, users), are completely observed. We motivate the analysis of such dyadic data by introducing an additional discrete dimension, which we call topics, and explore sparse relationships between the domain entities and the topics, such as user-topic and document-topic relationships, respectively.
In our second work, for this problem of sparse topical analysis of dyadic data, we propose a formulation using sparse matrix tri-factorization. This formulation requires sparsity constraints, not only on the individual factor matrices, but also on the product of two of the factors. To the best of our knowledge, this problem of sparse matrix tri-factorization has not been studied before. We propose a solution that introduces a surrogate for the product of factors and enforces sparsity on this surrogate as well as on the individual factors through L1-regularization. The resulting optimization problem is efficiently solvable in an alternating minimization framework over sub-problems involving individual factors, using the well-known FISTA algorithm. For the sub-problems that are constrained, we use a projected variant of the FISTA algorithm. We also show that our formulation leads to independent sub-problems when solving for a factor matrix, thereby supporting a parallel implementation and leading to a scalable solution. We perform experiments over bibliographic and product review data to show that the proposed framework based on the sparse tri-factorization formulation results in better generalization ability and factorization accuracy compared to baselines that use sparse bi-factorization.
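For a concrete picture of the solver used for the individual factors, the sketch below applies FISTA to a single L1-regularized least-squares subproblem, with the usual soft-thresholding proximal step and momentum update. It is a generic illustration on a vector, not the thesis's tri-factorization code, and the problem sizes are arbitrary.

```python
# Minimal sketch of FISTA for an L1-regularized least-squares subproblem,
# the kind of solver an alternating-minimization scheme applies to each factor.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, iters=200):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = x_prev = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x = soft_threshold(y - grad / L, lam / L)    # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # momentum step
        x_prev, t = x, t_next
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]  # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(fista(A, b, lam=0.5), 2))              # recovers a sparse solution
```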
Even though the second work performs sparse topical analysis for dyadic data, finding sparse topical associations for the users, user references with different names could belong to the same entity, and those with the same name could belong to different entities. The problem of entity resolution is widely studied in the research community, where the goal is to identify the real users associated with the user references in the documents.
Finally, we focus on the problem of entity resolution in dyadic data, where associations between one pair of domain entities, such as documents-words, and associations between another pair, such as documents-users, are observed, an example of which is bibliographic data. In our final work, for this problem of entity resolution in bibliographic data, we propose a Bayesian nonparametric Sparse Entity Resolution Model (SERM), exploring the sparse relationships between the grouped data, involving grouping of the documents, and the topics/author entities in the group. Further, we also exploit the sparseness between an author entity and the associated author aliases. Grouping of the documents is achieved with the stick-breaking prior for the Dirichlet process (DP). To achieve sparseness, we propose a solution that introduces separate Indian Buffet Process (IBP) priors over the topics and the author entities for the groups, and a k-NN mechanism for selecting author aliases for the author entities. We propose a scalable inference scheme for SERM by appropriately combining the partially collapsed Gibbs sampling scheme of the Focused Topic Model (FTM), the inference scheme used for the parametric IBP prior, and the k-NN mechanism. We perform experiments over the bibliographic datasets Citeseer and Rexa to show that the proposed SERM model improves the accuracy of entity resolution by finding relevant author entities through modelling sparse relationships, and is scalable when compared to the state-of-the-art baseline.
|
14 |
Semi-automated co-reference identification in digital humanities collections. Croft, David, January 2014
Locating specific information within museum collections represents a significant challenge for collection users. Even when the collections and catalogues exist in a searchable digital format, formatting differences and the imprecise nature of the information to be searched mean that information can be recorded in a large number of different ways. This variation exists not just between different collections, but also within individual ones. This means that traditional information retrieval techniques are badly suited to the challenges of locating particular information in digital humanities collections, and searching therefore takes an excessive amount of time and resources. This thesis focuses on a particular search problem, that of co-reference identification. This is the process of identifying when the same real-world item is recorded in multiple digital locations. In this thesis, a real-world example of a co-reference identification problem for digital humanities collections is identified and explored, in particular the time-consuming nature of identifying co-referent records. In order to address the identified problem, this thesis presents a novel method for co-reference identification between digitised records in humanities collections. Whilst the specific focus of this thesis is co-reference identification, elements of the method described also have applications for general information retrieval. The new co-reference method uses elements from a broad range of areas including query expansion, co-reference identification, short-text semantic similarity and fuzzy logic. The new method was tested against real-world collections information, the results of which suggest that, in terms of the quality of the co-referent matches found, the new co-reference identification method is at least as effective as a manual search. The number of co-referent matches found, however, is higher using the new method. The approach presented here is capable of searching collections stored using differing metadata schemas. More significantly, the approach is capable of identifying potential co-reference matches despite the highly heterogeneous and syntax-independent nature of the Gallery, Library, Archive and Museum (GLAM) search space and the photo-history domain in particular. The most significant benefit of the new method is, however, that it requires comparatively little manual intervention. A co-reference search using it therefore has significantly lower person-hour requirements than a manually conducted search. In addition to the overall co-reference identification method, this thesis also presents:
• A novel and computationally lightweight short-text semantic similarity metric. This new metric has a significantly higher throughput than the current prominent techniques but a negligible drop in accuracy.
• A novel method for comparing photographic processes in the presence of variable terminology and inaccurate field information. This is the first computational approach to do so.
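As an illustration of what a computationally lightweight short-text similarity can look like, the sketch below averages the best fuzzy match of each token against the other text. It is only a simple stand-in for the idea; it is not the metric proposed in the thesis.

```python
# Minimal sketch of a lightweight short-text similarity based on token overlap
# with fuzzy token matching. Illustrative only.
from difflib import SequenceMatcher

def token_sim(a, b):
    return SequenceMatcher(None, a, b).ratio()

def short_text_similarity(s1, s2):
    t1, t2 = s1.lower().split(), s2.lower().split()
    if not t1 or not t2:
        return 0.0
    # For each token, keep its best fuzzy match in the other text, then average.
    best1 = [max(token_sim(a, b) for b in t2) for a in t1]
    best2 = [max(token_sim(b, a) for a in t1) for b in t2]
    return (sum(best1) + sum(best2)) / (len(best1) + len(best2))

print(short_text_similarity("albumen print portrait", "portrait, albumen process print"))
print(short_text_similarity("daguerreotype", "gelatin silver print"))
```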
|
15 |
Pareamento privado de atributos no contexto da resolução de entidades com preservação de privacidade. NÓBREGA, Thiago Pereira da, 10 September 2018
Privacy-Preserving Record Linkage (PPRL) aims to identify entities (e.g., patients) stored in different databases that correspond to the same real-world object, where the entities hold private data that cannot be disclosed. It is crucial that the PPRL task be executed without revealing any information about the entities to the participants (the database owners), so that data privacy is preserved. At the end of a PPRL task, each participant identifies which entities in its database are present in the databases of the other participants. Before starting the PPRL task, the participants must agree on the (common) entity to be considered and on the entity attributes to be used to compare the entities. In general, this agreement requires the participants to expose their database schemas, sharing (meta-)information that can be used to break data privacy. This work proposes a semiautomatic approach for identifying similar attributes (attribute pairing) to be used to compare entities during PPRL. The approach is inserted as a preliminary step of PPRL (the Handshake step), and its result (similar attributes) can be used by the subsequent steps (Blocking and Comparison). In the proposed approach, the participants generate privacy-preserving representations of the attribute values (Data Signatures), which are sent to a trusted third party to identify similar attributes from the different data sources, eliminating the need to share information about their schemas and thus improving the security and privacy of the PPRL task. The evaluation of the approach shows that the quality of the attribute pairing is equivalent to that of a solution that does not consider data privacy, and that the approach is able to preserve data privacy.
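A minimal sketch of the overall idea is given below: each participant locally encodes samples of its attribute values into bit-vector signatures (a Bloom-filter-style encoding is used here purely as a common PPRL stand-in), and only the signatures are compared, for instance with the Dice coefficient, to pair similar attributes. The encoding parameters and toy data are assumptions for the example and do not reproduce the Data Signatures defined in the dissertation.

```python
# Minimal sketch of privacy-preserving attribute pairing: only bit-vector
# signatures of attribute-value q-grams are shared and compared.
import hashlib

BITS, HASHES = 256, 3

def qgrams(value, q=2):
    v = f"_{value.lower()}_"
    return {v[i:i + q] for i in range(len(v) - q + 1)}

def signature(values):
    bits = [0] * BITS
    for value in values:
        for gram in qgrams(str(value)):
            for k in range(HASHES):
                h = hashlib.sha256(f"{k}:{gram}".encode()).digest()
                bits[int.from_bytes(h[:4], "big") % BITS] = 1
    return bits

def dice(sig_a, sig_b):
    inter = sum(a & b for a, b in zip(sig_a, sig_b))
    return 2 * inter / (sum(sig_a) + sum(sig_b))

# Each party builds signatures locally; only the signatures are shared.
party_a = {"nome": ["Maria Souza", "Joao Lima"], "cidade": ["Recife", "Natal"]}
party_b = {"name": ["Maria Sousa", "J. Lima"], "city": ["Recife", "Olinda"]}
for attr_a, vals_a in party_a.items():
    for attr_b, vals_b in party_b.items():
        print(attr_a, attr_b, round(dice(signature(vals_a), signature(vals_b)), 2))
```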
|
16 |
Découverte des relations dans les réseaux sociaux / Relationship discovery in social networks. Raad, Elie, 22 December 2011
In recent years, social network sites have exploded in popularity and become an important part of online activity on the web. This success is related to the various services/functionalities provided by each site (ranging from media sharing, tagging, and blogging to online social networking), pushing users to subscribe to several sites and consequently to create several social networks for different purposes and contexts (professional, private, etc.). Nevertheless, current tools and sites provide limited functionalities to organize and identify relationship types within and across social networks, which is required in several scenarios, such as enforcing users' privacy and enhancing targeted social content sharing. In particular, none of the existing social network sites provides a way to automatically identify relationship types while considering users' personal information and published data. In this work, we propose a new approach to identify relationship types among users within either a single social network or across several ones. We provide a user-oriented framework able to consider several features and shared data available in user profiles (e.g., name, age, interests, photos, etc.). This framework is built on a rule-based approach that operates at two levels of granularity: 1) within a single social network, to discover social relationships (colleagues, relatives, friends, etc.) by exploiting mainly photo features and their embedded metadata, and 2) across different social networks, to discover co-referent relationships (the same real-world person) by considering all profile attributes, weighted according to the user profile and the social network contents. At each level of granularity, we generate a set of basic and derived rules that are both used to discover relationship types. To generate basic rules, we propose two distinct methodologies. On one hand, basic rules for social relationships are generated from a photo dataset constructed using crowdsourcing. On the other hand, using all weighted attributes, basic rules for co-referent relationships are generated from the available pairs of profiles having the same values for unique identifier attributes. To generate the derived rules, we use a mining technique that takes into account the context of users, namely by identifying the frequently used valid basic rules for each user. We present our prototype, called RelTypeFinder, implemented to validate our approach. It allows us to discover different relationship types, generate synthetic datasets, collect web data and photos, and generate mining rules. We also describe the sets of experiments conducted on real-world and synthetic datasets. The evaluation results demonstrate the efficiency of the proposed relationship discovery approach.
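The sketch below illustrates the flavour of a basic co-referent rule across two networks: shared profile attributes are compared with a fuzzy measure, combined with weights, and the rule fires when the weighted score passes a threshold. The attribute set, weights and threshold are assumptions for the example, not rules generated by RelTypeFinder.

```python
# Minimal sketch of a weighted co-referent rule between two profiles.
from difflib import SequenceMatcher

WEIGHTS = {"name": 0.5, "city": 0.2, "interests": 0.3}

def sim(a, b):
    return SequenceMatcher(None, str(a).lower(), str(b).lower()).ratio()

def co_referent_score(profile_a, profile_b):
    total, score = 0.0, 0.0
    for attr, weight in WEIGHTS.items():
        if attr in profile_a and attr in profile_b:  # only attributes both profiles share
            total += weight
            score += weight * sim(profile_a[attr], profile_b[attr])
    return score / total if total else 0.0

p1 = {"name": "Elie Raad", "city": "Dijon", "interests": "photography, travel"}
p2 = {"name": "E. Raad", "city": "Dijon", "interests": "travel and photos"}
score = co_referent_score(p1, p2)
print(round(score, 2), "co-referent" if score >= 0.6 else "distinct")
```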
|
17 |
Crowdsourcing in pay-as-you-go data integration. Osorno Gutierrez, Fernando, January 2016
In pay-as-you-go data integration, feedback can inform the regeneration of different aspects of a data integration system and, as a result, helps to improve the system's quality. However, feedback could be expensive, as the amount of feedback required to annotate all the possible integration artefacts is potentially large in contexts where the budget can be limited. Also, feedback could be used in different ways: feedback of different types and in different orders could have different effects on the quality of the integration, and some feedback types could give rise to more benefit than others. There is a need to develop techniques to collect feedback effectively. Previous efforts have explored the benefit of feedback in one aspect of the integration, but these contributions have not considered the benefit of different feedback types in a single integration task. We have investigated the annotation of mapping results using crowdsourcing, and implemented techniques for reliability. The results indicate that precision estimates derived from crowdsourcing improve rapidly, suggesting that crowdsourcing can be used as a cost-effective source of feedback. We propose an approach to maximize the improvement of data integration systems given a budget for feedback. Our approach takes into account the annotation of schema matchings, mapping results and pairs of candidate record duplicates. We define a feedback plan, which indicates the type of feedback to collect, the amount of feedback to collect and the order in which different types of feedback are collected. We defined a fitness function and a genetic algorithm to search for the most cost-effective feedback plans. We implemented a framework to test the application of feedback plans and measure the improvement of different data integration systems. In the framework, we use a greedy algorithm for the selection of mappings. We designed quality measures to estimate the quality of a dataspace after the application of a feedback plan. For the evaluation of our approach, we propose a method to generate synthetic data scenarios, and we evaluate our approach in scenarios with different characteristics. The results showed that the generated feedback plans achieved higher quality values than randomly generated feedback plans in several scenarios.
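As an illustration of the search component, the sketch below encodes a feedback plan as an ordered list of (feedback type, amount) items and evolves a population of plans with a small genetic algorithm. The fitness function is a made-up stand-in for the dataspace quality estimate; the types, budget and operators are assumptions for the example.

```python
# Minimal sketch of searching for a cost-effective feedback plan with a
# genetic algorithm. Fitness is an illustrative stand-in, not the thesis's
# quality measures.
import random

TYPES = ["match_annotation", "mapping_result", "duplicate_pair"]
BUDGET, PLAN_LEN = 100, 4
random.seed(0)

def random_plan():
    return [(random.choice(TYPES), random.randint(0, BUDGET // PLAN_LEN))
            for _ in range(PLAN_LEN)]

def fitness(plan):
    # Illustrative only: assume mapping-result feedback and earlier feedback
    # help more, with diminishing returns for large amounts of one type.
    gain = {"match_annotation": 1.0, "mapping_result": 1.5, "duplicate_pair": 1.2}
    return sum(gain[t] * (a ** 0.5) / (pos + 1) for pos, (t, a) in enumerate(plan))

def crossover(p1, p2):
    cut = random.randrange(1, PLAN_LEN)
    return p1[:cut] + p2[cut:]

def mutate(plan):
    i = random.randrange(PLAN_LEN)
    plan[i] = (random.choice(TYPES), random.randint(0, BUDGET // PLAN_LEN))
    return plan

population = [random_plan() for _ in range(30)]
for _ in range(50):                                  # evolve for 50 generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                        # elitist selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(20)]
    population = parents + children
print(max(population, key=fitness))
```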
|
18 |
Implementierung eines File Managers für das Hadoop Distributed Filesystem und Realisierung einer MapReduce Workflow Submission-Komponente. Fischer, Axel, 02 February 2018
This bachelor's thesis describes the development of a file manager for the Hadoop Distributed File System (HDFS) in the context of the development of the Dedoop prototype. The file manager covers the use cases refresh, rename, move and delete. In addition, it allows uploads from, and downloads to, the user's local file system. Particular attention had to be paid to the special requirements of multi-user operation. Furthermore, the thesis describes the development of a MapReduce workflow submission component for Dedoop, which is responsible for transferring and executing the workflows created by the user. Here, too, the requirements of multi-user and multi-cluster operation had to be taken into account.
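The sketch below illustrates the file-manager use cases (list/refresh, rename/move, delete, upload, download) as a thin Python wrapper around the standard hdfs dfs shell commands. It is only an illustration of the operations involved; it is not the Java component developed for Dedoop.

```python
# Minimal sketch of HDFS file-manager operations via the standard `hdfs dfs`
# CLI. Requires a configured Hadoop client on the machine running it.
import subprocess

def hdfs(*args):
    return subprocess.run(["hdfs", "dfs", *args], check=True,
                          capture_output=True, text=True).stdout

def list_dir(path):            # "refresh": re-read a directory listing
    return hdfs("-ls", path)

def move(src, dst):            # covers both rename and move
    return hdfs("-mv", src, dst)

def delete(path):
    return hdfs("-rm", "-r", path)

def upload(local_path, hdfs_path):
    return hdfs("-put", local_path, hdfs_path)

def download(hdfs_path, local_path):
    return hdfs("-get", hdfs_path, local_path)

# Example (only works against a running cluster):
# print(list_dir("/user/alice"))
```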
|
19 |
On the discovery of relevant structures in dynamic and heterogeneous data. Preti, Giulia, 22 October 2019
We are witnessing an explosion of available data coming from a huge number of sources and domains, which is leading to the creation of ever larger and richer datasets.
Understanding, processing, and extracting useful information from those datasets requires specialized algorithms that take into consideration both the dynamism and the heterogeneity of the data they contain.
Although several pattern mining techniques have been proposed in the literature, most of them fall short in providing interesting structures when the data can be interpreted differently from user to user, when it can change from time to time, and when it has different representations.
In this thesis, we propose novel approaches that go beyond the traditional pattern mining algorithms, and can effectively and efficiently discover relevant structures in dynamic and heterogeneous settings.
In particular, we address the task of pattern mining in multi-weighted graphs, pattern mining in dynamic graphs, and pattern mining in heterogeneous temporal databases.
In pattern mining in multi-weighted graphs, we consider the problem of mining patterns for a new category of graphs called multi-weighted graphs. In these graphs, nodes and edges can carry multiple weights that represent, for example, the preferences of different users or applications, and that are used to assess the relevance of the patterns.
We introduce a novel family of scoring functions that assign a score to each pattern based on both the weights of its appearances and their number, and that respect the anti-monotone property, pivotal for efficient implementations.
We then propose a centralized and a distributed algorithm that solve the problem both exactly and approximately. The approximate solution has better scalability in terms of the number of edge weighting functions, while achieving good accuracy in the results found.
An extensive experimental study shows the advantages and disadvantages of our strategies, and proves their effectiveness.
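To make the anti-monotonicity concrete, the sketch below shows one illustrative score of this kind: for each appearance and each weighting function, take the minimum edge weight over the appearance and sum everything. Extending a pattern can only drop appearances or lower these minima, so the score never increases, which enables apriori-style pruning. The exact family of functions proposed in the thesis is not reproduced here.

```python
# Minimal sketch of an anti-monotone pattern score in a multi-weighted graph.

def appearance_score(edge_weights, num_functions):
    # edge_weights: per-edge weight vectors, one value per weighting function
    return sum(min(w[f] for w in edge_weights) for f in range(num_functions))

def pattern_score(appearances, num_functions):
    return sum(appearance_score(a, num_functions) for a in appearances)

# Two weighting functions (e.g., two users' preferences) on each edge.
pattern_ab = [                      # appearances of pattern A-B
    [(0.9, 0.4)],                   # one edge per appearance
    [(0.7, 0.8)],
]
pattern_abc = [                     # appearances of the extension A-B-C
    [(0.9, 0.4), (0.5, 0.6)],       # the extra edge lowers the per-function minima
]
print(pattern_score(pattern_ab, 2))    # 2.8
print(pattern_score(pattern_abc, 2))   # 0.9, never larger than the sub-pattern's score
```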
Then, in pattern mining in dynamic graphs, we focus on the particular task of discovering structures that are both well-connected and correlated over time, in graphs where nodes and edges can change over time.
These structures represent edges that are topologically close and exhibit a similar behavior of appearance and disappearance in the snapshots of the graph.
To this aim, we introduce two measures for computing the density of a subgraph whose edges change in time, and a measure to compute their correlation.
The density measures are able to detect subgraphs that are silent in some periods of time but highly connected in the others, and thus they can detect events or anomalies that happened in the network.
The correlation measure can identify groups of edges that tend to co-appear together, as well as edges that are characterized by similar levels of activity.
For both variants of density measure, we provide an effective solution that enumerates all the maximal subgraphs whose density and correlation exceed given minimum thresholds, but can also return a more compact subset of representative subgraphs that exhibit high levels of pairwise dissimilarity.
Furthermore, we propose an approximate algorithm that scales well with the size of the network, while achieving a high accuracy.
We evaluate our framework with an extensive set of experiments on both real and synthetic datasets, and compare its performance with the main competitor algorithm.
The results confirm the correctness of the exact solution, the high accuracy of the approximate one, and the superiority of our framework over the existing solutions.
In addition, they demonstrate the scalability of the framework and its applicability to networks of different nature.
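The sketch below illustrates the correlation side of this idea on toy data: each edge is summarized by its activity vector over the snapshots, and two edges are compared by how similarly they appear and disappear. Pearson correlation is used purely for illustration; the density and correlation measures defined in the thesis are different.

```python
# Minimal sketch: correlation of edge activity across graph snapshots
# (1 = edge present in the snapshot, 0 = absent).
import numpy as np

def edge_correlation(activity_a, activity_b):
    a, b = np.asarray(activity_a, float), np.asarray(activity_b, float)
    if a.std() == 0 or b.std() == 0:      # constant edges carry no co-variation signal
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

e1 = [1, 1, 0, 0, 1, 1, 0, 0]             # two edges that tend to co-appear ...
e2 = [1, 1, 0, 0, 1, 0, 0, 0]
e3 = [0, 0, 1, 1, 0, 0, 1, 1]             # ... and one with the opposite behavior
print(round(edge_correlation(e1, e2), 2), round(edge_correlation(e1, e3), 2))
```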
Finally, we address the problem of entity resolution in heterogeneous temporal databases, which are datasets that contain records that give different descriptions of the status of real-world entities at different periods of time, and thus are characterized by different sets of attributes that can change over time.
Detecting records that refer to the same entity in such a scenario requires a record similarity measure that takes into account the temporal information and that is aware of the absence of a common fixed schema between the records.
However, existing record matching approaches either ignore the dynamism in the attribute values of the records, or assume that all the records share the same set of attributes throughout time.
In this thesis, we propose a novel time-aware schema-agnostic similarity measure for temporal records to find pairs of matching records, and integrate it into an exact and an approximate algorithm.
The exact algorithm can find all the maximal groups of pairwise similar records in the database.
The approximate algorithm, on the other hand, can achieve higher scalability with the size of the dataset and the number of attributes, by relying on a technique called meta-blocking. This algorithm can find a good-quality approximation of the actual groups of similar records, by adopting an effective and efficient clustering algorithm.
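The sketch below illustrates a time-aware, schema-agnostic comparison of two temporal records: values are matched regardless of attribute names and the score is damped by the time gap between the records. The decay rate, the value similarity and the toy records are assumptions for the example, not the measure proposed in the thesis.

```python
# Minimal sketch of a time-aware, schema-agnostic record similarity.
import math
from difflib import SequenceMatcher

def value_sim(a, b):
    return SequenceMatcher(None, str(a).lower(), str(b).lower()).ratio()

def record_similarity(rec_a, rec_b, decay=0.1):
    vals_a, vals_b = list(rec_a["values"].values()), list(rec_b["values"].values())
    if not vals_a or not vals_b:
        return 0.0
    best = [max(value_sim(v, w) for w in vals_b) for v in vals_a]  # schema-agnostic matching
    content = sum(best) / len(best)
    gap = abs(rec_a["year"] - rec_b["year"])
    return content * math.exp(-decay * gap)                        # temporal damping

r1 = {"year": 1880, "values": {"name": "John Smith", "occupation": "farmer"}}
r2 = {"year": 1890, "values": {"full_name": "J. Smith", "job": "farmer", "town": "Leeds"}}
print(round(record_similarity(r1, r2), 2))
```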
|
20 |
Rapprochement de données pour la reconnaissance d'entités dans les documents océrisés / Data matching for entity recognition in OCRed documents. Kooli, Nihel, 13 September 2016
This thesis focuses on entity recognition in OCRed documents, driven by a database. An entity may be, for example, a company described by its name, address, phone number, VAT number, etc., or the metadata of a scientific paper, such as its title, its authors and their affiliations, the name of its journal, etc. Given a database that describes entities through its records and a document that contains one or more of these entities, we aim to identify the entities mentioned in the document using the database. This work is motivated by an industrial application that aims to automate the processing of images of administrative documents arriving in a continuous stream. We addressed this problem as a matching issue between the document content and the database content. The difficulties of this task are due to the variability of the representation of entity attributes in the database and in the document, and to the presence of similar attributes in different entities. Added to this are record redundancy and typing errors in the database, and the alteration of the structure and content of the document caused by OCR. To deal with these problems, we opted for a two-step approach: entity resolution and entity recognition. The first step consists of linking the records that refer to the same entity and synthesizing them into an entity model. For this purpose, we proposed a supervised approach based on a combination of several similarity measures between attributes; these measures tolerate character errors and take word permutations into account. The second step aims to match the entities mentioned in a document with the resulting entity model. We proceeded in two different ways, one using content matching and the other integrating structure matching. For content matching, we proposed two methods: M-EROCS and ERBL. M-EROCS, an improvement/adaptation of a state-of-the-art method, matches OCR blocks with the entity model based on a score that tolerates OCR errors and attribute variability. ERBL labels the document with the entity attributes and groups these labels into entities. Structure matching exploits the structural relationships between the labels of an entity to correct labeling errors. The proposed method, called G-ELSE, relies on inexact matching of attributed graphs modeling local structures, using a structural model learned for this purpose. As this thesis was carried out in collaboration with the ITESOFT-Yooz company, we evaluated all the proposed steps on two administrative corpora and a third corpus extracted from the web.
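The sketch below illustrates an attribute similarity in the spirit of the measures combined for the entity resolution step: it tolerates character errors and term permutations by comparing sorted tokens at the character level and blending the result with a token-overlap score. The equal blend weights are an assumption for the example, not the learned combination.

```python
# Minimal sketch of an attribute similarity tolerant to character errors
# and term permutations.
from difflib import SequenceMatcher

def char_sim_sorted(a, b):
    sa, sb = " ".join(sorted(a.lower().split())), " ".join(sorted(b.lower().split()))
    return SequenceMatcher(None, sa, sb).ratio()      # permutation-insensitive edit similarity

def token_overlap(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def attribute_similarity(a, b):
    return 0.5 * char_sim_sorted(a, b) + 0.5 * token_overlap(a, b)

print(round(attribute_similarity("ITESOFT Yooz SAS", "Yooz ITESOFT S.A.S"), 2))
print(round(attribute_similarity("12 rue de la Paix", "Paix rue de la 12"), 2))
```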
|