41 |
Ensembles na classificação relacional / Ensembles in relational classification. Nils Ever Murrugarra Llerena, 08 September 2011.
In many domains, besides information about the objects or entities that compose them, there is also information about the relationships between those objects; co-authorship networks and Web pages are two examples. It is therefore natural to look for classification techniques that take this relational information into account. Among such techniques are the so-called graph-based classification methods, which classify examples by considering the relationships between them. This work develops methods to improve the performance of graph-based classifiers using ensemble strategies. An ensemble classifier combines the individual predictions of a set of classifiers in some way, and it usually performs better than its individual members. Three techniques were developed: the first for data originally in propositional format and transformed into a graph-based relational format, and the second and third for data originally in graph format. The first technique, inspired by the boosting algorithm, originated the Adaptive Graph-Based K-Nearest Neighbor algorithm (A-KNN). The second, inspired by the bagging algorithm, originated three approaches of Graph-Based Bagging (BG). Finally, the third, inspired by the Cross-Validated Committees algorithm, originated Graph-Based Cross-Validated Committees (CVCG). Experiments were performed on 38 data sets, 22 in propositional format and 16 in relational format, evaluated with 10-fold stratified cross-validation; statistical differences between classifiers were assessed with the method proposed by Demsar (2006). All three techniques improved or maintained the performance of the base classifiers. In conclusion, ensembles applied to graph-based classifiers yield good performance.
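A minimal sketch of the bagging idea applied to a kNN base learner, in the spirit of the Graph-Based Bagging (BG) technique described above; the thesis's actual graph-based classifiers are not reproduced here. scikit-learn's KNeighborsClassifier stands in as the base learner, and the Iris data and parameter values are illustrative assumptions.

```python
# Bagging over a kNN base learner: a stand-in sketch, not the thesis's
# A-KNN/BG algorithms. Each ensemble member is trained on a bootstrap
# sample; predictions are combined by majority vote.
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

base = KNeighborsClassifier(n_neighbors=3)          # stand-in base classifier
ensemble = BaggingClassifier(base, n_estimators=25, random_state=0)

# 10-fold stratified cross-validation, matching the evaluation scheme above
single = cross_val_score(base, X, y, cv=10).mean()
bagged = cross_val_score(ensemble, X, y, cv=10).mean()
print(f"single kNN: {single:.3f}  bagged kNN: {bagged:.3f}")
```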
|
42 |
Interactive 3D segmentation repair with image-foresting transform, supervoxels and seed robustness / Reparação interativa de segmentações 3D com transformada imagem-floresta, supervoxels, robustez de sementes. Anderson Carlos Moreira Tavares, 02 June 2017.
Image segmentation consists of partitioning an image into relevant regions, for instance isolating the pixels that belong to objects of interest in the image domain; it is an important step in computer vision, medical image processing, and other applications. Automatic segmentation often produces imperfect results. The user can correct them by editing manually or interactively, or can simply discard the segmentation and try to generate another result automatically with a different method. Interactive methods combine benefits of manual and automatic ones, reducing user effort and exploiting the user's high-level knowledge. In seed-based methods, continuing or repairing a prior segmentation (presegmentation) without making the user start from scratch requires solving the Reverse Interactive Segmentation Problem (RISP): automatically estimating the seeds that would generate it. To achieve this goal, we first divide the segmented object into its composing cores. Inside a core, any two seeds taken separately always produce the same result, making one of them redundant; with this, only one seed per core is required. Cores leading to segmentations contained in the results of other cores are redundant and can also be discarded, further reducing the seed set, a process called Redundancy Analysis. A minimal seed set for the presegmentation is thus generated, and the interactive repair problem can be solved by adding or removing seeds. Within the framework of the Image-Foresting Transform (IFT), new methods such as the Oriented Image-Foresting Transform (OIFT) and Oriented Relative Fuzzy Connectedness (ORFC) were developed, but no algorithms were known for computing the cores of these methods. This work develops such algorithms, with proofs of correctness. The cores also give an indication of how robust each method is to the positioning of the seeds; accordingly, a hybrid method combining GraphCut with ORFC cores, as well as a Robustness Coefficient (RC), have been developed. We also present another solution for repairing segmentations, based on IFT-SLIC, originally used to generate supervoxels. Experimental results analyze, compare, and demonstrate the potential of these solutions.
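To make the IFT machinery concrete, here is a minimal sketch of seed competition via the Image-Foresting Transform on a 2D grid with the classic f_max path cost (the maximum arc weight along a path). The OIFT and ORFC variants and the core-computation algorithms of the thesis are not implemented; the toy image, 4-neighborhood, and intensity-difference arc weights are assumptions for illustration.

```python
# Seed competition via the IFT with the f_max path cost: a Dijkstra-like
# propagation where each pixel receives the label of the seed that reaches
# it with the cheapest path (cheapest = smallest maximum arc weight).
import heapq
import numpy as np

def ift_fmax(image, seeds):
    """image: 2D array; seeds: dict mapping (row, col) -> label."""
    rows, cols = image.shape
    cost = np.full((rows, cols), np.inf)
    label = np.zeros((rows, cols), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # arc weight: intensity difference; path cost: max arc on path
                w = max(d, abs(float(image[nr, nc]) - float(image[r, c])))
                if w < cost[nr, nc]:
                    cost[nr, nc] = w
                    label[nr, nc] = label[r, c]
                    heapq.heappush(heap, (w, nr, nc))
    return label, cost

img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9]])
labels, _ = ift_fmax(img, {(0, 0): 1, (0, 3): 2})
print(labels)  # pixels partitioned between the two seeds
```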
|
43 |
Evaluation Techniques and Graph-Based Algorithms for Automatic Summarization and Keyphrase Extraction. Hamid, Fahmida, 08 1900.
Automatic text summarization and keyphrase extraction are two interesting areas of research that span natural language processing and information retrieval. They have recently become very popular because of their wide applicability, and devising generic techniques for these tasks is challenging for several reasons. Even so, a good number of intelligent systems perform these tasks. As different systems are designed from different perspectives, evaluating their performance with a generic strategy is crucial, and it has become immensely important to do so with minimal human effort.
In our work, we focus on designing a relativized scale for evaluating different algorithms. This is our major contribution, and it challenges the traditional approach of working with an absolute scale. We consider the impact of several environment variables (the lengths of the document, the references, and the system-generated outputs) on performance: instead of fixing rigid lengths, we show how to adjust to their variation. We establish a mathematically sound baseline that should work for all kinds of documents, and we emphasize automatically determining the syntactic well-formedness of the structures (sentences). We also propose defining an equivalence class for each unit (e.g. word) instead of an exact string-matching strategy, and we show an evaluation approach that considers the weighted relatedness of multiple references to adjust to the degree of disagreement between the gold standards. We publish the proposed approach as a free tool so that other systems can use it, and we have also assembled a dataset of scientific articles with a reference summary and keyphrases for each document. Our approach is applicable to evaluating both single-document and multiple-document tasks.
We tested our evaluation method on three intrinsic tasks (taken from the DUC 2004 conference), and in all three cases it correlates positively with ROUGE. In our experiments on the DUC 2004 question-answering task, it correlates with the human decision (an extrinsic task) with 36.008% accuracy. In general, the proposed relativized scale performs as well as the popular ROUGE technique while remaining flexible with respect to output length.
As part of the evaluation, we also devised a new graph-based algorithm for sentiment analysis. The proposed model extracts units (e.g. words or sentences) from the original text that belong either to the positive or to the negative sentiment pole. It embeds both types of sentiment flow into a single text-graph composed of words or phrases as nodes and their relations as edges. By recursively applying two mutually exclusive relations, the model builds the final ranking of the nodes and, based on that ranking, extracts two segments from the article: one with highly positive and one with highly negative sentiment. The output of this model was compared against non-polar TextRank output to quantify how well the polar summaries cover the facts along with the sentiment.
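For context, a minimal TextRank-style sketch of the text-graph idea: words become nodes, co-occurrence relations within a window become edges, and PageRank produces the node ranking. The thesis's dual positive/negative sentiment flows and mutually exclusive relations are not reproduced; the toy text and window size are assumptions.

```python
# A TextRank-style text-graph: co-occurring words are linked, and PageRank
# scores node centrality. This shows only the graph-ranking backbone, not
# the polar (positive/negative) ranking of the dissertation.
import networkx as nx

text = ("graph based ranking treats words as nodes and co occurrence "
        "relations as edges so central words receive high rank")
words = text.split()

G = nx.Graph()
window = 3  # co-occurrence window size (an arbitrary choice for illustration)
for i, w in enumerate(words):
    for u in words[i + 1:i + window]:
        if u != w:
            G.add_edge(w, u)

rank = nx.pagerank(G, alpha=0.85)
top = sorted(rank, key=rank.get, reverse=True)[:5]
print(top)  # the most central words of the toy text
```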
|
44 |
Modeling and Execution of Resilient Business Processes in Unreliable Communication Environments. Nordemann, Frank, 01 March 2022.
Business processes define workflows by structuring sets of activities according to given objectives. Process Modeling Languages (PMLs) provide graphical elements to define process models. Apart from use cases in finance and commerce, PMLs are gaining popularity in application domains such as cyber-physical systems, the Internet of Things, ubiquitous computing, mobile devices, and scenarios in rural, restricted, or disaster-affected regions. Many of these domains are exposed to delayed, intermittent, or broken connectivity, and existing PMLs show limitations in handling connectivity-related issues, leading to failures and breakdowns at process runtime. This thesis addresses connectivity-related issues in the modeling and execution of resilient business processes taking place in unreliable communication environments. It introduces resilient BPMN (rBPMN), an extension of the Business Process Model and Notation (BPMN) for environments with delayed, intermittent, or broken connectivity. rBPMN extends the BPMN metamodel with new elements for resilient process models: domain experts may define alternatives for possibly failing message flows based on priorities or on the characteristics of the alternatives, and functionality offered by remote participants may be moved to other participants for local execution. The thesis also illustrates approaches for the graph-based analysis of business processes regarding their resilient operation. By translating process models into directed graphs, graph algorithms can dynamically find the most suitable process path; domain experts can identify non-resilient parts of a process model and optimize the affected segments before runtime, and multi-criteria analysis approaches optimize process operation according to a chosen set of criteria. A real-world scenario of an environmentally friendly slurry application illustrates the use of rBPMN's concepts for modeling and executing resilient processes. Technical approaches realizing rBPMN's resilience strategies using a BPMN runtime engine and microservices are illustrated. The proof-of-concept implementations may be extended and adapted, serving as guides for other application domains.
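A minimal sketch of the graph-based path analysis described above, assuming invented participant names and message-flow reliabilities: the process model is translated into a directed graph, and the most reliable path is found by minimizing the sum of negative log-reliabilities (equivalent to maximizing the product of reliabilities). This illustrates the idea only, not the rBPMN toolchain.

```python
# Process model as a directed graph with reliability-weighted message flows.
# Shortest path over -log(reliability) == most reliable end-to-end path.
import math
import networkx as nx

G = nx.DiGraph()
# (from, to, estimated reliability of the message flow) - invented values
flows = [("start", "sense", 0.99), ("sense", "cloud", 0.60),
         ("sense", "edge", 0.95), ("cloud", "apply", 0.60),
         ("edge", "apply", 0.90), ("apply", "end", 0.99)]
for u, v, rel in flows:
    G.add_edge(u, v, cost=-math.log(rel), rel=rel)

path = nx.shortest_path(G, "start", "end", weight="cost")
rel = math.prod(G[u][v]["rel"] for u, v in zip(path, path[1:]))
print(path, f"overall reliability = {rel:.2f}")
```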
|
45 |
How to explain graph-based semi-supervised learning for non-mathematicians? Jönsson, Mattias; Borg, Lucas, January 2019.
The large amount of data available on the web can be used to improve the predictions made by machine learning algorithms. The problem is that such data is often in a raw format and needs to be manually labeled by a human before a machine learning algorithm can use it. Semi-supervised learning (SSL) is a technique where the algorithm uses a few labeled samples to automatically label the rest of the data. One approach to SSL is to represent the data in a graph, called graph-based semi-supervised learning (GSSL), and to find similarities between the nodes for automatic labeling.
Our goal in this thesis is to simplify the advanced processes and steps needed to implement a GSSL algorithm. We cover basic tasks such as setting up the development environment, as well as more advanced steps such as data preprocessing and feature extraction. The feature extraction techniques covered are bag-of-words (BOW) and term frequency-inverse document frequency (TF-IDF). Lastly, we present how to classify documents using Label Propagation (LP) and Multinomial Naive Bayes (MNB), with a detailed explanation of the inner workings of GSSL.
We showcase classification performance by classifying documents from the 20 Newsgroups dataset using LP and MNB. The results are documented using two evaluation scores, F1-score and accuracy. A comparison between MNB and LP with two different kernels, KNN and RBF, was made for different amounts of labeled documents. The results show that MNB classifies the dataset better than LP.
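A minimal sketch of the pipeline just described, using scikit-learn's LabelPropagation with a kNN kernel on two 20 Newsgroups categories. TruncatedSVD densifies the TF-IDF features, since sparse-input support in LabelPropagation varies across scikit-learn versions; the category choice, label fraction, and parameter values are illustrative assumptions rather than the thesis's experimental setup.

```python
# TF-IDF features -> dense low-rank representation -> LabelPropagation.
# Unlabeled points are marked with -1, the scikit-learn convention.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.semi_supervised import LabelPropagation

data = fetch_20newsgroups(subset="train",
                          categories=["sci.space", "rec.autos"],
                          remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(max_features=5000).fit_transform(data.data)
X = TruncatedSVD(n_components=100, random_state=0).fit_transform(X)
y = np.array(data.target)

rng = np.random.RandomState(0)
y_train = y.copy()
mask = rng.rand(len(y)) < 0.95   # hide ~95% of the labels
y_train[mask] = -1               # -1 marks "unlabeled"

model = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y_train)
print("F1 on the unlabeled part:",
      f1_score(y[mask], model.transduction_[mask]))
```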
|
46 |
Towards Understanding Natural Language: Semantic Parsing, Commonsense Knowledge Acquisition, Reasoning Framework and Applications. January 2019.
Reasoning with commonsense knowledge is an integral component of human behavior. It is due to this capability that people know that a weak person may not be able to lift someone. It has been a long-standing goal of the Artificial Intelligence community to simulate such commonsense reasoning abilities in machines, and over the years many advances have been made and various challenges have been proposed to test those abilities. The Winograd Schema Challenge (WSC) is one such Natural Language Understanding (NLU) task, which was also proposed as an alternative to the Turing Test. It is made up of textual question-answering problems that require resolving a pronoun to its correct antecedent.
In this thesis, two approaches to developing NLU systems that solve the Winograd Schema Challenge are demonstrated. To this end, a semantic parser is presented, various kinds of commonsense knowledge are identified, techniques to extract commonsense knowledge are developed, and two commonsense reasoning algorithms are presented. The usefulness of the developed tools and techniques is shown by applying them to solve the challenge. (Doctoral Dissertation, Computer Science, 2019)
|
47 |
Graph-based Modern Nonparametrics For High-dimensional Data. Wang, Kaijun, January 2019.
Developing nonparametric statistical methods and inference procedures for high-dimensional large data has been a challenging frontier problem of statistics. To attack this problem, a clear rising trend has been observed in recent years with a radically different viewpoint, "Graph-based Nonparametrics," which is the main research focus of this dissertation. The basic idea consists of two steps: (i) representation step: code the given data using graphs; (ii) analysis step: apply statistical methods to the graph-transformed problem to systematically tackle various types of data structures. Under this general framework, this dissertation develops two major research directions. Chapter 2, based on Mukhopadhyay and Wang (2019a), introduces a new nonparametric method for the high-dimensional k-sample comparison problem that is distribution-free, robust, and continues to work even when the dimension of the data is larger than the sample size. The proposed theory is based on modern LP-nonparametrics tools and unexplored connections with spectral graph theory. The key is to construct a specially designed weighted graph from the data and to reformulate the k-sample problem as a community detection problem. The procedure is shown to possess various desirable properties along with a characteristic exploratory flavor that has practical consequences. Numerical examples show surprisingly good performance of our method under a broad range of realistic situations. Chapter 3, based on Mukhopadhyay and Wang (2019b), revisits some foundational questions about network modeling that are still unsolved. In particular, we present a unified statistical theory of the fundamental spectral graph methods (e.g., Laplacian, modularity, diffusion map, regularized Laplacian, the Google PageRank model), which are often viewed as heuristic-based empirical mysteries. Despite half a century of research, this question has been one of the most formidable open issues, if not the core problem, in modern network science. Our approach integrates modern nonparametric statistics, mathematical approximation theory (of integral equations), and computational harmonic analysis in a novel way to develop a theory that unifies and generalizes the existing paradigm. From a practical standpoint, it is shown that this perspective can provide adequate guidance for designing next-generation computational tools for large-scale problems; as an example, we describe the high-dimensional change-point detection problem. Chapter 4 discusses further extensions and applications of our methodologies to regularized spectral clustering and spatial graph regression problems. The dissertation concludes with a discussion of two important areas of future study.
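A generic spectral stand-in for the two-step recipe of Chapter 2 (code the data as a weighted graph, then look for community structure): pool two samples, build an RBF similarity graph, and check whether spectral communities track sample membership. The simulated data, the gamma value, and the use of off-the-shelf spectral clustering are assumptions; this is not the LP-nonparametric procedure itself.

```python
# k-sample comparison recast as community detection on a weighted graph:
# if the detected communities align with sample membership (high ARI),
# the two samples plausibly differ in distribution.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
a = rng.normal(0.0, 1.0, size=(60, 20))   # sample 1
b = rng.normal(0.7, 1.0, size=(60, 20))   # sample 2, shifted mean
X = np.vstack([a, b])
membership = np.array([0] * 60 + [1] * 60)

W = rbf_kernel(X, gamma=0.05)             # weighted graph on the pooled data
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)

print("ARI:", adjusted_rand_score(membership, labels))
```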
|
48 |
Construção de redes baseadas em vizinhança para o aprendizado semissupervisionado / Graph construction based on neighborhood for semi-supervised learning. Berton, Lilian, 25 January 2016.
With increasing storage capacity, databases are getting larger and, in many situations, only a small subset of data items can be labeled. This happens because the labeling process is often expensive, time-consuming, and requires the involvement of human experts. Hence, several semi-supervised algorithms have been proposed, showing that it is possible to achieve good results by using the prior knowledge carried by the small labeled fraction of the data. Among these algorithms, graph-based ones have gained prominence in the area. Such interest is justified by the benefits of representation via graphs, such as the ability to capture the topological structure of the data, to represent hierarchical structures, and to model manifolds in high-dimensional spaces. Nevertheless, much available data is represented in attribute-value tables, and graph-based algorithms cannot be applied to such data without first constructing a graph from it. Since graph construction, as well as its relation to the performance of the algorithms, has been little studied, this thesis investigated these aspects and proposed new methods for graph construction with characteristics not yet explored in the literature. Three methods for constructing graphs with different topologies were proposed: 1) S-kNN (Sequential k Nearest Neighbors), which generates regular graphs; 2) GBILI (Graph Based on the Informativeness of Labeled Instances) and RGCLI (Robust Graph that Considers Labeled Instances), which exploit the available labels and generate graphs with power-law degree distributions; 3) GBLP (Graph Based on Link Prediction), which is based on link prediction measures and generates graphs with small-world properties. The proposed construction strategies were analyzed through measures from graph theory and complex networks, and validated through semi-supervised classification. The methods were applied to benchmarks in the area and also to music genre classification and image segmentation. The results show that the graph topology directly influences the classification algorithms and that the proposed strategies achieve good accuracy.
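As a minimal sketch of the baseline such constructions start from, scikit-learn can build a kNN graph from tabular data, which is then symmetrized and inspected with graph measures. The S-kNN, GBILI, RGCLI, and GBLP methods themselves are not reproduced; Iris and k = 5 are illustrative assumptions, and a recent networkx release is assumed for from_scipy_sparse_array.

```python
# Tabular data -> kNN graph -> symmetrize -> inspect topology.
import networkx as nx
from sklearn.datasets import load_iris
from sklearn.neighbors import kneighbors_graph

X, _ = load_iris(return_X_y=True)
A = kneighbors_graph(X, n_neighbors=5, mode="connectivity")
A = A.maximum(A.T)                 # keep an edge if either endpoint has it

G = nx.from_scipy_sparse_array(A)  # requires networkx >= 2.7
print("components:", nx.number_connected_components(G),
      "avg degree:", sum(dict(G.degree()).values()) / G.number_of_nodes())
```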
|
49 |
Constrained graph-based semi-supervised learning with higher order regularization / Aprendizado semissupervisionado restrito baseado em grafos com regularização de ordem elevada. Sousa, Celso Andre Rodrigues de, 10 August 2017.
Graph-based semi-supervised learning (SSL) algorithms have been widely studied in the last few years. Most of these algorithms were designed from unconstrained optimization problems using a Laplacian regularizer term as the smoothness functional, in an attempt to reflect the intrinsic geometric structure of the data's marginal distribution. Although a number of recent research papers still focus on unconstrained methods for graph-based SSL, a recent statistical analysis showed that many of these algorithms may be unstable on transductive regression. Therefore, we focus on providing new constrained methods for graph-based SSL. We begin by analyzing the regularization framework of existing unconstrained methods. Then, we incorporate two normalization constraints into the optimization problems of three of these methods and show that the resulting optimization problems have closed-form solutions. By generalizing one of these constraints to any distribution, we provide generalized methods for constrained graph-based SSL. The proposed methods have a more flexible regularization framework than the corresponding unconstrained methods: they can deal with any graph Laplacian and use higher order regularization, which is effective on general SSL tasks. To show the effectiveness of the proposed methods, we provide comprehensive experimental analyses, subdivided into two parts. In the first part, we evaluate existing graph-based SSL algorithms on time series data to find their weaknesses. In the second part, we evaluate the proposed constrained methods against six state-of-the-art graph-based SSL algorithms on benchmark data sets. Since the widely used best-case analysis may hide useful information concerning an SSL algorithm's performance with respect to parameter selection, we used recently proposed empirical evaluation models to evaluate our results. Our methods outperform the competing methods on most parameter settings and graph construction methods, although we found a few experimental settings in which they performed poorly. To facilitate the reproduction of our results, the source code, data sets, and experimental results are freely available.
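For the unconstrained family these methods build on, the closed form is simple to state and check: minimizing ||F - Y||^2 + mu tr(F^T L F) gives F* = (I + mu L)^{-1} Y. Below is a minimal sketch on an invented five-node graph; the thesis's normalization constraints and higher order regularizers are not implemented.

```python
# Closed-form Laplacian-regularized SSL on a tiny invented graph.
import numpy as np

W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # adjacency of a 5-node graph
L = np.diag(W.sum(axis=1)) - W                 # combinatorial graph Laplacian

# one-hot labels: node 0 is class A, node 4 is class B, the rest unlabeled
Y = np.zeros((5, 2))
Y[0, 0] = 1.0
Y[4, 1] = 1.0

mu = 1.0                                       # regularization strength
F = np.linalg.solve(np.eye(5) + mu * L, Y)     # F* = (I + mu L)^{-1} Y
print(F.argmax(axis=1))                        # predicted class per node
```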
|
50 |
Historical document image analysis : a structural approach based on texture / Analyse d'images de documents patrimoniaux : une approche structurelle à base de texture. Mehri, Maroua, 28 May 2015.
Over the last few years, there has been tremendous growth in digitizing collections of cultural heritage documents. This has raised many challenges and open issues, such as information retrieval in digital libraries or analyzing the page content of historical books. Recently, an important need has emerged for a computer-aided characterization and categorization tool able to index or group historical digitized book pages according to several criteria, mainly the layout structure and/or the typographic and graphical characteristics of the document image content. The work conducted in this thesis therefore presents an automatic approach for the characterization and categorization of historical book pages. The proposed approach is applicable to a large variety of ancient books and assumes no a priori knowledge of document image layout and content. It uses texture and graph algorithms to provide a rich, holistic description of the layout and content of the analyzed pages. The categorization is based on characterizing the digitized page content with texture, shape, geometric, and topological descriptors, represented as a structural signature. More precisely, the signature-based characterization approach consists of two main stages: first, extracting homogeneous regions; second, building a graph-based page signature from the extracted homogeneous regions, reflecting the page's layout and content. Afterwards, by comparing the obtained graph-based signatures using a graph-matching paradigm, similarities between digitized book pages in layout and/or content can be deduced. Pages with similar layout and/or content can then be categorized and grouped, and a table of contents or summary of the analyzed book can be generated automatically. As a consequence, numerous signature-based applications (e.g. information retrieval in digital libraries according to several criteria, page categorization) can be implemented for effectively managing corpora or collections of books. To illustrate the effectiveness of the proposed page signature, a detailed experimental evaluation assessed two possible categorization applications: unsupervised page classification and page stream segmentation. In addition, the different steps of the proposed approach were evaluated on a large variety of historical document images.
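A toy sketch of comparing structural page signatures under a graph-matching paradigm, here networkx's graph edit distance with a node-attribute match. Each "page" graph has one node per homogeneous region, with an invented kind attribute standing in for the texture, shape, geometric, and topological descriptors; the real signature extraction from images is not shown.

```python
# Page signatures as region-adjacency graphs, compared by graph edit distance.
import networkx as nx

def page_graph(regions, edges):
    G = nx.Graph()
    for name, kind in regions:
        G.add_node(name, kind=kind)    # e.g. "text", "graphic" region
    G.add_edges_from(edges)            # spatial adjacency between regions
    return G

p1 = page_graph([("a", "text"), ("b", "text"), ("c", "graphic")],
                [("a", "b"), ("b", "c")])
p2 = page_graph([("x", "text"), ("y", "text"), ("z", "graphic")],
                [("x", "y"), ("y", "z")])
p3 = page_graph([("u", "graphic"), ("v", "graphic")], [("u", "v")])

same_kind = lambda n1, n2: n1["kind"] == n2["kind"]
print(nx.graph_edit_distance(p1, p2, node_match=same_kind))  # 0.0: same layout
print(nx.graph_edit_distance(p1, p3, node_match=same_kind))  # > 0: different
```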
|