11

A constituição semiótica da imagem-documento

Costa, Rafael Wagner dos Santos 12 April 2013 (has links)
This thesis aims to characterize the semiotic status of the document-image in light of the thought of Gilles Deleuze and Charles Sanders Peirce. Revisiting Peirce's influence on Deleuze's cinematic thought – generally associated with Bergson – allowed a critical review of the movement-image and the time-image from a semiotic standpoint, and revealed a new type of image derived from the contemporary cinematic experience, one that introduces quite specific problems for the study of cinema: the document-image, the focus of this thesis. The document-image is conceived here as a time-image, more precisely as the documentary face of the crystal-image, whose expressive matter consists of the figures of fabulation, the power of the false, the otherness of the character, the agent provocateur, the intercessor character, the scenic game, the trance, displacement, and nuclear montage. In this cinema of the document-image, instead of identifying the indices of a reality already lived (as in classic documentaries), the images behave like arrows of time (in series) that point to the becoming of the world, inseparable from the ideas of action, intervention, and invention made possible by the images of fabulation. In addition to demonstrating the theoretical relations between Peirce and Deleuze – which highlight the potential of this encounter – the document-image was approached through essays. These essays seek to establish how such an image functions in scenes from the films Iracema, uma transa amazônica (1974), Jogo de cena (2007) and Di Glauber (1977), proposed as Brazilian experiences of the document-image. This thesis thus seeks to reveal the ways in which the thought of Peirce and Deleuze can combine to give rise to a new image of thought whose matter of expression is the cinematic image. Finally, this research also demonstrates that the practice of Deleuzean writing conceived through a Peircean lens can bring relevant contributions to both theories.
12

Analyzing symbols in architectural floor plans via traditional computer vision and deep learning approaches

Rezvanifar, Alireza 13 December 2021 (has links)
Architectural floor plans are scale-accurate 2D drawings of one level of a building, seen from above, which convey structural and semantic information related to rooms, walls, symbols, textual data, etc. They consist of lines, curves, symbols, and textual markings, showing the relationships between rooms and all physical features required for the proper construction or renovation of the building. First, this thesis provides a thorough study of the state of the art in symbol spotting methods for architectural drawings, an application domain that presents the document image analysis and graphics recognition communities with an interesting set of challenges, linked to the sheer complexity and density of the embedded information, that have yet to be resolved. Second, we propose a hybrid method that capitalizes on the strengths of both vector-based and pixel-based symbol spotting techniques. In the description phase, the salient geometric constituents of a symbol are extracted by a variety of vectorization techniques, including a proposed voting-based algorithm for finding partial ellipses. This enables us to better handle local shape irregularities and boundary discontinuities, as well as partial occlusion and overlap. In the matching phase, the spatial relationship between the geometric primitives is encoded via a primitive-aware proximity graph. A statistical approach is then used to rapidly yield a coarse localization of symbols within the plan. Localization is further refined with a pixel-based step implementing a modified cross-correlation function. Experimental results on the public SESYD synthetic dataset and real-world images demonstrate that our approach clearly outperforms other popular symbol spotting approaches. Traditional on-the-fly symbol spotting methods are unable to address the semantic challenge of graphical notation variability, i.e. low intra-class symbol similarity, an issue that is particularly important in architectural floor plan analysis.
The presence of occlusion and clutter, characteristic of real-world plans, along with a varying graphical symbol complexity from almost trivial to highly complex, also pose challenges to existing spotting methods. Third, we address all the above issues by leveraging recent advances in deep learning-based neural networks and adapting an object detection framework based on the YOLO (You Only Look Once) architecture. We propose a training strategy based on tiles, avoiding many issues particular to deep learning-based object detection networks related to the relatively small size of symbols compared to entire floor plans, aspect ratios, and data augmentation. Experimental results demonstrate that our method successfully detects architectural symbols with low intra-class similarity and of variable graphical complexity, even in the presence of heavy occlusion and clutter.
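The tile-based training strategy mentioned above can be illustrated with a small helper that covers a large floor-plan image with fixed-size overlapping tiles, so that small symbols remain detectable at the detector's input resolution. This is a generic sketch; the function name, parameters, and edge handling are illustrative, not taken from the thesis.

```python
def tile_coords(width, height, tile, overlap):
    """Return (x, y) top-left corners of overlapping square tiles that
    cover a width x height image.  Assumes tile <= width and tile <= height.
    Adjacent tiles share `overlap` pixels so symbols on a seam appear
    whole in at least one tile."""
    step = tile - overlap
    xs = list(range(0, width - tile + 1, step))
    ys = list(range(0, height - tile + 1, step))
    # ensure the right and bottom edges are fully covered
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

At inference time the same tiling would be applied and per-tile detections merged back into plan coordinates (e.g. with non-maximum suppression across tile seams).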
13

Deep Learning for Document Image Analysis

Tensmeyer, Christopher Alan 01 April 2019 (has links)
Automatic machine understanding of documents from image inputs enables many applications in modern document workflows, digital archives of historical documents, and general machine intelligence, among others. Together, the techniques for understanding document images comprise the field of Document Image Analysis (DIA). Within DIA, the research community has identified several sub-problems, such as page segmentation and Optical Character Recognition (OCR). As the field has matured, there has been a trend of moving away from heuristic-based methods, designed for particular tasks and domains of documents, and towards machine learning methods that learn to solve tasks from examples of input/output pairs. Within machine learning, a particular class of models, known as deep learning models, has become the state of the art for many image-based applications, including DIA. While traditional machine learning models typically operate on features designed by researchers, deep learning models are able to learn task-specific features directly from raw pixel inputs. This dissertation is a collection of papers that propose several deep learning models to solve a variety of tasks within DIA. The first task is historical document binarization, where an input image of a degraded historical document is converted to a bi-tonal image to separate foreground text from background regions. The next part of the dissertation considers document segmentation problems, including identifying the boundary between the document page and its background, as well as segmenting an image of a data table into rows, columns, and cells. Finally, a variety of deep models are proposed to solve recognition tasks, including whole document image classification, identifying the font of a given piece of text, and transcribing handwritten text in low-resource languages.
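For the binarization task described above, the classical baseline is a global threshold such as Otsu's method; the dissertation proposes learned models precisely because global heuristics like this break down on degraded historical pages. A minimal sketch of the baseline, for contrast:

```python
def otsu_threshold(gray):
    """Otsu's global threshold for a flat list of 8-bit gray values.

    Picks the level t that maximizes the between-class variance between
    'background' (values <= t) and 'foreground' (values > t)."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = w_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]            # background pixel count so far
        if w_b == 0:
            continue
        w_f = total - w_b         # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b         # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal histogram this separates ink from paper well; under bleed-through, stains, and uneven fading, no single global level works, which motivates the learned, per-pixel approaches.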
14

Segmentation of heterogeneous document images: an approach based on machine learning, connected components analysis, and texture analysis

Bonakdar Sakhi, Omid 06 December 2012 (has links)
Document page segmentation is one of the most crucial steps in document image analysis. Ideally, it aims to recover the full structure of any document page, distinguishing text zones, graphics, photographs, halftones, figures, tables, etc. Although several attempts have been made to date to achieve correct page segmentation, many difficulties remain. The leader of the project within which this PhD work was funded (*) uses a complete processing chain in which page segmentation mistakes are corrected manually by human operators. Aside from the costs this represents, it requires tuning a large number of parameters; moreover, some segmentation mistakes escape the operators' vigilance. Current automated page segmentation methods perform acceptably on clean printed documents, but they often fail on handwritten documents, when the document layout structure is loosely defined, or when side notes are present on the page. Moreover, tables and advertisements bring additional challenges for region segmentation algorithms. Our method addresses these problems. The method is divided into four parts:
1. Unlike most popular page segmentation methods, we first separate the text and graphics components of the page using a boosted decision tree classifier.
2. The separated text and graphics components are used, among other features, to separate columns of text within a two-dimensional conditional random field framework.
3. A text line detection method based on piecewise projection profiles is then applied to detect text lines with respect to text region boundaries.
4. Finally, a new paragraph detection method, trained on common paragraph models, is applied to the text lines to find paragraphs based on the geometric appearance of the text lines and their indentation.
Our contribution over existing work lies essentially in the use, or adaptation, of algorithms borrowed from the machine learning literature to solve difficult cases. Indeed, we demonstrate a number of improvements: in separating text columns that sit very close to one another; in preventing the contents of a table cell from being merged with those of adjacent cells; and in preventing regions inside a frame from being merged with surrounding text regions, especially side notes, even when the latter are written in a font similar to that of the text body. A quantitative assessment, and a comparison of our method's performance with competing algorithms using widely acknowledged metrics and evaluation methodologies, is also provided to a large extent. (*) This PhD thesis was funded by the Conseil Général de Seine-Saint-Denis, through the FUI6 project Demat-Factory, led by Safig SA.
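Step 3 of the pipeline above relies on projection profiles. As a hedged illustration of the basic (non-piecewise) idea, text lines within a region can be found as runs of non-empty rows in the horizontal projection of a binarized image; the thesis applies a piecewise variant per text region, which this sketch does not attempt:

```python
def detect_text_lines(binary):
    """Find text-line row spans from a horizontal projection profile.

    binary: 2D list of 0/1 pixels (1 = ink).  Returns (start, end) row
    index pairs, end exclusive, one per maximal run of non-empty rows."""
    profile = [sum(row) for row in binary]  # ink pixels per row
    lines, start = [], None
    for i, count in enumerate(profile):
        if count > 0 and start is None:
            start = i                        # a line begins
        elif count == 0 and start is not None:
            lines.append((start, i))         # a line ends
            start = None
    if start is not None:                    # line touching the bottom edge
        lines.append((start, len(profile)))
    return lines
```

Applying the profile per column or per region, rather than across the full page, is what makes the piecewise version robust to multi-column layouts.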
15

Document image segmentation: content categorization

Felhi, Mehdi 10 July 2014 (has links)
In this thesis I discuss the document image segmentation problem and describe our new approaches for detecting and classifying document contents. First, I discuss our skew angle estimation approach, which aims to estimate, with precision, the skew angle of text in document images. Our method is based on the Maximum Gradient Difference (MGD) method, the R-signature, and the Ridgelet transform. Our second contribution is a new hybrid page segmentation approach. I first describe our stroke-based descriptor, which detects text and line candidates using the skeleton of the binarized document image. An active contour model is then applied to segment the rest of the image into photo and background regions. Finally, text candidates are clustered by size using the mean-shift analysis technique. The method is applied to segmenting scanned document images (newspapers and magazines) that contain text, lines, and photo regions. The last part of the thesis is devoted to text detection in photos and posters, for which I describe our stroke-based text extraction method. Our approach begins by extracting connected components and selecting text character candidates in the CIE LCH color space, using Histogram of Oriented Gradients (HOG) correlation coefficients to detect low-contrast regions. The text region candidates are then clustered using two different approaches: a depth-first search over a graph, and a stable text line criterion. Finally, the resulting regions are refined by classifying the text line candidates into "text" and "non-text" regions using a kernel Support Vector Machine (K-SVM) classifier.
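The thesis estimates skew via MGD, the R-signature and the Ridgelet transform. As a much simpler stand-in for intuition only (not the thesis's method), skew can also be estimated by shearing the ink pixel coordinates by candidate angles and keeping the angle that makes the horizontal projection profile sharpest, i.e. of maximal variance:

```python
import math

def skew_angle(ink_points, height, candidates_deg):
    """Estimate the dominant text skew (in degrees) of a binarized page.

    ink_points: (x, y) coordinates of ink pixels.  For each candidate
    angle, rows are de-skewed by a horizontal shear; the correct angle
    concentrates each text line into few rows, maximizing the variance
    of the row-count profile."""
    best_deg, best_score = 0.0, -1.0
    for deg in candidates_deg:
        shear = math.tan(math.radians(deg))
        profile = [0] * height
        for x, y in ink_points:
            row = int(round(y - x * shear))
            if 0 <= row < height:
                profile[row] += 1
        mean = sum(profile) / height
        score = sum((p - mean) ** 2 for p in profile)
        if score > best_score:
            best_score, best_deg = score, deg
    return best_deg
```

The transform-based methods in the thesis achieve the same effect more robustly, since they do not require scanning a discrete list of candidate angles.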
16

Historical document image analysis: a structural approach based on texture

Mehri, Maroua 28 May 2015 (has links)
Over the last few years, there has been tremendous growth in digitizing collections of cultural heritage documents, raising many challenges and open issues, such as information retrieval in digital libraries and analyzing the page content of historical books. Recently, an important need has emerged for a computer-aided characterization and categorization tool able to index or group historical digitized book pages according to several criteria, mainly the layout structure and/or the typographic/graphical characteristics of the document image content. The work conducted in this thesis therefore presents an automatic approach for the characterization and categorization of historical book pages. The proposed approach is applicable to a large variety of ancient books and assumes no a priori knowledge of document image layout or content. It is based on texture and graph algorithms, which provide a rich, holistic description of the layout and content of the analyzed book pages. The categorization relies on characterizing the digitized page content with texture, shape, geometric, and topological descriptors, and this characterization is represented by a structural signature. More precisely, the signature-based characterization approach consists of two main stages: the first extracts homogeneous regions; the second builds a texture-based structural signature, in the form of a graph, from the extracted homogeneous regions, reflecting the layout of the analyzed page. By comparing the resulting graph-based signatures using a graph-matching paradigm (graph edit distance), similarities between the pages of a book in terms of layout and/or content can be deduced. Pages with similar layout and/or content can then be categorized and grouped, and a table of contents or summary of the analyzed book can be generated automatically. As a consequence, numerous signature-based applications (e.g. information retrieval in digital libraries according to several criteria, or page categorization) can be implemented to manage a corpus or collection of heritage books effectively. To illustrate the effectiveness of the proposed page signature, a detailed experimental evaluation was conducted in this work to assess two possible categorization applications: unsupervised page classification and page stream segmentation. In addition, the different steps of the proposed approach were evaluated on a large corpus of historical document images.
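The thesis compares graph-based signatures with a graph-matching paradigm. As a deliberately crude stand-in that ignores graph structure entirely (and uses an invented signature format), page signatures can be compared as distributions of region labels; this only conveys the flavor of signature comparison, not the actual method:

```python
from collections import defaultdict

def signature_distance(sig_a, sig_b):
    """Total-variation dissimilarity between two page signatures.

    A signature here is a list of (region_label, area_fraction) pairs,
    a simplified, hypothetical format.  Fractions per label are summed
    and compared: 0.0 means identical label distributions, 1.0 means
    no label mass in common (assuming fractions sum to 1 per page)."""
    mass_a, mass_b = defaultdict(float), defaultdict(float)
    for label, frac in sig_a:
        mass_a[label] += frac
    for label, frac in sig_b:
        mass_b[label] += frac
    labels = set(mass_a) | set(mass_b)
    return 0.5 * sum(abs(mass_a[l] - mass_b[l]) for l in labels)
```

A real graph edit distance additionally charges for mismatched adjacencies between regions, which is what lets it distinguish pages whose regions are similar but arranged differently.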
17

Simulação de forças físicas para segmentação e restauração de dígitos e sequências de dígitos em imagens de documentos manuscritos

LOPES FILHO, Alberto Nicodemus Gomes 26 February 2015 (has links)
Among the problems and challenges that surround the process of document digitization, and all the subsequent steps up to the conversion of the information to a digital medium, two specific problems are addressed: broken (degraded) text, and text written so closely together that strokes overlap. Methods to solve these problems were researched and developed. We base our approach on the emulation of the physical forces of inertia and centripetal force, since it is our understanding that emulating such forces suits the processing of images of handwritten characters and digits. For the problem of broken digits, we developed a solution for restoring isolated broken digits and chains of broken digits through the emulation of inertia and centripetal force. Its principle is to reconstruct the break in a way that closely resembles the writing style of the digit in question. We also tackle overlapping pairs of digits, for which we propose a segmentation solution based on the concept of a deformable ball whose movements are governed by the emulation of inertia and by the degree of deformation the ball is allowed to undergo. For the development and experimentation of these methods, image databases pertinent to each application were assembled. The results obtained show promising performance. Applying the reconstruction yielded a gain of approximately six percentage points in recognition rate compared to recognizing the broken digits directly. The segmentation, in turn, outperformed two other segmentation methods when recognition was applied to the segmented digits. The computational cost of the methods should also be highlighted, in particular for the overlapping-digit segmentation solution, whose cost is lower than that of the similar methods researched and tested. We thus show that the proposed methods reach their goals, coupling good performance with low computational cost.
18

Verification of Authenticity of Stamps in Documents

Micenková, Barbora January 2011 (has links)
Classical ink stamps, used to authorize documents, have become relatively easy to forge with the spread of modern technology, simply by scanning and reprinting them. This master's thesis develops an automatic tool for verifying the authenticity of stamps, useful especially in environments where large volumes of documents must be processed. Authenticity verification must naturally be preceded by detecting the stamp in the document, an image processing task that so far lacks a convincing solution. This thesis proposes an entirely new method for detecting stamps and verifying their authenticity in color document images. The method comprises a full page segmentation to determine candidate regions, feature extraction, and classification of the candidates using a Support Vector Machine (SVM). Evaluation showed that the algorithm can distinguish stamps from other colored objects in a document, such as logos and colored lettering. Moreover, the algorithm can distinguish genuine stamps from copies.
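Stamp detection benefits from the fact that stamp ink is usually chromatic (blue or red) while body text is near-black. A hedged sketch of a first-pass candidate filter, applied before any segmentation, feature extraction, or SVM classification (the thresholds and function name are illustrative, not from the thesis):

```python
def chromatic_mask(image, spread=40, min_lum=30):
    """Flag pixels that are plausibly colored stamp ink.

    image: 2D list of (r, g, b) tuples, 0-255.  A pixel passes if its
    channel spread (max - min) is large (i.e. it is chromatic, unlike
    gray/black text) and it is not near-black.  Returns a 0/1 mask."""
    mask = []
    for row in image:
        out = []
        for r, g, b in row:
            chroma = max(r, g, b) - min(r, g, b)
            out.append(1 if chroma > spread and max(r, g, b) > min_lum else 0)
        mask.append(out)
    return mask
```

Connected components of this mask would then serve as candidate regions; logos and colored headings also pass this filter, which is why the subsequent feature-based SVM classification is needed.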
19

Camera-Captured Document Image Analysis

Kasar, Thotreingam 11 1900 (has links) (PDF)
Text is no longer confined to scanned pages and often appears in camera-based images originating from text on real-world objects. Unlike images from conventional flatbed scanners, which have a controlled acquisition environment, camera-based images pose new challenges such as uneven illumination, blur, poor resolution, perspective distortion and 3D deformations that can severely affect the performance of any optical character recognition (OCR) system. Because of the variations in imaging conditions as well as in the target document type, traditional OCR systems, designed for scanned images, cannot be applied directly to camera-captured images, and a new level of processing needs to be introduced. In this thesis, we study some of the issues commonly encountered in camera-based image analysis and propose novel methods to overcome them. All the methods make use of color connected components.

1. Connected component descriptor for document image mosaicing. Document image analysis often requires mosaicing when it is not possible to capture a large document at a reasonable resolution in a single exposure. Such a document is captured in parts, and mosaicing stitches them into a single image. Since connected components (CCs) in a document image can easily be extracted regardless of image rotation, scale and perspective distortion, we design a robust feature named the connected component descriptor that is tailored for mosaicing camera-captured document images. The method involves extracting a circular measurement region around each CC and describing it using the angular radial transform (ART). To ensure geometric consistency during feature matching, the ART coefficients of a CC are augmented with those of its 2 nearest neighbors. Our method addresses two critical issues often encountered in correspondence matching: (i) the stability of features and (ii) robustness against false matches due to multiple instances of many characters in a document image.
We illustrate the effectiveness of the proposed method on camera-captured document images exhibiting large variations in viewpoint, illumination and scale.

2. Font and background color independent text binarization. The first step in an OCR system, after document acquisition, is binarization, which converts a gray-scale/color image into a two-level image: the foreground text and the background. We propose two methods for binarizing color documents whereby the foreground text is output as black and the background as white, regardless of the polarity of the foreground and background shades. (a) Hierarchical CC analysis: the method employs an edge-based connected component approach and automatically determines a threshold for each component. It overcomes several limitations of existing locally-adaptive thresholding techniques. Firstly, it can handle documents with multi-colored text over different background shades. Secondly, it is applicable to documents having text of widely varying sizes, usually not handled by local binarization methods. Thirdly, it automatically computes the binarization threshold and the logic for inverting the output from the image data, and does not require any input parameter. However, the method is sensitive to complex backgrounds since it relies on edge information to identify CCs. It also uses script-specific characteristics to filter out edge components before binarization and currently works well for the Roman script only. (b) Contour-based color clustering (COCOCLUST): to overcome the above limitations, we introduce a novel unsupervised color clustering approach that operates on a ‘small’ representative set of color pixels identified using contour information. Based on the assumption that every character is of a uniform color, we analyze each color layer individually and identify potential text regions for binarization.
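A minimal sketch of the per-component thresholding idea in (a), assuming the components are already available as a mask (the thesis derives them from edge information and also infers output polarity automatically; here the threshold is simply each component's mean gray level, an illustrative stand-in):

```python
import numpy as np
from scipy import ndimage

def per_component_binarize(gray, fg_mask):
    """Binarize each candidate component with its own threshold.

    gray: 2D float array (0 = black, 1 = white).
    fg_mask: boolean mask of candidate component pixels.
    Each connected component of the mask gets a local threshold
    (here, its mean gray level); darker-than-threshold pixels are
    taken as text strokes. Returns a boolean image, text = True.
    """
    labels, n = ndimage.label(fg_mask)
    out = np.zeros_like(gray, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        t = gray[comp].mean()           # component-local threshold
        out[comp] = gray[comp] < t
    return out
```

Because each component carries its own threshold, text of different colors and sizes over different background shades is handled uniformly, which is the property the paragraph above emphasizes.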
Experiments on several complex images having large variations in font, size, color, orientation and script illustrate the robustness of the method.

3. Multi-script and multi-oriented text extraction from scene images. Scene text understanding normally involves a pre-processing step of text detection and extraction before subjecting the acquired image to the character recognition task. The subsequent recognition is performed only on the detected text regions, so as to mitigate the effect of background complexity. We propose a color-based CC labeling for robust text segmentation from natural scene images. Text CCs are identified using a combination of support vector machine and neural network classifiers trained on a set of low-level features derived from boundary, stroke and gradient information. We develop a semiautomatic annotation toolkit to generate pixel-accurate ground truth for 100 scenic images containing text in various layout styles and multiple scripts. The overall precision, recall and f-measure obtained on our dataset are 0.8, 0.86 and 0.83, respectively. The proposed method is also compared with others in the literature using the ICDAR 2003 robust reading competition dataset, which, however, contains only horizontal English text. The overall precision, recall and f-measure obtained are 0.63, 0.59 and 0.61, respectively, which is comparable to the best-performing methods in the ICDAR 2005 text locating competition. A recent method proposed by Epshtein et al. [1] achieves better results but cannot handle arbitrarily oriented text; our method works well for generic scene images having arbitrary text orientations.

4. Alignment of curved text lines. Conventional OCR systems perform poorly on document images that contain multi-oriented text lines. We propose a technique that first identifies individual text lines by grouping adjacent CCs based on their proximity and regularity.
For each identified text string, a B-spline curve is fitted to the centroids of the constituent characters and normal vectors are computed along the fitted curve. Each character is then individually rotated so that the corresponding normal vector is aligned with the vertical axis. The method has been tested on a dataset of 50 images with text laid out in various ways, namely along arcs, waves, triangles, and combinations of these with linearly skewed text lines. It yields 95.9% recognition accuracy on text strings on which, before alignment, state-of-the-art OCRs fail to recognize any text. The CC-based pre-processing algorithms developed are well suited to processing camera-captured images. We demonstrate the feasibility of the algorithms on the publicly available ICDAR 2003 robust reading competition dataset and on our own database comprising camera-captured document images that contain multiple scripts and arbitrary text layouts.
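The fitting-and-rotation step lends itself to a short sketch. Assuming `scipy`'s parametric spline routines as a stand-in for the B-spline fitting described above, the per-character rotation angle can be derived from the tangent at each centroid (aligning the tangent with the horizontal axis is equivalent to aligning the normal with the vertical axis):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def character_rotations(centroids):
    """Rotation (radians) needed to upright each character.

    A parametric B-spline is fitted through the character centroids
    (s=0 interpolates them exactly), the first derivative gives the
    tangent at each centroid's parameter value, and rotating by the
    negative tangent angle makes the local baseline horizontal.
    centroids: (N, 2) array of (x, y) centers, N >= 5.
    """
    tck, u = splprep(centroids.T, s=0)
    dx, dy = splev(u, tck, der=1)       # tangent along the fitted curve
    return -np.arctan2(dy, dx)
```

For a straight horizontal string the rotations come out as zero, and for a linearly skewed string they equal the negative skew angle, matching the behavior expected of the alignment step.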
20

Information spotting in huge repositories of scanned document images / Localisation d'information dans des très grands corpus de documents numérisés

Dang, Quoc Bao 06 April 2018 (has links)
This work aims to develop a generic framework able to produce camera-based (webcam, smartphone) applications for information spotting in very large repositories of heterogeneous digitized document images, via local descriptors. In this thesis, we thus first propose a set of descriptors applicable to content with generic characteristics (composed of text and images), dedicated to document image retrieval and spotting systems. Our proposed descriptors comprise SRIF, PSRIF, DELTRIF and SSKSRIF, which are built from the spatial organization of the nearest interest points around a pivot keypoint. All these points are extracted from the centroids of the image's connected components. From these interest points, geometric features that are invariant to degradations are considered in order to build our descriptors. SRIF and PSRIF are computed from a local set of the m interest points nearest to a pivot interest point. For the DELTRIF and SSKSRIF descriptors, this spatial organization is computed via a Delaunay triangulation formed from a set of interest points extracted from the images. This second version of the descriptors yields a parameter-free local shape description. In addition, we extended our work to make it compatible with the classical descriptors of the literature, which rely on dedicated interest points, so that they can handle retrieval and spotting in document images with heterogeneous content. The second contribution of this thesis concerns a system for indexing very large volumes of data with a bulky descriptor; these two constraints weigh heavily on the memory of the indexing system.
Moreover, the very high dimensionality of the descriptors can lead to a loss of indexing precision, a loss linked to the curse of dimensionality. We therefore propose three robust indexing techniques, all of which can be employed without needing to store the local descriptors in the system's memory. This ultimately saves memory and speeds up information retrieval, while dispensing with distance-based validation. To this end, we proposed three methods relying on decision trees: a "randomized clustering tree indexing" that inherits the properties of kd-trees, "kmean-trees" and "random forests", in order to randomly select the K dimensions that combine the largest explained variance at each node of the tree. We also proposed an extended hashing function for indexing heterogeneous content coming from several layers of the image. As the third contribution of this thesis, we proposed a simple and robust method for computing the orientation of the regions obtained by the MSER detector, so that it can be combined with dedicated descriptors. Since most of these descriptors aim to capture neighborhood information around a given region, we proposed a way to extend the MSER regions by increasing the radius of each region. This strategy can also be applied to other detected regions in order to make the descriptors more distinctive. Finally, in order to evaluate the performance of our contributions, and given the absence of any publicly available dataset for spotting heterogeneous information in camera-captured images, we built three datasets which are available to the scientific community.
/ This work aims at developing a generic framework able to produce camera-based applications for information spotting in huge repositories of heterogeneous-content document images via local descriptors. The targeted systems take a portion of an image acquired with a camera as a query, and return the focused portion of the database image that best matches the query. We first propose a set of generic feature descriptors for camera-based document image retrieval and spotting systems. Our proposed descriptors comprise SRIF, PSRIF, DELTRIF and SSKSRIF, which are built from the spatial organization of the nearest keypoints around a pivot keypoint; all keypoints are extracted from the centroids of connected components. From these keypoints, geometrical features that are invariant to degradations are taken into account in the descriptor. SRIF and PSRIF are computed from a local set of the m nearest keypoints around a keypoint, while DELTRIF and SSKSRIF obtain a parameter-free local shape description via a Delaunay triangulation formed from the set of keypoints extracted from a document image. Furthermore, we propose a framework to compute the descriptors on top of dedicated keypoints (e.g. SURF, SIFT or ORB) so that they can deal with heterogeneous-content camera-based document image retrieval and spotting. In practice, the enormous number of descriptors in a large-scale indexing system places a heavy burden on memory when they are stored; in addition, the high dimensionality of the descriptors can reduce indexing accuracy. We propose three robust indexing frameworks that can be employed without storing local descriptors in memory, saving memory and speeding up retrieval by discarding distance validation. The randomized clustering tree indexing inherits from kd-trees, kmean-trees and random forests the way K dimensions are selected at random and combined with the highest-variance dimension at each node of the tree.
We also proposed a weighted Euclidean distance between two data points, oriented along the highest-variance dimension. The second proposed framework relies on hashing: an indexing system that employs a single simple hash table for indexing and retrieval, without storing database descriptors. Besides, we propose an extended hashing-based method for indexing multiple kinds of features coming from multiple layers of the image; this system applies not only to a uniform feature type but also to multiple feature types from separate layers. Along with the proposed descriptors and indexing frameworks, we proposed a simple, robust way to compute the shape orientation of MSER regions so that they can be combined with dedicated descriptors (e.g. SIFT, SURF, ORB) in a rotation-invariant way. Since such descriptors aim to capture neighborhood information around MSER regions, we propose a way to extend the MSER regions by increasing the radius of each region. This strategy can also be applied to other detected regions in order to make the descriptors more distinctive. Finally, in order to assess the performance of our contributions, and given that no public dataset exists for camera-based document image retrieval and spotting systems, we built a new dataset which has been made freely and publicly available to the scientific community. This dataset contains portions of document images acquired via a camera as queries. It is composed of three kinds of information: textual content, graphical content and heterogeneous content.
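The parameter-free neighborhood idea behind DELTRIF/SSKSRIF can be illustrated with a Delaunay triangulation over keypoints: instead of picking the m nearest keypoints (a tunable parameter), each keypoint's neighborhood is the set of points it shares a Delaunay edge with. This sketch only extracts the neighborhoods; the descriptor computation itself is not reproduced:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    """Parameter-free local neighborhoods via Delaunay triangulation.

    points: (N, 2) array of keypoint coordinates (e.g. CC centroids).
    Each triangle of the triangulation contributes its three edges;
    a point's neighborhood is everything it shares an edge with.
    Returns {point_index: sorted list of neighbor indices}.
    """
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:       # each triangle contributes 3 edges
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[int(a)].add(int(b))
    return {i: sorted(s) for i, s in neighbors.items()}
```

Because the triangulation adapts to the local density of keypoints, no neighborhood-size parameter is needed, which is the property claimed for the second version of the descriptors.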
