161

Camera-Captured Document Image Analysis

Kasar, Thotreingam 11 1900 (has links) (PDF)
Text is no longer confined to scanned pages; it often appears in camera-captured images of text on real-world objects. Unlike images from conventional flatbed scanners, which are acquired under controlled conditions, camera-captured images pose new challenges such as uneven illumination, blur, poor resolution, perspective distortion and 3D deformations that can severely degrade the performance of any optical character recognition (OCR) system. Because of these variations in imaging conditions and target document types, traditional OCR systems designed for scanned images cannot be applied directly to camera-captured images, and additional levels of processing are required. In this thesis, we study some of the issues commonly encountered in camera-based image analysis and propose novel methods to overcome them. All the methods make use of color connected components.

1. Connected component descriptor for document image mosaicing. Document image analysis often requires mosaicing when a large document cannot be captured at a reasonable resolution in a single exposure. Such a document is captured in parts, and mosaicing stitches the parts into a single image. Since connected components (CCs) in a document image can be extracted easily regardless of image rotation, scale and perspective distortion, we design a robust feature, named the connected component descriptor, that is tailored for mosaicing camera-captured document images. The method extracts a circular measurement region around each CC and describes it using the angular radial transform (ART). To ensure geometric consistency during feature matching, the ART coefficients of a CC are augmented with those of its two nearest neighbors. Our method addresses two critical issues often encountered in correspondence matching: (i) the stability of the features and (ii) robustness against false matches caused by multiple instances of many characters in a document image.
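Connected-component extraction is the primitive on which the descriptor above is built. As a minimal illustrative sketch (pure Python, 4-connectivity, not the thesis implementation):

```python
from collections import deque

def connected_components(img):
    """Return a list of components, each a list of (row, col) foreground pixels.

    img is a binarized image given as a list of rows of 0/1 values;
    4-connectivity is assumed for simplicity.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for r in range(h):
        for c in range(w):
            if img[r][c] and not seen[r][c]:
                # Breadth-first flood fill from an unvisited foreground pixel.
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps
```

Because a component is just a pixel set, it survives rotation, scaling and perspective distortion of the page, which is exactly the property the descriptor exploits.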
We illustrate the effectiveness of the proposed method on camera-captured document images exhibiting large variations in viewpoint, illumination and scale.

2. Font and background color independent text binarization. The first step in an OCR system, after document acquisition, is binarization, which converts a gray-scale or color image into a two-level image: the foreground text and the background. We propose two methods for binarizing color documents whereby the foreground text is output as black and the background as white, regardless of the polarity of the foreground and background shades. (a) Hierarchical CC analysis: this method employs an edge-based connected component approach and automatically determines a threshold for each component. It overcomes several limitations of existing locally-adaptive thresholding techniques. Firstly, it can handle documents containing multi-colored text over different background shades. Secondly, it is applicable to documents with text of widely varying sizes, which local binarization methods usually cannot handle. Thirdly, it computes both the binarization threshold and the logic for inverting the output automatically from the image data, and requires no input parameters. However, the method is sensitive to complex backgrounds, since it relies on edge information to identify CCs. It also uses script-specific characteristics to filter out edge components before binarization, and currently works well for the Roman script only. (b) Contour-based color clustering (COCOCLUST): to overcome the above limitations, we introduce a novel unsupervised color clustering approach that operates on a 'small' representative set of color pixels identified using contour information. Based on the assumption that every character has a uniform color, we analyze each color layer individually and identify potential text regions for binarization.
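The per-component automatic thresholding in (a) can be illustrated with Otsu's criterion applied to the gray values inside a component's measurement region. This is an illustrative stand-in only: the thesis derives its thresholds from edge-based CC analysis, not from this global histogram method.

```python
def otsu_threshold(values):
    """Return the gray level (0-255) maximizing between-class variance.

    values is an iterable of integer gray levels from one component's region.
    """
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]                 # pixels at or below candidate threshold
        if w0 == 0:
            continue
        w1 = total - w0               # pixels above candidate threshold
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Computing such a threshold independently per component, and deciding per component whether to invert, is what makes the output black-on-white regardless of the original foreground/background polarity.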
Experiments on several complex images with large variations in font, size, color, orientation and script illustrate the robustness of the method.

3. Multi-script and multi-oriented text extraction from scene images. Scene text understanding normally involves a pre-processing step of text detection and extraction before the acquired image is subjected to character recognition. The subsequent recognition task is performed only on the detected text regions, so as to mitigate the effect of background complexity. We propose a color-based CC labeling for robust text segmentation from natural scene images. Text CCs are identified using a combination of support vector machine and neural network classifiers trained on a set of low-level features derived from boundary, stroke and gradient information. We develop a semi-automatic annotation toolkit to generate pixel-accurate ground truth for 100 scene images containing text in various layout styles and multiple scripts. The overall precision, recall and f-measure obtained on our dataset are 0.8, 0.86 and 0.83, respectively. The proposed method is also compared with others in the literature on the ICDAR 2003 robust reading competition dataset, which, however, contains only horizontal English text. The overall precision, recall and f-measure obtained there are 0.63, 0.59 and 0.61, respectively, which is comparable to the best-performing methods in the ICDAR 2005 text locating competition. A recent method proposed by Epshtein et al. [1] achieves better results, but it cannot handle arbitrarily oriented text; our method works well for generic scene images with arbitrary text orientations.

4. Alignment of curved text lines. Conventional OCR systems perform poorly on document images that contain multi-oriented text lines. We propose a technique that first identifies individual text lines by grouping adjacent CCs based on their proximity and regularity.
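The reported f-measures follow from the standard harmonic mean of precision and recall:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (the F1 score)."""
    return 2 * precision * recall / (precision + recall)

# The figures quoted above are consistent with this definition:
# f_measure(0.80, 0.86) rounds to 0.83 and f_measure(0.63, 0.59) to 0.61.
```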
For each identified text string, a B-spline curve is fitted to the centroids of the constituent characters, and normal vectors are computed along the fitted curve. Each character is then individually rotated so that its corresponding normal vector aligns with the vertical axis. The method has been tested on a dataset of 50 images with text laid out in various ways, namely along arcs, waves, triangles, and combinations of these with linearly skewed text lines. It yields 95.9% recognition accuracy on text strings on which, before alignment, state-of-the-art OCRs fail to recognize any text.

The CC-based pre-processing algorithms developed here are well suited to processing camera-captured images. We demonstrate their feasibility on the publicly available ICDAR 2003 robust reading competition dataset and on our own database of camera-captured document images containing multiple scripts and arbitrary text layouts.
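The per-character rotation in step 4 can be sketched as follows: estimate the tangent at each character centroid by finite differences along the text-line curve, then rotate the character by the angle that brings the local normal to the vertical. This is a simplified stand-in: the thesis fits a B-spline and takes normals along it, rather than differencing raw centroids.

```python
import math

def alignment_angles(centroids):
    """For each centroid on a text-line curve, return the rotation (radians)
    that aligns the local normal with the vertical axis.

    centroids is an ordered list of (x, y) character centroids.
    """
    angles = []
    n = len(centroids)
    for i in range(n):
        # Finite-difference tangent estimate along the curve
        # (one-sided at the endpoints).
        x0, y0 = centroids[max(i - 1, 0)]
        x1, y1 = centroids[min(i + 1, n - 1)]
        tangent_angle = math.atan2(y1 - y0, x1 - x0)
        # Rotating the character by -tangent_angle makes the local tangent
        # horizontal, and hence the normal vertical.
        angles.append(-tangent_angle)
    return angles
```

For a horizontal line of centroids the rotation is zero everywhere; for text climbing a 45-degree slope each character is rotated back by 45 degrees.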
162

Rozpoznávání ručně psaného písma pomocí neuronových sítí / Handwritten Character Recognition Using Artificial Neural Networks

Horký, Vladimír January 2012 (has links)
This work presents neural networks trained with the back-propagation algorithm. The theoretical background of the algorithm is explained, and the problems that arise when training neural networks are addressed. The work also discusses several techniques for image preprocessing and image feature extraction, one of the main stages of classification, and describes experiments with neural networks on selected image features.
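For a single sigmoid unit, the back-propagation update the abstract refers to reduces to the delta rule. A minimal, deterministic sketch (not the thesis code) that learns logical OR:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, lr=1.0, epochs=2000):
    """Train one sigmoid neuron by gradient descent (the one-unit case of
    back-propagation) with the cross-entropy loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            delta = p - y              # dL/dz for sigmoid + cross-entropy
            w[0] -= lr * delta * x1    # chain rule: dL/dw_i = delta * x_i
            w[1] -= lr * delta * x2
            b -= lr * delta            # bias gradient is just delta
    return w, b

# OR is linearly separable, so one neuron suffices:
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

In a multi-layer network the same delta is propagated backwards through each layer's weights, which is where the algorithm gets its name.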
163

Rozpoznávání textu pomocí konvolučních sítí / Optical Character Recognition Using Convolutional Networks

Csóka, Pavel January 2016 (has links)
This thesis aims at the creation of new datasets for text-recognition machine-learning tasks, and at experiments with convolutional neural networks on these datasets. It describes the architecture of convolutional networks, the difficulties of recognizing text in photographs, and contemporary work using these networks. It then describes the creation, using Tesseract OCR, of annotations for a dataset of document-page photos taken with mobile phones, named Mobile Page Photos. From this dataset two further datasets are created by cropping characters out of the photos, formatted like the Street View House Numbers dataset: Mobile Nice Page Photos Characters contains readable characters, while Mobile Page Photos Characters adds hardly readable and unreadable ones. Three convolutional network models are created and used for text-recognition experiments on these datasets, which are also used to estimate the annotation error.
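The character-cropping step that turns page photos into an SVHN-style dataset can be sketched as below. The box format and the canvas size are illustrative assumptions, not taken from the thesis.

```python
def crop_character(img, box, size=32):
    """Crop a character bounding box from a grayscale image (list of rows)
    and center it on a fixed-size square canvas, SVHN-style.

    box is (top, left, height, width) and is assumed to fit inside both
    the image and the canvas.
    """
    top, left, height, width = box
    patch = [row[left:left + width] for row in img[top:top + height]]
    # Center the patch on a zero-filled square canvas.
    canvas = [[0] * size for _ in range(size)]
    off_r = (size - height) // 2
    off_c = (size - width) // 2
    for r in range(height):
        for c in range(width):
            canvas[off_r + r][off_c + c] = patch[r][c]
    return canvas
```

Fixing the patch size this way lets every cropped character feed the same convolutional network input layer.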
164

Aplicaciones de los autómatas transductores finitos con pesos (WFST) en la corrección simbólica en interfaces persona-máquina

Navarro Cerdán, José Ramón 18 April 2016 (has links)
[EN] This thesis presents a real application: correcting strings produced by an OCR classifier in a form-digitizing task. These strings come from a classifier with a given error ratio, which implies that some characters in a string may have been misclassified, producing erroneous words. This raises the need for a post-process that improves the strings. The post-process takes into account all the evidence available at a given moment: the characters recognized by the classifier together with their posterior probabilities, the confusion matrix between symbols, and the accepted language model. Each source of evidence is modelled independently as a WFST, and the models are then combined, by means of the composition operation, into a single integrated automaton. From this automaton, the path that maximizes the probability is selected; it corresponds to the string of the language model that is nearest to the OCR hypothesis according to the confusion matrix. The final system offers two results: the corrected string and the transformation cost incurred during correction.

Additionally, a general error-estimation method is proposed that uses the input string's transformation cost to establish a rejection threshold in terms of that cost and a single end-user parameter: the acceptable final error. The thesis presents an adaptive rejection-threshold estimation method that tolerates a chosen percentage of error in a batch of strings from one language (a sample), and that has several advantages. On the one hand, it is independent of the distribution of post-processing transformation costs in the sample; on the other hand, it lets the user set the threshold in a familiar and convenient way, namely by specifying the desired error rate of the sample. To this end, first, for a given language, a model is defined that estimates the probability of error associated with accepting post-processed strings with a given transformation cost. Then, a procedure is presented that estimates the rejection threshold adaptively so as to achieve the predefined error rate on a test batch. In addition, an approach is proposed for obtaining the above model when no real, supervised OCR hypotheses are available at the learning stage. The chapter is accompanied by experiments whose results demonstrate the utility of the proposed method.

Next, in connection with increasing productivity when validating strings previously rejected by the system through the foregoing error-estimation method, a multimodal and interactive human-computer interaction method is presented. It composes the above information with the prefix typed by the user during validation, again using WFSTs and the automaton composition operation. Searching the composed automaton for the most likely string after each user interaction yields a clear increase in productivity, as fewer keystrokes are required to obtain the correct string. Finally, a fault-tolerant multimodal and interactive interface, also based on WFSTs, is presented; it composes several information sources with an error model for the confusions caused by the arrangement of keys on a keyboard. The application shown is the entry of a destination into a GPS device, considering information about destinations near a specific place, the prefix entered so far, and the errors that may arise from the key layout of the input device.
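The effect of composing the OCR, confusion-matrix and language-model transducers and taking the shortest path can be mimicked, on a toy scale, with plain dynamic programming: score each lexicon word by a weighted edit distance whose substitution costs play the role of negative-log confusion probabilities. This sketch is an illustration of the idea, not the thesis's WFST machinery.

```python
def correct(ocr, lexicon, sub_cost):
    """Return the lexicon word with minimal weighted edit distance to the
    OCR hypothesis. Substitution costs come from sub_cost (e.g. -log
    confusion probabilities); insertions and deletions have unit cost."""
    def dist(a, b):
        d = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            d[i][0] = float(i)
        for j in range(len(b) + 1):
            d[0][j] = float(j)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                sub = sub_cost.get((a[i - 1], b[j - 1]),
                                   0.0 if a[i - 1] == b[j - 1] else 1.0)
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + sub)  # substitution
        return d[len(a)][len(b)]
    return min(lexicon, key=lambda w: dist(ocr, w))

# An OCR that often confuses '0' with 'O' makes that substitution cheap:
costs = {('0', 'O'): 0.1}
```

Here the lexicon stands in for the language model, and the minimal-cost word corresponds to the maximum-probability path through the composed automaton; the minimal cost itself is the transformation cost the thesis uses for rejection.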
/ Navarro Cerdán, JR. 
(2016). Aplicaciones de los autómatas transductores finitos con pesos (WFST) en la corrección simbólica en interfaces persona-máquina [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62688
