1

Mathematical Expression Detection and Segmentation in Document Images

Bruce, Jacob Robert 19 March 2014
Various document layout analysis techniques are employed in order to enhance the accuracy of optical character recognition (OCR) in document images. Type-specific document layout analysis involves localizing and segmenting specific zones in an image so that they may be recognized by specialized OCR modules. Zones of interest include titles, headers/footers, paragraphs, images, mathematical expressions, chemical equations, musical notations, tables, and circuit diagrams, among others. False positive/negative detections, oversegmentations, and undersegmentations made during the detection and segmentation stage will confuse a specialized OCR system and may thus result in garbled, incoherent output. In this work a mathematical expression detection and segmentation (MEDS) module is implemented and then thoroughly evaluated. The module is fully integrated with the open-source OCR software Tesseract and is designed to function as a component of it. Evaluation is carried out on freely available public domain images so that future and existing techniques may be objectively compared. / Master of Science
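The abstract's error taxonomy (false positives/negatives, oversegmentations, undersegmentations) can be made concrete with a small matching routine. The following is an illustrative sketch only, not the thesis's implementation; the box format and overlap thresholds are assumptions:

```python
# Hypothetical sketch: classify MEDS detections against ground truth.
# Boxes are (left, top, right, bottom); thresholds are illustrative.

def overlap_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def evaluate(predicted, ground_truth, thresh=0.5):
    """Count correct, false negative/positive, over-/under-segmented boxes.
    Partial low-overlap matches are simply ignored in this sketch."""
    results = {"correct": 0, "false_pos": 0, "false_neg": 0,
               "overseg": 0, "underseg": 0}
    for gt in ground_truth:
        area = (gt[2] - gt[0]) * (gt[3] - gt[1])
        # Predictions covering a meaningful part of this ground-truth box.
        hits = [p for p in predicted if overlap_area(p, gt) / area > 0.1]
        if not hits:
            results["false_neg"] += 1          # expression missed entirely
        elif len(hits) > 1:
            results["overseg"] += 1            # one expression split apart
        elif overlap_area(hits[0], gt) / area >= thresh:
            results["correct"] += 1
    for p in predicted:
        area = (p[2] - p[0]) * (p[3] - p[1])
        hits = [g for g in ground_truth if overlap_area(p, g) / area > 0.1]
        if not hits:
            results["false_pos"] += 1          # detection with no expression
        elif len(hits) > 1:
            results["underseg"] += 1           # two expressions merged
    return results
```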
2

GENERATIVE LARGE-SCALE URBAN LAYOUT ANALYSIS AND SYNTHESIS

Liu He (20376051) 05 December 2024
<p dir="ltr">A building layout consists of a set of buildings in city blocks defined by a network of roads. Modeling and generating large-scale urban building layouts is of significant interest in computer vision, computer graphics, and urban applications. Researchers seek to obtain building features (e.g. building shapes, counts, and areas) at large scales. However, data quality and data equality challenge the generation and extraction of building features. Blurriness, occlusions, and noise from prevailing satellite images severely hinders performance of image segmentation, super-resolution, or deep-learning based translation networks. Moreover, large-scale urban layout generation struggles with complex and arbitrary shapes of building layouts, and context-sensitive nature of the city morphology, which prior approaches have not considered. Facing the challenges of data quality, generation robustness, and context-sensitivity of urban layout generation, In this thesis, we first address the data quality problem by combing globally-available satellite images and spatial geometric feature datasets, in order to create a generative modeling framework that enables obtaining significantly improved accuracy in per-building feature estimation as well as generation of visually plausible building footprints. Secondly, for generation robustness, We observe that building layouts are discrete structures, consisting of multiple rows of buildings of various shapes, and are amenable to skeletonization for mapping arbitrary city block shapes to a canonical form. In that, we propose a fully automatic approach to building layout generation using graph attention networks. The method generates realistic urban layouts given arbitrary road networks, and enables conditional generation based on learned priors. Nevertheless, we propose the approach addresses context-sensitivity by leveraging a canonical graph representation for the entire city, which facilitates scalability and captures the multi-layer semantics inherent in urban layouts. We introduce a novel graph-based masked autoencoder (GMAE) for city-scale urban layout generation. The method encodes attributed buildings, city blocks, communities and cities into a unified graph structure, enabling self-supervised masked training for graph autoencoder. Additionally, we employ scheduled iterative sampling for 2.5D layout generation, prioritizing the generation of important city blocks and buildings. Our method has proven its robustness by large-scale prototypical experiments covering heterogeneous scenarios from dense urban to sparse rural. It achieves good realism, semantic consistency, and correctness across the heterogeneous urban styles in 330 US cities. </p>
3

A User Centered Design and Prototype of a Mobile Reading Device for the Visually Impaired

Keefer, Robert B. 10 June 2011
No description available.
4

An algorithm to evaluate plant layout alternatives using the manufacturing process as a criterion

Imam, Altaf S. January 1995
No description available.
5

Analýza rozložení textu v historických dokumentech / Text Layout Analysis in Historical Documents

Palacková, Bianca January 2021
The goal of this thesis is to design and implement an algorithm for text layout analysis in historical documents. A neural network was used to solve this problem, specifically the Faster R-CNN architecture. A dataset of 6,135 historical newspaper images was used for training and testing. For the purposes of the thesis, four neural network models were trained: models for the detection of words, headings, and text regions, and a model for word detection based on position in a line. The outputs of these models were processed to determine the text layout of the input image. A modified F-score metric was used for the evaluation; based on this metric, the algorithm reached an accuracy of almost 80%.
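A minimal sketch of how such a Faster R-CNN detector might be configured with torchvision (the class list and setup details below are assumptions; the thesis's actual training configuration may differ):

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Classes assumed for illustration: background + word, heading, text region.
CLASSES = ["__background__", "word", "heading", "text_region"]

def build_layout_detector(num_classes=len(CLASSES)):
    # Start from a detector pre-trained on COCO, then swap the box head
    # so it predicts document-layout classes instead.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_layout_detector()
model.eval()
# At inference the model maps an image tensor to boxes/labels/scores:
# predictions = model([image_tensor])  # image_tensor: 3xHxW, values in [0, 1]
```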
6

Complex Document Parsing with Vision Language Models

Yifei Hu (9193709) 17 December 2024
<p dir="ltr">This thesis explores the application of vision language models (VLMs) on document layout analysis (DLA) and optical character recognition (OCR). For document layout analysis, we found that VLMs excel at detecting text areas by leveraging their understanding of textual content, rather than relying solely on visual features. This approach proves more robust than traditional object detection methods, particularly for text-rich images typical in document analysis tasks. In addressing OCR challenges, we identified a critical bottleneck: the lack of high-quality, document-level OCR datasets. To overcome this limitation, we developed a novel synthetic data generation pipeline. This pipeline utilizes Large Language Models to create OCR training data by rendering markdown source text into images. Our experiments show that VLMs trained on this synthetic data outperform models trained on conventional datasets. This research highlights the potential of VLMs in document understanding tasks and introduces an innovative approach to generating training data for OCR. Our findings suggest that leveraging the dual image-text understanding capabilities of VLMs, combined with strategically generated synthetic data, can significantly advance the state of the art in document layout analysis and OCR.</p>
7

Comparative study of table layout analysis : Layout analysis solutions study for Swedish historical hand-written document

Liang, Xusheng January 2019
Background. Information retrieval systems have become increasingly popular; they help people retrieve information more efficiently and accelerate daily tasks. In this context, image processing technology plays an important role by helping to transcribe the content of printed or handwritten documents into digital data for information retrieval systems, a procedure called document digitization. In this procedure, image processing techniques such as layout analysis and word recognition are employed to segment the document content and transcribe the image content into words. At this point, a Swedish company (ArkivDigital® AB) needs to transcribe its document data into digital data. Objectives. The aim of this study is to find an effective solution for extracting the document layout of Swedish handwritten historical documents, which are characterized by tabular forms containing handwritten content. The outcomes of applying OCRopus, OCRfeeder, traditional image processing techniques, and machine learning techniques to Swedish historical handwritten documents are compared and studied. Methods. Implementation and experimentation are used to develop three comparative solutions in this study: Hessian filtering with a mask operation; Gabor filtering with a morphological opening operation; and Gabor filtering with machine learning classification. In the last solution, different alternatives were explored to build the document layout extraction pipeline: first, the Hessian and Gabor filters are evaluated; second, images are filtered with the better of the two filters and the filtered image is refined with the Hough line transform; third, transfer learning features and custom features are extracted; fourth, a classifier is fed the extracted features and the results are analyzed. After implementing all the solutions, a sample set of the Swedish historical handwritten documents is processed with each solution and their performance is compared via a survey. Results. Both open-source OCR systems, OCRopus and OCRfeeder, fail to deliver usable output because they are designed to handle general document layouts rather than table layouts. The traditional image processing solutions work in more than half of the cases, but not well. Combining traditional image processing and machine learning techniques gives the best result, but at a great time cost. Conclusions. The results show that existing OCR systems cannot carry out the layout analysis task on our Swedish historical handwritten documents. Traditional image processing techniques are capable of extracting the general table layout in these documents. By introducing machine learning techniques, a better and more accurate table layout can be extracted, but at a greater time cost. / Scalable resource-efficient systems for big data analytics
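An illustrative OpenCV sketch of the classical Gabor filtering, morphological opening, and Hough line transform stages named above (all parameter values are assumptions chosen for demonstration, not those used in the study):

```python
import cv2
import numpy as np

# Load a scanned page as greyscale.
page = cv2.imread("table_page.png", cv2.IMREAD_GRAYSCALE)

# Gabor filtering tuned towards long horizontal strokes (table rules).
kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0,
                            lambd=10.0, gamma=0.5)
filtered = cv2.filter2D(page, cv2.CV_8U, kernel)

# Morphological opening removes small blobs (handwriting) and keeps
# the elongated structures that form the table grid.
opened = cv2.morphologyEx(filtered, cv2.MORPH_OPEN,
                          cv2.getStructuringElement(cv2.MORPH_RECT, (25, 1)))

# The Hough line transform refines surviving structures into segments.
edges = cv2.Canny(opened, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
print(0 if lines is None else len(lines), "candidate table rules found")
```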
8

Semantic Segmentation of Historical Document Images Using Recurrent Neural Networks

Ahrneteg, Jakob, Kulenovic, Dean January 2019
Background. This thesis focuses on the task of historical document semantic segmentation with recurrent neural networks. Document semantic segmentation involves segmenting a page into different meaningful regions and is an important prerequisite for automated document analysis and digitisation with optical character recognition. At the time of writing, convolutional neural network based solutions are the state of the art for analyzing document images, while the use of recurrent neural networks in document semantic segmentation has not yet been studied. Considering the nature of a recurrent neural network and the recent success of recurrent neural networks in document image binarization, it should be possible to employ a recurrent neural network for document semantic segmentation and achieve high performance. Objectives. The main objective of this thesis is to investigate whether recurrent neural networks are a viable alternative to convolutional neural networks in document semantic segmentation. A further objective is to determine whether combining a convolutional neural network with a recurrent neural network can improve upon using the recurrent neural network alone. Methods. To investigate the impact of recurrent neural networks in document semantic segmentation, three different recurrent neural network architectures are implemented and trained, and their performance is evaluated with Intersection over Union. Their segmentation results are then compared to a convolutional neural network. By performing pre-processing on training images and multi-class labeling, prediction images are ultimately produced by the employed models. Results. The gathered performance data shows a 2.7% performance difference between the best recurrent neural network model and the convolutional neural network. Notably, this recurrent neural network model has a more consistent performance than the convolutional neural network but comparable performance overall. The other recurrent neural network architectures show lower performance, which is connected to the complexity of these models. Furthermore, a model combining a convolutional neural network and a recurrent neural network performs significantly better, with a 4.9% performance increase compared to using the recurrent neural network alone. Conclusions. This thesis concludes that recurrent neural networks are likely a viable alternative to convolutional neural networks in document semantic segmentation, but that further investigation is required. Furthermore, combining a convolutional neural network with a recurrent neural network significantly increases the performance of a recurrent neural network model.
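For reference, a minimal NumPy sketch of the Intersection over Union measure used in the evaluation, computed per class over label maps (the array conventions are assumptions):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes.
    pred, target: integer label maps of identical shape (H, W)."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy example: a 2x2 page with two region classes.
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # ~0.583
```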
9

Layout Analysis on modern Newspapers using the Object Detection model Faster R-CNN

Funkquist, Mikaela January 2022
As society becomes more and more digitized, the amount of digital data is increasing rapidly. Newspapers are one example: many libraries around the world store them as digital images. This creates a great opportunity for research on newspapers, and a particular research area is document layout analysis, in which a document is divided into different segments that are then classified. In this thesis, modern newspaper pages provided by KBLab were used to investigate how well a deep learning model developed for general object detection performs in this area. In particular, the Faster R-CNN object detection model was trained on manually annotated newspaper pages from two different Swedish publishers, Dagens Nyheter and Aftonbladet. All newspaper pages were taken from editions published between 2010 and 2020, meaning only modern newspapers were considered. The methodology involved sampling editions from the given publishers and time periods and then manually annotating them by marking out the desired layout elements with bounding boxes. The classes considered were: headlines, subheadlines, decks, charts/infographics, photographs, pull quotes, cartoons, fact boxes, bylines/credits, captions, tableaus and tables. Given the annotated data, a Faster R-CNN with a ResNet-50-FPN backbone was trained on both the Dagens Nyheter and Aftonbladet training sets and then evaluated on different test sets. A mAP@0.5:0.95 of 0.6 was achieved over all classes, while class-wise evaluation indicates precisions around 0.8 for some classes such as tableaus, decks and photographs.
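A hedged illustration of what one manually annotated page might look like in COCO-style form (every concrete value below is invented; the thesis does not state its annotation format):

```python
# One annotated newspaper page, expressed as a COCO-style Python dict.
# All IDs, file names, and coordinates are invented for illustration.
annotation_example = {
    "images": [
        {"id": 1, "file_name": "dn_2015_03_12_p04.png",
         "width": 2480, "height": 3508}
    ],
    "categories": [
        {"id": 1, "name": "headline"},
        {"id": 2, "name": "deck"},
        {"id": 3, "name": "photograph"},
        {"id": 4, "name": "caption"},
        # ... remaining classes: subheadline, chart/infographic,
        # pull quote, cartoon, fact box, byline/credit, tableau, table
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels, per COCO convention.
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [120, 180, 2200, 240]},
        {"id": 2, "image_id": 1, "category_id": 3, "bbox": [140, 520, 1400, 980]},
    ],
}
```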
10

Advances in Document Layout Analysis

Bosch Campos, Vicente 05 March 2020
Handwritten Text Segmentation (HTS) is a task within the Document Layout Analysis field that aims to detect and extract the different page regions of interest found in handwritten documents. HTS remains an active topic that has gained importance over the years, due to the increasing demand to provide textual access to the myriad handwritten document collections held by archives and libraries. This thesis considers HTS as a task that must be tackled in two specialized phases: detection and extraction. We see the detection phase fundamentally as a recognition problem that yields the vertical positions of each region of interest as a by-product. The extraction phase consists in calculating the best contour coordinates of the region using the position information provided by the detection phase. Our proposed detection approach allows us to attack both higher-level regions (paragraphs, diagrams, etc.) and lower-level regions such as text lines. In the case of text line detection we model the problem to ensure that the system's yielded vertical position approximates the fictitious line that connects the lower part of the grapheme bodies in a text line, commonly known as the baseline. One of the main contributions of this thesis is that the proposed modelling approach allows us to include prior information regarding the layout of the documents being processed. This is performed via a Vertical Layout Model (VLM). We develop a Hidden Markov Model (HMM) based framework to tackle both region detection and classification as an integrated task and study the performance and ease of use of the proposed approach on many corpora. We review the modelling simplicity of our approach to process regions at different levels of information: text lines, paragraphs, titles, etc. We study the impact of adding deterministic and/or probabilistic prior information and restrictions via the VLM that our approach provides. Having a separate phase that accurately yields the detection position (baselines in the case of text lines) of each region greatly simplifies the problem that must be tackled during the extraction phase. In this thesis we propose to use a distance map that takes into consideration the grey-scale information in the image. This allows us to yield extraction frontiers which are equidistant to the adjacent text regions. We study how the accuracy of our approach scales proportionally with the quality of the provided detection positions. Our extraction approach gives near-perfect results when human-reviewed baselines are provided. / Bosch Campos, V. (2020). Advances in Document Layout Analysis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138397
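A loose sketch of the distance-map idea behind the extraction phase, seeding Euclidean distance transforms from two detected baselines (the thesis additionally weights the map with grey-scale information; everything here is assumed for illustration):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def split_between_baselines(shape, baseline_a, baseline_b):
    """Assign each pixel to the nearer of two detected baselines,
    yielding an extraction frontier equidistant to both regions.
    The thesis's map also incorporates grey-scale ink information;
    this sketch uses the plain Euclidean transform for brevity."""
    # distance_transform_edt measures distance to the nearest zero,
    # so zero out the pixels on each baseline row to seed the maps.
    seed_a = np.ones(shape); seed_a[baseline_a, :] = 0
    seed_b = np.ones(shape); seed_b[baseline_b, :] = 0
    dist_a = distance_transform_edt(seed_a)
    dist_b = distance_transform_edt(seed_b)
    return dist_a <= dist_b   # True where a pixel belongs to region A

# Toy 20x30 page with baselines detected at rows 4 and 15:
mask_a = split_between_baselines((20, 30), baseline_a=4, baseline_b=15)
print(mask_a[:, 0])  # rows 0..9 fall to region A, the rest to region B
```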
