About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Accurately extracting information from a finite set of different report categories and formats / Precis extraktion av information från ett begränsat antal rapporter med olika struktur och format på datan

Holmbäck, Jonatan January 2023
POC Sports (hereafter simply POC) is a company that manufactures gear and accessories for winter sports as well as cycling. Their mission is to “Protect lives and reduce the consequences of accidents for athletes and anyone inspired to be one”. To do so, a lot of care needs to be put into making their equipment as protective as possible while still maintaining the desired functionality. To aid in this, their vendor companies run standardized tests to evaluate their products, and the results of these tests are compiled into reports for POC. The problem is that the different companies use different styles and formats to convey this information, which can be classified into distinct categories. This project therefore aimed to provide a tool that POC can use to identify a report’s category and then accurately extract the relevant data from it. An accuracy score was used as the metric for evaluating how reliably the tool extracts the relevant data, and the development and evaluation of the tool were performed in two evaluation rounds. Additional metrics were used to evaluate a number of existing tools: whether they were open source, how easy they are to set up, their pricing, and how much of the task they could cover. A proof-of-concept tool was realized and demonstrated an accuracy of 97%, which was considered adequate compared to the minimum required accuracy of 95%. However, due to the available time and resources, the sample size was limited, so this accuracy may not generalize to the entire population of reports with a confidence level higher than 75%. The results of evaluating the tool’s iterative improvements suggest that, by addressing issues as they are found, an acceptable score can be achieved for a large fraction of the general population of reports. Additionally, it would be beneficial to keep a catalog of the recurring solutions that have been devised for different problems, so they can be reused for similar problems, allowing for better extensibility and generalizability. To build on the work performed in this thesis, the next steps might be to look into similar problems for other formats and to examine how different PDF generators may affect the ability to extract and process data present in PDF reports.
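The abstract's caveat about sample size and confidence can be made concrete with a one-sided binomial test. The Python sketch below is illustrative only: the counts n and k are hypothetical stand-ins, since the abstract does not report the actual sample size.

```python
# Illustrative check: with how much confidence can an observed accuracy
# support the claim "true accuracy >= 95%"? A small sample caps the
# achievable confidence, as the abstract notes.
# NOTE: n and k are hypothetical; the abstract does not give them.
from scipy.stats import binomtest

n = 33   # hypothetical number of extracted values evaluated
k = 32   # hypothetical number extracted correctly (~97% observed)

# One-sided test of H0: true accuracy <= 0.95 vs H1: accuracy > 0.95.
result = binomtest(k, n, p=0.95, alternative="greater")
confidence = 1 - result.pvalue
print(f"Observed accuracy: {k / n:.1%}")
print(f"Confidence that true accuracy exceeds 95%: {confidence:.1%}")
```

With a sample this small, even a 97% observed accuracy supports the 95% threshold only at a modest confidence level, which mirrors the limitation the author describes.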
12

Computer Vision for Document Image Analysis and Text Extraction / Datorseende för analys av dokumentbilder och textutvinning

Benchekroun, Omar January 2022
Automatic document processing has been a subject of interest in industry for the past few years, especially with the recent technological advances in Machine Learning and Computer Vision. This project investigates in depth a major component of document image processing known as Optical Character Recognition (OCR). First, an improvement upon an existing shallow CNN+LSTM model is proposed, using domain-specific data synthesis. We demonstrate that this model can achieve an accuracy of up to 97% on non-handwritten text, with an accuracy improvement of 24% when using synthetic data. Furthermore, we deal with handwritten text, which presents additional challenges including variance in writing style, slanting, and character ambiguity. A CNN+Transformer architecture is validated to recognize handwriting extracted from real-world insurance statements. This model achieves a maximal accuracy of 92% on real-world data. Moreover, we demonstrate how a data pipeline relying on synthetic data can be a scalable and affordable solution for modern OCR needs.
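As a rough illustration of the kind of shallow CNN+LSTM line recognizer the abstract refers to, a minimal CTC-trained model might look like the sketch below. All layer sizes, the alphabet size, and the image dimensions are assumptions; the thesis's actual architecture is not specified here.

```python
# Minimal sketch of a shallow CNN+LSTM OCR model trained with CTC loss.
# Every hyperparameter below is an assumption, not the thesis's setup.
import torch
import torch.nn as nn

class ShallowCRNN(nn.Module):
    def __init__(self, num_classes: int, img_height: int = 32):
        super().__init__()
        self.cnn = nn.Sequential(                       # input: (B, 1, 32, W)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2, 2),                         # (B, 32, 16, W/2)
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2, 2),                         # (B, 64, 8, W/4)
        )
        feat_dim = 64 * (img_height // 4)
        self.lstm = nn.LSTM(feat_dim, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, num_classes)       # num_classes includes CTC blank (index 0)

    def forward(self, x):
        f = self.cnn(x)                                  # (B, C, H', W')
        f = f.permute(0, 3, 1, 2).flatten(2)             # (B, W', C*H'): one step per column
        out, _ = self.lstm(f)
        return self.fc(out)                              # (B, W', num_classes)

# Usage sketch with CTC loss (dummy batch and targets):
model = ShallowCRNN(num_classes=80)
images = torch.randn(4, 1, 32, 128)                      # batch of text-line images
logits = model(images).log_softmax(2).permute(1, 0, 2)   # (T, B, C) as CTCLoss expects
targets = torch.randint(1, 80, (4, 10))
input_lengths = torch.full((4,), logits.size(0), dtype=torch.long)
target_lengths = torch.full((4,), 10, dtype=torch.long)
loss = nn.CTCLoss(blank=0)(logits, targets, input_lengths, target_lengths)
```

Synthetic training data, as the abstract describes, would simply replace the random tensors above with rendered text-line images and their transcriptions.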
13

Analysis Of Multi-lingual Documents With Complex Layout And Content

Pati, Peeta Basa 11 1900
A document image, besides text, may contain pictures, graphs, signatures, logos, barcodes, hand-drawn sketches and/or seals. Further, the text blocks in an image may be in a Manhattan or any complex layout. Document layout analysis is an important preprocessing step before subjecting any such image to OCR: the image with complex layout and content is segmented into its constituent components. For many present-day applications, separating the text from the non-text blocks is sufficient, as this enables the conversion of the text elements present in the image to their corresponding editable form. In this work, an effort has been made to separate the text areas from the various kinds of possible non-text elements. The document images may have been obtained from a scanner or a camera. If the source is a scanner, there is control over the scanning resolution and the lighting of the paper surface; moreover, during the scanning process, the paper surface remains parallel to the sensor surface. When an image is obtained through a camera, however, these advantages are no longer available. Here, an algorithm is proposed to separate the text present in an image from the clutter, irrespective of the imaging technology used. This is achieved by using both the structural and textural information of the text present in the gray image. A bank of Gabor filters characterizes the statistical distribution of the text elements in the document, and a connected-component-based technique removes certain types of non-text elements from the image. When a camera is used to acquire document images, color information is generally obtained along with the structural and textural information of the text. It can be assumed that text present in an image has a certain amount of color homogeneity, so a graph-theoretical color clustering scheme is employed to segment the iso-color components of the image. Each iso-color image is then analyzed separately for its structural and textural properties, and the results of these analyses are merged with the information obtained from the gray component of the image. This helps to separate the colored text areas from the non-text elements. The proposed scheme is computationally intensive, because the separation of text from non-text entities is performed at the pixel level. Since any entity is represented by a connected set of pixels, it makes more sense to carry out the separation only at specific points, selected as representatives of their neighborhood. The Harris operator evaluates an edge measure at each pixel and selects pixels that are locally rich in this measure; these points are then employed for separating text from non-text elements.
Many government documents and forms in India are bi-lingual or tri-lingual in nature. Further, in school text books, it is common to find English words interspersed within sentences in the main Indian language of the book. In such documents, successive words in a line of text may be of different scripts (languages); hence, for OCR of these documents, the script must be recognized at the level of words rather than lines or paragraphs. A database of about 20,000 words each from 11 Indian scripts¹ is created. This is so far the largest database of Indian words collected and deployed for script recognition purposes. Here again, a bank of 36 Gabor filters is used to extract the feature vector that represents the script of a word. The effectiveness of Gabor features is compared with that of DCT features, and the Gabor features are found to marginally outperform them. Simple, linear and non-linear classifiers are employed to classify the word in the feature space. It is assumed that a scheme developed to recognize the script of words would work equally well for sentences and paragraphs; this assumption has been verified with supporting results. A systematic study has been conducted to evaluate and compare the accuracy of various feature-classifier combinations for word script recognition. We have considered the cases of bi-script and tri-script documents, which are widely available. Average recognition accuracies for the bi-script and tri-script cases are 98.4% and 98.2%, respectively. A hierarchical blind script recognizer involving all eleven scripts has been developed and evaluated, which yields an average accuracy of 94.1%.
The major contributions of the thesis are:
• A graph-theoretic color clustering scheme is used to segment colored text.
• A scheme is proposed to separate text from the non-text content of documents with complex layout and content, captured by scanner or camera.
• Computational complexity is reduced by performing the separation task on a selected set of locally edge-rich points.
• Script identification at the word level is carried out using different feature-classifier combinations; Gabor features with an SVM classifier outperform all other feature-classifier combinations.
• A hierarchical blind script recognition algorithm, involving the recognition of 11 Indian scripts, is developed. This structure employs the most efficient feature-classifier combination at each individual nodal point of the tree to maximize system performance.
• A sequential forward feature selection algorithm is employed to select the most discriminating features, on a case-by-case basis, for script recognition.
¹ The 11 scripts are Bengali, Devanagari, Gujarati, Kannada, Malayalam, Odiya, Punjabi, Roman, Tamil, Telugu and Urdu.
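As an illustration of the word-level script recognition pipeline described above (a bank of 36 Gabor filters feeding an SVM), a minimal sketch follows. The split of the 36 filters into six orientations and six wavelengths, the kernel parameters, and the choice of mean absolute response as the per-filter feature are assumptions; only the overall approach follows the abstract.

```python
# Sketch of word-level script recognition: Gabor filter bank + SVM.
# Filter-bank layout and feature statistic are assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_bank(n_thetas=6, wavelengths=(4, 6, 8, 10, 12, 16), ksize=31):
    kernels = []
    for theta in np.arange(n_thetas) * np.pi / n_thetas:
        for lambd in wavelengths:
            k = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                   lambd=lambd, gamma=0.5)
            kernels.append(k)
    return kernels  # 6 orientations x 6 wavelengths = 36 filters

def word_features(gray_word_img, kernels):
    # One statistic per filter: mean absolute filter response.
    img = gray_word_img.astype(np.float32) / 255.0
    return np.array([np.abs(cv2.filter2D(img, -1, k)).mean() for k in kernels])

# Usage sketch on dummy data (replace with real word images and script labels):
kernels = gabor_bank()
X = np.stack([word_features(np.random.randint(0, 256, (32, 96), np.uint8), kernels)
              for _ in range(20)])
y = np.array([0, 1] * 10)          # e.g. 0 = Devanagari, 1 = Roman
clf = SVC(kernel="rbf").fit(X, y)  # 36-dimensional feature vector per word
print(clf.predict(X[:2]))
```

The hierarchical recognizer the thesis describes would arrange such classifiers in a tree, choosing the best feature-classifier combination at each node.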
14

adXtractor – Automated and Adaptive Generation of Wrappers for Information Retrieval

Ademi, Muhamet January 2017
The aim of this project is to investigate the feasibility of retrieving unstructured automotive listings from structured web pages on the Internet. The research has two major purposes: (1) to investigate whether it is feasible to pair information extraction algorithms with computed wrappers, and (2) to demonstrate the results of pairing these techniques and evaluate the measurements. We merge two training sets available on the web to construct reference sets, which form the basis for the information extraction. The wrappers are computed by using information extraction techniques to identify data properties with a variety of techniques, such as fuzzy string matching, regular expressions and document tree analysis. The results demonstrate that it is possible to pair these techniques successfully and retrieve the majority of the listings. Additionally, the findings suggest that many platforms use lazy loading to populate image resources, which the algorithm is unable to capture. In conclusion, the study demonstrated that it is possible to use information extraction to compute wrappers dynamically by identifying data properties. Furthermore, the study demonstrates the ability to open up non-queryable domain data through a unified service.
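A toy sketch of the reference-set idea described above: fuzzy-match tokens in a listing title against a small reference set of makes and use a regular expression for the model year. The reference data, threshold, and helper names are made up for illustration; adXtractor's actual matching rules are not given in the abstract.

```python
# Toy reference-set extraction for automotive listings: fuzzy-match
# tokens against known makes, regex-match the year. All values here
# are illustrative, not adXtractor's actual configuration.
import re
from difflib import SequenceMatcher

REFERENCE_MAKES = {"volvo", "saab", "volkswagen", "toyota", "audi"}

def best_make(listing_title: str, threshold: float = 0.8):
    best, best_score = None, 0.0
    for token in listing_title.lower().split():
        for make in REFERENCE_MAKES:
            score = SequenceMatcher(None, token, make).ratio()
            if score > best_score:
                best, best_score = make, score
    return best if best_score >= threshold else None

def extract_year(listing_title: str):
    m = re.search(r"\b(19|20)\d{2}\b", listing_title)
    return int(m.group()) if m else None

title = "Volkswagn Golf 2014, low mileage"    # note the misspelled make
print(best_make(title), extract_year(title))  # -> volkswagen 2014
```

Fuzzy matching makes the extraction robust to the spelling variation found in real listings, which is why reference-set methods pair well with regular expressions and document tree analysis.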
