1 |
A Web Scraper For Forums: Navigation and text extraction methods. Palma, Michael; Zhou, Shidi. January 2017.
Web forums are a popular way of exchanging information and discussing various topics. These websites usually have a special structure, divided into boards, threads and posts. Although the structure might be consistent across forums, the layout of each forum is different. The way a web forum presents user posts is also very different from how a news website presents a single piece of information. All of this makes navigation and text extraction a hard task for web scrapers. The focus of this thesis is the development of a web scraper specialized in forums. Three different methods for text extraction are implemented and tested before choosing the most appropriate one for the task: Word Count, Text-Detection Framework and Text-to-Tag Ratio. The handling of duplicate links is also considered and solved by implementing a multi-layer bloom filter. The thesis is conducted applying a qualitative methodology. The results indicate that the Text-to-Tag Ratio has the best overall performance and gives the most desirable results on web forums; thus, it was the method selected for the final version of the web scraper.
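For readers unfamiliar with the technique, a minimal Python sketch of the Text-to-Tag Ratio idea follows. It illustrates the general approach rather than the authors' implementation; the regex-based tag stripping and the per-line granularity are simplifying assumptions.

    import re

    def text_to_tag_ratios(html):
        """Per-line Text-to-Tag Ratio: length of the plain text on a line divided
        by the number of tags on that line (floored at 1 to avoid division by zero).
        Lines with a high ratio are more likely to be post content than layout."""
        ratios = []
        for line in html.splitlines():
            tags = re.findall(r"<[^>]*>", line)
            text = re.sub(r"<[^>]*>", "", line).strip()
            ratios.append(len(text) / max(len(tags), 1))
        return ratios

    sample = ("<div class='nav'><a href='/'>Home</a><a href='/forum'>Forum</a></div>\n"
              "<div class='post'>This is the body of a forum post with enough text "
              "to stand out from the surrounding markup.</div>")
    for ratio, line in zip(text_to_tag_ratios(sample), sample.splitlines()):
        print(f"{ratio:6.1f}  {line[:60]}")

The multi-layer bloom filter used for duplicate-link handling would sit in front of the crawler's URL queue and is omitted here.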
2 |
Separation and Extraction of Valuable Information From Digital Receipts Using Google Cloud Vision OCR. Johansson, Elias. January 2019.
Automatization is a desirable feature in many business areas. Manually extracting information from a physical object such as a receipt is something that can be automated to save resources for a company or a private person. This paper describes the process of combining an existing OCR engine with a purpose-built Python script to extract valuable information from a digital image of a receipt. Values such as VAT, VAT percentage, date, and total, gross and net cost are considered valuable information. This feature has already been implemented in existing applications; however, the company this project was done for is interested in creating its own version, and the project is an experiment to see whether such an application can be implemented using restricted resources. The goal is to develop a program that can extract the information mentioned above. The paper walks through the development of the program, as well as the reasoning, findings and steps taken to overcome the problems encountered along the way. The program achieved a success rate of 86.6% in extracting the most valuable information (total cost, VAT percentage and date) from a set of 53 receipts originating from 34 separate establishments.
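The thesis does not reproduce its script; as a rough, hypothetical sketch of the parsing stage, the snippet below applies keyword and regex rules to text assumed to have already been returned by the Google Cloud Vision OCR step. The date formats and the "moms"/"summa" keywords are illustrative assumptions, not the author's actual rules.

    import re

    DATE_RE = re.compile(r"\b(\d{4}-\d{2}-\d{2}|\d{2}[./-]\d{2}[./-]\d{4})\b")
    AMOUNT_RE = re.compile(r"(\d+[.,]\d{2})")

    def parse_receipt(ocr_text):
        """Pull date, VAT and total cost out of plain OCR text with simple rules."""
        fields = {"date": None, "vat": None, "total": None}
        for line in ocr_text.splitlines():
            lower = line.lower()
            if fields["date"] is None and (m := DATE_RE.search(line)):
                fields["date"] = m.group(1)
            if ("vat" in lower or "moms" in lower) and (m := AMOUNT_RE.search(line)):
                fields["vat"] = m.group(1)       # "moms" is VAT on Swedish receipts
            if ("total" in lower or "summa" in lower) and (m := AMOUNT_RE.search(line)):
                fields["total"] = m.group(1)
        return fields

    print(parse_receipt("ICA Kvantum\n2019-03-14\nSumma 249.00\nVarav moms 12% 26.68"))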
3 |
Implementation and evaluation of a text extraction tool for adverse drug reaction information. Dahlberg, Gunnar. January 2010.
Background: Initial review of potential safety issues related to the use of medicines involves reading and searching existing medical literature sources for known associations of drugs and adverse drug reactions (ADRs), so that they can be excluded from further analysis. The task is labor demanding and time consuming. Objective: To develop a text extraction tool to automatically identify ADR information from medical adverse effects texts, evaluate the performance of the tool's underlying text extraction algorithm, and identify which parts of the algorithm contributed to the performance. Method: A text extraction tool was implemented on the .NET platform with functionality for preprocessing text (removal of stop words, Porter stemming and use of synonyms) and matching medical terms using permutations of words and spelling variations (Soundex, Levenshtein distance and Longest common subsequence distance). Its performance was evaluated both on manually extracted medical terms (semi-structured texts) from summary of product characteristics (SPC) texts and on unstructured adverse effects texts from Martindale (a medical reference for information about drugs and medicines), using the WHO-ART and MedDRA medical term dictionaries. Results: For the SPC data set, a verbatim match identified 72% of the SPC terms. The text extraction tool correctly matched 87% of the SPC terms while producing one false positive match, using removal of stop words, Porter stemming, synonyms and permutations. The use of the full MedDRA hierarchy contributed the most to performance. The sophisticated text algorithms together contributed roughly equally to the performance. Phonetic coding (i.e. Soundex) is evidently inferior to string distance measures (i.e. Levenshtein distance and Longest common subsequence distance) for fuzzy matching in our implementation. The string distance measures increased the number of matched SPC terms, but at the expense of generating false positive matches. Results from Martindale show that 90% of the identified medical terms were correct. The majority of false positive matches were caused by extracting medical terms not describing ADRs. Conclusion: Sophisticated text extraction can considerably improve the identification of ADR information from adverse effects texts compared to a verbatim extraction.
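For context, a minimal sketch of the kind of fuzzy term matching described above is given below, combining stop-word removal, word-order permutation (approximated here by sorting words) and Levenshtein distance. The stop-word list, the distance threshold and the absence of stemming are simplifying assumptions; this is not the .NET tool itself.

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                   # deletion
                               cur[j - 1] + 1,                # insertion
                               prev[j - 1] + (ca != cb)))     # substitution
            prev = cur
        return prev[-1]

    STOP_WORDS = {"of", "the", "and", "in"}      # illustrative, not the tool's list

    def normalise(term):
        words = [w for w in term.lower().split() if w not in STOP_WORDS]
        return " ".join(sorted(words))           # sorting makes word order irrelevant

    def fuzzy_match(text_term, dictionary_terms, max_dist=2):
        """Return dictionary terms whose normalised form is within max_dist edits."""
        return [cand for cand in dictionary_terms
                if levenshtein(normalise(text_term), normalise(cand)) <= max_dist]

    print(fuzzy_match("pain in the abdomen", ["Pain abdomen", "Abdominal pain", "Headache"]))

Adding Porter stemming and synonym expansion, as the evaluated tool does, would catch further variants that plain edit distance misses.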
4 |
A Constraint Based Real-time License Plate Recognition System. Gunaydin, Ali Gokay. 01 February 2007.
License Plate Recognition (LPR) systems are frequently utilized in various access control and security applications. In this thesis, an experimental constraint based real-time License Plate Recognition system is designed and implemented on the Java platform. Many of the available constraint based methods work under strict restrictions such as plate color, fixed illumination and designated routes, whereas only license plate geometry and format constraints are used in the developed system. These constraints are built on top of the current Turkish license plate regulations. The plate localization algorithm is based on vertical edge features, where constraints are used to filter out non-text regions. Vertical and horizontal projections are used for character segmentation, and a Multi Layered Perceptron (MLP) based Optical Character Recognition (OCR) module has been implemented for character identification. The extracted license plate characters are validated against possible license plate formats during the recognition process. The system is tested with both Turkish and foreign license plate images of various plate orientations, image qualities and sizes. An accuracy of 92% is achieved for license plate localization and 88% for character segmentation and recognition.
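As an aside, the projection-based character segmentation mentioned above can be illustrated with a short sketch: sum the text pixels per column of a binarised plate and cut at the empty columns. The threshold and the toy image are invented for the example; the thesis works on real plate images in Java, not on this Python stub.

    import numpy as np

    def segment_characters(binary_plate, min_width=2):
        """Split a binarised plate (text pixels = 1) into character column ranges
        by finding runs of non-empty columns in the vertical projection profile."""
        profile = binary_plate.sum(axis=0)          # vertical projection
        in_char, start, boxes = False, 0, []
        for col, value in enumerate(profile):
            if value > 0 and not in_char:
                in_char, start = True, col
            elif value == 0 and in_char:
                in_char = False
                if col - start >= min_width:
                    boxes.append((start, col))
        if in_char and len(profile) - start >= min_width:
            boxes.append((start, len(profile)))
        return boxes

    # Toy plate: two 3-column "characters" separated by two empty columns.
    plate = np.array([[1, 1, 1, 0, 0, 1, 1, 1],
                      [1, 0, 1, 0, 0, 0, 1, 0],
                      [1, 1, 1, 0, 0, 1, 1, 1]])
    print(segment_characters(plate))                # [(0, 3), (5, 8)]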
5 |
Data Acquisition from Cemetery Headstones. Christiansen, Cameron Smith. 27 November 2012.
Data extraction from engraved text is rarely discussed, and nothing in the open literature discusses data extraction from cemetery headstones. Headstone images present unique challenges such as engraved or embossed characters (causing inner-character shadows), low contrast with the background, and significant noise due to inconsistent stone texture and weathering. Current systems for extracting text from outdoor environments (billboards, signs, etc.) make assumptions (e.g. a clean and/or consistently textured background and text) that fail when applied to the domain of engraved text. Additionally, the ability to extract the data found on headstones is of great historical value. This thesis describes a novel and efficient feature-based text zoning and segmentation method for the extraction of noisy text from a highly textured engraved medium. Additionally, the usefulness of constraining a problem to a specific domain is demonstrated. The transcriptions of images zoned and segmented through the proposed system result in a precision of 55% compared to 1% precision without zoning, a recall of 62% compared to 39%, an F-measure of 58% compared to 2%, and an error rate of 77% compared to 8303%.
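As a quick sanity check, the reported F-measure follows from the usual harmonic-mean definition applied to the reported precision and recall:

    # F-measure is the harmonic mean of precision and recall.
    precision, recall = 0.55, 0.62
    f_measure = 2 * precision * recall / (precision + recall)
    print(round(f_measure, 3))    # 0.583, consistent with the reported 58%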
6 |
Detection of Frozen Video Subtitles Using Machine Learning. Sjölund, Jonathan. January 2019.
When subtitles are burned into a video, an error can sometimes occur in the encoder that causes the same subtitle to be burned into several frames, so that the subtitle appears frozen. This thesis provides a way to detect frozen video subtitles with the help of an implemented text detector and classifier. Two types of classifiers, naïve classifiers and machine learning classifiers, are tested and compared on a variety of different videos to see how much a machine learning approach can improve the performance. The naïve classifiers are evaluated using ground truth data to gain an understanding of the importance of good text detection. To understand the difficulty of the problem, two different machine learning classifiers are tested: logistic regression and random forests. The results show that machine learning improves the performance over the naïve classifiers, raising the specificity from approximately 87.3% to 95.8% and the accuracy from 93.3% to 95.5%. Random forests achieve the best overall performance, but the difference compared to logistic regression is small enough that more computationally complex machine learning classifiers are not necessary. Using the ground truth shows that the weaker naïve classifiers would be improved by at least 4.2% in accuracy, so a better text detector is warranted. This thesis shows that machine learning is a viable option for detecting frozen video subtitles.
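A hedged sketch of the classifier comparison follows. The two features and the synthetic data are placeholders invented for the example (the thesis derives its features from a real text detector); the point is only to show the logistic regression versus random forest set-up with scikit-learn.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Stand-in features for a pair of consecutive frames:
    # [mean pixel difference inside the detected subtitle box, box overlap ratio].
    # Frozen subtitles (label 1) tend to show low difference and high overlap.
    rng = np.random.default_rng(0)
    n = 1000
    frozen = np.column_stack([rng.normal(0.02, 0.01, n), rng.normal(0.95, 0.03, n)])
    moving = np.column_stack([rng.normal(0.30, 0.10, n), rng.normal(0.60, 0.20, n)])
    X = np.vstack([frozen, moving])
    y = np.concatenate([np.ones(n), np.zeros(n)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for model in (LogisticRegression(),
                  RandomForestClassifier(n_estimators=100, random_state=0)):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, round(model.score(X_te, y_te), 3))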
7 |
Document image segmentation: content categorization. Felhi, Mehdi. 10 July 2014.
In this thesis I discuss the document image segmentation problem and describe our new approaches for detecting and classifying document contents. First, I discuss our skew angle estimation approach, whose aim is to estimate, with precision, the skew angle of text in document images. Our method is based on Maximum Gradient Difference (MGD) and the R-signature; I then describe our second method, based on the Ridgelet transform. Our second contribution is a new hybrid page segmentation approach. I first describe our stroke-based descriptor, which detects text and line candidates using the skeleton of the binarized document image. An active contour model is then applied to segment the rest of the image into photo and background regions. Finally, text candidates are clustered using the mean-shift analysis technique according to their corresponding sizes. The method is applied to segmenting scanned document images (newspapers and magazines) that contain text, lines and photo regions. Finally, I describe our stroke-based text extraction method. Our approach begins by extracting connected components and selecting text character candidates over the CIE LCH color space, using Histogram of Oriented Gradients (HOG) correlation coefficients in order to detect low-contrast regions. The text region candidates are clustered using two different approaches: a depth-first search over a graph, and a stable text line criterion. Finally, the resulting regions are refined by classifying the text line candidates into "text" and "non-text" regions using a kernel Support Vector Machine (K-SVM) classifier.
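As a toy illustration of the clustering-by-size step mentioned above, the snippet below groups connected-component heights with scikit-learn's mean-shift implementation. The heights and the bandwidth are invented for the example and are not taken from the thesis.

    import numpy as np
    from sklearn.cluster import MeanShift

    # Hypothetical bounding-box heights (pixels) of connected components on a page:
    # body text, headline text, and a thin ruling line.
    heights = np.array([[11], [12], [12], [13], [11], [12],
                        [34], [35], [33],
                        [2]])
    labels = MeanShift(bandwidth=5).fit(heights).labels_
    for h, lab in zip(heights.ravel().tolist(), labels.tolist()):
        print(f"height={h}px -> cluster {lab}")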
8 |
Generalized Haar-like filters for document analysis: application to word spotting and text extraction from comics. Ghorbel, Adam. 18 July 2016.
This thesis follows two directions. The first proposes a multi-scale analytical approach for word spotting in handwritten documents, working at two levels: a global filtering module first defines several candidate zones for the query in the test document, then the scale of observation is lowered to refine the results and keep only those that are truly relevant. This word spotting approach is based on generalized families of Haar-like filters that adapt to each query, and on a voting principle that selects the spatial locations where the responses generated by the filters accumulate. The second direction proposes an approach for extracting text from the graphics in comics, based essentially on pseudo-Haar features generated by applying the generalized Haar-like filters to the comic image; the approach is analytical and requires no prior extraction of speech balloons or other components. Both approaches are inspired by characteristics of human vision, such as preattentive processing. They are free of layout segmentation and of any learning step, the generalized Haar-like filters are applied globally to each document image whatever its type, and no binarization of the processed document is performed, in order to avoid losing data that might affect the accuracy of the two frameworks. In describing and detailing the use of such features throughout this thesis, we offer the document image analysis community a new line of research to be explored further in the future.
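As background only, the basic two-rectangle Haar-like response on an integral image can be sketched as below; the generalized, query-adapted filter families developed in the thesis go beyond this textbook feature.

    import numpy as np

    def integral_image(img):
        """Summed-area table with a zero row and column prepended for easy slicing."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return ii

    def box_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
        return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

    def haar_horizontal(ii, r0, c0, height, width):
        """Two-rectangle Haar-like response: left half minus right half."""
        half = width // 2
        left = box_sum(ii, r0, c0, r0 + height, c0 + half)
        right = box_sum(ii, r0, c0 + half, r0 + height, c0 + 2 * half)
        return left - right

    img = np.zeros((8, 8), dtype=np.int64)
    img[:, :4] = 255                               # bright left half, dark right half
    print(haar_horizontal(integral_image(img), 0, 0, 8, 8))   # strong positive response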
9 |
Extracting textual information within scanned documents: case of invoices. Pitou, Cynthia. 28 September 2017.
Document processing is the transformation of human-understandable data into a format understandable by a computer system. Document analysis and document understanding are the two phases of document processing. Given a document image made up of words, lines and graphical objects such as logos, document analysis consists of extracting and isolating the words, lines and objects and then grouping them into blocks; the blocks form the geometric structure of the document. Document understanding maps this geometric structure to a logical structure by considering logical relationships (to the right, to the left, above, below) between the objects of the document. A document processing system must be able to: (i) locate textual information, (ii) identify whether that information is relevant compared to the other information contained in the document, and (iii) extract that information in a format understandable by a computer program. In building such a system, the main difficulties arise from the variability of document characteristics, such as the type (invoice, form, quotation, report, etc.), the layout (font, style, arrangement), the language, the typography and the scanning quality. This work is concerned with scanned documents, also known as document images. We are particularly interested in locating textual information in invoice images in order to extract it with an optical character recognition engine. Invoices are widely used and well-regulated documents, but they are not standardized: they contain mandatory information (invoice number, unique identifier of the issuing company, VAT amount, net amount, etc.) which, depending on the issuer, can appear in different locations in the document. The contributions of this work fall within the framework of region-based textual information localization and extraction. First, we present a region-based method guided by quadtree decomposition. The principle of the method is to decompose a document image into four equal regions, each region into four new regions, and so on, until a free optical character recognition (OCR) engine extracts the expected textual information in a region; a region containing the expected textual information is not decomposed further. This method accurately and efficiently determines the regions of a document image that contain the textual information one wants to locate and retrieve. In a second, incremental and more flexible approach, we propose a textual information extraction model consisting of a set of prototype regions along with pathways for browsing through these prototype regions. The life cycle of the model comprises five steps: (1) produce synthetic invoice data from real-world invoice images containing the textual information of interest, along with their spatial positions; (2) partition the produced data; (3) derive the prototype regions from the obtained partition clusters; (4) derive pathways for browsing through the prototype regions from the concept lattice of a suitably defined formal context; (5) update the set of prototype regions and the set of pathways incrementally when additional data has to be added.
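A minimal sketch of the quadtree-guided localization idea described above is shown below. The OCR call is abstracted into a predicate, and the page size, stopping size and field position are invented for the example; this is not the thesis implementation.

    def find_regions(region, contains_target, min_size=64):
        """Recursively split a region (x, y, w, h) into four quadrants and keep
        only the sub-regions where contains_target() reports the wanted text."""
        x, y, w, h = region
        if not contains_target(region):
            return []
        if w <= min_size or h <= min_size:
            return [region]
        hw, hh = w // 2, h // 2
        quadrants = [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                     (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]
        hits = []
        for q in quadrants:
            hits.extend(find_regions(q, contains_target, min_size))
        return hits or [region]      # keep the parent if no single child matched

    # Toy predicate standing in for "the OCR output of this region contains the field".
    def fake_ocr_contains_field(region):
        x, y, w, h = region
        return x <= 700 < x + w and y <= 120 < y + h    # pretend the field sits at (700, 120)

    print(find_regions((0, 0, 2480, 3508), fake_ocr_contains_field))   # [(697, 109, 39, 55)]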
10 |
Methods of Text Information Extraction in Digital Videos. Tarczyńska, Anna. January 2012.
Context: The huge number of existing digital video files needs indexing to make them available to customers (easier searching). The indexing can be provided by text information extraction. In this thesis we have analysed and compared methods of text information extraction in digital videos. Furthermore, we have evaluated them in a new context proposed by us, namely their usefulness in sports news indexing and information retrieval. Objectives: The objectives of this thesis are as follows: providing a better understanding of the nature of text extraction; performing a systematic literature review on various methods of text information extraction in digital videos of TV sports news; designing and executing an experiment in the testing environment; evaluating available and promising methods of text information extraction from digital video files in the proposed context of video sports news indexing and retrieval; and providing an adequate solution in that context. Methods: This thesis uses three research methods: a Systematic Literature Review, Video Content Analysis with a checklist, and an Experiment. The Systematic Literature Review was used to study the nature of text information extraction, to establish the methods and challenges, and to specify an effective way of conducting the experiment. The Video Content Analysis was used to establish the context for the experiment. Finally, the experiment was conducted to answer the main research question: how useful are methods of text information extraction for the indexation of video sports news and information retrieval? Results: Through the Systematic Literature Review we identified 29 challenges of the text information extraction methods and 10 chains between them. We extracted 21 tools and 105 different methods, and analysed the relations between them. Through Video Content Analysis we specified three groups of probability of text extraction from video, and 14 categories for providing video sports news indexation with a taxonomy hierarchy. We conducted the experiment on three video files, with 127 frames, 8970 characters and 1814 words, using the MoCA tool, the only one available. As a result, we reported 10 errors and proposed recommendations for each of them. We evaluated the tool according to the categories mentioned above and identified four advantages and nine disadvantages. Conclusions: It is hard to compare the methods described in the literature, because the tools are not available for testing and are not compared with each other. Furthermore, the values of the recall and precision measures depend highly on the quality of the text contained in the video, so performing experiments on the same indexed database is necessary. However, text information extraction is time consuming (because of the huge number of frames in a video), and even a high character recognition rate gives a low word recognition rate. Therefore, the usefulness of text information extraction for video indexation is still low. Because most of the text information contained in video news is inserted in post-processing, the text could be captured at the source, during the processing of the original video, by the broadcasting company (e.g. by automatically saving the inserted text in a separate file). Then text information extraction would not be necessary for managing the new video files.
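A back-of-the-envelope illustration of why a high character recognition rate can still yield a low word recognition rate, under the simplifying assumption that character errors are independent:

    # A word counts as recognised only if every one of its characters is correct.
    char_accuracy = 0.95
    for word_length in (4, 7, 10):
        print(word_length, round(char_accuracy ** word_length, 2))
    # 4 -> 0.81, 7 -> 0.7, 10 -> 0.6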