21 |
Deep Learning for Document Image Analysis. Tensmeyer, Christopher Alan, 01 April 2019 (has links)
Automatic machine understanding of documents from image inputs enables many applications in modern document workflows, digital archives of historical documents, and general machine intelligence, among others. Together, the techniques for understanding document images comprise the field of Document Image Analysis (DIA). Within DIA, the research community has identified several sub-problems, such as page segmentation and Optical Character Recognition (OCR). As the field has matured, there has been a trend of moving away from heuristic-based methods, designed for particular tasks and document domains, and towards machine learning methods that learn to solve tasks from examples of input/output pairs. Within machine learning, a particular class of models, known as deep learning models, has established itself as the state of the art for many image-based applications, including DIA. While traditional machine learning models typically operate on features designed by researchers, deep learning models are able to learn task-specific features directly from raw pixel inputs. This dissertation is a collection of papers that proposes several deep learning models to solve a variety of tasks within DIA. The first task is historical document binarization, where an input image of a degraded historical document is converted to a bi-tonal image to separate foreground text from background regions. The next part of the dissertation considers document segmentation problems, including identifying the boundary between the document page and its background, as well as segmenting an image of a data table into rows, columns, and cells. Finally, a variety of deep models are proposed to solve recognition tasks. These tasks include whole document image classification, identifying the font of a given piece of text, and transcribing handwritten text in low-resource languages.
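As an illustration of the bi-tonal conversion step described above, the sketch below uses classical Otsu thresholding with OpenCV. It is only a baseline stand-in for the learned binarization models proposed in the dissertation, and the file names are hypothetical.

```python
import cv2
import numpy as np

def binarize_page(image_path: str) -> np.ndarray:
    """Convert a grayscale document image to a bi-tonal image (ink vs. background)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Light denoising helps on degraded historical pages.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu's method picks a global threshold separating foreground text from background.
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

# Example usage (hypothetical file names):
# bw = binarize_page("degraded_page.png")
# cv2.imwrite("degraded_page_binarized.png", bw)
```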
|
22 |
Extended Multidimensional Conceptual Spaces in Document Classification. Hadish, Mulugeta, January 2008 (has links)
No description available.
|
23 |
[en] A MULTI-AGENT FRAMEWORK FOR SEARCH AND FLEXIBILIZATION OF DOCUMENT CLASSIFICATION ALGORITHMS / [pt] UM FRAMEWORK MULTI-AGENTES PARA BUSCA E FLEXIBILIZAÇÃO DE ALGORITMOS DE CLASSIFICAÇÃO DE DOCUMENTOS. JOAO ALFREDO PINTO DE MAGALHAES, 18 June 2003 (has links)
[en] We live in the information age, where knowledge is created at a rate never seen before, largely because the Internet has changed the existing paradigms of information exchange between people. Through the net, entire works can be published, reaching an audience that earlier media could never reach. However, an excess of information can also work in the opposite direction: too much information can amount to no information at all. Our work was to build a multi-agent framework for the search and classification of textual documents from a specific domain. We built an infrastructure that separates the concerns of document search and selection (the platform) from those of the classification algorithm used (an application of the separation-of-concerns principle). This makes it possible not only to plug in existing algorithms but also to create new ones that exploit domain-specific characteristics of the documents. Four instances of the framework were produced: a web clipping application, a component supporting knowledge management, a search engine for websites, and an application for the Semantic Web.
|
24 |
Méthodes de classifications dynamiques et incrémentales : application à la numérisation cognitive d'images de documents / Incremental and dynamic learning for document image : application for intelligent cognitive scanning of documents. Ngo Ho, Anh Khoi, 19 March 2015 (has links)
This research contributes to the field of dynamic learning and classification in both stationary and non-stationary environments. The goal of this PhD is to define a classification framework that tolerates very small training sets at the beginning of the process and can adjust its models to the variability of the data arriving in a stream. For that purpose, we propose a solution built from independent one-class SVM classifiers, each with its own incremental learning procedure, so that no classifier is affected by cross-influences coming from the configuration of the other classifiers' models. The originality of our proposal lies in exploiting the previous knowledge kept in each SVM model (a per-classifier history represented by the set of support vectors found so far) and combining it with the knowledge brought by new data as it arrives. The proposed classification model (mOC-iSVM) is explored through three variants, each using the model history in a different way. To our knowledge, no existing approach handles concept drift, the addition or deletion of concepts, and the fusion or division of concepts at the same time while offering a convenient framework for interaction with the user. Within the ANR DIGIDOC project, our approach was applied to several image-stream classification scenarios that can arise in real digitization campaigns. These scenarios validated an interactive use of our incremental classification solution for classifying images arriving in a stream in order to improve the quality of the digitized images.
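A minimal sketch of the incremental one-class SVM idea described above, assuming scikit-learn's OneClassSVM: at each step, the previous model's support vectors (its history) are retrained together with the newly arrived samples. This is a simplified reading of mOC-iSVM, not the thesis implementation, and the hyperparameters are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM

class IncrementalOneClassSVM:
    """One classifier per concept: retrain on old support vectors plus new data."""

    def __init__(self, nu: float = 0.1, gamma: str = "scale"):
        self.nu = nu
        self.gamma = gamma
        self.model = None

    def partial_fit(self, new_batch: np.ndarray) -> None:
        if self.model is None:
            data = new_batch
        else:
            # Old knowledge = support vectors kept from the previous fit.
            data = np.vstack([self.model.support_vectors_, new_batch])
        self.model = OneClassSVM(nu=self.nu, gamma=self.gamma).fit(data)

    def score(self, x: np.ndarray) -> np.ndarray:
        # Higher values = more compatible with this concept.
        return self.model.decision_function(x)

# A streamed image would be assigned to the concept whose classifier
# returns the highest decision score.
```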
|
25 |
Analýza sentimentu s využitím dolování dat / Sentiment Analysis with Use of Data Mining. Sychra, Martin, January 2016 (has links)
The theme of the work is sentiment analysis, mainly from the informatics point of view and only marginally from the linguistic one. The linguistic part discusses the notion of sentiment and language-processing methods for its analysis, e.g. lemmatization, POS tagging and the use of stop-word lists. More attention is paid to the structure of the sentiment analyzer, which is based on machine learning methods (support vector machines, Naive Bayes and maximum entropy classification). On the basis of this theoretical background, a working analyzer is designed and implemented. The experiments focus mainly on comparing the classification methods and on the benefits of the individual preprocessing steps. The success rate of the constructed classifier reaches up to 84% in cross-validation.
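A toy sketch of the kind of classifier pipeline the abstract describes, assuming scikit-learn: TF-IDF features with stop-word removal feeding Naive Bayes and SVM classifiers, evaluated by cross-validation. The corpus and labels are invented for illustration; the thesis additionally uses lemmatization, POS tagging and a maximum entropy classifier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Tiny illustrative corpus; the thesis works on a real dataset.
texts = ["great product, works perfectly", "terrible quality, broke after a day",
         "absolutely love it", "waste of money, very disappointed"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

for name, clf in [("Naive Bayes", MultinomialNB()), ("SVM", LinearSVC())]:
    pipeline = make_pipeline(
        TfidfVectorizer(stop_words="english", lowercase=True),  # stop-word removal as preprocessing
        clf,
    )
    scores = cross_val_score(pipeline, texts, labels, cv=2)  # cross-validation as in the thesis
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```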
|
26 |
Contribution à la construction d’ontologies et à la recherche d’information : application au domaine médical / Contribution to ontology building and to semantic information retrieval : application to medical domain. Drame, Khadim, 10 December 2014 (has links)
This work aims at providing efficient access to relevant information despite the increasing volume of data available in electronic form. To this end, we studied the benefit of using an ontology within an information retrieval (IR) system. We first described a methodology for constructing ontologies: a mixed method that combines natural language processing techniques for extracting knowledge from text with the reuse of existing semantic resources for the conceptualization step. We also developed a French-English term alignment method to enrich the terminology of the resulting ontology. Applying this methodology produced a bilingual ontology dedicated to Alzheimer's disease. We then proposed algorithms to support ontology-guided semantic IR, using ontology concepts both to describe documents automatically and to reformulate queries. We were particularly interested in: 1) the identification of representative concepts in corpora, 2) their disambiguation, 3) their weighting under a vector-space model adapted to concepts, and 4) query expansion. These contributions were used to implement a semantic IR portal dedicated to Alzheimer's disease. Furthermore, because the full content of the documents to be indexed is not always available, we exploited incomplete information to identify the concepts that nonetheless describe the documents well. For this purpose, we proposed two classification methods for documents from a large corpus: one based on the k-nearest-neighbours algorithm and the other on explicit semantic analysis. Both methods were evaluated on large standard collections of biomedical documents provided by an international challenge.
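A minimal sketch of the k-nearest-neighbours classification over concept representations mentioned above, assuming scikit-learn. The concept identifiers, class labels and TF-IDF weighting are hypothetical placeholders, not the actual ontology concepts or the explicit-semantic-analysis variant used in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: each document is represented by the ontology concepts
# detected in it (placeholder identifiers, not real UMLS/MeSH codes).
train_docs = [
    "concept_alzheimer concept_memory concept_dementia",
    "concept_diabetes concept_insulin concept_glucose",
    "concept_alzheimer concept_amyloid concept_dementia",
    "concept_diabetes concept_glucose concept_diet",
]
train_labels = ["neurology", "endocrinology", "neurology", "endocrinology"]

# TF-IDF weighting adapted to concepts, followed by a k-nearest-neighbours vote.
classifier = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
classifier.fit(train_docs, train_labels)

print(classifier.predict(["concept_memory concept_amyloid concept_dementia"]))
# expected to print ['neurology'], since its concepts overlap the neurology examples
```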
|
27 |
Applying Natural Language Processing to document classification / Tillämpning av Naturlig Språkbehandling för dokumentklassificering. Kragbé, David, January 2022 (has links)
In today's digital world, we produce and use more electronic documents than ever before, and this trend is far from slowing down. In particular, more and more companies now need to process a considerable number of documents to handle their clients' requests. Scaling this process often requires building an automatic document treatment pipeline. Since the treatment of a document depends on its content, such pipelines rely heavily on an automatic document classifier to process the received documents correctly. The classifier should accept a document of any type and output its class based on the document's text content. In this thesis, we designed and implemented a machine learning pipeline for automated classification of insurance claims documents. To find the best pipeline, we created several combinations of classifiers (logistic regression and random forest) and embedding models (FastText and Doc2vec). We then compared the performance of all pipelines using precision and accuracy metrics. We found that a pipeline composed of a FastText embedding model combined with a logistic regression classifier performed best, yielding a precision of 85% and an accuracy of 86% on our dataset.
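A hedged sketch of the embedding-plus-classifier comparison described above, assuming gensim's Doc2Vec and scikit-learn classifiers. The miniature claims corpus and hyperparameters are invented, and a FastText embedder would be substituted analogously for the second embedding model.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical miniature claims corpus; real training needs far more documents.
docs = ["water damage in kitchen", "car collision on highway",
        "burst pipe flooded basement", "rear-ended at traffic light"]
labels = ["property", "motor", "property", "motor"]

# Learn document embeddings (Doc2vec here; a FastText model would be plugged in analogously).
tagged = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(docs)]
embedder = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=50)
X = [embedder.infer_vector(d.split()) for d in docs]

# Compare the two classifier heads used in the thesis.
for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=50)):
    clf.fit(X, labels)
    print(type(clf).__name__, "training accuracy:", clf.score(X, labels))
```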
|
28 |
Employing a Transformer Language Model for Information Retrieval and Document Classification : Using OpenAI's generative pre-trained transformer, GPT-2 / Transformermodellers användbarhet inom informationssökning och dokumentklassificering. Bjöörn, Anton, January 2020 (has links)
As the information flow on the Internet keeps growing, it becomes increasingly easy to miss important news that does not have mass appeal. Combating this problem calls for increasingly sophisticated information retrieval methods. Pre-trained transformer-based language models have shown great generalization performance on many natural language processing tasks. This work investigates how well such a language model, OpenAI's Generative Pre-trained Transformer 2 (GPT-2), generalizes to information retrieval and classification of online news articles written in English, comparing this approach with the more traditional Term Frequency-Inverse Document Frequency (TF-IDF) vectorization. The aim is to shed light on how useful state-of-the-art transformer-based language models are for building personalized information retrieval systems. Using transfer learning, the smallest version of GPT-2 is trained to rank and classify news articles, achieving results similar to the purely TF-IDF-based approach. While the average Normalized Discounted Cumulative Gain (NDCG) achieved by the GPT-2-based model was about 0.74 percentage points higher, the sample size was too small to give these results high statistical certainty.
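A small sketch of how the NDCG comparison could be computed, assuming scikit-learn's ndcg_score. The relevance judgements and ranking scores below are invented for illustration, not the thesis data.

```python
import numpy as np
from sklearn.metrics import ndcg_score

# Hypothetical relevance judgements for five candidate articles of one user query
# (higher = more relevant), plus the ranking scores produced by the two systems.
true_relevance = np.asarray([[3, 2, 0, 1, 0]])
scores_tfidf = np.asarray([[0.3, 0.9, 0.2, 0.1, 0.4]])  # e.g. TF-IDF cosine similarities
scores_gpt2 = np.asarray([[0.8, 0.7, 0.1, 0.4, 0.2]])   # e.g. fine-tuned ranking scores

print("NDCG (TF-IDF):", ndcg_score(true_relevance, scores_tfidf))
print("NDCG (GPT-2): ", ndcg_score(true_relevance, scores_gpt2))
# Averaging NDCG over many queries gives the comparison figure reported in the thesis.
```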
|
29 |
Deep Learning Methodologies for Textual and Graphical Content-Based Analysis of Handwritten Text Images. Prieto Fontcuberta, José Ramón, 08 July 2024 (links)
[EN] This thesis addresses unresolved issues in the field of Artificial Intelligence as applied to historical handwritten documents. The challenges include not only the degradation of the documents but also the scarcity of data available for training specialized models. This limitation is particularly relevant when the prevailing trend is to use large datasets and massive models to achieve significant breakthroughs.
First, we provide an overview of various techniques and concepts used throughout the thesis. Different ways of representing data are explored, including images, text, and graphs. Probabilistic Indices (PrIx) are introduced for textual representation and their encoding using TfIdf is explained. We also discuss selecting the best input features for neural networks using Information Gain (IG). In the realm of neural networks, specific models such as the Multilayer Perceptron (MLP), Convolutional Neural Networks (CNNs), and graph-based networks (GNNs) are covered, along with a brief introduction to transformers.
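A toy sketch of the TfIdf encoding plus Information Gain feature selection mentioned in this overview, assuming scikit-learn and approximating Information Gain with mutual information. The sample documents and class labels are invented placeholders, not the thesis data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Hypothetical TfIdf encoding of index text, then selection of the most
# informative features for a downstream classifier.
docs = ["venta de tierras casa", "testamento herencia bienes",
        "venta casa solar", "testamento legado herencia"]
labels = [0, 1, 0, 1]  # document classes, e.g. sale deed vs. will

X = TfidfVectorizer().fit_transform(docs)
# Information Gain is estimated here via mutual information between feature and class.
selector = SelectKBest(score_func=mutual_info_classif, k=3).fit(X, labels)
X_reduced = selector.transform(X)
print("kept features:", X_reduced.shape[1], "of", X.shape[1])
```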
The first problem addressed in this thesis is the segmentation of historical handwritten books into semantic units, a complex and recurring challenge in archives worldwide. Unlike modern books, where chapter segmentation is relatively straightforward, historical books present unique challenges due to their irregularities and potential poor preservation. To the best of our knowledge, this thesis is the first to formally define this problem. We propose a pipeline to consistently extract these semantic units in two variations: one with corpus-specific constraints and another without them.
Various types of neural networks are employed, including Convolutional Neural Networks (CNNs) for classifying different parts of the image and Region Proposal Networks (RPNs) and transformers for detecting and classifying regions. Additionally, a new metric is introduced to measure the information loss in the detection, alignment, and transcription of these semantic units. Finally, different decoding methods are compared, and the results are evaluated across up to five different datasets.
In another chapter, we tackle the challenge of classifying non-transcribed historical handwritten documents, specifically notarial deeds, from the Provincial Historical Archive of Cádiz. A framework is developed that employs Probabilistic Indices (PrIx) for classifying these documents, and this is compared to 1-best transcriptions obtained through Handwritten Text Recognition (HTR) techniques.
In addition to conventional classification within a closed set of classes (Close Set Classification, CSC), this thesis introduces the Open Set Classification (OSC) framework. This approach not only classifies documents into predefined classes but also identifies those that do not belong to any of the established classes, allowing an expert to label them.
Various techniques are compared, and two are proposed: one does not use a threshold on the posterior probabilities generated by the neural network model, while the other employs a threshold on these probabilities, with the option for manual adjustment according to the expert's needs.
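A minimal sketch of the thresholded open-set decision rule described above; the class names and threshold value are hypothetical, and the non-thresholded variant proposed in the thesis is not shown.

```python
import numpy as np

def open_set_decision(posteriors: np.ndarray, class_names: list, threshold: float = 0.7):
    """Assign the most probable class, or defer to an expert ('unknown') when
    the best posterior probability falls below the chosen threshold."""
    best = int(np.argmax(posteriors))
    if posteriors[best] < threshold:
        return "unknown"  # outside the closed set -> routed to an expert for labelling
    return class_names[best]

classes = ["sale deed", "will", "power of attorney"]  # hypothetical notarial classes
print(open_set_decision(np.array([0.92, 0.05, 0.03]), classes))        # -> 'sale deed'
print(open_set_decision(np.array([0.40, 0.35, 0.25]), classes, 0.7))   # -> 'unknown'
```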
In a third chapter, this thesis focuses on Information Extraction (IE) from handwritten tabular documents. A pipeline is developed that starts with detecting text in images containing tables, line by line, followed by its transcription using HTR techniques. In parallel, various models are trained to identify the structure of the tables, including rows, columns, and header sections.
The pipeline also addresses common issues in handwritten tables, such as multi-span columns and substituting ditto marks. Additionally, a language model specifically trained to detect table headers automatically is employed.
Two datasets are used to demonstrate the effectiveness of the pipeline in the IE task, and areas for improvement within the pipeline itself are identified for future research. / Prieto Fontcuberta, JR. (2024). Deep Learning Methodologies for Textual and Graphical Content-Based Analysis of Handwritten Text Images [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/206075
|
30 |
Gestion de l’hétérogénéité d’un SI de classification documentaire multifacette et positionnement dans l’environnement des ECM / Management of heterogeneity of a documentary multifaceted classification Information System and position in the ECM environment. Ankoud, Manel, 19 December 2014 (links)
Knowledge organization is a discipline practised by librarians, documentalists, archivists, information specialists, IT professionals and other document professionals. It covers all activities, studies and research that develop and handle the processes of organizing and presenting the documentary resources useful to an organization. In this context, the ANR Miipa-Doc project aims to explore new bottom-up indexing methods, using descriptor terms formulated by individuals rather than chosen from a predefined list, for organizing complex documentary content within large companies, and to design the corresponding software architecture. Our contribution to this project is to manage the heterogeneity of an information system for organizing documentary content, based on a business-oriented approach and a faceted, folksonomy-based KOS (knowledge organization system). For this we propose an incremental model-driven approach, derived from MDE (Model Driven Engineering) and based on meta-models to guarantee scalability. After implementing the HyperTaging prototype, which combines both approaches, we propose an evaluation process that positions this prototype, and any documentary classification information system, within the ECM environment, based on fine-grained and specific evaluation criteria.
|