21

ADVANCES IN IMAGE-BASED DATA HIDING, FEATURE DETECTION, GRID ALIGNMENT, AND DOCUMENT CLASSIFICATION

Yujian Xu (14227856) 17 May 2024 (has links)
Data embedding tools such as barcodes are very popular nowadays but not aesthetically pleasing. In this research, we propose a watermarking scheme and an image-based surface coding scheme using the grid points as fiducial markers and the shifted points as data-bearing features. Detecting and aligning point grids play a fundamental role in these applications. Joint determination of non-grid points and estimation of non-linear spatial distortions applied to the grid is a key challenge for grid alignment. We modify a SIFT-based surface feature detection method to eliminate as many spurious feature points as possible, and propose a grid alignment algorithm that starts from a small, nearly regular region found in the point set and then expands the list of candidate points included in the grid. Our method is tested on both synthetically generated and real samples. Furthermore, we extend some applications of the surface coding scheme to 3D space, including hyper-conformal mapping of the grid pattern onto 3D models, 3D surface feature detection, and 3D grid point alignment.

A document routing system is crucial to the concept of the smart office. We abstract it as an online class-incremental image classification problem. Two kinds of classifiers can solve this problem: exemplar-based and parametric classifiers. The architecture of exemplar-based classification is summarized here. We propose a one-versus-rest parametric classifier and four different updating algorithms based on the passive-aggressive algorithm, together with an adaptive thresholding method to flag low-confidence predictions. We test our methods on 547 real document images that we collected and labeled, and report high cumulative accuracy.
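To make the parametric approach concrete, here is a minimal sketch of a one-versus-rest online classifier with standard PA-I (passive-aggressive) updates. It is not the thesis's method: the four updating variants and the adaptive threshold are not reproduced, and the fixed `reject_threshold` below is only a placeholder for the latter.

```python
import numpy as np

class OvRPassiveAggressive:
    """One-vs-rest online classifier with PA-I updates (Crammer et al., 2006).

    A minimal sketch of the kind of classifier the abstract describes; new
    classes can be added on the fly, matching the class-incremental setting.
    """

    def __init__(self, dim, C=1.0, reject_threshold=0.0):
        self.dim = dim
        self.C = C                       # aggressiveness parameter
        self.reject_threshold = reject_threshold
        self.weights = {}                # one weight vector per class label

    def predict(self, x):
        if not self.weights:
            return None, float("-inf")
        scores = {c: w @ x for c, w in self.weights.items()}
        label = max(scores, key=scores.get)
        return label, scores[label]      # caller flags low-confidence scores

    def partial_fit(self, x, y):
        if y not in self.weights:        # class-incremental: new class on the fly
            self.weights[y] = np.zeros(self.dim)
        sq_norm = x @ x + 1e-12
        for c, w in self.weights.items():
            target = 1.0 if c == y else -1.0
            loss = max(0.0, 1.0 - target * (w @ x))    # hinge loss
            tau = min(self.C, loss / sq_norm)          # PA-I step size
            self.weights[c] = w + tau * target * x

clf = OvRPassiveAggressive(dim=3)
clf.partial_fit(np.array([1.0, 0.0, 0.0]), "invoice")
clf.partial_fit(np.array([0.0, 1.0, 0.0]), "memo")
label, score = clf.predict(np.array([0.9, 0.1, 0.0]))
print(label, score)   # flag as low-confidence if score < reject_threshold
```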
22

Development of methods for automatic gender-based text classification

Αραβαντινού, Χριστίνα 15 May 2015 (has links)
The rapid growth of social media in recent years creates important research tasks. The collection and management of the huge amount of information available, based on topic, author, age, or gender, are examples of the problems that need to be addressed. The gathering of such information from the digital traces of users, as they express their opinions on different subjects or describe moments of their lives, creates trends which spread through tweets, blog posts, and Facebook statuses. An interesting task is to classify all the available information according to demographic characteristics, such as gender or age. The direct clues users provide about themselves, along with the indirect information that can be derived from linguistic analysis of their texts, are useful elements for identifying an author's gender. More specifically, detecting a user's gender from textual data can be treated as a document classification problem: the document is processed and machine learning techniques are then applied to predict the gender. The features used for gender identification are extracted through statistical and linguistic analysis of the document (e.g., word frequencies, parts of speech, word lengths, content-related features). In this thesis, we aim to develop an automatic system for classifying blog and social media posts according to their authors' gender, and we study the performance of different combinations of features and classifiers.
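A minimal sketch of such a feature-combination pipeline in scikit-learn, with word- and character-level frequency features standing in for two of the feature families mentioned (the toy corpus and labels are invented for illustration; POS-tag and word-length features would be added as further extractors):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import FeatureUnion, Pipeline

# Toy stand-ins for a labeled blog/social-media corpus.
texts = ["short post about football", "loved the new recipe, so tasty",
         "market analysis for the week", "my weekend with the kids"]
labels = ["M", "F", "M", "F"]

# Word n-grams capture content features; character n-grams capture
# sub-word stylistic markers.
features = FeatureUnion([
    ("words", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("chars", TfidfVectorizer(analyzer="char", ngram_range=(2, 4))),
])

pipeline = Pipeline([("features", features),
                     ("clf", LogisticRegression(max_iter=1000))])

# Swapping in other classifiers here is how feature/classifier
# combinations would be compared.
scores = cross_val_score(pipeline, texts, labels, cv=2)
print("accuracy per fold:", scores)
```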
23

Semantic annotations

Dědek, Jan January 2012 (has links)
Four relatively separate topics are presented in the thesis, each representing one particular aspect of the Information Extraction discipline. The first two topics focus on our information extraction methods based on deep language parsing. The first topic covers how deep language parsing was used in our extraction method in combination with manually designed extraction rules. The second topic deals with a method for automated induction of extraction rules using Inductive Logic Programming. The third topic combines information extraction with rule-based reasoning: the core of our extraction method was experimentally reimplemented using semantic web technologies, which allows the extraction rules to be saved in so-called shareable extraction ontologies that do not depend on the original extraction tool. The last topic deals with document classification and fuzzy logic. We investigate the possibility of using information obtained by information extraction techniques for document classification; our implementation of the so-called Fuzzy ILP Classifier was experimentally applied to this task.
24

Deep Learning for Document Image Analysis

Tensmeyer, Christopher Alan 01 April 2019 (has links)
Automatic machine understanding of documents from image inputs enables many applications in modern document workflows, digital archives of historical documents, and general machine intelligence, among others. Together, the techniques for understanding document images comprise the field of Document Image Analysis (DIA). Within DIA, the research community has identified several sub-problems, such as page segmentation and Optical Character Recognition (OCR). As the field has matured, there has been a trend away from heuristic-based methods, designed for particular tasks and domains of documents, and towards machine learning methods that learn to solve tasks from examples of input/output pairs. Within machine learning, a particular class of models, known as deep learning models, has established itself as the state of the art for many image-based applications, including DIA. While traditional machine learning models typically operate on features designed by researchers, deep learning models are able to learn task-specific features directly from raw pixel inputs. This dissertation is a collection of papers that proposes several deep learning models to solve a variety of tasks within DIA. The first task is historical document binarization, where an input image of a degraded historical document is converted to a bi-tonal image to separate foreground text from background regions. The next part of the dissertation considers document segmentation problems, including identifying the boundary between the document page and its background, as well as segmenting an image of a data table into rows, columns, and cells. Finally, a variety of deep models are proposed to solve recognition tasks, including whole-document image classification, identifying the font of a given piece of text, and transcribing handwritten text in low-resource languages.
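As a concrete illustration of the binarization task, here is a toy fully convolutional network in PyTorch that maps a grayscale page image to per-pixel foreground probabilities. It is a minimal stand-in, not the dissertation's architecture, which is much deeper.

```python
import torch
import torch.nn as nn

class TinyBinarizer(nn.Module):
    """Toy fully convolutional net: grayscale document image in,
    per-pixel foreground logit out."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),             # per-pixel logit
        )

    def forward(self, x):
        return self.net(x)

model = TinyBinarizer()
image = torch.rand(1, 1, 256, 256)           # fake degraded page
logits = model(image)
binary = torch.sigmoid(logits) > 0.5         # bi-tonal output
# Training would minimize nn.BCEWithLogitsLoss() against ground-truth masks.
print(binary.shape)
```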
25

Extended Multidimensional Conceptual Spaces in Document Classification

Hadish, Mulugeta January 2008 (has links)
No description available.
26

A MULTI-AGENT FRAMEWORK FOR SEARCH AND FLEXIBILIZATION OF DOCUMENT CLASSIFICATION ALGORITHMS

JOAO ALFREDO PINTO DE MAGALHAES 18 June 2003 (has links)
We are living in the information age, where knowledge is constantly being created at a rate never seen before, mainly due to the Internet, which changed the paradigms of information exchange between people. Through the net, it is possible to publish or exchange whole works, reaching an audience impossible to reach through other means. However, an excess of information can be harmful: having too much information can be equal to having no information at all. Our work was to build a multi-agent framework for the search and flexibilization of textual document classification algorithms for a specific domain. We built an infrastructure that separates the concerns of document search and selection (the platform) from the concerns of document classification (an application of the separation-of-concerns concept). It is possible not only to use existing algorithms, but also to generate new ones that consider domain-specific characteristics of documents. We generated four instances of the framework: a webclipping application, a knowledge management component, a search engine for websites, and an application for the semantic web.
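The separation of concerns described here can be sketched as a pluggable-classifier interface: the platform depends only on an abstract contract, and each framework instance supplies its own algorithm. The sketch below uses Python and invented names purely for illustration; the original framework's agent machinery is not reproduced.

```python
from abc import ABC, abstractmethod

class DocumentClassifier(ABC):
    """Contract the platform depends on; any algorithm can plug in here."""

    @abstractmethod
    def classify(self, text: str) -> str: ...

class KeywordClassifier(DocumentClassifier):
    """Trivial domain-specific algorithm, e.g. for a webclipping instance."""

    def __init__(self, keyword_map):
        self.keyword_map = keyword_map   # {keyword: class label}

    def classify(self, text):
        for keyword, label in self.keyword_map.items():
            if keyword in text.lower():
                return label
        return "other"

class SearchPlatform:
    """Handles document search/selection; knows nothing about how
    classification is implemented."""

    def __init__(self, classifier: DocumentClassifier):
        self.classifier = classifier

    def process(self, documents):
        return [(doc, self.classifier.classify(doc)) for doc in documents]

platform = SearchPlatform(KeywordClassifier({"invoice": "finance"}))
print(platform.process(["Invoice #42 attached", "Meeting notes"]))
```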
27

Incremental and dynamic learning for document images: application to intelligent cognitive scanning of documents

Ngo Ho, Anh Khoi 19 March 2015 (has links)
This research contributes to the field of dynamic learning and classification in stationary and non-stationary environments. The goal of this PhD is to define a new classification framework able to deal with very small learning datasets at the beginning of the process and able to adjust itself according to the variability of the incoming data inside a stream. For that purpose, we propose a solution based on a combination of independent one-class SVM classifiers, each having its own incremental learning procedure. Consequently, each classifier is not sensitive to crossed influences which can emanate from the configuration of the models of the other classifiers. The originality of our proposal comes from the use of the former knowledge kept in the SVM models (represented by all the support vectors found) and its combination with the new data arriving incrementally from the stream. The proposed classification model (mOC-iSVM) is exploited through three variations in the way the existing models are used at each step in time. Our contribution stands in a state of the art where no current solution handles, at the same time, concept drift, the addition or deletion of concepts, and the fusion or division of concepts, while offering a privileged setting for interaction with the user. Inside the ANR DIGIDOC project, our approach was applied to several scenarios of classifying image streams as they can occur in real digitization campaigns. These scenarios allowed validating an interactive exploitation of our incremental classification solution for classifying images arriving in a stream, in order to improve the quality of the digitized images.
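The core mechanism — treating a one-class SVM's support vectors as the memory that is merged with each new batch — can be sketched as follows. This is a simplified reading showing only the plainest "keep all support vectors" strategy for a single concept, not the thesis's three mOC-iSVM variants.

```python
import numpy as np
from sklearn.svm import OneClassSVM

class IncrementalOneClassSVM:
    """One concept, one independent classifier: its memory is the set of
    support vectors, merged with each incoming batch before refitting."""

    def __init__(self, nu=0.1, gamma="scale"):
        self.svm = OneClassSVM(nu=nu, gamma=gamma)
        self.memory = None               # support vectors from past batches

    def partial_fit(self, batch):
        data = batch if self.memory is None else np.vstack([self.memory, batch])
        self.svm.fit(data)
        self.memory = self.svm.support_vectors_   # compress history
        return self

    def score(self, samples):
        return self.svm.decision_function(samples)

# One such classifier per concept avoids cross-influence between classes.
model = IncrementalOneClassSVM().partial_fit(np.random.randn(50, 2))
model.partial_fit(np.random.randn(30, 2) + 1.0)   # drifted batch
print(model.score(np.zeros((1, 2))))
```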
28

Sentiment Analysis with Use of Data Mining

Sychra, Martin January 2016 (has links)
The theme of this work is sentiment analysis, especially from the informatics point of view (and marginally from a linguistic one). The linguistic part discusses the term sentiment and language methods for its analysis, e.g., lemmatization, POS tagging, and the use of stopword lists. More attention is paid to the structure of the sentiment analyzer, which is based on machine learning methods (support vector machines, Naive Bayes, and maximum entropy classification). On the basis of this theoretical background, a functional analyzer is designed and implemented. The experiments focus mainly on comparing the classification methods and on the benefits of the individual preprocessing methods. The success rate of the constructed classifier reaches up to 84% under cross-validation.
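A minimal scikit-learn sketch of this kind of comparison — Naive Bayes vs. SVM vs. maximum entropy (logistic regression) over a bag-of-words representation with stopword removal — on an invented toy corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great movie, loved it", "terrible plot, awful acting",
         "wonderful and touching", "boring, fell asleep",
         "brilliant performance", "worst film this year"]
labels = [1, 0, 1, 0, 1, 0]

# Maximum entropy classification corresponds to logistic regression.
classifiers = {"naive_bayes": MultinomialNB(),
               "svm": LinearSVC(),
               "maxent": LogisticRegression(max_iter=1000)}

for name, clf in classifiers.items():
    # stop_words="english" mirrors the stopword-list preprocessing step;
    # lemmatization/POS tagging would be added as a custom preprocessor.
    pipe = make_pipeline(CountVectorizer(stop_words="english"), clf)
    scores = cross_val_score(pipe, texts, labels, cv=3)
    print(name, scores.mean())
```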
29

Contribution to ontology building and to semantic information retrieval: application to the medical domain

Drame, Khadim 10 December 2014 (has links)
This work aims at providing efficient access to relevant information despite the increasing volume of digital data. Towards this end, we studied the benefit of using an ontology to support an information retrieval (IR) system. We first described a methodology for constructing ontologies: a mixed method which combines natural language processing techniques for extracting knowledge from text with the reuse of existing semantic resources for the conceptualization step. We also developed a method for aligning terms in English and French in order to terminologically enrich the resulting ontology. The application of our methodology resulted in a bilingual ontology dedicated to Alzheimer's disease. We then proposed algorithms for supporting ontology-based semantic IR, using concepts from the ontology to describe documents automatically and to reformulate queries. We were particularly interested in: 1) the extraction of concepts from texts, 2) the disambiguation of terms, 3) a vectorial weighting scheme adapted to concepts, and 4) query expansion. These algorithms have been used to implement a semantic portal about Alzheimer's disease. Further, because the content of documents is not always fully available, we exploited incomplete information to identify the concepts relevant for indexing the whole content of documents. To this end, we proposed two classification methods: the first based on the k-nearest-neighbors algorithm and the second on explicit semantic analysis. Both methods have been evaluated on large standard collections of biomedical documents within an international challenge.
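To illustrate concept-based indexing, here is a toy sketch in which documents and a query are mapped into a hand-made, hypothetical concept space before vector-space matching. A real system would use an ontology such as the one built in the thesis, plus term disambiguation, both omitted here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny hypothetical concept index: each concept groups surface terms, so
# documents and queries are compared in concept space rather than word space.
CONCEPTS = {"C_alzheimer": ["alzheimer", "dementia"],
            "C_therapy":   ["treatment", "therapy", "drug"]}

def to_concepts(text):
    """Map raw text to a bag of concept identifiers (no disambiguation)."""
    words = text.lower().split()
    return " ".join(c for c, terms in CONCEPTS.items()
                    for w in words if w in terms)

docs = ["new drug slows dementia progression",
        "dietary habits of healthy adults"]
query = "alzheimer treatment"

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform([to_concepts(d) for d in docs])
q_vec = vectorizer.transform([to_concepts(query)])
# High similarity with the first document despite zero shared words.
print(cosine_similarity(q_vec, doc_vecs))
```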
30

Applying Natural Language Processing to document classification

Kragbé, David January 2022 (has links)
In today's digital world, we produce and use more electronic documents than ever before, and this trend is far from slowing down. In particular, more and more companies now need to process a considerable number of documents to handle their clients' requests. Scaling this process often requires building an automatic document-treatment pipeline. Since the treatment of a document depends on its content, such pipelines rely heavily on an automatic document classifier to correctly process the documents received. Such a classifier should be able to receive a document of any type and output its class based on the document's text content. In this thesis, we designed and implemented a machine learning pipeline for the automated classification of insurance claims documents. To find the best pipeline, we created several combinations of classifiers (logistic regression and random forest) and embedding models (Fasttext and Doc2vec). We then compared the performance of all the pipelines using the precision and accuracy metrics. We found that a pipeline composed of a Fasttext embedding model combined with a logistic regression classifier was the most performant, yielding a precision of 85% and an accuracy of 86% on our dataset.
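A minimal sketch of the winning combination — FastText document embeddings fed to a logistic regression classifier — using gensim and scikit-learn on an invented toy corpus (the real pipeline, corpus, and hyperparameters differ):

```python
import numpy as np
from gensim.models import FastText
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the insurance-claims corpus used in the thesis.
train_docs = [["water", "damage", "claim"], ["invoice", "for", "repair"],
              ["storm", "damage", "report"], ["payment", "receipt", "attached"]]
train_labels = [0, 1, 0, 1]

# Small FastText model; a real pipeline would train or load a larger one.
ft = FastText(sentences=train_docs, vector_size=32, min_count=1, epochs=50)

def embed(tokens):
    """Average word vectors into a single document vector."""
    return np.mean([ft.wv[t] for t in tokens], axis=0)

X = np.stack([embed(d) for d in train_docs])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

test_docs = [["flood", "damage", "claim"]]   # OOV words handled via subwords
print(clf.predict(np.stack([embed(d) for d in test_docs])))
```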
