1

Comparing Feature Extraction Methods and Effects of Pre-Processing Methods for Multi-Label Classification of Textual Data / Utvärdering av Metoder för Extraktion av Särdrag och Förbehandling av Data för Multi-Taggning av Textdata

Eklund, Martin, January 2018
This thesis investigates how different feature extraction methods applied to textual data affect the results of multi-label classification. Two Bag of Words extraction methods are used, specifically the Count Vector and TF-IDF approaches. A word embedding method, GloVe, is also investigated. Multi-label classification can be useful for categorizing items, such as pieces of music or news articles, that may belong to multiple classes or topics. The effect of different pre-processing methods is also investigated, such as the use of N-grams, stop-word elimination, and stemming. Two classifiers, an SVM and an ANN, are used for multi-label classification with a Binary Relevance approach. The results indicate that the choice of extraction method has a meaningful impact on the resulting classifications, but that no single method consistently outperforms the others. Instead, the results show that the GloVe extraction method performs best on the recall metrics, while the Bag of Words methods perform best on the precision metrics.
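Below is a minimal sketch, in Python with scikit-learn, of the kind of pipeline this abstract describes: two Bag of Words extractors (raw counts and TF-IDF) feeding a Binary Relevance classifier built from one binary SVM per label. The toy corpus, label names, and parameter choices are illustrative assumptions, not the thesis's actual data or configuration.

```python
# A minimal sketch of a Binary Relevance multi-label pipeline; the corpus,
# labels, and parameters below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Toy multi-label corpus: each document may carry several topic tags.
docs = [
    "stock markets fell sharply after the announcement",
    "the band released a new jazz album this week",
    "the new album features songs about stock traders",
]
labels = [["finance"], ["music"], ["finance", "music"]]

# Binarize the label sets into a 0/1 indicator matrix.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# The two Bag of Words extraction methods compared in the thesis:
# raw term counts vs. TF-IDF weighting, here with unigrams+bigrams
# and stop-word elimination as pre-processing.
for vec in (CountVectorizer(ngram_range=(1, 2), stop_words="english"),
            TfidfVectorizer(ngram_range=(1, 2), stop_words="english")):
    X = vec.fit_transform(docs)
    # OneVsRestClassifier trains one independent binary SVM per label,
    # i.e. the Binary Relevance approach.
    clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
    print(type(vec).__name__, clf.predict(vec.transform(["jazz stocks"])))
```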
2

Topical Classification of Images in Wikipedia : Development of topical classification models followed by a study of the visual content of Wikipedia / Ämneklassificering av bilder i Wikipedia : Utveckling av ämneklassificeringsmodeller följd av studier av Wikipedias bilddata

Vieira Bernat, Matheus, January 2023
With over 53 million articles and 11 million images, Wikipedia is the largest encyclopedia in history. Its user base is equally significant, with daily views surpassing 1 billion. A system of this size needs task automation to remain maintainable by its volunteers. For textual data, a machine-learning system called ORES automates tasks such as article quality estimation and article topic routing. A visual counterpart needs to be developed to support tasks such as vandalism detection in images and to give a better understanding of Wikipedia's visual data. Researchers from the Wikimedia Foundation identified a hindrance to implementing the visual counterpart of ORES: the images of Wikipedia lack topical metadata. This work therefore aims to develop a deep learning model that classifies images into a set of topics determined in parallel work. State-of-the-art image classification models are used, together with methods to mitigate the existing class imbalance. The conducted experiments show, among other things, that: using data that considers the hierarchy of labels performs better; resampling techniques are ineffective at mitigating imbalance because of the high label co-occurrence; sample-weighting improves metrics; and initializing parameters as pre-trained on ImageNet rather than randomly yields better metrics. Moreover, we find interesting outlier labels that obtain better performance metrics despite having fewer samples, which is believed to be due either to bias from pre-training or simply to more signal in the label. The distribution of the visual data, as predicted by the models, is also presented. Finally, qualitative examples of the model's predictions on some images are given, demonstrating the model's ability to find correct labels that are missing from the ground truth.
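As a concrete illustration of two of the reported findings, the sketch below (in PyTorch/torchvision, assumed here rather than taken from the author's actual stack) initializes an image classifier from ImageNet-pretrained weights and applies per-label loss weighting to counter class imbalance. The label count, weighting scheme, and data are invented for demonstration.

```python
# A minimal sketch of pretrained initialization plus weighted multi-label
# loss; the topic count, weights, and batch are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_TOPICS = 30  # hypothetical number of topical labels

# Pretrained initialization: start from ImageNet weights, not random ones.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_TOPICS)  # multi-label head

# Weighting: rarer labels get larger positive-class weights so the loss is
# not dominated by frequent topics (cf. the "sample-weighting improves
# metrics" finding; this exact weighting scheme is an assumption).
label_freq = torch.rand(NUM_TOPICS).clamp(min=0.01)  # stand-in frequencies
pos_weight = (1.0 - label_freq) / label_freq
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

images = torch.randn(4, 3, 224, 224)                    # dummy batch
targets = torch.randint(0, 2, (4, NUM_TOPICS)).float()  # dummy labels
loss = criterion(model(images), targets)                # one sigmoid per label
loss.backward()
```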
3

Traitement automatique du langage naturel pour les textes juridiques : prédiction de verdict et exploitation de connaissances du domaine / Natural language processing for legal texts: verdict prediction and exploitation of domain knowledge

Salaün, Olivier, 12 1900
At the intersection of natural language processing and law, legal judgment prediction is a task that represents the problem of predictive justice, i.e. testing the capacity of an automated system to predict the verdict decided by a judge in a court ruling. The thesis presents, from end to end, the implementation of such a task formalized as multi-label classification, along with different strategies for improving classifier performance. The work is based on a corpus of decisions from the Tribunal administratif du logement du Québec (disputes between landlords and tenants). First, preliminary pre-processing and an in-depth analysis of the corpus highlight its most prominent domain aspects. This crucial step ensures that the verdict prediction task is sound, and also exposes biases that must be taken into consideration in subsequent tasks. Indeed, a first testbed comparing different models on this task reveals that they tend to exacerbate biases pre-existing in the corpus (e.g. their verdicts are even less favourable to tenants than a human judge's). In light of this, the next experiments aim at improving classification performance and mitigating these biases, focusing on CamemBERT. To do so, knowledge from the target domain (housing law) is exploited. A first approach employs articles of law as input features under different representations, but this is far from a panacea. Another approach, relying on topic modeling, focuses on the topics that can be extracted from the text describing the disputed facts. An automatic and manual evaluation of the obtained topics shows that they are informative about the reasons leading litigants to go to court. On this basis, the last part of our work revisits the verdict prediction task by relying both on information retrieval (IR) systems and on the topics assigned to decisions. The models designed here are distinctive in relying on jurisprudence (relevant past cases) retrieved according to different search criteria (e.g. similarity at the text and/or topic level). Models using IR criteria based on bags-of-words (Lucene) and topics obtain significant gains in Macro F1 scores. However, the bias-amplification problem, though mitigated, persists. Overall, exploiting domain knowledge improves the performance of verdict predictors, but the persistence of biases in the predictions discourages deploying such models at scale in the real world. On the other hand, the topic modeling results suggest better prospects for improving the accessibility and readability of legal documents for human users.
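A minimal sketch of the core classification setup described above: CamemBERT fine-tuned as a multi-label verdict predictor and evaluated with Macro F1, here using the Hugging Face transformers and scikit-learn APIs as an assumed stack. The label count, example text, decision threshold, and stand-in gold labels are illustrative, not the thesis's actual corpus or hyperparameters.

```python
# A minimal sketch of multi-label verdict prediction with CamemBERT;
# labels, text, and threshold are invented for illustration.
import torch
from sklearn.metrics import f1_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_VERDICT_LABELS = 10  # hypothetical number of verdict labels

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "camembert-base",
    num_labels=NUM_VERDICT_LABELS,
    problem_type="multi_label_classification",  # sigmoid head + BCE loss
)

# A hypothetical decision text: the facts section of a housing dispute.
texts = ["Le locataire n'a pas payé le loyer depuis trois mois."]
batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits
preds = (torch.sigmoid(logits) > 0.5).int().numpy()  # one 0/1 per label

# Macro F1 averages per-label F1 scores, so rare verdicts weigh as much
# as common ones -- the metric on which the thesis reports gains.
gold = preds  # stand-in gold labels, for demonstration only
print(f1_score(gold, preds, average="macro", zero_division=0))
```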
