
MLID: A multilabel extension of the ID3 algorithm

Starefors, Henrik, Persson, Rasmus January 2016 (has links)
Abstract: Machine learning is a subfield of artificial intelligence that revolves around constructing algorithms that can learn from, and make predictions on, data. Instead of following strict and static instructions, the system operates by adapting to and learning from input data in order to make predictions and decisions. This work focuses on a subcategory of machine learning called multilabel classification, in which items introduced to the system are categorized by an analytical model, learned through supervised learning, and each instance of the dataset can belong to multiple labels, or classes. This paper presents the task of implementing a multilabel classifier based on the ID3 algorithm, which we call MLID (Multilabel Iterative Dichotomiser). The solution is presented both in a sequentially executed version and in a parallelized one. We also present a comparison, based on accuracy and execution time, against algorithms of a similar nature in order to evaluate the viability of using ID3 as a base to further expand and build upon with regard to multilabel classification. In order to evaluate the performance of the MLID algorithm, we have measured execution time and accuracy, and summarized precision and recall into what is called the F-measure, which is the harmonic mean of precision and sensitivity. These results are then compared to already established algorithms on a range of datasets of varying sizes in order to assess the viability of the MLID algorithm. The results produced when comparing MLID against other multilabel algorithms such as Binary Relevance, Classifier Chains and Random Trees show that MLID can compete with other classifiers in terms of accuracy and F-measure, but in terms of training time it proves inferior. Through these results, we can conclude that MLID is a viable option to use as a multilabel classifier.
Although some constraints inherited from the original ID3 algorithm do impede the full utility of the algorithm, we are confident that following the same path of development and improvement that ID3 experienced would allow MLID to develop into a suitable choice of algorithm for a diverse range of multilabel classification problems.
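The F-measure used in the evaluation above is the harmonic mean of precision and recall; a minimal, generic sketch of the metric (not the thesis's actual evaluation code):

```python
def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall (F1 when beta == 1)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0  # convention: the undefined 0/0 case is reported as 0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)
```

For example, a classifier with precision 0.8 and recall 0.6 scores roughly 0.686, below the arithmetic mean of 0.7, because the harmonic mean penalizes imbalance between the two quantities.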

Leveraging Collective Wisdom in a Multi-Labeled Blog Categorization Environment

January 2015 (has links)
abstract: One of the most remarkable outcomes of the evolution of the web into Web 2.0 has been the propelling of blogging into a widely adopted and globally accepted phenomenon. While the unprecedented growth of the Blogosphere has added diversity and enriched the media, it has also added complexity. To cope with the relentless expansion, many enthusiastic bloggers have embarked on voluntarily writing, tagging, labeling, and cataloguing their posts in hopes of reaching the widest possible audience. Unbeknownst to them, this reaching-for-others process triggers the generation of a new kind of collective wisdom, a result of shared collaboration and the exchange of ideas, purposes, and objectives through the formation of associations, links, and relations. A mastery of the Blogosphere can greatly help serve the needs of its ever-growing number of users, as well as producers, service providers, and advertisers, by facilitating the categorization and navigation of this vast environment. This work explores a novel method to leverage the collective wisdom from the infused label space for blog search and discovery. The work demonstrates that the wisdom space provides a unique and desirable framework in which to discover the highly sought-after background information that can aid in the building of classifiers. This work incorporates this insight into the construction of a better clustering of blogs, which boosts the performance of classifiers in identifying more relevant labels for blogs, and offers a mechanism for replacing spurious labels and mislabels in a multi-labeled space. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015

Comparing Feature Extraction Methods and Effects of Pre-Processing Methods for Multi-Label Classification of Textual Data / Utvärdering av Metoder för Extraktion av Särdrag och Förbehandling av Data för Multi-Taggning av Textdata

Eklund, Martin January 2018 (has links)
This thesis aims to investigate how different feature extraction methods applied to textual data affect the results of multi-label classification. Two different Bag-of-Words extraction methods are used, specifically the Count Vector and the TF-IDF approaches. A word embedding method, the GloVe extraction method, is also investigated. Multi-label classification can be useful for categorizing items, such as pieces of music or news articles, that may belong to multiple classes or topics. The effect of using different pre-processing methods is also investigated, such as the use of N-grams, stop-word elimination, and stemming. Two different classifiers, an SVM and an ANN, are used for multi-label classification using a Binary Relevance approach. The results indicate that the choice of extraction method has a meaningful impact on the resulting classifications, but that no one method consistently outperforms the others. Instead, the results show that the GloVe extraction method performs best on the recall metrics, while the Bag-of-Words methods perform best on the precision metrics.
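The Count Vector and TF-IDF representations compared above differ only in the weighting applied to raw term counts; a minimal pure-Python sketch of one common TF-IDF variant (the thesis may use a different normalization or smoothing):

```python
import math
from collections import Counter

def count_vectors(docs):
    """Count Vector representation: raw term frequencies per document."""
    return [Counter(doc) for doc in docs]

def tf_idf_vectors(docs):
    """TF-IDF representation: tf = count / doc length, idf = log(N / df)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: one count per document
    idf = {term: math.log(n / df[term]) for term in df}
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        vectors.append({t: (c / len(doc)) * idf[t] for t, c in counts.items()})
    return vectors
```

With this unsmoothed idf, a term appearing in every document (e.g. a stop word) gets weight zero, which is exactly the down-weighting effect TF-IDF is chosen for.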

Modélisation de documents combinant texte et image : application à la catégorisation et à la recherche d'information multimédia / Representation of documents combining text and image : application to categorization and multimedia information retrieval

Moulin, Christophe 22 June 2011 (has links)
Exploiting multimedia documents raises the problem of representing the textual and visual information contained in these documents. Our goal is to propose a model that represents both kinds of information and combines them for two tasks: categorization and information retrieval. This model represents documents as bags of words, which requires defining adapted vocabularies. The textual vocabulary, usually very large, corresponds to the words of the documents, while the visual one is created by extracting low-level features from images. We study the different steps of its creation and a tf.idf weighting of visual words in images, inspired by the approaches classically used for textual words. In the context of text categorization, we introduce a criterion that selects the most discriminative words for the categories in order to reduce the vocabulary size without degrading the classification results. In the multilabel setting, we also present a method for selecting the number of categories to associate with a document. In multimedia information retrieval, we propose an analytical approach based on machine learning techniques to linearly combine the results from the textual and visual information, which significantly improves retrieval results. Our model has shown its efficiency on collections of considerable size and was evaluated in several international competitions such as XML Mining and ImageCLEF.
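The linear combination of textual and visual results described above can be sketched as a simple late-fusion step; the function name and the mixing weight `alpha` are illustrative, not taken from the thesis:

```python
def fuse_scores(text_scores: dict, visual_scores: dict, alpha: float) -> dict:
    """Linearly combine per-document relevance scores from two modalities.

    alpha is a learned or tuned weight in [0, 1]; documents missing from one
    modality contribute a score of 0 for that modality.
    """
    docs = set(text_scores) | set(visual_scores)
    return {
        d: alpha * text_scores.get(d, 0.0) + (1.0 - alpha) * visual_scores.get(d, 0.0)
        for d in docs
    }
```

Ranking documents by the fused score then lets a weak visual signal reorder results that the textual signal alone cannot separate.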

Técnicas de classificação hierárquica multirrótulo / Hierarchical multilabel classification techniques

Cerri, Ricardo 23 February 2010 (has links)
Many of the classification problems described in the Machine Learning and Data Mining literature concern data classification where each example to be classified belongs to a finite, and usually small, set of classes located at the same level. Many classification problems, however, are of a hierarchical nature, where classes can be subclasses or superclasses of other classes. In many hierarchical problems, mainly in the Bioinformatics field, one or more examples can be associated with more than one class simultaneously. These problems are known as hierarchical multilabel classification problems. In this research, different techniques to deal with these kinds of problems were investigated, based on two approaches, named local or Top-Down and global or One-Shot. Three techniques described in the literature were used. The first one, named HMC-BR, is based on the Top-Down approach and uses a binary classification strategy named One-Against-All. The other two techniques, based on the One-Shot approach, are named C4.5H (an extension of the decision tree induction algorithm C4.5) and Clus-HMC (based on the notion of Predictive Clustering Trees, where decision trees are structured as a hierarchy of clusters). In addition to the techniques described in the literature, two new techniques were proposed, named HMC-LP and HMC-CT. These techniques are hierarchical variations of non-hierarchical multilabel classification techniques. The HMC-LP technique uses a label combination strategy and the HMC-CT technique uses a label decomposition strategy. The evaluation of the techniques was performed using metrics specific to this kind of classification. 
The experimental results showed that the proposed techniques achieved performances better than or similar to those of the techniques described in the literature, depending on the evaluation metric used and on the characteristics of the datasets.
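The flat (non-hierarchical) versions of the two label transformations underlying HMC-BR (One-Against-All) and HMC-LP (label combination) can be sketched as follows; the function and variable names are illustrative, not from the thesis:

```python
def one_against_all(examples, labels):
    """Binary-relevance view: one binary dataset per label,
    where the target is membership of that label in the example's label set."""
    return {
        lbl: [(x, lbl in y) for x, y in examples]
        for lbl in labels
    }

def label_powerset(examples):
    """Label-combination view: each distinct label set becomes a single
    class of an ordinary multiclass problem."""
    return [(x, tuple(sorted(y))) for x, y in examples]
```

The first transformation trains one independent classifier per label; the second preserves label correlations at the cost of a class space that grows with the number of distinct label combinations.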

Deep Active Learning Explored Across Diverse Label Spaces

January 2018 (has links)
abstract: Deep learning architectures have been widely explored in computer vision and have depicted commendable performance in a variety of applications. A fundamental challenge in training deep networks is the requirement of large amounts of labeled training data. While gathering large quantities of unlabeled data is cheap and easy, annotating the data is an expensive process in terms of time, labor and human expertise. Thus, developing algorithms that minimize the human effort in training deep models is of immense practical importance. Active learning algorithms automatically identify salient and exemplar samples from large amounts of unlabeled data and can augment maximal information to supervised learning models, thereby reducing the human annotation effort in training machine learning models. The goal of this dissertation is to fuse ideas from deep learning and active learning and design novel deep active learning algorithms. The proposed learning methodologies explore diverse label spaces to solve different computer vision applications. Three major contributions have emerged from this work: (i) a deep active framework for multi-class image classification, (ii) a deep active model with and without label correlation for multi-label image classification and (iii) a deep active paradigm for regression. Extensive empirical studies on a variety of multi-class, multi-label and regression vision datasets corroborate the potential of the proposed methods for real-world applications. Additional contributions include: (i) a multimodal emotion database consisting of recordings of facial expressions, body gestures, vocal expressions and physiological signals of actors enacting various emotions, (ii) four multimodal deep belief network models and (iii) an in-depth analysis of the effect of transfer of multimodal emotion features between source and target networks on classification accuracy and training time. 
These related contributions help comprehend the challenges involved in training deep learning models and motivate the main goal of this dissertation. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
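A core building block of active learning as described above is selecting the unlabeled samples the current model is least certain about; a minimal entropy-based sketch (the dissertation's actual selection criteria may well differ):

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def select_for_labeling(predictions, k):
    """Pick the k samples whose predicted distributions have highest entropy.

    predictions: {sample_id: [class probabilities]} from the current model.
    """
    ranked = sorted(predictions, key=lambda s: entropy(predictions[s]), reverse=True)
    return ranked[:k]
```

Samples with near-uniform predicted distributions are sent to the human annotator first, which is what reduces the overall labeling effort.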

Novel Data Mining Methods for Virtual Screening of Biological Active Chemical Compounds

Soufan, Othman 23 November 2016 (has links)
Drug discovery is a process that takes many years and hundreds of millions of dollars to reach a confident conclusion about a specific treatment. Part of this sophisticated process is based on preliminary investigations that suggest a set of chemical compounds as candidate drugs for the treatment. Computational resources have been playing a significant role in this part through a step known as virtual screening. From a data mining perspective, availability of rich data resources is key in training prediction models. Yet, the difficulties imposed by the big expansion in data and its dimensionality are inevitable. In this thesis, I address the main challenges that arise when data mining techniques are used for virtual screening. In order to achieve efficient virtual screening using data mining, I start by addressing the problem of feature selection and provide an analysis of the best ways to describe a chemical compound for enhanced screening performance. High-throughput screening (HTS) assay data used for virtual screening are characterized by great class imbalance. To handle this problem of class imbalance, I suggest using a novel algorithm called DRAMOTE to narrow down promising candidate chemicals aimed at interaction with specific molecular targets before they are experimentally evaluated. Existing works are mostly proposed for small-scale virtual screening based on making use of a few thousand interactions. Thus, I propose enabling large-scale (or big) virtual screening through learning millions of interactions while exploiting any relevant dependency for better accuracy. A novel solution called DRABAL, which incorporates structure learning of a Bayesian Network as a step to model dependency between the HTS assays, is shown to achieve significant improvements over existing state-of-the-art approaches.
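DRAMOTE itself is not reproduced here, but the class-imbalance problem it targets is often illustrated with a simple baseline such as random majority-class undersampling; a hedged sketch under that assumption:

```python
import random

def undersample_majority(samples, labels, seed=0):
    """Balance a binary dataset by randomly dropping majority-class samples."""
    pos = [(x, y) for x, y in zip(samples, labels) if y == 1]
    neg = [(x, y) for x, y in zip(samples, labels) if y == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    rng = random.Random(seed)  # fixed seed for reproducible screening runs
    kept = rng.sample(majority, len(minority))
    balanced = minority + kept
    rng.shuffle(balanced)
    return [x for x, _ in balanced], [y for _, y in balanced]
```

In HTS data, where actives are a tiny fraction of compounds, such a baseline discards a lot of information, which is why dedicated imbalance-aware algorithms like DRAMOTE are proposed instead.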

Automatická klasifikace smluv pro portál HlidacSmluv.cz / Automated contract classification for portal HlidacSmluv.cz

Maroušek, Jakub January 2020 (has links)
The Contracts Register is a public database containing contracts concluded by public institutions. Due to the number of documents in the database, data analysis is problematic. The objective of this thesis is to find a machine learning approach for sorting the contracts into categories by their area of interest (real estate services, construction, etc.) and implement the approach for use on the web portal Hlídač státu. A large number of categories and the lack of a tagged dataset of contracts complicate the solution.

Traitement automatique du langage naturel pour les textes juridiques : prédiction de verdict et exploitation de connaissances du domaine

Salaün, Olivier 12 1900 (has links)
At the intersection of natural language processing and law, legal judgment prediction is a task that represents the problem of predictive justice, or in other words, the capacity of an automated system to predict the verdict decided by a judge in a court ruling. The thesis presents, from end to end, the implementation of such a task formalized as a multilabel classification, along with different strategies attempting to improve the classifiers' performance. The whole work is based on a corpus of decisions from the Administrative housing tribunal of Québec (disputes between landlords and tenants). 
First of all, a preliminary preprocessing and an in-depth analysis of the corpus highlight its most prominent domain aspects. This crucial step ensures that the verdict prediction task is sound, and also emphasizes biases that must be taken into consideration for future tasks. Indeed, a first testbed comparing different models on this task reveals that they tend to exacerbate biases pre-existing within the corpus (i.e. their verdicts are even less favourable to tenants compared with a human judge). In light of this, the next experiments aim at improving classification performance and at mitigating these biases, focusing on CamemBERT. In order to do so, knowledge from the target domain (housing law) is exploited. A first approach consists in employing articles of law as input features under different representations, but such a method is far from being a panacea. Another approach relying on topic modeling focuses on topics that can be extracted from the text describing the disputed facts. An automatic and manual evaluation of the topics obtained shows evidence of their informativeness about the reasons leading litigants to go to court. On this basis, the last part of our work revisits the verdict prediction task by relying on both an information retrieval (IR) system and the topics assigned to decisions. The models designed here have the particularity of relying on jurisprudence (relevant past cases) retrieved with different search criteria (e.g. similarity at the text or topic level). Models using IR criteria based on bags-of-words (Lucene) and topics obtain significant gains in terms of Macro F1 scores. However, the aforementioned amplified-bias issue, though mitigated, still remains. Overall, the exploitation of domain-related knowledge can improve the performance of verdict predictors, but the persistence of biases in the predictions hinders the deployment of such models on a large scale in the real world. 
On the other hand, the results obtained from topic modeling suggest better prospects for improving the accessibility and readability of legal documents for human users.
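The Macro F1 score used above to compare models averages per-label F1 scores, so frequent and rare verdict labels weigh equally; a generic multilabel sketch (names are illustrative, not from the thesis):

```python
def macro_f1(y_true, y_pred, labels):
    """Average of per-label F1 over all labels (rows are sets of labels)."""
    f1s = []
    for lbl in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if lbl in t and lbl in p)
        fp = sum(1 for t, p in zip(y_true, y_pred) if lbl not in t and lbl in p)
        fn = sum(1 for t, p in zip(y_true, y_pred) if lbl in t and lbl not in p)
        # per-label F1; conventionally 0 when the label is never correctly predicted
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)
```

Because every label contributes equally to the mean, a model that ignores rare verdict types is penalized, which is why Macro F1 is a natural choice in a biased, skewed corpus like this one.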
