231

Collaboration between UK universities : a machine-learning based webometric analysis

Kenekayoro, Patrick January 2014
Collaboration is essential for some types of research, which is why some agencies include collaboration among the requirements for funding research projects. Studying collaborative relationships is important because analyses of collaboration networks can give insights into knowledge-based innovation systems, the roles that different organisations play in a research field and the relationships between scientific disciplines. Co-authored publication data is widely used to investigate collaboration between organisations, but this data is not free and thus may not be accessible for some researchers. Hyperlinks have some similarities with citations, so hyperlink data may be used as an indicator to estimate the extent of collaboration between academic institutions and may be able to show types of relationships that are not present in co-authorship data. However, it has been shown that using raw hyperlink counts for webometric research can sometimes produce unreliable results, so researchers have attempted to find alternative counting methods and have tried to identify the reasons why hyperlinks are created in academic websites. This thesis uses machine learning techniques, an approach that has not previously been widely used in webometric research, to automatically classify hyperlinks and text in university websites in an attempt to filter out irrelevant hyperlinks when investigating collaboration between academic institutions. Supervised machine learning methods were used to automatically classify the web page types that can be found in Higher Education Institutions' websites. The results were assessed to see whether automatically filtered hyperlink data gave better results than raw hyperlink data in terms of identifying patterns of collaboration between UK universities. Unsupervised learning methods were used to automatically identify groups of university departments that are collaborating, or that may benefit from collaborating together, based on their co-appearance in research clusters. Results show that the machine learning methods used in this thesis can automatically identify both the source and target web page categories of hyperlinks in university websites with up to 78% accuracy, which opens the possibility of more effective hyperlink classification and of identifying the reasons why hyperlinks were created in university websites, if those reasons can be inferred from the relationship between the source and target page types. When machine learning techniques were used to filter out hyperlinks that are unlikely to have been created because of collaboration, the correlation between hyperlink data and other collaboration indicators increased. This emphasises the potential of machine learning methods to make hyperlink data a more reliable data source for webometric research. The reasons for university name mentions in the different web page types found in an academic institution's website are broadly the same as the reasons for link creation; this means that classification based on inter-page relationships may also be used to improve name-mention data for webometric research.
Clustering research groups based on the text in their homepages may be useful for identifying research groups or departments with similar research interests, which may be valuable for policy makers monitoring research fields (based on the sizes of the identified clusters) and for identifying future collaborators (based on co-appearances in clusters), if shared research interests are a factor that can influence the choice of a future collaborator. In conclusion, this thesis shows that machine learning techniques can be used to significantly improve the quality of hyperlink data for webometric research, and can also be used to analyse other web-based data to give additional insights that may be beneficial for webometric studies.
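As a rough, hypothetical illustration of the filtering idea in this abstract (not the thesis's actual pipeline), the sketch below classifies the source and target pages of each hyperlink into page-type categories with a simple text classifier and keeps only links whose category pair plausibly reflects collaboration; all page texts, category names and the set of retained category pairs are invented for the example.

```python
# Hedged sketch: filter hyperlink data by classifying source/target page types.
# Categories, training texts and the "collaboration-like" pairs are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["joint EPSRC project with partner universities",
               "staff profile and publications of Dr Smith",
               "campus map, travel and parking information",
               "departmental seminar series programme"]
train_types = ["research", "personal", "admin", "event"]  # hypothetical types

page_clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(train_texts, train_types)

# Each link: (source page text, target page text, source uni, target uni).
links = [("our joint research project ...", "collaborative project page ...", "uniA", "uniB"),
         ("how to find the campus ...", "travel directions ...", "uniA", "uniC")]

keep_pairs = {("research", "research"), ("research", "personal")}  # assumption
filtered = [(src_u, tgt_u) for s, t, src_u, tgt_u in links
            if (page_clf.predict([s])[0], page_clf.predict([t])[0]) in keep_pairs]
print(filtered)  # filtered link counts would then feed the collaboration analysis
```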
232

CREDIT CARD FRAUD DETECTION (Machine learning algorithms) / Kreditkortsbedrägeri med användning av maskininlärningsalgoritmer

Westerlund, Fredrik January 2017
Credit card fraud is a field in which perpetrators perform illegal actions that may affect other individuals or companies negatively. For instance, a criminal can steal credit card information from an account holder and then conduct fraudulent transactions. These activities are a potential contributory factor in how illegal organizations such as terrorists and drug traffickers support themselves financially. Within the machine learning area, several methods possess the ability to detect fraudulent credit card transactions: supervised and unsupervised learning algorithms. This essay investigates the supervised approach, where two algorithms (the Hellinger Distance Decision Tree (HDDT) and Random Forest) are evaluated on a real-life dataset of 284,807 transactions. The main purpose is to develop a well-functioning model with a reasonable capacity to categorize transactions as fraudulent or legitimate. As the data is heavily unbalanced, reducing the false-positive rate is also an important part of conducting research in the chosen area. In conclusion, the evaluated algorithms present fairly similar outcomes, where both models have the capability to distinguish the classes from each other; however, the Random Forest approach performs better than HDDT in all measures of interest.
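The Random Forest half of the comparison can be sketched with standard tools (HDDT has no mainstream library implementation); this is a minimal illustration that assumes the public credit card fraud dataset in a file named creditcard.csv with a binary Class column, not the thesis's exact experimental setup.

```python
# Minimal sketch: Random Forest on a heavily unbalanced fraud dataset.
# File name and column names are assumptions based on the public dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score, classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("creditcard.csv")              # 284,807 transactions
X, y = df.drop(columns="Class"), df["Class"]    # Class: 1 = fraud, 0 = legitimate
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# class_weight="balanced" counteracts the extreme class imbalance.
rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                            n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)

scores = rf.predict_proba(X_te)[:, 1]
print(classification_report(y_te, (scores > 0.5).astype(int), digits=3))
# With unbalanced classes, area under the precision-recall curve is more
# informative than raw accuracy.
print("average precision:", average_precision_score(y_te, scores))
```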
233

Minimisation de fonctions de perte calibrée pour la classification des images / Minimization of calibrated loss functions for image classification

Bel Haj Ali, Wafa 11 October 2013
Image classification is a major challenge today, since it concerns, on the one hand, the millions or even billions of images found all over the web and, on the other, images used in critical real-time applications. Such classification generally relies on learning methods and classifiers that must deliver both accuracy and speed. These learning problems now touch a large number of application domains: the web (profiling, targeting, social networks, search engines), "Big Data" and, of course, computer vision tasks such as object recognition and image classification. This thesis belongs to the last category and presents supervised learning algorithms based on the minimization of so-called "calibrated" loss functions for two kinds of classifiers: k-Nearest Neighbours (kNN) and linear classifiers. These learning methods were tested on large image databases and then applied to biomedical images. As a first step, the thesis reformulates a boosting algorithm for kNN classifiers for large-scale classification, and then presents a second method for learning these NN classifiers using a Newton descent approach for faster convergence. In a second part, the thesis introduces a new learning algorithm based on stochastic Newton descent for linear classifiers, which are known for their simplicity and computational speed. Finally, these three methods were used in a medical application concerning the classification of cells in biology and pathology.
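As a hedged illustration of the ingredients named above (a calibrated surrogate loss minimised by Newton steps for a linear classifier), the sketch below runs batch Newton's method on the logistic loss; the thesis's algorithm is stochastic, so this only shows the gradient and Hessian structure involved.

```python
# Batch Newton descent on the logistic loss for a linear classifier (sketch).
import numpy as np

def newton_logistic(X, y, n_iter=10, ridge=1e-4):
    """X: (n, d) features; y: labels in {-1, +1}. Returns weight vector w."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margins = y * (X @ w)
        p = 1.0 / (1.0 + np.exp(margins))        # sigma(-y * <x, w>)
        grad = -(X.T @ (y * p)) / n              # gradient of mean logistic loss
        curv = p * (1.0 - p)                     # per-example curvature
        H = (X.T * curv) @ X / n + ridge * np.eye(d)
        w -= np.linalg.solve(H, grad)            # Newton step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=200))
w = newton_logistic(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```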
234

Semi-supervised and transductive learning algorithms for predicting alternative splicing events in genes.

Tangirala, Karthik January 1900
Master of Science / Department of Computing and Information Sciences / Doina Caragea / As genomes are sequenced, a major challenge is their annotation -- the identification of genes and regulatory elements, their locations and their functions. For years, it was believed that one gene corresponds to one protein, but the discovery of alternative splicing provided a mechanism for generating different gene transcripts (isoforms) from the same genomic sequence. In recent years, it has become obvious that a large fraction of genes undergoes alternative splicing; thus, understanding alternative splicing is a problem of great interest to biologists. Supervised machine learning approaches can be used to predict alternative splicing events at the genome level. However, supervised approaches require large amounts of labeled data to produce accurate classifiers. While large amounts of genomic data are produced by the new sequencing technologies, labeling these data can be costly and time consuming. Therefore, semi-supervised learning approaches that can make use of large amounts of unlabeled data, in addition to small amounts of labeled data, are highly desirable. In this work, we study the usefulness of a semi-supervised learning approach, co-training, for classifying exons as alternatively spliced or constitutive. The co-training algorithm makes use of two views of the data to iteratively learn two classifiers that can inform each other, at each step, with their best predictions on the unlabeled data. We consider three sets of features for constructing views for the problem of predicting alternatively spliced exons: lengths of the exon of interest and its flanking introns, exonic splicing enhancers (a.k.a. ESE motifs) and intronic regulatory sequences (a.k.a. IRS motifs). Naive Bayes and Support Vector Machine (SVM) algorithms are used as base classifiers in our study. Experimental results show that the use of unlabeled data can result in better classifiers than those obtained from the small amount of labeled data alone. In addition to semi-supervised approaches, we also study the usefulness of graph-based transductive learning approaches for predicting alternatively spliced exons. Like semi-supervised learning algorithms, transductive learning algorithms can make use of unlabeled data, together with labeled data, to produce labels for the unlabeled data; however, a classification model that could be used to classify new unlabeled data is not learned in this case. Experimental results show that graph-based transductive approaches can make effective use of the unlabeled data.
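The co-training loop described above can be sketched roughly as follows, with Naive Bayes and an SVM as base classifiers; the two feature views, the confidence rule and all data here are placeholders rather than the thesis's actual features.

```python
# Generic co-training sketch: two views, each classifier labels its most
# confident unlabeled examples for the shared training pool.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def cotrain(Xv1, Xv2, y, labeled, n_rounds=10, per_round=4):
    """Xv1/Xv2: the two feature views; y: labels (only y[labeled] is trusted);
    labeled: boolean mask of initially labeled rows."""
    y, labeled = y.copy(), labeled.copy()
    clf1, clf2 = GaussianNB(), SVC(probability=True)
    for _ in range(n_rounds):
        clf1.fit(Xv1[labeled], y[labeled])
        clf2.fit(Xv2[labeled], y[labeled])
        for clf, X in ((clf1, Xv1), (clf2, Xv2)):
            unl = np.where(~labeled)[0]
            if len(unl) == 0:
                break
            proba = clf.predict_proba(X[unl])
            top = np.argsort(-proba.max(axis=1))[:per_round]   # most confident
            y[unl[top]] = clf.classes_[proba[top].argmax(axis=1)]
            labeled[unl[top]] = True
    return clf1, clf2

# Toy usage: 30 labeled rows out of 200, two random views.
rng = np.random.default_rng(0)
Xv1, Xv2 = rng.normal(size=(200, 5)), rng.normal(size=(200, 8))
y = (Xv1[:, 0] + Xv2[:, 0] > 0).astype(int)
mask = np.zeros(200, dtype=bool)
mask[:30] = True
clf1, clf2 = cotrain(Xv1, Xv2, y, mask)
```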
235

Analyse et reconnaissance des émotions lors de conversations de centres d'appels / Automatic emotions recognition during call center conversations

Vaudable, Christophe 11 July 2012
Automatic emotion recognition in speech is a relatively recent research subject in the field of speech processing, having been addressed for roughly the past ten years. It now receives considerable attention, not only in academia but also in industry, thanks to improved model performance and system reliability. The first studies were based on acted, non-spontaneous data. Even today, most studies use pre-segmented sequences from a single speaker rather than spontaneous conversation between several speakers, a methodology that makes the resulting models hard to generalise to data collected in a natural context. The work undertaken in this thesis is based on a large quantity of recorded call centre conversations (about 1,620 hours of dialogue), each involving at least two human speakers (a client and a commercial agent). Our goal is to detect client satisfaction through emotional expression. In the first part we present the scores that can be obtained on our data with models based solely on acoustic or on lexical features, and show that an approach using only one of these feature types is not sufficient for satisfactory results. To overcome this, we study the fusion of acoustic, lexical and syntactico-semantic features, and show that this combination yields gains over acoustic models even when the pipeline involves no manual pre-processing (automatic segmentation of conversations, transcriptions produced by a speech recognition system). In the second part we observe that, even though the hybrid acoustic/linguistic models bring useful gains, the amount of data in our detection models is a problem when testing on new and highly varied data (49 hours from the conversation database). To remedy this, we propose a method for enriching our training corpus by automatically selecting new data, drawn from 100 hours of recordings, to be integrated into the training set. These additions double the size of our training set and yield gains over the initial models. Finally, we evaluate our methods on complete conversations rather than on portions of dialogue, as is the case in most studies. For this we use the models from the previous studies (feature fusion, automatic enrichment) and add two further groups of features: i) "structural" features, taking into account information such as the duration of the conversation and the speaking time of each type of speaker; ii) "dialogic" features, including information such as the topic of the conversation and a new concept we call "affective implication", intended to model the impact of the current speaker's emotional production on the other participants in the conversation. We show that when all of this information is combined, we obtain results close to those of a human judge when determining whether a conversation is positive or negative.
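A hedged sketch of the feature-level fusion mentioned above: acoustic descriptors concatenated with lexical TF-IDF features before a single classifier. The transcripts, acoustic values and labels here are placeholders, not data from the corpus.

```python
# Fuse acoustic and lexical features into one matrix for a single classifier.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

transcripts = ["thank you that is perfect",
               "this is unacceptable I want a refund"]
acoustic = np.array([[0.21, 180.0],     # e.g. energy, mean F0 (assumed features)
                     [0.55, 240.0]])
labels = ["positive", "negative"]

lexical = TfidfVectorizer().fit_transform(transcripts)
fused = hstack([csr_matrix(acoustic), lexical])   # acoustic + lexical views
clf = LinearSVC().fit(fused, labels)
```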
236

Ensemble multi-label learning in supervised and semi-supervised settings / Apprentissage multi-label ensembliste dans le context supervisé et semi-supervisé

Gharroudi, Ouadie 21 December 2017
Multi-label learning is a supervised learning problem in which each instance can be associated with multiple target labels simultaneously. It is ubiquitous in machine learning and arises naturally in many real-world applications such as document classification, automatic music tagging and image annotation. In this thesis we formulate multi-label learning as an ensemble learning problem, in order to provide satisfactory solutions for both the multi-label classification and feature selection tasks while remaining consistent with any type of objective loss function. We first discuss why state-of-the-art multi-label algorithms built on a committee of multi-label models suffer from certain practical drawbacks. We then propose a novel strategy for building and aggregating k-labelsets-based committees in the context of ensemble multi-label classification. We analyse in depth the effect of the aggregation step within ensemble multi-label approaches and investigate how this aggregation impacts prediction performance with respect to the objective multi-label loss metric. We then address the specific problem of identifying relevant subsets of features - among potentially irrelevant and redundant features - in the multi-label context, based on the ensemble paradigm. Three wrapper multi-label feature selection methods based on the Random Forest paradigm are proposed; these methods differ in the way they consider label dependence within the feature selection process. Finally, we extend the multi-label classification and feature selection problems to the semi-supervised setting, where only a few labelled instances are available, and propose a new semi-supervised multi-label feature selection approach based on the ensemble paradigm. The proposed model combines ideas from co-training and multi-label k-labelsets committee construction in tandem with an inner out-of-bag evaluation of label-feature importance. Satisfactorily tested on several benchmark datasets, the approaches developed in this thesis show promise for a variety of applications in supervised and semi-supervised multi-label learning.
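A generic k-labelsets committee (in the spirit of RAkEL) can be sketched as follows; this is one possible instance of the strategy discussed above, under assumed data shapes, and not the thesis's exact construction or aggregation rule.

```python
# k-labelsets sketch: each committee member is a label-powerset classifier
# trained on a random subset of k labels; predictions are averaged per label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_k_labelsets(X, Y, k=3, n_models=10, seed=0):
    """Y: (n_samples, n_labels) binary numpy matrix."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        subset = rng.choice(Y.shape[1], size=k, replace=False)
        # Encode each combination of the k labels as one class (a bit-string).
        classes = ["".join(map(str, row)) for row in Y[:, subset]]
        models.append((subset, RandomForestClassifier(random_state=0).fit(X, classes)))
    return models

def predict_k_labelsets(models, X, n_labels, threshold=0.5):
    votes, counts = np.zeros((len(X), n_labels)), np.zeros(n_labels)
    for subset, clf in models:
        decoded = np.array([[int(b) for b in s] for s in clf.predict(X)])
        votes[:, subset] += decoded
        counts[subset] += 1
    return (votes / np.maximum(counts, 1)) >= threshold   # per-label vote average

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
Y = (rng.random(size=(100, 6)) < 0.3).astype(int)
models = fit_k_labelsets(X, Y, k=3, n_models=12)
print(predict_k_labelsets(models, X[:5], n_labels=6).astype(int))
```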
237

Ditch detection using refined LiDAR data : A bachelor’s thesis at Jönköping University / Dikesdetektion med hjälp av raffinerad LiDAR-data

Flyckt, Jonatan, Andersson, Filip January 2019
In this thesis, a method for detecting ditches using digital elevation data derived from LiDAR scans was developed in collaboration with the Swedish Forest Agency. The objective was to compare a machine learning based method with a state-of-the-art automated method, and to determine which LiDAR-based features represent the strongest ditch predictors. This was done by using the digital elevation data to develop several new features, which were used as inputs to a random forest classifier. The output from this classifier was processed to remove noise before a binarisation step produced the final ditch prediction. Several metrics, including Cohen's Kappa, were calculated to evaluate the performance of the method and were compared with the metrics of a reproduced state-of-the-art automated method. The 95% confidence interval for the population Cohen's Kappa was calculated to be [0.567, 0.645]. Overall, features based on the Impoundment attribute derived from the digital elevation data were the strongest ditch predictors. Our method outperformed the state-of-the-art automated method by a wide margin. This thesis shows that it is possible to use AI and machine learning with digital elevation data to detect ditches to a substantial extent.
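The evaluation step lends itself to a short sketch: a random forest over per-cell elevation-derived features, scored with Cohen's Kappa (appropriate here because ditch cells are rare). The feature names and synthetic data are placeholders.

```python
# Sketch: random forest on elevation-derived features, scored with Cohen's Kappa.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))       # e.g. slope, impoundment depth, curvature
y = (X[:, 1] > 1.2).astype(int)      # stand-in for sparse "is ditch" labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:800], y[:800])
pred = clf.predict(X[800:])
print("Cohen's Kappa:", cohen_kappa_score(y[800:], pred))
```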
238

A COMPARATIVE STUDY OF FFN AND CNN WITHIN IMAGE RECOGNITION : The effects of training and accuracy of different artificial neural network designs

Knutsson, Magnus, Lindahl, Linus January 2019
Image recognition and classification are becoming more important as the need to process large numbers of images becomes more common. The aim of this thesis is to compare two types of artificial neural network, the feedforward network (FFN) and the convolutional neural network (CNN), on the task of image recognition. Six models of each type were created, differing in width, depth and the activation function used for learning. This also allowed the experiment to examine whether these parameters had any effect on the rate at which a network learns and how the network design affected the validation accuracy of the models. The models were implemented using the Keras API, and trained and tested on the CIFAR-10 dataset. The results showed that, within the scope of this experiment, the CNN models were always preferable, as they achieved statistically higher validation accuracy than their FFN counterparts.
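The two model families compared above can be sketched in Keras on CIFAR-10 as follows; the widths, depths and activations here are illustrative and do not reproduce the thesis's six designs of each type.

```python
# One FFN and one CNN on CIFAR-10 (illustrative architectures).
import tensorflow as tf
from tensorflow.keras import layers, models

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

ffn = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

for model in (ffn, cnn):
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=5, validation_data=(x_te, y_te), verbose=2)
```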
239

Classificação semi-supervisionada baseada em desacordo por similaridade / Semi-supervised learning based in disagreement by similarity

Gutiérrez, Victor Antonio Laguna 03 May 2010
Semi-supervised learning is a machine learning paradigm in which the induced hypothesis is improved by taking advantage of unlabeled data. It is particularly useful when labeled data are scarce and costly to obtain. In this context the Cotraining algorithm was proposed: a widely used semi-supervised approach that assumes the availability of two independent views of the data. In most real-world scenarios this multi-view assumption is highly restrictive, impairing its usability for classification purposes. In this work we propose the Co2KNN algorithm, a one-view Cotraining approach that combines two different k-Nearest Neighbors (KNN) strategies, referred to as global and local KNN. In the global KNN, the neighborhood used to classify a new instance consists of the training examples that contain this instance within their own k nearest neighbors. In the local KNN, the neighborhood considered to classify a new instance is the set of training examples computed by the traditional KNN approach. Co2KNN is grounded in the theory of Semi-supervised Learning by Disagreement, which argues that for the Cotraining framework to succeed it is sufficient that the classifiers maintain a degree of disagreement that allows joint learning. We carried out experiments suggesting that Co2KNN outperforms several state-of-the-art algorithms, specifically in one-view domains. Moreover, we present an optimized algorithm to cope with the time complexity of computing the global KNN, allowing Co2KNN to tackle real classification problems.
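The two neighbourhood definitions can be made concrete with a short sketch (this is not the full Co2KNN algorithm): the local neighbourhood is the usual k nearest training points, while the global neighbourhood is the set of training points that would count the query among their own k nearest neighbours, i.e. a reverse-kNN query.

```python
# Local vs. global KNN neighbourhoods (sketch of the definitions only).
import numpy as np

def local_knn(X_train, x, k):
    d = np.linalg.norm(X_train - x, axis=1)
    return np.argsort(d)[:k]                     # indices of k nearest points

def global_knn(X_train, x, k):
    hits = []
    for i, xi in enumerate(X_train):
        d = np.linalg.norm(X_train - xi, axis=1)
        d[i] = np.inf                            # exclude xi itself
        kth = np.sort(d)[k - 1]                  # xi's k-th neighbour distance
        if np.linalg.norm(xi - x) <= kth:        # x would enter xi's k-NN set
            hits.append(i)
    return np.array(hits)

X = np.random.default_rng(1).normal(size=(50, 2))
q = np.zeros(2)
print(local_knn(X, q, 5), global_knn(X, q, 5))
```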
240

Classificadores baseados em vetores de suporte gerados a partir de dados rotulados e não-rotulados. / Learning support vector machines from labeled and unlabeled data.

Oliveira, Clayton Silva 30 March 2006
Semi-supervised learning is a machine learning methodology that mixes features of supervised and unsupervised learning. It allows the use of partially labeled databases (containing both labeled and unlabeled data) to train classifiers. The addition of unlabeled data, which are cheaper and generally more available than labeled data, can enhance performance and/or decrease the cost of learning such classifiers (by reducing the quantity of labeled data required). This work analyzes two strategies for performing semi-supervised learning, specifically with Support Vector Machines (SVMs): the direct and indirect approaches. The direct strategy is currently more popular and better studied; it uses labeled and unlabeled data concomitantly in the classifier learning task. However, the addition of many unlabeled examples can lead to very long training times. The indirect strategy is more recent and can attain the benefits of direct semi-supervised learning with shorter training times: it uses the unlabeled data to pre-process the database prior to the learning task, allowing, for example, the filtering of noise and the rewriting of the data in more convenient feature spaces. The main contribution of this master's thesis lies within the indirect strategy: the conceptualization, implementation and analysis of the split learning algorithm, which can be used to perform indirect semi-supervised learning with SVMs. We obtained promising empirical results with this algorithm, which efficiently trains better-performing SVMs in shorter times from partially labeled databases.
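A generic instance of the indirect strategy (not the split learning algorithm itself, whose details are in the thesis) looks like this: the unlabeled data contribute only to a preprocessing step, here a PCA fitted on all data, after which a standard SVM is trained on the labeled portion alone.

```python
# Indirect semi-supervised learning sketch: unlabeled data shape the feature
# space (PCA), then a plain SVM is trained on the labeled subset only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 20))
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unl = rng.normal(size=(400, 20))             # cheap, plentiful unlabeled data

pca = PCA(n_components=5).fit(np.vstack([X_lab, X_unl]))   # uses all data
svm = SVC(kernel="rbf").fit(pca.transform(X_lab), y_lab)   # labeled data only
print("training accuracy:", svm.score(pca.transform(X_lab), y_lab))
```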
