1

Applying Discriminant Functions with One-Class SVMs for Multi-Class Classification

Lee, Zhi-Ying, 09 August 2007
AdaBoost.M1 has been successfully applied to improve the accuracy of learning algorithms on multi-class classification problems. However, it requires that each base classifier achieve accuracy better than 1/2, which can be hard to satisfy in practice for a multi-class problem. A new algorithm, AdaBoost.MK, is therefore designed that only requires base classifiers better than random guessing (1/k for k classes). Early SVM-based multi-class classification algorithms work by splitting the original problem into a set of two-class sub-problems, which makes their time and space requirements very demanding. To obtain low time and space complexity, we develop a base classifier that integrates one-class SVMs with discriminant functions. In this study, a hybrid method that combines AdaBoost.MK with one-class SVMs using improved discriminant functions as the base classifiers is proposed for multi-class classification. Experimental results on data sets from UCI and Statlog show that the proposed approach outperforms many popular multi-class algorithms, including support vector clustering and AdaBoost.M1 with one-class SVMs as the base classifiers.
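To illustrate the relaxed requirement, here is a minimal Python sketch of a SAMME-style multi-class boosting loop which, like AdaBoost.MK, only demands base classifiers better than random guessing (weighted training error below 1 - 1/k). This is an illustration under stated assumptions, using decision stumps as base learners; it is not the thesis's exact algorithm, which uses one-class SVMs with discriminant functions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_multiclass(X, y, n_rounds=50):
    """SAMME-style multi-class boosting: each base learner only needs
    training error below 1 - 1/k, i.e. accuracy above random guessing."""
    y = np.asarray(y)
    n, k = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)                  # uniform example weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        miss = stump.predict(X) != y
        err = np.dot(w, miss)                # weighted training error
        if err <= 0 or err >= 1 - 1.0 / k:   # no better than guessing: stop
            break
        # the log(k - 1) term relaxes the binary 1/2 threshold to 1/k
        alpha = np.log((1 - err) / err) + np.log(k - 1)
        w *= np.exp(alpha * miss)            # up-weight mistakes
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

# Prediction is a weighted vote of the learners' class labels.
```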
2

Classificação com algoritmo AdaBoost.M1: o mito do limiar de erro de treinamento (Classification with the AdaBoost.M1 algorithm: the myth of the training-error threshold)

Leões Neto, Antônio do Nascimento, 20 November 2017
The accelerated growth of data repositories across different areas of activity opens space for research in data mining, in particular on classification methods and the combination of classifiers. Boosting is one such method: it combines the results of several classifiers in order to obtain better results. The main purpose of this dissertation is to experiment with alternatives for increasing the effectiveness and performance of AdaBoost.M1, the implementation of Boosting most often employed in practice. An empirical study was performed, taking stochastic aspects into account, to shed some light on an obscure internal parameter: the algorithm's creators and other researchers assumed that the training error threshold should be correlated with the number of classes in the target data set and that, logically, most data sets should use a value of 0.5. This dissertation presents empirical evidence that this is not a fact, but probably a myth originating from the uncritical carry-over of the theoretical assumptions behind the algorithm's first formulation. To achieve this goal, adaptations of the algorithm are proposed, focused on finding a better rule for setting this threshold in the general case.
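The parameter under scrutiny is easiest to see in code. The sketch below follows Freund and Schapire's AdaBoost.M1 with decision stumps as base learners, but exposes the usually hard-coded 0.5 training-error threshold as a parameter so it can be varied, in the spirit of the dissertation's experiments; the function and argument names are illustrative assumptions, not the dissertation's code.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_m1(X, y, n_rounds=50, error_threshold=0.5):
    """AdaBoost.M1 with the normally hard-coded stopping threshold
    (0.5) exposed as a parameter so its effect can be studied."""
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                # uniform example weights
    learners, betas = [], []
    for _ in range(n_rounds):
        clf = DecisionTreeClassifier(max_depth=1)
        clf.fit(X, y, sample_weight=w)
        correct = clf.predict(X) == y
        err = np.dot(w, ~correct)          # weighted training error
        if err <= 0 or err >= error_threshold:
            break                          # the contested stopping rule
        beta = err / (1 - err)
        w[correct] *= beta                 # down-weight correct examples
        w /= w.sum()
        learners.append(clf)
        betas.append(beta)                 # final vote weight: log(1 / beta)
    return learners, betas
```

Setting `error_threshold` to values other than 0.5 reproduces the kind of experiment the dissertation describes: if the fixed 0.5 rule were truly tied to the number of classes, varying it should behave predictably across data sets, which is the assumption the work tests.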
