1 |
Support Vector Machine Ensemble Based on Feature and Hyperparameter Variation. WANDEKOKEN, E. D. 23 February 2011 (has links)
Support vector machine (SVM) classifiers are currently considered among the most powerful techniques for solving two-class classification problems. To improve on the performance achieved by individual SVM classifiers, a well-established approach is to use an SVM ensemble, that is, a set of SVM classifiers that are simultaneously individually accurate and collectively divergent in their decisions. This work proposes a three-stage approach to building SVM ensembles. First, complementary runs of a genetic-algorithm-based search (GEFS) are used to globally explore the feature space and define a set of feature subsets. Next, for each of these feature subsets, an SVM with optimized hyperparameters is built. Finally, a local search is applied to select an optimized subset of these SVMs, which forms the ensemble that is ultimately produced. Experiments were carried out in the context of fault detection in industrial machines, using 2000 examples of vibration signals from motor pumps installed on oil platforms. The experiments show that the proposed method for building SVM ensembles outperformed other well-established classification approaches.
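The three-stage recipe described in the abstract (global feature-subset search, per-subset SVM tuning, local search over ensemble members) can be sketched roughly as follows. This is a structural sketch only: GEFS, the vibration-signal data, and the hyperparameter grid are replaced with illustrative stand-ins (random subset sampling, synthetic data, a small C grid), not the thesis's actual method.

```python
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = random.Random(0)
X, y = make_classification(n_samples=300, n_features=12, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Stage 1: define a pool of feature subsets (a stand-in for the
# complementary GEFS runs, which globally search the feature space).
subsets = [sorted(rng.sample(range(X.shape[1]), 6)) for _ in range(10)]

# Stage 2: fit one SVM per feature subset, choosing the hyperparameter C
# by validation accuracy (a stand-in for full hyperparameter optimization).
pool = []
for fs in subsets:
    best = max((SVC(C=C).fit(X_tr[:, fs], y_tr) for C in (0.1, 1.0, 10.0)),
               key=lambda m: m.score(X_val[:, fs], y_val))
    pool.append((fs, best))

# Stage 3: greedy forward selection of ensemble members (a simple local
# search), keeping a candidate if it does not hurt majority-vote accuracy.
def vote_acc(members):
    votes = np.mean([m.predict(X_val[:, fs]) for fs, m in members], axis=0)
    return float(np.mean((votes > 0.5) == y_val))

ensemble = []
for cand in pool:
    if not ensemble or vote_acc(ensemble + [cand]) >= vote_acc(ensemble):
        ensemble.append(cand)

print(len(ensemble), round(vote_acc(ensemble), 3))
```

Each member sees a different feature subset and its own tuned C, which is one concrete way to obtain classifiers that are individually accurate yet divergent in their decisions.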
|
2 |
Cooperative Training in Multiple Classifier Systems. Dara, Rozita Alaleh January 2007 (has links)
Multiple classifier systems have been shown to be an effective technique for classification. The success of a multiple classifier system does not depend entirely on the base classifiers and/or the aggregation technique. Other parameters, such as the training data, feature attributes, and correlation among the base classifiers, may also contribute to its success. In addition, interactions among these parameters may affect multiple classifier performance. In the present study, we examine some of these interactions and further investigate their effects on the performance of classifier ensembles.
The proposed research introduces a different direction in the field of multiple classifier systems: we attempt to understand and compare ensemble methods from the cooperation perspective. In this thesis, we focus on cooperation at the training level. We first developed measures to estimate the degree and type of cooperation among training data partitions. These measures enabled us to evaluate the diversity and correlation among a set of disjoint and overlapping partitions. With the aid of properly selected measures and training information, we proposed two new data partitioning approaches: Cluster, De-cluster, and Selection (CDS) and Cooperative Cluster, De-cluster, and Selection (CO-CDS). Finally, a
comprehensive comparative study was conducted, comparing our proposed training approaches with several other approaches in terms of robustness, resulting classification accuracy, and classification stability.
Experimental assessment of the CDS and CO-CDS training approaches confirms their robustness compared with other training approaches. In addition, this study suggests that: 1) cooperation is generally beneficial, and 2) classifier ensembles that cooperate by sharing training information have higher generalization ability than those that do not.
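The cooperation measures developed in the thesis are specific to it; as a rough illustrative stand-in, the degree of overlap among training data partitions (disjoint versus overlapping, as mentioned in the abstract) can be quantified with a mean pairwise Jaccard similarity over the partitions' example indices:

```python
def jaccard(a, b):
    # Jaccard similarity between two index sets: |A ∩ B| / |A ∪ B|.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def mean_pairwise_overlap(partitions):
    # Average Jaccard similarity over all partition pairs: 0.0 for fully
    # disjoint partitions, 1.0 for identical ones.
    pairs = [(i, j) for i in range(len(partitions))
             for j in range(i + 1, len(partitions))]
    return sum(jaccard(partitions[i], partitions[j]) for i, j in pairs) / len(pairs)

# Two ways of splitting 100 training examples across two partitions:
disjoint = [range(0, 50), range(50, 100)]
overlapping = [range(0, 60), range(40, 100)]
print(mean_pairwise_overlap(disjoint))     # → 0.0
print(mean_pairwise_overlap(overlapping))  # → 0.2 (20 shared of 100 total)
```

A measure of this kind makes the distinction between disjoint and overlapping partitions quantitative, which is the kind of information a partitioning strategy such as CDS or CO-CDS could use when assigning examples to base classifiers.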
|