1 |
Bagged clustering / Leisch, Friedrich January 1999 (has links) (PDF)
A new ensemble method for cluster analysis is introduced, which can be interpreted in two different ways: as a complexity-reducing preprocessing stage for hierarchical clustering, and as a procedure for combining several partitioning results. The basic idea is to locate and combine structurally stable cluster centers and/or prototypes. Random effects of the training set are reduced by repeatedly training on resampled sets (bootstrap samples). We discuss the algorithm from both a theoretical and an applied point of view and demonstrate it on several data sets. (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
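The procedure above can be sketched in a few lines: run k-means on bootstrap resamples, pool the resulting centers, cluster the pooled centers hierarchically, and let each point inherit the label of its nearest center. This is an illustrative sketch of the general idea, not the paper's exact algorithm; all parameter choices below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

def bagged_clustering(X, n_boot=10, k=5, n_final=3, seed=0):
    """Bagged-clustering sketch: k-means on bootstrap resamples,
    then hierarchical clustering of the pooled, structurally
    stable centers."""
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))      # bootstrap sample
        km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(X[idx])
        centers.append(km.cluster_centers_)
    centers = np.vstack(centers)                        # (n_boot * k, d)
    # combination step: hierarchical clustering of the pooled centers
    center_labels = fcluster(linkage(centers, method="average"),
                             t=n_final, criterion="maxclust")
    # each point inherits the label of its nearest pooled center
    nearest = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return center_labels[nearest]

# two well-separated toy blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
labels = bagged_clustering(X, n_boot=5, k=4, n_final=2)
```

The resampling step is what makes the pooled centers "structurally stable": centers that recur across bootstrap samples form tight groups that the hierarchical step merges.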
|
2 |
Control de semáforos para emergencias del Cuerpo General de Bomberos Voluntarios del Perú usando redes neuronales / Traffic-light control for emergencies of Peru's Cuerpo General de Bomberos Voluntarios using neural networks / Ayala Garrido, Brenda Elizabeth; Acevedo Bustamante, Felipe January 2015 (has links)
This thesis aimed to present a neural-network-based strategy for vehicles of the Cuerpo General de Bomberos Voluntarios del Perú (CGBVP) during emergencies in the district of Surco, improving the flow of emergency units through traffic. Worldwide, a variety of strategies and systems have been developed to support emergency units.
The proposed system prepares the traffic lights ahead of an approaching unit. Two types of data, location and heading, are used so that the lights are switched some time before the vehicle reaches the intersection.
The study analyzed the LVQ (Learning Vector Quantization) neural network and two types of backpropagation networks to determine which is best suited to the proposed case.
Finally, simulations showed that the [100 85 10] backpropagation network obtained the best results, with a regression value of 0.99 and errors on the order of 10^-5 or smaller.
Across its three simulations, the [100 85 10] backpropagation algorithm responded correctly to all three proposed scenarios, showing only small variations and never exceeding the acceptable bounds of a logical 0 or 1.
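The abstract reports a backpropagation network with a [100 85 10] layer specification and logical 0/1 outputs. A minimal sketch of such a classifier follows; the feature encoding (location plus heading) and the toy targets are assumptions, since the thesis abstract does not specify them, and [100 85 10] is read here as three hidden-layer sizes.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical encoding of the two inputs named in the abstract:
# location (x, y) and heading (sin, cos). The thesis does not give
# the encoding; this is an illustrative stand-in.
rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # [x, y, sin(h), cos(h)]
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)     # toy logical 0/1 light state

# Three hidden layers of 100, 85 and 10 units, trained by
# backpropagation, mirroring the reported [100 85 10] architecture.
net = MLPClassifier(hidden_layer_sizes=(100, 85, 10),
                    max_iter=500, random_state=0).fit(X, y)
```

With logistic outputs, predictions close to 0 or 1 correspond to the "logical 0 or 1" light states the abstract describes.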
|
3 |
Uma abordagem adaptativa de learning vector quantization para classificação de dados intervalares / An adaptive learning vector quantization approach for interval data classification / Silva Filho, Telmo de Menezes e 27 February 2013 (has links)
Symbolic Data Analysis deals with complex data types capable of modeling the internal variability of data and imprecise data. Interval symbolic data arise naturally from values such as daily temperature ranges and blood pressure, among others. This dissertation introduces a Learning Vector Quantization algorithm for interval symbolic data that uses a generalized weighted interval Euclidean distance to measure the distance between data instances and prototypes.
The proposed distance has four special cases. The first is the interval Euclidean distance, which tends to model classes and clusters with spherical shapes. The second is a prototype-based interval distance that models non-spherical subregions of similar sizes within classes. The third allows the distance to handle non-spherical subregions of varying sizes within classes. The last allows the distance to model imbalanced classes composed of subregions of various shapes and sizes. Experiments evaluate the performance of the proposed interval Learning Vector Quantization using all four cases of the proposed distance. Three synthetic interval data sets and one real interval data set are used in these experiments, and the results show the usefulness of a locally weighted distance.
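A weighted interval Euclidean distance of the kind described above can be sketched as follows. The dissertation's generalized distance and its four special cases are not reproduced exactly; the per-feature weighting below is an illustrative assumption.

```python
import numpy as np

def interval_distance(x, w, weights=None):
    """Weighted squared Euclidean distance between two interval
    vectors, each of shape (p, 2) holding [lower, upper] bounds.
    `weights` (one per feature) sketches the adaptive relevance
    weighting; uniform weights recover the plain interval
    Euclidean distance (the first special case)."""
    if weights is None:
        weights = np.ones(len(x))
    diff = (x - w) ** 2                       # squared bound differences
    return float((weights * diff.sum(axis=1)).sum())

# e.g. a daily temperature range and a blood-pressure interval
x = np.array([[20.0, 25.0], [110.0, 130.0]])  # data instance
w = np.array([[18.0, 24.0], [115.0, 128.0]])  # prototype
d = interval_distance(x, w)                   # unweighted special case
```

In an LVQ loop, the winning prototype would be the one minimizing this distance, and the weights would be adapted alongside the prototypes.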
|
5 |
The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction / Clayton, Arnshea 09 June 2006 (has links)
In this thesis, the relative importance of input encoding and learning algorithm for protein secondary structure prediction is explored. A novel input encoding, based on multidimensional scaling applied to a recently published amino acid substitution matrix, is developed and shown to be superior to an arbitrary input encoding. Both decimal-valued and binary input encodings are compared. Two neural network learning algorithms, Resilient Propagation and Learning Vector Quantization, which have not previously been applied to the problem of protein secondary structure prediction, are examined. Input encoding is shown to have a greater impact on prediction accuracy than learning methodology, with a binary input encoding providing the highest training and test set prediction accuracy.
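The encoding idea can be sketched with classical multidimensional scaling: a substitution-style similarity matrix is converted into low-dimensional coordinates, yielding one code vector per amino acid. The 4x4 matrix below is a toy stand-in, not a real substitution matrix, and the similarity-to-distance conversion is an assumption.

```python
import numpy as np

def mds_encoding(S, dim=2):
    """Classical MDS sketch: turn a symmetric similarity matrix S
    into `dim`-dimensional coordinates per symbol."""
    D2 = (S.max() - S) ** 2                    # dissimilarities, squared
    n = len(S)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ D2 @ J                      # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]       # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

# toy 4-"residue" similarity matrix (higher = more substitutable)
S = np.array([[4., 1., 0., 0.],
              [1., 4., 2., 0.],
              [0., 2., 4., 1.],
              [0., 0., 1., 4.]])
coords = mds_encoding(S, dim=2)   # one 2-D input code per residue
```

Feeding such codes to a network, instead of arbitrary one-of-n indices, lets geometrically close inputs represent biochemically similar residues.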
|
6 |
Projeto de classificadores de padrões baseados em protótipos usando evolução diferencial / On the efficient design of a prototype-based classifier using differential evolution / Luiz Soares de Andrade Filho 28 November 2014 (has links)
This dissertation presents an evolutionary approach to the efficient design of prototype-based classifiers using Differential Evolution. To this end, it brings together concepts from the LVQ (Learning Vector Quantization) family of neural networks, introduced by Kohonen for supervised classification, with concepts drawn from the automatic clustering technique proposed by Das et al., based on the Differential Evolution metaheuristic. The proposed approach determines both the optimal number of prototypes per class and the corresponding position of each prototype in the problem space. Comprehensive computer simulations on several data sets commonly used in performance-comparison studies show that the resulting classifier, named LVQ-DE, achieves results equivalent to (and often better than) state-of-the-art prototype-based classifiers, with a much smaller number of prototypes. / In this Master's dissertation we introduce an evolutionary approach for the efficient design of prototype-based classifiers using differential evolution (DE). For this purpose we amalgamate ideas from the Learning Vector Quantization (LVQ) framework for supervised classification by Kohonen (KOHONEN, 2001), with the DE-based automatic clustering approach by Das et al. (DAS; ABRAHAM; KONAR, 2008) in order to evolve supervised classifiers. The proposed approach is able to determine both the optimal number of prototypes per class and the corresponding positions of these prototypes in the data space. By means of comprehensive computer simulations on benchmarking datasets, we show that the resulting classifier, named LVQ-DE, consistently outperforms state-of-the-art prototype-based classifiers.
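The DE part of this design can be sketched as follows: prototype positions are evolved to minimize the nearest-prototype training error. Unlike LVQ-DE, the number of prototypes per class is fixed here (one each), and the data are a toy 1-D problem, so this is a sketch of the mechanism, not the dissertation's algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy 1-D two-class data; one prototype per class with fixed labels.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.5, 40), rng.normal(2, 0.5, 40)])
y = np.array([0] * 40 + [1] * 40)
proto_labels = np.array([0, 1])

def error_rate(protos):
    """DE fitness: nearest-prototype misclassification rate."""
    d = np.abs(X[:, None] - protos[None, :])
    pred = proto_labels[np.argmin(d, axis=1)]
    return np.mean(pred != y)

# DE evolves the prototype positions directly
result = differential_evolution(error_rate, bounds=[(-5, 5), (-5, 5)], seed=0)
```

`result.x` holds the evolved prototype positions and `result.fun` the residual training error; the full LVQ-DE additionally encodes activation bits so that the prototype count itself can evolve.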
|
7 |
Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models / Kaden, Marika 14 July 2016 (has links) (PDF)
This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype-based classification models. The problem of classification is diverse, and evaluating the result by accuracy alone is not adequate in many applications. Therefore, the classification tasks are analyzed more deeply. Possibilities to extend prototype-based methods to integrate extra knowledge about the data or the classification goal are presented, to obtain problem-adequate models. One of the proposed extensions is Generalized Learning Vector Quantization for direct optimization of statistical measures besides the classification accuracy. Modifying the metric adaptation of Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, is also considered.
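The Generalized Learning Vector Quantization mentioned above optimizes a cost built from relative distances to the closest correct and closest wrong prototype. A sketch of that standard cost follows, with the identity transfer function assumed; the thesis's extensions replace or augment this objective with other statistical measures.

```python
import numpy as np

def glvq_cost(X, y, protos, proto_labels):
    """GLVQ cost: for each sample, mu = (d+ - d-)/(d+ + d-),
    where d+ is the squared distance to the closest prototype of
    the correct class and d- to the closest wrong-class prototype.
    mu < 0 means the sample is classified correctly."""
    cost = 0.0
    for x, c in zip(X, y):
        d = ((protos - x) ** 2).sum(axis=1)
        d_plus = d[proto_labels == c].min()
        d_minus = d[proto_labels != c].min()
        cost += (d_plus - d_minus) / (d_plus + d_minus)
    return cost / len(X)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
y = np.array([0, 1])
protos = np.array([[0.1, 0.0], [0.9, 1.0]])
labels = np.array([0, 1])
cost = glvq_cost(X, y, protos, labels)   # negative: both samples correct
```

Because mu is bounded in (-1, 1) and differentiable, prototypes (and, in the relevance/matrix variants, the metric) can be adapted by stochastic gradient descent on this cost.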
|
8 |
Automatic Target Recognition In Infrared Imagery / Bayik, Tuba Makbule 01 September 2004 (has links) (PDF)
The task of automatically recognizing targets in IR imagery has a history of approximately 25 years of research and development. ATR is an application of pattern recognition and scene analysis in the defense industry, and it remains a challenging problem. This thesis may be viewed as an exploratory study of the ATR problem, implementing promising recognition algorithms from the area. The examined algorithms are among the solutions to the ATR problem reported to perform well in the literature. Throughout the study, PCA, subspace LDA, ICA, the nearest mean classifier, the K nearest neighbors classifier, the nearest neighbor classifier, and the LVQ classifier are implemented, and their performances are compared in terms of recognition rate. According to the simulation results, the system that uses ICA as the feature extractor and LVQ as the classifier performs best. The good performance of this system is due to the higher-order statistics of the data and the success of LVQ in adapting the decision boundaries.
|
9 |
[en] DATA SELECTION FOR LVQ / [pt] SELEÇÃO DE DADOS EM LVQ / Peres, Rodrigo Tosta 20 September 2004 (has links)
[pt] In this dissertation, we propose a methodology for data selection in Learning Vector Quantization models, widely referred to in the literature by the acronym LVQ. Training a model (in-sample fit) with a subset selected from the data available for learning can bring great benefits to generalization (out-of-sample) results. In this sense, it is very important to search for data that, besides being representative of their original distributions, are not noise (in the sense defined throughout this dissertation). The proposed method seeks the relevant points of the input set, based on the correlation between the error of each point and the error of the rest of the distribution. In general, the aim is to eliminate a considerable part of the noise while keeping the points that are relevant to fitting the model (learning). Thus, specifically in LVQ, the prototypes are updated during learning using a subset of the originally available training set. Numerical experiments were carried out with simulated and real data, and the results obtained were very interesting, clearly showing the potential of the proposed method. / [en] In this dissertation, we consider a methodology for selection of data in models of Learning Vector Quantization (LVQ). The generalization can be improved by using a subgroup selected from the available data set. We search the original distribution to select relevant data that aren't noise. The search aims at relevant points in the training set based on the correlation between the error of each point and the average error of the remaining data. In general, it is desired to eliminate a considerable part of the noise, keeping the points that are relevant for the learning model. Thus, specifically in LVQ, the method updates the prototypes with a subgroup of the originally available training set. Numerical experiments have been done with simulated and real data. The results were very interesting and clearly indicated the potential of the method.
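Specifically for LVQ, the selection plugs in where prototypes are updated: only the selected subset drives learning. The sketch below shows standard LVQ1 restricted to a given index subset; the selection criterion itself (the error-correlation filtering of the dissertation) is assumed to have been computed beforehand.

```python
import numpy as np

def lvq1_on_subset(X, y, protos, proto_labels, selected, lr=0.05, epochs=20):
    """LVQ1 sketch: the winning prototype is attracted to (same label)
    or repelled from (different label) each training point, but only
    points in `selected` participate in the updates."""
    protos = protos.copy()
    for _ in range(epochs):
        for i in selected:                     # only selected points learn
            j = np.argmin(((protos - X[i]) ** 2).sum(axis=1))
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])   # attract / repel
    return protos

# toy two-class data; here the "selection" trivially keeps every point
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.2, (20, 2)), rng.normal(1, 0.2, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
selected = np.arange(len(X))
protos = lvq1_on_subset(X, y, np.array([[-0.5, -0.5], [0.5, 0.5]]),
                        np.array([0, 1]), selected)
```

In the proposed methodology, `selected` would instead contain only the points whose errors correlate with the rest of the distribution, filtering out noise before the prototypes are adjusted.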
|
10 |
Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models / Kaden, Marika 23 May 2016 (links)
This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype-based classification models. The problem of classification is diverse, and evaluating the result by accuracy alone is not adequate in many applications. Therefore, the classification tasks are analyzed more deeply. Possibilities to extend prototype-based methods to integrate extra knowledge about the data or the classification goal are presented, to obtain problem-adequate models. One of the proposed extensions is Generalized Learning Vector Quantization for direct optimization of statistical measures besides the classification accuracy. Modifying the metric adaptation of Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, is also considered.
Contents:
Symbols and Abbreviations
1 Introduction
1.1 Motivation and Problem Description
1.2 Utilized Data Sets
2 Prototype Based Methods
2.1 Unsupervised Vector Quantization
2.1.1 C-means
2.1.2 Self-Organizing Map
2.1.3 Neural Gas
2.1.4 Common Generalizations
2.2 Supervised Vector Quantization
2.2.1 The Family of Learning Vector Quantizers - LVQ
2.2.2 Generalized Learning Vector Quantization
2.3 Semi-Supervised Vector Quantization
2.3.1 Learning Associations by Self-Organization
2.3.2 Fuzzy Labeled Self-Organizing Map
2.3.3 Fuzzy Labeled Neural Gas
2.4 Dissimilarity Measures
2.4.1 Differentiable Kernels in Generalized LVQ
2.4.2 Dissimilarity Adaptation for Performance Improvement
3 Deeper Insights into Classification Problems - From the Perspective of Generalized LVQ
3.1 Classification Models
3.2 The Classification Task
3.3 Evaluation of Classification Results
3.4 The Classification Task as an Ill-Posed Problem
4 Auxiliary Structure Information and Appropriate Dissimilarity Adaptation in Prototype Based Methods
4.1 Supervised Vector Quantization for Functional Data
4.1.1 Functional Relevance/Matrix LVQ
4.1.2 Enhancement Generalized Relevance/Matrix LVQ
4.2 Fuzzy Information About the Labels
4.2.1 Fuzzy Semi-Supervised Self-Organizing Maps
4.2.2 Fuzzy Semi-Supervised Neural Gas
5 Variants of Classification Costs and Class Sensitive Learning
5.1 Border Sensitive Learning in Generalized LVQ
5.1.1 Border Sensitivity by Additive Penalty Function
5.1.2 Border Sensitivity by Parameterized Transfer Function
5.2 Optimizing Different Validation Measures by the Generalized LVQ
5.2.1 Attention Based Learning Strategy
5.2.2 Optimizing Statistical Validation Measurements for Binary Class Problems in the GLVQ
5.3 Integration of Structural Knowledge about the Labeling in Fuzzy Supervised Neural Gas
6 Conclusion and Future Work
My Publications
A Appendix
A.1 Stochastic Gradient Descent (SGD)
A.2 Support Vector Machine
A.3 Fuzzy Supervised Neural Gas Algorithm Solved by SGD
Bibliography
Acknowledgements
|