11 |
Effects of Network Size in a Recurrent Bayesian Confidence Propagating Neural Network with Two Synaptic Traces. Laius Lundgren, William; Karlsson, Ludwig. January 2021
A modular Recurrent Bayesian Confidence Propagating Neural Network (BCPNN) with two synaptic time traces is a computational neural network that can serve as a model of biological short-term memory. The units in the network are grouped into modules called hypercolumns, within which a competitive winner-takes-all mechanism operates. In this work, the network's capacity to store sequential memories is investigated while varying the size and number of hypercolumns in the network. The network is trained on sets of temporal sequences, where each sequence consists of a set of symbols represented as semi-stable attractor state patterns in the network, and is evaluated by its ability to later recall the sequences. For a given distribution of training sequences, the network's ability to store and recall sequences increased significantly with the size of the hypercolumns. As the number of hypercolumns was increased, the storage capacity grew up to a clear plateau in most cases; beyond this point it remained constant and did not improve with additional hypercolumns (for a given sequence distribution). The storage capacity also depended strongly on the distribution of the sequences. / Bachelor's thesis in electrical engineering 2021, KTH, Stockholm
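To make the competitive mechanism concrete, below is a minimal sketch of winner-takes-all activation within hypercolumns, in the spirit of the architecture described above. The module layout, the hard argmax, and all names (`hypercolumn_wta`, `support`) are illustrative assumptions, not the thesis's code.

```python
# Minimal sketch: winner-takes-all applied independently inside each
# hypercolumn. Module sizes and the hard argmax are assumptions.
import numpy as np

def hypercolumn_wta(support: np.ndarray, n_hypercolumns: int) -> np.ndarray:
    """Apply winner-takes-all within each hypercolumn.

    `support` holds the pre-activation of every unit, laid out as
    n_hypercolumns contiguous groups of equal size. Within each group
    only the unit with the largest support stays active (set to 1).
    """
    groups = support.reshape(n_hypercolumns, -1)   # one row per hypercolumn
    winners = groups.argmax(axis=1)                # winning unit per module
    out = np.zeros_like(groups)
    out[np.arange(n_hypercolumns), winners] = 1.0  # activate winners only
    return out.ravel()

# Example: 3 hypercolumns of 4 units -> exactly one active unit per module
rng = np.random.default_rng(0)
pattern = hypercolumn_wta(rng.normal(size=12), n_hypercolumns=3)
print(pattern.reshape(3, 4))
```

With this grouping, each hypercolumn contributes exactly one active unit to a stored pattern, which is what lets a pattern act as a discrete symbol in a sequence.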
|
12 |
A sequential learning method with Kalman filter and Extreme Learning Machine for regression and time series forecasting problems. NÓBREGA, Jarley Palmeira. 24 August 2015
In machine learning applications, there are situations where the input dataset is not fully
available at the beginning of the training phase. A well-known solution for this class of problem
is to perform the learning process by feeding training instances sequentially. Among the most
recent approaches to sequential learning are methods based on the Single Layer Feedforward
Network (SLFN), notably the sequential extensions of the Extreme Learning Machine (ELM).
The sequential version of the ELM algorithm, named Online Sequential Extreme Learning
Machine (OS-ELM), uses a recursive least squares solution for updating the output weights
through a covariance matrix. However, the implementation of OS-ELM and its extensions suffers
from the problem of multicollinearity in the hidden layer output matrix.
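As an illustration of the recursive least squares update just described, here is a minimal OS-ELM sketch. The network size, the sigmoid hidden layer, and the names (`hidden`, `os_elm_step`) are assumptions for illustration, not the implementation evaluated in the thesis.

```python
# Minimal OS-ELM sketch: random fixed hidden layer, recursive least
# squares update of the output weights through a covariance matrix P.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 20
W = rng.normal(size=(n_in, n_hidden))  # random input weights (fixed, as in ELM)
b = rng.normal(size=n_hidden)          # random hidden biases (fixed)

def hidden(X):
    """Hidden layer output matrix H for a batch of inputs X."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activations

# Initialization on a first batch: P = (H'H)^-1, beta = P H' y.
# When columns of H are nearly collinear, H0.T @ H0 is close to singular
# and this inverse is ill-conditioned -- the multicollinearity problem
# noted above.
X0, y0 = rng.normal(size=(50, n_in)), rng.normal(size=(50, 1))
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0)
beta = P @ H0.T @ y0

def os_elm_step(P, beta, X, y):
    """One recursive least squares update of (P, beta) for a new chunk."""
    H = hidden(X)
    K = np.linalg.inv(np.eye(len(X)) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P             # covariance matrix update
    beta = beta + P @ H.T @ (y - H @ beta)  # output weight update
    return P, beta

# Feed further data one chunk at a time
for _ in range(10):
    Xk, yk = rng.normal(size=(5, n_in)), rng.normal(size=(5, 1))
    P, beta = os_elm_step(P, beta, Xk, yk)
```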
This thesis introduces a new method for sequential learning in which the effects of multicollinearity
are handled. The proposed Kalman Learning Machine (KLM) sequentially updates
the output weights of an OS-ELM-based network using the iterative Kalman filter procedure.
In this work, in order to reduce the computational complexity of the training process, a new
approach for estimating the filter parameters is presented. Moreover, an extension of the method,
named Extended Kalman Learning Machine (EKLM), is presented for problems where the
dynamics of the model are nonlinear.
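A minimal sketch of how a Kalman filter can take over the output weight update is shown below. The identity state transition and the fixed noise covariances `Q` and `R` are simplifying assumptions; the thesis's own scheme for estimating the filter parameters is not reproduced here.

```python
# Sketch of a Kalman-filter update of the output weights, treating beta
# as the filter state with an identity transition. Q and R fixed here
# for illustration; the thesis estimates the filter parameters instead.
import numpy as np

def kalman_weight_step(beta, P, H, y, Q, R):
    """One Kalman update of output weights `beta` for a new chunk with
    hidden layer outputs H and targets y."""
    P_pred = P + Q                          # predict (identity transition)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    beta = beta + K @ (y - H @ beta)        # correct with the innovation
    P = (np.eye(len(P)) - K @ H) @ P_pred   # covariance update
    return beta, P

# Example usage on one chunk of 5 samples with 20 hidden units
rng = np.random.default_rng(1)
L_hid, n_chunk = 20, 5
beta, P = np.zeros((L_hid, 1)), np.eye(L_hid)
Q, R = 1e-4 * np.eye(L_hid), np.eye(n_chunk)
H, y = rng.normal(size=(n_chunk, L_hid)), rng.normal(size=(n_chunk, 1))
beta, P = kalman_weight_step(beta, P, H, y, Q, R)
```

Relative to the plain recursive least squares step, the explicit process-noise term Q keeps the covariance P from collapsing, which is one way a Kalman formulation can remain better conditioned when the hidden layer outputs are collinear.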
The proposed method was evaluated against related state-of-the-art methods for
sequential learning based on the original OS-ELM. The results of the experiments show
that the proposed method can achieve the lowest forecast error when compared with most of
its counterparts. Moreover, the KLM algorithm achieved the lowest average training time
across all experiments, evidence that the proposed method can reduce the
computational complexity of the sequential learning process. A case study was performed by
applying the proposed method to a financial time series forecasting problem. The results
confirm that the KLM algorithm can decrease the forecast error and the average training
time simultaneously when compared with other sequential learning algorithms.
|
13 |
The Basal Ganglia and Sequential Learning. Smith, Denise P. A. 27 November 2012
No description available.
|