1 |
Neural network training for modelling and control
McLoone, Sean Francis, January 1996
No description available.
|
2 |
A practical framework for training sigma-pi neural networks with an application in rotation invariant pattern recognition
Heywood, M. I., January 1994
No description available.
|
3 |
A Neural Network Classifier for Spectral Pattern Recognition: On-Line versus Off-Line Backpropagation Training
Staufer-Steinnocher, Petra; Fischer, Manfred M., 12 1900
In this contribution we evaluate on-line and off-line techniques for training a single-hidden-layer neural network classifier with logistic hidden and softmax output transfer functions on a multispectral pixel-by-pixel classification problem. In contrast to current practice, a multiple-class cross-entropy error function has been chosen as the function to be minimized. The non-linear differential equations cannot be solved in closed form. To solve for a set of locally minimizing parameters, we use gradient descent for parameter updating, with backpropagation used to evaluate the partial derivatives of the error function with respect to the parameter weights. Empirical evidence shows that on-line and epoch-based gradient descent backpropagation fail to converge within 100,000 iterations due to the fixed step size. Batch gradient descent backpropagation training is superior in terms of learning speed and convergence behaviour. Stochastic epoch-based training tends to be slightly more effective than on-line and batch training in terms of generalization performance, especially when the number of training examples is large. Moreover, it is less prone to falling into local minima than the on-line and batch modes of operation. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
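For readers who want the setup in concrete terms, the model and the batch update the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration on made-up data; the layer sizes, learning rate, and toy labels are assumptions, not the authors' settings:

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)    # stabilise the exponentials
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(X, W1, W2):
        H = sigmoid(X @ W1)                     # logistic hidden layer
        return H, softmax(H @ W2)               # softmax output layer

    def cross_entropy(Y, P):
        # multiple-class cross-entropy error, averaged over the batch
        return -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

    def grad_step(X, Y, W1, W2, lr):
        H, P = forward(X, W1, W2)
        d_out = (P - Y) / X.shape[0]            # softmax + cross-entropy output delta
        d_hid = (d_out @ W2.T) * H * (1.0 - H)  # logistic derivative
        return W1 - lr * (X.T @ d_hid), W2 - lr * (H.T @ d_out)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))               # stand-in for multispectral pixels
    Y = np.eye(2)[(X[:, 0] + X[:, 1] > 0).astype(int)]
    W1 = rng.normal(scale=0.1, size=(6, 10))
    W2 = rng.normal(scale=0.1, size=(10, 2))

    for epoch in range(500):                    # batch mode: one update per epoch
        W1, W2 = grad_step(X, Y, W1, W2, lr=0.5)
        if epoch % 100 == 0:
            print(epoch, cross_entropy(Y, forward(X, W1, W2)[1]))

On-line mode would instead call grad_step once per training pattern; the fixed step size the abstract identifies as the cause of non-convergence corresponds to the constant lr above.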
|
4 |
An Analysis of Overfitting in Particle Swarm Optimised Neural Networks
van Wyk, Andrich Benjamin, January 2014
The phenomenon of overfitting, where a feed-forward neural network (FFNN) overtrains on training data at the cost of generalisation accuracy, is known to be specific to the training algorithm used. This study investigates overfitting within the context of particle swarm optimised (PSO) FFNNs. Two of the most widely used PSO algorithms are compared in terms of FFNN accuracy and a description of the overfitting behaviour is established. Each of the PSO components is in turn investigated to determine its effect on FFNN overfitting. A study of the maximum velocity (Vmax) parameter is performed and it is found that smaller Vmax values are optimal for FFNN training. The analysis is extended to the inertia and acceleration coefficient parameters, where it is shown that specific interactions among the parameters have a dominant effect on the resultant FFNN accuracy and may be used to reduce overfitting. Further, the significant effect of the swarm size on network accuracy is also shown, with a critical range of swarm sizes being identified for effective training. The study concludes with an investigation into the effect of different activation functions. Given strong empirical evidence, a hypothesis is made stating that the gradient of the activation function significantly affects the convergence of the PSO. Lastly, the PSO is shown to be a very effective algorithm for the training of self-adaptive FFNNs, capable of learning from unscaled data. / Dissertation (MSc), University of Pretoria, 2014. / Computer Science / MSc / Unrestricted
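For orientation, here is a minimal sketch of an inertia-weight gbest PSO of the kind studied, including the Vmax velocity clamp the thesis analyses. The parameter defaults are common literature values, not the thesis's settings, and the quadratic demo merely stands in for an FFNN training loss:

    import numpy as np

    def pso_train(loss, dim, swarm_size=30, iters=200,
                  w=0.729, c1=1.494, c2=1.494, vmax=0.5, seed=0):
        """Minimise `loss` over R^dim with inertia-weight gbest PSO."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, (swarm_size, dim))  # positions, e.g. FFNN weight vectors
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([loss(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, swarm_size, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            v = np.clip(v, -vmax, vmax)                # the Vmax clamp under study
            x = x + v
            f = np.array([loss(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest

    # toy demonstration: minimise a quadratic in place of a real FFNN training error
    print(pso_train(lambda p: float(np.sum((p - 3.0) ** 2)), dim=5))

To train an FFNN this way, `loss` would unpack each particle into the network's weights and return the training error; overfitting is then monitored by evaluating `gbest` on held-out data.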
|
5 |
Explanation and Downscalability of Google's Dependency Parser Parsey McParseface
Endreß, Hannes, 10 January 2023
Using the data collected during the hyperparameter tuning of Google's dependency parser Parsey McParseface, this thesis explains and analyses in depth feedforward neural networks and the correlations between their hyperparameters during training (a toy sketch of such a correlation analysis follows the contents list).

Contents:
1 Introduction to Neural Networks
1.1 History of AI
1.2 The Role of Neural Networks in AI Research
1.2.1 Artificial Intelligence
1.2.2 Machine Learning
1.2.3 Neural Network
1.3 Structure of Neural Networks
1.3.1 Biological Analogy of Artificial Neural Networks
1.3.2 Architecture of Artificial Neural Networks
1.3.3 Biological Model of Nodes: Neurons
1.3.4 Structure of Artificial Neurons
1.4 Training a Neural Network
1.4.1 Data
1.4.2 Hyperparameters
1.4.3 Training Process
1.4.4 Overfitting
2 Natural Language Processing (NLP)
2.1 Data Preparation
2.1.1 Text Preprocessing
2.1.2 Part-of-Speech Tagging
2.2 Dependency Parsing
2.2.1 Dependency Grammar
2.2.2 Dependency Parsing: Rule-Based and Data-Driven Approaches
2.2.3 Syntactic Parser
2.3 Parsey McParseface
2.3.1 SyntaxNet
2.3.2 Corpus
2.3.3 Architecture
2.3.4 Improvements to the Feedforward Neural Network
3 Training of Parsey's Cousins
3.1 Training a Model
3.1.1 Building the Framework
3.1.2 Corpus
3.1.3 Training Process
3.1.4 Settings for the Training
3.2 Results and Analysis
3.2.1 Results from Google's Models
3.2.2 Effect of the Hyperparameters
4 Conclusion
5 Bibliography
6 Appendix
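The kind of hyperparameter-correlation analysis section 3.2.2 points to can be sketched as follows. Everything here is hypothetical: `train_and_score` is a placeholder for a full SyntaxNet/Parsey McParseface training run, and the grid values are invented:

    from itertools import product
    import numpy as np

    learning_rates = [0.01, 0.05, 0.1]   # invented grid over two hyperparameters
    hidden_sizes = [64, 128, 256]

    def train_and_score(lr, hidden):
        # placeholder for an actual training run; returns a fake accuracy
        rng = np.random.default_rng(int(lr * 1000) + hidden)
        return 0.80 + 0.01 * rng.standard_normal()

    runs = [(lr, h, train_and_score(lr, h))
            for lr, h in product(learning_rates, hidden_sizes)]
    settings = np.array([(lr, h) for lr, h, _ in runs])
    scores = np.array([s for _, _, s in runs])

    # correlation of each hyperparameter with the resulting accuracy
    for name, column in zip(("learning rate", "hidden size"), settings.T):
        print(name, np.corrcoef(column, scores)[0, 1])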
|
6 |
Perspektivní obvodové struktury pro modulární neuronové sítě / Promising Circuit Structures for Modular Neural Networks
Bohrn, Marek, January 2014
The thesis deals with the design of a novel circuit structure suitable for hardware implementations of feedforward neural networks. The structure utilizes an innovative data bus arrangement, and its main contribution is optimizing the utilization of the implemented computing units. The proposed architecture is flexible and suitable for implementing a variety of feedforward neural network structures.
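To make the utilization argument concrete, here is a toy scheduler showing how a layer's neurons can be time-multiplexed onto a smaller fixed pool of multiply-accumulate (MAC) units sharing one data bus. This is purely illustrative; the abstract does not describe the thesis's actual circuit, and the unit counts are invented:

    def schedule(neurons, mac_units):
        # one bus cycle per batch: inputs are broadcast on the shared bus and
        # each MAC unit accumulates the dot product for one assigned neuron
        return [list(range(start, min(start + mac_units, neurons)))
                for start in range(0, neurons, mac_units)]

    slots = schedule(neurons=10, mac_units=4)
    for cycle, batch in enumerate(slots):
        print(f"cycle {cycle}: MAC units evaluate neurons {batch}")
    print(f"utilization: {10 / (len(slots) * 4):.0%}")  # 10 neurons over 3 cycles of 4 units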
|
7 |
Artificial neural network methods in few-body systems
Rampho, Gaotsiwe Joel, 30 November 2002
Physics / M. Sc. (Physics)
|
8 |
Métodos neuronais para a solução da equação algébrica de Riccati e o LQR / Neural Methods for the Solution of the Algebraic Riccati Equation and the LQR
Silva, Fabio Nogueira da, 20 June 2008
Previous issue date: 2008-06-20 / Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Fundação de Amparo à Pesquisa e ao Desenvolvimento Científico e Tecnológico do Maranhão (FAPEMA) / This dissertation presents results on two neural network methods for solving the algebraic Riccati equation (ARE), which arises in many applications, most notably the Linear Quadratic Regulator (LQR) and H2 and H∞ control. First, the real symmetric form of the ARE is presented, together with two methods based on neural computation: a feedforward neural network (FNN), which defines an error as a function of the ARE, and a recurrent neural network (RNN), which converts a constrained optimization problem, restricted to the state-space model, into an unconstrained convex optimization problem by defining an energy as a function of the ARE and its Cholesky factor. A proposal is made for choosing the learning parameters of the RNN by generating surfaces over the parameter variations, so that the network can be tuned for better performance. Computational experiments with perturbations of the plant matrices of the tested systems were performed to analyse the behaviour of the presented methodologies, which are based on homotopy methods: a good initial condition is chosen from a stable operating point and the results are compared with the Schur method. Two 6th-order systems were used, a doubly fed induction generator (DFIG) and an aircraft plant. The results show the RNN to be a good alternative to the FNN and Schur methods.
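For reference, the continuous-time ARE underlying the LQR, together with a plausible form of the two cost functions the abstract describes, can be written as follows (the dissertation's exact formulations are not reproduced in the abstract, so the error and energy functions below are assumptions):

    A^{\top}P + PA - PBR^{-1}B^{\top}P + Q = 0, \qquad K = R^{-1}B^{\top}P

    E_{\mathrm{FNN}}(P) = \tfrac{1}{2}\bigl\| A^{\top}P + PA - PBR^{-1}B^{\top}P + Q \bigr\|_F^{2}

    P = LL^{\top}\ \text{($L$ lower triangular)}, \qquad E_{\mathrm{RNN}}(L) = E_{\mathrm{FNN}}(LL^{\top})

Parametrizing P by its Cholesky factor L keeps P symmetric and positive semidefinite by construction, which removes the explicit constraints and matches the unconstrained formulation the abstract mentions; the LQR gain K is then recovered from the solution P.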
|