1.
Adaptive neurocomputation with spiking semiconductor neurons. Zhao, Le. January 2015.
In this thesis we study neurocomputation by implementing two different neuron models. The first is a semimagnetic micro p-n wire that emulates nerve fibres and supports the propagation and regeneration of electrical pulses. The second is a silicon neuron based on the Hodgkin-Huxley conductance model that can generate spatiotemporal spiking patterns. The former model focuses on the spatial propagation of electrical pulses along a transmission line and supports the thesis that action potentials may be represented by solitary waves. The latter model focuses on dynamical properties, such as how the output patterns of active networks adapt to an external stimulus.

To demonstrate the dynamical properties of spiking networks, we present a central pattern generator (CPG) network with a winnerless-competition architecture. The CPG consists of three silicon neurons connected via reciprocally inhibitory synapses. The network was stimulated with current steps possessing different time delays, and the voltage oscillations of the three neurons were recorded as a function of the strengths of the inhibitory synaptic interconnections and of internal neuron parameters such as voltage thresholds and time delays. The architecture of the network is robust, yet its output depends sensitively on the stimulus, so the CPG can generate stimulus-dependent rhythms. The stimulus-dependent sequential switching between collective modes of oscillation in the network resolves the apparent contradiction between sensitivity and robustness to external stimuli and suggests a mechanism for pattern memorization.

We successfully apply the CPG to modulating the heart rate of animal models (rats). The CPG was stimulated with respiratory signals and generated tri-phasic patterns corresponding to the respiratory cycles; this tri-phasic output was used to synchronize the heart rate with respiration. In this way we artificially induce respiratory sinus arrhythmia (RSA), the fluctuation of heart rate in synchrony with respiration, which is lost in heart failure. Our CPG paves the way to novel medical devices that could provide a therapy for heart failure.
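The abstract does not spell out the winnerless-competition dynamics, but a minimal rate-based sketch of the mechanism is the generalized Lotka-Volterra model of three mutually inhibitory units, in which asymmetric inhibition produces sequential, stimulus-dependent switching of the "winner". The inhibition matrix, initial conditions, and Euler integration below are illustrative assumptions, not the thesis's silicon Hodgkin-Huxley implementation:

```python
import numpy as np

# RHO[i, j] is the strength with which unit j inhibits unit i.
# The cyclic asymmetry (1.5 vs 0.5) is what yields winnerless
# competition -- sequential switching -- instead of a fixed winner.
RHO = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])

def simulate_cpg(stimulus, t_max=200.0, dt=0.01):
    """Forward-Euler integration of da_i/dt = a_i (s_i - sum_j RHO_ij a_j)."""
    a = np.array([0.10, 0.12, 0.08])        # small, unequal initial rates
    trace = np.empty((int(t_max / dt), 3))
    for k in range(trace.shape[0]):
        a = a + dt * a * (stimulus - RHO @ a)
        a = np.clip(a, 1e-9, None)          # firing rates stay non-negative
        trace[k] = a
    return trace

# A uniform drive yields a tri-phasic rhythm: each unit transiently "wins".
activity = simulate_cpg(stimulus=np.ones(3))
print(activity.argmax(axis=1)[::2000])      # index of the winning unit over time
```

Changing the `stimulus` vector reorders and reshapes the sequence of bursts, which is the combination of robustness and stimulus sensitivity the abstract highlights.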
2.
Memory retrieval properties of attractor neural networks [Propriedades de recuperação de memória em redes neurais atratoras]. Rodrigues Neto, Camilo. 5 June 1997.
Attractor neural networks are feedback networks of artificial neurons with no pre-defined connection structure. Networks of this kind exhibit rich dissipative dynamics and are frequently used as associative memories: such devices can retrieve a previously stored memory even when presented with partial or degraded information about it. Storing a memory means creating an attractor for it in the network dynamics, which is done by an appropriate choice of the synaptic weights. In this thesis we concentrate on two classical ways of specifying the synaptic weights, giving rise to the pseudo-inverse and the optimal-weights models. For extremely diluted neural networks, in which the connectivity C and the number of neurons N satisfy C ≪ ln N, we obtain the phase diagrams in the complete parameter space of the pseudo-inverse and optimal-weights models through an analytical study of the dynamics of the retrieval overlap with the stored patterns. We also investigate the retrieval properties of fully connected neural networks using two approaches: an analytical study of the neighbourhood of the stored patterns, and exhaustive enumeration of the attractors via numerical simulations. Finally, we study analytically the problem of categorization in the pseudo-inverse model. Categorization in attractor neural networks is the capacity of a network trained on examples of a concept to develop an attractor for that concept, even though it has had access to the concept only through a finite number of examples.
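For concreteness, here is a brief sketch of the pseudo-inverse (projection) rule and the retrieval overlap the abstract refers to; the network size, the parallel sign dynamics, and the noise level are illustrative choices, not the thesis's exact setting:

```python
import numpy as np

def pseudo_inverse_weights(patterns):
    """Projection (pseudo-inverse) synaptic matrix W = X^T (X X^T)^{-1} X.

    For linearly independent patterns (the rows of X), W projects onto
    their span, so every stored pattern is an exact fixed point of the
    sign dynamics; zeroing the diagonal is the no-self-coupling convention.
    """
    X = np.asarray(patterns, dtype=float)    # shape (p, N), entries +/-1
    W = X.T @ np.linalg.inv(X @ X.T) @ X
    np.fill_diagonal(W, 0.0)
    return W

def retrieve(W, state, max_steps=50):
    """Parallel threshold dynamics s <- sign(W s), run to a fixed point."""
    s = np.asarray(state, dtype=float)
    for _ in range(max_steps):
        s_new = np.where(W @ s >= 0.0, 1.0, -1.0)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

def overlap(s, xi_mu):
    """Retrieval overlap m = (1/N) sum_i s_i xi_i, the order parameter."""
    return float(s @ xi_mu) / len(s)

rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=(3, 200))  # p = 3 patterns, N = 200 units
W = pseudo_inverse_weights(xi)
flips = rng.choice([1.0, -1.0], size=200, p=[0.85, 0.15])
cue = xi[0] * flips                          # degraded copy of pattern 0
print(overlap(retrieve(W, cue), xi[0]))      # close to 1.0 on successful retrieval
```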
3.
The functionality of spatial and time domain artificial neural models. Capanni, Niccolo Francesco. January 2006.
This thesis investigates the functionality of the units used in connectionist Artificial Intelligence systems. Artificial Neural Networks form the foundation of the research, and their units, Artificial Neurons, are first compared with alternative models. This initial work is mainly in the spatial domain and introduces a new neural model, termed a Taylor Series neuron, designed to be flexible enough to assume most mathematical functions. The unit is based on power series theory, and a specific implementation of the Taylor Series neuron is demonstrated. These neurons are particularly useful in evolutionary networks, as they allow complexity to increase without adding units. Training is achieved via various traditional and derived methods based on the Delta Rule, Backpropagation, Genetic Algorithms and associated evolutionary techniques. The new neural unit is presented as a controllable and more highly functional alternative to previous models.

The work on the Taylor Series neuron then moved into time-domain behaviour and, through the investigation of neural oscillators, led to an examination of single-celled intelligence, from which the later work developed. Connectionist approaches to Artificial Intelligence are almost always based on Artificial Neural Networks; however, another route towards Parallel Distributed Processing was introduced, inspired by the intelligence displayed by single-celled creatures called Protoctists (Protists). A new system based on networks of interacting proteins was introduced. These networks were tested on pattern-recognition and control tasks in the time domain and proved more flexible than most neuron models. They were trained using a Genetic Algorithm and a derived Backpropagation Algorithm. Termed "Artificial BioChemical Networks" (ABNs), they are presented as an alternative approach to connectionist systems.
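The abstract states only that the unit is based on power series theory, so the exact parameterization in the sketch below is an assumption for illustration: a trainable truncated series applied to the usual weighted sum, with a Delta-Rule-style gradient update for both the weights and the series coefficients.

```python
import numpy as np

class TaylorSeriesNeuron:
    """Neuron whose activation is a trainable truncated power series:
    y = sum_k c_k u^k with u = w . x. Raising the order K increases the
    unit's expressive power without adding further units."""

    def __init__(self, n_inputs, order=3, lr=0.02, rng=None):
        rng = rng or np.random.default_rng()
        self.w = rng.normal(0.0, 0.3, n_inputs)   # input weights
        self.c = rng.normal(0.0, 0.3, order + 1)  # coefficients c_0 .. c_K
        self.lr = lr

    def forward(self, x):
        u = float(self.w @ x)
        powers = u ** np.arange(len(self.c))      # [1, u, u^2, ...]
        return float(self.c @ powers), u, powers

    def train_step(self, x, target):
        """Gradient step on squared error for both w and c."""
        y, u, powers = self.forward(x)
        err = y - target
        dy_du = float(self.c[1:] @ (np.arange(1, len(self.c)) * powers[:-1]))
        self.c -= self.lr * err * powers          # dy/dc_k = u^k
        self.w -= self.lr * err * dy_du * x       # dy/dw_i = x_i dy/du
        return 0.5 * err ** 2

# Example: one unit fits y = x^3 - x, a target a single fixed-sigmoid
# unit represents poorly; the order, learning rate and sweep counts
# are illustrative, not values from the thesis.
rng = np.random.default_rng(1)
neuron = TaylorSeriesNeuron(n_inputs=1, order=3, lr=0.02, rng=rng)
xs = rng.uniform(-1.0, 1.0, size=(200, 1))
for _ in range(300):
    for x in xs:
        loss = neuron.train_step(x, x[0] ** 3 - x[0])
print("final sample loss:", loss)
```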