  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Parameter Estimation, Optimal Control and Optimal Design in Stochastic Neural Models

Iolov, Alexandre V. January 2016 (has links)
This thesis addresses estimation and control problems in computational neuroscience, which mathematically concern the first-passage times of stochastic diffusion processes. We first derive algorithms for estimating model parameters from first-passage-time observations, and then derive algorithms for controlling first-passage times. Finally, we solve an optimal design problem that combines elements of the first two: we ask how to elicit first-passage times so as to facilitate model estimation based on those observations. The main mathematical tools are the Fokker-Planck partial differential equation for the evolution of probability densities, the Hamilton-Jacobi-Bellman equation of optimal control, and the adjoint optimization principle from optimal control theory. The focus is on developing computational schemes for solving these problems; the schemes are implemented and tested over a wide range of parameters.
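The first-passage-time machinery named in this abstract can be summarised in a standard textbook form (not taken from the thesis itself): for a scalar diffusion with drift μ and noise amplitude σ, the probability density p(x, t) obeys the Fokker-Planck equation with an absorbing boundary at the firing threshold:

```latex
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[\mu(x,t)\,p\bigr]
    + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[\sigma^{2}(x)\,p\bigr],
\qquad p(x_{\mathrm{th}}, t) = 0 .
```

The first-passage-time density then follows from the survival probability, f(t) = -dS/dt with S(t) = ∫_{-∞}^{x_th} p(x, t) dx, which is what makes the Fokker-Planck solver the common building block of the estimation, control, and design problems the abstract lists.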
2

Modelling and Verifying Dynamic Properties of Neuronal Networks in Coq

Bahrami, Abdorrahim 08 September 2021 (has links)
Since the mid-1990s, formal verification has become increasingly important because it can provide guarantees, relative to a given model, that a software system is free of bugs and working correctly. Verification of biological and medical systems is a promising application of formal verification. Human neural networks have recently been emulated and studied as a biological system, and recent research has modelled some crucial neuronal circuits and used model-checking techniques to verify their temporal properties. In large case studies, however, model checkers often cannot prove the given property at the desired level of generality. In this thesis, we provide a model using the Coq proof assistant and prove properties concerning the dynamic behaviour of some basic neuronal structures. Understanding the behaviour of these modules is crucial because they constitute the elementary building blocks of bigger neuronal circuits. By using a proof assistant, we guarantee that the properties are true in the general case, that is, true for any input values, any length of input, and any amount of time. We first define a model of human neural networks and verify properties of this model, starting with properties of neurons, the smallest units in a human neuronal network. In the next step, we prove properties about functional structures of human neural networks called archetypes. Archetypes consist of two or more neurons connected in a suitable way; they are known for displaying particular classes of behaviours, and their compositions govern several important functions such as walking and breathing. The next step is verifying properties about structures that couple different archetypes to perform more complicated actions, and we prove a property about one such composition. With such a model, there is the potential to detect inactive regions of the human brain and to treat mental disorders.
Furthermore, our approach can be generalized to the verification of other kinds of networks, such as regulatory, metabolic, or environmental networks.
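The thesis proves its properties in Coq; as an informal illustration only, the kind of neuron and "simple series" archetype described can be sketched in Python, with a verified-style property checked empirically rather than proved. All names and parameter values below are hypothetical, not taken from the thesis's model:

```python
from dataclasses import dataclass

@dataclass
class Neuron:
    weight: float        # synaptic weight of the single input
    leak: float          # leak factor in [0, 1)
    threshold: float     # firing threshold
    potential: float = 0.0

    def step(self, inp: int) -> int:
        # Discrete-time leaky integration: decay, accumulate, fire-and-reset.
        self.potential = self.leak * self.potential + self.weight * inp
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1
        return 0

def simple_series(inputs, first: Neuron, second: Neuron):
    # "Simple series" archetype: the spike train of the first neuron
    # is the input of the second.
    return [second.step(first.step(i)) for i in inputs]

# A property of the kind proved in Coq, here only checked for one finite trace:
# if each weight meets its threshold, a constant input makes both neurons
# fire at every step.
print(simple_series([1] * 5, Neuron(1.0, 0.5, 1.0), Neuron(1.0, 0.5, 1.0)))  # [1, 1, 1, 1, 1]
```

The point of the proof-assistant approach is precisely that a check like this holds for *all* inputs and trace lengths, not just the one simulated here.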
3

Application and Simulation of Neuromorphic Devices for use in Neural Networks

Wenke, Sam 28 September 2018 (has links)
No description available.
4

Universality and Individuality in Recurrent Networks extended to Biologically inspired networks

Joshi, Nishant January 2020 (has links)
Activities in the motor cortex are dynamical in nature. Modelling these activities and comparing the models with neural recordings helps in understanding the mechanism underlying their generation. For this purpose, Recurrent Neural Networks (RNNs) have emerged as an appropriate tool, yet a clear understanding of how the design choices associated with these networks affect the learned dynamics and internal representations remains elusive. A previous study of discrete-time RNN architectures (LSTM, UGRNN, GRU, and Vanilla) showed that dynamical properties such as the fixed-point topology and the linearised dynamics remain invariant across architectures when the networks are trained on a 3-bit Flip-Flop task, while each architecture nevertheless has a unique representational geometry. The goal of this work is to understand whether these observations also hold for networks that are more biologically realistic in terms of neural activity. We therefore analyse rate networks, which have continuous dynamics and biologically realistic connectivity constraints, and spiking neural networks, in which neurons communicate via discrete spikes as observed in the brain. We reproduce the aforementioned study for the discrete architectures and then show that the fixed-point topology and linearised dynamics remain invariant for the rate networks, but that the same methods are insufficient for finding the fixed points of spiking networks. The representational geometry of the rate and spiking networks differs from that of the discrete architectures but is very similar between the two, although a small subset of the discrete architectures (LSTM) is observed to be close in representation to the rate networks. We show that although these network architectures with varying degrees of biological realism have individual internal representations, the underlying dynamics while performing the task are universal.
We also observe that some discrete networks have close representational similarities with rate networks along with the dynamics. Hence, these discrete networks can be good candidates for reproducing and examining the dynamics of rate networks.
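The fixed-point analysis this abstract relies on is usually framed as minimising a "speed" function q(h) = ½‖F(h) − h‖² over hidden states, where F is one step of the network. A minimal numerical sketch, assuming a small vanilla RNN whose weak recurrent weights make the update map a contraction, so that simple iteration suffices instead of the gradient-based search used on trained networks (all sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                               # hidden units (illustrative)
W = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # weak recurrence -> contraction
b = 0.1 * rng.standard_normal(n)

def F(h):
    # One autonomous step of a vanilla RNN (no external input).
    return np.tanh(W @ h + b)

def q(h):
    # Fixed-point "speed": q(h) = 0.5 * ||F(h) - h||^2, zero exactly at fixed points.
    r = F(h) - h
    return 0.5 * r @ r

h = rng.standard_normal(n)
for _ in range(200):
    h = F(h)   # for a contraction, iterating the map converges to the unique fixed point

print(q(h))    # ~0: h is (numerically) a fixed point
```

For trained networks with multiple fixed points, q is instead minimised from many initial states with a gradient-based optimiser; linearising F around each minimum then gives the local dynamics compared across architectures.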
5

Evolution of spiking neural networks for temporal pattern recognition and animat control

Abdelmotaleb, Ahmed Mostafa Othman January 2016 (has links)
I extended an artificial life platform called GReaNs (the name stands for Gene Regulatory evolving artificial Networks) to explore the evolutionary abilities of a biologically inspired Spiking Neural Network (SNN) model. The encoding of SNNs in GReaNs was inspired by the encoding of gene regulatory networks. As a proof of principle, I used GReaNs to evolve SNNs to obtain a network whose output neuron generates a predefined spike train in response to a specific input. Temporal pattern recognition was one of the main tasks during my studies. It is widely believed that the nervous systems of biological organisms use temporal patterns of inputs to encode information, but the learning mechanism underlying temporal pattern recognition is not yet clear. I studied the ability to evolve spiking networks with different numbers of interneurons, in the absence and presence of noise, to recognize predefined temporal patterns of inputs. Results showed that it was possible to evolve successful networks in the presence of noise; however, networks with only one interneuron were not robust to noise. The foraging behaviour of many small animals depends mainly on their olfactory system. I explored whether it was possible to evolve SNNs able to control an agent finding food particles on 2-dimensional maps. Using firing-rate encoding for the sensory information in the olfactory input neurons, I obtained SNNs able to control an agent that could detect the position of food particles and move toward them. Furthermore, I made unsuccessful attempts to use GReaNs to evolve an SNN controlling an agent that collects sound sources of one type out of several, each sound type being represented as a pattern of different frequencies. In order to use the computational power of neuromorphic hardware, I integrated GReaNs with the SpiNNaker hardware system: only the simulation part was carried out on SpiNNaker, while the remaining steps of the genetic algorithm were done in GReaNs.
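As a loose illustration of the evolve-a-spiking-network setup (not GReaNs itself, whose gene-regulatory encoding is far richer), a toy genetic algorithm can evolve a single synaptic weight of a discrete leaky integrate-and-fire neuron so that it emits a target number of spikes. All parameters below are invented for the sketch:

```python
import random

def lif_spike_count(weight, n_steps=100, leak=0.9, threshold=1.0):
    # Spikes emitted by a discrete leaky integrate-and-fire neuron
    # driven by a constant unit input.
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v = leak * v + weight * 1.0
        if v >= threshold:
            v, spikes = 0.0, spikes + 1
    return spikes

def evolve(target=20, pop_size=30, generations=60, seed=1):
    # Truncation-selection GA over a single weight: keep the best third,
    # refill with Gaussian-mutated copies of the survivors.
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: abs(lif_spike_count(w) - target))
        elite = pop[: pop_size // 3]
        pop = elite + [max(0.0, rng.choice(elite) + rng.gauss(0.0, 0.05))
                       for _ in range(pop_size - len(elite))]
    return pop[0]

best = evolve()  # weight whose spike count is closest to the target
```

In GReaNs the genome encodes an entire network topology rather than one scalar, and on SpiNNaker only the fitness-evaluation simulation would run on the hardware, with selection and mutation staying on the host, mirroring the split described above.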
6

Redundant Input Cancellation by a Bursting Neural Network

Bol, Kieran G. 20 June 2011 (has links)
One of the most powerful and important feats that the brain accomplishes is solving the sensory "cocktail party problem": adaptively suppressing extraneous signals in an environment. Theoretical studies suggest that the solution involves an adaptive filter, which learns to remove the redundant noise. However, the study of neural learning is still in its infancy, and many questions remain about the stability and applicability of synaptic learning rules for neural computation. In this thesis, the implementation of an adaptive filter in the brain of a weakly electric fish, A. leptorhynchus, was studied. It was found to require a cerebellar architecture that could supply independent frequency channels of delayed feedback, and multiple burst learning rules that could shape this feedback. This unifies two ideas about the function of the cerebellum that were previously separate: the cerebellum as an adaptive filter and as a generator of precise temporal inputs.
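The adaptive-filter picture of redundancy cancellation can be sketched with a standard LMS filter (a textbook stand-in, not the cerebellar mechanism analysed in the thesis): a reference copy of the redundant input is adaptively filtered and subtracted from the primary channel, leaving the signal of interest. All signals and parameters here are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 5000, 8, 0.01

ref = rng.standard_normal(n)                      # reference copy of the redundant input
signal = np.sin(2 * np.pi * 0.05 * np.arange(n))  # signal of interest
h_true = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
primary = signal + np.convolve(ref, h_true)[:n]   # signal + filtered redundancy

w = np.zeros(taps)   # adaptive filter weights, learned online
out = np.zeros(n)
for t in range(taps, n):
    x = ref[t - taps + 1 : t + 1][::-1]  # most recent reference samples first
    e = primary[t] - w @ x               # error = primary minus predicted redundancy
    w += mu * e * x                      # LMS weight update
    out[t] = e                           # cancelled output, approximately the signal

residual = np.mean((out[-1000:] - signal[-1000:]) ** 2)  # small once adapted
```

The cerebellar circuit studied in the thesis plays the role of the tapped delay line here: independent frequency channels of delayed feedback supply the "taps", and burst learning rules take the place of the LMS update.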
10

Topological Optimization in Network Dynamical Systems

Van Bussel, Frank 25 August 2010 (has links)
No description available.
