  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An Improved Path Integration Mechanism Using Neural Fields Which Implement A Biologically Plausible Analogue To A Kalman Filter

Connors, Warren Anthoney 22 February 2013 (has links)
Interaction with the world is necessary for both animals and robots to complete tasks. This interaction requires a sense of self: the orientation of the robot or animal with respect to the world. Creating and maintaining this model is a task that animals perform with ease, but it can be difficult for robots because of uncertainty in the world, in sensing, and in the robot's movement. The difficulty of this estimation increases in sensory-deprived environments, where no external inputs are available to correct the estimate. Self-generated cues of movement are therefore needed, such as vestibular input in an animal or accelerometer input in a robot. In spite of these difficulties, animals maintain this model easily, which raises the question of whether we can learn from nature by examining the biological mechanisms for pose estimation in animals. Previous work has shown that neural fields, coupled with a mechanism for updating the estimate, can maintain a pose estimate through a sustained area of activity called a packet. Analysis of this mechanism, however, has revealed conditions under which the field produces unexpected results or breaks down under high-acceleration inputs. This analysis illustrates the challenge of controlling the activity packet size under strong inputs, and the limited speed capability of the existing mechanism. As a result, a novel weight combination method is proposed to provide higher speed and increased robustness. The result is more than a doubling of the existing speed capability, and resistance of the field to breakdown under strong rotational inputs. This updated neural field model provides a method for maintaining a stable pose estimate. To show this, a novel comparison between the proposed neural field model and the Kalman filter is presented, yielding comparable performance in pose prediction.
This work shows that the updated neural field model performs biologically plausible pose prediction using Bayesian inference, providing a biological analogue to a Kalman filter.
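The comparison above rests on the classical Kalman predict/update cycle. A minimal one-dimensional sketch of that cycle for heading estimation is given below; the noise parameters and the dead-reckoning scenario are illustrative assumptions, not values from the thesis.

```python
# Minimal 1D Kalman filter sketch for heading estimation.
# Noise parameters q and r are illustrative assumptions.

def kalman_step(x, p, u, z, q=0.01, r=0.1):
    """One predict/update cycle for a scalar heading estimate.

    x: current heading estimate, p: estimate variance,
    u: self-motion cue (e.g. integrated angular velocity),
    z: external measurement (None when sensory-deprived),
    q: process noise variance, r: measurement noise variance.
    """
    # Predict: integrate the self-motion cue; uncertainty grows.
    x_pred = x + u
    p_pred = p + q
    if z is None:
        # Sensory-deprived case: dead reckoning only, variance keeps growing.
        return x_pred, p_pred
    # Update: blend prediction and measurement weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Dead reckoning drifts in uncertainty; one external fix shrinks the variance.
x, p = 0.0, 1.0
for _ in range(5):
    x, p = kalman_step(x, p, u=0.1, z=None)   # no external input
x, p = kalman_step(x, p, u=0.1, z=0.6)        # external measurement arrives
```

The sensory-deprived branch mirrors the situation described in the abstract: with no external input, the estimate is propagated from self-motion cues alone and its variance grows until a measurement becomes available.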
2

Development of a computational and neuroinformatics framework for large-scale brain modelling

Sanz Leon, Paula 16 October 2014 (has links)
The central theme of this thesis is the development of both a generalised computational model for large-scale brain networks and the neuroinformatics platform that enables a systematic exploration and analysis of those models. In this thesis we describe the mathematical framework of the computational model at the core of the tool The Virtual Brain (TVB), designed to recreate collective whole-brain dynamics by virtualising brain structure and function, allowing simultaneous outputs of a number of experimental modalities such as electro- and magnetoencephalography (EEG, MEG) and functional magnetic resonance imaging (fMRI). The implementation allows for a systematic exploration and manipulation of every underlying component of a large-scale brain network model (BNM), such as the neural mass model governing the local dynamics or the structural connectivity constraining the space-time structure of the network couplings. We also review previous studies related to brain network models and multimodal neuroimaging integration, and detail how they are related to the general model presented in this work. Practical examples describing how to build a minimal *in silico* primate brain model are given. Finally, we explain how the resulting software tool, TVB, facilitates collaboration between experimentalists and modellers by exposing both a comprehensive simulator for brain dynamics and an integrative framework for the management, analysis, and simulation of structural and functional data in an accessible, web-based interface.
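The core structure of a brain network model of the kind described here is local population dynamics coupled through a structural connectivity matrix. The toy sketch below illustrates only that structure; the two-node network, the tanh local dynamics, and all parameters are assumptions for illustration, not TVB's actual equations.

```python
# Toy brain network model: each node follows simple local dynamics and
# receives input from other nodes via a connectivity (weights) matrix.
# Two nodes, tanh dynamics, and all parameters are illustrative assumptions.
import math

def simulate_bnm(weights, steps=1000, dt=0.01, coupling=0.5, drive=1.0):
    """Euler-integrate dx_i/dt = -x_i + tanh(coupling * sum_j w_ij x_j + drive)."""
    n = len(weights)
    x = [0.1 * (i + 1) for i in range(n)]   # small, distinct initial states
    for _ in range(steps):
        net = [coupling * sum(weights[i][j] * x[j] for j in range(n)) + drive
               for i in range(n)]
        x = [x[i] + dt * (-x[i] + math.tanh(net[i])) for i in range(n)]
    return x

# Two mutually coupled regions settle into a common steady state.
w = [[0.0, 1.0], [1.0, 0.0]]
state = simulate_bnm(w)
```

Swapping the local `tanh` dynamics for a neural mass model, and the 2x2 matrix for a tract-based connectome with conduction delays, is exactly the kind of component-wise substitution the abstract describes.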
3

Stochastic neural field models of binocular rivalry waves

Webber, Matthew January 2013 (has links)
Binocular rivalry is an interesting phenomenon in which perception oscillates between different images presented to the two eyes. This thesis is primarily concerned with modelling travelling waves of visual perception during transitions between these perceptual states. In order to model this effect in a way that retains as much analytical insight into the mechanisms as possible, we employed neural field theory. That is, rather than modelling individual neurons in a neural network, we treat the cortical surface as a continuous medium and establish integro-differential equations for the activity of a neural population. Our basic model, which has been used by many previous authors both within and outside of neural field theory, is a one-dimensional network of neurons for each eye. It is assumed that each network responds maximally to a particular feature of the underlying image, such as orientation. Recurrent connections within each network are taken to be excitatory, and connections between the networks inhibitory. For such a topology to exhibit the oscillations found in binocular rivalry, there needs to be some form of slow adaptation which weakens the cross-connections under continued firing. By first considering a deterministic version of this model, we show that this slow adaptation in fact also serves as a necessary "symmetry breaking" mechanism. Using this knowledge to make some mild assumptions, we are then able to derive an expression for the shape of a travelling wave and its wave speed. We then show that these predictions of our model are consistent not only with numerical simulations but also with experimental evidence. It turns out that noise cannot simply be ignored, as it is a fundamental part of the underlying biology.
Since methods for analysing stochastic neural fields did not exist before our work, we first adapt methods originally intended for reaction-diffusion PDE systems to a stochastic version of a simple neural field equation. By regarding the motion of a stochastic travelling wave as being made up of two distinct components, namely the drift-diffusion of its overall position and fast fluctuations of its shape around some average front shape, we are able to derive a stochastic differential equation for the front position with respect to time. The front position is found to undergo a drift-diffusion process with constant coefficients, and our analysis agrees with numerical simulation. The original problem of stochastic binocular rivalry is then revisited with this new toolkit, and we predict that the first passage time for a perceptual wave to hit a fixed barrier should follow an inverse Gaussian distribution, a result which could potentially be tested experimentally. We also consider the implications of our stochastic work for types of neural field equation other than those used for modelling binocular rivalry. In particular, for neural fields which support pulled fronts propagating into an unstable state, the stochastic version of such an equation has wave fronts which undergo subdiffusive motion, as opposed to the standard diffusion found in the binocular rivalry case.
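The first-passage prediction can be illustrated with a small Monte Carlo sketch of a drift-diffusing front position: the mean first-passage time of Brownian motion with positive drift to a fixed barrier equals barrier/drift (the mean of the inverse Gaussian distribution). Parameters below are illustrative, not fitted to rivalry data.

```python
# Monte Carlo sketch: first-passage times of a drift-diffusion process.
# Drift, noise amplitude, and barrier are illustrative assumptions.
import random

def first_passage_time(drift=1.0, sigma=0.5, barrier=2.0, dt=0.005, rng=random):
    """Simulate dX = drift*dt + sigma*dW from X=0 until X first crosses barrier."""
    x, t = 0.0, 0.0
    while x < barrier:
        x += drift * dt + sigma * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return t

rng = random.Random(42)
times = [first_passage_time(rng=rng) for _ in range(300)]
mean_t = sum(times) / len(times)
# The inverse Gaussian mean is barrier/drift = 2.0; the sample mean is close.
```

A fuller check of the prediction would compare the whole histogram of `times` against the inverse Gaussian density, which is how the experimental test suggested in the abstract could proceed.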
4

Spatiotemporal patterns of neural fields in a spherical cortex with general connectivity

Unknown Date (has links)
The human brain consists of billions of neurons, and these neurons pool together in groups at different scales. On one hand these neural entities tend to behave as single units; on the other they show collective macroscopic patterns of activity. The neural units communicate with each other and process information over time. This communication occurs through small electrical impulses which, at the macroscopic scale, are measurable as brain waves. The electric field produced collectively by macroscopic groups of neurons within the brain can be measured on the surface of the skull via a brain imaging modality called electroencephalography (EEG). The brain has a varied connection topology, in which an area is not only connected homogeneously to its adjacent neighbors but can also exchange activity directly with distant areas [16]. The timing of this communication between different neural units gives rise to emergent spatiotemporal patterns. The dynamics of these patterns and the formation of neural activity on the cortical surface are influenced by the presence of long-range connections between heterogeneous neural units. Large-scale brain activity is thought to be involved in information processing and in the implementation of the brain's cognitive functions. This research aims to determine how spatiotemporal pattern formation in the brain depends on its connection topology. This topology consists of homogeneous connections within local cortical areas alongside couplings between distant functional units as heterogeneous connections. Homogeneous connectivity, or the synaptic weight distribution representing the large-scale anatomy of cortex, is assumed to depend on the Euclidean distance between interacting neural units. Altering the characteristics of the inhomogeneous pathways as control parameters guides brain pattern formation through phase transitions at critical points.
In this research, linear stability analysis is applied to a macroscopic neural field in a one-dimensional circular and a two-dimensional spherical model of the brain in order to find the destabilization mechanism and the subsequently emerging patterns. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
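Linear stability analysis of a neural field typically starts from an Amari-type equation; the generic form and its dispersion relation are sketched below with assumed notation, not taken from the dissertation.

```latex
\tau \frac{\partial u(x,t)}{\partial t} = -u(x,t)
  + \int_{\Omega} w(x - x')\, S\!\big(u(x',t)\big)\, dx' + I(x)
```

Linearizing about a homogeneous equilibrium $u_0$ with perturbations $\varepsilon\, e^{\lambda t} e^{i k x}$ gives the dispersion relation

```latex
\tau \lambda(k) = -1 + S'(u_0)\, \hat{w}(k)
```

where $\hat{w}(k)$ is the Fourier transform of the connectivity kernel (on a spherical cortex, the spherical-harmonic transform). A pattern with wavenumber $k$ destabilizes the homogeneous state when $\operatorname{Re}\lambda(k) > 0$, and the heterogeneous long-range pathways act as control parameters that shift this threshold.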
5

A dynamic neural field model of visual working memory and change detection

Johnson, Jeffrey S 01 January 2008 (has links)
Many tasks rely on our ability to hold information about a stimulus in mind after it is no longer visible and to compare this information with incoming perceptual information. This ability relies on a short-term form of memory known as visual working memory. Research and theory at the behavioral and neural levels have begun to provide important insights into the basic properties of the neuro-cognitive systems underlying this form of memory. However, to date, no neurally plausible theory has been proposed that addresses both the storage of information in working memory and the comparison process in a single framework. To address these limitations, I have developed a new model in which working memory is realized via peaks of activation in dynamic neural fields, and comparison emerges as a result of interactions among the model's layers. In a series of simulations, I show how the model can be used to capture each of the components underlying performance in simple visual comparison tasks--from the encoding, consolidation, and maintenance of information in working memory, to comparison and updating in response to changed inputs. Importantly, the proposed model demonstrates how these elementary perceptual and cognitive functions emerge from the coordinated activity of an integrated, dynamic neural system. The model also makes novel predictions that were tested in a series of behavioral experiments. Specifically, when similar items are stored, shared lateral inhibition produces a sharpening of the peaks of activation associated with each item in memory. In the context of the model, this leads to the prediction that change detection will be enhanced for similar versus dissimilar features. This prediction was confirmed in a series of change detection experiments exploring memory for both color and orientation. In addition to sharpening, shared lateral inhibition among similar items produces mutual repulsion between nearby peaks.
This leads to the prediction that when similar features are held, they will be systematically biased away from each other over delays. This prediction was confirmed in a cued color recall experiment comparing memory for a "far" color with memory for two "close" colors.
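The storage mechanism described here, a self-sustained peak of activation, can be sketched with a minimal Amari-type field: local excitation plus broad inhibition lets a bump of activity persist after its input is removed. The kernel shape, resting level, and all parameters below are illustrative assumptions, not fitted values from this model.

```python
# Minimal dynamic neural field: a localized stimulus creates a peak of
# activation that persists through a delay after the stimulus is removed.
# All parameters are illustrative assumptions.
import math

N = 60                                            # ring of N field sites

def kernel(d):
    # Local Gaussian excitation minus constant (global) inhibition.
    return 1.5 * math.exp(-d * d / 18.0) - 0.4

# Ring distance avoids boundary effects.
W = [[kernel(min(abs(i - j), N - abs(i - j))) for j in range(N)]
     for i in range(N)]

def step(u, inp, dt=0.1, h=-2.0):
    f = [1.0 if v > 0 else 0.0 for v in u]        # Heaviside firing rate
    return [u[i] + dt * (-u[i] + h + inp[i]
            + sum(W[i][j] * f[j] for j in range(N)))
            for i in range(N)]

u = [-2.0] * N                                    # field at resting level
stim = [4.0 if abs(i - 30) < 3 else 0.0 for i in range(N)]
for _ in range(150):
    u = step(u, stim)                             # encoding: stimulus present
for _ in range(300):
    u = step(u, [0.0] * N)                        # delay: stimulus removed
active = sum(1 for v in u if v > 0)               # width of the memory peak
```

After the delay the peak is still above threshold near site 30, which is the field-theoretic analogue of maintenance in working memory; sharpening and repulsion between nearby peaks arise when a second stimulus is added within the inhibitory range.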
6

Learning 3D Shape Representations for Reconstruction and Modeling

Biao, Zhang 04 1900 (has links)
Neural fields, also known as neural implicit representations, are powerful for modeling 3D shapes. They encode shapes as continuous functions mapping 3D coordinates to scalar values such as the signed distance function (SDF) or occupancy probability. Neural fields represent complex shapes using an MLP: the MLP takes spatial coordinates as input, applies nonlinear transformations, and approximates the continuous function of the neural field. During training, the MLP's weights are learned through backpropagation. This PhD thesis presents novel methods for shape representation learning and generation with neural fields. The first part introduces an interpretable and high-quality reconstruction method for neural fields. A neural network predicts labeled points, improving surface visualization and interpretability. The method achieves accurate reconstruction even from rendered image input. A binary classifier, based on the predicted labeled points, represents the shape's surface with precision. The second part focuses on shape generation, a challenge in generative modeling. Complex data structures like octrees or BSP-trees are challenging to generate with neural networks. To address this, a two-step framework is proposed: an autoencoder compresses the neural field into a fixed-size latent space, followed by training generative models within that space. Incorporating sparsity into the shape autoencoding network reduces dimensionality while maintaining high-quality shape reconstruction. Autoregressive transformer models enable the generation of complex shapes with intricate details. This research also explores the potential of denoising diffusion models for 3D shape generation. The latent space is compressed further, leading to more efficient and effective generation of high-quality shapes. Remarkable shape reconstruction results are achieved even without sparse structures.
The approach combines the latest generative-model advancements with novel techniques, advancing the field, and has the potential to revolutionize shape generation in gaming, manufacturing, and beyond. In summary, this PhD thesis proposes novel methods for shape representation learning, generation, and reconstruction. It contributes to the field of shape analysis and generation by enhancing interpretability, improving reconstruction quality, and pushing the boundaries of efficient and effective 3D shape generation.
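The interface of a neural field for shapes is simply a function from 3D coordinates to a signed distance or occupancy value. The sketch below shows that interface with an analytic sphere SDF standing in for a trained MLP; the sphere and the sigmoid temperature are illustrative placeholders, not the thesis' network.

```python
# Sketch of the neural-field interface for 3D shapes: a function mapping
# 3D coordinates to a signed distance (negative inside, positive outside)
# or an occupancy probability. An analytic sphere stands in for the MLP.
import math

def sdf(p):
    """Signed distance to the unit sphere centered at the origin."""
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - 1.0

def occupancy(p, sharpness=10.0):
    """Occupancy probability as a sigmoid of the negated signed distance."""
    return 1.0 / (1.0 + math.exp(sharpness * sdf(p)))

inside = sdf((0.0, 0.0, 0.0))     # -1.0: one unit inside the surface
surface = sdf((1.0, 0.0, 0.0))    #  0.0: exactly on the surface
outside = sdf((2.0, 0.0, 0.0))    #  1.0: one unit outside the surface
```

A trained network replaces `sdf` with a learned function of the same signature; surface extraction (e.g. marching cubes) then queries it on a grid and locates the zero level set.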
7

The study of neural oscillations by traversing scales in the brain

Hutt, Axel 27 May 2011 (has links) (PDF)
The work presents recent contributions in the field of computational neuroscience and sketches possible research perspectives.
8

Nonlinear analysis methods in neural field models

Veltz, Romain 16 December 2011 (has links)
This thesis deals with mesoscopic models of cortex called neural fields. The neural field equations describe the activity of cortical populations of neurons with common anatomical/functional properties. They were introduced in the 1950s and are known as the Wilson-Cowan equations. Mathematically, they are integro-differential equations with delays, the delays modeling signal propagation and the passage of signals across synapses and the dendritic tree. In the first part, we recall the biology necessary to understand this thesis and derive the main equations. In the second part, we study these equations from the dynamical systems point of view, characterizing their equilibrium points and dynamics. In the third part, we study these delayed equations in general by giving formulas for the bifurcation diagrams, proving a center manifold theorem, and calculating the principal normal forms. We first apply these results to simple one-dimensional neural fields, which allow a detailed study of the dynamics. Finally, in the last part, we apply these results to three models of visual cortex. The first two models are from the literature and describe, respectively, a hypercolumn, i.e. the basic element of the first visual area (V1), and a network of such hypercolumns. The last model is a new model of V1 which generalizes the two previous models while allowing a detailed study of the specific effects of delays.
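The delayed integro-differential equations described here have the generic form below; the notation is assumed for illustration rather than copied from the thesis.

```latex
\tau \frac{\partial V(x,t)}{\partial t} = -V(x,t)
  + \int_{\Omega} J(x,x')\, S\!\big(V(x',\, t - \tau(x,x'))\big)\, dx' + I(x,t)
```

Here $V(x,t)$ is the population activity at cortical location $x$, $J$ the connectivity, $S$ a sigmoidal firing-rate function, and the space-dependent delay $\tau(x,x') = \lVert x - x'\rVert / c + \tau_s$ collects propagation at finite conduction speed $c$ plus a constant synaptic/dendritic delay $\tau_s$, the two delay sources named in the abstract.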
9

Cortical plasticity, dynamic neural fields and self-organization

Detorakis, Georgios 23 October 2013 (has links)
The aim of the present work is the modeling of the formation, maintenance and reorganization of somatosensory cortical maps using the theory of dynamic neural fields. A dynamic neural field is an integro-differential equation that can be used to describe the activity of a cortical surface. Such a field is used in this work to model part of area 3b of the primary somatosensory cortex, and a skin model is used to provide input to the cortical model. From a computational point of view, the model performs distributed, numerical and adaptive computations. The model is able to explain the initial formation of topographic maps and their reorganization in the presence of a cortical lesion or sensory deprivation, with the balance between excitation and inhibition playing a crucial role. In addition, the model is consistent with neurophysiological data on area 3b and accounts for a number of experimental results. Finally, attention appears to play a key role in the organization of the receptive fields of the somatosensory cortex. This work therefore proposes a definition of somatosensory attention and a potential explanation of its influence on somatotopic organization through a number of experimental results. By changing the gains of the lateral connections, it is possible to control the shape of the solution of the neural field, leading to significant alterations of receptive field sizes and, ultimately, to the development of finely mapped zones and better performance on demanding haptic tasks.
10

Nonlinear analysis methods in neural field models

Veltz, Romain 16 December 2011 (has links) (PDF)
This thesis deals with mesoscopic models of cortex called neural fields. The neural field equations describe the activity of cortical populations of neurons with common anatomical/functional properties. They were introduced in the 1950s and are known as the Wilson-Cowan equations. Mathematically, they are integro-differential equations with delays, the delays modeling signal propagation and the passage of signals across synapses and the dendritic tree. In the first part, we recall the biology necessary to understand this thesis and derive the main equations. In the second part, we study these equations from the dynamical systems point of view, characterizing their equilibrium points and dynamics. In the third part, we study these delayed equations in general by giving formulas for the bifurcation diagrams, proving a center manifold theorem, and calculating the principal normal forms. We first apply these results to simple one-dimensional neural fields, which allow a detailed study of the dynamics. Finally, in the last part, we apply these results to three models of visual cortex. The first two models are from the literature and describe, respectively, a hypercolumn, i.e. the basic element of the first visual area (V1), and a network of such hypercolumns. The last model is a new model of V1 which generalizes the two previous models while allowing a detailed study of the specific effects of delays.
