About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Oscillations and spike statistics in biophysical attractor networks

Lundqvist, Mikael January 2013 (has links)
The work of this thesis concerns how cortical memories are stored and retrieved. In particular, large-scale simulations are used to investigate the extent to which associative attractor theory is compatible with known physiology and in vivo dynamics. The first question we ask is whether dynamical attractors can be stored in a network with realistic connectivity and activity levels. Using estimates of biological connectivity, we demonstrate that attractor memories can be stored and retrieved in biologically realistic networks, operating on psychophysical timescales and displaying firing rate patterns similar to those of in vivo layer 2/3 cells. This was achieved in the presence of additional complexity such as synaptic depression and cellular adaptation. Fast transitions into attractor memory states were related to the self-balancing inhibitory and excitatory currents in the network. In order to obtain realistic firing rates in the network, strong feedback inhibition was used, dynamically maintaining balance over a wide range of excitation levels. The balanced currents also led to the high spike train variability commonly observed in vivo. In addition, the feedback inhibition resulted in emergent gamma oscillations associated with attractor retrieval, congruent with the view of gamma as accompanying active cortical processing. While the dynamics during retrieval of attractor memories did not depend on the size of the simulated network, above a certain size the model displayed an emergent attractor state that did not code for any memory but was active as a default state of the network. This default state was accompanied by oscillations in the alpha frequency band. Such alpha oscillations are correlated with idling and cortical inhibition in vivo, and they have similar functional correlates in the model. Both the inhibitory and excitatory effects, as well as the phase effects, of ongoing alpha observed in vivo were reproduced in the model in a simulated threshold-stimulus detection task.
At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 8: In press.
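The storage-and-retrieval principle that the abstract builds on can be illustrated with a minimal Hopfield-style sketch. This is a drastic simplification of the biophysical spiking model studied in the thesis, and the network size, pattern count, and cue corruption level below are arbitrary illustrative choices:

```python
import numpy as np

# Minimal Hopfield-style attractor memory: store random patterns in a
# Hebbian weight matrix, then retrieve one from a corrupted cue.
rng = np.random.default_rng(0)
N, P = 200, 10                              # units and stored patterns (arbitrary)

patterns = rng.choice([-1, 1], size=(P, N))
W = patterns.T @ patterns / N               # Hebbian outer-product learning
np.fill_diagonal(W, 0)                      # no self-connections

def retrieve(cue, steps=20):
    """Iterate the network dynamics until the state (typically) settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

cue = patterns[0].copy()
flipped = rng.choice(N, size=N // 5, replace=False)
cue[flipped] *= -1                          # corrupt 20% of the cue

recalled = retrieve(cue)
overlap = (recalled @ patterns[0]) / N      # 1.0 means perfect retrieval
```

At this low memory load (P/N = 0.05, well below the classical capacity limit of roughly 0.14 patterns per unit), the corrupted cue is pulled back to the stored pattern; this is the "fast transition into an attractor memory state" that the biophysical model realizes with balanced excitatory and inhibitory currents.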
2

Modeling inhibition-mediated neural dynamics in the rodent spatial navigation system

Lyttle, David Nolan January 2013 (has links)
The work presented in this dissertation focuses on the use of computational and mathematical models to investigate how mammalian brains construct and maintain stable representations of space and location. Recordings of the activities of cells in the hippocampus and entorhinal cortex have provided strong, direct evidence that these cells and brain areas are involved in generating internal representations of the location of an animal in space. The emphasis of the first two portions of the dissertation is on understanding the factors that influence the scale and stability of these representations, both of which are important for accurate spatial navigation. In addition, it is argued in both cases that many of the computations observed in these systems emerge at least in part as a consequence of a particular type of network structure, in which excitatory neurons are driven by external sources and then mutually inhibit each other via interactions mediated by inhibitory cells. The first contribution of this thesis, described in chapter 2, is an investigation into the origin of the change in the scale of spatial representations along the dorsoventral axis of the hippocampus. Here it is argued that this change in scale is due to increased processing of nonspatial information, rather than to a dorsoventral change in the scale of the spatially modulated inputs to this structure. Chapter 3 explores the factors influencing the dynamical stability of a class of pattern-forming networks known as continuous attractor networks, which have been used to model various components of the spatial navigation system, including head direction cells, place cells, and grid cells. Here it is shown that the network architecture, the amount of input drive, and the timescales at which cells interact all influence the stability of the patterns formed by these networks. Finally, in chapter 4, a new technique for analyzing neural data is introduced.
This technique is a spike train similarity measure designed to compare spike trains on the basis of shared inhibition and bursts.
3

A Method of Structural Health Monitoring for Unpredicted Combinations of Damage

Butler, Martin A. January 2019 (has links)
No description available.
4

Memory and cortical connectivity (Mémoire et connectivité corticale)

Dubreuil, Alexis 01 July 2014 (has links)
The central nervous system is able to memorize percepts over long time scales (long-term memory), as well as to actively maintain these percepts in memory for a few seconds in order to perform behavioral tasks (working memory). These two phenomena can be studied together in the framework of attractor neural network theory. In this framework, a percept, represented by a pattern of neural activity, is stored as a long-term memory and can be loaded into working memory if the network is able to maintain this pattern of activity in a stable and autonomous manner. Such dynamics are made possible by the specific form of the network's connectivity. Here we examine models of cortical connectivity at different scales, in order to study which cortical circuits can efficiently sustain attractor neural network dynamics. This is done by showing how the performance of theoretical models, quantified by the network's storage capacity (the number of percepts it is possible to store and later retrieve), depends on the characteristics of the connectivity. The first part is dedicated to fully connected networks, in which each neuron can potentially connect to every other neuron in the network. This situation models cortical columns, whose radius is on the order of a few hundred microns. We first compute the storage capacity of networks whose synapses are described by binary variables that are modified stochastically when patterns of activity are imposed on the network. We then generalize this study to the case in which synapses can be in K discrete states, which makes it possible, for instance, to model the fact that two neighboring pyramidal cells in cortex are connected through multiple synaptic contacts. In the second part, we study modular networks in which each module is a fully connected network and the connectivity between modules is diluted. We show how the storage capacity depends on the connectivity between modules and on the organization of the patterns of activity to be stored. Comparison with experimental measurements of large-scale cortical connectivity shows that these connections can implement an attractor neural network at the scale of multiple cortical areas. Finally, we study a network in which units are connected by weights whose amplitude carries a cost that depends on the distance between the units. We use a Gardner-type approach to compute the distribution of weights that optimizes the storage of activity patterns in this network. We interpret each unit of this network as a cortical area and compare the theoretically obtained weight distribution with experimental measurements of connectivity between cortical areas.
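A feel for how binary (on/off) synapses can nonetheless support associative storage comes from a Willshaw-style clipped-Hebbian network, a classical simplification related to, but not identical with, the stochastic binary-synapse model analyzed in the thesis. All sizes and the cue fraction below are arbitrary illustrative choices:

```python
import numpy as np

# Willshaw-style associative memory: synapses are binary (on/off), set by
# clipped Hebbian learning over sparse patterns; recall uses a partial cue.
rng = np.random.default_rng(2)
N, P, a = 400, 30, 20                        # units, patterns, actives per pattern

patterns = np.zeros((P, N), dtype=int)
for mu in range(P):
    patterns[mu, rng.choice(N, a, replace=False)] = 1

W = (patterns.T @ patterns > 0).astype(int)  # a synapse is either on or off
np.fill_diagonal(W, 0)                       # no self-connections

# Cue with half of pattern 0's active units
k = a // 2
cue = np.zeros(N, dtype=int)
cue[np.flatnonzero(patterns[0])[:k]] = 1

h = W @ cue
recalled = (h >= k - 1).astype(int)          # cue units receive k-1 cue inputs
recall_rate = (recalled * patterns[0]).sum() / a
```

Because the patterns are sparse, most clipped synapses stay "off" and a random unit almost never reaches the recall threshold, so the partial cue completes the stored pattern with few false positives. The stochastic learning rules and K-state synapses studied in the thesis refine this picture and determine how the storage capacity scales with network size and connectivity.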
