  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Theoretical research on phase dynamics and information processing of neuronal rhythmical networks / リズムを有する神経ネットワークの位相のダイナミクスと情報処理に関する理論的研究

Terada, Yu 23 March 2017 (has links)
Kyoto University / 0048 / New system, course doctorate / Doctor of Informatics / Kou No. 20512 / Johaku No. 640 / 新制||情||111 (University Library) / Department of Complex Systems Science, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Toshio Aoyagi, Professor Mitsuaki Funakoshi, Professor Naoshi Nishimura / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
12

Mémoire et connectivité corticale / Memory and cortical connectivity

Dubreuil, Alexis 01 July 2014 (has links)
The central nervous system is able to memorize percepts on long time scales (long-term memory), as well as to actively maintain these percepts in memory for a few seconds in order to perform behavioral tasks (working memory). These two phenomena can be studied together in the framework of attractor neural network theory. In this framework, a percept, represented by a pattern of neural activity, is stored in long-term memory and can be loaded into working memory provided the network can maintain this activity pattern in a stable and autonomous manner. Such dynamics are made possible by the specific form of the network's connectivity. Here we examine models of cortical connectivity at different scales, in order to determine which cortical circuits can efficiently sustain attractor neural network dynamics.
This is done by showing how the performance of theoretical models, quantified by the network's storage capacity (the number of percepts that can be stored and later retrieved), depends on the characteristics of the connectivity. In the first part we study fully connected networks, in which each neuron can potentially connect to every other neuron in the network. This situation models cortical columns, whose radius is of the order of a few hundred microns. We first compute the storage capacity of networks whose synapses are described by binary variables, modified stochastically when patterns of activity are imposed on the network. We then generalize this study to the case in which synapses can be in K discrete states, which, for instance, allows us to model the fact that two neighboring pyramidal cells in cortex are connected through multiple synaptic contacts. In the second part, we study modular networks in which each module is a fully connected network and connections between modules are diluted. We show how the storage capacity depends on the connectivity between modules and on the organization of the activity patterns to be stored. Comparison with experimental measurements of large-scale cortical connectivity suggests that these connections can implement an attractor neural network at the scale of multiple cortical areas. Finally, we study a network in which units are connected by weights whose amplitude carries a cost that depends on the distance between the units. We use a Gardner approach to compute the distribution of weights that optimizes storage in this network. We interpret each unit of this network as a cortical area and compare the theoretically obtained weight distribution with experimental measurements of connectivity between cortical areas.
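As an illustrative aside, the storage-and-retrieval dynamics this abstract describes can be sketched with a minimal Hopfield-style attractor network. This is a simplification for intuition only: the thesis analyses stochastic binary and K-state synapses, whereas this sketch uses standard real-valued Hebbian weights.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10  # neurons, stored patterns (load P/N well below capacity)

# Random +/-1 activity patterns standing in for "percepts"
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product connectivity, no self-connections
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def retrieve(state, steps=20):
    """Iterate the network dynamics; a stored pattern is a fixed point."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# A cue with 10% of units flipped should fall back into the attractor
cue = patterns[0].copy()
flipped = rng.choice(N, size=N // 10, replace=False)
cue[flipped] *= -1
recalled = retrieve(cue)
overlap = recalled @ patterns[0] / N  # 1.0 = perfect recall
```

At this low memory load the corrupted cue relaxes back onto the stored pattern, which is the "loading into working memory" picture the abstract refers to.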
13

Borderline consciousness, phenomenal consciousness, and artificial consciousness: a unified approach

Chin, Chuanfei January 2015 (has links)
Borderline conscious creatures are neither definitely conscious nor definitely not conscious. In this thesis, I explain what borderline consciousness is and why it poses a significant epistemological challenge to scientists who investigate phenomenal consciousness as a natural kind. When these scientists discover more than one overlapping kind in their samples of conscious creatures, how can they identify the kind to which all and only conscious creatures belong? After assessing three pessimistic responses, I argue that different groups of scientists can legitimately use the concept of phenomenal consciousness to refer to different kinds, in accord with their empirical interests. They can thereby resolve three related impasses on the status of borderline conscious creatures, the neural structure of phenomenal consciousness, and the possibility of artificial consciousness. The thesis has three parts: First, I analyse the concept of borderline consciousness. My analysis counters several arguments which conclude that borderline consciousness is inconceivable. Then I explain how borderline consciousness produces the multiple kinds problem in consciousness science. Second, I assess three recent philosophical responses to this problem. One response urges scientists to eliminate the concept of consciousness, while another judges them to be irremediably ignorant of the nature of consciousness. The final response concludes that scientific progress is limited by the concept's referential indeterminacy. I argue that these responses are too pessimistic, though they point to a more promising approach. Third, I propose that empirically constrained stipulation can solve the multiple kinds problem. Biologists face the same problem because of their longstanding controversy over what counts as a species. 
Building on new arguments for stipulating the reference of species concepts, I demonstrate that this use of stipulation in biology is neither epistemologically complacent nor metaphysically capricious; it also need not sow semantic confusion. Then I defend its use in consciousness science. My approach is shown to be consistent with our understanding of natural kinds, borderline cases, and phenomenal consciousness.
14

Complex Dynamics Enabled by Basic Neural Features

Regel, Diemut 18 July 2019 (has links)
No description available.
15

Neural Network Models For Neurophysiology Data

Bryan Jimenez (13979295) 25 October 2022 (has links)
Over the last decade, measurement technology that records neural activity, such as ECoG and Utah arrays, has dramatically improved. These advancements have given researchers access to recordings from many neurons simultaneously. Efficient computational and statistical methods are required to analyze this type of data successfully. The time-series model is one of the most common approaches for analyzing it. Unfortunately, even with all the advances made in time-series models, they often need massive amounts of data to achieve good results. This is especially true in neuroscience, where datasets are often limited, imposing constraints on the type and complexity of the models we can use. Moreover, the signal-to-noise ratio tends to be lower than in other machine learning datasets. This thesis introduces different architectures and techniques to overcome the constraints imposed by these small datasets. We discuss two major experiments. (1) We strive to develop models for participants who lost the ability to speak by building upon the previous state-of-the-art model for decoding neural activity (ECoG data) into English text. (2) We introduce two new models, RNNF and Neural RoBERTa, which impute missing neural data from neural recordings (Utah arrays) of monkeys performing kinematic tasks. With the help of novel data augmentation techniques (dynamic masking), these new models outperformed state-of-the-art models such as the Neural Data Transformer (NDT) in the Neural Latents Benchmark competition.
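The "dynamic masking" augmentation named in this abstract is not specified here; as a hedged illustration of the general idea, the sketch below randomly hides time bins of neural data so a model can be trained to impute them. The function name, masking probability, and data shapes are all placeholders, not the authors' implementation.

```python
import numpy as np

def random_mask(spikes, mask_prob=0.25, rng=None):
    """Randomly zero out entries so a model can learn to impute them.
    Returns the masked copy and the boolean mask of hidden entries."""
    rng = rng or np.random.default_rng()
    mask = rng.random(spikes.shape) < mask_prob
    masked = spikes.copy()
    masked[mask] = 0
    return masked, mask

# Toy spike-count array: 4 trials x 50 time bins x 30 neurons
rng = np.random.default_rng(0)
spikes = rng.poisson(2.0, size=(4, 50, 30))
masked, mask = random_mask(spikes, mask_prob=0.25, rng=rng)
```

Resampling the mask on every training pass (rather than fixing it once) is what makes such an augmentation "dynamic".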
16

Leveraging Whole Brain Imaging to Identify Brain Regions Involved in Alcohol Frontloading

Cherish Elizabeth Ardinger (9706763) 03 January 2024 (has links)
Frontloading is an alcohol drinking pattern in which intake is skewed toward the onset of access. The goal of the current study was to identify brain regions involved in frontloading using whole brain imaging. Sixty-three C57Bl/6J mice (32 female, 31 male) underwent 8 days of binge drinking using drinking-in-the-dark (DID). Three hours into the dark cycle, mice received 20% (v/v) alcohol or water for two hours on days 1-7. Intake was measured in 1-minute bins using volumetric sippers, which facilitated analyses of drinking patterns. Mice were perfused 80 minutes into the day 8 DID session, and brains were extracted and processed for iDISCO clearing and c-fos immunohistochemistry. For brain network analyses, day 8 drinking patterns were used to characterize mice as frontloaders or non-frontloaders using a change-point analysis described in our recent ACER publication (Ardinger et al., 2022). Groups were female frontloaders (n = 20), female non-frontloaders (n = 2), male frontloaders (n = 13), and male non-frontloaders (n = 8). There were no differences in total alcohol intake as a function of frontloading status. Water drinkers had an n of 10 for each sex. As only two female mice were characterized as non-frontloaders, it was not possible to construct a functional correlation network for this group. Following light sheet imaging, ClearMap2.1 was used to register brains to the Allen Brain Atlas and detect fos+ cells. Functional correlation matrices were calculated for each group from log10 c-fos values. Euclidean distances were calculated from these R values, and hierarchical clustering was used to determine modules (highly connected groups of brain regions) at a tree-cut height of 50%. In males, alcohol access decreased modularity (3 modules in both frontloaders and non-frontloaders) compared to water drinkers (7 modules). In females, the opposite effect was observed:
alcohol access (9 modules) increased modularity compared to water drinkers (5 modules). These results suggest sex differences in how alcohol consumption reorganizes the functional architecture of networks. Next, key brain regions in each network were identified. Connector hubs, which primarily facilitate communication between modules, and provincial hubs, which facilitate communication within modules, were of specific interest for their important and differing roles. In males, 4 connector hubs and 17 provincial hubs were uniquely identified in frontloaders (i.e., brain regions that did not have this status in male non-frontloaders or water drinkers). These represented a group of hindbrain regions (e.g., the locus coeruleus and the pontine gray) connected to striatal/cortical regions (e.g., the cortical amygdalar area) by the paraventricular nucleus of the thalamus. In females, 16 connector and 17 provincial hubs were uniquely identified, distributed across 8 of the 9 modules in the female alcohol drinker network. Only one brain region (the nucleus raphe pontis) was a connector hub in both sexes, suggesting that frontloading in males and females may be driven by different brain regions. In conclusion, alcohol consumption led to fewer, but more densely connected, groups of brain regions in males but not females, and recruited different hub brain regions between the sexes. These results suggest target brain regions for future studies aiming to manipulate frontloading behavior, and more broadly contribute to the literature on alcohol's effects on neural networks.
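The module-detection step described in this abstract (hierarchical clustering of region-by-region correlations with a tree cut at 50% of its height) can be sketched roughly as follows. The data here are synthetic with two planted region groups; the study's own pipeline (ClearMap2.1, Allen Brain Atlas registration) is not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

# Synthetic log10 c-fos values: 12 regions x 8 animals, two planted groups
n_regions, n_animals = 12, 8
group = np.repeat([0, 1], n_regions // 2)
shared = rng.normal(size=(2, n_animals))             # one signal per group
counts = shared[group] + 0.3 * rng.normal(size=(n_regions, n_animals))

# Functional correlation matrix between regions
R = np.corrcoef(counts)

# Euclidean distances between the regions' rows of R, then average-linkage
# clustering with the tree cut at 50% of its maximum height
Z = linkage(pdist(R), method="average")
modules = fcluster(Z, t=0.5 * Z[:, 2].max(), criterion="distance")
n_modules = len(set(modules))
```

Because the cut height is below the final merge, at least two modules always result; with clear group structure the planted groups are recovered as separate modules.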
17

Mathematical Description of Differential Hebbian Plasticity and its Relation to Reinforcement Learning / Mathematische Beschreibung Hebb'scher Plastizität und deren Beziehung zu Bestärkendem Lernen

Kolodziejski, Christoph Markus 13 February 2009 (has links)
No description available.
18

Analysis of the brainstem auditory evoked potentials in neurological disease

Ragi, Elias January 1985 (has links)
Many phenomena in the BAEP are difficult to explain on the basis of the accepted hypothesis of its origin (after Jewett, 1970). The alternative mechanism of origin to which these phenomena point is the summation of oscillations. Therefore, simulation of the BAEP by a mathematical model consisting of the addition of four sine waves was tested. The model did simulate a normal BAEP, as well as variations in the waveform produced by reversing click polarity. This simulation gives further clues to the origin of the BAEP. The four sine waves begin simultaneously; the corresponding BAEP oscillations must, therefore, originate from a single structure. These oscillations begin less than half a millisecond after the click, which suggests that the structure from which they arise is outside the brainstem. This alternative mechanism indicates that wave latencies do not reflect nervous conduction between discrete nuclei, and the interpretation of BAEP abnormalities needs to be reconsidered. It also implies that mathematical frequency analysis is more appropriate, but it can be applied only once these methods have been perfected. Meanwhile, through visual analysis and recognition of oscillations, abnormality can be detected and described in terms that may have physiological significance.
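The model this abstract proposes, a BAEP waveform built from four sine waves that all start at the click, can be sketched as below. The frequencies and amplitudes are illustrative placeholders, not the thesis's fitted values.

```python
import numpy as np

def baep_model(t_ms, components):
    """Sum of sine waves that all begin at t = 0 (simultaneous onset)."""
    wave = np.zeros_like(t_ms)
    for amp, freq_hz in components:
        wave += amp * np.sin(2 * np.pi * freq_hz * t_ms / 1000.0)
    return wave

# 10 ms epoch sampled at 40 kHz, a typical BAEP recording window
t = np.arange(0, 10, 0.025)

# Four illustrative (amplitude, frequency) pairs -- placeholders only
components = [(0.4, 900), (0.3, 500), (0.2, 250), (0.1, 100)]
waveform = baep_model(t, components)
```

Because every component is zero at onset, the summed wave starts at zero, consistent with the abstract's point that the oscillations begin together shortly after the click.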
19

Neural mechanisms of information processing and transmission

Leugering, Johannes 05 November 2021 (has links)
This (cumulative) dissertation is concerned with mechanisms and models of information processing and transmission by individual neurons and small neural assemblies. In this document, I first provide historical context for these ideas and highlight similarities and differences to related concepts from machine learning and neuromorphic engineering. With this background, I then discuss the four main themes of my work, namely dendritic filtering and delays, homeostatic plasticity and adaptation, rate-coding with spiking neurons, and spike-timing based alternatives to rate-coding. The content of this discussion is in large part derived from several of my own publications included in Appendix C, but it has been extended and revised to provide a more accessible and broad explanation of the main ideas, as well as to show their inherent connections. I conclude that fundamental differences remain between our understanding of information processing and transmission in machine learning on the one hand and theoretical neuroscience on the other, which should provide a strong incentive for further interdisciplinary work on the domain boundaries between neuroscience, machine learning and neuromorphic engineering.
20

Pattern formation in neural circuits by the interaction of travelling waves with spike-timing dependent plasticity

Bennett, James Edward Matthew January 2014 (has links)
Spontaneous travelling waves of neuronal activity are a prominent feature throughout the developing brain and have been shown to be essential for achieving normal function, but the mechanism of their action on post-synaptic connections remains unknown. A well-known and widespread mechanism for altering synaptic strengths is spike-timing dependent plasticity (STDP), whereby the temporal relationship between the pre- and post-synaptic spikes determines whether a synapse is strengthened or weakened. Here, I answer the theoretical question of how these two phenomena interact: what types of connectivity patterns can emerge when travelling waves drive a downstream area that implements STDP, and what are the critical features of the waves and the plasticity rules that shape these patterns? I then demonstrate how the theory can be applied to the development of the visual system, where retinal waves are hypothesised to play a role in the refinement of downstream connections. My major findings are as follows. (1) Mathematically, STDP translates the correlated activity of travelling waves into coherent patterns of synaptic connectivity; it maps the spatiotemporal structure in waves into a spatial pattern of synaptic strengths, building periodic structures into feedforward circuits. This is analogous to pattern formation in reaction-diffusion systems. The theory reveals a role for the wave speed and the time scale of the STDP rule in determining the spatial frequency of the connectivity pattern. (2) Simulations verify the theory and extend it from one-dimensional to two-dimensional cases, and from simplified linear wavefronts to more complex, realistic and noisy wave patterns. (3) With appropriate constraints, these pattern formation abilities can be harnessed to explain a wide range of developmental phenomena, including how receptive fields (RFs) in the visual system are refined in size and topography and how simple-cell and direction selective RFs can develop.
The theory is applied to the visual system here but generalises across different brain areas and STDP rules. The theory makes several predictions that are testable using existing experimental paradigms.
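The STDP rule at the heart of this interaction can be sketched with the standard two-sided exponential window; the amplitudes and time constants below are common illustrative choices, not values from the thesis.

```python
import numpy as np

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of spike timing, dt = t_post - t_pre.
    Pre-before-post (dt >= 0) potentiates; post-before-pre (dt < 0)
    depresses, each decaying exponentially with its own time constant."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```

In the wave-driven setting the abstract describes, neurons ahead of an advancing wavefront fire after their presynaptic inputs (potentiation) while those behind it fire before (depression), which is how wave speed and the STDP time constants jointly set the spatial scale of the emerging connectivity pattern.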
