161

Architektury hlubokého učení pro analýzu populačních neaurálních dat / Deep-learning architectures for analysing population neural data

Houška, Petr January 2021 (has links)
Accurate models of the visual system are key to understanding how our brains process visual information. In recent years, deep neural networks (DNNs) have been rapidly gaining traction in this domain. However, only a few studies have attempted to incorporate known anatomical properties of the visual system into standard DNN architectures adapted from the general machine learning field, in order to improve their interpretability and performance on visual data. In this thesis, we optimize a recent biologically inspired deep learning architecture designed for the analysis of population data recorded from the mammalian primary visual cortex when presented with natural images as stimuli. We reimplement this prior model in the existing neuroscience-focused deep learning framework NDN3 and assess it in terms of stability and sensitivity to hyperparameters and architecture fine-tuning. We proceed to extend the model with components of various DNN models, analysing novel combinations and techniques from classical computer-vision deep learning and comparing their effectiveness against the bio-inspired components. We were able to identify modifications that greatly increase the stability of the model while securing a moderate improvement in overall performance. Furthermore, we document the importance of small hyperparameter adjustments versus...
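The encoding-model setup described in this abstract is, at its core, a regression from natural-image stimuli to recorded population responses. As a rough, hedged illustration of that setup (not the author's NDN3 implementation), the following Python sketch fits a small convolutional core with a per-neuron readout to simulated V1 spike counts under a Poisson objective; the image size, layer sizes, neuron count, and training data are assumptions made only for the example.

```python
# Hedged sketch, not the thesis's NDN3 code: a minimal convolutional encoding
# model mapping image patches to predicted firing rates of a V1 population.
import torch
import torch.nn as nn

class V1Encoder(nn.Module):
    def __init__(self, n_neurons: int):
        super().__init__()
        # shared convolutional "core" playing the role of early visual filtering
        self.core = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=9, padding=4), nn.Softplus(),
            nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.Softplus(),
        )
        # per-neuron linear readout over the flattened feature map (31x31 input assumed)
        self.readout = nn.Linear(16 * 31 * 31, n_neurons)

    def forward(self, images):                    # images: (batch, 1, 31, 31)
        features = self.core(images).flatten(1)
        return nn.functional.softplus(self.readout(features))   # non-negative rates

def poisson_loss(rate, spikes):
    # negative Poisson log-likelihood up to a constant
    return (rate - spikes * torch.log(rate + 1e-8)).mean()

model = V1Encoder(n_neurons=100)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(64, 1, 31, 31)                # stand-in natural-image patches
spikes = torch.poisson(torch.ones(64, 100))       # stand-in spike counts
for _ in range(10):
    optimizer.zero_grad()
    loss = poisson_loss(model(images), spikes)
    loss.backward()
    optimizer.step()
```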
162

Universality and Individuality in Recurrent Networks extended to Biologically inspired networks

Joshi, Nishant January 2020 (has links)
Activities in the motor cortex are found to be dynamical in nature. Modeling these activities and comparing the models with neural recordings helps in understanding the underlying mechanisms that generate them. For this purpose, recurrent neural networks (RNNs) have emerged as an appropriate tool. A clear understanding of how the design choices associated with these networks affect the learned dynamics and internal representations still remains elusive. A previous work explored the dynamical properties of discrete-time RNN architectures (LSTM, UGRNN, GRU, and Vanilla) and found that the fixed-point topology and the linearized dynamics remain invariant when the networks are trained on a 3-bit Flip-Flop task; in contrast, the networks have unique representational geometries. The goal of this work is to understand whether these observations also hold for networks that are more biologically realistic in terms of neural activity. We therefore chose to analyze rate networks, which have continuous dynamics and biologically realistic connectivity constraints, and spiking neural networks, in which the neurons communicate via discrete spikes as observed in the brain. We reproduce the aforementioned study for the discrete architectures and then show that the fixed-point topology and linearized dynamics remain invariant for the rate networks, but that the methods are insufficient for finding the fixed points of spiking networks. The representational geometry of the rate networks and spiking networks is found to differ from that of the discrete architectures but to be very similar between the two, although a small subset of the discrete architectures (LSTM) is observed to be close in representation to the rate networks. We show that although these network architectures with varying degrees of biological realism have individual internal representations, the underlying dynamics while performing the task are universal. We also observe that some discrete networks have close representational similarities with rate networks along with the dynamics. Hence, these discrete networks can be good candidates for reproducing and examining the dynamics of rate networks.
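The fixed-point analysis referred to above is commonly carried out by numerically minimizing the state-update "speed" and then linearizing the dynamics at the resulting points. The sketch below illustrates that procedure on a stand-in GRU cell; it is not the thesis code, uses an untrained network purely to show the mechanics, and in practice would be repeated from many initial states.

```python
# Hedged sketch: approximate fixed points h* of an RNN update h_{t+1} = F(h_t, x)
# are found by minimizing q(h) = ||F(h, x) - h||^2, then the local dynamics are
# characterized by the Jacobian dF/dh at h*.
import torch

torch.manual_seed(0)
hidden_size = 32
cell = torch.nn.GRUCell(input_size=3, hidden_size=hidden_size)   # stand-in for a trained RNN
for p in cell.parameters():
    p.requires_grad_(False)                                      # only the hidden state is optimized
x_const = torch.zeros(1, 3)                                      # constant (here zero) input

def speed(h):
    # "speed" of the dynamics: how far one update moves the hidden state
    return ((cell(x_const, h) - h) ** 2).sum()

h = torch.randn(1, hidden_size, requires_grad=True)              # random initial condition
opt = torch.optim.Adam([h], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    q = speed(h)
    q.backward()
    opt.step()
h_star = h.detach()
print("residual speed at candidate fixed point:", speed(h_star).item())

# linearized dynamics: eigenvalues of the Jacobian dF/dh at the fixed point
jacobian = torch.autograd.functional.jacobian(
    lambda hh: cell(x_const, hh.unsqueeze(0)).squeeze(0), h_star.squeeze(0))
print("max |eigenvalue|:", torch.linalg.eigvals(jacobian).abs().max().item())
```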
163

Individual differences in structure learning

Newlin, Philip 13 May 2022 (has links)
Humans have a tendency to impute structure spontaneously even in simple learning tasks; however, the way they approach structure learning can vary drastically. The present study sought to determine why individuals learn structure differently. One hypothesized explanation for differences in structure learning is individual differences in cognitive control. Cognitive control allows individuals to maintain representations of a task and may interact with reinforcement learning systems. It was expected that individual differences in the propensity to apply cognitive control, which shares component processes with hierarchical reinforcement learning, may explain how individuals learn structure differently in a simple structure learning task. Results showed that proactive control and model-based control explained differences in the rate at which individuals applied structure learning.
164

Neural Network Models For Neurophysiology Data

Bryan Jimenez (13979295) 25 October 2022 (has links)
Over the last decade, measurement technology that records neural activity, such as ECoG and the Utah array, has dramatically improved. These advancements have given researchers access to recordings from multiple neurons simultaneously. Efficient computational and statistical methods are required to analyze this type of data successfully. The time-series model is one of the most common approaches for analyzing it. Unfortunately, even with all the advances made with time-series models, they are not always enough, since these models often need massive amounts of data to achieve good results. This is especially true in the field of neuroscience, where datasets are often limited, imposing constraints on the type and complexity of the models we can use. Moreover, the signal-to-noise ratio tends to be lower than in other machine learning datasets. This paper introduces different architectures and techniques to overcome the constraints imposed by these small datasets. We discuss two major experiments. (1) We strive to develop models for participants who have lost the ability to speak by building upon the previous state-of-the-art model for decoding neural activity (ECoG data) into English text. (2) We introduce two new models, RNNF and Neural RoBERTa, which impute missing neural data from neural recordings (Utah arrays) of monkeys performing kinematic tasks. With the help of a novel data augmentation technique (dynamic masking), these new models outperformed state-of-the-art models such as the Neural Data Transformer (NDT) in the Neural Latents Benchmark competition.
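As a hedged illustration of what a dynamic-masking style augmentation can look like (the exact scheme used in the thesis is not specified here, so the masking granularity and probability are assumptions), the following sketch hides a freshly sampled set of time bins in a spike-count tensor on every call, so that a sequence model can be trained to reconstruct the held-out activity.

```python
# Hedged sketch of "dynamic masking": a new random mask over time bins is drawn
# at every training step, so masks never repeat across epochs.
import numpy as np

rng = np.random.default_rng(0)

def dynamic_mask(spikes: np.ndarray, mask_prob: float = 0.25):
    """spikes: (trials, time_bins, neurons) array of spike counts."""
    trials, time_bins, _ = spikes.shape
    mask = rng.random((trials, time_bins)) < mask_prob     # True = hidden bins
    masked = spikes.copy()
    masked[mask] = 0.0                                      # hide masked bins from the model
    return masked, mask

spikes = rng.poisson(2.0, size=(8, 100, 130)).astype(float)  # stand-in Utah-array counts
masked_input, mask = dynamic_mask(spikes)
# training target: predict spikes[mask] from masked_input
print(masked_input.shape, round(mask.mean(), 3))
```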
165

A Conserved Cortical Computation Revealed by Connecting Behavior to Whole-Brain Activity in C. elegans: An In Silico Systems Approach

Ryan, William George, V 28 July 2022 (has links)
No description available.
166

Chaos and Learning in Discrete-Time Neural Networks

Banks, Jess M. 27 October 2015 (has links)
No description available.
167

Leveraging Whole Brain Imaging to Identify Brain Regions Involved in Alcohol Frontloading

Cherish Elizabeth Ardinger (9706763) 03 January 2024 (has links)
Frontloading is an alcohol drinking pattern where intake is skewed toward the onset of access. The goal of the current study was to identify brain regions involved in frontloading using whole brain imaging. 63 C57Bl/6J (32 female and 31 male) mice underwent 8 days of binge drinking using drinking-in-the-dark (DID). Three hours into the dark cycle, mice received 20% (v/v) alcohol or water for two hours on days 1-7. Intake was measured in 1-minute bins using volumetric sippers, which facilitated analyses of drinking patterns. Mice were perfused 80 minutes into the day 8 DID session and brains were extracted and processed for iDISCO clearing and c-fos immunohistochemistry. For brain network analyses, day 8 drinking patterns were used to characterize mice as frontloaders or non-frontloaders using a change-point analysis described in our recent ACER publication (Ardinger et al., 2022). Groups were female frontloaders (n = 20), female non-frontloaders (n = 2), male frontloaders (n = 13) and male non-frontloaders (n = 8). There were no differences in total alcohol intake as a function of frontloading status. Water drinkers had an n of 10 for each sex. As only two female mice were characterized as non-frontloaders, it was not possible to construct a functional correlation network for this group. Following light sheet imaging, ClearMap2.1 was used to register brains to the Allen Brain Atlas and detect fos+ cells. Functional correlation matrices were calculated for each group from log10 c-fos values. Euclidean distances were calculated from these R values and hierarchical clustering was used to determine modules (highly connected groups of brain regions) at a tree-cut height of 50%. In males, alcohol access decreased modularity (3 modules in both frontloaders and non-frontloaders) as compared to water drinkers (7 modules). In females, an opposite effect was observed. Alcohol access (9 modules) increased modularity as compared to water drinkers (5 modules). These results suggest sex differences in how alcohol consumption reorganizes the functional architecture of networks. Next, key brain regions in each network were identified. Connector hubs, which primarily facilitate communication between modules, and provincial hubs, which facilitate communication within modules, were of specific interest for their important and differing roles. In males, 4 connector hubs and 17 provincial hubs were uniquely identified in frontloaders (i.e., were brain regions that did not have this status in male non-frontloaders or water drinkers). These represented a group of hindbrain regions (e.g., locus coeruleus and the pontine gray) connected to striatal/cortical regions (e.g., cortical amygdalar area) by the paraventricular nucleus of the thalamus. In females, 16 connector and 17 provincial hubs were uniquely identified which were distributed across 8 of the 9 modules in the female alcohol drinker network. Only one brain region (the nucleus raphe pontis) was a connector hub in both sexes, suggesting that frontloading in males and females may be driven by different brain regions. In conclusion, alcohol consumption led to fewer, but more densely connected, groups of brain regions in males but not females, and recruited different hub brain regions between the sexes. These results suggest target brain regions for future studies to try to manipulate frontloading behavior and more broadly contribute to the literature on alcohol’s effect on neural networks.
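As a hedged sketch of the module-detection step described above (illustrative only, not the study's ClearMap/analysis pipeline; the data, region count, and linkage method are assumptions), the following Python code builds a region-by-region correlation matrix from log10 c-fos values, converts it to Euclidean distances, and cuts a hierarchical clustering tree at 50% of its maximum height.

```python
# Hedged sketch: correlation-based functional network and tree-cut module detection.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n_mice, n_regions = 20, 40                       # assumed sizes for the example
fos_counts = rng.lognormal(mean=3.0, sigma=0.5, size=(n_mice, n_regions))

log_fos = np.log10(fos_counts)
corr = np.corrcoef(log_fos, rowvar=False)        # region-by-region correlations (R values)

dist = pdist(corr, metric="euclidean")           # Euclidean distances between rows of R values
tree = linkage(dist, method="complete")          # hierarchical clustering
cut_height = 0.5 * tree[:, 2].max()              # tree-cut at 50% of the maximum height
modules = fcluster(tree, t=cut_height, criterion="distance")
print("number of modules:", len(np.unique(modules)))
```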
168

Foundations Of Memory Capacity In Models Of Neural Cognition

Chowdhury, Chandradeep 01 December 2023 (has links) (PDF)
A central problem in neuroscience is to understand how memories are formed as a result of the activities of neurons. Valiant's Neuroidal model attempted to address this question by modeling the brain as a random graph and memories as subgraphs within that graph. However, the question of memory capacity within that model has not been explored: how many memories can the brain hold? Valiant introduced the concept of interference between memories as the defining factor for capacity; excessive interference signals that the model has reached capacity. Since then, exploration of capacity has been limited, but recent investigations have delved into the capacity of the Assembly Calculus, a derivative of Valiant's Neuroidal model. In this paper, we provide rigorous definitions for capacity and interference and present theoretical formulations for the memory capacity within a finite set, where subsets represent memories. We propose that these results can be adapted to suit both the Neuroidal model and the Assembly Calculus. Furthermore, we substantiate our claims by providing simulations that validate the theoretical findings. Our study aims to contribute essential insights into the understanding of memory capacity in complex cognitive models, offering potential ideas for applications and extensions to contemporary models of cognition.
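A minimal simulation of the capacity question, under assumed definitions (memories as random fixed-size subsets of a finite set, interference as pairwise overlap, and an arbitrary overlap threshold; these are illustrative choices, not the thesis's formal definitions), might look as follows.

```python
# Hedged sketch: store random memories until interference with an existing
# memory exceeds a threshold, and report how many were stored.
import random

def interference(a: set, b: set) -> int:
    # interference between two memories = size of their overlap
    return len(a & b)

def estimate_capacity(n=2_000, r=100, threshold=30, max_memories=500, seed=0):
    """Memories are random r-element subsets of an n-element set."""
    rng = random.Random(seed)
    memories = []
    for k in range(1, max_memories + 1):
        candidate = set(rng.sample(range(n), r))
        if any(interference(candidate, m) >= threshold for m in memories):
            return k - 1                         # memories stored before excessive interference
        memories.append(candidate)
    return max_memories                          # threshold never exceeded within the budget

print("estimated capacity:", estimate_capacity())
```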
169

Beyond AMPA and NMDA: Slow synaptic mGlu/TRPC currents : Implications for dendritic integration

Petersson, Marcus January 2010 (has links)
In order to understand how the brain functions, under normal as well as pathological conditions, it is important to study the mechanisms underlying information integration. Depending on the nature of an input arriving at a synapse, different strategies may be used by the neuron to integrate and respond to the input. Naturally, if a short train of high-frequency synaptic input arrives, it may be beneficial for the neuron to be equipped with a fast mechanism that is highly sensitive to inputs on a short time scale. If, on the contrary, inputs arriving with low frequency are to be processed, it may be necessary for the neuron to possess slow mechanisms of integration. For example, in certain working memory tasks (e.g. delayed match-to-sample), sensory inputs may arrive separated by silent intervals in the range of seconds, and the subject should respond if the current input is identical to the preceding input. It has been suggested that single neurons, due to intrinsic mechanisms outlasting the duration of the input, may be able to perform such calculations. In this work, I have studied a mechanism thought to be particularly important in supporting the integration of low-frequency synaptic inputs. It is mediated by a cascade of events that starts with activation of group I metabotropic glutamate receptors (mGlu1/5) and ends with a membrane depolarization caused by a current mediated by canonical transient receptor potential (TRPC) ion channels. This current, denoted I_TRPC, is the focus of this thesis.

A specific objective of this thesis is to study the role of I_TRPC in the integration of synaptic inputs arriving at a low frequency, < 10 Hz. Our hypothesis is that, in contrast to the well-studied, rapidly decaying AMPA and NMDA currents, I_TRPC is well suited for supporting temporal summation of such synaptic input. The reason for choosing this range of frequencies is that neurons often communicate with signals (spikes) around 8 Hz, as shown by single-unit recordings in behaving animals. This is true for several regions of the brain, including the entorhinal cortex (EC), which is known to play a key role in producing working memory function and enabling long-term memory formation in the hippocampus.

Although there is strong evidence suggesting that I_TRPC is important for neuronal communication, I have not encountered a systematic study of how this current contributes to synaptic integration. Since it is difficult to directly measure the electrical activity in dendritic branches using experimental techniques, I use computational modeling for this purpose. I implemented the components necessary for studying I_TRPC, including a detailed model of extrasynaptic glutamate concentration, mGlu1/5 dynamics and the TRPC channel itself. I tuned the model to replicate electrophysiological in vitro data from pyramidal neurons of the rodent EC, provided by our experimental collaborator. Since we were interested in the role of I_TRPC in temporal summation, a specific aim was to study how its decay time constant (τ_decay) is affected by synaptic stimulus parameters.

The hypothesis described above is supported by our simulation results, as we show that synaptic inputs arriving at frequencies as low as 3-4 Hz can be effectively summed. We also show that τ_decay increases with increasing stimulus duration and frequency, and that it is linearly dependent on the maximal glutamate concentration. Under some circumstances it was problematic to directly measure τ_decay, and we then used a paired-pulse paradigm to obtain an indirect estimate of it.

I am not aware of any computational modeling work prior to the current study that takes the synaptically evoked I_TRPC current into account, and I believe it is the first of its kind. We suggest that I_TRPC is important for slow synaptic integration, not only in the EC, but in several cortical and subcortical regions that contain mGlu1/5 and TRPC subunits, such as the prefrontal cortex. I will argue that this is further supported by studies using pharmacological blockers as well as studies on genetically modified animals.
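As a hedged toy illustration of the temporal-summation argument (not the detailed biophysical model used in the thesis; all parameter values are assumptions chosen only to show the effect), the following sketch compares a fast, AMPA-like current with a slowly decaying current driven by 4 Hz input.

```python
# Hedged sketch: a current with a long decay time constant summates across
# 4 Hz inputs, while a fast current returns to baseline between inputs.
import numpy as np

dt = 1.0                                         # ms
t = np.arange(0.0, 2000.0, dt)                   # 2 s of simulated time
spike_times = np.arange(100.0, 2000.0, 250.0)    # presynaptic inputs at 4 Hz

def current_trace(tau_decay_ms, amplitude=1.0):
    # each input adds a fixed increment that then decays exponentially
    i = np.zeros_like(t)
    for k in range(1, len(t)):
        i[k] = i[k - 1] * np.exp(-dt / tau_decay_ms)
        if np.any(np.isclose(t[k], spike_times)):
            i[k] += amplitude
    return i

i_fast = current_trace(tau_decay_ms=5.0)         # AMPA-like current
i_slow = current_trace(tau_decay_ms=1000.0)      # slow TRPC-like current
print("peak fast current:", round(i_fast.max(), 2))   # ~1: no summation at 4 Hz
print("peak slow current:", round(i_slow.max(), 2))   # >1: effective temporal summation
```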
170

Development and application of image analysis techniques to study structural and metabolic neurodegeneration in the human hippocampus using MRI and PET

Bishop, Courtney Alexandra January 2012 (has links)
Despite the association between hippocampal atrophy and a vast array of highly debilitating neurological diseases, such as Alzheimer’s disease and frontotemporal lobar degeneration, tools to accurately and robustly quantify the degeneration of this structure still largely elude us. In this thesis, we firstly evaluate previously-developed hippocampal segmentation methods (FMRIB’s Integrated Registration and Segmentation Tool (FIRST), Freesurfer (FS), and three versions of a Classifier Fusion (CF) technique) on two clinical MR datasets, to gain a better understanding of the modes of success and failure of these techniques, and to use this acquired knowledge for subsequent method improvement (e.g., FIRSTv3). Secondly, a fully automated, novel hippocampal segmentation method is developed, termed Fast Marching for Automated Segmentation of the Hippocampus (FMASH). This combined region-growing and atlas-based approach uses a 3D Sethian Fast Marching (FM) technique to propagate a hippocampal region from an automatically-defined seed point in the MR image. Region growth is dictated by both subject-specific intensity features and a probabilistic shape prior (or atlas). Following method development, FMASH is thoroughly validated on an independent clinical dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), with an investigation of the dependency of such atlas-based approaches on their prior information. In response to our findings, we subsequently present a novel label-warping approach to effectively account for the detrimental effects of using cross-dataset priors in atlas-based segmentation. Finally, a clinical application of MR hippocampal segmentation is presented, with a combined MR-PET analysis of wholefield and subfield hippocampal changes in Alzheimer’s disease and frontotemporal lobar degeneration. This thesis therefore contributes both novel computational tools and valuable knowledge for further neurological investigations in both the academic and the clinical field.
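As a hedged, much-simplified sketch of the fast-marching idea behind FMASH (not the thesis implementation; the synthetic image, speed map, shape prior, seed point, and arrival-time threshold are all assumptions for illustration), the following Python code propagates a front from a seed voxel with scikit-fmm and keeps everything reached within a travel-time budget.

```python
# Hedged sketch: region growing by fast marching, with a speed map combining
# intensity similarity to the seed and a crude probabilistic shape prior.
import numpy as np
import skfmm   # scikit-fmm

rng = np.random.default_rng(2)
image = rng.normal(0.0, 0.1, size=(64, 64))
image[20:40, 25:45] += 1.0                       # bright blob standing in for the structure
prior = np.zeros_like(image)
prior[18:42, 23:47] = 1.0                        # crude probabilistic shape prior

seed = (30, 35)                                  # seed point (assumed here, not auto-detected)
phi = np.ones_like(image)
phi[seed] = -1.0                                 # zero level set enclosing the seed

# front speed: high where intensity resembles the seed and the prior is supportive
intensity_term = np.exp(-((image - image[seed]) ** 2) / (2 * 0.2 ** 2))
speed = intensity_term * (0.1 + 0.9 * prior)

arrival = skfmm.travel_time(phi, speed)          # fast-marching arrival times from the seed
segmentation = arrival < 15.0                    # keep pixels reached within the time budget
print("segmented pixels:", int(segmentation.sum()))
```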
