
ON GEOMETRIC AND ALGEBRAIC PROPERTIES OF HUMAN BRAIN FUNCTIONAL NETWORKS

Duy Duong-Tran, 19 April 2022
It was only in the last decade that magnetic resonance imaging (MRI) technologies achieved the quality needed for comprehensive assessments of individual human brain structure and function. One of the most important advances, put forth by Thomas Yeo and colleagues in 2011, was the set of intrinsic functional connectivity MRI (fcMRI) networks, which are highly reproducible and appear consistently across individual brains. This dissertation aims to unravel different characteristics of human brain fcMRI networks: separately, through network morphospace, and collectively, through stochastic block models.

Quantifying human brain functional (re-)configurations across varying cognitive demands remains an unresolved problem, since such reconfigurations are rather subtle at the whole-brain level. Hence, we propose a mesoscopic framework focused on functional networks (FNs), or communities, to quantify them. To do so, we introduce a 2D network morphospace that relies on two novel mesoscopic metrics, trapping efficiency (TE) and exit entropy (EE). We use this framework to quantify network configural breadth across different tasks, and show that it significantly predicts behavioral measures such as episodic memory, verbal episodic memory, fluid intelligence, and general intelligence.

Properly estimating and assessing whole-brain functional connectomes (FCs) is among the most challenging tasks in computational neuroscience. Among the steps in constructing large-scale brain networks, thresholding statistically spurious edges in FCs is the most critical, yet state-of-the-art thresholding methods are largely ad hoc. Meanwhile, a dominant proportion of brain connectomics research relies heavily on an a priori set of highly reproducible human brain functional sub-circuits (functional networks) without properly considering whether a given FN is information-theoretically relevant to a given FC. Leveraging recent theoretical developments in stochastic block models (SBMs), we first formally define and then quantify the information-theoretic prominence of an a priori set of FNs across subjects and fMRI task conditions for any given input FC. The main contribution of this work is an automated method for thresholding individual FCs based on prior knowledge of human brain functional sub-circuitry.
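The thresholding problem the abstract describes can be made concrete with a minimal baseline. The sketch below implements the kind of ad hoc proportional thresholding the dissertation argues against: keep only the strongest fraction of edges in a correlation matrix. The function name and `density` parameter are illustrative, not from the thesis, whose SBM-based method is considerably more involved.

```python
import numpy as np

def threshold_fc(fc, density=0.2):
    """Keep the strongest `density` fraction of edges (by absolute weight)
    in a symmetric functional connectome and zero out the rest.
    A simple ad hoc baseline, not the thesis's SBM-based method."""
    n = fc.shape[0]
    # Work on the upper triangle only (undirected graph, no self-loops).
    iu = np.triu_indices(n, k=1)
    weights = np.abs(fc[iu])
    k = max(1, int(density * len(weights)))
    cutoff = np.sort(weights)[-k]  # k-th largest edge weight
    keep = np.abs(fc) >= cutoff
    mask = np.zeros_like(fc, dtype=bool)
    mask[iu] = keep[iu]
    mask = mask | mask.T  # restore symmetry
    return np.where(mask, fc, 0.0)
```

Swapping the fixed `cutoff` for a statistically principled, per-subject criterion is precisely the gap the dissertation's automated method addresses.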

Deep-Learning Architectures for Analysing Population Neural Data

Houška, Petr, January 2021
Accurate models of the visual system are key to understanding how our brains process visual information. In recent years, deep neural networks (DNNs) have been rapidly gaining traction in this domain. However, only a few studies have attempted to incorporate known anatomical properties of the visual system into standard DNN architectures adapted from the general machine learning field, in order to improve their interpretability and performance on visual data. In this thesis, we optimize a recent biologically inspired deep learning architecture designed for the analysis of population data recorded from mammalian primary visual cortex in response to natural-image stimuli. We reimplement this prior model in the existing neuroscience-focused deep learning framework NDN3 and assess its stability and sensitivity to hyperparameters and architecture fine-tuning. We then extend the model with components of various DNN models, analysing novel combinations and techniques from classical computer-vision deep learning and comparing their effectiveness against the bio-inspired components. We were able to identify modifications that greatly increase the stability of the model while securing moderate improvement in overall performance. Furthermore, we document the importance of small hyperparameter adjustments versus...

Universality and Individuality in Recurrent Networks Extended to Biologically Inspired Networks

Joshi, Nishant, January 2020
Activities in the motor cortex are dynamical in nature. Modeling these activities and comparing the models with neural recordings helps in understanding the underlying mechanisms that generate them. For this purpose, recurrent neural networks (RNNs) have emerged as an appropriate tool, but a clear understanding of how the design choices associated with these networks affect the learned dynamics and internal representations still remains elusive. A previous study of discrete-time RNN architectures (LSTM, UGRNN, GRU, and vanilla RNN) found that dynamical properties such as fixed-point topology and linearized dynamics remain invariant across architectures trained on a 3-bit flip-flop task, while each architecture exhibits a distinct representational geometry. The goal of this work is to understand whether these observations also hold for networks that are more biologically realistic in terms of neural activity. We therefore analyze rate networks, which have continuous dynamics and biologically realistic connectivity constraints, and spiking neural networks, in which neurons communicate via discrete spikes as observed in the brain. We reproduce the aforementioned study for the discrete architectures and then show that the fixed-point topology and linearized dynamics remain invariant for the rate networks, but that the methods are insufficient for finding the fixed points of spiking networks. The representational geometries of the rate and spiking networks differ from those of the discrete architectures but are very similar to each other, although a small subset of the discrete architectures (LSTM) is close in representation to the rate networks. We show that although these network architectures, with their varying degrees of biological realism, have individual internal representations, the underlying dynamics while performing the task are universal. We also observe that some discrete networks share both representational similarities and dynamics with rate networks; such networks can therefore be good candidates for reproducing and examining the dynamics of rate networks.
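The fixed-point analysis this abstract builds on is typically done numerically, by minimizing the state-update "speed" q(h) = ½‖F(h) − h‖² from many initial states. A minimal sketch for a black-box discrete-time update, using a numerical gradient; the optimizer, step size, and tolerances are illustrative, not those of the study:

```python
import numpy as np

def find_fixed_points(step_fn, inits, lr=0.2, iters=3000, tol=1e-6, eps=1e-5):
    """Approximate fixed points h* of a discrete-time update h' = F(h)
    by gradient descent on the speed q(h) = 0.5 * ||F(h) - h||^2.
    `step_fn` maps a state vector to the next state; a central-difference
    gradient lets this work for any black-box update. Illustrative sketch."""
    def q(h):
        d = step_fn(h) - h
        return 0.5 * float(d @ d)

    fixed_points = []
    for h0 in inits:
        h = np.array(h0, dtype=float)
        for _ in range(iters):
            # central-difference estimate of dq/dh
            g = np.array([(q(h + eps * e) - q(h - eps * e)) / (2 * eps)
                          for e in np.eye(len(h))])
            h = h - lr * g
        if q(h) < tol:  # accept only candidates where the state barely moves
            fixed_points.append(h)
    return fixed_points
```

For the linear contraction F(h) = 0.5·h the unique fixed point is the origin, and the search converges there from any start; for a trained RNN one would pass the network's recurrent update as `step_fn` and seed `inits` with states visited during the task.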

Individual differences in structure learning

Newlin, Philip, 13 May 2022
Humans have a tendency to impute structure spontaneously, even in simple learning tasks; however, the way they approach structure learning can vary drastically. The present study sought to determine why individuals learn structure differently. One hypothesized explanation is individual differences in cognitive control, which allows individuals to maintain representations of a task and may interact with reinforcement learning systems. We expected that individual differences in the propensity to apply cognitive control, which shares component processes with hierarchical reinforcement learning, would explain how individuals learn structure differently in a simple structure learning task. Results showed that proactive control and model-based control explained differences in the rate at which individuals applied structure learning.

Neural Network Models for Neurophysiology Data

Bryan Jimenez, 25 October 2022
Over the last decade, measurement technology that records neural activity, such as ECoG and Utah arrays, has dramatically improved, giving researchers access to recordings from many neurons simultaneously. Efficient computational and statistical methods are required to analyze this kind of data successfully, and time-series models are one of the most common approaches. Unfortunately, even with all the advances in time-series modeling, such models often need massive amounts of data to achieve good results. This is especially true in neuroscience, where datasets are often limited, imposing constraints on the type and complexity of the models we can use; moreover, the signal-to-noise ratio tends to be lower than in other machine learning datasets. This paper introduces different architectures and techniques to overcome the constraints imposed by these small datasets, through two major experiments. (1) We strive to develop models for participants who lost the ability to speak by building upon the previous state-of-the-art model for decoding neural activity (ECoG data) into English text. (2) We introduce two new models, RNNF and Neural RoBERTa, which impute missing neural data from recordings (Utah arrays) of monkeys performing kinematic tasks. With the help of a novel data augmentation technique (dynamic masking), these new models outperformed state-of-the-art models such as the Neural Data Transformer (NDT) in the Neural Latents Benchmark competition.
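One plausible form of the "dynamic masking" augmentation mentioned above is to hide a fresh random subset of (time, neuron) bins every time a batch is drawn, so the model must learn to impute them. The sketch below shows that idea in its simplest form; the zero-fill choice, `mask_prob` value, and function name are illustrative assumptions, not the thesis's exact recipe.

```python
import numpy as np

def dynamic_mask(spikes, mask_prob=0.25, rng=None):
    """Hide a random subset of entries in a (time x neurons) spike-count
    array. Because the mask is resampled on every call, each training
    epoch sees a different corruption of the same trial ("dynamic" as
    opposed to a fixed mask). Returns (masked array, boolean mask).
    Illustrative sketch; details differ from the thesis's implementation."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(spikes.shape) < mask_prob
    masked = spikes.copy()
    masked[mask] = 0  # zero-fill; a learned mask token is another option
    return masked, mask
```

The imputation model is then trained to reconstruct the original counts at the masked positions only, analogous to masked-language-model pretraining.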

A Conserved Cortical Computation Revealed by Connecting Behavior to Whole-Brain Activity in C. elegans: An In Silico Systems Approach

Ryan, William George, V, 28 July 2022
No description available.

Chaos and Learning in Discrete-Time Neural Networks

Banks, Jess M., 27 October 2015
No description available.

Leveraging Whole Brain Imaging to Identify Brain Regions Involved in Alcohol Frontloading

Cherish Elizabeth Ardinger, 03 January 2024
Frontloading is an alcohol drinking pattern in which intake is skewed toward the onset of access. The goal of the current study was to identify brain regions involved in frontloading using whole-brain imaging. Sixty-three C57Bl/6J mice (32 female, 31 male) underwent 8 days of binge drinking using drinking-in-the-dark (DID). Three hours into the dark cycle, mice received 20% (v/v) alcohol or water for two hours on days 1-7. Intake was measured in 1-minute bins using volumetric sippers, which facilitated analyses of drinking patterns. Mice were perfused 80 minutes into the day 8 DID session, and brains were extracted and processed for iDISCO clearing and c-fos immunohistochemistry. For brain network analyses, day 8 drinking patterns were used to characterize mice as frontloaders or non-frontloaders using a change-point analysis described in our recent ACER publication (Ardinger et al., 2022). Groups were female frontloaders (n = 20), female non-frontloaders (n = 2), male frontloaders (n = 13), and male non-frontloaders (n = 8). There were no differences in total alcohol intake as a function of frontloading status. Water drinkers had an n of 10 for each sex. As only two female mice were characterized as non-frontloaders, it was not possible to construct a functional correlation network for this group. Following light-sheet imaging, ClearMap2.1 was used to register brains to the Allen Brain Atlas and detect fos+ cells. Functional correlation matrices were calculated for each group from log10 c-fos values. Euclidean distances were calculated from these R values, and hierarchical clustering was used to determine modules (highly connected groups of brain regions) at a tree-cut height of 50%. In males, alcohol access decreased modularity (3 modules in both frontloaders and non-frontloaders) compared to water drinkers (7 modules). In females, the opposite effect was observed: alcohol access increased modularity (9 modules) compared to water drinkers (5 modules). These results suggest sex differences in how alcohol consumption reorganizes the functional architecture of networks. Next, key brain regions in each network were identified. Connector hubs, which primarily facilitate communication between modules, and provincial hubs, which facilitate communication within modules, were of specific interest for their important and differing roles. In males, 4 connector hubs and 17 provincial hubs were uniquely identified in frontloaders (i.e., brain regions that did not have this status in male non-frontloaders or water drinkers). These represented a group of hindbrain regions (e.g., the locus coeruleus and pontine gray) connected to striatal/cortical regions (e.g., the cortical amygdalar area) by the paraventricular nucleus of the thalamus. In females, 16 connector and 17 provincial hubs were uniquely identified, distributed across 8 of the 9 modules in the female alcohol-drinker network. Only one brain region (the nucleus raphe pontis) was a connector hub in both sexes, suggesting that frontloading in males and females may be driven by different brain regions. In conclusion, alcohol consumption led to fewer, but more densely connected, groups of brain regions in males but not females, and recruited different hub regions between the sexes. These results suggest target brain regions for future studies attempting to manipulate frontloading behavior, and more broadly contribute to the literature on alcohol's effects on neural networks.
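The module-detection pipeline described above (correlate log10 c-fos values across regions, convert to Euclidean distances, cluster hierarchically, cut the tree at 50% of its height) can be sketched as follows. Parameter names and the average-linkage choice are assumptions for illustration; the study's exact settings may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def detect_modules(cfos, cut_fraction=0.5):
    """Assign brain regions to modules from a (subjects x regions) array of
    c-fos counts: correlate log10 values across regions, convert R values
    to distances, hierarchically cluster, and cut the dendrogram at a
    fraction of its maximum height. Illustrative sketch of the pipeline."""
    logf = np.log10(cfos + 1.0)                    # +1 guards against zero counts
    r = np.corrcoef(logf, rowvar=False)            # region-by-region correlation
    d = np.sqrt(np.maximum(2.0 * (1.0 - r), 0.0))  # distance: high R -> small d
    np.fill_diagonal(d, 0.0)
    z = linkage(squareform(d, checks=False), method="average")
    height = cut_fraction * z[:, 2].max()          # 50% tree-cut by default
    return fcluster(z, t=height, criterion="distance")
```

Regions sharing a label form one module; connector and provincial hubs would then be identified from each region's between- versus within-module connectivity.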

Foundations of Memory Capacity in Models of Neural Cognition

Chowdhury, Chandradeep, 01 December 2023
A central problem in neuroscience is to understand how memories are formed as a result of the activities of neurons. Valiant's neuroidal model attempted to address this question by modeling the brain as a random graph and memories as subgraphs within that graph. However, the question of memory capacity within that model has not been explored: how many memories can the brain hold? Valiant introduced the concept of interference between memories as the defining factor for capacity; excessive interference signals that the model has reached capacity. Since then, exploration of capacity has been limited, though recent investigations have delved into the capacity of the Assembly Calculus, a derivative of Valiant's neuroidal model. In this paper, we provide rigorous definitions of capacity and interference and present theoretical formulations of memory capacity within a finite set, where subsets represent memories. We propose that these results can be adapted to suit both the neuroidal model and the Assembly Calculus. Furthermore, we substantiate our claims with simulations that validate the theoretical findings. Our study aims to contribute essential insights into memory capacity in complex cognitive models, offering potential ideas for applications and extensions to contemporary models of cognition.
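The "memories as subsets, capacity limited by interference" framing can be illustrated with a toy simulation: store random subsets of a finite set until a new subset overlaps too heavily with one already stored. Every name and threshold below is illustrative, not the thesis's formal definition.

```python
import random

def capacity_until_interference(n_neurons, memory_size, max_overlap,
                                seed=0, limit=10000):
    """Toy simulation in the spirit of the abstract: memories are random
    size-`memory_size` subsets of `n_neurons` elements. A new memory
    'interferes' with a stored one when their overlap exceeds
    `max_overlap`; we return how many memories were stored before the
    first interference (capped at `limit`). Illustrative only."""
    rng = random.Random(seed)
    universe = range(n_neurons)
    stored = []
    for _ in range(limit):
        m = frozenset(rng.sample(universe, memory_size))
        if any(len(m & s) > max_overlap for s in stored):
            return len(stored)  # capacity reached: excessive interference
        stored.append(m)
    return len(stored)
```

Sweeping `max_overlap` or `memory_size` in such a simulation is one way to compare empirical capacity against the theoretical formulations the thesis derives.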

The role of the deep cerebellar nuclei in motor behaviors and locomotion

Khajeh, Ramin, January 2024
Computational methods in neuroscience have advanced our understanding of the neuronal regulation of motor behavior and locomotion and have been applied to identify the encoding of behavioral features in circuits. The cerebellum has an established role in sensorimotor processing during coordinated movements, and has been called the "head ganglion of the proprioceptive system" (Sherrington, 1906). Increasing evidence also highlights its role in processing behaviorally meaningful stimuli that can guide adaptive, task-relevant movements and prime downstream targets for action. Yet the extent to which these diverse signal encodings in complex motor tasks are present in the cerebellar nuclei, and their influence on behavior, remains unknown. To shed new light on the role of this subcortical region using computational approaches, this thesis begins with an introduction that reviews the circuitry of the mammalian cerebellum, highlights its proposed functions in motor behavior, and explores our understanding of its role in locomotion. In the first chapter, I analyze electrophysiological recordings from the cerebellar nuclei in a locomotor obstacle-avoidance task in mice that involves a rich and diverse set of task-relevant features. Given the complexity of, and correlations between, the behavioral features, statistical modeling is required to attribute firing rates to the correct combinations of features. This model identifies the encoding of these signals, reports the prevalence and degree to which they are present across individual cells in the nuclei, and allows investigation of groups of cells that are selective for specific features. Chapter 2 uses network modeling to generate hypotheses about population-level activity in two cortical areas, the primary and supplementary motor areas, and to differentiate their computations in monkeys performing a cycling task. Finally, in chapter 3 I concentrate on a specific class of recurrent network models in the balanced state and investigate the link between connectivity distribution and firing sparsity, which has the potential to further our understanding of the emergence of feature selectivity in excitatory/inhibitory circuits.
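Attributing firing rates to correlated behavioral features, as chapter 1 describes, is commonly done with a Poisson generalized linear model: spike counts are modeled as Poisson with rate exp(X·w), so each weight isolates one feature's contribution while controlling for the others. A minimal fit by gradient ascent on the log-likelihood, with all hyperparameters illustrative (the thesis's actual model is not specified here):

```python
import numpy as np

def fit_poisson_glm(X, y, lr=0.05, iters=5000):
    """Fit a Poisson GLM: counts y ~ Poisson(exp(b0 + X @ w)).
    The log-likelihood is concave, so plain gradient ascent suffices
    for this sketch. Returns [intercept, w1, w2, ...]."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        rate = np.exp(Xb @ w)
        grad = Xb.T @ (y - rate) / len(y)  # gradient of mean log-likelihood
        w += lr * grad
    return w
```

With correlated regressors, the fitted weights (unlike raw tuning curves) credit each behavioral feature only with the variance it uniquely explains, which is the point of using such a model here.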
