111

Exploration of hierarchical leadership and connectivity in neural networks in vitro.

Ham, Michael I. 12 1900 (has links)
Living neural networks are capable of processing information much faster than a modern computer, despite running at significantly lower clock speeds. Therefore, understanding the mechanisms neural networks utilize is an issue of substantial importance. Neuronal interaction dynamics were studied using histiotypic networks growing on microelectrode arrays in vitro. Hierarchical relationships were explored using bursting dynamics (episodes in which many neurons fire within a short time frame), pairwise neuronal activation, and information-theoretic measures. Together, these methods reveal that global network activity results from ignition by a small group of burst leader neurons, which form a primary circuit responsible for initiating most network-wide burst events. Phase delays between leaders and followers reveal information about the nature of the connection between the two. Physical distance from a burst leader appears to be an important factor in follower response dynamics. Information theory reveals that mutual information between neuronal pairs is also a function of physical distance. Activation relationships in developing networks were studied, and plating density was found to play an important role in the development of network connectivity. These measures provide unique views of network connectivity and hierarchical relationships in vitro, which should be included in biologically meaningful models of neural networks.
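The pairwise information-theoretic analysis described above can be illustrated with a minimal sketch. This is not the author's code: the binned binary spike matrix, the burst-detection rule, and the leader-crediting heuristic below are assumptions chosen only to show the shape of such an analysis.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Mutual information (in bits) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Hypothetical data: a binary spike matrix, one row per recorded unit, 10 ms bins.
rng = np.random.default_rng(0)
spikes = (rng.random((8, 5000)) < 0.05).astype(int)
n_units = spikes.shape[0]

# Pairwise MI matrix, e.g. to relate shared information to electrode distance.
mi = np.zeros((n_units, n_units))
for i in range(n_units):
    for j in range(i + 1, n_units):
        mi[i, j] = mi[j, i] = mutual_information(spikes[i], spikes[j])

# Crude burst-leader tally: bins where most units are co-active mark bursts;
# units active in the bin just before a burst onset are credited as leaders.
burst_bins = np.where(spikes.sum(axis=0) >= 0.5 * n_units)[0]
onsets = burst_bins[np.insert(np.diff(burst_bins) > 1, 0, True)] if burst_bins.size else burst_bins
leader_counts = sum(spikes[:, t - 1] for t in onsets if t > 0)
```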
112

Classification of Neuronal Subtypes in the Striatum and the Effect of Neuronal Heterogeneity on the Activity Dynamics / Klassificering av neuronala subtyper i striatum och effekten av neuronal heterogenitet på aktivitetsdynamiken

Bekkouche, Bo January 2016 (has links)
Clustering of single-cell RNA sequencing data is often used to reveal the states and subtypes of cells. Using this technique, striatal cells were clustered into subtypes with different clustering algorithms. Previously known subtypes were confirmed and new subtypes were found, one of them a third medium spiny neuron subtype. Building on the observed heterogeneity, the second part of this project asks whether differences between individual neurons have an impact on network dynamics. Clustering the spiking activity of a neural network model gave inconclusive results, with both algorithms indicating low heterogeneity; however, when the quantity of one subtype was varied between a low and a high number and the network activity was clustered in each case, the results indicate an increase in heterogeneity. This project presents a list of potential striatal subtypes and gives reasons to keep paying attention to biologically observed heterogeneity.
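A minimal sketch of the kind of clustering workflow described above, assuming a generic PCA-plus-k-means pipeline in scikit-learn. The expression matrix, parameter values, and marker-gene step are illustrative assumptions, not the algorithms or data used in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical expression matrix: rows = cells, columns = genes (counts).
rng = np.random.default_rng(1)
counts = rng.poisson(2.0, size=(500, 2000)).astype(float)

# Standard preprocessing: log-normalise, scale, reduce dimensionality.
log_norm = np.log1p(counts / counts.sum(axis=1, keepdims=True) * 1e4)
scaled = StandardScaler().fit_transform(log_norm)
pcs = PCA(n_components=20, random_state=0).fit_transform(scaled)

# Cluster cells into putative subtypes; the number of clusters is a free choice.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pcs)

# Marker-based interpretation would follow, e.g. inspecting Drd1/Drd2
# expression per cluster to separate classical MSN subtypes from a candidate
# third one.
```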
113

Experimental demonstration of single neuron specificity during underactuated neurocontrol

Brown, Samuel Garrett 29 September 2020 (has links)
Population-level neurocontrol has been advanced predominantly through the miniaturization of hardware, such as MEMS-based electrodes. However, miniaturization alone may not be viable as a method for single-neuron resolution control within large ensembles, as it is typically infeasible to create electrode densities approaching 1:1 ratios with the neurons whose control is desired. That is, even advanced neural interfaces will likely remain underactuated, in that there will be fewer inputs (electrodes) within a given area than there are outputs (neurons). A complementary “software” approach could allow individual electrodes to independently control multiple neurons simultaneously, to improve performance beyond naïve hardware limits. An underactuated control schema, demonstrated in theoretical analysis and simulation (Ching & Ritt, 2013), uses stimulus strength-duration tradeoffs to activate a target neuron while leaving non-targets inactive. Here I experimentally test this schema in vivo, by independently controlling pairs of cortical neurons receiving common optogenetic input, in anesthetized mice. With this approach, neurons could be specifically and independently controlled following a short (~3 min) identification procedure. However, drift in neural responsiveness limited the performance over time. I developed an adaptive control procedure that fits stochastic Integrate and Fire (IAF) models to blocks of neural recordings, based on the deviation of expected from observed spiking, and selects optimal stimulation parameters from the updated models for subsequent blocks. I find that the adaptive approach can maintain control over long time periods (>20 minutes) in about 30% of tested candidate neuron pairs. Because stimulation distorts the observation of neural activity, I further analyzed the influence of various forms of spike-sorting corruption, and proposed methods to compensate for their effects on neural control systems. Overall, these results demonstrate the feasibility of underactuated neurocontrol for in vivo applications as a method for increasing the controllable population of high-density neural interfaces.
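The strength-duration tradeoff at the heart of this control schema can be sketched with the classical Lapicque curve: each neuron's threshold current falls with pulse duration according to its own rheobase and chronaxie, so a pulse shape can sometimes be found that is suprathreshold for the target and subthreshold for the non-target. The rheobase and chronaxie values below are invented for illustration and are not taken from the experiments.

```python
import numpy as np

def threshold_current(duration_ms, rheobase, chronaxie_ms):
    """Lapicque strength-duration curve: threshold amplitude vs. pulse duration."""
    return rheobase * (1.0 + chronaxie_ms / duration_ms)

# Hypothetical pair of neurons sharing one stimulation channel but with
# different excitability parameters (values are illustrative only).
target = dict(rheobase=0.8, chronaxie_ms=1.5)
nontarget = dict(rheobase=1.0, chronaxie_ms=0.6)

durations = np.linspace(0.1, 5.0, 200)
th_target = threshold_current(durations, **target)
th_nontarget = threshold_current(durations, **nontarget)

# Feasible (duration, amplitude) pairs lie above the target's threshold but
# below the non-target's, so one shared input drives only the target.
margin = th_nontarget - th_target
best = int(np.argmax(margin))
if margin[best] > 0:
    amp = 0.5 * (th_target[best] + th_nontarget[best])  # midpoint for robustness
    print(f"stimulate with {durations[best]:.2f} ms pulses at amplitude {amp:.2f}")
else:
    print("no single pulse shape separates this pair")
```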
114

Moving in time: a neural network model of rhythm-based motor sequence performance

Zeid, Omar Mohamed 05 September 2019 (has links)
Many complex actions are composed by sequencing simpler motor actions. For such a complex action to be executed accurately, those simpler actions must be planned in the desired order, held in working memory, and then enacted one by one until the sequence is complete. Examples of this phenomenon include writing, typing, and speaking. Under most circumstances, the ability to learn and reproduce novel motor sequences is hindered when additional information is presented. However, in cases where the motor sequence is musical in nature (e.g. a choreographed dance or a piano melody), one must learn two sequences at the same time, one of motor actions and one of the time intervals between actions. Despite this added complexity, humans learn and perform rhythm-based motor sequences regularly. It has been shown that people can learn motoric and rhythmic sequences separately and then combine them with little trouble (Ullén & Bengtsson, 2003). Also, functional MRI data suggest that there are distinct sets of neural regions responsible for the two different sequence types (Bengtsson et al., 2004). Although research on musical rhythm is extensive, few computational models exist to extend and inform our understanding of its neural bases. To that end, this dissertation introduces the TAMSIN (Timing And Motor System Integration Network) model, a systems-level neural network model designed to replicate rhythm-based motor sequence performance. TAMSIN utilizes separate Competitive Queuing (CQ) modules for motoric and temporal sequences, as well as modules designed to coordinate these sequence types into a cogent output performance consistent with a perceived beat and tempo. Chapters 1-4 explore prior literature on CQ architectures, rhythmic perception/production, and computational modeling, thereby illustrating the need for a model to tie those research areas together. Chapter 5 details the structure of the TAMSIN model and its mathematical specification. Chapter 6 presents and discusses the results of the model simulated under various circumstances. Chapter 7 compares the simulation results to behavioral and imaging results from the experimental literature. The final chapter discusses future modifications that could be made to TAMSIN to simulate aspects of rhythm learning, rhythm perception, and disordered productions, such as those seen in Parkinson’s disease.
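Competitive Queuing itself can be illustrated in a few lines. This is a generic, textbook-style sketch (primacy gradient, winner-take-all choice, delete-after-selection), not the TAMSIN implementation; the item names and activation values are assumptions.

```python
import numpy as np

# Competitive Queuing in miniature: a parallel plan holds all items at once
# with a primacy gradient (earlier items more active); a winner-take-all
# choice process repeatedly picks the strongest item and then suppresses it.
items = ["press1", "press2", "press3", "press4"]
plan = np.array([1.0, 0.8, 0.6, 0.4])       # hypothetical primacy gradient

produced = []
activation = plan.copy()
while np.any(activation > 0):
    winner = int(np.argmax(activation))      # competitive choice
    produced.append(items[winner])
    activation[winner] = 0.0                 # delete-after-selection
print(produced)                              # ['press1', 'press2', 'press3', 'press4']

# In a TAMSIN-style model, a second queue of inter-response intervals would
# gate *when* each selected item is released, aligned to a beat signal.
```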
115

A Computational Model of Adaptive Sensory Processing in the Electroreception of Mormyrid Electric Fish

Agmon, Eran 01 January 2011 (has links)
Electroreception is a sensory modality found in some fish, which enables them to sense the environment through the detection of electric fields. Biological experimentation on this ability has identified many of the components involved in producing electroreception, but lacks a framework for bringing those details back together into a system-level model of how they operate together. This thesis builds and tests a computational model of the Electrosensory Lateral Line Lobe (ELL) in mormyrid electric fish in an attempt to bring some of electroreception's structural details together to help explain its function. The ELL is a brain region that functions as a primary processing area for electroreception. It acts as an adaptive filter that learns to predict reoccurring stimuli and removes them from its sensory stream, passing only novel inputs to other brain regions for further processing. By creating a model of the ELL, the relevant components underlying the ELL's functional, electrophysiological patterns can be identified and scientific hypotheses regarding their behavior can be tested. A systems-science approach is adopted to identify the ELL's relevant components and bring them together into a unified conceptual framework. The methodological framework of computational neuroscience is used to create a computational model of this structure of relevant components and to simulate their interactions. Individual activation tendencies of the included cell types are modeled with dynamical-systems equations and are interconnected according to the connectivity of the real ELL. Several of the ELL's input patterns are modeled and incorporated into the model. The computational approach claims that if all of the relevant components of a system are captured and interconnected accurately in a computer program, then, when provided with accurate representations of the inputs, a simulation should produce functional patterns similar to those of the real system. The patterns generated by the ELL model are compared to recordings from real mormyrid ELLs, and the degree of correspondence supports or undermines the model's validity. By building a computational model that captures the relevant components of the ELL's structure and reproduces its function through simulation, a systems-level understanding begins to emerge, leading to a description of how the ELL's structure, together with relevant inputs, generates its function. The model can be manipulated more easily than a biological ELL, allowing us to test hypotheses about how changes in structure affect function and how different inputs propagate through the structure to produce complex functional patterns.
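The ELL's adaptive-filter function can be caricatured with a simple cancellation loop. This is a hedged, generic sketch of negative-image learning, not the dissertation's dynamical-systems model; all signals, constants, and the learning rule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_bins = 400, 50                      # trials x time bins per discharge cycle

# Hypothetical sensory input: a fixed, self-generated component that recurs
# on every cycle, plus a novel external signal on one trial and some noise.
predictable = np.sin(np.linspace(0, 2 * np.pi, n_bins))
sensory = np.tile(predictable, (T, 1)) + 0.1 * rng.standard_normal((T, n_bins))
sensory[250, 20:30] += 1.5               # a novel stimulus on trial 250

# Anti-Hebbian-style update: build a "negative image" of whatever reliably
# follows the motor command, so only unpredicted input is passed on.
negative_image = np.zeros(n_bins)
lr = 0.05
output = np.zeros_like(sensory)
for t in range(T):
    output[t] = sensory[t] + negative_image   # corollary-discharge correction
    negative_image -= lr * output[t]          # learn to cancel the residual

# After learning, the output on ordinary trials is near zero, while the novel
# stimulus on trial 250 passes through largely unattenuated.
```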
116

Data augmentation and image understanding / Datenerweiterung und Bildverständnis

Hernandez-Garcia, Alex 18 February 2022 (has links)
Interdisciplinary research is often at the core of scientific progress. This dissertation explores some advantageous synergies between machine learning, cognitive science and neuroscience. In particular, this thesis focuses on vision and images. The human visual system has been widely studied from both behavioural and neuroscientific points of view, as vision is the dominant sense of most people. In turn, machine vision has also been an active area of research, currently dominated by the use of artificial neural networks. This work focuses on learning representations that are more aligned with visual perception and biological vision. For that purpose, I have drawn on tools and insights from cognitive science and computational neuroscience, and attempted to incorporate them into machine learning models of vision. A central subject of this dissertation is data augmentation, a technique commonly used in training artificial neural networks that enlarges the effective size of a data set through transformations of the images. Although often overlooked, data augmentation implements transformations that are perceptually plausible, since they correspond to the transformations we see in our visual world, such as changes in viewpoint or illumination. Furthermore, neuroscientists have found that the brain represents objects invariantly under these transformations. Throughout this dissertation, I use these insights to analyse data augmentation as a particularly useful inductive bias, as a more effective regularisation method for artificial neural networks, and as a framework to analyse and improve the invariance of vision models to perceptually plausible transformations. Overall, this work aims to shed more light on the properties of data augmentation and demonstrate the potential of interdisciplinary research.
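A minimal example of the kind of perceptually plausible augmentation pipeline discussed above, assuming torchvision transforms. The specific transformations and parameter values are illustrative choices, not those used in the dissertation.

```python
import numpy as np
from PIL import Image
from torchvision import transforms

# Perceptually plausible augmentations: the kinds of viewpoint and
# illumination changes under which object identity is invariant.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # viewpoint / scale
    transforms.RandomHorizontalFlip(p=0.5),                # mirror symmetry
    transforms.ColorJitter(brightness=0.3, contrast=0.3),  # illumination
    transforms.ToTensor(),
])

# A dummy image stands in for a training sample; in practice the transform is
# applied on the fly inside the data loader, so the data set is effectively
# enlarged without storing any new images.
dummy = Image.fromarray(np.uint8(np.random.rand(256, 256, 3) * 255))
batch = [augment(dummy) for _ in range(8)]   # eight distinct views of one image
```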
117

MASSIVELY DISTRIBUTED NEUROMORPHIC CONTROL FOR LEGGED ROBOTS MODELED AFTER INSECT STEPPING

Szczecinski, Nicholas S. 12 March 2013 (has links)
No description available.
118

Framework for In-Silico Neuromodulatory Peripheral Nerve Electrode Experiments to Inform Design and Visualize Mechanisms

Nathaniel L Lazorchak (16641687) 30 August 2023 (has links)
The nervous system exists as our interface to the world, both integrating and interpreting sensory information and coordinating voluntary and involuntary movements. Given its importance, it has become a target for neuromodulatory therapies. The research needed to develop these therapies cannot be done purely on living tissue: the animals, manpower, and equipment required make that cost prohibitive, and, given the cost in life, it would be unethical not to search for alternatives. Computational modeling, the use of mathematics and modern computational power to simulate phenomena, has sought to provide such an alternative since the work of Hodgkin and Huxley in 1952. These models, though they cannot yet replace in-vivo and in-vitro experiments, can ease the burden on living tissue and provide details that are difficult or impossible to ascertain experimentally. This thesis iterates on previous frameworks for performing in-silico experiments for the purposes of mechanistic exploration and threshold prediction. To do so, an existing volume conductor model and a validated nerve-fiber model were joined, and a series of programs was developed around them to perform a set of in-silico experiments. The experiments are designed to predict how the thresholds of behaviors elicited by bioelectric neuromodulation change with parametric changes in the experimental setup, and to explore the mechanisms behind bioelectric neuromodulation, particularly surrounding the recently discovered Low Frequency Alternating Current (LFAC) waveform. The framework improves upon its predecessors through efficiency-oriented design and modularity, allowing rapid simulation on consumer-grade computers. Results show a high degree of convergence with in-vivo experimental results, such as mechanistic alignment with LFAC findings and thresholds within an order of magnitude of in-vivo pulse-stimulation thresholds for equivalent in-vivo and in-silico experimental designs.
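One piece of such a framework, threshold prediction, can be sketched generically as a bisection search over stimulus amplitude. The `fiber_fires` stand-in below is a hypothetical placeholder for running the coupled volume-conductor and nerve-fiber simulation at a given amplitude; it is not the thesis's actual interface.

```python
def find_threshold(fiber_fires, lo=0.0, hi=10.0, tol=1e-3):
    """Bisection search for the smallest stimulus amplitude that activates a
    fiber model. `fiber_fires(amp) -> bool` stands in for simulating the
    extracellular potentials and the fiber's response at that amplitude."""
    if not fiber_fires(hi):
        raise ValueError("upper bound never activates the fiber")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fiber_fires(mid):
            hi = mid          # activation: threshold is at or below mid
        else:
            lo = mid          # no activation: threshold is above mid
    return hi

# Toy stand-in model: fires above a hidden threshold of 2.37 (illustrative only).
print(find_threshold(lambda amp: amp >= 2.37))   # ~2.37
```

Sweeping this search over electrode geometry, waveform, or fiber diameter is what turns a single threshold estimate into the parametric in-silico experiments described above.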
119

Machine Learning Algorithms for Pattern Discovery in Spatio-temporal Data With Application to Brain Imaging Analysis

Asadi, Nima, 0000-0002-5102-6927 January 2022 (has links)
Temporal networks have become increasingly pervasive in many real-world applications. Due to the existence of diverse and evolving entities in such networks, understanding their structure and characterizing patterns in them is a complex task. A prime real-world example of such networks is the functional connectivity of the brain. These networks are commonly generated by measuring the statistical relationship between the blood oxygenation level-dependent (BOLD) signals of spatially separate regions of the brain over the course of an experiment, either while a task is performed or at rest in an MRI scanner. Due to certain characteristics of fMRI data, such as high dimensionality and high noise levels, extracting spatio-temporal patterns from such networks is a complicated task. Therefore, state-of-the-art data-driven analytical methods need to be developed and employed for this domain. In this thesis, we propose methodological tools within the area of spatio-temporal pattern discovery to explore and address several questions in the domain of computational neuroscience. One important objective in neuroimaging research is the detection of informative brain regions for characterizing the distinction between the activation patterns of the brain among groups with different cognitive conditions. Popular approaches for achieving this goal include multivariate pattern analysis (MVPA), regularization-based methods, and other machine learning based approaches. However, these approaches suffer from a number of limitations, such as the requirement for manual parameter tuning as well as incorrect identification of truly informative regions in certain cases. We therefore propose a maximum relevance minimum redundancy search algorithm to alleviate these limitations while increasing the precision of detection of informative activation clusters. The second question that this thesis addresses is how to detect the temporal ties in a dynamic connectivity network that are not formed at random or due to local properties of the nodes. To explore the solution to this problem, a null model is proposed that estimates the latent characteristics of the distributions of the temporal links through optimization, followed by a statistical test to filter the links whose formation can be reduced to the local properties of their interacting nodes. We demonstrate the benefits of this approach by applying it to a real resting-state fMRI dataset, and provide further discussion on various aspects and advantages of it. Lastly, this dissertation delves into the task of learning a spatio-temporal representation to discover contextual patterns in evolving structured data. For this purpose, a representation learning approach based on the transformer model is proposed to extract spatio-temporal contextual information from fMRI data. Representation learning is a core component in data-driven modeling of various complex phenomena. Learning a contextually informative set of features can especially benefit the analysis of fMRI data due to the complexities and dynamic dependencies present in such datasets. The proposed framework simultaneously takes the multivariate BOLD time series of the regions of the brain and their functional connectivity network as input to create a set of meaningful features, which can in turn be used in various downstream tasks such as classification, feature extraction, and statistical analysis. This architecture uses the attention mechanism as well as graph convolutional neural networks to jointly inject contextual information about the dynamics of the time series and their connectivity into the representation. The benefits of this framework are demonstrated by applying it to two resting-state fMRI datasets, and further discussion is provided on various aspects and advantages of it over a number of commonly adopted architectures. / Computer and Information Science
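The maximum-relevance minimum-redundancy idea mentioned above can be sketched with a generic greedy selector. This is not the proposed algorithm from the thesis; the data, mutual-information estimators, and parameter choices are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    """Greedy maximum-relevance minimum-redundancy feature selection."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)      # MI(feature; label)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # Average MI between candidate j and already-selected features.
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                for s in selected
            ])
            score = relevance[j] - redundancy                   # mRMR criterion
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Hypothetical region-level features and group labels (two cognitive conditions).
rng = np.random.default_rng(3)
X = rng.standard_normal((80, 30))
y = (X[:, 4] + X[:, 9] + 0.5 * rng.standard_normal(80) > 0).astype(int)
print(mrmr_select(X, y, k=5))   # should tend to recover features 4 and 9
```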
120

Integrating statistical and machine learning approaches to identify receptive field structure in neural populations

Sarmashghi, Mehrad 17 January 2023 (has links)
Neural coding is essential for understanding how the activity of individual neurons or ensembles of neurons relates to cognitive processing of the world. Neurons can code for multiple variables simultaneously and neuroscientists are interested in classifying neurons based on the variables they represent. Building a model identification paradigm to identify neurons in terms of their coding properties is essential to understanding how the brain processes information. Statistical paradigms are capable of methodologically determining the factors influencing neural observations and assessing the quality of the resulting models to characterize and classify individual neurons. However, as neural recording technologies develop to produce data from massive populations, classical statistical methods often lack the computational efficiency required to handle such data. Machine learning (ML) approaches are known for enabling efficient large scale data analysis; however, they require huge training data sets, and model assessment and interpretation are more challenging than for classical statistical methods. To address these challenges, we develop an integrated framework, combining statistical modeling and machine learning approaches to identify the coding properties of neurons from large populations. In order to evaluate our approaches, we apply them to data from a population of neurons in rat hippocampus and prefrontal cortex (PFC), to characterize how spatial learning and memory processes are represented in these areas. The data consist of local field potentials (LFP) and spiking data simultaneously recorded from the CA1 region of hippocampus and the PFC of a male Long Evans rat performing a spatial alternation task on a W-shaped track. We have examined this data in three separate but related projects. In one project, we build an improved class of statistical models for neural activity by expanding a common set of basis functions to increase the statistical power of the resulting models. In the second project, we identify the individual neurons in hippocampus and PFC and classify them based on their coding properties by using statistical model identification methods. We found that a substantial proportion of hippocampus and PFC cells are spatially selective, with position and velocity coding, and rhythmic firing properties. These methods identified clear differences between hippocampal and prefrontal populations, and allowed us to classify the coding properties of the full population of neurons in these two regions. For the third project, we develop a supervised machine learning classifier based on convolutional neural networks (CNNs), which use classification results from statistical models and additional simulated data as ground truth signals for training. This integration of statistical and ML approaches allows for statistically principled and computationally efficient classification of the coding properties of general neural populations.
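A minimal sketch of a basis-function point-process model of the kind described above, assuming a Poisson GLM with Gaussian position bases. The data, basis choice, and regularisation are illustrative assumptions rather than the models developed in the dissertation.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Hypothetical data: spike counts in 100 ms bins plus the animal's linearised
# position on the track in each bin (all values are illustrative).
rng = np.random.default_rng(4)
n_bins = 2000
position = rng.uniform(0, 100, n_bins)                        # cm along the track
true_rate = 2.0 * np.exp(-0.5 * ((position - 40.0) / 8.0) ** 2) + 0.1
spikes = rng.poisson(true_rate)

def gaussian_basis(x, centers, width=10.0):
    """Gaussian bumps tiling position: a simple stand-in for the expanded
    basis-function sets discussed in the abstract."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

centers = np.linspace(0, 100, 12)
X = gaussian_basis(position, centers)

# Poisson GLM: the log firing rate is a weighted sum of the basis functions.
glm = PoissonRegressor(alpha=1e-3, max_iter=1000).fit(X, spikes)

# Reconstruct the estimated place field on a grid; comparing such models with
# and without position/velocity covariates is one way to classify a cell's
# coding properties.
grid = np.linspace(0, 100, 200)
place_field = glm.predict(gaussian_basis(grid, centers))
```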
