121. CANDID - A Neurodynamical Model of Idea Generation. Iyer, Laxmi R (19 April 2012)
No description available.
122. Neurologically Based Control for Quadruped Walking. Hunt, Alexander Jacob (27 January 2016)
No description available.
123. TACTILE SPATIAL ACUITY FROM CHILDHOOD INTO ADULTHOOD. Peters, Ryan M. (10 1900)
Measurement of human tactile spatial acuity – the ability to perceive the fine spatial structure of surfaces contacting our fingertips – provides a valuable tool for probing both the peripheral and central nervous system. However, measures of tactile spatial acuity have long been plagued by a prodigious amount of variability present between individuals in their sense of touch. Previously proposed sources of variability include sex and age; here we propose a novel source of variability: fingertip size. Building upon anatomical research, we hypothesize that mechanoreceptors are more sparsely distributed in larger fingers.

In this thesis, I provide empirical and theoretical support for the hypothesis that fingertip growth from childhood into adulthood sets up an apparent sex difference in human tactile spatial acuity during young adulthood (Chapter 2), and also predicts changes in acuity more strongly than does age over development (Chapter 3). To further understand how fingertip size could limit an individual's tactile spatial acuity, we develop an ideal observer model using neurophysiological data collected by other labs (Chapter 4).

In summary, this research provides support for a novel source of variability in the sense of touch: one that parsimoniously explains an apparent sex difference, and helps clarify the source of changes in tactile spatial acuity occurring with age during childhood.

Doctor of Philosophy (PhD)
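The fingertip-size hypothesis lends itself to a simple back-of-the-envelope model. The sketch below is not from the thesis and all constants are illustrative assumptions: it supposes a roughly fixed mechanoreceptor count per fingertip patch, so that receptor spacing, and with it the finest resolvable spatial detail, grows linearly with fingertip width.

```python
import numpy as np

def predicted_threshold(fingertip_width_mm, n_receptors=250, k=2.0):
    """Predicted spatial-resolution threshold (mm), assuming a fixed receptor
    count over an idealized square skin patch of the given width, so that
    inter-receptor spacing grows linearly with fingertip width.
    n_receptors and k are illustrative constants, not fitted values."""
    area = fingertip_width_mm ** 2           # idealized square patch
    spacing = np.sqrt(area / n_receptors)    # mean inter-receptor distance
    return k * spacing                       # threshold ~ k receptor spacings

for width in (12.0, 14.0, 16.0):             # child-to-adult fingertip widths
    print(f"width {width:4.1f} mm -> threshold {predicted_threshold(width):.2f} mm")
```

Under this assumption the predicted threshold is proportional to fingertip width, which is the qualitative pattern the thesis tests against sex and developmental data.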
124. Factors affecting the predictive ability of computational models of subthalamic deep brain stimulation. Bower, Kelsey L. (25 January 2022)
No description available.
125. Towards a Computational Theory of the Brain: The Simplest Neural Models, and a Hypothesis for Language. Mitropolsky, Daniel (January 2024)
Obtaining a computational understanding of the brain is one of the most important problems in basic science. However, the brain is an incredibly complex organ, and neurobiological research has uncovered enormous amounts of detail at almost every level of analysis (the synapse, the neuron, other brain cells, brain circuits, areas, and so on); it is unclear which of these details are conceptually significant to the basic way in which the brain computes. An essential approach to the eventual resolution of this problem is the definition and study of theoretical computational models, based on varying abstractions and inclusions of such details.
This thesis defines and studies a family of models, called NEMO, based on a particular set of well-established facts or well-founded assumptions in neuroscience: atomic neural firing, random connectivity, inhibition as a local dynamic firing threshold, and fully local plasticity. This thesis asks: what sort of algorithms are possible in these computational models? To the extent possible, what seem to be the simplest assumptions where interesting computation becomes possible? Additionally, can we find algorithms for cognitive phenomena that, in addition to serving as a "proof of capacity" of the computational model, otherwise reflect what is known about these processes in the brain? The major contributions of this thesis include:
1. The formal definition of the basic-NEMO and NEMO models, with an explication of their neurobiological underpinnings (that is, realism as abstractions of the brain).
2. Algorithms for the creation of neural \emph{assemblies}, or highly dense interconnected subsets of neurons, and various operations manipulating such assemblies, including reciprocal projection, merge, association, disassociation, and pattern completion, all in the basic-NEMO model. Using these operations, we show the Turing-completeness of the NEMO model (with some specific additional assumptions).
3. An algorithm for parsing a small but non-trivial subset of English and Russian (and more generally any regular language) in the NEMO model, with meta-features of the algorithm broadly in line with what is known about language in the brain.
4. An algorithm for parsing a much larger subset of English (and other languages), in particular handling dependent (embedded) clauses, in the NEMO model with some additional memory assumptions. We prove that an abstraction of this algorithm yields a new characterization of the context-free languages.
5. Algorithms for the blocks-world planning task, which involves outputting a sequence of steps to rearrange a stack of cubes from one order into a target order, in the NEMO model. A side consequence of this work is an algorithm for a chaining operation in basic-NEMO.
6. Algorithms for several of the most basic and initial steps in language acquisition in the baby brain. This includes an algorithm for the learning of the simplest, concrete nouns and action verbs (words like "cat" and "jump") from whole sentences in basic-NEMO with a novel representation of word and contextual inputs. Extending the same model, we present an algorithm for an elementary component of syntax, namely learning the word order of 2-constituent intransitive and 3-constituent transitive sentences. These algorithms are very broadly in line with what is known about language in the brain.
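As a rough illustration of the model's ingredients, the sketch below implements one projection-style step in the spirit of basic-NEMO: random connectivity, inhibition as a k-winners-take-all cap, and fully local Hebbian plasticity. It is a minimal single-area toy under assumed parameter values, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta = 1000, 40, 0.05, 0.10   # neurons, cap size, edge prob., plasticity rate (assumed)

W = (rng.random((n, n)) < p).astype(float)      # W[i, j]: synapse from neuron j to neuron i
active = rng.choice(n, size=k, replace=False)   # stimulus: an initial set of k firing neurons

for step in range(12):
    drive = W[:, active].sum(axis=1)            # total synaptic input to each neuron
    winners = np.argsort(drive)[-k:]            # inhibition as a dynamic k-winners-take-all cap
    W[np.ix_(winners, active)] *= 1 + beta      # local Hebbian potentiation: fired -> fired edges
    overlap = len(np.intersect1d(winners, active))
    print(f"step {step:2d}: overlap with previous winners = {overlap}/{k}")
    active = winners
```

With plasticity on, the winner set typically stabilizes within a few steps into a self-reinforcing set, the kind of convergent assembly the operations above build on.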
126. Neural Tabula Rasa: Foundations for Realistic Memories and Learning. Perrine, Patrick R (1 June 2023)
Understanding how neural systems perform memorization and inductive learning tasks is of key interest in the field of computational neuroscience. Inductive learning tasks are likewise the focus of machine learning, a field that has seen rapid growth and innovation utilizing feedforward neural networks. However, there have also been concerns regarding the precipitous nature of such efforts, specifically in the area of deep learning. As a result, we revisit the foundation of the artificial neural network to better incorporate current knowledge of the brain from computational neuroscience. More specifically, a random graph was chosen to model a neural system. This random graph structure was implemented along with an algorithm for storing information, allowing the network to create memories by forming subgraphs of the network. This implementation was derived from a proposed neural computation system, the Neural Tabula Rasa, by Leslie Valiant. Contributions of this work include a new approximation of memory size, several algorithms for implementing aspects of the Neural Tabula Rasa, and empirical evidence of the functional form for the memory capacity of the system. This thesis intends to benefit the foundations of learning systems, as the ability to form memories is required for a system to learn inductively.
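To make the subgraph-as-memory idea concrete, here is a minimal sketch of a JOIN-style operation on a random graph, in the spirit of Valiant's Neural Tabula Rasa: a new memory is the set of neurons receiving edges from both of two previously stored items. This is an illustrative toy with assumed parameters, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 5000, 0.002, 50        # neurons, edge probability, item size (all assumed)

adj = rng.random((n, n)) < p                        # adj[i, j]: directed edge i -> j
A = rng.choice(n, size=r, replace=False)            # previously stored item A
B = rng.choice(n, size=r, replace=False)            # previously stored item B

from_A = adj[A, :].sum(axis=0)                      # edges each neuron receives from A
from_B = adj[B, :].sum(axis=0)
C = np.flatnonzero((from_A >= 1) & (from_B >= 1))   # JOIN(A, B): the new memory subgraph
print(f"JOIN produced a memory of {len(C)} neurons (target size ~{r})")
```

With p chosen near 1/sqrt(r*n), the expected size of the new memory matches the item size r; making this kind of capacity calculation precise is exactly the sort of analysis the thesis pursues.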
127. Bayesian and information-theoretic tools for neuroscience. Endres, Dominik M. (2006)
The overarching purpose of the studies presented in this report is the exploration of the uses of information theory and Bayesian inference applied to neural codes. Two approaches were taken: first, starting from first principles, a coding mechanism is proposed and its results are compared to a biological neural code; second, tools from information theory are used to measure the information contained in a biological neural code.

Chapter 3: The REC model proposed by Harpur and Prager codes inputs into a sparse, factorial representation while maintaining reconstruction accuracy. Here I propose a modification of the REC model to determine the optimal network dimensionality. The resulting code for unfiltered natural images is accurate and highly sparse, and a large fraction of the code elements show localized features. Furthermore, I propose an activation algorithm for the network that is faster and more accurate than a gradient descent based activation method. Moreover, it is demonstrated that asymmetric noise promotes sparseness.

Chapter 4: A fast, exact alternative to Bayesian classification is introduced. Computational time is quadratic in both the number of observed data points and the number of degrees of freedom of the underlying model. As an example application, responses of single neurons from high-level visual cortex (area STSa) to rapid sequences of complex visual stimuli are analyzed.

Chapter 5: I present an exact Bayesian treatment of a simple, yet sufficiently general probability distribution model. The model complexity, the exact values of the expectations of entropies, and their variances can be computed with polynomial effort given the data. The expectation of the mutual information thus becomes available too, along with a strict upper bound on its variance. The resulting algorithm is first tested on artificial data; to that end, an information-theoretic similarity measure is derived. Second, the algorithm is demonstrated to be useful in neuroscience by studying the information content of the neural responses analyzed in the previous chapter. It is shown that the information throughput of STS neurons is maximized at stimulus durations of approximately 60 ms.
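The central quantity in Chapter 5, the posterior expectation of the entropy of a discrete distribution, has a well-known closed form under a Dirichlet prior (Wolpert and Wolf, 1995). The sketch below computes it for spike-count data; the counts and the choice of a symmetric prior are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.special import digamma

def expected_entropy(counts, alpha=1.0):
    """Posterior expectation of Shannon entropy (in nats) for a discrete
    distribution under a symmetric Dirichlet(alpha) prior, given observed
    counts n_i. Closed form: E[H] = psi(M + 1) - sum_i (m_i / M) psi(m_i + 1),
    where m_i = n_i + alpha and M = sum_i m_i."""
    m = np.asarray(counts, dtype=float) + alpha
    M = m.sum()
    return digamma(M + 1) - np.sum((m / M) * digamma(m + 1))

# Example: hypothetical response counts over 8 stimulus-evoked firing patterns
counts = [12, 7, 0, 3, 1, 0, 0, 2]
print(f"E[H | data] = {expected_entropy(counts):.3f} nats")
```

Because this is an exact posterior expectation rather than a plug-in estimate, it remains well behaved for the small sample sizes typical of single-neuron recordings.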
128. Timing cues for azimuthal sound source localization. Benichoux, Victor (25 November 2013)
Azimuthal sound localization in many animals relies on the processing of differences in the time of arrival of low-frequency sounds at the two ears: the interaural time differences (ITDs). It has been observed in some species that this cue depends on the spectrum of the signal emitted by the source, yet this variation is often discarded, as humans and animals are assumed to be insensitive to it. The purpose of this thesis is to assess this dependency using acoustical techniques, and to explore the consequences of this additional complexity for the neurophysiology and psychophysics of sound localization.

In the vicinity of a rigid sphere, a sound field is diffracted, leading to frequency-dependent wave propagation regimes. Therefore, when the head is modeled as a rigid sphere, the ITD for a given position is a frequency-dependent quantity. I show that this is indeed reflected in human ITDs by studying acoustical recordings from a large number of human and animal subjects. Furthermore, I explain the effect of this variation at two scales: locally in frequency, the ITD introduces different envelope and fine-structure delays in the signals reaching the ears; and the ITD for low-frequency sounds is generally larger than for high-frequency sounds coming from the same position.

In a second part, I introduce and discuss the current views on the binaural ITD-sensitive system in mammals. I show that the heterogeneous responses of such cells are well predicted when it is assumed that they are tuned to frequency-dependent ITDs, and I discuss how those cells can be made to be tuned to a particular position in space regardless of the frequency content of the stimulus. Overall, I argue that current data in mammals are consistent with the hypothesis that cells are tuned to a single position in space.

Finally, I explore the impact of the frequency dependence of ITD on human behavior using psychoacoustical techniques. Subjects are asked to match the lateral positions of sounds presented with different frequency content. The results suggest that humans perceive sounds with different frequency contents at the same position provided that they have different ITDs, as predicted from acoustical data; the extent to which this occurs is well predicted by a spherical model of the head.

Combining approaches from different fields, I show that the binaural system is remarkably adapted to the cues available in its environment. This processing strategy used by animals can be a great inspiration for the design of robotic systems.
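The sphere-induced frequency dependence has simple closed-form limits. For a rigid spherical head of radius a, the low-frequency ITD approaches 3(a/c) sin(theta), roughly 1.5 times the high-frequency (Woodworth) value (a/c)(theta + sin theta) (Kuhn, 1977). A quick sketch, with a typical head radius assumed:

```python
import numpy as np

a, c = 0.0875, 343.0   # head radius (m) and speed of sound (m/s); typical assumed values

def itd_low(theta):
    """Low-frequency limit for a rigid sphere: ITD ~ 3(a/c) sin(theta)."""
    return 3 * (a / c) * np.sin(theta)

def itd_high(theta):
    """High-frequency (Woodworth) limit: ITD ~ (a/c)(theta + sin(theta))."""
    return (a / c) * (theta + np.sin(theta))

for deg in (15, 45, 90):
    th = np.radians(deg)
    print(f"azimuth {deg:3d} deg: low-freq ITD = {itd_low(th)*1e6:6.0f} us, "
          f"high-freq ITD = {itd_high(th)*1e6:6.0f} us")
```

This reproduces the abstract's point that low-frequency sounds from a given position carry a larger ITD than high-frequency ones.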
129. Bayesian learning methods for modelling functional MRI. Groves, Adrian R. (2009)
Bayesian learning methods are the basis of many powerful analysis techniques in neuroimaging, permitting probabilistic inference on hierarchical, generative models of data. This thesis primarily develops Bayesian analysis techniques for magnetic resonance imaging (MRI), which is a noninvasive neuroimaging tool for probing function, perfusion, and structure in the human brain. The first part of this work fits nonlinear biophysical models to multimodal functional MRI data within a variational Bayes framework. Simultaneously-acquired multimodal data contains mixtures of different signals and therefore may have common noise sources, and a method for automatically modelling this correlation is developed. A Gaussian process prior is also used to allow spatial regularization while simultaneously applying informative priors on model parameters, restricting biophysically-interpretable parameters to reasonable values. The second part introduces a novel data fusion framework for multivariate data analysis which finds a joint decomposition of data across several modalities using a shared loading matrix. Each modality has its own generative model, including separate spatial maps, noise models and sparsity priors. This flexible approach can perform supervised learning by using target variables as a modality. By inferring the data decomposition and multivariate decoding simultaneously, the decoding targets indirectly influence the component shapes and help to preserve useful components. The same framework is used for unsupervised learning by placing independent component analysis (ICA) priors on the spatial maps. Linked ICA is a novel approach developed to jointly decompose multimodal data, and is applied to combined structural and diffusion images across groups of subjects. This allows some of the benefits of tensor ICA and spatially-concatenated ICA to be combined, and allows model comparison between different configurations. This joint decomposition framework is particularly flexible because of its separate generative models for each modality and could potentially improve modelling of functional MRI, magnetoencephalography, and other functional neuroimaging modalities.
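The shared-loading idea at the heart of the fusion framework can be caricatured in a few lines. The sketch below is only an SVD-based stand-in under simplified assumptions (Gaussian noise, no ICA or sparsity priors, no variational inference), not Linked ICA itself: two modalities are z-scored, concatenated along their feature dimension, and decomposed with a single subject-loading matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, v1, v2, n_comp = 30, 500, 200, 5    # subjects, features per modality, components (assumed)

# Synthetic ground truth: a shared subject-loading matrix drives both modalities
H = rng.normal(size=(n_subj, n_comp))
X1 = H @ rng.normal(size=(n_comp, v1)) + 0.1 * rng.normal(size=(n_subj, v1))
X2 = H @ rng.normal(size=(n_comp, v2)) + 0.1 * rng.normal(size=(n_subj, v2))

# Joint decomposition: one loading matrix for the z-scored, concatenated modalities
Z = np.hstack([(X - X.mean(0)) / X.std(0) for X in (X1, X2)])
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
H_est = U[:, :n_comp] * s[:n_comp]                  # estimated shared subject loadings
maps1, maps2 = Vt[:n_comp, :v1], Vt[:n_comp, v1:]   # per-modality spatial maps
print(H_est.shape, maps1.shape, maps2.shape)
```

What Linked ICA adds beyond this caricature is precisely what the abstract describes: separate generative models, noise models, and sparsity or ICA priors per modality, with the decomposition inferred variationally rather than by a single SVD.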
130. Stochastic population oscillators in ecology and neuroscience. Lai, Yi Ming (2012)
In this thesis we discuss the synchronization of stochastic population oscillators in ecology and neuroscience. Traditionally, the synchronization of oscillators has been studied in deterministic systems, with various modes of synchrony induced by coupling between the oscillators. However, recent developments have shown that an ensemble of uncoupled oscillators can be synchronized by a common noise source alone. By considering the effects of noise-induced synchronization on biological oscillators, we are able to explain various biological phenomena in ecological and neurobiological contexts, most importantly the long-observed Moran effect. Our formulation of the systems as limit-cycle oscillators arising from populations of individuals, each with a random element to its behaviour, also allows us to examine the interaction between an external noise source and this intrinsic stochasticity. This provides possible explanations as to why large-amplitude cycles may not be observed in ecological systems in the wild. In neural population oscillators, we observe not just synchronization but also clustering in some parameter regimes. Finally, we extend our methods to include coupling in our models. In particular, we examine the competing effects of dispersal and extrinsic noise on the synchronization of a pair of Rosenzweig-MacArthur predator-prey systems. We discover that common environmental noise will ultimately synchronize the oscillators, but that the approach to synchrony depends on whether or not dispersal in the absence of noise supports any stable asynchronous states. We also show how the combination of correlated (shared) and uncorrelated (unshared) noise with dispersal can lead to a multistable steady-state probability density. Similar analysis on a coupled system of neural oscillators would be an interesting project for future work, which, among other future directions of research, is discussed in the concluding section of this thesis.
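A minimal numerical illustration of the Moran effect described above: two uncoupled Rosenzweig-MacArthur predator-prey patches, started out of phase and driven by the same environmental noise on the prey's growth. This Euler-Maruyama sketch uses assumed parameter values (chosen so the deterministic system has a limit cycle) and is not the thesis's phase-reduction analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
K, e, m, sigma = 3.0, 0.5, 0.2, 0.05     # carrying capacity, conversion, mortality, noise (assumed)
dt, steps = 0.01, 200_000

def drift(N, P):
    f = N / (1 + N)                       # Holling type-II functional response
    return N * (1 - N / K) - f * P, e * f * P - m * P

# Two uncoupled patches started out of phase, driven by the SAME noise realization
N = np.array([0.5, 1.5]); P = np.array([0.3, 0.8])
for _ in range(steps):
    dW = np.sqrt(dt) * rng.normal()       # common environmental fluctuation, shared by both patches
    dN, dP = drift(N, P)
    N = np.maximum(N + dN * dt + sigma * N * dW, 1e-9)
    P = np.maximum(P + dP * dt, 1e-9)

print(f"final prey densities: {N[0]:.3f} vs {N[1]:.3f} (common noise pulls the patches together)")
```

For weak common noise the two patches' phases drift together over long runs, which is one route to the Moran effect; the interaction of this shared forcing with each patch's intrinsic stochasticity is what the thesis analyzes in detail.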