  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Neural encoding by bursts of spikes

Elijah, Daniel January 2014 (has links)
Neurons can respond to input by firing isolated action potentials, or spikes. Sequences of spikes have been linked to the encoding of neuron input. However, many neurons also fire bursts: mechanistically distinct responses consisting of brief high-frequency spike firing. Bursts form separate response symbols but historically have not been thought to encode input. However, recent experimental evidence suggests that bursts can encode input in parallel with tonic spikes. The recognition of bursts as distinct encoding symbols raises important questions, which form the basic aims of this thesis: (1) What inputs do bursts encode? (2) Does burst structure provide extra information about different inputs? (3) Is burst coding robust against noise, an inherent property of all neural systems? (4) What mechanisms are responsible for burst input encoding? (5) How does burst coding manifest in in-vivo neurons? To answer these questions, bursting is studied using a combination of neuron models and in-vivo hippocampal neuron recordings. Models range from neuron-specific cell models to models belonging to three fundamentally different burst dynamic classes (unspecific to any neural region); these classes are defined using concepts from non-linear systems theory. Together, analysing these model types alongside in-vivo recordings provides both a specific and a general analysis of burst encoding. For neuron-specific and unspecific models, a number of model types expressing different levels of biological realism are analysed. For the study of thalamic encoding, two models containing either a single simplified burst-generating current or multiple currents are used. For models simulating the three burst dynamic classes, three further models of different biological complexity are used. The bursts generated by models and real neurons were analysed by assessing the input they encode, using methods such as information theory and reverse correlation.
Modelled bursts were also analysed for their resilience to simulated neural noise. In all cases, inputs evoking bursts and tonic spikes were distinct. The structure of burst-evoking input depended on burst dynamic class rather than on the biological complexity of the models. Different n-spike bursts encoded different inputs that, if read by downstream cells, could discriminate complex input structure. In the thalamus, this n-spike burst code explains informative responses that were not due to tonic spikes. In-vivo hippocampal neurons and a pyramidal cell model both use the n-spike code to mark different LFP features. This n-spike burst code may therefore be a general feature of bursting, relevant to both model and in-vivo neurons. Bursts can also encode input corrupted by neural noise, often outperforming the encoding of single spikes. Both burst timing and internal structure are informative even when driven by strongly noise-corrupted input. Bursts also induce input-dependent spike correlations that remain informative despite strong added noise. As a result, bursts endow their constituent spikes with extra information that would be lost if tonic spikes were considered the only informative responses.
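The n-spike burst code described above can be made concrete with a toy calculation: if each stimulus class biases the burst length, the mutual information between stimulus identity and burst symbol quantifies what a downstream reader could decode. The tuning table below is invented for illustration and is not taken from the thesis; a minimal sketch in Python:

```python
import numpy as np

def mutual_information(joint_counts):
    """Mutual information (bits) from a joint count table (stimulus x response)."""
    p = joint_counts / joint_counts.sum()
    px = p.sum(axis=1, keepdims=True)   # stimulus marginal
    py = p.sum(axis=0, keepdims=True)   # response marginal
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
# Hypothetical tuning: each stimulus class biases the burst length (1-3 spikes).
tuning = np.array([[0.80, 0.15, 0.05],   # stimulus 0 -> mostly single spikes
                   [0.10, 0.80, 0.10],   # stimulus 1 -> mostly 2-spike bursts
                   [0.05, 0.15, 0.80]])  # stimulus 2 -> mostly 3-spike bursts
stimuli = rng.integers(0, 3, size=5000)
responses = np.array([rng.choice(3, p=tuning[s]) for s in stimuli])

joint = np.zeros((3, 3))
np.add.at(joint, (stimuli, responses), 1)
print(f"I(stimulus; n-spike symbol) = {mutual_information(joint):.2f} bits")
```

With sharper tuning rows the information approaches its ceiling of log2(3) bits; a reader that collapsed all bursts into a single "spike" symbol would lose most of it.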
182

Neural engineering: modeling bioelectric activities from neuromuscular system with its applications. / CUHK electronic theses & dissertations collection

January 2004 (has links)
Ma Ting. / "July 2004." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (p. 181-196). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
183

Traiter le cerveau avec les neurosciences : théorie de champ-moyen, effets de taille finie et capacité de codage des réseaux de neurones stochastiques / Attacking the brain with neuroscience : mean-field theory, finite size effects and encoding capability of stochastic neural networks

Fasoli, Diego 25 September 2013 (has links)
This work was carried out within the European FACETS-ITN project, in the field of Computational Neuroscience. Its aim is to improve the understanding of finite-size stochastic neural networks with correlated sources of randomness and biologically realistic connectivity matrices, by analysing the correlation matrix of the network and quantifying the encoding capability of the system in terms of its Fisher information. The brain is the most complex system in the known universe. Its nested structure with small-world properties determines its function and behavior. The analysis of its structure requires sophisticated mathematical and statistical techniques. In this thesis we shed new light on neural networks, attacking the problem from different points of view, in the spirit of the Theory of Complexity and in terms of their information processing capabilities. In particular, we quantify the Fisher information of the system, which is a measure of its encoding capability. The first technique developed in this work is the mean-field theory of rate and FitzHugh-Nagumo networks without correlations in the thermodynamic limit, developed through both mathematical and numerical analysis. The second technique, Mayer's cluster expansion, is taken from plasma physics; it allows us to determine numerically the finite-size effects of rate neurons, as well as the relationship of the Fisher information to the size of the network for independent Brownian motions. The third technique is a perturbative expansion, which allows us to determine the correlation structure of the rate network for a variety of different types of connectivity matrices and for different values of the correlation between the sources of randomness in the system. With this method we can also quantify the Fisher information numerically, not only as a function of the network size but also for different correlation structures of the system. The fourth technique is a slightly different type of perturbative expansion, with which we can study the behavior of completely generic connectivity matrices with random topologies. Moreover, this method provides an analytic formula for the Fisher information, in qualitative agreement with the other results in this thesis. Finally, the fifth technique is purely numerical: it uses an Expectation-Maximization algorithm and Monte Carlo integration to evaluate the Fisher information of the FitzHugh-Nagumo network. In summary, this thesis provides an analysis of the dynamics and the correlation structure of neural networks, confirms it through numerical simulation, and makes two key counterintuitive predictions. The first is the formation of perfect correlation between neurons for particular values of the system parameters, a phenomenon we term stochastic synchronization. The second, somewhat contrary to received opinion, is the explosion of the Fisher information, and therefore of the encoding capability of the network, for highly correlated neurons.
The techniques developed in this thesis could also be used for a complete quantification of the information processing capabilities of the network in terms of information storage, transmission and modification, but this remains future work.
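The "explosion of the Fisher information" for highly correlated neurons can be illustrated with the standard linear Fisher information of a Gaussian population code, J = f'^T C^{-1} f'. The two-neuron tuning slopes and noise covariances below are hypothetical; the sketch shows only the qualitative effect, not the network calculation of the thesis:

```python
import numpy as np

def linear_fisher_information(df, cov):
    """J = f'(theta)^T C^{-1} f'(theta) for a Gaussian population code."""
    return float(df @ np.linalg.solve(cov, df))

df = np.array([1.0, 0.5])   # hypothetical tuning-curve slopes of two neurons
for c in (0.0, 0.5, 0.9, 0.99):
    cov = np.array([[1.0, c], [c, 1.0]])
    j = linear_fisher_information(df, cov)
    print(f"noise correlation {c:4.2f} -> Fisher information {j:7.2f}")
```

Because the tuning slopes differ, the informative signal lies partly along the direction in which strongly correlated noise shrinks, so J grows without bound as the correlation approaches 1.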
184

Processamento de informação em redes neurais sensoriais / Information processing in sensory neural networks

Thiago Schiavo Mosqueiro 26 August 2015 (has links)
With the advances in digital and analog electronics over the last 50 years, neuroscience gained great momentum, and one of its best-funded sub-areas was born: computational neuroscience. Studies in this area, still considered recent by many, range from ionic balance at the molecular level (a scale of a few nanometers) up to how neural populations influence the behavior of large mammals (a scale of meters). The core of computational neuroscience relies on inter- and multidisciplinary techniques involving systems biology, biochemistry, mathematical modeling, thermodynamics, statistical physics, and more. Its impact in areas of great current interest, such as pharmaceutical drug development and military devices, is its major driving force. For the latter in particular, a deep understanding of the neural code, and of how sensory information is processed by neural populations, is essential. And we are still only scratching the surface of many of the complex sensory systems we know. One example is a sense that seems to exist in even the most primitive forms of life: olfaction. In mammals, the number of studies in this area grows steadily, yet we are still far from a consensus on how many of its basic mechanisms work. There is a large literature of biochemical and behavioral studies, but no single model yet brings all this information together and reproduces any olfactory system completely. In this thesis, I discuss sensory systems in two parts, following a line of argument centred on olfaction. In the first part, a formal model resembling the (mammalian) olfactory bulb is used to investigate the relationship between neural coding performance and the existence of critical dynamics. In particular, I discuss recent experiments that take observations of power laws as evidence of criticality and optimal performance in neural populations. I show that, while network performance is indeed linked to the critical point of the system, the existence of power laws is tied neither to that critical point nor to optimal performance. Recent experiments corroborate this observation. In the second part, I discuss and propose a first dynamical model of the central organ of olfactory learning in insects: the Mushroom Bodies. The novelty of this model lies in its temporal integration, and in performing both pattern recognition (which odor) and odor concentration estimation at the same time. With this model, I propose an explanation for a recent and seemingly paradoxical observation of neural anticipation in the Mushroom Bodies, in which the last neural layer appears to anticipate the input layer. I propose the existence of a balance between the speed of the neural code and the accuracy of pattern recognition, grounded in the fundamental morphological structure of the Mushroom Bodies; this balance can be tested empirically. I also propose the existence of a robust gain-control mechanism in the Mushroom Bodies that sustains the key ingredients for pattern recognition and learning. Both parts contribute to understanding how sensory systems operate and which fundamental mechanisms give them their remarkable performance.
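The criticality framework questioned above is often cast in terms of branching processes: at branching ratio 1 (the critical point) neuronal avalanche sizes become heavy-tailed, while subcritical dynamics produce only small avalanches. The sketch below simulates a Galton-Watson branching process with Poisson offspring; it illustrates the textbook critical-dynamics picture, not the olfactory bulb model of the thesis:

```python
import numpy as np

def avalanche_size(sigma, rng, cap=10_000):
    """Total spikes in one avalanche of a branching process with ratio sigma."""
    active, total = 1, 1
    while active and total < cap:
        active = rng.poisson(sigma * active)   # offspring of all active units
        total += int(active)
    return min(total, cap)

rng = np.random.default_rng(1)
sub = [avalanche_size(0.7, rng) for _ in range(2000)]    # subcritical
crit = [avalanche_size(1.0, rng) for _ in range(2000)]   # critical
print(f"mean size  subcritical: {np.mean(sub):.1f}   critical: {np.mean(crit):.1f}")
print(f"max size   subcritical: {max(sub)}   critical: {max(crit)}")
```

The subcritical mean stays near 1/(1 - sigma), whereas critical avalanches occasionally span the whole cap; note that observing such a heavy tail alone does not certify criticality, which is precisely the caveat the thesis raises.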
185

Pattern formation in a neural field model : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics at Massey University, Auckland, New Zealand

Elvin, Amanda Jane January 2008 (has links)
In this thesis I study the effects of gap junctions on pattern formation in a neural field model for working memory. I review known results for the base model (the “Amari model”), then examine how the results change for the “gap junction model”. I find steady states of both models analytically and numerically, using lateral inhibition with a step firing rate function, and a decaying oscillatory coupling function with a smooth firing rate function. Steady states are homoclinic orbits to the fixed point at the origin. I also use a method of piecewise construction of solutions, deriving an ordinary differential equation from the partial integro-differential formulation of the model. Solutions are found numerically using AUTO and my own continuation code in MATLAB. For an appropriate threshold, as the firing rate function steepens, the solution curve becomes discontinuous and stable homoclinic orbits no longer exist in a region of parameter space. These results have not been described previously in the literature. Taking a phase space approach, the Amari model is written as a four-dimensional, reversible Hamiltonian system. I develop a numerical technique for finding both symmetric and asymmetric homoclinic orbits. I discover a small separate solution curve that causes the main curve to break as the firing rate function steepens, and show there is a global bifurcation. The small curve and the global bifurcation have not been reported previously in the literature. Through the use of travelling fronts and the construction of an Evans function, I show the existence of stable heteroclinic orbits. I also find asymmetric steady state solutions using other numerical techniques. Various methods of determining the stability of solutions are presented, including a method of eigenvalue analysis that I develop. I then find both stable and transient Turing structures in one and two spatial dimensions, as well as a Type-I intermittency.
To my knowledge, this is the first time transient Turing structures have been found in a neural field model. In the Appendix, I outline numerical integration schemes, the pseudo-arclength continuation method, and introduce the software package AUTO used throughout the thesis.
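The Amari model studied above supports localized "bump" steady states under lateral inhibition with a step firing rate. The sketch below integrates a 1D Amari field, du/dt = -u + w * H(u - theta), on a periodic domain with an assumed difference-of-Gaussians kernel; all parameter values are invented for illustration and are not those of the thesis. A transient cue relaxes to a persistent bump, the working-memory state:

```python
import numpy as np

n, length = 256, 20.0
x = np.linspace(-length / 2, length / 2, n, endpoint=False)
dx = length / n
w = 2.0 * np.exp(-x**2) - np.exp(-x**2 / 4.0)    # local excitation, broad inhibition
w_hat = np.fft.fft(np.fft.ifftshift(w))          # kernel centred for circular conv.

theta, dt = 0.3, 0.05
u = 1.5 * np.exp(-x**2)                          # transient cue at the centre

for _ in range(400):                             # Euler steps to a steady state
    fired = (u > theta).astype(float)            # Heaviside firing rate
    conv = dx * np.real(np.fft.ifft(w_hat * np.fft.fft(fired)))
    u += dt * (-u + conv)

bump_width = dx * float((u > theta).sum())
print(f"width of the memory bump: {bump_width:.2f}")
```

For these parameters Amari's construction predicts a stable bump whose half-width a solves the threshold condition on the integrated kernel; the simulated width settles near that value rather than dying out or invading the whole domain.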
186

Aspects of memory and representation in cortical computation

Rehn, Martin January 2006 (has links)
In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented.
It is demonstrated that this network model, when trained on natural images, closely reproduces the simple-cell receptive field shapes seen in the primary visual cortex, and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence-learning network. I demonstrate how an abstract attractor memory system may be realised at the microcircuit level, and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
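The attractor-memory notion that runs through this abstract can be illustrated with the classic Hopfield autoassociative network: Hebbian outer-product weights, and recall of a stored pattern from a corrupted cue. This is a generic textbook sketch, not the cortical microcircuit model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 5                                   # 200 units, 5 stored patterns
patterns = rng.choice([-1, 1], size=(m, n))

weights = (patterns.T @ patterns).astype(float) / n   # Hebbian outer products
np.fill_diagonal(weights, 0.0)                        # no self-connections

cue = patterns[0].copy()
flipped = rng.choice(n, size=n // 5, replace=False)   # corrupt 20% of the bits
cue[flipped] *= -1

state = cue
for _ in range(10):                                   # synchronous recall updates
    state = np.where(weights @ state >= 0, 1, -1)

overlap = float(state @ patterns[0]) / n
print(f"overlap with the stored pattern: {overlap:.2f}")
```

At this low memory load (5 patterns for 200 units) the corrupted cue falls inside the basin of attraction of the stored pattern, so recall drives the overlap from 0.6 back toward 1.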
187

Learning in silicon: a floating-gate based, biophysically inspired, neuromorphic hardware system with synaptic plasticity

Brink, Stephen Isaac 24 August 2012 (has links)
The goal of neuromorphic engineering is to create electronic systems that model the behavior of biological neural systems. Neuromorphic systems can leverage a combination of analog and digital circuit design techniques to enable computational modeling, with orders-of-magnitude reductions in size, weight, and power consumption compared to the traditional modeling approach based upon numerical integration. These benefits of neuromorphic modeling have the potential to facilitate neural modeling in resource-constrained research environments. Moreover, they will make it practical to use neural computation in the design of intelligent machines, including portable, battery-powered, and energy-harvesting applications. Floating-gate transistor technology is a powerful tool for neuromorphic engineering because it allows dense implementation of synapses with nonvolatile storage of synaptic weights, cancellation of process mismatch, and reconfigurable system design. A novel neuromorphic hardware system, featuring compact and efficient channel-based model neurons and floating-gate transistor synapses, was developed. This system was used to model a variety of network topologies with up to 100 neurons. The networks were shown to possess computational capabilities such as spatio-temporal pattern generation and recognition, winner-take-all competition, bistable activity implementing a "volatile memory", and wavefront-based robotic path planning. Some canonical features of synaptic plasticity, such as potentiation of high-frequency inputs and potentiation of correlated inputs in the presence of uncorrelated noise, were demonstrated. Preliminary results regarding the formation of receptive fields were obtained. Several advances in enabling technologies were made, including methods for floating-gate transistor array programming and the creation of a reconfigurable system for studying adaptation in floating-gate transistor circuits.
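Winner-take-all competition, one of the network behaviors demonstrated above, can be sketched in software with rate units competing through shared inhibition. All parameters below are invented for illustration, and nothing here models the floating-gate hardware itself:

```python
import numpy as np

def winner_take_all(inputs, w_inh=2.0, dt=0.01, steps=2000):
    """Rate units competing through mutual inhibition; strongest input wins."""
    x = np.zeros(len(inputs))
    for _ in range(steps):
        inhibition = w_inh * (x.sum() - x)          # inhibition from all rivals
        drive = np.maximum(inputs - inhibition, 0)  # rectified net input
        x = x + dt * (-x + drive)                   # leaky rate dynamics
    return x

rates = winner_take_all(np.array([0.9, 1.0, 0.8]))
print("steady-state rates:", np.round(rates, 3))
```

With inhibition stronger than unity, small differences in drive are amplified until only the unit with the largest input remains active; the losers are pushed below threshold and decay to zero.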
188

From dynamics to computations in recurrent neural networks / Dynamique et traitement d’information dans les réseaux neuronaux récurrents

Mastrogiuseppe, Francesca 04 December 2017 (has links)
The mammalian cortex consists of large and intricate networks of spiking neurons. The task of these complex recurrent assemblies is to encode and process, with high precision, the sensory information flowing in from the external environment. Perhaps surprisingly, electrophysiological recordings from behaving animals have revealed a high degree of irregularity in cortical activity. The patterns of spikes and the average firing rates change dramatically across trials, even when the experimental conditions and the encoded sensory stimuli are carefully kept fixed.
One current hypothesis suggests that a substantial fraction of that variability emerges intrinsically from the recurrent circuitry, as has been observed in network models of strongly interconnected units. In particular, a classical study [Sompolinsky et al, 1988] showed that networks of randomly coupled rate units can exhibit a transition from a fixed point, where the network is silent, to chaotic activity, where firing rates fluctuate in time and across units. That analysis left a large number of questions open: can fluctuating activity be observed in realistic cortical architectures? How does variability depend on the biophysical parameters and time scales? How can reliable information transmission and manipulation be implemented with such a noisy code?
In this thesis, we study the spontaneous dynamics and the computational properties of realistic models of large neural circuits which intrinsically produce highly variable and heterogeneous activity. The mathematical tools of our analysis are inherited from dynamical systems and random matrix theory, combined with the mean-field statistical approaches developed for the study of physical disordered systems.
In the first part of the dissertation, we study how strong rate irregularities can emerge in random networks of rate units which obey some of the biophysical constraints that real cortical neurons are subject to. In the second and third parts, we investigate how variability is characterized in partially structured models which can support simple computations such as pattern generation and decision making. To this aim, inspired by recent advances in network training techniques, we examine how random connectivity and low-dimensional structure interact in the non-linear network dynamics. The network models that we derive naturally capture a ubiquitous experimental observation: population dynamics is low-dimensional, while neural representations are irregular, high-dimensional and mixed.
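The [Sompolinsky et al, 1988] transition described above is easy to reproduce numerically: a random rate network x' = -x + J tanh(x), with couplings J_ij drawn i.i.d. with variance g^2/N, decays to the silent fixed point for g < 1 and settles into fluctuating, heterogeneous activity for g > 1. A minimal sketch, with network size and integration parameters chosen arbitrarily:

```python
import numpy as np

def asymptotic_rate_std(g, n=300, steps=4000, dt=0.05, seed=3):
    """Integrate x' = -x + J tanh(x), J_ij ~ N(0, g^2/n); return spread of rates."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(n), size=(n, n))
    x = rng.normal(0.0, 1.0, size=n)     # random initial condition
    for _ in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))
    return float(np.std(np.tanh(x)))

for g in (0.5, 1.5):
    print(f"g = {g}: spread of firing rates after transient = {asymptotic_rate_std(g):.3f}")
```

Below the transition the spread collapses to zero (the silent fixed point); above it the rates remain broadly distributed across units, the finite-size counterpart of the chaotic mean-field solution.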
189

Apprentissage actif sous contrainte de budget en robotique et en neurosciences computationnelles. Localisation robotique et modélisation comportementale en environnement non stationnaire / Active learning under budget constraint in robotics and computational neuroscience. Robotic localization and behavioral modeling in non-stationary environment

Aklil, Nassim 27 September 2017 (has links)
Decision-making is a highly researched field in science, be it in neuroscience, to understand the processes underlying animal decision-making, or in robotics, to model efficient and rapid decision-making processes in real environments. In neuroscience, this problem is addressed online with sequential decision-making models based on reinforcement learning. In robotics, the primary objective is efficiency, so that systems can be deployed in real environments. However, what can be called the budget in robotics, that is, the limitations inherent to the hardware (computation time, the limited set of actions available to the robot, the lifetime of its battery), is currently often not taken into account. In this thesis we propose to introduce the notion of budget as an explicit constraint in robotic learning processes applied to a localization task, by implementing a model based on work developed in statistical learning that processes data under budget constraints, limiting the input of data or imposing a more explicit time constraint. In order to move towards an online version of this type of budgeted learning algorithm, we also discuss some possible inspirations that could be drawn from computational neuroscience. In this context, the alternation between gathering information for localization and deciding to move may be indirectly linked to the exploration-exploitation trade-off. We present our contribution to the modeling of this trade-off in animals, in a non-stationary task involving different levels of uncertainty, and make the link with multi-armed bandit methods.
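The multi-armed bandit methods mentioned above can be illustrated with UCB1, a standard index policy that balances exploration and exploitation through an optimism bonus. The Bernoulli arm means below are invented for the example and do not come from the thesis:

```python
import math
import random

def ucb1(arm_means, horizon=20_000, seed=4):
    """UCB1: after one pull of each arm, play argmax of mean + sqrt(2 ln t / n)."""
    rng = random.Random(seed)
    k = len(arm_means)
    pulls, estimates, total = [0] * k, [0.0] * k, 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                  # initialisation: try every arm once
        else:
            arm = max(range(k),
                      key=lambda a: estimates[a]
                      + math.sqrt(2 * math.log(t) / pulls[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        pulls[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running mean
        total += reward
    return pulls, total / horizon

pulls, avg_reward = ucb1([0.2, 0.5, 0.8])
print(f"pulls per arm: {pulls}, average reward: {avg_reward:.3f}")
```

The exploration bonus shrinks as an arm is sampled, so suboptimal arms receive only logarithmically many pulls over the horizon; this is the budget-aware flavour of exploration the thesis connects to animal behaviour.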
190

Systèmes neuromorphiques temps réel : contribution à l’intégration de réseaux de neurones biologiquement réalistes avec fonctions de plasticité

Belhadj-Mohamed, Bilel 22 July 2010 (has links)
This work was supported by the European FACETS project. Within this project, we contribute to developing mixed-signal hardware devices for the real-time simulation of spiking neural networks. These devices may potentially contribute to an improved understanding of learning phenomena in the neocortex. Neuron behaviours are reproduced using analog integrated circuits, previously designed by the team, which implement Hodgkin-Huxley-based models. The contribution of this thesis is the design and realisation of the digital architecture that connects many neuron circuits together to form a network.
The inter-neuron connections are reconfigurable and can be governed by a user-configured plasticity model. The architecture is mapped onto a commercial programmable circuit (FPGA). Several computation and communication methods are developed to optimize the utilisation of hardware resources and to meet the real-time constraints imposed by the desired degree of realism, even for large networks. In particular, a token-passing communication protocol has been designed and implemented to guarantee the real-time aspects of the dialogue between several FPGAs in a multi-board system, allowing the integration of a large number of neurons. The overall system is able to run neural simulations in biological real time with a high degree of realism, and can therefore be used by partner laboratories, neurobiologists and computer scientists alike, to carry out neural experiments.
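The token-passing idea described above can be caricatured in a few lines: holding the token grants the exclusive right to transmit, which is what guarantees collision-free dialogue between boards. This sketch abstracts away all timing and hardware detail (the board queues and event names are invented):

```python
from collections import deque

def token_ring(event_queues, slots):
    """Each time slot, the board holding the token sends one pending event
    (if any) and passes the token on; no two boards ever send at once."""
    boards = [deque(q) for q in event_queues]
    token, sent = 0, []
    for _ in range(slots):
        if boards[token]:
            sent.append((token, boards[token].popleft()))
        token = (token + 1) % len(boards)     # pass the token to the next board
    return sent

log = token_ring([["a1", "a2"], ["b1"], ["c1", "c2", "c3"]], slots=9)
print(log)
```

Because only the token holder transmits, per-board event order is preserved and bus arbitration is deterministic; real-time behaviour then reduces to bounding the token's rotation time against the spike-event deadlines.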
