  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A statistical mechanics approach to the modelling and analysis of place-cell activity / Activité de cellules de lieu de l'hippocampe : modélisation et analyse par des méthodes de physique statistique

Rosay, Sophie, 07 October 2014
Place cells in the hippocampus are neurons with intriguing properties, such as the correlation between their activity and the animal's position in space. It is generally believed that these properties can, for the most part, be understood through the collective behaviour of models of interacting simplified neurons. Statistical mechanics provides tools for studying these collective behaviours, both analytically and numerically. Here, we address how these tools can be used to understand place-cell activity within the attractor neural network paradigm, a theoretical hypothesis on the nature of memory. We first propose a model for place cells in which the formation of a localized bump of activity is accounted for by attractor dynamics. Several aspects of the collective properties of this model are studied; thanks to the simplicity of the model, they can be understood in great detail. The phase diagram of the model is computed and discussed in relation to previous work on attractor neural networks. The dynamical evolution of the system displays particularly rich patterns. The second part of this thesis deals with decoding place-cell activity and the implications of the attractor hypothesis for this problem. We compare several decoding methods and their results on the processing of experimental recordings of place cells in a freely behaving rat.
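The bump formation that this abstract attributes to attractor dynamics can be illustrated with a toy simulation. Below is a minimal sketch, not the model studied in the thesis: a one-dimensional ring network with distance-dependent excitation and uniform inhibition, relaxed under linear-threshold rate dynamics. All parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal illustration (not the thesis model): a ring attractor network.
# Cells have preferred positions on a ring; local excitation plus uniform
# inhibition lets the rate dynamics sustain a localized bump of activity.
# All parameter values below are illustrative assumptions.
N = 100
theta = 2 * np.pi * np.arange(N) / N               # preferred positions
J = np.cos(theta[:, None] - theta[None, :]) - 0.5  # excitation - inhibition

rng = np.random.default_rng(0)
# Seed the network with a weak, noisy bump centred near 1 rad
r = np.maximum(np.cos(theta - 1.0), 0.0) + 0.1 * rng.random(N)

for _ in range(200):                     # relax under the rate dynamics
    r = np.maximum(J @ r / N, 0.0)       # linear-threshold update
    if r.max() > 0:
        r /= r.max()                     # keep rates bounded

# The steady state is a localized bump: high activity near the seeded
# position, zero activity on the opposite side of the ring.
peak = int(np.argmax(r))
```

The cosine connectivity makes the bump's position a continuous degree of freedom, which is the defining property of a (ring) attractor: any position on the ring is a stable state.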
2

Inference and modeling of biological networks : a statistical-physics approach to neural attractors and protein fitness landscapes / Inférence et modélisation de réseaux biologiques par la physique statistique : des attracteurs neuronaux au paysage de fitness des protéines

Posani, Lorenzo, 07 December 2018
The recent advent of high-throughput experimental procedures has opened a new era for the quantitative study of biological systems. Today, electrophysiology recordings and calcium imaging allow for the simultaneous in vivo recording of hundreds to thousands of neurons. In parallel, thanks to automated sequencing procedures, the libraries of known functional proteins have expanded from thousands to millions in just a few years. This abundance of biological data opens a new series of challenges for theoreticians. Accurate and transparent analysis methods are needed to process this massive amount of raw data into meaningful observables. Concurrently, the simultaneous observation of a large number of interacting units enables the development and validation of theoretical models aimed at a mechanistic understanding of the collective behaviour of biological systems. In this manuscript, we propose an approach to both of these challenges based on methods and models from statistical physics. We present applications of these methods to problems from neuroscience and bioinformatics, focusing on (1) spatial memory and navigation in the hippocampal loop and (2) the reconstruction of the fitness landscape of proteins from homologous sequence data.
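The inference side of this programme can be illustrated with a standard statistical-physics textbook method (not necessarily the one developed in the thesis): naive mean-field inference, which estimates Ising couplings from measured pairwise correlations as J ≈ −C⁻¹ off the diagonal. The 4-spin toy model and its single planted coupling are assumptions chosen so the statistics can be computed exactly.

```python
import itertools
import numpy as np

# Hedged sketch (a generic method, not the thesis pipeline): naive
# mean-field inference of Ising couplings from correlations.
# Toy model: 4 spins, one planted interaction between spins 0 and 1.
N = 4
J = np.zeros((N, N))
J[0, 1] = J[1, 0] = 0.5

# Exact Boltzmann statistics by enumerating all 2^N configurations
states = np.array(list(itertools.product([-1, 1], repeat=N)))
energies = -0.5 * np.einsum('si,ij,sj->s', states, J, states)
p = np.exp(-energies)
p /= p.sum()

m = p @ states                                           # magnetizations
C = np.einsum('s,si,sj->ij', p, states, states) - np.outer(m, m)

# Naive mean-field estimate: couplings are minus the inverse correlations
J_hat = -np.linalg.inv(C)
np.fill_diagonal(J_hat, 0.0)
```

The estimate recovers the planted coupling (with the systematic overestimate typical of the naive mean-field approximation) and assigns essentially zero to the absent pairs.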
3

Padrões estruturados e campo aleatório em redes complexas / Structured patterns and random fields in complex networks

Doria, Felipe França, January 2016
This work focuses on the study of two complex networks. The first is a random-field Ising model in which the random field follows either a Gaussian or a bimodal distribution. A finite-connectivity technique was used to solve it, and a Monte Carlo method was applied to verify the results. Our results indicate that for the Gaussian distribution the phase transition is always second order. For the bimodal distribution there is a tricritical point that depends on the value of the connectivity; below a certain minimum connectivity, only a second-order transition exists. The second is a metric attractor neural network. More precisely, we study the ability of this model to store structured patterns. In particular, the chosen patterns were taken from fingerprints, which present some local features. Our results show that the lower the activity of the fingerprint patterns, the higher the load ratio and retrieval quality. A theoretical framework was also developed as a function of five parameters: the load ratio, the connectivity, the density degree of the network, the randomness ratio, and the spatial pattern correlation.
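The Monte Carlo verification mentioned in this abstract can be sketched with a standard Metropolis simulation of a random-field Ising model with a bimodal field. This is an illustration of the generic method, not the thesis' finite-connectivity calculation; the lattice geometry, size, temperature, and field strength are all assumptions chosen for a quick demonstration.

```python
import numpy as np

# Illustrative sketch: Metropolis Monte Carlo for a 2-D random-field
# Ising model with a bimodal field h_i = +/- h0. Lattice size, temperature
# and field strength are assumptions, not values from the thesis.
rng = np.random.default_rng(1)
L, T, h0 = 8, 0.5, 0.3
s = rng.choice([-1, 1], size=(L, L))       # random initial spins
h = h0 * rng.choice([-1, 1], size=(L, L))  # quenched bimodal random field

def energy(s):
    # Nearest-neighbour ferromagnetic bonds (periodic) plus the field term
    bonds = s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))
    return -bonds.sum() - (h * s).sum()

e0 = energy(s)
for _ in range(200):                       # Metropolis sweeps
    for i in range(L):
        for j in range(L):
            nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * (nn + h[i, j])  # cost of flipping s[i, j]
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
e1 = energy(s)
```

At this low temperature the sweeps drive the random initial configuration toward a low-energy, largely ordered state; sweeping the temperature and field strength is how one would map out the transition numerically.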
4

Attractor Neural Network modelling of the Lifespan Retrieval Curve

Pereira, Patrícia, January 2020
Human capability to recall episodic memories depends on how much time has passed since the memory was encoded. This dependency is described by a memory retrieval curve that reflects an interesting phenomenon known as the reminiscence bump: a tendency for older people to recall more memories formed during their young adulthood than in other periods of life. This phenomenon can be modelled with an attractor neural network, for example the firing-rate Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. In this work, the mechanisms underlying the reminiscence bump in the neural network model are systematically studied, investigating the effects of synaptic plasticity, network architecture, and other relevant parameters on the characteristics of the bump. The most influential factors turn out to be the magnitude of dopamine-linked plasticity at birth and the time constant of the exponential plasticity decay with age, which together set the position of the bump. The other parameters mainly influence the overall amplitude of the lifespan retrieval curve. Furthermore, the recency phenomenon, i.e. the tendency to remember the most recent memories, can also be parameterized by adding a constant to the exponentially decaying plasticity function, representing the decrease in the level of dopamine neurotransmitters.
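The interplay described here, between an exponentially decaying plasticity and later interference, can be sketched phenomenologically. The following is an illustration of the general mechanism, not the thesis' BCPNN model: memories encoded at age t get strength proportional to the plasticity at that age, and are then degraded by everything encoded after them, producing an interior peak in the retrieval curve. All parameter values are assumptions.

```python
import numpy as np

# Phenomenological sketch (not the thesis' BCPNN model): encoding strength
# e(t) = c + d0*exp(-t/tau) models dopamine-linked plasticity decaying with
# age plus a constant; each memory is then degraded by interference from
# everything encoded later. The interplay yields an interior maximum,
# a "reminiscence bump". All parameter values are assumptions.
ages = np.arange(71)                     # ages 0..70 at the time of recall
tau, d0, c, k = 20.0, 1.0, 0.1, 0.1
e = c + d0 * np.exp(-ages / tau)         # encoding strength at each age
later = e[::-1].cumsum()[::-1] - e       # total strength encoded after t
retrieval = e * np.exp(-k * later)       # surviving memory strength

bump_age = int(np.argmax(retrieval))     # position of the bump
```

Memories from early childhood are strongly encoded but suffer the most subsequent interference, while late memories face little interference but were weakly encoded; the peak therefore falls in between, and its position shifts with the decay time constant, echoing the parameter dependence reported in the abstract.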
