About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Measures of dependence between time series: Comparative study, statistical analysis and applications in neuroscience

Carlos Stein Naves de Brito, 29 July 2010
Measures of dependence between time series are studied with the aim of revealing how different brain regions interact, through application to electrophysiological signals. Based on the autoregressive and spectral representation of time series, different measures are compared, including spectral coherence and partial directed coherence, and a new measure, named partial directed transfer, is introduced. The measures are analyzed in terms of partialization, direct versus indirect relations, and temporal directionality, and their relation to quadratic correlation is shown. Among the measures analyzed, partial directed coherence and partial directed transfer exhibit the largest number of desirable properties, grounded in the concept of Granger causality. Asymptotic statistics are developed for all measures, including confidence intervals and null-hypothesis tests, together with their computational implementation. Application to simulated series and the analysis of real electrophysiological data illustrate the comparative study and the applicability of the newly presented statistics.
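Partial directed coherence has a standard closed form once a vector autoregressive (VAR) model has been fitted to the signals. A minimal sketch following the common Baccalá-Sameshima definition (an illustration of the measure, not the thesis's implementation):

```python
import numpy as np

def pdc(A, f, fs=1.0):
    """Partial directed coherence at frequency f.

    A: array of shape (p, n, n); A[r] holds the lag-(r+1) VAR coefficients
    for n time series.  Returns an (n, n) matrix of PDC values.
    """
    p, n, _ = A.shape
    lags = np.arange(1, p + 1)
    # Abar(f) = I - sum_r A_r * exp(-i 2 pi f r / fs)
    Abar = np.eye(n, dtype=complex) - np.tensordot(
        np.exp(-2j * np.pi * f * lags / fs), A, axes=(0, 0))
    # normalize each column by its Euclidean norm
    return np.abs(Abar) / np.linalg.norm(Abar, axis=0, keepdims=True)
```

Each column j gives the normalized outflow from series j, so the squared entries of any column sum to one; this is the normalization property that makes PDC values comparable across targets.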
192

Characterizing and comparing acoustic representations in convolutional neural networks and the human auditory system

Thompson, Jessica A. F.
Auditory processing in the human brain and in contemporary machine hearing systems consists of a cascade of representational transformations that extract and reorganize relevant information to enable task performance. This thesis is concerned with the nature of acoustic representations and the network design and learning principles that support their development. The primary scientific goals are to characterize and compare auditory representations in deep convolutional neural networks (CNNs) and the human auditory pathway. This work prompts several meta-scientific questions about the nature of scientific progress, which are also considered. The introduction reviews what is currently known about the mammalian auditory pathway and introduces the relevant concepts in deep learning. The first article argues that the most pressing philosophical questions at the intersection of artificial and biological intelligence are ultimately concerned with defining the phenomena to be explained and with what constitutes a valid explanation of such phenomena. I highlight relevant theories of scientific explanation that I hope will provide scaffolding for future discussion. Article 2 tests a popular model of auditory cortex based on frequency-specific spectrotemporal modulations. We find that a linear model trained only on BOLD responses to simple dynamic ripples (containing only one fundamental frequency, temporal modulation rate, and spectral scale) can generalize to predict responses to mixtures of two dynamic ripples. Both the third and fourth articles investigate how CNN representations are affected by various aspects of training. The third article characterizes the language specificity of CNN layers and explores the effect of freeze training and random weights.
We observed three distinct regions of transferability: (1) the first two layers were entirely transferable between languages, (2) layers 2-8 were also highly transferable, but we found some evidence of language specificity, (3) the subsequent fully connected layers were more language specific but could be successfully fine-tuned to the target language. In Article 4, we use similarity analysis to find that the superior performance of freeze training achieved in Article 3 can be largely attributed to representational differences in the penultimate layer: the second fully connected layer. We also analyze the random networks from Article 3, from which we conclude that representational form is doubly constrained by architecture and by the form of the input and target. To test whether acoustic CNNs learn a representational hierarchy similar to that of the human auditory system, the fifth article presents a similarity analysis comparing the activity of the freeze-trained networks from Article 3 to 7T fMRI activity throughout the human auditory system. We find no evidence of a shared representational hierarchy and instead find that all of our auditory regions were most similar to the first fully connected layer. Finally, the discussion chapter reviews the merits and limitations of a deep learning approach to neuroscience in a model comparison framework. Together, these works contribute to the nascent enterprise of modeling the auditory system with neural networks and constitute a small step towards a unified science of intelligence that studies the phenomena exhibited in both biological and artificial intelligence.
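Similarity analyses of the kind used in Articles 4 and 5 are typically built on representational dissimilarity matrices (RDMs). A minimal sketch, assuming condition-by-feature activity matrices for a network layer and a brain region (generic RSA machinery, not the thesis's exact pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activity):
    """Condensed dissimilarity vector (1 - Pearson r) over conditions.

    activity: array of shape (n_conditions, n_features).
    """
    return pdist(activity, metric="correlation")

def rdm_similarity(layer_act, brain_act):
    """Spearman correlation between two RDMs; higher means the two
    representations have more similar geometry over the same conditions."""
    rho, _ = spearmanr(rdm(layer_act), rdm(brain_act))
    return rho
```

Comparing every layer's RDM against every region's RDM yields the kind of layer-by-region similarity profile used to ask which network stage best matches a given auditory region.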
193

Beyond AMPA and NMDA: Slow synaptic mGlu/TRPC currents: Implications for dendritic integration

Petersson, Marcus, January 2010
In order to understand how the brain functions, under normal as well as pathological conditions, it is important to study the mechanisms underlying information integration. Depending on the nature of an input arriving at a synapse, different strategies may be used by the neuron to integrate and respond to the input. Naturally, if a short train of high-frequency synaptic input arrives, it may be beneficial for the neuron to be equipped with a fast mechanism that is highly sensitive to inputs on a short time scale. If, on the contrary, inputs arriving with low frequency are to be processed, it may be necessary for the neuron to possess slow mechanisms of integration. For example, in certain working memory tasks (e.g. delayed match-to-sample), sensory inputs may arrive separated by silent intervals in the range of seconds, and the subject should respond if the current input is identical to the preceding input. It has been suggested that single neurons, due to intrinsic mechanisms outlasting the duration of input, may be able to perform such calculations. In this work, I have studied a mechanism thought to be particularly important in supporting the integration of low-frequency synaptic inputs. It is mediated by a cascade of events that starts with activation of group I metabotropic glutamate receptors (mGlu1/5) and ends with a membrane depolarization caused by a current mediated by canonical transient receptor potential (TRPC) ion channels. This current, denoted I_TRPC, is the focus of this thesis. A specific objective is to study the role of I_TRPC in the integration of synaptic inputs arriving at a low frequency (< 10 Hz). Our hypothesis is that, in contrast to the well-studied, rapidly decaying AMPA and NMDA currents, I_TRPC is well suited for supporting temporal summation of such synaptic input.
The reason for choosing this range of frequencies is that neurons often communicate with signals (spikes) around 8 Hz, as shown by single-unit recordings in behaving animals. This is true for several regions of the brain, including the entorhinal cortex (EC), which is known to play a key role in producing working memory function and enabling long-term memory formation in the hippocampus. Although there is strong evidence suggesting that I_TRPC is important for neuronal communication, I have not encountered a systematic study of how this current contributes to synaptic integration. Since it is difficult to directly measure the electrical activity in dendritic branches using experimental techniques, I use computational modeling for this purpose. I implemented the components necessary for studying I_TRPC, including a detailed model of extrasynaptic glutamate concentration, mGlu1/5 dynamics and the TRPC channel itself. I tuned the model to replicate electrophysiological in vitro data from pyramidal neurons of the rodent EC, provided by our experimental collaborator. Since we were interested in the role of I_TRPC in temporal summation, a specific aim was to study how its decay time constant (τ_decay) is affected by synaptic stimulus parameters. The hypothesis described above is supported by our simulation results, as we show that synaptic inputs arriving at frequencies as low as 3-4 Hz can be effectively summed. We also show that τ_decay increases with increasing stimulus duration and frequency, and that it is linearly dependent on the maximal glutamate concentration. Under some circumstances it was problematic to directly measure τ_decay, and we then used a paired-pulse paradigm to obtain an indirect estimate. I am not aware of any computational modeling work prior to this study that takes into account the synaptically evoked I_TRPC current, and believe that it is the first of its kind.
We suggest that I_TRPC is important for slow synaptic integration, not only in the EC, but in several cortical and subcortical regions that contain mGlu1/5 and TRPC subunits, such as the prefrontal cortex. I will argue that this is further supported by studies using pharmacological blockers as well as studies on genetically modified animals.
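The contrast between fast and slow synaptic currents can be illustrated with a toy linear summation model: inputs arriving every 250 ms (4 Hz) build up only when the current's decay constant is long. This sketch assumes simple unit-amplitude exponential decay rather than the full mGlu1/5-TRPC cascade modeled in the thesis:

```python
import numpy as np

def summed_current(rate_hz, tau_s, n_inputs=10, dt=1e-3):
    """Peak of linearly summed unit-amplitude exponential currents
    evoked at a fixed input rate (toy model, not TRPC kinetics)."""
    spikes = np.arange(n_inputs) / rate_hz          # input times in seconds
    t = np.arange(0.0, spikes[-1] + 5 * tau_s, dt)  # simulate past the last input
    trace = sum(np.exp(-(t - s) / tau_s) * (t >= s) for s in spikes)
    return trace.max()
```

With tau = 1 s, a 4 Hz train builds up to roughly four times the single-input amplitude, while an AMPA-like tau of 50 ms barely summates at all, which is the intuition behind slow currents supporting low-frequency integration.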
194

Role of Context in Episodic Memory: A Bayesian-Hebbian Neural Network Model of Episodic Recall

Raj, Rohan, January 2022
Episodic memory is a fundamental aspect of human memory that accounts for the storage of events as well as the spatio-temporal relations between events during a lifetime. These spatio-temporal relations in which episodes are embedded can be understood as their contexts, and contexts play a crucial role in episodic memory retrieval. Despite this, little work in the computational neuroscience literature has investigated this relationship further. These interactions can be modelled with attractor neural networks such as the Bayesian Confidence Propagation Neural Network (BCPNN). In this project, the interaction between contextual aspects and memory items is studied by developing an abstract computational model of episodic memory retrieval. The effect of increasing the number of items associated with a particular context on overall recall performance is examined. Finally, the role of synaptic plasticity modulation of certain item-context associations on recall is also analysed. It is found that an inverse relationship exists between the number of items associated with a context and their subsequent recall rates, i.e. as the number of items associated with an episodic context increases, the recall rates of the corresponding items decrease. Furthermore, it is found that the item-context pairs for which synaptic plasticity is modulated during learning have a significantly higher recall rate than the remaining unmodulated associations.
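The pattern-completion behaviour of attractor networks such as BCPNN can be illustrated with a classic Hopfield-style sketch: a Hebbian weight matrix recovers a stored item-context pattern from a degraded cue. This is a generic attractor demo under simple ±1 coding, not the thesis's BCPNN model:

```python
import numpy as np

def hopfield_recall(patterns, cue, steps=5):
    """Synchronous recall in a Hopfield network with Hebbian weights.

    patterns: array of shape (n_patterns, n_units) with entries in {-1, +1}.
    cue: degraded version of one stored pattern.
    """
    n = patterns.shape[1]
    W = patterns.T @ patterns / n   # Hebbian outer-product learning
    np.fill_diagonal(W, 0.0)        # no self-connections
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s
```

Cueing with a pattern whose "context" half is intact and "item" half is corrupted typically converges back to the stored pattern when few patterns are stored; as more items share a context, interference grows, which is the qualitative fan effect the thesis quantifies.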
195

The neuronal statistician: How the Bayesian perspective can enrich neuroscience

Dehaene, Guillaume, 9 September 2016
Bayesian inference answers key questions of perception, such as: "What should I believe given what I have perceived?". As such, it is a rich source of models for cognitive science and neuroscience (Knill and Richards, 1996). This PhD thesis explores two such models. We first investigate an efficient coding problem, asking how best to represent probabilistic information in unreliable neurons; unlike earlier models, we account for finite input information. We then explore a new ideal observer model of sound-source localization based on the interaural time difference cue, whereas current models are purely phenomenological descriptions of the electrophysiology. Finally, we explore the properties of the Expectation Propagation approximate-inference algorithm, which is highly promising both for practical machine-learning applications and for modeling neuronal populations, but is currently poorly understood.
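A textbook instance of the Bayesian answer to "what should I believe given what I have perceived?" is precision-weighted cue combination: two noisy Gaussian measurements of the same quantity fuse into a posterior whose mean weights each cue by its reliability. This is a standard example of the perspective, not taken from the thesis:

```python
def fuse(mu1, var1, mu2, var2):
    """Posterior mean and variance for two independent Gaussian cues
    about the same quantity, under a flat prior."""
    w1, w2 = 1.0 / var1, 1.0 / var2       # precisions
    mean = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mean, 1.0 / (w1 + w2)
```

Note that the fused variance is always smaller than either cue's variance: combining evidence can only sharpen the belief, which is one reason such models fit multisensory perception data well.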
196

Scale Invariant Object Recognition Using Cortical Computational Models and a Robotic Platform

Voils, Danny, 1 January 2012
This paper proposes an end-to-end, scale-invariant visual object recognition system composed of computational components that mimic the cortex. The system uses a two-stage process. The first stage is a filter that extracts scale-invariant features from the visual field. The second stage uses inference-based spatio-temporal analysis of these features to identify objects in the visual field. The proposed model combines Numenta's Hierarchical Temporal Memory (HTM) with HMAX, developed by MIT's Brain and Cognitive Sciences Department. While these two biologically inspired paradigms are based on what is known about the visual cortex, HTM and HMAX tackle the overall object recognition problem from different directions. Image-pyramid-based methods like HMAX make explicit use of scale but have no sense of time. HTM, on the other hand, only indirectly tackles scale but makes explicit use of time. By combining HTM and HMAX, both scale and time are addressed. In this paper, I show that HTM and HMAX can be combined to make a complete cortex-inspired object recognition model that explicitly uses both scale and time to recognize objects in temporal sequences of images. Additionally, through experimentation, I examine several variations of HMAX and its
197

Distinction between needing and wanting: A neuroscience perspective

Bosulu, Juvenal
Needing and wanting are sometimes dissociated and sometimes associated. Needing is related to deprivation: among other brain regions, the hypothalamus and insula play a central role in generating and representing the organism's internal need states, and serotonin levels seem to encode how deprived or satiated one is. Wanting is linked to reward prediction, which has more power over behavioral activation than need states do, because reward-predicting cues elicit activity within the mesolimbic dopamine circuitry, especially the ventral tegmental area (VTA) and the nucleus accumbens (NAcc). That said, the interaction and the difference between needing and wanting in terms of brain function are not fully understood, and we cannot quite explain why they are sometimes associated and sometimes dissociated. Hence, our first study examined the brain activation pattern common to the perception of physiologically and socially needed stimuli, and its relation to serotonin. The second study compared the brain activation patterns for the perception of needed stimuli in the absence of wanting, and of wanted stimuli in the absence of needing. Using meta-analyses of functional brain imaging to answer these questions, we found that physiologically and socially needed stimuli share a common activation pattern peaking in the mid-posterior insula, the prelimbic anterior cingulate cortex, and the caudate nucleus. This common pattern correlates strongly with the serotonin 5HT4 receptor. The second study showed that needing more consistently activates the mid-posterior insula, whereas wanting more consistently activates dopaminergic regions, especially the VTA and NAcc. This suggests that needing directs choice and assigns value to stimuli via interoceptive prediction, while wanting directs choice and assigns value to stimuli via reward prediction. The fact that these two types of value are independent shows that needing and wanting can occur separately (they can be dissociated). However, the first two studies do not explain the underlying effect by which needing amplifies wanting, liking, and so on. Needing is related to the tendency of living creatures to occupy preferred states in order to reduce entropy. In the third study, using computational methods, we found that this tendency to occupy preferred states is influenced by need states independently of reward prediction, and that entropy is reduced much more when a reward leading to the preferred state is present than when it is absent. Since entropy is the uncertainty about which state to occupy, and precision is the inverse of entropy, this result explains how need can amplify wanting: dopamine signals the precision (certainty) that a policy will lead to reward. Need amplifies wanting if, and only if, a cue signals such precision and the predicted reward is the same one that reduces entropy by leading to the preferred state. In that case needing and wanting are concordant; otherwise there is a discrepancy, and they occur independently.
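The entropy argument of the third study can be made concrete with a toy calculation: uncertainty about which state to occupy drops when a cue concentrates probability on the preferred state. The numbers below are made up for illustration; they are not the thesis's simulations:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# four candidate states; with no reward cue, all are equally likely to be occupied
no_cue = np.ones(4) / 4
# a cue signalling a path to the preferred state concentrates probability on it
with_cue = np.exp([3.0, 0.0, 0.0, 0.0])
with_cue /= with_cue.sum()
```

Here entropy(with_cue) < entropy(no_cue): the presence of a reward leading to the preferred state reduces uncertainty over states, which is the sense in which precision (the inverse of entropy) rises and wanting can be amplified by need.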
198

Characterization of a Spiking Neuron Model via a Linear Approach

Jabalameli, Amirhossein, 1 January 2015
In the past decade, characterizing spiking neuron models has been extensively researched as an essential issue in computational neuroscience. In this thesis, we examine the estimation problem for two different neuron models. In Chapter 2, we propose a modified Izhikevich model with an adaptive threshold. In our two-stage estimation approach, a linear least squares method and a linear model of the threshold are derived to predict the location of neuronal spikes. However, the desired results are not obtained, and the predicted model fails to duplicate the spike locations. Chapter 3 focuses on the parameter estimation problem for a multi-timescale adaptive threshold (MAT) neuronal model. Using the dynamics of a non-resetting leaky integrator equipped with an adaptive threshold, a constrained iterative linear least squares method is implemented to fit the model to the reference data. Through manipulation of the system dynamics, the threshold voltage can be obtained as a realizable model that is linear in the unknown parameters. This linearly parametrized realizable model is then utilized inside a prediction-error-based framework to identify the threshold parameters with the purpose of predicting single-neuron precise firing times. This estimation scheme is evaluated using both synthetic data obtained from an exact model and experimental data obtained from in vitro rat somatosensory cortical neurons. Results show the ability of this approach to fit the MAT model to different types of reference data.
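The key property exploited here is that a multi-timescale adaptive threshold is linear in its unknown parameters once the exponential traces are fixed, so ordinary least squares applies. A sketch with two assumed time constants and synthetic, noise-free data (illustrative values, not the thesis's fitted parameters):

```python
import numpy as np

def threshold_features(t, spike_times, taus):
    """Design matrix for a MAT-style threshold: a constant column plus,
    for each time constant, an exponential trace summed over past spikes."""
    cols = [np.ones_like(t)]
    for tau in taus:
        cols.append(sum(np.exp(-(t - s) / tau) * (t >= s) for s in spike_times))
    return np.stack(cols, axis=1)

# recover (omega, alpha_1, alpha_2) from a synthetic threshold by least squares
t = np.linspace(0.0, 1.0, 200)
X = threshold_features(t, spike_times=[0.2, 0.5], taus=(0.05, 0.2))
true_params = np.array([10.0, 2.0, 1.0])
theta = X @ true_params                       # synthetic reference threshold
estimate = np.linalg.lstsq(X, theta, rcond=None)[0]
```

Because the model is linear in (omega, alpha_1, alpha_2), the noise-free fit recovers the generating parameters exactly; with real data the same design matrix feeds the constrained iterative fit described in the abstract.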
199

Nonrenewal spiking in Neural and Calcium signaling

Ramlow, Lukas, 24 January 2024
In both neuronal and calcium signaling, information is transmitted by short pulses, so-called spikes. Although both systems share basic principles of spike generation, integrate-and-fire (IF) models have so far only been applied to neuronal systems. These models remain analytically tractable even when extended to include processes that lead to spike times with correlated interspike intervals (ISIs), as observed in experiments. The statistical analysis of such non-renewal models is the subject of this thesis. In the second chapter, we focus on the calculation of the serial correlation coefficient (SCC) in neural systems. We consider an adaptive model driven by a correlated input current and show that, in addition to the two slow processes, the dynamics of the model also determines the SCC. Although the theory is developed for weakly perturbed IF models, it can also be applied to more strongly perturbed conductance-based models and is thus able to account for a wide range of biophysical situations. In the third chapter, we formulate an IF model describing the generation of calcium spikes, taking into account the stochastic release of calcium from the endoplasmic reticulum (ER) and its depletion. The observed time-scale separation between calcium release and spike generation motivates a diffusion approximation that allows an analytical treatment of the model. The experimentally observed transient, during which the ISIs approach a steady-state value, can be captured by the depletion of the ER. We study how the transient ISI statistics are related to the stationary interval correlations, and show that stronger adaptation of the intervals as well as a short transient are associated with stronger interval correlations. Comparison with experimental data qualitatively confirms these trends.
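The serial correlation coefficient that organizes this analysis is simply the lag-k correlation of the interspike-interval sequence. A minimal estimator of the standard definition (the thesis derives it analytically; this is just the empirical counterpart):

```python
import numpy as np

def scc(isis, k):
    """Serial correlation coefficient rho_k of an ISI sequence:
    cov(T_i, T_{i+k}) normalized by var(T_i)."""
    isis = np.asarray(isis, dtype=float)
    x, y = isis[:-k], isis[k:]
    m = isis.mean()
    return float(np.mean((x - m) * (y - m)) / isis.var())
```

A renewal process has rho_k near zero for all k > 0, while a perfectly alternating long-short sequence has rho_1 = -1; adaptation in IF models typically produces such negative short-lag correlations.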
200

Single Cell Transcriptomic-informed Microcircuit Computer Modelling of Temporal Lobe Epilepsy

Reddy, Vineet, 28 July 2022
No description available.
