11

Biologicky motivovaná autoasociativní neuronová síť s dynamickými synapsemi. / Activity and Memory in Biologically Motivated Neural Network.

Štroffek, Július January 2018
This work presents a biologically motivated neural network model that works as an auto-associative memory. The architecture of the model is similar to that of the Hopfield network, which in turn may resemble parts of the hippocampal area CA3 (Cornu Ammonis). The patterns learned and retrieved are not static: they are periodically repeating sequences of sparse synchronous activities. Patterns are stored in the network using a Hebb rule modified to store cyclic sequences. The capacity of the model is analyzed together with numerical simulations. The model is further extended with short-term potentiation (STP), which forms an essential part of the successful pattern recall process and greatly increases the memory capacity. A joint version of the model combining both approaches is discussed; it may retrieve a pattern in a short time interval without STP (fast patterns) or over a longer period by utilizing STP (slow patterns). We know from everyday life that some patterns can be recalled promptly while others take much longer to surface. Keywords: auto-associative neural network, Hebbian learning, neural coding, memory, pattern recognition, short-term potentiation
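A minimal sketch of the kind of storage rule this abstract describes: an asymmetric Hebb rule that links each sparse pattern to its successor, so that synchronous recall walks a cyclic sequence. All sizes, thresholds, and the exact update below are illustrative assumptions, not taken from the thesis, and STP is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, K = 200, 5, 20          # neurons, patterns in the cycle, active units each

# Sparse binary patterns forming one cyclic sequence x_0 -> x_1 -> ... -> x_0
patterns = np.zeros((P, N))
for p in range(P):
    patterns[p, rng.choice(N, K, replace=False)] = 1.0

# Asymmetric Hebb rule for cyclic sequences: each pattern drives its successor
W = np.zeros((N, N))
for p in range(P):
    W += np.outer(patterns[(p + 1) % P], patterns[p])
W /= K

def step(x, theta=0.5):
    """One synchronous update; ideally yields the next pattern in the cycle."""
    return (W @ x > theta).astype(float)

# Recall: start from a noisy cue of pattern 0 and walk the stored cycle
x = patterns[0].copy()
x[rng.choice(N, 5, replace=False)] = 1.0      # corrupt the cue slightly
for t in range(P):
    x = step(x)
    match = (x == patterns[(t + 1) % P]).mean()
    print(f"step {t}: agreement with expected pattern = {match:.2f}")
```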
12

Redes neurais não-supervisionadas para processamento de sequências temporais / Unsupervised neural networks for temporal sequence processing

Guilherme de Alencar Barreto 31 August 1998
Em muitos domínios de aplicação, a variável tempo é uma dimensão essencial. Este é o caso da robótica, na qual trajetórias de robôs podem ser interpretadas como seqüências temporais cuja ordem de ocorrência de suas componentes precisa ser considerada. Nesta dissertação, desenvolve-se um modelo de rede neural não-supervisionada para aprendizagem e reprodução de trajetórias do Robô PUMA 560. Estas trajetórias podem ter estados em comum, o que torna o processo de reprodução susceptível a ambigüidades. O modelo proposto consiste em uma rede competitiva composta por dois conjuntos de pesos sinápticos; pesos intercamadas e pesos intracamada. Pesos intercamadas conectam as unidades na camada de entrada com os neurônios da camada de saída e codificam a informação espacial contida no estímulo de entrada atual. Os pesos intracamada conectam os neurônios da camada de saída entre si, sendo divididos em dois grupos: autoconexões e conexões laterais. A função destes é codificar a ordem temporal dos estados da trajetória, estabelecendo associações entre estados consecutivos através de uma regra hebbiana. Três mecanismos adicionais são propostos de forma a tornar a aprendizagem e reprodução das trajetórias mais confiável: unidades de contexto, exclusão de neurônios e redundância na representação dos estados. A rede funciona indicando na sua saída o estado atual e o próximo estado da trajetória. As simulações com o modelo proposto ilustram a habilidade do modelo em aprender e reproduzir múltiplas trajetórias com precisão e sem ambiguidades. A rede também é capaz de reproduzir trajetórias mesmo diante de perdas de neurônios e de generalizar diante da presença de ruído nos estímulos de entrada da rede. / In many application domains, the variable time is an essential dimension. This is the case of Robotics, where robot trajectories can be interpreted as temporal sequences in which the order of occurrence of each component needs to be considered. In this dissertation, an unsupervised neural network model is developed for learning and reproducing trajectories of a PUMA 560 robot. These trajectories can have states in common, making the reproduction process susceptible to ambiguities. The proposed model consists of a competitive network with two groups of synaptic connections: interlayer and intralayer ones. The interlayer weights connect units in the input layer with neurons in the output layer and encode the spatial information contained in the current input stimulus. The intralayer weights connect the neurons of the output layer to each other and are divided in two groups: self-connections and lateral connections. The function of these links is to encode the temporal order of the trajectory states, establishing associations among consecutive states through a Hebbian rule. Three additional mechanisms are proposed in order to make trajectory learning and reproduction more reliable: context units, exclusion of neurons, and redundancy in the representation of the states. The model outputs the current state and the next state of the trajectory. Simulations with the proposed model illustrate its ability to learn and reproduce multiple trajectories accurately and without ambiguities. In addition, the proposed neural network model is able to reproduce trajectories even when neuron failures occur and can generalize well in the presence of noise in the input stimulus.
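A toy sketch of the two weight groups described above: interlayer weights learned competitively to encode states, and intralayer Hebbian links to encode their temporal order. The network sizes, learning rates, and the random trajectory are illustrative assumptions; the thesis's context units, neuron exclusion, and redundant state representation are omitted, so shared states would remain ambiguous here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 3, 12               # input (robot state) dimension, output neurons

W_inter = rng.random((n_out, n_in))   # interlayer weights: spatial information
W_intra = np.zeros((n_out, n_out))    # intralayer weights: temporal order

trajectory = rng.random((6, n_in))    # one toy trajectory of six states

for _ in range(5):                    # a few training passes
    prev = None
    for x in trajectory:
        winner = int(np.argmin(np.linalg.norm(W_inter - x, axis=1)))
        W_inter[winner] += 0.5 * (x - W_inter[winner])   # move toward the state
        if prev is not None:
            W_intra[winner, prev] += 1.0   # Hebbian link: previous -> current
        prev = winner

# Reproduction: from the neuron coding the current state, the strongest
# lateral connection points to the neuron coding the next state.
current = int(np.argmin(np.linalg.norm(W_inter - trajectory[0], axis=1)))
for _ in range(len(trajectory) - 1):
    nxt = int(np.argmax(W_intra[:, current]))
    print(current, "->", nxt)
    current = nxt
```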
13

Neuroscience of decision making : from goal-directed actions to habits / Neuroscience de la prise de décision : des actions dirigées vers un but aux habitudes

Topalidou, Meropi 10 October 2016
Les processus de type “action-conséquence” (orienté vers un but) et stimulus-réponse sont deux composants importants du comportement. Le premier évalue le bénéfice d’une action pour choisir la meilleure parmi celles disponibles (sélection d’action) alors que le deuxième est responsable du comportement automatique, suscitant une réponse dès qu’un stimulus connu est présent. De telles habitudes sont généralement associées (et surtout opposées) aux actions orientées vers un but qui nécessitent un processus délibératif pour évaluer la meilleure option à prendre pour atteindre un objectif donné. En utilisant un modèle computationnel, nous avons étudié l’hypothèse classique de la formation et de l’expression des habitudes au niveau des ganglions de la base et nous avons formulé une nouvelle hypothèse quant aux rôles respectifs des ganglions de la base et du cortex. Inspiré par les travaux théoriques et expérimentaux de Leblois et al. (2006) et Guthrie et al. (2013), nous avons conçu un modèle computationnel des ganglions de la base, du thalamus et du cortex qui utilise des boucles distinctes (moteur, cognitif et associatif) ce qui nous a permis de poser l’hypothèse selon laquelle les ganglions de la base ne sont nécessaires que pour l’acquisition d’habitudes alors que l’expression de telles habitudes peut être faite par le cortex seul. En outre, ce modèle a permis de prédire l’existence d’un apprentissage latent dans les ganglions de la base lorsque leurs sorties (GPi) sont inhibées. En utilisant une tâche de bandit manchot à 2 choix, cette hypothèse a été expérimentalement testée et confirmée chez le singe; suggérant au final de rejeter l’idée classique selon laquelle l’automatisme est un trait subcortical. / Action-outcome and stimulus-response processes are two important components of behavior. The former evaluates the benefit of an action in order to choose the best action among those available (action selection), while the latter is responsible for automatic behavior, eliciting a response as soon as a known stimulus is present. Such habits are generally associated with (and mostly opposed to) goal-directed actions, which require a deliberative process to evaluate the best option to take in order to reach a given goal. Using a computational model, we investigated the classic hypothesis of habit formation and expression in the basal ganglia and proposed a new hypothesis concerning the respective roles of the basal ganglia and the cortex. Inspired by previous theoretical and experimental works (Leblois et al., 2006; Guthrie et al., 2013), we designed a computational model of the basal ganglia-thalamus-cortex system that uses segregated loops (motor, cognitive and associative) and makes the hypothesis that the basal ganglia are only necessary for the acquisition of habits, while the expression of such habits can be mediated through the cortex. Furthermore, this model predicts the existence of covert learning within the basal ganglia when their output is inhibited. Using a two-armed bandit task, this hypothesis has been experimentally tested and confirmed in monkeys. Finally, this work suggests revising the classical idea that automatism is a subcortical feature.
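A hedged toy model of the dual-pathway idea tested with the bandit task: a fast reward-driven value learner alongside a slow use-driven habit strength, choosing between two cues. The reward probabilities, learning rates, and softmax choice rule are assumptions for illustration, not the model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
p_reward = [0.25, 0.75]     # hypothetical reward probability of each cue
V = np.zeros(2)             # fast, reward-driven values ("goal-directed")
H = np.zeros(2)             # slow, use-driven habit strengths ("habitual")
alpha, eta = 0.1, 0.01

for trial in range(1000):
    logits = V + H
    p = np.exp(logits) / np.exp(logits).sum()   # softmax over the two cues
    a = rng.choice(2, p=p)
    r = float(rng.random() < p_reward[a])
    V[a] += alpha * (r - V[a])      # value tracks reward probability
    H[a] += eta * (1.0 - H[a])      # habit strengthens simply through use

print("values:", V.round(2), "habit strengths:", H.round(2))
```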
14

Inhibition and loss of information in unsupervised feature extraction

Kermani Kolankeh, Arash 27 March 2018
In this thesis, inhibition as a means of competition among neurons in an unsupervised learning system is studied. In the first part of the thesis, the role of inhibition in robustness against loss of information, in the form of occlusion in visual data, is investigated. In the second part, inhibition as a cause of loss of information in mathematical models of the neural system is addressed. In that part, a learning rule for modeling inhibition with reduced loss of information, as well as a dis-inhibitory system that induces a winner-take-all mechanism, are introduced. The models used in this work are unsupervised feature extractors made of biologically plausible neural networks which simulate area V1 of the visual cortex.
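As a rough illustration of inhibition as competition, the sketch below implements an iterative lateral-inhibition step in which each unit is suppressed in proportion to its rivals' activity, so the strongest unit survives. The function, its parameters, and the example vector are assumptions, not code from the thesis:

```python
import numpy as np

def lateral_inhibition(activations, strength=0.5, iterations=20):
    """Each unit is suppressed in proportion to the mean activity of its rivals."""
    a = activations.astype(float).copy()
    for _ in range(iterations):
        rivals = (a.sum() - a) / (len(a) - 1)    # mean activity of the others
        a = np.maximum(a - strength * rivals, 0.0)
    return a

a = np.array([0.9, 0.8, 0.3, 0.1])
print(lateral_inhibition(a))    # the strongest unit suppresses the rest
```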
15

Exploring the column elimination optimization in LIF-STDP networks

Sun, Mingda January 2022
Spiking neural networks using Leaky Integrate-and-Fire (LIF) neurons and Spike-Timing-Dependent Plasticity (STDP) learning are commonly used as more biologically plausible networks. Compared to DNNs and RNNs, LIF-STDP networks are models closer to the biological cortex. LIF-STDP neurons use spikes to communicate with each other, and they learn through the correlation among these pre- and post-synaptic spikes. Simulating such networks usually requires high-performance supercomputers, almost all of which are based on the von Neumann architecture, which separates storage and computation. In von Neumann solutions, memory access is the bottleneck even for highly optimized Application-Specific Integrated Circuits (ASICs). In this thesis, we propose an optimization method that reduces the memory access cost by avoiding a dual-access pattern. In LIF-STDP networks, the weights are usually stored in the form of a two-dimensional matrix. Pre- and post-synaptic spikes trigger row and column access, respectively, and this dual-access pattern is very costly for DRAM. We eliminate the column access by introducing a post-synaptic buffer and an approximation function: the post-synaptic spikes are recorded in the buffer and are processed at pre-synaptic spikes together with the row updates. This column update elimination method introduces errors due to the limited buffer size. In our error analysis, experiments show that the probability of introducing intolerable errors can be bounded to a very small value with a proper buffer size and approximation function. We also present a performance analysis of the Column Update Elimination (CUE) optimization. The error analysis of the column update elimination method is the main contribution of our work. / Spikande neurala nätverk som använder LIF-neuroner och STDP-inlärning, används vanligtvis som ett mer biologiskt möjligt nätverk. Jämfört med DNN och RNN är LIF-STDP-nätverken modeller närmare den biologiska cortex. LIF-STDP-neuroner använder spikar för att kommunicera med varandra, och de lär sig genom korrelationen mellan dessa pre- och postsynaptiska spikar. Simulering av sådana nätverk kräver vanligtvis högpresterande superdatorer som nästan alla är baserade på von Neumann-arkitektur som separerar lagring och beräkning. I von Neumanns arkitekturlösningar är minnesåtkomst flaskhalsen även för högt optimerade Application-Specific Integrated Circuits (ASIC). I denna avhandling föreslår vi en optimeringsmetod som kan minska kostnaden för minnesåtkomst genom att undvika ett dubbelåtkomstmönster. I LIF-STDP-nätverk lagras vikterna vanligtvis i form av en tvådimensionell matris. Pre- och postsynaptiska toppar kommer att utlösa rad- och kolumnåtkomst på motsvarande sätt. Men detta mönster med dubbel åtkomst är mycket dyrt i DRAM. Vi eliminerar kolumnåtkomsten genom att införa en postsynaptisk buffert och en approximationsfunktion. De postsynaptiska topparna registreras i bufferten och bearbetas vid presynaptiska toppar tillsammans med raduppdateringarna. Denna metod för eliminering av kolumnuppdatering kommer att introducera fel på grund av den begränsade buffertstorleken. I vår felanalys visar experimenten att sannolikheten för att införa oacceptabla fel kan begränsas till ett mycket litet antal med korrekt buffertstorlek och approximationsfunktion. Vi presenterar också en prestandaanalys av CUE-optimeringen. Felanalysen av elimineringsmetoden för kolumnuppdateringar är det huvudsakliga bidraget från vårt arbete.
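A minimal sketch of the buffering idea described above, under assumed STDP constants: post-synaptic spikes are only recorded, and the deferred updates are replayed when a pre-synaptic spike brings the corresponding weight row into memory anyway. The sizes, time constants, and function names are illustrative, not the thesis's implementation:

```python
import numpy as np
from collections import deque

N_PRE, N_POST, BUF_SIZE = 64, 64, 128
W = np.zeros((N_PRE, N_POST))          # weights, stored row-major in memory
post_buffer = deque(maxlen=BUF_SIZE)   # bounded buffer of (post neuron, time)

def on_post_spike(j, t):
    # Normally a post spike triggers a column update of W[:, j], which is
    # expensive in DRAM. Here the spike is only recorded; once the buffer is
    # full the oldest entry is dropped, which causes the bounded error
    # analysed in the thesis.
    post_buffer.append((j, t))

def on_pre_spike(i, t, a_minus=0.05, tau=20.0):
    # A pre spike touches row W[i, :] anyway, so the buffered post spikes are
    # replayed against that row here, eliminating all column accesses.
    for j, t_post in post_buffer:
        W[i, j] -= a_minus * np.exp(-(t - t_post) / tau)  # post-before-pre dip

on_post_spike(3, 10.0)
on_pre_spike(7, 12.0)
print(W[7, 3])    # depressed by the buffered post spike
```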
16

Modelling synaptic rewiring in brain-like neural networks for representation learning / Modellering av synaptisk omkoppling i hjärnliknande neurala nätverk för representationsinlärning

Bhatnagar, Kunal January 2023
This research investigated a sparsity method inspired by the principles of structural plasticity in the brain in order to create a sparse model of the Bayesian Confidence Propagation Neural Network (BCPNN) during the training phase. This was done by extending the structural plasticity in the implementation of the BCPNN. While the initial algorithm presented two synaptic states (Active and Silent), this research extended it to three synaptic states (Active, Silent and Absent), with the aim of enhancing sparsity configurability and emulating a more brain-like algorithm, drawing parallels with synaptic states observed in the brain. Benchmarking was conducted using the MNIST and Fashion-MNIST datasets, where the proposed three-state model was compared against the previous two-state model in terms of representational learning. The findings suggest that the three-state model not only provides added configurability but also, in certain low-sparsity settings, showcases representational learning abilities similar to those of the two-state model. Moreover, in high-sparsity settings, the three-state model demonstrates a commendable balance in the accuracy-sparsity trade-off. / Denna forskning undersökte en konceptuell metod för gleshet inspirerad av principerna för strukturell plasticitet i hjärnan för att skapa glesa BCPNN. Forskningen utvidgade strukturell plasticitet i en implementering av BCPNN. Medan den ursprungliga algoritmen presenterade två synaptiska tillstånd (Aktiv och Tyst), utvidgade denna forskning den till tre synaptiska tillstånd (Aktiv, Tyst och Frånvarande) med målet att öka konfigurerbarheten av sparsitet och efterlikna en mer hjärnliknande algoritm, med paralleller till synaptiska tillstånd observerade i hjärnan. Jämförelse gjordes med hjälp av MNIST och Fashion-MNIST datasetet, där det föreslagna tre-tillståndsmodellen jämfördes med den tidigare tvåtillståndsmodellen med avseende på representationslärande. Resultaten tyder på att tre-tillståndsmodellen inte bara ger ökad konfigurerbarhet utan också, i vissa lågt glesa inställningar, visar samma inlärningsförmåga som två-tillståndsmodellen. Dessutom visar den tre-tillståndsmodellen i högsparsamma inställningar en anmärkningsvärd balans mellan noggrannhet och avvägningen mellan sparsitet.
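A toy sketch of a three-state synapse pool under a rewiring step that demotes weak connections toward Absent and promotes strong ones toward Active. The scoring, the step size k, and the transition rule are illustrative assumptions, not the BCPNN implementation from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)
ABSENT, SILENT, ACTIVE = 0, 1, 2
n = 1000
state = rng.choice([ACTIVE, SILENT], size=n)   # initial synapse states
score = rng.random(n)                          # stand-in for a usage measure

def rewire(state, score, k=50):
    """One rewiring step: demote the k weakest, promote the k strongest."""
    order = np.argsort(score)
    state[order[:k]] = np.maximum(state[order[:k]] - 1, ABSENT)    # toward Absent
    state[order[-k:]] = np.minimum(state[order[-k:]] + 1, ACTIVE)  # toward Active
    return state

state = rewire(state, score)
print({name: int((state == s).sum())
       for name, s in (("active", ACTIVE), ("silent", SILENT), ("absent", ABSENT))})
```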
17

Exploring Column Update Elimination Optimization for Spike-Timing-Dependent Plasticity Learning Rule / Utforskar kolumnuppdaterings-elimineringsoptimering för spik-timing-beroende plasticitetsinlärningsregel

Singh, Ojasvi January 2022
Hebbian-learning-based neural network learning rules, when implemented in hardware, store their synaptic weights in the form of a two-dimensional matrix. The storage of synaptic weights demands large memory bandwidth and storage. While memory units are optimized for row-wise access only, Hebbian learning rules like spike-timing-dependent plasticity demand both row- and column-wise access of memory. This dual pattern of memory access accounts for the dominant cost, in terms of latency as well as energy, of realizing large-scale spiking neural networks in hardware. In order to reduce the memory access cost of Hebbian learning rules, a Column Update Elimination optimization has previously been implemented, with great efficacy, on the Bayesian Confidence Propagation neural network, which faces a similar challenge of dual-pattern memory access. This thesis explores the possibility of extending the column update elimination optimization to spike-timing-dependent plasticity by simulating the learning rule on a two-layer network of leaky integrate-and-fire neurons on an image classification task. The spike times are recorded for each neuron in the network to derive a suitable probability distribution function for spike rates per neuron. This is then used to derive an ideal postsynaptic spike history buffer size for the given algorithm. The associated memory access reductions are analysed on this data to assess the feasibility of applying the optimization to the learning rule. / Hebbiansk inlärning baserat på neural nätverks inlärnings regler används vid implementering på hårdvara, de lagrar deras synaptiska vikter i form av en tvådimensionell matris. Lagringen av synaptiska vikter kräver stor bandbredds minne och lagring. Medan minnesenheter endast är optimerade för radvis minnesåtkomst. Hebbianska inlärnings regler kräver som spike-timing-beroende plasticitet, både rad- och kolumnvis åtkomst av minnet. Det dubbla mönstret av minnes åtkomsten står för den dominerande kostnaden i form av fördröjning såväl som energi för realiseringen av storskaliga spikande neurala nätverk i hårdvara. För att minska kostnaden för minnesåtkomst i hebbianska inlärnings regler har en Column Update Elimination-optimering tidigare implementerats, med god effektivitet på Bayesian Confidence Propagation neurala nätverket, som står inför en liknande utmaning med dubbel mönster minnesåtkomst. Denna avhandling undersöker möjligheten att utöka Column Update Elimination-optimeringen till spike-timing-beroende plasticitet. Detta genom att simulera inlärnings regeln på ett tvålagers nätverk av läckande integrera-och-avfyra neuroner på en bild klassificerings uppgift. Spike tiderna registreras för varje neuron i nätverket för att erhålla en lämplig sannolikhetsfördelning funktion för frekvensen av toppar per neuron. Detta används sedan för att erhålla en idealisk postsynaptisk spike historisk buffertstorlek för den angivna algoritmen. De associerade minnesåtkomst minskningarna analyseras baserat på data för att bedöma genomförbarheten av optimeringen av inlärnings regeln.
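A sketch of the buffer-sizing step described above, assuming the recorded post-synaptic spike counts follow some empirical distribution (a Poisson stand-in here): pick the smallest buffer whose overflow probability stays below a tolerance. The tolerance value and the distribution are illustrative assumptions, not the thesis's measurements:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for recorded post-synaptic spike counts per inter-pre-spike window
spike_counts = rng.poisson(lam=3.0, size=10_000)

tolerance = 1e-3    # acceptable probability of overflowing the buffer
buf_size = int(np.quantile(spike_counts, 1.0 - tolerance))
overflow_rate = (spike_counts > buf_size).mean()
print(f"buffer size {buf_size}, observed overflow rate {overflow_rate:.4f}")
```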
18

A plastic multilayer network of the early visual system inspired by the neocortical circuit

Teichmann, Michael 25 October 2018
The ability of the visual system for object recognition is remarkable. A better understanding of its processing would lead to better computer vision systems and could improve our understanding of the underlying principles which produce intelligence. We propose a computational model of the visual areas V1 and V2, implementing a rich connectivity inspired by the neocortical circuit. We combined the three most important cortical plasticity mechanisms: 1) Hebbian synaptic plasticity to learn the synapse strengths of excitatory and inhibitory neurons, including trace learning to learn invariant representations; 2) intrinsic plasticity to regulate the neurons' responses and stabilize the learning in deeper layers; 3) structural plasticity to modify the connections and to overcome the bias that the initial definitions impose on learning. Among other results, we show that our model neurons learn receptive fields comparable to cortical ones. We verify the invariant object recognition performance of the model. We further show that the developed weight strengths and connection probabilities are related to the response correlations of the neurons. We link the connection probabilities of the inhibitory connections to the underlying plasticity mechanisms and explain why inhibitory connections appear unspecific. The proposed model is more detailed than previous approaches. It can reproduce neuroscientific findings and fulfills the purpose of the visual system, invariant object recognition. / Das visuelle System des Menschen hat die herausragende Fähigkeit zur invarianten Objekterkennung. Ein besseres Verständnis seiner Arbeitsweise kann zu besseren Computersystemen für das Bildverstehen führen und könnte darüber hinaus unser Verständnis von den zugrundeliegenden Prinzipien unserer Intelligenz verbessern. Diese Arbeit stellt ein Modell der visuellen Areale V1 und V2 vor, welches eine komplexe, von den Strukturen des Neokortex inspirierte, Verbindungsstruktur integriert. Es kombiniert die drei wichtigsten kortikalen Plastizitäten: 1) Hebbsche synaptische Plastizität, um die Stärke der exzitatorischen und inhibitorischen Synapsen zu lernen, welches auch „trace“-Lernen, zum Lernen invarianter Repräsentationen, umfasst. 2) Intrinsische Plastizität, um das Antwortverhalten der Neuronen zu regulieren und damit das Lernen in tieferen Schichten zu stabilisieren. 3) Strukturelle Plastizität, um die Verbindungen zu modifizieren und damit den Einfluss anfänglicher Festlegungen auf das Lernergebnis zu reduzieren. Neben weiteren Ergebnissen wird gezeigt, dass die Neuronen des Modells vergleichbare rezeptive Felder zu Neuronen des visuellen Kortex erlernen. Ebenso wird die Leistungsfähigkeit des Modells zur invarianten Objekterkennung verifiziert. Des Weiteren wird der Zusammenhang von Gewichtsstärke und Verbindungswahrscheinlichkeit zur Korrelation der Aktivitäten der Neuronen aufgezeigt. Die gefundenen Verbindungswahrscheinlichkeiten der inhibitorischen Neuronen werden in Zusammenhang mit der Funktionsweise der inhibitorischen Plastizität gesetzt, womit erklärt wird, warum inhibitorische Verbindungen unspezifisch erscheinen. Das vorgestellte Modell ist detaillierter als vorangegangene Arbeiten. Es ermöglicht neurowissenschaftliche Erkenntnisse nachzuvollziehen, wobei es ebenso die Hauptleistung des visuellen Systems erbringt, invariante Objekterkennung. Darüber hinaus ermöglichen sein Detailgrad und seine Selbstorganisationsprinzipien weitere neurowissenschaftliche Erkenntnisse und die Modellierung komplexerer Modelle der Verarbeitung im Gehirn.
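Of the three mechanisms, intrinsic plasticity is the easiest to sketch in isolation: each neuron slides its threshold so that its mean response approaches a target rate, which is what stabilizes learning in deeper layers. The update rule, input statistics, and target below are illustrative assumptions, not the thesis's equations:

```python
import numpy as np

def intrinsic_plasticity(rates, thresholds, target=0.1, eta=0.01):
    """Slide each neuron's threshold so its mean rate drifts toward a target."""
    return thresholds + eta * (rates - target)

rng = np.random.default_rng(5)
thresholds = np.zeros(8)
for _ in range(2000):
    drive = rng.random(8)                        # feedforward input
    rates = np.maximum(drive - thresholds, 0.0)  # thresholded responses
    thresholds = intrinsic_plasticity(rates, thresholds)

print(thresholds.round(2))   # thresholds settle so mean activity ~ target
```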
19

Competition improves robustness against loss of information

Kolankeh, Arash Kermani; Teichmann, Michael; Hamker, Fred H. 21 July 2015
A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model response properties to the responses recorded from biological organisms. However, as several algorithms were able to demonstrate some degree of similarity to biological data based on the existing criteria, we focus on robustness against loss of information in the form of occlusions as an additional constraint for better understanding the algorithmic level of early vision in the brain. We investigate the influence of competition mechanisms on this robustness. Therefore, we compared four methods employing different competition mechanisms, namely, independent component analysis, non-negative matrix factorization with sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of these methods is known to be capable of developing receptive fields comparable to those of V1 simple cells. Since directly measuring the robustness of methods having simple-cell-like receptive fields against occlusion is difficult, we measure the robustness using the classification accuracy on the MNIST handwritten digit dataset. For this we trained all methods on the training set of the MNIST handwritten digits dataset and tested them on the MNIST test set with different levels of occlusion. We observe that methods which employ competitive mechanisms have higher robustness against loss of information. The kind of competition mechanism also plays an important role in robustness: global feedback inhibition, as employed in predictive coding/biased competition, has an advantage compared to local lateral inhibition learned by an anti-Hebb rule.
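A sketch of the occlusion protocol described above, under assumed patch shapes: a random square covering a given fraction of each test image is zeroed out before the features are classified. The patch geometry and the stand-in images are assumptions; the paper's occlusion types may differ:

```python
import numpy as np

def occlude(images, fraction, rng):
    """Zero out one random square patch covering `fraction` of each image."""
    out = images.copy()
    h, w = images.shape[1:]
    side = int(np.sqrt(fraction * h * w))
    for img in out:
        y, x = rng.integers(0, h - side), rng.integers(0, w - side)
        img[y:y + side, x:x + side] = 0.0
    return out

rng = np.random.default_rng(6)
test_images = rng.random((32, 28, 28))   # stand-in for MNIST test images
for frac in (0.1, 0.25, 0.5):
    occluded = occlude(test_images, frac, rng)
    # accuracy = classifier.score(features(occluded), labels)  # per method
    print(frac, occluded.mean().round(3))   # mean drops as occlusion grows
```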
20

Odor coding and memory traces in the antennal lobe of honeybee

Galan, Roberto Fernandez 17 December 2003
In dieser Arbeit werden zwei wesentliche neue Ergebnisse vorgestellt. Das erste bezieht sich auf die olfaktorische Kodierung und das zweite auf das sensorische Gedächtnis. Beide Phänomene werden am Beispiel des Gehirns der Honigbiene untersucht. In Bezug auf die olfaktorische Kodierung zeige ich, dass die neuronale Dynamik während der Stimulation im Antennallobus duftspezifische Trajektorien beschreibt, die in duftspezifischen Attraktoren enden. Das Zeitintervall, in dem diese Attraktoren erreicht werden, beträgt unabhängig von der Identität und der Konzentration des Duftes ungefähr 800 ms. Darüber hinaus zeige ich, dass Support-Vektor-Maschinen, und insbesondere Perzeptronen, ein realistisches und biologisches Modell der Wechselwirkung zwischen dem Antennallobus (dem kodierenden Netzwerk) und dem Pilzkörper (dem dekodierenden Netzwerk) darstellen. Dieses Modell kann sowohl Reaktionszeiten von ca. 300 ms als auch die Invarianz der Duftwahrnehmung gegenüber der Duftkonzentration erklären. In Bezug auf das sensorische Gedächtnis zeige ich, dass eine einzige Stimulation ohne Belohnung dem Hebbschen Postulat folgend Veränderungen der paarweisen Korrelationen zwischen Glomeruli induziert. Ich zeige, dass diese Veränderungen der Korrelationen bei 2/3 der Bienen ausreichen, um den letzten Stimulus zu bestimmen. In der zweiten Minute nach der Stimulation ist eine erfolgreiche Bestimmung des Stimulus nur bei 1/3 der Bienen möglich. Eine Hauptkomponentenanalyse der spontanen Aktivität lässt erkennen, dass das dominante Muster des Netzwerks während der spontanen Aktivität nach, aber nicht vor der Stimulation das duftinduzierte Aktivitätsmuster bei 2/3 der Bienen nachbildet. Die duftinduzierten (Veränderungen der) Korrelationen können deshalb als Spuren eines Kurzzeitgedächtnisses bzw. als Hebbsche „Reverberationen“ betrachtet werden. / Two major novel results are reported in this work. The first concerns olfactory coding and the second concerns sensory memory. Both phenomena are investigated in the brain of the honeybee as a model system. Concerning olfactory coding, I demonstrate that the neural dynamics in the antennal lobe describe odor-specific trajectories during stimulation that converge to odor-specific attractors. The time interval to reach these attractors is, regardless of odor identity and concentration, approximately 800 ms. I show that support-vector machines, and in particular perceptrons, provide a realistic and biological model of the interaction between the antennal lobe (coding network) and the mushroom body (decoding network). This model can also account for reaction times of about 300 ms and for concentration invariance of odor perception. Regarding sensory memory, I show that a single stimulation without reward induces changes of pairwise correlation between glomeruli in a Hebbian-like manner. I demonstrate that those changes of correlation suffice to retrieve the last stimulus presented in 2/3 of the bees studied. Successful retrieval decays to 1/3 of the bees within the second minute after stimulation. In addition, a principal-component analysis of the spontaneous activity reveals that the dominant pattern of the network during the spontaneous activity after, but not before, stimulation reproduces the odor-induced activity pattern in 2/3 of the bees studied. One can therefore consider the odor-induced (changes of) correlation as traces of a short-term memory or as Hebbian reverberations.
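A hedged sketch of the principal-component analysis just described: synthetic spontaneous activity is given a reverberating odor component after "stimulation", and the first principal component of the post-stimulus activity, but not of the pre-stimulus activity, aligns with the odor pattern. All sizes and the reverberation strength are illustrative assumptions, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(7)
T, G = 500, 20                        # time bins, glomeruli
pre = rng.standard_normal((T, G))     # spontaneous activity before stimulation
odor = rng.standard_normal(G)         # odor-induced activity pattern
# After stimulation, spontaneous activity reverberates the odor pattern
post = pre + 0.4 * rng.standard_normal((T, 1)) * odor

def dominant_pattern(x):
    """First principal component (loading vector) of the activity."""
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[0]

for label, data in (("before", pre), ("after", post)):
    pc1 = dominant_pattern(data)
    sim = abs(pc1 @ odor) / (np.linalg.norm(pc1) * np.linalg.norm(odor))
    print(f"{label} stimulation: |cos| similarity to odor pattern = {sim:.2f}")
```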
