1 |
Memory stability and synaptic plasticity (Billings, Guy, January 2009)
Numerous experiments have demonstrated that the activity of neurons can alter the strength of excitatory synapses. This synaptic plasticity is bidirectional: synapses can be strengthened (potentiation) or weakened (depression). Synaptic plasticity thus offers a mechanism that links the ongoing activity of the brain with persistent physical changes to its structure, and for this reason it is widely believed to mediate learning and memory. The hypothesis that synapses store memories by modifying their strengths raises an important issue: there must be a balance between the need for synapses to change frequently, so that new memories can be stored with high fidelity, and the need for synapses to retain previously stored information. This is the plasticity-stability dilemma.

In this thesis the plasticity-stability dilemma is studied in the context of the two dominant paradigms of activity-dependent synaptic plasticity: spike-timing-dependent plasticity (STDP) and long-term potentiation and depression (LTP/D). Models of biological synapses are analysed and processes that might ameliorate the dilemma are identified. Two popular existing models of STDP are compared; this comparison demonstrates that the synaptic weight dynamics of STDP have a large impact on how long the weights of a single neuron remain correlated with a stored memory. In networks, lateral inhibition is shown to stabilise the synaptic weights and receptive fields.

To analyse LTP/D, a novel model is proposed, centred on the distinction between early LTP/D, in which synaptic modifications persist on a short timescale, and late LTP/D, in which they persist on a long timescale. In the context of the hippocampus it is proposed that early LTP/D allows the rapid and continuous storage of short-lasting memory traces on top of a long-lasting trace established with late LTP/D, and it is shown that this might confer a longer memory retention time than a system with only one phase of LTP/D. Experimental predictions about the dynamics of amnesia based on this model are proposed.

Synaptic tagging is a phenomenon whereby early LTP can be converted into late LTP by the subsequent induction of late LTP at a separate but nearby input. Synaptic tagging is incorporated into the LTP/D framework, and it is demonstrated that it could convert a short-lasting memory trace into a longer-lasting one, rescuing traces that were initially destined for complete decay. When combined with early and late LTP/D, synaptic tagging might allow the management of hippocampal memory traces such that not all memories must be stored on the longest, most stable late-phase timescale. This lessens the plasticity-stability dilemma in the hippocampus, where it has been hypothesised that memory traces must be frequently and vividly formed, but that not all traces demand eventual consolidation at the systems level.
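The retention claim in this abstract lends itself to a toy calculation. In the sketch below, a trace stored with early LTP alone decays below a detection threshold quickly, while a trace tagged and converted to late LTP before it fades is rescued. This is a minimal illustration assuming simple exponential decay; the time constants, amplitudes and threshold are hypothetical, not values from the thesis.

```python
import numpy as np

# Toy decay model of a synaptic memory trace. Time constants, amplitudes
# and the detection threshold are illustrative assumptions only.
tau_early, tau_late = 1.0, 20.0   # decay time constants (arbitrary units)
theta = 0.2                       # trace is "readable" while above this level

t = np.linspace(0.0, 100.0, 100001)

def retention_time(trace):
    """Last time at which the trace is still above threshold."""
    above = np.nonzero(trace >= theta)[0]
    return t[above[-1]] if above.size else 0.0

early_only = np.exp(-t / tau_early)   # early LTP alone: fast decay
late_only = np.exp(-t / tau_late)     # late LTP alone: slow decay

# Synaptic tagging: an early trace is converted to late LTP at t_tag,
# rescuing it from decay and restarting it on the slow timescale.
t_tag = 1.0
tagged = np.where(t < t_tag,
                  np.exp(-t / tau_early),
                  np.exp(-t_tag / tau_early) * np.exp(-(t - t_tag) / tau_late))

print(f"early LTP only      : readable for {retention_time(early_only):5.2f}")
print(f"late LTP only       : readable for {retention_time(late_only):5.2f}")
print(f"early, tagged at t=1: readable for {retention_time(tagged):5.2f}")
```

Under these parameters the tagged trace outlives the early-only trace by roughly an order of magnitude, which is the qualitative rescue effect the abstract describes.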
|
2 |
Contribution à la conception d'architecture de calcul auto-adaptative intégrant des nanocomposants neuromorphiques et applications potentielles / Adaptive Computing Architectures Based on Nano-fabricated Components (Bichler, Olivier, 14 November 2012)
In this thesis, we study the potential applications of emerging memory nano-devices in computing architectures. More precisely, we show that neuro-inspired architectural paradigms could provide the efficiency and adaptability required by complex image and audio processing and classification applications, at a much lower cost in terms of power consumption and silicon area than current Von Neumann-derived architectures, thanks to a synapse-like usage of these memory nano-devices. This work focuses on memristive nano-devices, recently (re-)introduced with the discovery of the memristor in 2008, and their use as synapses in spiking neural networks. This includes most of the emerging memory technologies: Phase-Change Memory (PCM), Conductive-Bridging RAM (CBRAM), Resistive RAM (RRAM)... These devices are particularly suitable for the implementation of natural unsupervised learning algorithms such as Spike-Timing-Dependent Plasticity (STDP), requiring very little control circuitry. The integration of memristive devices in crossbar arrays could provide the huge density required by this type of architecture (several thousand synapses per neuron), which is impossible to match with a CMOS-only implementation. This is one of the main factors that hindered the rise of CMOS-based neural network computing architectures in the nineties, along with the relative complexity and inefficiency of the back-propagation learning algorithm, despite the promising aspects of such neuro-inspired architectures, such as adaptability and fault tolerance. In this work, we propose synaptic models for memristive devices and simulation methodologies for architectural designs exploiting them. Novel neuro-inspired architectures are introduced and simulated for natural data processing; they exploit the synaptic characteristics of memristive nano-devices, along with the latest progress in neuroscience. Finally, we propose hardware implementations for several device types and assess their scalability and power-efficiency potential, as well as their robustness to the variability and faults that are unavoidable at the nanometric scale of these devices. This last point is of prime importance, as it constitutes today the main difficulty for the integration of these emerging technologies in digital memories.
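To make the crossbar argument concrete: in an idealized memristive array, a read is an analog dot product, since the column currents sum the products of row voltages and device conductances by Kirchhoff's current law. The sketch below is a behavioral caricature assuming linear, variability-free devices and a generic pair-based STDP update; it is not the programming scheme proposed in the thesis, and all device values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized crossbar: conductance matrix G (rows = inputs, cols = neurons).
n_in, n_out = 8, 4
g_min, g_max = 1e-6, 1e-4                  # conductance bounds (illustrative)
G = rng.uniform(g_min, g_max, (n_in, n_out))

def read(G, v_in):
    """Ideal read: column currents are the dot product G^T @ V."""
    return G.T @ v_in                       # Kirchhoff current summation

def stdp_update(G, pre_t, post_t, dg=5e-6, window=20e-3):
    """Generic pair-based STDP: potentiate a device if its input spiked
    shortly before the output neuron, depress it otherwise."""
    for i, tp in enumerate(pre_t):
        for j, tq in enumerate(post_t):
            if 0 < tq - tp < window:
                G[i, j] = min(G[i, j] + dg, g_max)
            else:
                G[i, j] = max(G[i, j] - dg, g_min)
    return G

v = rng.choice([0.0, 0.2], size=n_in)       # read voltages on active rows
print("column currents (A):", read(G, v))
G = stdp_update(G, pre_t=rng.uniform(0.0, 20e-3, n_in), post_t=[10e-3])
print("mean conductance after one STDP event:", G.mean())
```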
|
3 |
Utilisation des nano-composants électroniques dans les architectures de traitement associées aux imageurs / Integration of memory nano-devices in image sensors processing architecture (Roclin, David, 16 December 2014)
By using learning mechanisms drawn from recent discoveries in neuroscience, spiking neural networks have demonstrated their ability to analyse efficiently the large amounts of data coming from our environment. Implementing such networks on conventional processors does not allow their parallelism to be exploited efficiently. Using digital memory to store the synaptic weights does not permit parallel reading or parallel programming of the synapses, and it is limited by the bandwidth of the connection between the memory and the processing unit. Emerging memristive memory technologies could allow this parallelism to be implemented directly in the heart of the memory. In this thesis, we consider the development of an embedded spiking neural network based on emerging memory devices. First, we analyse a spiking network in order to optimise its different components, the neuron, the synapse and the STDP learning mechanism, for digital implementation. Second, we consider implementing the synaptic memory with emerging memristive devices. Finally, we present the development of a chip co-integrating CMOS neurons with CBRAM synapses.
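The optimisation target described here, a digital embedded spiking network, can be caricatured with a fixed-point leaky integrate-and-fire update in which every incoming spike costs one synaptic-weight fetch from memory; that per-event fetch is where the memory-bandwidth limit mentioned above bites. All constants below are hypothetical, and the code is a generic sketch rather than the thesis's optimised design.

```python
# Fixed-point LIF neuron with per-spike synaptic memory reads.
# Threshold, leak shift and weights are illustrative assumptions.
V_THRESH = 1 << 12        # firing threshold (fixed-point units)
LEAK_SHIFT = 4            # leak as a cheap right-shift: V -= V >> 4

def step(v, incoming, weights):
    """One timestep: leak, then accumulate one weight fetch per spike event.
    `incoming` lists the indices of presynaptic neurons that spiked."""
    v -= v >> LEAK_SHIFT                  # multiplier-free leak
    fetches = 0
    for pre in incoming:                  # each event costs one memory read
        v += weights[pre]
        fetches += 1
    if v >= V_THRESH:
        return 0, True, fetches           # reset on spike
    return v, False, fetches

weights = [300] * 64                      # 64 synapses stored in memory
v, total_fetches = 0, 0
for t in range(50):
    spikes_in = [i for i in range(64) if (i + t) % 16 == 0]  # toy input
    v, fired, f = step(v, spikes_in, weights)
    total_fetches += f
    if fired:
        print(f"t={t}: output spike")
print("synaptic memory reads:", total_fetches)
```

With a digital memory, each of those reads is serialised through the memory bus; a memristive synaptic array would perform them in parallel in place, which is the motivation stated in the abstract.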
|
4 |
Homo- et hétérosynaptique spike-timing-dependent plasticity aux synapses cortico- et thalamo-striatales / Homo- and heterosynaptic plasticity at cortico- and thalamo-striatal synapses (Mendes, Alexandre, 28 September 2017)
According to Hebb's postulate, neural circuits tune their synaptic weights depending on the patterned firing of action potentials on either side of the synapse. Spike-timing-dependent plasticity (STDP) is an experimental implementation of Hebbian plasticity that relies on the precise order and millisecond timing of the paired activities of the pre- and postsynaptic neurons. The striatum, the primary entrance to the basal ganglia, integrates excitatory inputs from both the cerebral cortex and the thalamus, whose activities can be concomitant or delayed. Temporal coding of cortical and thalamic information via STDP may therefore be crucial for the role of the striatum in procedural learning. Here, we explored cortico-striatal and thalamo-striatal synaptic plasticity, and their interplay, through the STDP paradigm. The main results are:

1. Opposing spike-timing-dependent plasticity at cortical and thalamic inputs drives heterosynaptic plasticity in the striatum. While the vast majority of studies have focused on cortico-striatal synaptic plasticity, much less is known about thalamo-striatal plasticity rules and their interplay with cortico-striatal plasticity. We explored thalamo-striatal STDP and how thalamo-striatal and cortico-striatal synaptic plasticity interact: whereas cortico-striatal synapses displayed bidirectional and anti-Hebbian STDP, thalamo-striatal synapses exhibited bidirectional and Hebbian STDP...
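The Hebbian versus anti-Hebbian distinction can be made explicit with the canonical exponential STDP window, in which the sign of the weight change depends on the sign of dt = t_post - t_pre. A minimal sketch follows; the amplitudes and the 20 ms time constant are generic textbook values, not the measured cortico-striatal or thalamo-striatal curves.

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau=20.0, hebbian=True):
    """Weight change for a pre/post pair separated by dt = t_post - t_pre (ms).
    Hebbian: pre-before-post (dt > 0) potentiates. Anti-Hebbian: sign flipped."""
    dw = np.where(dt > 0,
                  a_plus * np.exp(-dt / tau),    # causal pairings
                  -a_minus * np.exp(dt / tau))   # anti-causal pairings
    return dw if hebbian else -dw

dts = np.array([-40.0, -10.0, 10.0, 40.0])
print("Hebbian window (thalamo-striatal-like):      ", stdp_window(dts))
print("anti-Hebbian window (cortico-striatal-like): ",
      stdp_window(dts, hebbian=False))
```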
|
5 |
Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity (Humble, James, January 2013)
Spike-timing dependent plasticity is a learning mechanism used extensively within neural modelling. The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal pattern repeated among its afferents. In this thesis, the first question addressed is 'what does this neuron learn?' With a spiking neuron model and linear prediction, evidence is adduced that the neuron learns two components: (1) the level of average background activity and (2) the specific spike times of a pattern. Taking advantage of these findings, a network is developed that can train recognisers for longer spatio-temporal input signals using spike-timing dependent plasticity. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form in which each neuron is selective to a different segment of a repeating input pattern, and the neurons are feedforwardly connected in such a way that both the correct stimulus and the firing of the previous neurons are required to activate the next neuron in the chain. This is akin to a simple class of finite state automata. Following this, a novel resource-based STDP learning rule is introduced. The learning rule has several advantages over typical implementations of STDP and produces synaptic statistics that compare favourably with experimental observations; for example, the synaptic weight distributions and the presence of silent synapses match experimental data.
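The finite-state-automaton analogy can be stated in a few lines of code: after learning, each neuron in the chain fires only when its preferred stimulus segment is present and its predecessor has just fired. The sketch below hard-codes this post-learning behaviour as a conceptual illustration; it is not the plastic spiking network itself, and the segment labels are hypothetical.

```python
# Post-learning behaviour of a neuron chain recognising the segment
# sequence "A", "B", "C": neuron i fires iff it sees its segment AND the
# previous neuron fired on the preceding step. Conceptual sketch only.
SEGMENTS = ["A", "B", "C"]

def run_chain(stimulus):
    fired_prev = [True] + [False] * len(SEGMENTS)  # virtual "start" neuron
    for seg in stimulus:
        fired = [True]  # the start neuron is always on
        for i, target in enumerate(SEGMENTS):
            # neuron i+1 fires iff correct segment AND predecessor just fired
            fired.append(seg == target and fired_prev[i])
        fired_prev = fired
    return fired_prev[-1]   # did the last neuron in the chain fire?

print(run_chain(["A", "B", "C"]))   # True: full pattern recognised
print(run_chain(["A", "C", "B"]))   # False: wrong order
print(run_chain(["B", "C"]))        # False: missing first segment
```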
|
6 |
Effect of Channel Stochasticity on Spike Timing Dependent Plasticity (Talasila, Harshit Sam, 20 December 2011)
The variability of the postsynaptic response following a presynaptic action potential arises from two sources: (i) neurotransmitter release is probabilistic, and (ii) the channels in the postsynaptic cell involved in the response to neurotransmitter release gate stochastically. Spike timing dependent plasticity (STDP) is a form of plasticity that produces LTP or LTD depending on the precise order and timing of the firing of the pre- and postsynaptic cells. STDP plays a role in fundamental tasks such as learning and memory, so understanding and characterising the effect that variability in synaptic transmission has on STDP is essential. To that end, a model incorporating both forms of variability was constructed. Ion channel stochasticity was shown to increase the magnitude of maximal potentiation, widen the window of potentiation and severely reduce the post-LTP-associated LTD in the STDP curves, while the variability due to short-term plasticity decreased the magnitude of maximal potentiation.
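The channel-noise source is easy to quantify in isolation. If N postsynaptic channels each open independently with probability p during the response, the open count is binomial, so the trial-to-trial coefficient of variation of the response is sqrt((1-p)/(Np)) and shrinks as the channel pool grows. The sketch below checks this with hypothetical numbers; it is not the full model of the thesis, which also includes probabilistic release.

```python
import numpy as np

rng = np.random.default_rng(1)

def epsp_amplitudes(n_channels, p_open, unitary=0.1, trials=10000):
    """Trial-to-trial response amplitudes when each of n_channels opens
    independently with probability p_open (binomial gating noise)."""
    open_counts = rng.binomial(n_channels, p_open, size=trials)
    return unitary * open_counts   # amplitude per open channel (illustrative)

for n in (10, 100, 1000):
    amps = epsp_amplitudes(n, p_open=0.5)
    cv = amps.std() / amps.mean()
    predicted = np.sqrt((1 - 0.5) / (n * 0.5))
    print(f"N={n:4d}: CV={cv:.3f} (theory {predicted:.3f})")
```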
|
7 |
Self-organized Criticality in Neural Networks by Inhibitory and Excitatory Synaptic Plasticity (Ehsani, Masud, 25 January 2022)
Neural networks show intrinsic ongoing activity even in the absence of information processing and task-driven activity. This spontaneous activity has been reported to have specific characteristics, ranging from scale-free avalanches in microcircuits to the power-law decay of the power spectrum of oscillations in coarse-grained recordings of large populations of neurons. The emergence of scale-free activity and power-law distributions of observables has encouraged researchers to postulate that the neural system operates near a continuous phase transition, at which changes in the control parameters or in the strength of the external input lead to a change in the macroscopic behavior of the system. Moreover, at a critical point, critical slowing down makes a phenomenological mesoscopic description of the system feasible. Two distinct types of phase transitions have been suggested as the operating point of the neural system: active-inactive and synchronous-asynchronous.
In contrast to ordinary phase transitions, in which fine-tuning of the control parameter(s) is required to bring the system to the critical point, neural systems must be equipped with self-tuning mechanisms that adaptively keep the system near the critical point (or critical region) in phase space.
In this work, we introduce a self-organized critical model of a neural network. We consider the dynamics of sparsely connected excitatory and inhibitory (EI) populations of spiking leaky integrate-and-fire neurons with conductance-based synapses. Ignoring inhomogeneities and internal fluctuations, we first analyze the mean-field model. We choose the strength of the external excitatory input and the average strength of excitatory-to-excitatory synapses as the control parameters and analyze the bifurcation diagram of the mean-field equations. We focus on bifurcations in the low-firing-rate regime, in which the quiescent state loses stability through saddle-node or Hopf bifurcations. In particular, at the Bogdanov-Takens (BT) bifurcation point, the intersection of the Hopf and saddle-node bifurcation lines of the 2D dynamical system, the network shows avalanche dynamics with power-law avalanche size and duration distributions. This matches the characteristics of low-rate spontaneous activity in the cortex. By linearizing the gain functions and the excitatory and inhibitory nullclines, we can approximate the location of the BT point. This point in the control-parameter space corresponds to an internal balance of excitation and inhibition and a slight excess of external excitatory input to the excitatory population. Because the average excitatory and inhibitory currents are tightly balanced, the firing of individual cells is fluctuation-driven. Around the BT point, the spiking of neurons is a Poisson process and the population-averaged membrane potential sits approximately at the middle of the operating interval $[V_{Rest}, V_{th}]$. Moreover, the EI network is close to both the oscillatory and the active-inactive phase transition regimes.
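A minimal sketch of this kind of two-dimensional mean-field reduction is a Wilson-Cowan-style rate model with a sigmoidal gain, in which the recurrent excitatory weight and the external drive act as the control parameters. The weights, time constants and drive below are illustrative assumptions, not the thesis's fitted values, and a proper Bogdanov-Takens analysis would locate the bifurcation from the Jacobian rather than by forward simulation.

```python
import numpy as np

def f(x, gain=4.0):
    """Sigmoidal population gain function (logistic approximation)."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def simulate(w_ee, h, w_ei=1.0, w_ie=1.0, w_ii=0.5,
             tau_e=10.0, tau_i=5.0, dt=0.1, t_max=500.0):
    """Euler-integrate the 2D mean-field EI rate equations."""
    E = I = 0.01
    traj = []
    for _ in range(int(t_max / dt)):
        dE = (-E + f(w_ee * E - w_ei * I + h)) / tau_e
        dI = (-I + f(w_ie * E - w_ii * I)) / tau_i
        E, I = E + dt * dE, I + dt * dI
        traj.append(E)
    return np.array(traj)

# Sweep the two control parameters: recurrent excitation and external drive.
for w_ee, h in [(0.5, -0.30), (2.5, -0.30), (2.5, 0.10)]:
    E = simulate(w_ee, h)
    tail = E[-1000:]
    print(f"w_EE={w_ee:.1f}, h={h:+.2f}: "
          f"mean E rate={tail.mean():.3f}, fluctuation={tail.std():.4f}")
```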
Next, we consider self-tuning of the system to this critical point. The self-organizing parameter in our network is the balance between the opposing forces of the inhibitory and excitatory populations' activities, and the self-organizing mechanisms are long-term synaptic plasticity and short-term depression of the synapses. The former tunes the overall strength of the excitatory and inhibitory pathways close to the balanced regime; the latter, which rests on the finite amount of synaptic resources in brain areas, acts as an adaptive mechanism that lets micro-populations of neurons subjected to fluctuating external inputs attain this balance over a wider range of external input strengths.
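A standard resource-based depression model in the spirit of the short-term mechanism described here is the Tsodyks-Markram scheme: each spike consumes a fraction U of an available resource x, which recovers with time constant tau_rec, so synaptic efficacy self-limits under strong drive. The parameters below are textbook-scale assumptions, not the values used in the thesis.

```python
import numpy as np

def std_efficacies(spike_times, U=0.3, tau_rec=300.0):
    """Tsodyks-Markram-style depression: resource x in [0, 1] recovers
    between spikes and is depleted by a fraction U at each spike.
    Returns the effective efficacy (U * x) of each spike in the train."""
    x, last_t = 1.0, None
    eff = []
    for t in spike_times:
        if last_t is not None:
            # exponential recovery toward 1 during the interspike interval
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        eff.append(U * x)
        x -= U * x          # depletion caused by the spike itself
        last_t = t
    return np.array(eff)

# Sustained strong drive depresses the synapse toward a lower steady state.
train = np.arange(0.0, 500.0, 25.0)          # 40 Hz train, times in ms
print(np.round(std_efficacies(train), 3))
```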
Using the Poisson firing assumption, we propose a microscopic Markovian model that captures the internal fluctuations in the network due to its finite size and that matches the macroscopic mean-field equation under coarse-graining. Near the critical point, a phenomenological mesoscopic model for the excitatory and inhibitory fields of activity becomes possible thanks to the timescale separation between slowly changing variables and fast degrees of freedom. We show that the mesoscopic model corresponding to the neural field model near the local Bogdanov-Takens bifurcation point matches the Langevin description of the directed percolation process. Tuning the system to the critical point can be achieved by coupling the fast population dynamics with slow adaptive gain and synaptic weight dynamics, which make the system wander around the phase transition point. By introducing short-term and long-term synaptic plasticity, we have therefore proposed a self-organized critical stochastic neural field model.

1. Introduction
1.1. Scale-free Spontaneous Activity
1.1.1. Nested Oscillations in the Macro-scale Collective Activity
1.1.2. Up and Down States Transitions
1.1.3. Avalanches in Local Neuronal Populations
1.2. Criticality and Self-organized Criticality in Systems out of Equilibrium
1.2.1. Sandpile Models
1.2.2. Directed Percolation
1.3. Critical Neural Models
1.3.1. Self-Organizing Neural Automata
1.3.2. Criticality in the Mesoscopic Models of Cortical Activity
1.4. Balance of Inhibition and Excitation
1.5. Functional Benefits of Being in the Critical State
1.6. Arguments Against the Critical State of the Brain
1.7. Organization of the Current Work
2. Single Neuron Model
2.1. Impulse Response of the Neuron
2.2. Response of the Neuron to the Constant Input
2.3. Response of the Neuron to the Poisson Input
2.3.1. Potential Distribution of a Neuron Receiving Poisson Input
2.3.2. Firing Rate and Interspike Intervals' CV Near the Threshold
2.3.3. Linear Poisson Neuron Approximation
3. Interconnected Homogeneous Population of Excitatory and Inhibitory Neurons
3.1. Linearized Nullclines and Different Dynamic Regimes
3.2. Logistic Function Approximation of Gain Functions
3.3. Dynamics Near the BT Bifurcation Point
3.4. Avalanches in the Region Close to the BT Point
3.5. Stability Analysis of the Fixed Points in the Linear Regime
3.6. Characteristics of Avalanches
4. Long Term and Short Term Synaptic Plasticity Rules Tune the EI Population Close to the BT Bifurcation Point
4.1. Long Term Synaptic Plasticity by STDP Tunes Synaptic Weights Close to the Balanced State
4.2. Short-Term Plasticity and Up-Down State Transitions
5. Interconnected network of EI populations: Wilson-Cowan Neural Field Model
6. Stochastic Neural Field
6.1. Finite-Size Fluctuations in a Single EI Population
6.2. Stochastic Neural Field with a Tuning Mechanism to the Critical State
7. Conclusion
|
8 |
Mécanismes d'apprentissage pour expliquer la rapidité, la sélectivité et l'invariance des réponses dans le cortex visuel / Learning mechanisms to explain the speed, selectivity and invariance of responses in the visual cortex (Masquelier, Timothée, 15 February 2008)
In this thesis I propose several synaptic plasticity mechanisms that could explain the speed, selectivity and invariance of neuronal responses in the visual cortex, and I discuss their biological plausibility. I also report the results of a relevant psychophysics experiment, which show that familiarity can speed up visual processing. Beyond these results specific to the visual system, the work presented here supports the hypothesis that spike times are used to encode, decode and process information in the brain, the so-called 'temporal coding' theory. In such a framework, Spike-Timing-Dependent Plasticity could play a key role by detecting repeating spike patterns and enabling responses to them that become faster and faster.
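The closing claim, that STDP lets a neuron respond ever faster to a repeating pattern, has a simple mechanistic reading: potentiation concentrates weight on the afferents that fire earliest in the pattern, so the threshold crossing moves forward in time over repetitions. The sketch below caricatures that latency effect with hypothetical parameters (no leak, one spike per afferent); it is not the full model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# A repeating 50 ms pattern across 200 afferents: each afferent fires once
# at a fixed latency. A threshold unit integrates the weighted spikes; STDP
# potentiates afferents that fired before the output spike, depresses others.
n_aff, t_pattern = 200, 50.0
latencies = rng.uniform(0.0, t_pattern, n_aff)   # fixed spike times (ms)
w = np.full(n_aff, 0.5)                          # initial weights
theta = 30.0                                     # firing threshold
a_plus, a_minus = 0.05, 0.02

for rep in range(15):
    order = np.argsort(latencies)
    drive = np.cumsum(w[order])                  # leakless integration (toy)
    crossing = np.searchsorted(drive, theta)
    if crossing >= n_aff:
        print(f"rep {rep:2d}: no spike")
        continue
    t_spike = latencies[order][crossing]
    early = latencies <= t_spike                 # afferents seen before spike
    w = np.clip(w + np.where(early, a_plus, -a_minus), 0.0, 1.0)
    if rep % 2 == 0:
        print(f"rep {rep:2d}: response latency {t_spike:5.2f} ms")
```

Over repetitions the printed latency shrinks, because the earliest afferents saturate and fewer pattern spikes are needed to reach threshold.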
|