11

Modeling and Control of Dynamical Systems with Reservoir Computing

Canaday, Daniel M. January 2019 (has links)
No description available.
12

Reconfigurable neurons - making the most of configurable logic blocks (CLBs)

Ghani, A., See, Chan H., Migdadi, Hassan S.O., Asif, Rameez, Abd-Alhameed, Raed, Noras, James M. January 2015 (has links)
No / An area-efficient hardware architecture for mapping fully parallel cortical columns onto Field Programmable Gate Arrays (FPGAs) is presented in this paper. To demonstrate the concept, the proposed architecture is shown at the system level and benchmarked with image and speech recognition applications. The spatio-temporal nature of spiking neurons allows such architectures to be mapped onto FPGAs, where communication can be performed through spikes and signals can be represented in binary form. The process and viability of designing and implementing multiple recurrent neural reservoirs with a novel multiplier-less reconfigurable architecture are described.
13

Dynamical Complexity of Nonlinear Dynamical Systems with Multiple Delays

Tavakoli, Kamyar 23 October 2023 (has links)
The high-dimensional nature of delay differential equations makes them useful for various purposes. Applications of systems modelled with delay differential equations demand different degrees of complexity. One way to tune this property is to make the dynamics of the current state depend on additional delayed states. How a system responds to additional delayed states depends on the system under study, as both decreases and increases in complexity have been observed in different nonlinear systems. However, it is also known that when there is an infinite number of delays following a continuous distribution, simpler dynamics are usually expected, due to the averaging over previous states that the delay kernel provides. The present thesis investigates the role of multiple delays in nonlinear time-delay systems, as well as methods for evaluating their complexity. Through the use of pseudospectral differentiation, we first compute the Lyapunov exponents of such multi-delay systems. In systems with a large number of delays, chaos is found to be less likely to occur. However, in systems with oscillatory feedback functions, the entropy can increase with the addition of just a few delays. Our study also demonstrates that the transition to simpler dynamics in nonlinear delay systems can be either monotonic or abrupt. This is particularly true in first-order nonlinear systems, where increasing the width of the distribution of delays results in complexity collapse, even in the presence of a few discrete delays. The roots of the characteristic equation around a fixed point can be used to approximate the degree of complexity of the dynamics of such time-delay systems, as they can be linked to other dynamical invariants such as the Kolmogorov-Sinai entropy. The phenomenon of complexity collapse uncovered in our work was further studied in an 80/20 ratio excitatory-inhibitory neural network.
We found that the smaller the time delay, the higher the likelihood of chaotic dynamics, which also promotes asynchronous spiking activity. For larger values of the delay, the neurons show synchronized oscillatory spiking activity. Global inhibition at a longer delay results in a novel dynamical pattern of randomly occurring epochs of synchrony within the chaotic dynamics. The final part of the thesis examines the behavior of time-delay reservoir computing with multiple time delays. It is shown that the choice of spacing between time delays is crucial and depends on the task at hand. The system was studied for a prediction task with one chaotic input as well as for a mixture of two chaotic inputs. It was found that, as in the single-delay case, there is a resonance when the difference between delays equals the clock cycle. Together, our research provides valuable insights into the dynamics and complexity of nonlinear multi-delay systems and the importance of the spacing between delays.
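The effect of adding delayed states that this abstract describes can be illustrated with a minimal simulation. The sketch below integrates a generic Mackey-Glass-type equation with two discrete delays by simple Euler stepping; it is not the thesis's actual model, and all parameter values are illustrative.

```python
import numpy as np

def simulate_two_delay_mg(tau1=17.0, tau2=30.0, beta=0.2, gamma=0.1,
                          n=10.0, dt=0.1, t_max=500.0):
    """Euler integration of a Mackey-Glass-type system with two discrete
    delays: dx/dt = sum_i (beta/2) * x(t-tau_i) / (1 + x(t-tau_i)^n) - gamma*x.
    Illustrative parameters only."""
    steps = int(t_max / dt)
    d1, d2 = int(tau1 / dt), int(tau2 / dt)
    hist = max(d1, d2)
    x = np.full(steps + hist, 0.5)          # constant initial history
    for t in range(hist, steps + hist - 1):
        fb = 0.0
        for d in (d1, d2):
            xd = x[t - d]
            fb += 0.5 * beta * xd / (1.0 + xd ** n)   # feedback split across delays
        x[t + 1] = x[t] + dt * (fb - gamma * x[t])
    return x[hist:]

traj = simulate_two_delay_mg()
print(traj.shape)  # (5000,)
```

Adding a third delay is a one-line change to the loop, which is what makes such systems convenient for studying how complexity scales with the number of delayed states.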
14

Online Machine Learning for Wireless Communications: Channel Estimation, Receive Processing, and Resource Allocation

Li, Lianjun 03 July 2023 (has links)
Machine learning (ML) has shown its success in many areas such as computer vision, natural language processing, robot control, and gaming. ML has also drawn significant attention in the wireless communication community. However, applying ML schemes to wireless communication networks is not straightforward; several challenges need to be addressed: 1) training data in communication networks, especially in the physical and MAC layers, are extremely limited; 2) the highly dynamic wireless environment and fast-changing transmission schemes in communication networks make offline training impractical; 3) ML tools are treated as black boxes, which lack explainability. This dissertation addresses those challenges by selecting training-efficient neural networks, devising online training frameworks for wireless communication scenarios, and incorporating communication domain knowledge into the algorithm design. Training-efficient ML algorithms are customized for three communication applications: 1) symbol detection, where real-time online learning-based symbol detection algorithms are designed for MIMO-OFDM and massive MIMO-OFDM systems by utilizing reservoir computing, extreme learning machines, multi-mode reservoir computing, and StructNet; 2) channel estimation, where a residual learning-based offline method is introduced for WiFi-OFDM systems, and a StructNet-based online method is devised for MIMO-OFDM systems; 3) radio resource management, where reinforcement learning-based schemes are designed for dynamic spectrum access, as well as O-RAN intelligent network slicing management. All algorithms introduced in this dissertation have demonstrated outstanding performance in their application scenarios, which paves the path for adopting ML-based solutions in practical wireless networks.
/ Doctor of Philosophy / Machine learning (ML), a branch of computer science that trains machines to learn solutions from data, has shown its success in many areas such as computer vision, natural language processing, robot control, and gaming. ML has also drawn significant attention in the wireless communication community. However, applying ML schemes to wireless communication networks is not straightforward; several challenges need to be addressed: 1) training: unlike areas such as computer vision where large amounts of training data are available, the training data in communication systems are limited; 2) uncertainty in generalization: ML usually requires offline training, where the ML models are trained on artificially generated offline data, under the assumption that the offline training data have the same statistical properties as the online testing data. When they are statistically different, testing performance cannot be guaranteed; 3) lack of explainability: ML tools are usually treated as black boxes, whose behaviors can hardly be explained analytically. When designed for wireless networks, it is desirable for ML to have a level of explainability similar to conventional methods. This dissertation addresses those challenges by selecting training-efficient neural networks, devising online training frameworks for wireless communication scenarios, and incorporating communication domain knowledge into the algorithm design. Training-efficient ML algorithms are customized for three communication applications: 1) symbol detection, a critical step of receiver processing which aims to recover the transmitted signals from the corruption of undesired wireless channel effects and hardware impairments; 2) channel estimation, where the transmitter sends special symbols called pilots, whose values and positions are known to the receiver, and the receiver estimates the underlying wireless channel by comparing the received symbols with the known pilot information; 3) radio resource management, which allocates wireless resources such as bandwidth and time slots to different users. All algorithms introduced in this dissertation have demonstrated outstanding performance in their application scenarios, which paves the path for adopting ML-based solutions in practical wireless networks.
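The pilot-based channel estimation described above can be sketched in miniature. The following least-squares estimator on a toy flat-fading per-subcarrier channel illustrates the general principle only (it is not the dissertation's method); the subcarrier count, pilot values, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64                                   # OFDM subcarriers (illustrative)
# Random complex channel gain per subcarrier.
h_true = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) / np.sqrt(2)

# Known pilot symbols (here a fixed QPSK point on every subcarrier).
pilots = (1 + 1j) / np.sqrt(2) * np.ones(n_sub)

# Received pilots: channel distortion plus additive complex noise.
noise = 0.05 * (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub))
y = h_true * pilots + noise

# Least-squares estimate: divide out the known pilot on each subcarrier.
h_ls = y / pilots

mse = np.mean(np.abs(h_ls - h_true) ** 2)
print(f"per-subcarrier LS MSE: {mse:.4f}")
```

Learning-based estimators such as the StructNet method in the dissertation aim to beat this simple baseline, particularly at low SNR or with few pilots.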
15

Microring Based Neuromorphic Photonics

Bazzanella, Davide 23 May 2022 (has links)
This manuscript investigates the use of microring resonators to create all-optical reservoir-computing networks implemented in silicon photonics. Artificial neural networks and reservoir-computing are promising applications for integrated photonics, as they could make use of the bandwidth and the intrinsic parallelism of optical signals. This work mainly illustrates two aspects: the modelling of photonic integrated circuits and the experimental results obtained with all-optical devices. The modelling of photonic integrated circuits is examined in detail, both concerning fundamental theory and from the point of view of numerical simulations. In particular, the simulations focus on the nonlinear effects present in integrated optical cavities, which increase the inherent complexity of their optical response. Toward this objective, I developed a new numerical tool, precise, which can simulate arbitrary circuits, taking into account both linear propagation and nonlinear effects. The experimental results concentrate on the use of SCISSORs and a single microring resonator as reservoirs and the complex perceptron scheme. The devices have been extensively tested with logical operations, achieving bit error rates of less than 10^−5 at 16 Gbps in the case of the complex perceptron. Additionally, an in-depth explanation of the experimental setup and the description of the manufactured designs are provided. The achievements reported in this work mark an encouraging first step in the direction of the development of novel networks that employ the full potential of all-optical devices.
16

Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture

Nowshin, Fabiha January 2021 (has links)
In recent years, neuromorphic computing systems have achieved considerable success due to their ability to process data much faster and with much less power than traditional Von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs): Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications, by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. A particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike interval encoding scheme to convert the incoming input signal to spikes, and use a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability. We demonstrate an accuracy of 100% with our design through a small-scale hardware simulation for digit recognition, and an accuracy of 87% in software through MNIST simulations. / M.S. / In recent years, neuromorphic computing systems have achieved considerable success due to their ability to process data much faster and with much less power than traditional Von Neumann computing architectures.
Artificial Neural Networks (ANNs) are models that mimic biological neurons, where artificial neurons, or neurodes, are connected together via synapses, similar to the nervous system in the human body. There are two main types of ANNs: Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications, by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. A particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike interval encoding scheme to convert the incoming input signal to spikes, and use a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition, and an accuracy of 87% in software through MNIST simulations.
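The in-memory computing step a memristive crossbar performs reduces to Ohm's and Kirchhoff's laws: with row voltages V applied across a conductance matrix G, each column current is an analog dot product. The sketch below is a generic idealized crossbar, with illustrative conductance values unrelated to this thesis's design.

```python
import numpy as np

# Crossbar of memristor conductances G (siemens): rows are input lines,
# columns are output lines. Applying voltages V to the rows, Kirchhoff's
# current law sums I_j = sum_i V_i * G_ij on each column in one step —
# the matrix-vector product happens "in memory".
G = np.array([[1.0, 0.2],
              [0.3, 0.8],
              [0.5, 0.5]]) * 1e-3   # conductances in the mS range (illustrative)
V = np.array([0.1, 0.2, 0.3])      # input voltages (volts)

I = V @ G                          # column currents = analog dot products
print(I)  # → [0.00031, 0.00033]
```

In a real device, nonidealities such as wire resistance, sneak paths, and conductance drift perturb this ideal picture, which is part of what hardware simulations like the one in this thesis must account for.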
17

Energy Efficient Deep Spiking Recurrent Neural Networks: A Reservoir Computing-Based Approach

Hamedani, Kian 18 June 2020 (has links)
Recurrent neural networks (RNNs) have been widely used for supervised pattern recognition and for exploring underlying spatio-temporal correlations. However, due to the vanishing/exploding gradient problem, training a fully connected RNN is in many cases very difficult or even impossible. The difficulty of training traditional RNNs led us to reservoir computing (RC), which has recently attracted a lot of attention due to its simple training methods and the fixed weights of its recurrent layer. There are three different categories of RC systems: echo state networks (ESNs), liquid state machines (LSMs), and delayed feedback reservoirs (DFRs). In this dissertation, a novel RNN structure inspired by dynamic delayed feedback loops is introduced. In the reservoir (recurrent) layer of a DFR, only one neuron is required, which makes DFRs extremely suitable for hardware implementation. The main motivation of this dissertation is to introduce an energy-efficient, easy-to-train RNN that achieves high performance on different tasks compared to the state of the art. To improve the energy efficiency of our model, we propose to adopt spiking neurons as the information processing units of the DFR. Spiking neural networks (SNNs) are the most biologically plausible and energy-efficient class of artificial neural networks (ANNs). Traditional analog ANNs bear only marginal similarity to brain-like information processing. It is clear that biological neurons communicate through spikes; artificial SNNs have therefore been introduced to mimic biological neurons. Moreover, hardware implementations of SNNs have been shown to be extremely energy efficient. Toward this overarching goal, this dissertation presents a spiking DFR (SDFR) with novel encoding schemes and defense mechanisms against adversarial attacks.
To verify the effectiveness and performance of the SDFR, it is adopted in three different applications where significant spatio-temporal correlations exist: attack detection in smart grids, spectrum sensing of multiple-input multiple-output (MIMO)-orthogonal frequency division multiplexing (OFDM) dynamic spectrum sharing (DSS) systems, and video-based face recognition. The performance of the SDFR is first verified in cyber-attack detection in smart grids. Smart grids are a new generation of power grids that guarantee more reliable and efficient transmission and delivery of power to customers. More reliable and efficient power generation and distribution can be realized through the integration of internet, telecommunication, and energy technologies. The convergence of different technologies brings opportunities, but challenges are also inevitable. One of the major challenges that poses a threat to smart grids is cyber-attacks. A novel method is developed to detect false data injection (FDI) attacks in smart grids. The second novel application of the SDFR is spectrum sensing of MIMO-OFDM DSS systems. DSS is being implemented in the fifth generation of wireless communication systems (5G) to improve spectrum efficiency. In a MIMO-OFDM system, not all subcarriers are utilized simultaneously by the primary user (PU); it is therefore essential to sense the idle frequency bands and assign them to secondary users (SUs). The effectiveness of the SDFR in capturing the spatio-temporal correlation of MIMO-OFDM time series and predicting the availability of frequency bands in future time slots is studied as well. In the third application, the SDFR is modified for video-based face recognition, where it is leveraged to recognize the identities of different subjects while they rotate their heads through different angles.
Another contribution of this dissertation is a novel encoding scheme for spiking neurons inspired by cognitive studies of rats. For the first time, the multiplexing of multiple neural codes is introduced, and it is shown that the robustness and resilience of the spiking neurons are increased against noisy data and adversarial attacks, respectively. Adversarial attacks are small and imperceptible perturbations of the input data which have been shown to fool deep learning (DL) models. So far, many adversarial attack and defense mechanisms have been introduced for DL models. Compromising the security and reliability of artificial intelligence (AI) systems is a major concern of government, industry, and cyber-security researchers, in that insufficient protections can compromise the security and privacy of everyone in society. Finally, a defense mechanism to protect spiking neurons against adversarial attacks is introduced for the first time. In a nutshell, this dissertation presents a novel energy-efficient deep spiking recurrent neural network inspired by delayed dynamic loops. The effectiveness of the introduced model is verified in several different applications, and novel encoding and defense mechanisms are introduced which improve the robustness of the model against noise and adversarial attacks. / Doctor of Philosophy / The ultimate goal of artificial intelligence (AI) is to mimic the human brain. Artificial neural networks (ANNs) are an attempt to realize that goal. However, traditional ANNs are very far from mimicking biological neurons. It is well known that biological neurons communicate with one another through signals in the form of spikes. Artificial spiking neural networks (SNNs) have therefore been introduced, which behave more similarly to biological neurons. Moreover, SNNs are very energy efficient, which makes them a suitable choice for hardware implementations of ANNs (neuromorphic computing).
Despite the many benefits brought about by SNNs, they still lag behind traditional ANNs in terms of performance. Therefore, in this dissertation, a new SNN structure is introduced which outperforms traditional ANNs in three different applications. This new structure is inspired by delayed dynamic loops which exist in biological brains. Its main objective is to capture the spatio-temporal correlations that exist in time series while reducing training overhead and power consumption. Another contribution of this dissertation is to introduce novel encoding schemes for spiking neurons. It is clear that biological neurons communicate via spikes, but the language they use is not fully understood. Hence, the spikes must be encoded in a certain language, called a neural spike encoding scheme. Inspired by cognitive studies of rats, a novel encoding scheme is presented. Lastly, it is shown that the introduced encoding scheme increases the robustness of SNNs against noisy data and adversarial attacks. AI models, including SNNs, have been shown to be vulnerable to adversarial attacks: minor perturbations of the input data that can cause an AI model to misclassify the data. For the first time, a defense mechanism is introduced which can protect SNNs against such attacks.
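The delayed-feedback reservoir idea (one physical nonlinear node, many time-multiplexed virtual nodes along a delay line) can be sketched as follows. This is a common discrete approximation with assumed parameters, not the SDFR model from the dissertation, and it omits the spiking dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                    # virtual nodes along the delay line
theta = 0.2                               # virtual-node separation (relaxation step)
mask = rng.choice([-1.0, 1.0], size=N)    # random input mask (time multiplexing)

def dfr_states(u, eta=0.5, gamma=0.05):
    """Virtual-node states of a single-node delayed feedback reservoir.
    Each input u[k] is held for one delay period and modulated by the mask;
    each virtual node relaxes toward a nonlinear function of its own
    delayed state plus the masked input."""
    x = np.zeros(N)
    states = np.zeros((len(u), N))
    for k, uk in enumerate(u):
        for i in range(N):
            drive = np.tanh(eta * x[i] + gamma * mask[i] * uk)
            x[i] = x[i] + theta * (-x[i] + drive)   # leaky relaxation toward drive
        states[k] = x
    return states

S = dfr_states(np.sin(0.1 * np.arange(200)))
print(S.shape)  # (200, 50)
```

A linear readout trained on the rows of `S` then completes the reservoir computer; only that readout is trained, which is the property that makes DFRs attractive for hardware.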
18

Démonstration opto-électronique du concept de calculateur neuromorphique par Reservoir Computing / Optoelectronic demonstration of the Reservoir Computing neuromorphic computer concept

Martinenghi, Romain 16 December 2013 (has links)
Reservoir Computing (RC) is a currently emerging brain-inspired computational paradigm, which appeared in the early 2000s.
It is similar to conventional recurrent neural network (RNN) computing concepts, exhibiting essentially three parts: (i) an input layer to inject the information into the computing system; (ii) a central computational layer called the Reservoir; (iii) and an output layer which extracts the computed result through a so-called Read-Out procedure, the latter being determined after a learning and training step. The main originality compared to RNNs consists in the last part, which is the only one concerned by the training step, the input layer and the Reservoir being originally randomly determined and fixed. This specificity brings attractive features to RC compared to RNNs, in terms of simplification, efficiency, rapidity, and feasibility of the learning, as well as in terms of dedicated hardware implementation of the RC scheme. This thesis is indeed concerned with one of the first hardware implementations of RC, moreover with an optoelectronic architecture. Our approach to physical RC implementation is based on the use of a special class of complex system for the Reservoir, a nonlinear delay dynamics involving multiple delayed feedback paths. The Reservoir thus appears as a spatio-temporal emulation of a purely temporal dynamics, the delay dynamics. Specific designs of the input and output layers are shown to be possible, e.g. through time-division multiplexing techniques, and amplitude modulation for the realization of an input mask to address the virtual nodes in the delay dynamics. Two optoelectronic setups are explored, one involving a wavelength nonlinear dynamics with a tunable laser, and another involving an intensity nonlinear dynamics with an integrated optics Mach-Zehnder modulator.
Experimental validation of the computational efficiency is performed through two standard benchmark tasks: the NARMA10 test (a prediction task) and a spoken digit recognition test (a classification task), the latter showing results very close to state-of-the-art performance, even compared with purely numerical simulation approaches.
19

Towards a robust Arabic speech recognition system based on reservoir computing

Alalshekmubarak, Abdulrahman January 2014 (has links)
In this thesis we investigate the potential of developing a speech recognition system based on a recently introduced artificial neural network (ANN) technique, namely Reservoir Computing (RC). This technique has, in theory, a higher capacity for modelling dynamic behaviour than feed-forward ANNs, due to the recurrent connections between the nodes in the reservoir layer, which serve as a memory. We conduct this study on the Arabic language (one of the most spoken languages in the world and the official language of 26 countries), because there is a serious gap in the literature on speech recognition systems for Arabic, making the potential impact high. The investigation covers a variety of tasks, including the implementation of the first reservoir-based Arabic speech recognition system. In addition, a thorough evaluation of the developed system is conducted, including several comparisons to other state-of-the-art models found in the literature, as well as baseline models. The impact of feature extraction methods is studied in this work, and a new biologically inspired feature extraction technique, namely the Auditory Nerve feature, is applied to the speech recognition domain. Comparing different feature extraction methods requires access to the original recorded sound, which is not possible with the only publicly accessible Arabic corpus. We have therefore developed the largest public Arabic corpus for isolated words, which contains roughly 10,000 samples. Our investigation has led us to develop two novel approaches based on reservoir computing, ESNSVMs (Echo State Networks with Support Vector Machines) and ESNEKMs (Echo State Networks with Extreme Kernel Machines). These aim to improve the performance of the conventional RC approach by proposing different readout architectures. These two approaches have been compared to the conventional RC approach and other state-of-the-art systems.
Finally, the developed approaches have been evaluated in the presence of different types and levels of noise to examine their resilience, which is crucial for real-world applications.
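The ESNSVMs/ESNEKMs idea keeps the fixed random reservoir and swaps only the readout. The sketch below is a generic echo state network with the conventional ridge-regression readout on a toy two-class task, the exact stage those architectures replace with an SVM or extreme kernel machine; the task, sizes, and parameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res = 100
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius to ~0.9
W_in = rng.uniform(-0.5, 0.5, size=n_res)

def final_state(seq):
    """Drive the fixed random reservoir with a 1-D sequence and keep the
    final state as a fixed-length feature vector for classification."""
    x = np.zeros(n_res)
    for u in seq:
        x = np.tanh(W @ x + W_in * u)
    return x

# Toy two-class task standing in for isolated-word classes.
t = np.arange(30)
X, y = [], []
for _ in range(40):
    f = rng.uniform(0.2, 0.4)
    X.append(final_state(np.sin(f * t))); y.append(-1.0)
    X.append(final_state(np.sign(np.sin(3 * f * t)) + 0.1 * rng.normal(size=30))); y.append(1.0)
X, y = np.array(X), np.array(y)

# Conventional RC readout: ridge-regularized linear map. ESNSVMs/ESNEKMs
# replace exactly this stage with an SVM or extreme kernel machine readout.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
acc = np.mean(np.sign(X @ w) == y)
print(acc)
```

Because only the readout changes, comparing readout architectures (linear, SVM, kernel machine) isolates their contribution while the reservoir dynamics stay identical.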
20

A replay-driven model of spatial sequence learning in the hippocampus-prefrontal cortex network using reservoir computing / Un modèle de rejeu de séquences spatiales dans un réseau hippocampe-cortex préfrontal utilisant le reservoir computing

Cazin, Nicolas 12 July 2018 (has links)
As rats learn to search for multiple sources of food or water in a complex environment, processes of spatial sequence learning and recall in the hippocampus (HC) and prefrontal cortex (PFC) are taking place. Recent studies (De Jong et al. 2011; Carr, Jadhav, and Frank 2011) show that spatial navigation in the rat hippocampus involves the replay of place-cell firing during awake and sleep states, generating small contiguous subsequences of spatially related place-cell activations that we will call "snippets". These "snippets" occur primarily during sharp-wave-ripple (SPWR) events. Much attention has been paid to replay during sleep in the context of long-term memory consolidation. Here we focus on the role of replay during the awake state, as the animal is learning across multiple trials. We hypothesize that these "snippets" can be used by the PFC to achieve multi-goal spatial sequence learning. We propose to develop an integrated model of HC and PFC that is able to form place-cell activation sequences based on snippet replay. The proposed collaborative research will extend an existing spatial cognition model for simpler goal-oriented tasks (Barrera and Weitzenfeld 2008; Barrera et al.
2015) with a new replay-driven model for memory formation in the hippocampus and spatial sequence learning and recall in the PFC. In contrast to existing work on sequence learning that relies heavily on sophisticated learning algorithms and synaptic modification rules, we propose to use an alternative computational framework known as reservoir computing (Dominey 1995), in which large pools of prewired neural elements process information dynamically through reverberations. This reservoir computational model will consolidate snippets into larger place-cell activation sequences that may later be recalled by subsets of the original sequences. The proposed work is expected to generate a new understanding of the role of replay in memory acquisition in complex tasks such as sequence learning. That operational understanding will be leveraged and tested on an embodied-cognitive real-time robotic framework, related to the animat paradigm (Wilson 1991) [etc...]
